Released in March 2023, Xenos is a virtual instrument plug-in that implements and extends the Dynamic Stochastic Synthesis (DSS) algorithm invented by Iannis Xenakis and notably employed in the 1991 composition GENDY3. DSS produces a wave of variable periodicity through regular stochastic variation of its wave cycle, resulting in emergent pitch and timbral features. While high-level parametric control of the algorithm enables a variety of musical behaviors, composing with DSS is difficult because its parameters lack basis in perceptual qualities.
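The core idea of DSS can be sketched briefly. A wave cycle is a polyline of breakpoints; each period, every breakpoint's amplitude and duration takes a bounded random walk, so pitch and timbre drift stochastically. The sketch below is a minimal illustration of this scheme, not Xenos's actual C++ implementation, and the function names, step sizes, and barrier values are illustrative assumptions:

```python
import random

def dss_period(amps, durs, amp_step=0.05, dur_step=0.5):
    """Advance one DSS period: random-walk each breakpoint's
    amplitude and duration within reflecting barriers
    (illustrative sketch, not the plug-in's code)."""
    def walk(x, step, lo, hi):
        x += random.uniform(-step, step)
        # reflect at the barriers so values stay in range
        if x < lo:
            x = 2 * lo - x
        if x > hi:
            x = 2 * hi - x
        return x
    amps = [walk(a, amp_step, -1.0, 1.0) for a in amps]
    durs = [walk(d, dur_step, 1.0, 20.0) for d in durs]
    return amps, durs

def render_period(amps, durs):
    """Linearly interpolate between breakpoints to produce
    one period of samples."""
    out = []
    n = len(amps)
    for i in range(n):
        a0, a1 = amps[i], amps[(i + 1) % n]
        m = max(1, int(round(durs[i])))
        out.extend(a0 + (a1 - a0) * j / m for j in range(m))
    return out
```

Because the total duration of the breakpoints changes each period, the fundamental frequency wanders, which is the source of DSS's characteristic gliding pitch.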
Xenos thus implements DSS with modifications and extensions that enhance its suitability for general composition. Written in C++ using the JUCE framework, Xenos offers DSS in a convenient, efficient, and widely compatible polyphonic synthesizer that facilitates composition and performance through host-software features, including MIDI input and parameter automation. Xenos also introduces a pitch-quantization feature that tunes each period of the wave to the nearest frequency in an arbitrary scale. Custom scales can be loaded via the Scala tuning standard, enabling both xenharmonic composition at the mesostructural level and investigation of the timbral effects of microtonal pitch sets on the microsound timescale.
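Since the fundamental of one wave period is the sample rate divided by the period's length in samples, quantizing a period to a scale amounts to rescaling its segment durations. A minimal sketch of that idea (hypothetical function, not Xenos's actual code; in the plug-in the candidate frequencies would come from the loaded Scala scale):

```python
def quantize_period(durs, sample_rate, scale_freqs):
    """Rescale segment durations so the period's fundamental
    snaps to the nearest frequency in scale_freqs (Hz)."""
    total = sum(durs)           # period length in samples
    freq = sample_rate / total  # current fundamental
    target = min(scale_freqs, key=lambda f: abs(f - freq))
    factor = freq / target      # stretch or shrink the cycle
    return [d * factor for d in durs], target
```

Applying this every period keeps each cycle on the scale while the stochastic walk continues underneath, which is what lets the same mechanism serve both xenharmonic pitch structure and microtimbral experiments.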
A good review of Xenos can be found at MusicRadar: www.musicradar.com/news/fantastic-free-synths-xenos.
Xenos GitHub page: github.com/raphaelradna/xenos.
There is also an introductory video on YouTube.
Raphael completed his master's degree in Media Arts and Technology in the fall of 2022 and is currently pursuing a PhD in Music Composition at UCSB.
Some of the works presented are:
The Impact of Navigation Aids on Search Performance and Object Recall in Wide-Area Augmented Reality (Paper). You-Jin Kim (MAT PhD) and Radha Kumaran (CS PhD).
Abstract
Head-worn augmented reality (AR) is a hotly pursued and increasingly feasible contender paradigm for replacing or complementing smartphones and watches for continual information consumption. Here, we compare three different AR navigation aids (on-screen compass, on-screen radar and in-world vertical arrows) in a wide-area outdoor user study (n=24) where participants search for hidden virtual target items amongst physical and virtual objects. We analyzed participants’ search task performance, movements, eye-gaze, survey responses and object recall. There were two key findings. First, all navigational aids enhanced search performance relative to a control condition, with some benefit and strongest user preference for in-world arrows. Second, users recalled fewer physical objects than virtual objects in the environment, suggesting reduced awareness of the physical environment. Together, these findings suggest that while navigational aids presented in AR can enhance search task performance, users may pay less attention to the physical environment, which could have undesirable side-effects.
Comparing Zealous and Restrained AI Recommendations in a Real-World Human-AI Collaboration Task. Chengyuan Xu (MAT PhD), Kuo-Chin Lien (Appen), Tobias Höllerer (MAT, CS professor).
Abstract
When designing an AI-assisted decision-making system, there is often a tradeoff between precision and recall in the AI's recommendations. We argue that careful exploitation of this tradeoff can harness the complementary strengths in the human-AI collaboration to significantly improve team performance. We investigate a real-world video anonymization task for which recall is paramount and more costly to improve. We analyze the performance of 78 professional annotators working with a) no AI assistance, b) a high-precision "restrained" AI, and c) a high-recall "zealous" AI in over 3,466 person-hours of annotation work. In comparison, the zealous AI helps human teammates achieve significantly shorter task completion time and higher recall. In a follow-up study, we remove AI assistance for everyone and find negative training effects on annotators trained with the restrained AI. These findings and our analysis point to important implications for the design of AI assistance in recall-demanding scenarios.
PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing (Paper). Ashley Del Valle (MAT PhD), Mert Toka (MAT PhD), Alejandro Aponte (MAT PhD), Jennifer Jacobs (MAT Assistant Professor).
Abstract
New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material's robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.
Fencing Hallucination: An Interactive Installation for Fencing with AI and Synthesizing Chronophotographs. Weihao Qiu (MAT PhD) and George Legrady (MAT Professor).
Abstract
Fencing Hallucination is a multi-screen interactive installation that enables real-time human-AI interaction in the form of a Fencing game and generates a chronophotograph based on the audience’s movement. It mitigates the conflicts between interactivity, modality variety, and computational limitation in creative AI tools. Fencing Hallucination captures the audience’s pose data as an input to the Multilayer Perceptron (MLP), which generates the virtual AI Fencer’s pose data. It also uses the audience’s pose to synthesize the chronophotograph. The system first represents pose data as stick figures. Then it uses a diffusion model to perform image-to-image translations, converting the stick figures into a series of realistic fencing images. Finally, it combines all images with an additive effect into one image as the result. This multi-step process overcomes the challenge of preserving both the overall motion patterns and fine details when synthesizing a chronophotograph.
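The final additive step described in the abstract is a standard compositing operation: the generated frames are summed into a single image and clipped to the valid range. A minimal sketch under the assumption that frames are 2D grids of intensities in [0, 1] (illustrative only, not the installation's code):

```python
def additive_composite(frames):
    """Sum a sequence of same-sized image frames into one
    chronophotograph, clipping intensities to [0, 1]."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                # accumulate and clip at the white point
                acc[y][x] = min(1.0, acc[y][x] + frame[y][x])
    return acc
```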
The MAT alumni that were selected to participate are:
Yoon Chung Han
Solen Kiratli
Hannen E. Wolfe
Yin Yu
Weidi Zhang
Rodger (Jieliang) Luo
The International Symposium on Electronic Art is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media.
Burbano is a native of Pasto, Colombia, and an associate professor in the School of Architecture and Design at the Universidad de los Andes. As a contributor to the conference, Burbano has presented research within the Art Papers program (in 2017) and, as a volunteer, has served on the SIGGRAPH 2018, 2020, and 2021 conference committees. Most recently, Burbano served as the first-ever chair of the Retrospective Program in 2021, which honored the history of computer graphics and interactive techniques. Andres received his PhD from Media Arts and Technology in 2013.
For more information, please read this article on the ACM SIGGRAPH Blog.
The next SIGGRAPH conference is in August 2023 and will be held in Los Angeles, California: s2023.siggraph.org.
EmissionControl2 is a granular sound synthesizer. The theory of granular synthesis is described in the book Microsound (Curtis Roads, 2001, MIT Press).
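In granular synthesis, as described in Microsound, sound is built from brief enveloped "grains" of a source signal, scattered in time at some density. A minimal sketch of the technique (illustrative only; the parameter names and the per-sample scattering scheme are assumptions, not EmissionControl2's actual engine):

```python
import math
import random

def hann(n):
    """Hann window used to envelope each grain."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1))
            for i in range(n)]

def granulate(source, out_len, grain_len=441, density=0.01):
    """Scatter enveloped grains of `source` into an output buffer.
    `density` is the probability of starting a grain at each sample."""
    out = [0.0] * out_len
    env = hann(grain_len)
    for onset in range(out_len - grain_len):
        if random.random() < density:
            # pick a random read position in the source
            start = random.randrange(len(source) - grain_len)
            for i in range(grain_len):
                out[onset + i] += source[start + i] * env[i]
    return out
```

Varying the grain length, density, and source read position over time is what gives granular synthesis its range, from smooth textures to pointillistic clouds.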
Released in October 2020, the new app was developed by a team consisting of Professor Curtis Roads acting as project manager, with software developers Jack Kilgore and Rodney Duplessis. Kilgore is a computer science major at UCSB. Duplessis is a PhD student in music composition at UCSB and is also pursuing a master's degree in the Media Arts and Technology graduate program.
EmissionControl2 is free and open-source software available at: github.com/jackkilgore/EmissionControl2/releases/latest
The project was supported by a Faculty Research Grant from the UCSB Academic Senate.
Media Arts and Technology (MAT) at UCSB is a transdisciplinary graduate program that fuses emergent media, computer science, engineering, electronic music and digital art research, practice, production, and theory. Created by faculty in both the College of Engineering and the College of Letters and Science, MAT offers an unparalleled opportunity for working at the frontiers of art, science, and technology, where new art forms are born and new expressive media are invented.
In MAT, we seek to define and to create the future of media art and media technology. Our research explores the limits of what is possible in technologically sophisticated art and media, both from an artistic and an engineering viewpoint. Combining art, science, engineering, and theory, MAT graduate studies provide students with a combination of critical and technical tools that prepare them for leadership roles in artistic, engineering, production/direction, educational, and research contexts.
The program offers Master of Science and Ph.D. degrees in Media Arts and Technology. MAT students may focus on an area of emphasis (multimedia engineering, electronic music and sound design, or visual and spatial arts), but all students should strive to transcend traditional disciplinary boundaries and work with other students and faculty in collaborative, multidisciplinary research projects and courses.