5th ACM SIGPLAN Workshop on Functional Art, Music, Modeling and Design (FARM)

Performance Evening

Old Fire Station Oxford, UK

9 September 2017, 7:30PM-9:30PM

An evening of strange and wonderful music and audio-visual work made from computer code. Featuring an international line-up of artists, including groundbreaking livecoders who work directly with the innards of software, writing and manipulating code while a computer runs it, projecting their screens so you can see the code behind the performance. Performances will include: extreme manipulation of the Amen breakbeat, two audio-visual performances where algorithms bring shape, colour and sound together, an algorithmic take on Hindustani music, and an audio-visual journey into lambda calculus.

The concert takes place from 7:30PM-9:30PM.

Note that you’ll need a ticket to attend the concert. Tickets are free, but the Old Fire Station asks for a contribution.

What follows is a list of performers, with brief notes on their performances and biographies edited from the performers’ submissions.

Performance Notes and Bios

Alexandra Cárdenas

Alexandra Cárdenas will perform through live coding, combining her interests in improvisation, composition, programming, live electronics and traditional music. Alexandra projects her screen for the audience to witness what she is writing on her computer. Using SuperDirt (a SuperCollider implementation of the Dirt sampler for the TidalCycles language), Alexandra creates her own sounds in SuperCollider and sequences them with patterns written in real time in TidalCycles.
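For a sense of what such live-written patterns look like, here is a minimal, illustrative TidalCycles sketch (not Cárdenas's actual material), assuming a running SuperDirt instance with its default sample library:

    -- a drum pattern on channel d1, doubled in speed every fourth cycle
    d1 $ every 4 (fast 2) $ sound "bd sn:3 [~ bd] sn:3" # gain 0.9

    -- a hi-hat layer on d2, occasionally bit-crushed and panned along a sine wave
    d2 $ sometimesBy 0.3 (# crush 4) $ sound "hh*8" # pan sine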

Biography

Composer, programmer and improviser of music, Cárdenas has followed a path from Western classical composition to improvisation and live electronics. Her work, made with open source software, focuses on the exploration of the musicality of code and the algorithmic behaviour of music, especially through live coding. She currently lives in Berlin, Germany, having recently completed her master’s in Sound Studies at the Berlin University of the Arts.

Joe Beedles

Joe Beedles has custom-built a system (designed in Max) which allows for live audiovisual performance by employing functional programming techniques, algorithms, procedural drawing and seeded autonomous computation. In its current state he interacts with the system in an improvisatory manner, creating rhythmical patterns, seeding upcoming changes to the structure and directing the flow of the performance. A direct visual component, consisting of abstract geometry and shaded hues, works alongside the audio to create a pseudo-synesthetic experience. Emphasis is placed on the live, real-time, generative aspect of the performance, with a focus on interactivity between performer, computer, audience and the space. Max/MSP users such as Autechre have been heavily influential, both aesthetically and conceptually, on his practice: he likes to abstract elements of techno, noise and glitch, fusing FM synthesis with found sound and acoustic recordings. Visually he is inspired by a combination of early Windows screensavers (Beziers), building architecture (Antoni Gaudí), oscilloscopes (Robin Fox), Op/Kinetic art (Naum Gabo) and emergent patterns in natural light (shadows and clouds).

Biography

In his work, the young Manchester-based audiovisual artist Joe Beedles draws on live recordings and synthesized noise; he focuses on concepts surrounding club music abstraction, blurring the line between ‘the real’ and ‘the simulated’. His current emphasis is on generative systems for live performance, providing audiences with highly detailed compositions that emphasise magnified yet obscured soundscapes. Since premiering his work at The Banff Centre in March 2016, Beedles has continued to develop his live set both technically and conceptually, culminating in solo performances, particularly within the Algorave movement and Test Card (Manchester). Beedles, who also goes by the artist name Native, releases on the London-based label Laura Lies In. His output is varied, with stylistic themes of vocal manipulation, shimmering harmonic structures, swathes of textural ambience and deconstructed technoise. Visually, simple geometry is abstracted in reaction to the audio in a pseudo-synesthetic fashion. His EP ‘Polaris’ was released in 2016.

Filippo Guida

Many studies show how synaesthetic phenomena induced by images are able to excite the auditory cortex, giving evidence of the multi-modality of the sound experience itself and raising new questions about the complementarity of music and visual art already suggested by John Whitney.

In his performance, Filippo Guida deepens some aspects of this theory, already the subject of his past works, by manipulating complex video materials (in this case human body gestures) using sound-based generative techniques and languages.

The result is a sort of artistic application of VJ techniques through a live coding language (TidalCycles), using custom-made software (VideoDirt) to manage the video samples.

Claude Heiland-Allen

GULCII (Graphical Untyped Lambda Calculus Interactive Interpreter)

GULCII is an untyped lambda calculus interpreter supporting interactive modification of a running program with graphical display of graph reduction. Lambda calculus is a minimal prototypical functional programming language developed by Alonzo Church in the 1930s. Church encoding uses folds to represent data as higher-order functions. Dana Scott’s encoding composes algebraic data types as functions. Each has strengths and weaknesses.
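For readers unfamiliar with the two encodings, here is a minimal Haskell sketch (illustrative only, not GULCII's own code) of natural numbers encoded both ways, together with a conversion of the kind the performance tests:

    {-# LANGUAGE RankNTypes #-}

    -- Church numerals: a number is the fold that applies a function n times.
    newtype Church = Church (forall a. (a -> a) -> a -> a)

    czero :: Church
    czero = Church (\_ x -> x)

    csucc :: Church -> Church
    csucc (Church n) = Church (\f x -> f (n f x))

    -- Scott numerals: a number is its own case analysis, exposing only
    -- whether it is zero or the successor of some predecessor.
    newtype Scott = Scott (forall a. a -> (Scott -> a) -> a)

    szero :: Scott
    szero = Scott (\z _ -> z)

    ssucc :: Scott -> Scott
    ssucc n = Scott (\_ s -> s n)

    -- Church folds over the whole number at once; Scott gives direct access
    -- to the predecessor, so converting Scott to Church needs explicit recursion.
    scottToChurch :: Scott -> Church
    scottToChurch (Scott m) = m czero (csucc . scottToChurch)

    churchToInt :: Church -> Int
    churchToInt (Church n) = n (+ 1) 0

    -- churchToInt (scottToChurch (ssucc (ssucc szero))) == 2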

The performance is a code recital, with the internal state of the interpreter visualized and sonified. Part 1 introduces Church encoding. Part 2 develops Scott encoding. An interlude takes in two non-terminating loops, each with their own intrinsic computational rhythm. Finally, Part 3 tests an equivalence between Church and Scott numerals.

Biography

Claude Heiland-Allen is an artist from London, UK, interested in the complex emergent behaviour of simple systems, esoteric geometries, and mathematical aesthetics. Using computer software, and programming his own, has been part of his practice for both sound and vision since the mid-1990s.

First exposed to functional programming in his first year at Oxford University at the turn of the century, in the form of Haskell (the language in which his present-day GULCII is implemented), he also spends a significant portion of his time writing code for art’s sake in other languages such as C and GLSL. Over the last decade he has performed and presented across Europe and beyond.

Neil C Smith

AMEN $ Mother Function

One sample, one function: a live-coded, single-function demolition of the most ubiquitous sample in modern music. This new performance work is both an essay in conceptual minimalism and an attempt at filling the dance floor with the aid of one wavetable and a little maths.

Entirely created within the context of a single, pure function running at audio rate, this performance starts with a sawtooth LFO reading from a wavetable filled with the Amen break. Through the course of the performance, the Amen break is gradually unmade into new rhythms, melodic and synthetic sounds. Working with a single sample at a time, eschewing recursion and randomness, provides a restrictive but rewarding creative challenge.
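As a rough sketch of the idea (written here in Haskell rather than the Praxis LIVE environment used on stage, with hypothetical names and parameters), a single pure function of the sample index can ramp a sawtooth phase through a wavetable holding the break:

    import qualified Data.Vector.Unboxed as V

    -- Hypothetical wavetable: the Amen break as samples in [-1, 1].
    type Wavetable = V.Vector Double

    -- One pure function of the sample index: a sawtooth LFO ramps a phase from
    -- 0 to 1 at lfoHz and indexes the wavetable with it, so changing the ramp's
    -- rate or shape re-pitches and re-orders the break.
    amen :: Wavetable -> Double -> Double -> Int -> Double
    amen table sampleRate lfoHz i =
      let t      = fromIntegral i / sampleRate
          cycles = t * lfoHz
          phase  = cycles - fromIntegral (floor cycles :: Int)  -- sawtooth in [0, 1)
          idx    = floor (phase * fromIntegral (V.length table))
      in  table V.! min idx (V.length table - 1)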

The silent $ in the title (also used in the code) is in tribute to Gregory C. Coleman, the original drummer of the Amen break, who died homeless and broke.

Biography

Neil C Smith’s work explores ways of opening up and challenging the nature of creativity itself, in particular through the use of technology. Working sonically and visually, solo and in collaboration, he creates live improvised performances, and context-specific generative and interactive installations. His work exists in real time and real space: each moment unique and unrepeatable. Based in Oxford, he has shown work and performed across the UK, Europe and Canada. He is also lead developer of Praxis LIVE, the open-source hybrid visual environment for live creative coding used in this performance.

Joseph Wilk

❤️ ☠️

A musical and visual performance binding Unity, Emacs and SuperCollider with Sonic Pi, combining the quality and performance of the Unity games engine, the power of manipulating text through Emacs, and the musical range and diversity of the SuperCollider engine and Sonic Pi. While samples and software+hardware synths are the main musical tools, a lot has gone into their creation. Samples have been sufficiently smashed into tiny pieces through Clojure-based DSP techniques, software synths have been grown in vats of QuickCheck (exploding with state spaces), and hardware synths have been thoroughly corrupted through state-mutating code (sorry). What has been done to Emacs is probably best left to your imagination and the source control history.

A performance which breaks down the tools and languages of programming, to rediscover how they enable us all.

Biography

Joseph Wilk’s path into creating music started with a journey to use artificial intelligence to generate music to send his newborn daughter to sleep. It led him to explore machine creativity and then to perform live-coded music and visuals. Most of his work now explores that intersection, creating a curated visual and musical live experience for viewers. Wilk likes to tell a story and to play with the concepts of programming, using them in novel and unintended ways, exploring what code means as a musical instrument and learning for himself how he can express himself while sharing with people what’s possible with code: that programming is not just a path to creating a Silicon Valley startup but a form of self-expression.