Composing with Data Flow Programming
- 1 Overview
- 2 Learning Diary
- 2.1 Day One
- 2.1.1 Object box
- 2.1.2 Bang
- 2.1.3 Toggle
- 2.1.4 Message box
- 2.1.5 Symbol box
- 2.1.6 Sliders
- 2.1.7 Canvas
- 2.1.8 Metro object
- 2.1.9 Delay object
- 2.1.10 Counter object
- 2.1.11 Route object
- 2.1.12 Sound~
- 2.1.13 Additive synthesis
- 2.1.14 Subtractive synthesis
- 2.1.15 mtof
- 2.1.16 AM synthesis
- 2.1.17 FM synthesis
- 2.2 Day 2
- 2.3 Day 3
- 2.4 Day 4
- 3 Final Project
- 3.1 Initial iterations, process
- 3.2 Abstract
- 3.3 Architecture
- 3.4 Interface
- 3.5 Master patch
- 3.6 Video
- 3.7 References
Composing with Data Flow Programming was a five-day (very intense) workshop on working with PureData and understanding the basics of graphical programming environments. The workshop was taught by Juan Vasquez. Juan and Koray mentored me for the final project, The Sound of a Finnish Snail, which was done at the end of the workshop. (04/09/17 - 08/09/17)
I knew about PureData (pd) but I had never really worked with it prior to the course. Having worked with analog electronics before, it helped to understand some of these objects and functions by comparing them with those components and circuits. The course began with a video showing some projects done using pd. We were then introduced to some of the most basic objects used in a pd sketch.
The help menu, with all the information on an object, can be accessed by right-clicking the object. If an object cannot be found, it can also be copy-pasted from its help file.
An object box is the most basic unit. It is like an empty box into which you write different functions such as print, osc, bang, metro etc. (keyboard shortcut cmd+1). Depending on the function inside the box, its inlets and outlets change.
A bang is like a momentary switch. When pressed or triggered, it starts whatever is connected to it. It can be written as 'bang' inside an object box or placed from the Put menu (cmd+shift+B). The bang button blinks when activated and can also be used to visually check whether we are getting an output.
Toggle is like a regular on/off switch. (cmd+shift+T)
A message box holds whatever is written inside it and passes it to its outlet when triggered. It can also be triggered by clicking on it. It can hold variables, numbers or characters. (cmd+2)
The symbol box stores a symbol and passes it to its outlet when triggered. The left inlet can be used to send a message or a bang. The right inlet can be used to send a message which gets stored in the box. (cmd+4)
Sliders are equivalent to potentiometers in analog circuits. They can be horizontal or vertical sliders or knobs, and their range can be set in their properties by right-clicking them. VERY IMPORTANT to remember: when making a slider to control volume, always set the range to 0 - 1.
Creates a canvas which hides the objects underneath it. Useful for making interfaces.
Sends a series of bangs at regular intervals, like a clock or metronome. The left inlet can be used with a toggle, and the right inlet can take a message or number which sets the rate. The rate can also be set by typing it after metro in the box. Important: the right-inlet number overrides the number written in the box. For example, in the image below, the metro will start at 1000 ms, but if 300 is clicked it will change to 300 ms, or if the number box is clicked/changed, it will take the rate from that number.
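Since pd is graphical, the metro object itself can't be shown as text, but the timing logic it implements is easy to sketch. This is a hypothetical Python stand-in (not part of pd) showing how the rate, and a rate override, determine when the bangs fire:

```python
def metro_times(rate_ms, n_bangs):
    """Return the times (in ms) at which a metro at the given rate would bang."""
    return [i * rate_ms for i in range(n_bangs)]

# Rate typed after metro in the box:
print(metro_times(1000, 4))  # [0, 1000, 2000, 3000]
# A number sent to the right inlet overrides the rate typed in the box:
print(metro_times(300, 4))   # [0, 300, 600, 900]
```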
Delays and pauses the flow of the program. Useful for timing things and creating fades and smooth transitions. The inlets and outlets work similarly to the metro object.
Counter counts the number of bangs received. The number after counter denotes the maximum number it counts to. For example, counter 2 counts from 0 to 2 repeatedly (0, 1, 2, 0, 1, 2, 0, 1, 2...).
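The wrap-around behaviour of the counter can be sketched in Python. This is just an illustrative stand-in for the pd object, not its actual implementation:

```python
def make_counter(maximum):
    """Return a 'bang' function that yields 0..maximum, wrapping like counter."""
    state = {"n": -1}
    def bang():
        state["n"] = (state["n"] + 1) % (maximum + 1)
        return state["n"]
    return bang

count = make_counter(2)  # like "counter 2"
print([count() for _ in range(7)])  # [0, 1, 2, 0, 1, 2, 0]
```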
Routes a message according to given criteria. In this example it routes 0 to the first outlet, 1 to the second and 2 to the third outlet. The fourth outlet is for routing everything apart from 0, 1 and 2.
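A simplified Python model of this routing behaviour. (pd's route actually strips the matched selector from the message; this sketch only shows which outlet a message ends up on, with the last outlet catching everything unmatched.)

```python
def route(message, selectors):
    """Return (outlet_index, message). Unmatched messages go to the last outlet."""
    if message in selectors:
        return selectors.index(message), message
    return len(selectors), message  # the rightmost "reject" outlet

print(route(1, [0, 1, 2]))  # (1, 1)  -> second outlet
print(route(5, [0, 1, 2]))  # (3, 5)  -> fourth outlet (everything else)
```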
The sound objects generate sounds. The left inlet is generally used for inputting a number which translates to the frequency of the sound produced. Some examples are osc~ (sine wave) and phasor~ (sawtooth).
We created a small patch that produces the effect of beating by adding sine waves together. Sound synthesis by adding waves together is called additive synthesis.
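The beating effect comes from summing two sine waves whose frequencies are very close: the sum's amplitude rises and falls at the difference frequency. A rough Python sketch of the same idea (frequencies 440 Hz and 442 Hz are my own example values, giving 2 Hz beating):

```python
import math

SR = 44100
f1, f2 = 440.0, 442.0  # two close frequencies -> beating at |f1 - f2| = 2 Hz

def sample(n):
    t = n / SR
    return 0.5 * (math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t))

def peak(start_s, dur_s=0.02):
    """Peak absolute amplitude over a short window, approximating the envelope."""
    a, b = int(start_s * SR), int((start_s + dur_s) * SR)
    return max(abs(sample(n)) for n in range(a, b))

loud = peak(0.0)   # near a beat maximum
soft = peak(0.25)  # a quarter second later, near a beat minimum
print(round(loud, 2), round(soft, 2))
```

The identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2) explains it: the cos term is the slow 2 Hz envelope we hear as beating.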
Similarly, sound synthesis by removing some frequencies is called subtractive synthesis. The noise object produces uniformly distributed white noise. We made a patch which used the noise object and low-pass, band-pass and high-pass filters to create the sound of wind. This patch also has information on how to record the output to a file on the computer.
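pd's lop~ object is a one-pole low-pass filter. A rough Python equivalent shows the subtractive idea: filtering white noise removes high-frequency energy, so the output is quieter and smoother than the input. (The cutoff value here is my own example choice.)

```python
import math
import random

random.seed(1)
SR = 44100

def lowpass(samples, cutoff_hz, sr=SR):
    """One-pole low-pass filter: each output is a smoothed version of the input."""
    a = math.exp(-2 * math.pi * cutoff_hz / sr)  # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * x + a * y
        out.append(y)
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

noise = [random.uniform(-1, 1) for _ in range(SR)]  # one second of white noise
filtered = lowpass(noise, 500.0)
# Filtering away the high frequencies removes most of the noise's energy:
print(round(rms(noise), 3), round(rms(filtered), 3))
```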
The mtof object converts MIDI note numbers to frequency.
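The conversion mtof performs is the standard MIDI-to-frequency formula, with A4 (MIDI note 69) at 440 Hz and each semitone a factor of 2^(1/12):

```python
def mtof(note):
    """Convert a MIDI note number to frequency in Hz (note 69 = A4 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(mtof(69))           # 440.0
print(round(mtof(60), 2)) # 261.63  (middle C)
```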
From the FLOSS manual - Amplitude Modulation Synthesis is a type of sound synthesis where the gain of one signal is controlled, or modulated, by the gain of another signal. The signal whose gain is being modulated is called the "carrier", and the signal responsible for the modulation is called the "modulator". We first tried this on a patch with 2 sine waves. We later replaced the carrier with a soundfile stored on the computer.
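The two-sine-wave version can be sketched in Python: the carrier's gain is scaled sample by sample by a slow modulator. The 440 Hz / 2 Hz values are my own example choices (a 2 Hz modulator gives an audible tremolo):

```python
import math

SR = 44100
CARRIER_HZ, MOD_HZ = 440.0, 2.0

def am_sample(n):
    t = n / SR
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)         # the "carrier"
    gain = 0.5 * (1.0 + math.sin(2 * math.pi * MOD_HZ * t))  # the "modulator", 0..1
    return carrier * gain

def peak(start_s, dur_s=0.01):
    a, b = int(start_s * SR), int((start_s + dur_s) * SR)
    return max(abs(am_sample(n)) for n in range(a, b))

# Loud where the modulator's gain is near 1, nearly silent where it is near 0:
print(round(peak(0.125), 2), round(peak(0.375), 2))
```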
From the FLOSS manual - While Amplitude Modulation Synthesis changes the gain or volume of an audio signal, Frequency Modulation Synthesis, or FM Synthesis, is used to make periodic changes to the frequency of an oscillator. In its simplest form, Frequency Modulation uses two oscillators. The first is the carrier oscillator, which is the one whose frequency will be changed over time. The second is the modulator oscillator, which will change the frequency of the carrier. In this patch we also started using arrays.
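The simple two-oscillator form can be sketched in Python using the phase-modulation formulation commonly used for FM synthesis: the modulator oscillator is added into the carrier's phase. The carrier/modulator frequencies and the modulation index here are my own example values:

```python
import math

SR = 44100
CARRIER_HZ, MOD_HZ, INDEX = 440.0, 110.0, 3.0  # index controls modulation depth

def fm_sample(n):
    t = n / SR
    # The modulator oscillator shifts the carrier's phase (and so its
    # instantaneous frequency) over time:
    return math.sin(2 * math.pi * CARRIER_HZ * t
                    + INDEX * math.sin(2 * math.pi * MOD_HZ * t))
```

With INDEX set to 0 this reduces to a plain 440 Hz sine; raising the index spreads energy into sidebands around the carrier, which is what gives FM its characteristic timbres.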
On Day 2 we moved on to writing expressions (for basic mathematical functions, comparisons, if statements etc.), fiddle, and GEM for visuals.
NOISA, analysis patch, discussion of project ideas, more work on GEM
Working on final project
For my final project I made a synthesiser and sequencer which produced sound from the video feed of a microscope. I wanted the sound to be controlled by the behaviour, movement and physical appearance of the organism. During the final class presentation we heard the sound of two (very quiet) Finnish snails using this patch.
Initial iterations, process
Initially I thought of using different squares of the video to control different parameters of the sound.
I started working on this using a colour-tracking reference patch I found. I had a lot of difficulty working with it, as the video kept getting stuck for some reason and the program kept crashing, so I abandoned this approach.
The abstract can be found here File:Shreyasi Kar - Microscope Sequencer abstract.docx
The master patch became quite messy as the different modules kept getting added. To understand it better, the master patch and sub patches have been broken down into smaller parts.
The first step is to initialise the GEM window and to set its open and close buttons.
USB camera input
Telling the GEM window to take its input from the microscope connected via USB. It is important to use the loadbang followed by the port number where the USB device (microscope) is connected. Without the loadbang, it automatically starts the built-in webcam.
Open video file from the computer
Instead of a live video, there is also an option to load a pre-recorded video saved on the computer. After opening the file, playback is looped.
Watching pixels along the line
The pixels of the video are continuously analysed at the centre (along the white line) for their RGBA values. The values are then written into an array (0-camera in this patch), which creates a wave there. The input is connected to the pix_film/pix_video outlets.
For example, if we hold something with vertical stripes of different colours, the array wave will look something like this.
When a video from the microscope is played, we get a much more complex waveform.
The white line is a graphical representation of where exactly the pixels are being analysed. It is drawn like this.
Reading the array
The data from the video is converted to sound in the NOISA subpatch. NOISA was a patch that we made with Juan in the class.
Using 'tabosc', the wave in the array 0-camera is read back and converted to sound. To make the sound more interesting, the array is read back three times and the readings are added together. A volume control is added so that we can choose how much of the video we want to hear in the final mix.
Storing the sound as steps for sequencer
Along with going to the final output, the sound of the video can also be used as the steps of the sequencer. At any point, it is possible to begin recording a step. Once the button for recording a step is clicked, it stores an 8-second sample as a step in a numbered array. At the end of the 8 seconds it automatically increments the step number, so that when the button is pressed again the next step is recorded. When the button is pressed 8 times, 8 steps of 8 seconds each get stored in 8 different arrays.
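The step-recording logic (record into the current array, then auto-increment and wrap after 8 steps) can be sketched in Python. This is only an illustrative model of the patch's control flow, not the actual pd implementation:

```python
STEPS = 8  # the sequencer has 8 numbered arrays

class StepRecorder:
    """Each button press stores one 8-second sample in the next array."""
    def __init__(self):
        self.step = 0
        self.arrays = [None] * STEPS

    def record(self, audio):
        self.arrays[self.step] = audio       # store the sample as this step
        self.step = (self.step + 1) % STEPS  # auto-increment for the next press

rec = StepRecorder()
for i in range(9):                 # press the record button nine times
    rec.record(f"sample_{i}")
print(rec.step)       # 1 -> wrapped past step 8 and moved on
print(rec.arrays[0])  # sample_8 -> the ninth press overwrote step 0
```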
The recorded steps are played back in the sequencer using 'tabplay'. The metro object sets the tempo of the sequencer. It is very important to send the metro rate to all eight delays, which are responsible for stopping each sample after it has started playing. Without this, each sample would continue playing even when the sequencer has moved on to the next step. Each step can be set on or off from the main interface.
The output of the sequencer has a volume control which can be adjusted from the interface.
Final output, mixing and recording
The signals from NOISA (which contains the sound generated from the video) and the sequencer are caught, and a final volume control is added. The final mix can also be stored on the computer as an audio file by clicking record on the interface. The file number increments every time the record button is pressed, and can be reset by pressing reset. The recording is stopped by pressing the stop button.