Wednesday, April 29, 2015

SpQ15 OSC

OSC (developed in the late '90s) is an alternative to MIDI (1983); it hasn't caught on all that quickly yet

Yamaha DX7--one of the first MIDI keyboards, based on FM synthesis--we have one down in the classic lab

MIDI is designed around 7-bit data values (0-127), so it can't carry a lot of data or very rich data.  OSC is designed to handle 32-bit numbers.

Here's the OSC specification site:
http://opensoundcontrol.org/spec-1_0

transport-independent message system--the OSC message is the content, but what's the container it travels in?  There are two basic transport protocols, UDP and TCP/IP; the internet uses both.  On the web, for example, the messages are HTTP messages traveling over a TCP/IP connection.  OSC typically uses UDP.  The main distinction between UDP and TCP/IP is that UDP just sends the data and doesn't care whether the other side receives it, whereas TCP/IP waits for acknowledgement that the data was received.  That adds a little overhead/latency to TCP/IP, but it ensures both sides know the message was sent and received.  You can send OSC messages over TCP/IP, or USB, or even MIDI, but if you want quicker response you use UDP.

In Max, the objects udpsend and udpreceive default to sending/looking for OSC messages.
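For comparison, here's roughly what sending an OSC message over UDP looks like in ChucK (which we're also using this quarter).  Just a minimal sketch: the /test/freq address and port 6449 are placeholders I made up, and it uses ChucK's older OscSend object.

    // send one OSC message over UDP every quarter second
    OscSend xmit;
    xmit.setHost( "localhost", 6449 );       // placeholder host and port

    while( true )
    {
        // address pattern plus type tag ("f" = one float argument) in one string
        xmit.startMsg( "/test/freq, f" );
        // the message goes out as soon as its arguments are complete
        Math.random2f( 200.0, 800.0 ) => xmit.addFloat;
        250::ms => now;
    }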

MIDI has a message structure and a hardware specification.  OSC doesn't have a hardware specification.

Data types: In memory, zeros and ones could mean anything.  The computer has to know what it's looking at/for.
float32 tells it to look for a 32-bit floating-point number--one bit is the sign, eight bits are the exponent (roughly, where the decimal point goes), and the remaining 23 bits are the value (the mantissa)
int32 looks for an integer--signed & unsigned (uint32), signed being negative and positive, unsigned being zero on up
char 8-bit, typically unsigned--in Jitter, often used for RGBA, with each component being a char

big-endian is where the largest, most significant byte comes first (on the "left") and the least significant comes last (on the "right")--Motorola chips use this, whereas Intel uses little-endian--the receiver has to know which way to interpret the bytes, and OSC specifies big-endian
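A quick ChucK sketch of the idea, just pulling the four bytes out of a 32-bit integer most-significant-byte-first (the value 500 is arbitrary):

    // print the four bytes of a 32-bit value in big-endian order
    500 => int value;                        // hex 0x000001F4
    for( 3 => int i; i >= 0; i-- )
    {
        ( value >> (i * 8) ) & 0xFF => int b;
        <<< "byte", 3 - i, ":", b >>>;       // 0, 0, 1, 244: most significant byte first
    }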

in a string, the null ASCII character (literally a value of 0) lets it know the string is done--the null character as typed for OSC is \0--OSC strings are also padded with extra nulls so their total length is a multiple of four bytes

the OSC-blob allows you to package another file, say a TIFF, or a PNG, or a JPG, or any file, you enter the size of the blob in bytes using the data type int32--then the blob is some arbitrary number of bytes, but the OSC-blob message tells you how big the blob is--you can use an OSC message to carry anything, using a blob--he doesn't know if Max supports blobs

OSC Packet consists of its contents (the message data) and its size in bytes.

OSC Packet can be either a Message or a Bundle.  The first byte tells it which of those it is.

OSC Message consists of: an Address Pattern, a Type Tag String, and 0+ Arguments--Address, Type Tags, Args for short

Address Pattern and Type Tags are both OSC Strings

Args could be whatever, however many of them

Address identifies what kind of message it is.  For example, in Jordan's program, /faster.  You could have more slashes and create subcategories of messages, too.  In MIDI, there are fixed message types, like "noteon," but in OSC you have to create the message types yourself, and they can be anything.

Type Tags are a comma "," followed by a sequence of characters corresponding exactly to the arguments that follow.  If there are no arguments, you still need a type tag string: ",\0\0\0" (the comma padded out with nulls).  There are four standard possibilities: i => int32, f => float32, s => OSC-string, b => OSC-blob.  There are also less-standard possibilities, listed in the specification.

In Max, the type tag string is taken care of automatically.  If you're programming it yourself (in Xcode, say), you have to include it.
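ChucK builds the type tag string for you too; you just name the types you expect when you register the address.  A minimal receive-side sketch (port 6449 is arbitrary, the /faster address matches Jordan's example, and this uses ChucK's older OscRecv API):

    // listen for /faster messages carrying one float
    OscRecv recv;
    6449 => recv.port;
    recv.listen();

    // ", f" is the type tag: expect a single float argument
    recv.event( "/faster, f" ) @=> OscEvent @ oe;

    while( true )
    {
        oe => now;                           // wait for a message to arrive
        while( oe.nextMsg() != 0 )
        {
            oe.getFloat() => float amount;
            <<< "got /faster:", amount >>>;
        }
    }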

Wednesday, April 22, 2015

SpQ15 FFT's & Spectra

FFT relies on groups of samples that are 2^n in size--in Max, the defaults are 512 and 1024, and 2048 and 4096 are also common (the FFT window size)

For each bin/data point, you get a frequency and an amplitude in the spectrum--a graph of amplitude over time (the waveform) becomes a graph of amplitude versus frequency--the Fourier transform itself is an integral

divide the sample rate (48000, for ex) by the FFT size (1024, for ex) --> 46.875 Hz between points (bins) in frequency on the resulting graph--the bin center frequencies form a harmonic series on that bin spacing
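The arithmetic, as a tiny ChucK sketch (sample rate and FFT size are just the example numbers above):

    48000.0 => float sampleRate;
    1024 => int fftSize;
    sampleRate / fftSize => float binSpacing;          // 46.875 Hz between bins
    <<< "bin spacing (Hz):", binSpacing >>>;
    <<< "bin 10 center (Hz):", 10 * binSpacing >>>;    // bins sit at multiples of the spacing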

when you get up to half the sample rate (the Nyquist frequency), you get a mirror image

pfft~ object--it needs to load another patch, so give it a name: pfft~ my_pfft--then add an argument for FFT size: pfft~ my_pfft 1024--the next argument is the overlap factor: 2 means each new window starts halfway through the previous one (a hop of 1/2 the window), 4 means the hop is 1/4 of the window, etc.--pfft~ my_pfft 1024 2--the input and output objects inside the subpatch are fftin~ 1 and fftout~ 1--connect the first two outlets of fftin~ to the first two inlets of fftout~

fftin~ 1 has a far-right outlet that's the bin index; the other two outlets are the real and imaginary parts of that bin--they can be graphed as x and y--if you take the same Cartesian coordinates and convert to polar, the r (radius) value gives you the amplitude of that bin, and the angle theta gives you the phase of the wave--a key element of phase vocoders--put a cartopol~ object between fftin~ and fftout~ to convert Cartesian to polar coordinates
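The conversion cartopol~ does for each bin is just this (a ChucK sketch with made-up real/imaginary values):

    // Cartesian (real, imaginary) to polar (magnitude, phase) for one bin
    0.3 => float re;
    0.4 => float im;
    Math.sqrt( re*re + im*im ) => float magnitude;   // r: that bin's amplitude (0.5 here)
    Math.atan2( im, re ) => float phase;             // theta: that bin's phase
    <<< "magnitude:", magnitude, "phase:", phase >>>;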

you can do inverse FFT to go from frequency domain into time domain, as well--works as a filter

spectral gate: only louder (for example) components make it through--the maximum amplitude you can get is the window size divided by two or four or something--you just need two objects, a > object (which sends a 1 if true and a 0 if false) and a multiply object--that gives you cool effects that only pass frequencies louder than a certain amount--it basically produces a sinusoidal result--re-synthesizing based on the frequency spectrum
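The per-bin logic of that > plus multiply pair, written out as a ChucK function--just a sketch of the comparison/multiply idea, not the actual pfft~ patch:

    // pass a bin only if its magnitude is above the threshold;
    // (magnitude > threshold) is the "> object", multiplying by 1 or 0 is the multiply object
    fun float gateBin( float magnitude, float threshold )
    {
        if( magnitude > threshold ) return 1.0 * magnitude;   // > was true: multiply by 1
        return 0.0 * magnitude;                                // > was false: multiply by 0
    }

    <<< gateBin( 0.8, 0.5 ), gateBin( 0.2, 0.5 ) >>>;   // 0.8 passes, 0.2 is zeroed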

ChucK questions 2

What's up with the word "now" and the component of a shred where you have to advance time?  How does ChucK conceive of time passing?  How do you run shreds in parallel with the miniAudicle?  Still command-line thing?  What is meant by suspending a current shred to let time pass?
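My current understanding, as a minimal sketch: time in ChucK only advances when a shred chucks a duration to now (that's the "suspending to let time pass" part), and spork ~ launches another shred that runs in parallel with the current one.  The beep function here is just made up for illustration:

    // time passes only when a shred advances "now"
    fun void beep()
    {
        SinOsc s => dac;
        880 => s.freq;
        1::second => now;    // this shred suspends here while one second of audio elapses
    }

    spork ~ beep();          // launch beep() as a parallel (child) shred
    2::second => now;        // the parent must also let time pass, or it ends right away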

How to get ChucK to reference outside itself?  That is, read audio files, write to audio files, and also produce MIDI mappable into Live or Max, and also recognize/interact with a MIDI foot controller?

How to create a random note order?  Where to type Math.random2f object?
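A guess at the random-note part: Math.random2f is a function you just call anywhere inside a shred, so something like this sketch (the pitch range and timing are arbitrary):

    // play random pitches by calling Math.random2f inside the loop
    SinOsc s => dac;
    0.3 => s.gain;
    while( true )
    {
        Math.random2f( 48.0, 72.0 ) => float note;   // random MIDI-style note number
        Std.mtof( note ) => s.freq;                  // convert note number to Hz
        250::ms => now;
    }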

How to chuck an LFO to blackhole properly so it modulates frequency and/or amplitude?
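A sketch of how I think the blackhole pattern is supposed to work: the LFO is chucked to blackhole so it gets computed without being heard, and a loop reads its last() output to modulate the carrier (all the values here are placeholders):

    // LFO runs silently into blackhole; its output modulates the carrier's frequency
    SinOsc carrier => dac;
    SinOsc lfo => blackhole;
    5 => lfo.freq;                                    // 5 Hz vibrato rate
    while( true )
    {
        440.0 + lfo.last() * 20.0 => carrier.freq;    // +/- 20 Hz around 440
        1::ms => now;                                 // update the modulation every millisecond
    }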

Monday, April 20, 2015

ChucK Beginnings

Question: How to run multiple shreds in parallel through miniAudicle?

Answer:

Time units: ms, second, minute, hour, day, week, samp(le)

Oscillators: SinOsc, SawOsc, SqrOsc, PulseOsc

Standard effects: gain, PRCRev (reverb)

Can have ChucK read a value and then chuck it back to itself.  Add parentheses after the name of the parameter, like s1.freq()
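Putting those pieces together, a minimal sketch: oscillator into reverb, set a parameter, read it back with the parentheses form, then advance time.

    SinOsc s1 => PRCRev rev => dac;     // oscillator through reverb to the output
    0.1 => rev.mix;                     // a little reverb
    440.0 => s1.freq;                   // set the frequency
    <<< "freq is", s1.freq() >>>;       // read the value back
    2::second => now;                   // let two seconds of sound happen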

Question: How to read a sound file in ChucK?

Answer:  WvIn reads a file (and can change its rate), WaveLoop loops a file (you can change the # of loops/sec and the phase offset), WvOut writes samples to an audio file.
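Another common way to read a file is SndBuf; a minimal sketch using that instead of WvIn/WaveLoop (the filename is a placeholder):

    // load a sound file and play it back at half speed
    SndBuf buf => dac;
    "mysound.wav" => buf.read;       // placeholder filename
    0 => buf.pos;                    // start playback at the beginning
    0.5 => buf.rate;                 // half speed (an octave down)
    5::second => now;                // let it play for a while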

Question: How to record an entire ChucK session?
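One answer I've seen (a sketch, not yet verified): chuck the dac into a WvOut that writes a wav file, run it alongside the other shreds, and close the file when done.  The filename is a placeholder.

    // record everything that reaches the dac into a wav file
    dac => WvOut w => blackhole;
    "session.wav" => w.wavFilename;   // placeholder output filename
    1::minute => now;                 // record for a minute (or as long as you like)
    w.closeFile();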

Wednesday, April 8, 2015

SpQ15 Project Update 3

I would like to learn to amplitude modulate clips, reverse clips, and speed up/slow down during performance, with sound that's recorded live, without the performer touching anything.  Obstacles to this: Ableton won't MIDI/keyboard map these things because they take time to process.  The lag time isn't so much a problem for me, because I wouldn't necessarily want them playing back as soon as they're recorded, but sometime later.  It could be that I can record the clips directly into tracks that already have the speed and reverse set, but there's no obvious way to pre-set those things, since the sample box only appears after something has been recorded into the clip.

Possibilities include: 1) writing Max patchers that reverse, change speed automatically, and force samples to amplitude modulate, 2) figuring out how to control automation live for the envelope part, although this sounds just about as complicated as the previous option, maybe more.

To write these Max patchers: record clip in Live, transfer audio into buffer in Max, play audio back out in processed form to a NEW clip or track in Ableton to create a new audio file--this must be a pretty common thing to do, right?

As for amplitude modulation, I really need to explore exactly what that means, sounds like, go back to Chris' Max examples and see how he does it.

Monday, April 6, 2015

SpQ15 Project Update 2

Crunchiness:
I put Tetsu Inoue's piece into Live to check out the waveform.

The piece pans drastically.  Each sound has a unique place in the stereo field.  At some places (~:50), the waveform is the same in L and R channels but delayed by small fractions of a second.

The envelopes are very crisp, with one section being individual, evenly spaced blips of sound--almost as if the L and R channels were given different algorithms for reproducing a blip of some kind at different rates.  *On the frequency spectrum, these blips seem to be fairly even white noise.

There are some sounds where the amplitude is vertically displaced in strange ways, so the waveform itself looks like a wave (~41 seconds).

Starting at ~80 seconds, that amplitude vertical displacement thing is happening, but I can catch snippets of a voice, so I think it's a sample that's been put into some kind of weird envelope.

Fairly broad frequency spectrum the whole time, but in certain sounds high frequencies are definitely emphasized.  Some sounds look synthesized, as their frequency spectra have very evenly repeating or distributed shapes (~1:30).

I'm thinking, other than synthesizing some sounds, a lot of the piece is interesting samples time-shifted and fit to within very specific envelopes with rapid changes in amplitude.

As for Shlohmo's Teeth, the panning isn't nearly as prominent.  That's probably not a core part of the crunchy sound.  There are abrupt switches between higher-frequency, digital "glass-breaking" sounds and bass sounds, almost like there's some extreme compression on there.  Side-chain?


What I should try:
Modulating incoming sound to achieve clipping and distortion.

Check out some weird compression.

Specifying repeating sets of high frequencies to be brought out in sound.

Figuring out how to change speed without use of the multi-bar looper device.  -In sample view, you can change the speed either in jumps of a factor of 2 or gradually with a mouse drag.  Unfortunately, this isn't something that's MIDI- or keyboard-mappable.

Figuring out how to force an envelope onto a sample.* *It seems like this might require making Max for Live objects which take the content of a clip and put it into a buffer, and then feed the audio back out to a track in Live.

Saturday, April 4, 2015

SpQ15 Project Update 1

First week goals:  1) figure out looping in Live, and looping at different speeds
2) define some things that make a crunchy, crisp digital sound

Questions:
-Do I want warping on or off?  There will be some clips I'd like quantized, but there will also be clips I'd like to start playing immediately when I hit play.  Is there any way to individualize this by track, or something?

Here's an informative article on warping in Live: http://www.soundonsound.com/sos/dec06/articles/livetech_1206.htm

-If I turn warping off completely for this piece, what is the lag between when I press record and when it actually starts recording?  If I want to have a rhythmic section, timing everything precisely will be important, like with a traditional loop station.

Observations:
-I'm trying out a looper instead of recording directly into a clip.  That seems to start recording more quickly, but when it plays back, audio doesn't seem to come through to Audio 1 or the master.

-The looper seems to have features that allow speed changes (a knob, but can also jump up/down by intervals), doubling the loop length, halving the loop length, and reversing samples.  There's also a "drag me" feature that lets the audio from a looper be dragged as a complete audio loop.  These built-in features could come in very handy, if I can figure out how to get the audio to come through.

Wednesday, April 1, 2015

Live class 4/1

NAME everything.

convolution reverb--the sonic "fingerprint" of a space (an impulse response)

can freeze audio & have it play back, incl live instruments--either control click and Flatten, or make another audio track & drag it over after freezing it

instrument rack--new MIDI track, open Instruments-->Sampler--throw a sample in there, and it makes it into an instrument!--you can layer as many samplers as you want into an instrument rack, limited only by CPU--play them in unison or in different configurations as you choose

can limit certain layers according to velocity--create a more complex sound
can pan each of them

Plugins-->Serum (synthesizer), a wave table synthesizer
can manipulate oscillators in real time
can scan through an audio file, will take chosen section of audio file as shape of oscillator

Then, when you're done designing an instrument, save it as a unique instrument, stored in Live forever.  Yay!

Can choose to add effects to part of a rack, not whole rack, or whole thing, or everything.

Granulator-->granular synthesis, drop sample in--file position determines where granulator will start playing it back--grain size determines size of grains--can scrub through file at any speed--make new sounds out of audio files--spray option gets it to jump around in audio file, introduces more randomness

Collect All and save, will save things all in the same folder

Groove Pool--picks up rhythmic signature of a beat, will quantize your MIDI to match it, so it can swing--you can alter little gray triangles to change rhythmic signature manually--then can split into MIDI, so it'll put a beat into a drum rack for you, or a beat into a kit, so you can play it on a keyboard

SpQ15 4/1: intro

munger~ object: a granular synthesis object

extremes! of dynamics, density, and speed

computers are really good at making hundreds and thousands and millions of copies of things--something humans struggle to do--a good way to address extremes perhaps

murmur--third-party iOS software that turns your iPhone into something like a remote control

Gen (gen~) is a SuperCollider-like feature in Max