Wednesday, May 20, 2015

SpQ15 Max: resynthesizing w/ poly~

sigmund~ peaks @npeak 1
unpack i f f    (gives you index number, frequency, amplitude of that peak)
cycle~    (resynthesizes as sine tone)
*~  (adjusts gain)  (multiplies the cycle~ signal by the amplitude float from unpack)
can add tapin~/tapout~ pair with an argument of the delay time in ms, before the dac

then, change to 10 peaks instead of 1 in sigmund~
cycle~ object switches among the 10 peaks really fast, sounds kind of swarmy

ENTER poly patcher

MAKE SUBPATCHER:
in 1 AND in 2
2 flonum boxes, one into cycle~ and one directly into R inlet of *~
cycle~
*~
out~ 1

Inlets are frequency and amplitude information, out~ gives you a signal.

IN ORIGINAL PATCH:
poly~ Poly    (poly~'s first argument is the subpatcher's filename)

Insert this where cycle~ and *~ were before.  Now you have a nice 1-voice poly.

How do you do this with 10 peaks/voices rather than just one?

send poly~ a message that's voices $1, connected to its left inlet--feed it a number box and you can input however many voices you want

you can put an open $1 message into the same left inlet, and whichever integer you put into the number box, it'll open the subpatcher for that voice

**this is like a vocoder--you can understand his voice played back even with cycle~ and 10 voices!**
**just for fun, throw a spectroscope on the incoming signal and on the re-synthesized signal**

You can also change number of peaks coming out of sigmund~.

target $1 is the third important message into poly~--it tells poly~ which voice to send the frequency to--target 0 sends to every voice, target 1 sends just to voice 1--all the other voices will remain static on their last received frequency

To map multiple peaks into multiple voices:
in sigmund~, make it 2 peaks, and have 2 voices in poly~
add a + 1 to make sure the index number matches the voice number
place the + 1 between the index number (coming out of unpack) and the target message
NOW you need to re-order the values so the target is set before the new data arrives: instead of unpack i f f, make it unpack f f i, with a $2 $3 $1 message box above it--so you get frequency, amplitude, and then index coming out of unpack, and the index goes through the + 1--since outlets fire right-to-left, the target # is changed first, before the new amplitude and frequency come in

now you can feed integers into sigmund~'s npeak setting to determine the number of peaks it finds--NO MORE than 20, and probably a good idea to set limits on the number input for the number of voices, or else it will likely crash (note: today it was crashing on 7 or 8 voices)--if you make one integer box and feed it into both the npeak message and the number of voices, you can change them both at the same time

**poly~ is also set up to receive MIDI notes**

It's meant for replicating, so all the sub patches of the voices will be the same, but you can have them set up to randomly choose various pathways, so each voice will select different pathways.  You could feed a saw~ and a cycle~ each into a *~ object and then into a selector~ 2 object, and it will choose between those--add a third inlet, directly into the selector~ 2, and you can put a numbox in there (0 means nothing gets selected, 1 means the first, 2 means the second UGen).  That way, you can manually select each voice for target $1 and the kind of wave you want coming out, and it will be predetermined.

sigmund~'s tracks mode tries to keep track of which partial is which, instead of putting whatever's loudest in voice 1 and the next loudest in voice 2, etc.

Wednesday, May 13, 2015

SpQ15 Arithmetic & Logical Operators

jit.op help is great place for list of arithmetic operators

make jit.matrix my_matrix 1 float32 2 2
Creates a 2x2 jit.matrix object containing floats.

float32 vs. char (which yields integers in Max)

! means not, generally, in computer programming

wrap is like the floating-point modulo (modulo likes integers)
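A quick Python sketch of the distinction (not Max code): Python's float % behaves like a wrap into [0, divisor), while math.fmod keeps the sign of the dividend, like an integer-style modulo extended to floats.

```python
import math

# Integer modulo vs. floating-point "wrap":
print(7 % 3)                 # 1    -- integer modulo
print(7.25 % 3.0)            # 1.25 -- Python's % also handles floats
print(math.fmod(7.25, 3.0))  # 1.25

# With negatives the two float versions differ:
print(-1.5 % 4.0)            # 2.5  -- wraps into [0, 4)
print(math.fmod(-1.5, 4.0))  # -1.5 -- keeps the sign of the dividend
```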

expr with $i1 (integer input 1) will give you the result of an arithmetic expression

! the not operator gives you a zero unless it receives a zero, in which case it outputs a 1--it's all about zero vs. non-zero

&& is "and"
|| is "or"

Wednesday, April 29, 2015

SpQ15 OSC

OSC (late '90s) is an alternative to MIDI (1983); it hasn't fully caught on yet

Yamaha DX7--one of the first MIDI keyboards--we have one down in the classic lab--FM synthesis based

MIDI is built around 8-bit messages with 7-bit data values, so it can't carry a lot of data or very rich data.  OSC is designed to handle 32-bit numbers.

Here's the OSC specification site:
http://opensoundcontrol.org/spec-1_0

transport-independent message system--the OSC message is the content, but what's the container it travels in?  There are two basic transport protocols, UDP and TCP/IP.  The internet uses those guys.  In the case of the web, the messages are HTTP messages traveling over a TCP/IP connection.  OSC typically uses UDP.  The main distinction between UDP and TCP/IP is that UDP just sends the data and doesn't care whether the other side receives it, whereas TCP/IP waits for acknowledgement that the data was received.  That adds a little overhead/latency to TCP/IP, but it ensures both sides know the message was sent and received.  You can send OSC messages over TCP/IP, or USB, or MIDI, but if you want quicker response you use UDP.
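A rough Python sketch of the fire-and-forget idea, using the stdlib socket module (roughly what udpsend/udpreceive do under the hood; the /faster payload and localhost round trip are just for illustration):

```python
import socket

# A UDP round trip on localhost. UDP is fire-and-forget: sendto() returns
# as soon as the datagram is handed to the OS; there's no handshake or
# acknowledgement (that's what TCP adds, at the cost of some latency).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"/faster", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)                          # b'/faster'
send.close()
recv.close()
```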

In Max, there's an object udpsend and udpreceive that defaults to sending/looking for OSC messages.

MIDI has a message structure and a hardware specification.  OSC doesn't have a hardware specification.

Data types: In memory, zeros and ones could mean anything.  The computer has to know what it's looking at/for.
float32 tells it to look for a 32-bit floating point number--one bit for the sign, 8 bits for the exponent (which says where the binary point goes), and 23 bits for the value
int32 looks for an integer--signed & unsigned (uint32), signed being negative and positive, unsigned being zero on up
char 8-bit, typically unsigned--in Jitter, often used for RGBA, with each component being a char

big-endian is where the largest, most significant digits are on the "left" and the least are on the "right"--Motorola uses this, whereas Intel does little-endian--both ends have to agree whether to interpret bytes as big-endian or little-endian, and the OSC spec requires big-endian
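A Python illustration with the stdlib struct module (the bytes shown are just the IEEE 754 encoding of 1.0, big- vs. little-endian):

```python
import struct

# Pack the float 1.0 as 32-bit IEEE 754, big- and little-endian.
big    = struct.pack(">f", 1.0)   # ">" = big-endian, the OSC/network order
little = struct.pack("<f", 1.0)   # "<" = little-endian (Intel style)

print(big.hex())     # 3f800000 -- most significant byte first
print(little.hex())  # 0000803f -- same bytes, reversed

# Either way, unpacking with the matching byte order recovers the value:
print(struct.unpack(">f", big)[0])  # 1.0
```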

in a string, the null ASCII character lets it know the string is done, literally a value of 0--the null character as typed for OSC is \0

the OSC-blob allows you to package another file, say a TIFF, or a PNG, or a JPG, or any file, you enter the size of the blob in bytes using the data type int32--then the blob is some arbitrary number of bytes, but the OSC-blob message tells you how big the blob is--you can use an OSC message to carry anything, using a blob--he doesn't know if Max supports blobs
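A hedged sketch of blob packing in Python, assuming the layout described above (big-endian int32 byte count, then the data, then zero-padding to the next 4-byte boundary); pack_blob is a made-up helper name, not a real library call:

```python
import struct

def pack_blob(data: bytes) -> bytes:
    """OSC-blob: big-endian int32 byte count, the bytes themselves,
    then zero-padding up to the next multiple of 4 bytes."""
    pad = (-len(data)) % 4
    return struct.pack(">i", len(data)) + data + b"\x00" * pad

blob = pack_blob(b"\x89PNG")      # e.g. the first 4 bytes of a PNG file
print(len(blob))                  # 8: 4 size bytes + 4 data bytes, no pad
print(pack_blob(b"abcde").hex())  # 5 data bytes need 3 bytes of padding
```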

OSC Packet consists of message and size.

OSC Packet can be either a Message or a Bundle.  The first byte tells it which of those it is.

OSC Message consists of: Address Pattern, Type Tag String, 0+ Arguments--Address, Type Tags, Args

Address Pattern and Type Tags are both OSC Strings

Args could be whatever, however many of them

Address identifies what kind of message it is.  For example, in Jordan's program, /faster.  You could have more slashes and create subcategories of messages, too.  In MIDI, there are fixed message types, like "noteon," but in OSC you have to create the message types yourself, and they can be anything.

Type Tags are a comma "," followed by a sequence of characters corresponding exactly to the arguments to follow.  If there are no arguments, you still need a type tag string: ",\0\0\0".  There are four standard possibilities: i => int32, f => float32, s => OSC-string, b => OSC-blob.  There are also less standard possibilities, listed in the specification.
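Putting the pieces together, a minimal Python sketch of encoding an OSC message by hand (osc_string and osc_message are hypothetical helper names; this covers only the i, f, and s tags, and skips bundles and blobs):

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC-string: ASCII bytes plus at least one null, padded to 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((-len(b)) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode address + type tag string + big-endian arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError(a)
        elif isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)
        elif isinstance(a, str):
            tags += "s"; payload += osc_string(a)
        else:
            raise TypeError(a)
    return osc_string(address) + osc_string(tags) + payload

# /faster with one float argument:
msg = osc_message("/faster", 1.5)
print(msg.hex())          # address, then ",f", then 1.5 as big-endian float32

# No arguments still gets a type tag string: "," padded out to 4 bytes.
print(osc_string(","))    # b',\x00\x00\x00'
```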

In Max, it's taken care of automatically.  If you're programming it yourself in Xcode, you have to include it.

Wednesday, April 22, 2015

SpQ15 FFT's & Spectra

FFT relies on groups of samples that are 2^n in size--in Max, the defaults are 512 and 1024, as well as 2048 and 4096 (the FFT window)

For each analysis window, you get a frequency and amplitude for every bin in the spectrum--a graph of amplitude over time (the waveform) becomes a graph of amplitude versus frequency--the Fourier transform is an integral

divide the sample rate (48000, for ex) by the FFT size (1024, for ex)--> 46.875 Hz between points in frequency on the resulting graph--the bin frequencies form a harmonic series on that spacing
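The bin-spacing arithmetic, written out in Python:

```python
# Frequency resolution of an FFT: sample rate / FFT size.
sr, n = 48000, 1024
bin_hz = sr / n
print(bin_hz)             # 46.875 Hz between adjacent bins

# Bin k sits at k * bin_hz -- a harmonic series on the bin spacing:
print([k * bin_hz for k in range(4)])   # [0.0, 46.875, 93.75, 140.625]

# Above half the sample rate (the Nyquist frequency) the spectrum mirrors:
print(sr / 2)             # 24000.0
```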

when you get up to half the sample rate (the Nyquist frequency), you get a mirror image

pfft~ object--needs to load in another patch, so name it: pfft~ my_pfft--then add argument for FFT size: pfft~ my_pfft 1024--next argument is overlap factor, 2 means overlap by 1/2, 4 means overlap by 1/4, etc.--pfft~ my_pfft 1024 2--input and output objects are fftin~ 1 and fftout~ 1--connect first two outlets of fftin to first two inlets of fftout

fftin~ 1 has a far-right outlet that's the bin index; the other two outlets are the real and imaginary parts--they're graphed as x and y--if you take the same Cartesian coordinates and convert to polar, the r (radius) value gives you the amplitude of the sound, and the angle theta gives you the phase of the wave--a key element of phase vocoders--put a cartopol~ object between fftin~ and fftout~ to convert Cartesian to polar coordinates
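The Cartesian-to-polar conversion that cartopol~ performs, sketched in Python (one bin at a time; cartopol is just an illustrative function name):

```python
import math

# (real, imag) -> (amplitude, phase): magnitude is the radius,
# phase is the angle, exactly the r and theta described above.
def cartopol(re: float, im: float):
    return math.hypot(re, im), math.atan2(im, re)

# A bin with real part 3 and imaginary part 4:
amp, phase = cartopol(3.0, 4.0)
print(amp)                  # 5.0 -- the amplitude (loudness) of that bin
print(round(phase, 4))      # 0.9273 radians -- the phase of that partial
```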

you can do inverse FFT to go from frequency domain into time domain, as well--works as a filter

spectral gate: only louder (for ex) components make it through--the maximum amplitude you can get is the window size divided by two or four or something--you just need two objects, a >~ object (which sends a 1 if true and a 0 if false) and a *~ object--that gives you cool effects that only pass frequencies louder than a certain amount--basically produces a sinusoidal result--re-synthesizing based on the frequency spectrum
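A toy Python version of the gate, assuming the greater-than-then-multiply scheme above (spectral_gate is a made-up name; a real pfft~ patch does this per bin on signals, not on Python lists):

```python
def spectral_gate(bins, threshold):
    """Zero every FFT bin whose magnitude is below the threshold.
    Same trick as the patch: the comparison yields 1 or 0, and
    multiplying by it keeps or kills the bin."""
    return [b if abs(b) > threshold else 0j for b in bins]

# Toy spectrum: a loud bin, a quiet bin, a medium bin.
bins = [10 + 0j, 0.5 + 0.5j, 3 - 4j]
print(spectral_gate(bins, 2.0))   # [(10+0j), 0j, (3-4j)]
```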

ChucK questions 2

What's up with the word "now" and the component of a shred where you have to advance time?  How does ChucK conceive of time passing?  How do you run shreds in parallel with the miniAudicle?  Still command-line thing?  What is meant by suspending a current shred to let time pass?

How to get ChucK to reference outside itself?  That is, read audio files, write to audio files, and also produce MIDI mappable into Live or Max, and also recognize/interact with a MIDI foot controller?

How to create a random note order?  Where to type the Math.random2f call?

How to chuck an LFO to blackhole properly so it modulates frequency and/or amplitude?

Monday, April 20, 2015

ChucK Beginnings

Question: How to run multiple shreds in parallel through miniAudicle?

Answer:

Time units: ms, second, minute, hour, day, week, samp(le)

Oscillators: SinOsc, SawOsc, SqrOsc, PulseOsc

Standard effects: gain, PRCRev (reverb)

Can have ChucK read a value and then chuck it back to itself.  Add parentheses after the name of the parameter, like s1.freq()

Question: How to read a sound file in ChucK?

Answer:  WvIn reads a file (can change rate), WaveLoop loops a file (change # of loops/sec, phase offset), WvOut writes samples to an audio file.

Question: How to record an entire ChucK session?

Wednesday, April 8, 2015

SpQ15 Project Update 3

I would like to learn to amplitude modulate clips, reverse clips, and speed up/slow down during performance, with sound that's recorded live, without the performer touching anything.  Obstacles to this: Ableton won't MIDI/keyboard map these things because they take time to process.  The lag time isn't so much a problem for me, because I wouldn't necessarily want them playing back as soon as they're recorded, but sometime later.  It could be I can record the clips directly into tracks that already have the speed and reverse set, but there's no obvious way to pre-set those things, since the sample box only appears after something's been recorded into the clip.

Possibilities include: 1) writing Max patchers that reverse, change speed automatically, and force samples to amplitude modulate, 2) figuring out how to control automation live for the envelope part, although this sounds just about as complicated as the previous option, maybe more.

To write these Max patchers: record clip in Live, transfer audio into buffer in Max, play audio back out in processed form to a NEW clip or track in Ableton to create a new audio file--this must be a pretty common thing to do, right?

As for amplitude modulation, I really need to explore exactly what that means, sounds like, go back to Chris' Max examples and see how he does it.