Friday, February 27, 2015

Final project update #1

I'll use Live for the looping portion of the piece.  Three loops will be recorded live, automatically, and played back with effects.  Each section of the piece will start with recording a loop, so the same trigger that starts recording a loop will also move the piece on to the next scene, with appropriate effects added to that scene.

Things to do in Max:
Figure out what objects can be used to control specific functions/buttons in Live
Set up triggers for each of 3 loops/scenes that interact with Live
Set up how long each loop will record and play back; the first loop will stop at a predetermined time, probably when the trigger for the second loop happens, but the second and third loops will continue through most of the rest of the piece
Set ending time for loops

Things to do in Live:
Set scenes corresponding to sections of the piece
Add effects to each loop section
Add effects to each overall scene

Thursday, February 26, 2015

Class notes 2/26: Panning in stereo, languages that interface with Max

Sound travels at roughly 1 ms per foot.  Since the ears are less than a foot apart, the time difference from one ear to the other is less than a millisecond, but we can hear that difference.

Head-related transfer functions (HRTFs): head-related filtering--sound has slightly different frequency content depending on the location it's coming from.  Useful for headphones.

A straight linear crossfade makes the sound seem farther away when it's right in the middle--you have to compensate for the inverse square law.  There's a perceived 3 dB drop in the middle, because there's a 6 dB amplitude drop in the middle.  To account for that, take the square root of the left and right speaker values, and you get a more natural sound.  A more efficient way is to look up the value in a quarter cycle of a cosine wave and assign that to the L channel, and the value from a sine wave to the R channel.  That way, 0-->1 pans from L to R.  You can use the phase offset inlet (right inlet) of the cycle~ object to achieve that.  If you make a panning dial, be sure to use a line~ or rampsmooth~ object to interpolate between dial settings, or else you might get a bunch of clicks.
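Here's a minimal sketch of that equal-power idea as it might look in a Max js object (my own illustration, not a patch from class):

    // Equal-power pan: pan runs 0..1 (0 = hard left, 1 = hard right).
    outlets = 2;

    function msg_float(pan) {
        var theta = pan * Math.PI / 2;   // a quarter cycle
        var left  = Math.cos(theta);     // 1 -> 0 as pan goes 0 -> 1
        var right = Math.sin(theta);     // 0 -> 1 as pan goes 0 -> 1
        // left^2 + right^2 = 1, so total power stays constant across the pan
        outlet(1, right);
        outlet(0, left);
    }

Multiplying each channel's signal by its gain (e.g. with *~ objects) then gives the pan without the 3 dB dip in the middle.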

In real life, panning should result in a Doppler effect.

Within Max, there are a couple of ways to use text-based programming.  (Max itself is written in C.)

js object--embeds JavaScript: it's an object in the patch, but internally you write the code as text.

gen~ object--there's a language called Gen that lets you manipulate every single pixel of a video signal or every single sample of an audio signal

mgraphics--a way to use JavaScript to control graphics

Processing--a Java-like interface for doing 2D graphics

Next week:  Jitter & 3D animation, an introduction

Tuesday, February 24, 2015

Class notes 2/24: re-synthesizing acoustic sounds, forward object, buffer~, groove~ and play~, things to think about when writing a Max piece

peakamp~ object--can send continuous amplitude information when banged with a metro, or report the peak amplitude for some segment of time

sigmund~ object can also send out continuous amplitude information

sigmund~ object can also send out continuous frequency information, not just pitch information--you can have, for example, a sine wave recreate that frequency shape--put it into a buffer, graph it--this is a non-real-time process, although you could do the same thing in real time with an ezadc~ object

"stealing the expressivity of the player"

jit.poke~ object--takes value of a signal and puts it into a matrix where it can be displayed

sfplay~ object-->sigmund~-->buffer~ objects off to the side with record~ object for each

can then use those curves to control synthesized sound/re-synthesize sound--allows you to change pitch or speed independently

When you write a Max piece:
Make list of things that need to be checked before a piece starts
Make version of score that lists the triggering actions/notes to remind yourself

He's made an object called ducker~ that tells the computer to ignore input below a certain threshold.  It looks really handy; see if it's in the examples section.

Scheduler in Audio Interrupt--runs any scheduled events in sync with the audio scheduler
CPU meter--can tell you how much of the processing power you're using
Timer--can run a timer that counts up, so you can see how long the piece has been going on
Check Audio--opens Audio Status window
Can create a subpatch that determines the threshold automatically, using the peakamp~ object.
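A sketch of that auto-threshold idea in a Max js object (the names and the 1.5 safety factor are my own guesses, not from class; in the patch, peakamp~ would supply the amplitude values):

    // Gate input below an automatically measured threshold (like ducker~).
    outlets = 1;
    var threshold = 0;

    function calibrate(ambientPeak) {
        // Called once with the room's measured peak; sit a bit above it.
        threshold = ambientPeak * 1.5;
    }

    function msg_float(amp) {
        if (amp >= threshold) {
            outlet(0, amp);   // loud enough to be the player: pass it on
        }                     // otherwise ignore it as room/ensemble noise
    }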

change object--filters out repetitions of the same thing

forward object--sends what comes in to the appropriate objects within the patch--looks like a send object, but it doesn't have to know in advance where to send things--like a send object where you can change the destination name on the fly

Check out Chris' *main* patch, which demonstrates the forward object and forwarding.

If you're thinking about how to make a Max structure for a piece, think about how a fellow player or improviser would react to what you play.

buffer~ object--makes a space in RAM, whereas the sfplay~ object creates a small "personal" buffer but reads from the hard drive--because it's Random Access Memory, you can jump to any part of it at any moment--a whole bunch of objects can access the same buffer--see his objectsthataccessbuffer~ example

**once you create a buffer with a name, all the objects pertaining to that buffer should contain the buffer name

replace message--resizes buffer and puts new thing in it--when it's done loading a file, it sends a bang out its right outlet

info~ object--gets information from buffer~ object, like channel and duration information

read message--like replace, loads something into the buffer, but doesn't set size of buffer--if you use this, you need to have set the size of the buffer already, and it'll play through that amount of time regardless of how long the sound file is

cycle~ object--reads cyclically through 512 samples of the sound and treats them like a waveform--any 512 samples you choose--and you can tell it to do that however many times per second, so it creates some kind of tone out of that
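That wavetable idea, sketched in Max js style (illustrative only--cycle~ does this per-sample in MSP, and the names here are mine):

    // Read a 512-sample table cyclically, freq times per second.
    function makeOsc(table, freq, sr) {
        var phase = 0;
        var inc = freq * table.length / sr;  // table samples to advance per output sample
        return function () {
            var out = table[Math.floor(phase)];
            phase = (phase + inc) % table.length;  // wrap around: read cyclically
            return out;
        };
    }
    // e.g. a 440 Hz tone from any 512 samples of the sound:
    // var osc = makeOsc(slice512, 440, 44100);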

index~ object--looks up what's in the buffer at any given moment, esp. when paired with count~

play~ object--plays the sound in the buffer--can give it arguments with starting and stopping points in ms--can also get it to scroll through the buffer--can send start and stop messages, or probably pause and resume messages (?).  Some example messages:
"start 0 420"--start at the beginning and read through 0.42 seconds
"start 420 0"--play that part of the file backwards
"start 1000 2000 1200"--read from 1 second to 2 seconds, but do it in 1.2 seconds
Can also use a line~ object to move through the buffer in certain ways, sending a 3-part message into it--"0, 1000 2000" for example.  Or feed a cycle~ into it to set a speed for moving through the buffer, usually at a really low frequency like 0.025--it acts like a sinusoidal scrubber moving forward and backward through the buffer, sort of like scratching a record back and forth.  Can add number boxes for depth (how much of the sample to scrub), rate (how fast), and center (the point around which it scrubs).

Yes, this can work for video too, but you have to process the video in parallel and just send it the same information

record~ object--allows you to record into buffer

poke~ object

peak~ object

wave~ object

groove~ object--you can set loop points within the sound, change the speed, and set the starting point in the sound--very similar to play~, but defines things differently--in order to play, it needs an MSP signal that tells it the rate at which to play (sig~) and a startloop message--can continuously change the rate at which it plays, with positive and negative values--can also set loop points within groove~: you have to send a "loop 1" message into groove~ to activate the loop features--the right outlet sends out numbers 0-1 as it reads through the loop--you could send those into a buffer containing a trapezoid to make a window that fades each loop in and out, to avoid clicks--can also have it loop to a specific moment by sending in a numerical message--can also send it "set soundA" messages to change the sound file it's reading through

sig~ object--converts a number into a constant signal; use it to send the playback rate into groove~

Thursday, February 19, 2015

Class notes 2/19: dB, scaling objects, transfer functions, buffer~

Linear mapping: can think of a range as having a minimum and a size (100-1,000 has a min of 100 and a size of 1,000-100=900)
to map into another range, find the multiplying factor: if you were mapping the above range into a range of size 60, divide 60 by 900 to get the multiplier--then add the output range's minimum back in

scale object--type scale 0 127 0. 1.--that's the minimum and maximum of the input range, then the minimum and maximum of the output range, and it does all the math for you
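The math the scale object does is just the linear mapping above; here's the same formula sketched in a Max js object (the function name is mine):

    // Linear range mapping, as scale computes it:
    // mapRange(x, 0, 127, 0., 1.)  ~  "scale 0 127 0. 1."
    function mapRange(x, inMin, inMax, outMin, outMax) {
        var factor = (outMax - outMin) / (inMax - inMin);  // output size / input size
        return outMin + (x - inMin) * factor;              // shift, scale, shift back
    }
    // mapRange(160, 100, 1000, 0, 60) -> 4, using the 60/900 multiplier above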

can feed the scale object into a *~ object to scale the volume of an audio signal--with a plain dial, it gets really loud in the first half of the turn but doesn't change much in the second half, because it's scaled linearly, not logarithmically

dbtoa object--converts dB to amplitude

atodb object--vice versa

can also use dbtoa~ and atodb~ to convert every sample of a signal, if for some reason you need that
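The conversions themselves are one-liners; a sketch of what dbtoa and atodb compute (written for a Max js object, which is ES5, hence the log trick):

    // dB <-> linear amplitude, relative to 0 dB = full scale.
    function dbtoa(db)  { return Math.pow(10, db / 20); }
    function atodb(amp) { return 20 * Math.log(amp) / Math.LN10; }
    // dbtoa(0)   -> 1.0     (the loudest the computer can output)
    // dbtoa(-6)  -> ~0.5    (half amplitude)
    // atodb(0.5) -> ~-6.02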

for computers, the reference point for loudness is the loudest thing you'd want to hear, not the quietest--usually 0 dB is the loudest thing your computer can output, so we talk about negative dB--if you set up a dial with a range of 120, subtract 120, and send that through dbtoa, the dial tells you how many dB you're at--but you can hardly hear it once it's down to about -70 dB in current listening conditions

a good quality CD player will have about an 85 dB range, and then room noise affects it, too

practically speaking, a 70 dB range is good

live.gain~ object--gives you a -69 dB range, plus a little above 0--a useful slider--also shows input monitoring--also has a small linear interpolation time; you can choose the ramp time in ms, to avoid clicks

mtof object--converts MIDI pitch to frequency

ftom object--converts frequency to MIDI pitch; give ftom a floating point argument (0.) to see the fraction of a pitch you'd get with that frequency
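A sketch of the formulas behind mtof and ftom (A4 = MIDI 69 = 440 Hz), again in Max js style:

    // MIDI pitch <-> frequency in Hz.
    function mtof(m) { return 440 * Math.pow(2, (m - 69) / 12); }
    function ftom(f) { return 69 + 12 * Math.log(f / 440) / Math.LN2; }
    // mtof(60)   -> ~261.63 Hz (middle C)
    // mtof(57.3) -> ~223.85 Hz -- fractional pitches are fine for synthesis
    // ftom(261.63) -> ~60.0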

If you want an exponential curve rather than a plain line~ ramp to change a sound (so it sounds less unnatural): for loudness, add a live.gain~ object and use a line~ object to control its slider--since live.gain~ works in dB, the ramp is linear in decibels rather than in raw amplitude.  Put a message into line~: initial value, final value, time to get there.  For frequency, you can throw an mtof~ object between line~ and the oscillator, which makes the change linear in terms of pitch and exponential in terms of frequency.

There is no dedicated log object in Max.

*If you want to synthesize sound in Max, YES, it does recognize "57.3" as a pitch.  It only has to be a whole number if it's for MIDI specifically.

function object--a GUI object that lets you draw point-and-line diagrams describing a shape over time--when you bang it, the 2nd outlet sends information to a line~ object, which sends out a signal corresponding to the shape you've drawn--can specify the time scale and amplitude--another way to use function: send a number in and it sends the corresponding number out, so as input goes from 0-1,000, output goes from 0-1--this corresponds to the domain as set in the inspector--shift-click to eliminate a point--can't manipulate the function object in an unlocked state--it takes an input number (in time), looks up the corresponding point on the graph, and outputs the result--so a straight line up to 1 gives you exactly the same thing out as you put in--has a "Curve" mode, see the inspector--can option-drag on a line to turn it into a curve in Curve mode

pow object--if you type pow 3, it raises its input to the power of 3

note: jit.movie can read still images as well as movies

jit.brcosa--jitter brightness, contrast, and saturation--can send it brightness messages to change its brightness

Do we perceive brightness linearly or exponentially?  From this example, it seems a little bit exponential--may need to make the fade exponential going up and logarithmic coming down--maybe?--fading brightness by a power of 2 seems a little better than a power of 3
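A sketch of that power-curve fade (the exponents are just the in-class guesses, and the function names are mine):

    // Map a linear fade position 0..1 through a power curve before sending
    // it to jit.brcosa as a "brightness" message.
    function fadeUp(x)   { return Math.pow(x, 2); }    // slow start, fast finish
    function fadeDown(x) { return Math.pow(x, 0.5); }  // the inverse, for fading out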

buffer~ object--give it a name, so "buffer~ MySound"--can send in a "read" or "replace" message to fill the buffer--creates a space in memory to stash the sound--kind of like sfplay~, but it grabs the whole sound file and puts it in RAM, and you don't have to read through it linearly--very handy

Tuesday, February 17, 2015

Class notes 2/17: pitch tracking

Pitch tracking techniques

Auto-correlation: if the waveform is basically repeating, the computer can detect when the differences between current samples and past samples are minimal (i.e., when it's basically repeating), figure out the period of the repeating sound, and use that period to calculate a frequency--not too much worry about time delay--probably how guitar tuners work--not necessarily fast enough for performance
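A toy version of that idea, sketched as it might look in a Max js object (illustrative only--fiddle~ and sigmund~ are far more sophisticated, and the names here are mine):

    // Estimate pitch by finding the lag at which the waveform best matches itself.
    function estimatePitch(buf, sr, minHz, maxHz) {
        var bestLag = 0, bestScore = -Infinity;
        var minLag = Math.floor(sr / maxHz);
        var maxLag = Math.floor(sr / minHz);
        for (var lag = minLag; lag <= maxLag; lag++) {
            var score = 0;
            for (var i = 0; i + lag < buf.length; i++) {
                score += buf[i] * buf[i + lag];  // big when the signal repeats at this lag
            }
            if (score > bestScore) { bestScore = score; bestLag = lag; }
        }
        return sr / bestLag;  // period in samples -> frequency in Hz
    }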

Spectral analysis: fiddle~ and sigmund~ use this; finds harmonic relationships between peaks in frequency spectrum, uses that to determine fundamental frequency

Sigmund is best at outputting note and volume information, with arguments "notes" and "env."  What you do with that information is up to you.

It misses notes if you play fast, and it doesn't like multiple pitches at once.

If there's a danger of it picking up its own sound or an ensemble member's sound, Chris measures threshold (sound of audience, other players), and sets it so that sigmund~ only pays attention to sound above that threshold.  It's also a good idea to set sigmund to ignore sounds that are below the natural range of your instrument.

peakamp~ object: reports loudest amplitude in amount of time you set--can set it so that if peak amplitude is below a certain level, ignore it

*on Thursday, stay tuned for talking about transfer functions for mapping domains onto a third, "bridging" domain

Monday, February 16, 2015

Final Project, more concrete ideas & timeline

For my final project for the quarter, I'll write a piece for solo saxophone and Max for Live.  I'll use pitch tracking in Max to trigger certain events, and I'll use Ableton's looping and recording capabilities to capture and play back material from earlier in the piece.  Ideally, I'll figure out how to do this automatically, without having to hit buttons during performance.  If all else fails, I can still hit keys on the laptop, or borrow a foot pedal.

Tech-wise, I'll need my computer (as long as it's fast enough), a microphone, and an amplifier or PA system.  I'd prefer an amp, since I could put it behind me and be able to hear my own music.

Knowledge-wise, I need to get familiar with sigmund~ and/or other pitch tracking strategies in Max.  Since I just got Live, I'll have to dig into its recording, playback, and looping capabilities.  Once I'm more familiar with these features, I'll have a better idea of how to compose the piece.  I'd like it to have more and less rhythmically driven sections, so I can get an idea of how to work in both of those realms.

Here's a vague timeline:

Week 7: test patches with sigmund~ etc.
test Live's looping capabilities
start composition

Week 8: complete draft of the piece

Week 9: testing and revision

Week 10: piece ready to perform!  (??!)

Thursday, February 12, 2015

Class notes 2/12: critique of movie patches, animation basics

*look at t i b i b / the trigger object--what is it? (?)

To sync audio with a movie, trigger "open audio file" and play the movie at the same time.

It's more computationally intensive for Max to draw video in jit.pwindow than if it's in a separate window, especially if it's being resized.

When you use jit.movie, use autostart attribute, set it to zero.  Type jit.movie @autostart 0.

QuickTime has its own time units, hence the complicated way of getting the current time in a movie, per the example on the website.

qmetro object--Chris recommends for video, puts them on low priority queue--we notice dropped frames less than we notice audio dropouts

*read article about timing Chris posted

jit.rota object--allows you to rotate video--anchor points specify point around which entire movie rotates--so you might accidentally flip it off the screen

Mapping dimensions of one kind of art onto another is a whole interesting area--for example, Anthony's interest in correlating lights (color, brightness) with musical harmony (pitches, their relationships).  Keep in mind that linear mapping isn't how people perceive things--we tend to perceive things logarithmically.

jit.lcd object--a place where you can draw stuff; creates a matrix and understands certain messages about what to draw--type jit.lcd 4 char 320 240 (4 planes means RGB plus alpha)--send it to a jit.window to show whatever gets sent out of the LCD object--into that, loadbang the font, background color (brgb), and foreground color (frgb) to be all 255, which ends up white--send a "moveto" message, prepend "write," and send it to the LCD

read a coll into the LCD, which can tell the LCD a series of things to do

can add LCD to movie with a jit.plus or jit.+ object, and it will throw titles on there

paintoval message--follow it with coordinates you'd like oval to be bound with

Tuesday, February 10, 2015

Class notes 2/10: latency, crossfading

latency: delay, usually from the conversion between voltages and numbers and vice versa; it's done by filling a buffer before performing the operation; I/O time is ~3 ms in the best case
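That ~3 ms figure is consistent with small I/O buffers; rough arithmetic, assuming a 128-sample vector at 44.1 kHz (my assumed numbers, not from class):

    // One buffer must fill before conversion, so minimum one-way latency is:
    var bufferLatencyMs = 128 / 44100 * 1000;  // ~2.9 ms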

crossfading: separate line~ objects fading one sound out as other fades in; can have one signal controlling both amplitudes, check out Chris' mix~

matrix~ object--creates an object with a certain number of inputs & outputs--like a little multichannel mixer

jit.xfade object--if attribute is 0, only left video; if attribute is 1, only right video; if 0.5, equal mix of both videos--can crossfade between videos

Monday, February 9, 2015

Final Project Ideas

Things I definitely want to learn to do in Max, eventually:

-analyze audio input for pitch information and have Max respond based on that: people have made objects you can download, like sigmund~  **also consider amplitude tracking**

-automatically change parameters of sound based on darkness/lightness (alpha values) or colors of movie

-improvise live with a sampler (preloaded samples, effects)  **I'm told UCI doesn't have a sampler to be checked out--also, if there were one, I wouldn't be the only person wanting it--I'd have to go ahead and get a SoftStep**  **go to AMC for MIDI controllers w/ appropriate software**  **look into other MIDI pedals/pedal boards**

-design my own loop station for live performance, with a few extra features (pitch shift, time shift)

-solo piece for voice and Ableton/Max

-solo sample piece using only sounds from saxophone



I'll have to see if I can buy Ableton before the quarter ends.  If I buy it soon, I'll look into possibilities for customizing a loop setup, and I can perform with voice and/or saxophone that way.  If I'm unable to buy Ableton soon, I'll work on having Max analyze pitch input.  That would be valuable for future projects.

Problems with Patcher 5

I managed to get my movie to play, and to control it the way I want.  I tried to set up timepoint objects to trigger a series of sounds as the suitcases fall off the conveyor belt, but for some reason, although Max can load the sound files, it doesn't play them.  It definitely doesn't play them in time, as I've indicated with the timepoint objects, but it doesn't seem to play any of them, period, except the first one I loaded, which is the background sound.  I hope I can figure it out tomorrow.

*I'm told I need a transport object to get time point objects to work.  That explains part of it.

*I have to figure out polyphony w/ audio files

Tuesday, February 3, 2015

Class notes 2/3: Aiyun Huang concert

patcherargs--put it in a subpatch and append it with arguments, so the patcher knows how many arguments to expect; when the patch loads, the arguments come out as a list; this allows you to use the file somewhere else with reconfigurable arguments

***connecting gesture to sound in electronic music***--electronic music can be disembodied
***charisma factor: human vs. computer***
***scale factor: human sounds can't get as loud as a speaker***
The performance by Aiyun Huang challenges the instruments-plus-electronics model of electro-acoustic performance.

How do you write a whole piece in Max?
He has 3 strategies:
-standard, formal, sectional structure (using a patch for each one, maybe)
-going through a set of processors (like pedals) all the time, can turn them on and off--serial technique
-parallel technique--everything happening all at once, windowing only some of those things

_______
VISUALS: you need four numbers to express the color of a pixel: red, green, blue, and alpha (transparency/opacity)

37 million numbers per second of video at low, standard-TV resolution
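Rough check of that figure, assuming 640x480 at ~30 frames per second (my assumed numbers for "standard-TV resolution"):

    // width * height * 4 planes (RGB + alpha) * frames per second
    var numbersPerSecond = 640 * 480 * 4 * 30;  // 36,864,000 -- about 37 million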