Wednesday, May 20, 2015

SpQ15 Max: resynthesizing w/ poly~

sigmund~ peaks @npeak 1
unpack i f f    (gives you index number, frequency, amplitude of that peak)
cycle~    (resynthesizes as sine tone)
*~  (adjusts gain)  (multiplies the cycle~ signal by the amplitude float from unpack)
can add a tapin~/tapout~ pair, with an argument of the delay time in ms, before the dac~
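A hedged sketch of what that little chain computes per sample--one oscillator at the peak's frequency, scaled by the peak's amplitude (the 44.1 kHz sample rate and the function name are my own assumptions):

const sampleRate = 44100;                            // assumed
let phase = 0;
function resynthSample(freq, amp) {
    const out = amp * Math.cos(2 * Math.PI * phase); // cycle~ (a cosine), then *~
    phase += freq / sampleRate;                      // advance the oscillator's phase
    if (phase >= 1) phase -= 1;                      // keep phase in 0..1
    return out;
}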

then, change to 10 peaks instead of 1 in sigmund~
cycle~ object switches among the 10 peaks really fast, sounds kind of swarmy

ENTER poly patcher

MAKE SUBPATCHER:
in 1 AND in 2
2 flonum boxes, one into cycle~ and one directly into R inlet of *~
cycle~
*~
out~ 1

Inlets are frequency and amplitude information, out~ gives you a signal.

IN ORIGINAL PATCH:
poly~ Poly    (poly~'s argument is the subpatcher's filename--here the subpatcher is saved as "Poly")

Insert this where cycle~ and *~ were before.  Now you have a nice 1-voice poly.

How do you do this with 10 peaks/voices rather than just one?

send a message into poly~'s left inlet that says voices $1--attach a number box and you can input however many voices you want

you can put open $1 message into same left inlet, and whichever integer you put into the number box, it'll open the sub patcher for that voice

**this is like a vocoder--you can understand his voice repeated back even with cycle~ and 10 voices!**
**just for fun, throw a spectroscope on the incoming signal and on the re-synthesized signal**

You can also change number of peaks coming out of sigmund~.

target $1 message is the third important message into poly~--tells poly~ to send frequency to whichever voice--target 0 sends to every voice, target 1 sends just to voice 1--all the other voices will remain static on their last perceived frequency

To map multiple peaks into multiple voices:
in sigmund~, make it 2 peaks, and have 2 voices in poly~
make + 1, to make sure the index number matches with the voice number
place + 1 between index number (coming out of unpack) and target
NOW you need to re-order the patch cords to make sure peak 1 is mapped to voice 1--instead of unpack i f f, make it unpack f f i, with a message above it that's $2 $3 $1--so you get frequency, amplitude, and then index coming out of unpack--and since outlets fire right-to-left, you can run the index number through the + 1 so the target # gets changed first, before the new amplitude and frequency are input
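The same reordering idea, sketched as a Max js script (the inlet/outlet layout is my own choice; arrayfromargs, outlet, and a function named list are the js object's standard hooks):

inlets = 1;
outlets = 3;                            // 0: frequency, 1: amplitude, 2: target message

function list() {
    var a = arrayfromargs(arguments);   // [index, freq, amp] from sigmund~
    var index = a[0], freq = a[1], amp = a[2];
    outlet(2, "target", index + 1);     // re-target first (+ 1 so peak 1 -> voice 1)
    outlet(1, amp);                     // then amplitude
    outlet(0, freq);                    // frequency last (right-to-left, Max style)
}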

now, you can feed integers into sigmund~'s npeak setting to determine the number of peaks it finds--NO MORE than 20, and it's probably a good idea to set limits on the number input for the number of voices, or else it will likely crash (note: today it was crashing at 7 or 8 voices)--if you make one integer box and feed it into both the npeak setting and the number of voices, you can change them both at the same time

**poly~ is also set up to receive MIDI notes**

It's meant for replicating, so all the sub patches of the voices will be the same, but you can have them set up to randomly choose various pathways, so each voice will select different pathways.  You could feed a saw~ and a cycle~ each into a *~ object and then into a selector~ 2 object, and it will choose between them--add a third inlet, directly into the selector~ 2, and you can put a numbox in there (0 means nothing gets selected, 1 means the first, 2 means the second UGen).  That way, you can manually select, for each voice (via target $1), the kind of wave you want coming out, and it will be predetermined.

The tracks argument to sigmund~ tries to keep track of which partial is which, instead of putting whatever's loudest in voice 1 and the next loudest in voice 2, etc.

Wednesday, May 13, 2015

SpQ15 Arithmetic & Logical Operators

jit.op help is great place for list of arithmetic operators

make jit.matrix my_matrix 1 float32 2 2
Creates a 2x2, one-plane jit.matrix object containing floats.

float32 vs. char (which yields integers in Max)

! means not, generally, in computer programming

wrap is like the floating-point modulo (modulo likes integers)

expr $i1 will give you result of arithmetic calculation

! not operator gives you a zero unless it receives a zero, in which case it outputs a 1--it's about zero vs. non-zero

&& is "and"
|| is "or"
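Sketches of those operators in JS terms (the wrap helper is just my illustration of wrapping into a range, not jit.op's exact interface):

var not = function (x) { return x === 0 ? 1 : 0; };               // !  zero vs. non-zero
var and = function (a, b) { return a !== 0 && b !== 0 ? 1 : 0; }; // &&
var or  = function (a, b) { return a !== 0 || b !== 0 ? 1 : 0; }; // ||
function wrap(x, lo, hi) {                   // floating-point modulo into [lo, hi)
    var range = hi - lo;
    return lo + ((((x - lo) % range) + range) % range);  // handles negatives too
}
// wrap(2.5, 0, 1) -> 0.5;  wrap(-0.25, 0, 1) -> 0.75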

Wednesday, April 29, 2015

SpQ15 OSC

OSC (late '90s) is an alternative to MIDI (1983); it hasn't caught on super quickly yet

Yamaha DX7--the first MIDI keyboard--we have one down in the classic lab--FM synthesis based

MIDI is designed around 8-bit bytes (with only 7 bits of data per value), so it doesn't carry a lot of data or very rich data.  OSC is designed to handle 32-bit numbers.

Here's the OSC specification site:
http://opensoundcontrol.org/spec-1_0

transport-independent message system--the OSC message is the content, but what's the container it travels in?  There are two basic transport protocols, UDP and TCP/IP.  The internet uses those guys.  In the case of the internet, the messages are HTTP messages traveling over a TCP/IP connection.  OSC typically uses UDP.  The main distinction between UDP and TCP/IP is that UDP just sends the data and doesn't care whether the other side receives it, whereas TCP/IP waits for acknowledgement that the data was received.  There's a little bit of overhead/latency with TCP/IP, but it ensures both sides know the message was sent and received.  You can send OSC messages over TCP/IP, or USB, or MIDI, but if you want quicker response you can use UDP.

In Max, there are objects udpsend and udpreceive that default to sending/looking for OSC messages.

MIDI has a message structure and a hardware specification.  OSC doesn't have a hardware specification.

Data types: In memory, zeros and ones could mean anything.  The computer has to know what it's looking at/for.
float32 tells it to look for a floating-point number--one bit is the sign, the next eight bits are the exponent (where the binary point goes), and the remaining 23 bits hold the value (the significand)
int32 looks for an integer--signed (int32) & unsigned (uint32), signed covering negative and positive, unsigned covering zero on up
char 8-bit, typically unsigned--in Jitter, often used for RGBA, with each component being a char

big-endian is where the largest, most significant digits are on the "left" and the least significant are on the "right"--Motorola uses this, whereas Intel does little-endian--a receiver has to know which way to interpret the bytes; OSC specifies big-endian

in a string, the null ASCII character lets it know the string is done--literally a value of 0--the null character as typed for OSC is \0

the OSC-blob allows you to package another file--say a TIFF, or a PNG, or a JPG, or any file--you enter the size of the blob in bytes using the data type int32--then the blob is some arbitrary number of bytes, and the size field tells you how big it is--you can use an OSC message to carry anything, using a blob--he doesn't know if Max supports blobs

An OSC Packet consists of its contents and a size.

OSC Packet can be either a Message or a Bundle.  The first byte tells it which of those it is.

OSC Message consists of: Address Pattern, Type Tag String, 0+ Arguments--Address, Type Tags, Args

Address Pattern and Type Tags are both OSC Strings

Args could be whatever, however many of them

Address identifies what kind of message it is.  For example, in Jordan's program, /faster.  You could have more slashes and create subcategories of messages, too.  In MIDI, there are fixed message types, like "noteon," but in OSC you have to create the message types yourself, and they can be anything.

Type Tags are a comma "," followed by a sequence of characters corresponding exactly to the arguments that follow.  If there are no arguments, you still need a type tag string: ",\0\0\0".  There are four standard possibilities: i => int32, f => float32, s => OSC-string, b => OSC-blob.  There are also less-standard possibilities, listed in the specification.

In Max, it's taken care of automatically.  If you're programming it yourself in Xcode, you have to include it.
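Here's a minimal Node.js sketch of hand-building one of these messages per the layout above--address, type tags, then big-endian arguments, with each string null-terminated and padded to a 4-byte boundary (the /faster address is borrowed from Jordan's example above; the port and host are made-up values):

const dgram = require('dgram');

function oscString(s) {
    const padded = Math.ceil((s.length + 1) / 4) * 4;  // at least one \0, pad to 4 bytes
    const buf = Buffer.alloc(padded);                  // zero-filled, so padding is nulls
    buf.write(s, 0, 'ascii');
    return buf;
}

function oscMessage(address, floats) {
    const typeTags = ',' + floats.map(() => 'f').join('');  // e.g. ",f"
    const args = Buffer.alloc(4 * floats.length);
    floats.forEach((v, i) => args.writeFloatBE(v, 4 * i));  // big-endian float32
    return Buffer.concat([oscString(address), oscString(typeTags), args]);
}

const sock = dgram.createSocket('udp4');                    // UDP, like udpsend
sock.send(oscMessage('/faster', [1.5]), 7400, '127.0.0.1', () => sock.close());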

Wednesday, April 22, 2015

SpQ15 FFT's & Spectra

FFT relies on groups of samples that are 2^n in size--in Max, the defaults are 512 and 1024, as well as 2048 and 4096 (the FFT window)

For each data point/bin, you'll get a frequency and an amplitude in the spectrum--a graph of amplitude over time (the waveform) becomes a graph of amplitude versus frequency--the Fourier transform is essentially an integral

divide the sample rate (48000, for ex) by the FFT size (1024, for ex) --> 46.875 Hz between points in frequency on the resulting graph--a harmonic series on the bin frequency

when you get up to half the sample rate, you get a mirror image
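The bin arithmetic above, worked in code (values assumed: 48 kHz sample rate, 1024-point FFT):

const sampleRate = 48000;
const fftSize = 1024;
const binSpacing = sampleRate / fftSize;   // 46.875 Hz between bins
for (let k = 0; k < 4; k++) {
    console.log(k * binSpacing);           // 0, 46.875, 93.75, 140.625, ...
}
// bins above fftSize / 2 (half the sample rate) mirror the ones below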

pfft~ object--needs to load in another patch, so name it: pfft~ my_pfft--then add argument for FFT size: pfft~ my_pfft 1024--next argument is overlap factor, 2 means overlap by 1/2, 4 means overlap by 1/4, etc.--pfft~ my_pfft 1024 2--input and output objects are fftin~ 1 and fftout~ 1--connect first two outlets of fftin to first two inlets of fftout

fftin~ 1 has a far-right outlet that's the bin index; the other two outlets are the real and imaginary parts--they're graphed as x and y--if you take the same Cartesian coordinates and convert to polar, the r (radius) value gives you the amplitude of the sound, and the angle theta gives you the phase of the wave--a key element of phase vocoders--put a cartopol~ object between them to convert Cartesian to polar coordinates

you can do inverse FFT to go from frequency domain into time domain, as well--works as a filter

spectral gate: only louder (for ex) components make it through--the maximum amplitude you can get is the window size divided by two or four or something--you just need two objects, a >~ object (which sends a 1 if true and a 0 if false) and a multiply object--that gives you cool effects that only pass frequencies louder than a certain amount--basically produces a sinusoidal result--re-synthesizing based on the frequency spectrum
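A sketch of that gate in plain code--per bin, (amp > threshold) gives 1 or 0, and multiplying by it passes only the loud bins (the array of bin amplitudes is an assumed input, not pfft~'s actual interface):

function spectralGate(binAmps, threshold) {
    return binAmps.map(amp => amp * (amp > threshold ? 1 : 0)); // >~ then *~, per bin
}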

ChucK questions 2

What's up with the word "now" and the component of a shred where you have to advance time?  How does ChucK conceive of time passing?  How do you run shreds in parallel with the miniAudicle?  Still command-line thing?  What is meant by suspending a current shred to let time pass?

How to get ChucK to reference outside itself?  That is, read audio files, write to audio files, and also produce MIDI mappable into Live or Max, and also recognize/interact with a MIDI foot controller?

How to create a random note order?  Where to type Math.random2f object?

How to chuck an LFO to blackhole properly so it modulates frequency and/or amplitude?

Monday, April 20, 2015

ChucK Beginnings

Question: How to run multiple shreds in parallel through miniAudicle?

Answer:

Time units: ms, second, minute, hour, day, week, samp(le)

Oscillators: SinOsc, SawOsc, SqrOsc, PulseOsc

Standard effects: gain, PRCRev (reverb)

Can have ChucK read a value and then chuck it back to itself.  Add parentheses after the name of the parameter, like s1.freq()

Question: How to read a sound file in ChucK?

Answer:  WvIn reads a file (can change rate), WaveLoop loops a file (change # of loops/sec, phase offset), WvOut writes samples to an audio file.

Question: How to record an entire ChucK session?

Wednesday, April 8, 2015

SpQ15 Project Update 3

I would like to learn to amplitude modulate clips, reverse clips, and speed up/slow down during performance, with sound that's recorded live, without the performer touching anything.  Obstacles to this: Ableton won't MIDI/keyboard map these things because they take time to process.  The lag time isn't so much a problem for me, because I wouldn't necessarily want them playing back as soon as they're recorded, but sometime later.  It could be I can record the clips directly into tracks that already have the speed and reverse set already, but there's not an obvious way to pre-set those things, since the sample box only appears after something's been recorded into the clip.

Possibilities include: 1) writing Max patchers that reverse, change speed automatically, and force samples to amplitude modulate, 2) figuring out how to control automation live for the envelope part, although this sounds just about as complicated as the previous option, maybe more.

To write these Max patchers: record clip in Live, transfer audio into buffer in Max, play audio back out in processed form to a NEW clip or track in Ableton to create a new audio file--this must be a pretty common thing to do, right?

As for amplitude modulation, I really need to explore exactly what that means, sounds like, go back to Chris' Max examples and see how he does it.

Monday, April 6, 2015

SpQ15 Project Update 2

Crunchiness:
I put Tetsu Inoue's piece into Live to check out the waveform.

The piece pans drastically.  Each sound has a unique place in the stereo field.  At some places (~:50), the waveform is the same in L and R channels but delayed by small fractions of a second.

The envelopes are very crisp, with one section being individual blips of sound evenly spaced--almost as if the L and R channels were given different algorithms for reproducing a blip of some kind at different rates.  *On frequency spectrum, these blips seem to be fairly even white noise.

Some sounds have their amplitude vertically displaced in strange ways, so that the waveform itself looks like a wave (~41 seconds).

Starting at ~80 seconds, that amplitude vertical displacement thing is happening, but I can catch snippets of a voice, so I think it's a sample that's been put into some kind of weird envelope.

Fairly broad frequency spectrum the whole time, but in certain sounds high frequencies are definitely emphasized.  Some sounds look synthesized, as their frequency spectra have very evenly repeating or distributed shapes (~1:30).

I'm thinking, other than synthesizing some sounds, a lot of the piece is interesting samples time-shifted and fit to within very specific envelopes with rapid changes in amplitude.

As for Shlohmo's Teeth, the panning isn't so much present.  That's probably not a core part of the crunchy sound.  There are abrupt switches between higher frequency, digital "glass-breaking" sounds and bass sounds, almost like there's some extreme compression on there.  Side-chain?


What I should try:
Modulating incoming sound to achieve clipping and distortion.

Check out some weird compression.

Specifying repeating sets of high frequencies to be brought out in sound.

Figuring out how to change speed without use of multi-bar looper device.  -In sample view, you can change the speed either at increments of 2 or else gradually with a mouse drag.  Unfortunately, this isn't something MIDI or keyboard mappable.

Figuring out how to force an envelope onto a sample.* *It seems like this might require making Max for Live objects which take the content of a clip and put it into a buffer, and then feed the audio back out to a track in Live.

Saturday, April 4, 2015

SpQ15 Project Update 1

First week goals:  1) figure out looping in Live, and looping at different speeds
2) define some things that make a crunchy, crisp digital sound

Questions:
-Do I want warping on or off?  There will be some clips I'd like quantized, but there will also be clips I'd like to start playing immediately when I hit play.  Is there any way to individualize this by track, or something?

Here's an informative article on warping in Live: http://www.soundonsound.com/sos/dec06/articles/livetech_1206.htm

-If I turn warping off completely for this piece, what is the lag when I press record to when it actually starts recording?  If I want to have a rhythmic section, timing everything precisely will be important, like with a traditional loop station.

Observations:
-I'm trying out a looper instead of recording directly into a clip.  That seems to start recording more quickly, but when it plays back, audio doesn't seem to come through to Audio 1 or the master.

-The looper seems to have features that allow speed changes (a knob, but can also jump up/down by intervals), doubling the loop length, halving the loop length, and reversing samples.  There's also a "drag me" feature that lets the audio from a looper be dragged as a complete audio loop.  These built-in features could come in very handy, if I can figure out how to get the audio to come through.

Wednesday, April 1, 2015

Live class 4/1

NAME everything.

convolution reverb-fingerprint of a space

can freeze audio & have it play back, incl live instruments--either control click and Flatten, or make another audio track & drag it over after freezing it

instrument rack--new MIDI track, open Instruments-->Sampler --throw sample in there, it makes it into an instrument!--you can layer as many samplers as you want into an instrument rack, limited only by CPU--play in unison or in different configurations as you choose

can limit certain layers according to velocity--create a more complex sound
can pan each of them

Plugins-->Serum (synthesizer), a wave table synthesizer
can manipulate oscillators in real time
can scan through an audio file, will take chosen section of audio file as shape of oscillator

Then, when you're done designing an instrument, save it as a unique instrument, stored in Live forever.  Yay!

Can choose to add effects to part of a rack, not whole rack, or whole thing, or everything.

Granulator-->granular synthesis, drop sample in--file position determines where granulator will start playing it back--grain size determines size of grains--can scrub through file at any speed--make new sounds out of audio files--spray option gets it to jump around in audio file, introduces more randomness

Collect All and save, will save things all in the same folder

Groove Pool--picks up rhythmic signature of a beat, will quantize your MIDI to match it, so it can swing--you can alter little gray triangles to change rhythmic signature manually--then can split into MIDI, so it'll put a beat into a drum rack for you, or a beat into a kit, so you can play it on a keyboard

SpQ15 4/1: intro

munger~ object: granular synthesis object

extremes! of dynamics, density, and speed

computers are really good at making hundreds and thousands and millions of copies of things--something humans struggle to do--a good way to address extremes perhaps

murmur--third party iOS software, turns your iPhone into like a remote

Gen is a SuperCollider-like feature in Max

Friday, March 13, 2015

Final project update 3

It seems I've been missing the entire set of Max objects that are part of the Live API.  This has the potential to make everything a lot easier, except I now have to remake my patch.  I hope, if I put in the time, that it will work, and not just be a lot of time wasted.

Tuesday, March 10, 2015

Class notes 3/10: buffers, delays, filtering

allocate some body of memory, a table capable of holding x samples, with an index at each location

that's what delay~ object does--its argument should be how many samples of memory to set aside--really, it works in chunks of audio, not individual samples

I/O buffer tells you what size chunk of audio it will get from operating system before bringing it into Max--probably default 512

signal vector size--not same as I/O vector size, how many samples are calculated at one time and sent out to DAC--there's always a small delay

to find out what happened 3 samples ago--subtract 3 from the current index number and find out what's there--but you have to make sure it's within the appropriate range (0-x)--when you get beyond the end of the buffer, subtract the buffer length (44,100, say) so it puts you back to zero, and when you get below zero, add the length back--circular buffer--not literally circular, but it cycles back to the beginning of the buffer once it reaches the end--this is how delay works
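A sketch of that circular buffer in code (one second at 44.1 kHz assumed; tick is a made-up name for a per-sample step):

const bufferLength = 44100;
const buffer = new Float32Array(bufferLength);
let writeIndex = 0;

function tick(inputSample, delaySamples) {
    let readIndex = writeIndex - delaySamples;     // e.g. 3 samples ago
    if (readIndex < 0) readIndex += bufferLength;  // wrap back into range
    const delayed = buffer[readIndex];
    buffer[writeIndex] = inputSample;
    writeIndex = (writeIndex + 1) % bufferLength;  // cycle back to the beginning
    return delayed;
}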

tapin~, tapout~ objects work with a buffer to access different indices at random (?)

click~ object--when you bang it, it sends out a 1 for a single sample--otherwise it sends out 0s--so, if you do that in an audio signal, it sounds like a click

if you add two sine waves of the same frequency, one delayed relative to the other, you get an amplitude-scaled sine wave of the same frequency--as the delay sweeps through a full phase, the amplitude goes from doubled to canceled--that's usually how a filter works, by suppressing or boosting certain frequencies
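The trig identity behind that, with the delay written as a phase offset φ:

$$\sin(\omega t) + \sin(\omega t + \varphi) = 2\cos\left(\frac{\varphi}{2}\right)\sin\left(\omega t + \frac{\varphi}{2}\right)$$

At φ = 0 the amplitude doubles; at φ = π (half a period of delay) it cancels completely.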

tapin~ object--creates a place in memory that can hold a certain amount of time in sound--specify it in ms--must be connected to tapout~

tapout~ object--tell it how many ms of delay you want it to look in the past, translates from ms to samples for you

can use tapout~'s output to feed back into tapin~ to generate feedback--generally this kind of feedback doesn't work in MSP, but this is a rare exception; it works b/c tapout~ will never let you go back less than a signal vector's worth of time in the past--this allows you to do repeated echoes, like a "delay" function on a pedal or in a DAW

jit.matrixset (or something like that) object, for video--stores a number of matrices, and you can ask it to look a certain number of index #'s back into the past

can also specify time value for delay in a tempo-related way, with translate object

in pop music, subliminal delay of like a half note, at an extremely low level--builds mysterious harmonic complexity b/c of suspensions--check out any Enya song, or Prince uses it a lot even with guitar

chorusing is a slight randomization of delay time, like passing a changing filter through sound, to emulate slight out-of-tuneness, or a bit like a flanger

there's an example that uses a rand~ object with tapin~ and tapout~ objects to give somewhat of a chorus-y effect

*good idea: use loop $1 message into sfplay~ object to loop it, put a toggle into the loop $1 message

play a version of a sound that's delayed exactly 1 sample along with the original--they will interfere destructively at half the sampling rate, creating a kind of low-pass filtering effect
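In code, that sum-with-a-one-sample-delay is just averaging adjacent samples, which cancels completely at half the sample rate (a sketch of the idea, not MSP's internals):

let previous = 0;                    // the one-sample delay line
function averageFilter(x) {
    const y = 0.5 * (x + previous);  // halve so the sum doesn't clip
    previous = x;
    return y;                        // a simple lowpass (one-zero FIR)
}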

comb~ object--allows feedback delays internally, feedbacks less than one vector size, can specify milliseconds of delay, how much of the feedback and feedforward signals do you want

delay a snare drum by 1 ms and mix it with the original, get a pitchy sound--the comb filtering puts a resonance at 1/0.001 s = 1000 Hz

reson~ object--choose a center frequency to be emphasized, and a Q factor (how sharply to emphasize it)--did Chris make this, or does it come with Max?  (it's a standard MSP object)

table object--can specify frequencies, amplitudes, etc.

biquad~ object--all-purpose filter object in Max, often used alongside filtergraph~ object which allows you to send info into biquad~--help file shows you the formula it uses, you could input the scaling factors yourself, but filtergraph takes care of that for you--left inlet of filtergraph object lets you put in menu and set mode of filter--allows one cutoff or center frequency, basically
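A sketch of the biquad difference equation in code (the coefficient naming follows Max's convention as I remember the help file showing it--filtergraph~ normally computes a0-b2 for you):

let x1 = 0, x2 = 0, y1 = 0, y2 = 0;   // two samples of input & output history
function biquad(x, a0, a1, a2, b1, b2) {
    const y = a0 * x + a1 * x1 + a2 * x2 - b1 * y1 - b2 * y2;
    x2 = x1; x1 = x;                  // shift input history
    y2 = y1; y1 = y;                  // shift output history
    return y;
}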

*filtering is delay--it's just really short delay

you can chain biquad~ objects--there's even an object called cascade~ that does this for you

Reason is a cool DAW that mimics hardware setups in electronic music studios, complete with cables and all

Sunday, March 8, 2015

Final project update #2

I'm having trouble mapping dials in Max for Live to actual dials on the Live interface.  Specifically, I want these dials to control the proportion of the signal being sent through reverb and delay, and the selected audio track in Live has built-in send dials for this purpose.  I want the dials to run through Max, because I want the motion of the dials to be automated very gradually.  I suppose there may be some way to automate this in Live itself, but I was hoping to have Max time everything.

I already had to download one external object, aka.keyboard, to get Max to communicate with Live.  I made a bunch of keyboard shortcuts in Live, and I'm having this object in Max automatically press certain keys at certain triggers.  I thought mapping Max objects to Live controls directly would be super easy, because that's what I thought Max for Live was for.  It doesn't seem that way, though.

Thursday, March 5, 2015

Class notes 3/5: 3D animation and OpenGL

can do "moderately interesting" things with 3D animation in Max

there's a center point (0,0,0) around which everything is measured

OpenGL has lighting capabilities to determine direction of incoming light

reminder: use qmetro instead of metro, since it's ok if a frame of video gets dropped

jit.gl.render object --> render means turn something into a usable output, so it draws whatever it thinks it should draw and puts it out

send a message "erase, bang" to jit.gl.render to clear the previous information and give us the new information

give jit.gl.render object @erase_color 0. 0. 0. 1.--which means no red, no green, no blue, full opacity--when it erases, it's black--RGBalpha for this object (not the same order in every object)

jit.gl.gridshape object --> makes lots of tiny triangle drawings and stitches them together to make your picture--@shape attribute: sphere, cube, torus, all kinds of things--put in a pak position 0. 0. 0. object so the initial position is at the origin--if you give it no size information, it will pretty much fill up the screen

to automate something moving through space, have program running to continually change x, y, and z values--can use line objects or any algorithm you can dream up

check jit.gl.gridshape's attributes in inspector window--there's a lot--also check OB3D attributes, which are all available to any GL object

attrui object--type in gl_color, it gives you a menu--when you make an attrui object, it gives you the options to change that attribute in selectable form, either a menu or a continuum

@lighting_enable --> turns on lighting capabilities, otherwise looks like light is coming from all directions

jit.gl.texture object --> whatever you send into this can be referred to with the @texture attribute of jit.gl.gridshape--texture can be an image or a movie that will play in that shape

depth enable attribute--> can decide which objects are in front of & behind other objects

can also put a movie on a plane to put it in 3D space--> jit.gl.videoplane

to change name of a string of jit objects, send message drawto [new name]

pak camera object--> tells where your camera is coming from, position can be controlled just like a gridshape

nurbs is a cool concept--you have a 3D shape, and you have points that protrude or intrude to distort it--can put an image or video on the shape to see the image or video distorted

jit.phys.world / jit.phys.body (or something like that) objects--> should emulate some natural physical movements, like a ball bouncing against a wall

jit.gl.multiple object --> if you want a whole bunch of the same thing, instead of adding shapes or whatever individually--once it reaches the maximum number you specify, it'll stop

*from reading: wireframe mode allows you to see 3D shapes as a grid

Tuesday, March 3, 2015

Class notes 3/3: JS for Max

JavaScript is an object-oriented programming language.  That means it has objects with properties and methods.

Start by specifying "inlets = " and "outlets = " properties.

function name(a, b, c) {
           method();
           method();
}

A function can have arguments.  Above, a, b, and c are the arguments.

Useful method in Max: outlet(1, "bang");--with two outlets, this sends a bang out the right outlet, whereas outlet(0, ...) sends out the left outlet--try to keep the right-to-left order that's typical for Max.

function anything() {
           // runs for any incoming message not already handled by a named function
}

This means, if anything comes in other than what's already defined, do this.
A function can call another function within it.

In JS, no distinction between ints and floats.  In Max, yes.

At beginning of script, set variables.  In JS, equals sign means "set this to this."  A double equals sign == means is it equal to?  Testing if the logical expression is true or false.

var x = 0;

Any variable you set outside of a function is considered a global variable.  That means it's accessible from any part of the program, but that doesn't mean it can't be changed in a function.  A local variable only exists within a function, and it ceases to exist after that.

Set up the script so that if people put the wrong things into the object, it sends out an error message--can do this function-by-function.

function msg_int(incoming) {
           // an integer came in; store it and/or send it out
}

That's a function that allows an integer to be input; you can set it to a variable to store it, and/or send it out immediately.

A function in a JS script is a potential incoming message from Max.  So you might have function msg_int, or function msg_float, or whatever other message might come in.

isNaN() means "is not a number"--so if incoming isn't a number, you can send an error, or whatever

Arrays:

var thearray = new Array(1);

arrayname = new Array(# of elements);

var myfavoritearray = [32, 76, 95.3, "chris"];

Very much like a coll object, numbered automatically with index numbers

while loop-->set an initial condition, keep testing the condition; while it's true, do the body--you also have to make sure something moves toward the stop condition, or it never stops
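A minimal while loop of that shape, using the array from above (post() prints to the Max window in js):

var i = 0;                               // initial condition
while (i < myfavoritearray.length) {     // keep testing the condition
    post(myfavoritearray[i] + "\n");     // do this
    i++;                                 // move toward the stop condition
}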


Sunday, March 1, 2015

JavaScript in Max article

js object

JS is best known for scripts within web pages.  Max uses core JS, not client-side JS.  Save as a text file with a .js extension somewhere in the Max file search path, then type the filename as the argument to the js object.

Default 1 inlet, 1 outlet; specify this at start of script.

Create a function()

Friday, February 27, 2015

Final project update #1

I'll use Live for the looping portion of the piece.  I'll have three loops be recorded live, automatically, and played back with effects.  Each section of the piece will start with recording a loop, so that the same trigger used to start recording a loop will be used to move on to the next scene, with appropriate effects added to that scene.

Things to do in Max:
Figure out what objects can be used to control specific functions/buttons in Live
Set up triggers for each of 3 loops/scenes that interact with Live
Set up how long each loop will record and play back; the first loop will stop at a predetermined time, probably when the trigger for the second loop happens, but the second and third loops will go on through most of the rest of the piece
Set ending time for loops

Things to do in Live:
Set scenes corresponding to sections of the piece
Add effects to each loop section
Add effects to each overall scene

Thursday, February 26, 2015

Class notes 2/26: Panning in stereo, languages that interface with Max

Sound travels ~1 ms/foot.  Less than a ms of time difference from one ear to the other, but we can hear that.

Head-related transfer functions (HRTFs): head-related filtering--sound will have slightly different frequency content coming from different locations.  Useful for headphones.

A straight linear crossfade makes the sound seem farther away when it's right in the middle--you have to compensate for the inverse square law.  There's a perceived 3 dB drop in the middle, because there's a 6 dB amplitude drop in the middle.  To account for that, you could take the square root of the left and right speaker values, and you'd get a more natural sound.  A more efficient way is to look up the value in a quarter of a cycle of a cosine wave and assign that to the L channel, and a quarter cycle of a sine wave to the R channel.  That way, 0-->1 pans L to R.  You can use the phase-offset inlet (R inlet) of the cycle~ object to achieve that.  Make sure you use a line~ object or rampsmooth~ object to interpolate between different dial settings if you want to make a panning dial, or else you might get a bunch of clicks.
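A sketch of that equal-power pan in code (pan position 0..1; equalPowerPan is a made-up name):

function equalPowerPan(pos) {            // 0 = hard left, 1 = hard right
    const angle = pos * Math.PI / 2;     // a quarter of a cycle
    return { left: Math.cos(angle), right: Math.sin(angle) };
}
// equalPowerPan(0.5) -> both ≈ 0.707: a 3 dB dip in the middle instead of 6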

In real life, panning should result in a Doppler effect.

Within Max, there are a couple ways to use text based programming.  (Max itself is written in C.)

js object--a JavaScript object, but internally you write it in a text-based program.

gen~ object--there's a language called Gen that lets you manipulate every single pixel of a video signal or every single sample of an audio signal

mgraphics--a way to use JavaScript to control graphics

Processing--a Java-like interface for doing 2D graphics

Next week:  Jitter & 3D animation, an introduction

Tuesday, February 24, 2015

Class notes 2/24: re-synthesizing acoustic sounds, forward object, buffer~, groove~ and play~, things to think about when writing a Max piece

peakamp~ object--can send continuous amplitude information, by banging it with a metro, or have it report peak amplitude for some segment of time

sigmund~ object can also send out continuous amplitude information

sigmund~ object can also send out continuous frequency information, not just pitch information--you can have, for example, a sine wave recreate that frequency shape--put it into a buffer, graph it--this is a non-real-time process, although you could do the same thing in real time with an ezadc~ object

"stealing the expressivity of the player"

jit.poke~ object--takes value of a signal and puts it into a matrix where it can be displayed

sfplay~ object-->sigmund~-->buffer~ objects off to the side with record~ object for each

can then use those curves to control synthesized sound/re-synthesize sound--allows you to change pitch or speed independently

When you write a Max piece:
Make list of things that need to be checked before a piece starts
Make version of score that lists the triggering actions/notes to remind yourself

He's made an object called ducker~ that tells computer to ignore input that's below a certain threshold.  It looks really handy, see if it's in the examples section.

scheduler in audio interrupt--set any scheduled events along with audio scheduler
CPU meter--can tell you how much of the processing power you're using
Timer--can run a timer that counts up, so you can see how long the piece has been going on
Check Audio--opens Audio Status window
Can create a sub patch that determines threshold automatically, by using peakamp~ object.

change object--filters out repetitions of the same thing

forward object--sends what comes in to the appropriate objects within the patch--looks like a send object, but it doesn't have to know in advance where to send stuff--like a send object where you can change the name of where you want to send it

Check out Chris' *main* patch, which demonstrates the forward object and forwarding.

If you're thinking about how to make a Max structure for a piece, think about how a fellow player or improviser would react to what you play.

buffer~ object--makes a space in RAM, whereas sfplay~ object creates a small "personal" buffer but reads from hard drive--because it's Random Access Memory, can go to any part of it randomly at any moment--a whole bunch of objects can access the buffer--see his objectsthataccessbuffer~ example

**once you create a buffer with a name, all the objects pertaining to that buffer should contain the buffer name

replace message--resizes buffer and puts new thing in it--when it's done loading a file, it sends a bang out its right outlet

info~ object--gets information from buffer~ object, like channel and duration information

read message--like replace, loads something into the buffer, but doesn't set size of buffer--if you use this, you need to have set the size of the buffer already, and it'll play through that amount of time regardless of how long the sound file is

cycle~ object--reads cyclically through 512 samples of the sound and treats them like a waveform--any 512 samples you choose--and you can tell it to do that however many times per second, so it creates some kind of tone out of that

index~ object--looks up what's in the buffer at any given moment, esp. when paired with count~

play~ object--plays the sound in the buffer--can give it arguments with starting and stopping points in ms--can also get it to scroll through the buffer--can send start and stop messages, or pause and resume messages (?) probably, or "start 0 420" to start at the beginning and read through .42 seconds--can also say "start 420 0" to play that part of the file backwards--can say "start 1000 2000 1200" to read from 1 second to 2 seconds, but do it in 1.2 seconds--can use a line~ object to move through the buffer in certain ways, sending a 3-part message into it--"0, 1000 2000" for example--can feed cycle~ into it to set a speed for moving through it, usually at a really low frequency like 0.025; it's like a sinusoidal scrubber moving forward & backward through the buffer, sort of like scratching a record back & forth--can add number boxes for depth (how much of the sample to scrub), rate (how fast), and center (the point around which it will scrub)

Yes, this can work for video, but you have to process the video in parallel and just send it the same information

record~ object--allows you to record into buffer

poke~ object

peak~ object

wave~ object

groove~ object--you can set loop points within the sound, change speed, and tell it the starting point in the sound--very similar to play~, but defines things differently--in order to play, it needs an MSP signal that tells it the rate at which to play (sig~) and needs a startloop message--can continuously change the rate at which it plays, positive and negative values--can also set loop spots within the groove; have to send a 1 into a loop message into groove~ in order to activate the loop features--right outlet sends out numbers 0-1 as it reads through the loop--could send that into a buffer that contains a trapezoid to make a window that fades it in and out with each loop, to avoid clicks--can also have it loop to a specific moment by sending in a numerical message--can also send it "set soundA" messages to change the sound file it's reading through

sig~ object--passes on rate to send in to groove~

Thursday, February 19, 2015

Class notes 2/19: dB, scaling objects, transfer functions, buffer~

Linear Mapping: can think of a range as having a minimum and a size (100-1,000 has a min of 100 and a size of 1,000-100=900)
to map into another range, find the multiplying factor--so to map the above range into a range of size 60, divide 60 by 900 to find the multiplier--multiply, then add the new range's minimum back on

scale object--type scale 0 127 0. 1.--that's the minimum and maximum of the input range, and then the minimum and maximum of the output range, and it does all the math for you
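The arithmetic the scale object is doing, as a sketch:

function scale(x, inMin, inMax, outMin, outMax) {
    return (x - inMin) / (inMax - inMin) * (outMax - outMin) + outMin;
}
// scale(64, 0, 127, 0., 1.) -> ~0.504, like [scale 0 127 0. 1.]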

can feed the scale object into a *~ object to scale the volume of an audio signal--add a plain dial and it'll get really loud in the first half of turning up but not change much in the second half--because it's scaled linearly, not logarithmically

dbtoa object--converts dB to amplitude

atodb object--vice versa

can also say dbtoa~ and atodb~ to convert every sample of a signal, if for some reason you need that
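The formulas behind those conversions (0 dB is amplitude 1.0; every -6 dB roughly halves the amplitude):

function dbtoa(db) { return Math.pow(10, db / 20); }
function atodb(a) { return 20 * Math.log10(a); }
// dbtoa(-6) -> ~0.501;  atodb(0.5) -> ~-6.02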

for computers, the reference point for loudness is the loudest thing you'd want to hear, not the quietest thing you'd want to hear--usually 0 dB is the loudest thing your computer can output, so we talk about negative dB--if you set up a dial with a range of 120, then an object that's -120, then dbtoa, it'll tell you how many dB you're at on your slider--but then you can hardly hear it once it's down to about -70 dB in current listening conditions

a good quality CD player will have about an 85 dB range, and then room noise affects it, too

practically speaking, a 70 dB range is good

live.gain~ object--gives you a -69 dB range, plus a little above 0--a useful slider--also shows input monitoring--also has a small linear interpolation time, can choose ramp time in ms, to avoid clicks

mtof object--converts MIDI pitch to frequency

ftom object--converts frequency to MIDI pitch, can give floating point argument 0. to ftom to see fraction of a pitch you'd get with that frequency
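The formulas behind those two (MIDI 69 = A440, 12 notes per octave):

function mtof(m) { return 440 * Math.pow(2, (m - 69) / 12); }
function ftom(f) { return 69 + 12 * Math.log2(f / 440); }
// mtof(60) -> ~261.63 Hz (middle C);  ftom(262) -> ~60.02, a fraction of a semitone sharp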

If you want to use an exponential curve rather than a line object to change a sound (so it sounds less unnatural), add a live.gain~ object, and then use a line~ object to control the slider on the live.gain~.  That way, it'll be linear in terms of frequency.  Put a message into line~: initial frequency, final frequency, time to get there.  Or, you can throw an mtof~ object between line~ and the live.gain~, which will make the change linear in terms of pitch and logarithmically in terms of frequency.

There is no dedicated log object in Max.

*If you want to synthesize sound in Max, YES, it does recognize "57.3" as a pitch.  It only has to be a whole integer if it's for MIDI specifically.

function object--a GUI object, allows you to draw point-and-line diagrams that describe a shape over time--when you bang this object, the 2nd outlet will send information to the line~ object, which will send out a shape that corresponds to the shape you've drawn--can specify time scale and amplitude--another way to use function: send a number in, and it will send the corresponding number out, so as input goes from 0-1,000, output goes from 0-1--this corresponds to the domain as set in the inspector--shift-click to eliminate a point--can't manipulate the function object in an unlocked state--it takes an input number (in time), looks up the corresponding point on the graph, and outputs the result--so a straight line up to 1 gives you exactly the same thing out as you put in--has a "Curve" mode, see the inspector--can option-drag on a line to turn it into a curve in Curve mode

pow object--if you type pow 3, it gives you an exponent of 3

note: jit.movie can read still images as well as movies

jit.brcosa--jitter brightness, contrast, and saturation--can send it brightness messages to change its brightness

Do we perceive brightness linearly or exponentially?  it seems a little bit exponentially, from this example--need to make it exponential fading up and logarithmic fading down--maybe?--fading brightness by a power of 2 seems a little better than 3

buffer~ object--give it a name, so "buffer~ MySound"--can send in message "read" or "replace" to fill buffer--creates a space in memory to stash the sound--kind of like sfplay~, but grabs whole sound file, puts it in RAM, but don't have to read through it linearly--very handy

Tuesday, February 17, 2015

Class notes 2/17: pitch tracking

Pitch tracking techniques

Auto-correlation: if there's a basically repeating waveform, computer can detect when the differences between samples and past samples are minimal (when it's basically repeating), and so computer can figure out period of repeating sound, will use that period to calculate a frequency--not too much worry about time delay, probably how guitar tuners work--not necessarily fast enough for performance

Spectral analysis: fiddle~ and sigmund~ use this; finds harmonic relationships between peaks in frequency spectrum, uses that to determine fundamental frequency

Sigmund is best at outputting note and volume information, with arguments "notes" and "env."  What you do with that information is up to you.

It misses notes if you play fast, and it doesn't like multiple pitches at once.

If there's a danger of it picking up its own sound or an ensemble member's sound, Chris measures threshold (sound of audience, other players), and sets it so that sigmund~ only pays attention to sound above that threshold.  It's also a good idea to set sigmund to ignore sounds that are below the natural range of your instrument.

peakamp~ object: reports loudest amplitude in amount of time you set--can set it so that if peak amplitude is below a certain level, ignore it

*on Thursday, stay tuned for talking about transfer functions for mapping domains onto a third, "bridging" domain

Monday, February 16, 2015

Final Project, more concrete ideas & timeline

For my final project for the quarter, I'll write a piece for solo saxophone and Max for Live.  I'll use pitch tracking in Max to trigger certain events, and I'll use Ableton's looping and recording capabilities to capture and play back material from earlier in the piece.  Ideally, I'll figure out how to do this automatically, without having to hit buttons during performance.  If all else fails, I can still hit keys on the laptop, or borrow a foot pedal.

Tech-wise, I'll need my computer (as long as it's fast enough), a microphone, and an amplifier or PA system.  I'd prefer an amp, since I could put it behind me and be able to hear my own music.

Knowledge-wise, I need to get familiar with sigmund~ and/or other pitch tracking strategies in Max.  Since I just got Live, I'll have to dig in to its recording, playback, and looping capabilities.  Once I'm more familiar with these features, I'll have a better idea of how to compose the piece.  I'd like it to have more and less rhythmically driven sections, so I can get an idea of how to work in both those realms.

Here's a vague timeline:

Week 7: test patches with sigmund~ etc.
test Live's looping capabilities
start composition

Week 8: complete draft of the piece

Week 9: testing and revision

Week 10: piece ready to perform!  (??!)

Thursday, February 12, 2015

Class notes 2/12: critique of movie patches, animation basics

*look at t i b i b / trigger object-what is it (?)

To sync audio with a movie, trigger opening the audio file and playing the movie at the same time.

It's more computationally intensive for Max to draw video in jit.pwindow than if it's in a separate window, especially if it's being resized.

When you use jit.movie, use autostart attribute, set it to zero.  Type jit.movie @autostart 0.

Quicktime has its own time units, hence the complicated way of getting the time in a movie per the example on the website.

qmetro object--Chris recommends for video, puts them on low priority queue--we notice dropped frames less than we notice audio dropouts

*read article about timing Chris posted

jit.rota object--allows you to rotate video--anchor points specify point around which entire movie rotates--so you might accidentally flip it off the screen

Mapping dimensions of one kind of art into another is a whole interesting area.  For example, Anthony's interest in correlating lights (color, brightness) with musical harmony (pitches, their relationships).  Keep in mind linear mapping isn't the way people perceive things--we tend to perceive things logarithmically.

jit.lcd object--a place where you can draw stuff, creates a matrix & understands certain messages about what to draw--type jit.lcd 4 char 320 240   (4-plane means RGBalpha)--send it to a jit.window to show whatever gets sent out of the LCD object--into that, loadbang the font, the background color (brgb), and the foreground color (frgb) to be all 255, which ends up white--send the message "moveto", prepend "write", and send it to the LCD

read a coll into the LCD, which can tell the LCD a series of things to do

can add LCD to movie with a jit.plus or jit.+ object, and it will throw titles on there

paintoval message--follow it with coordinates you'd like oval to be bound with

Tuesday, February 10, 2015

Class notes 2/10: latency, crossfading

latency: delay, usually from conversion from voltages to numbers & vice versa; done by filling a buffer before performing operation; I/O time is ~3 ms in the best case

crossfading: separate line~ objects fading one sound out as other fades in; can have one signal controlling both amplitudes, check out Chris' mix~

matrix~ object--creates an object with a certain number of inputs & outputs--like a little multichannel mixer

jit.xfade object--if attribute is 0, only left video; if attribute is 1, only right video; if 0.5, equal mix of both videos--can crossfade between videos

Monday, February 9, 2015

Final Project Ideas

Things I definitely want to learn to do in Max, eventually:

-analyze audio input for pitch information, have Max respond based on that:  People have made objects you can download, like sigmund~     **also consider amplitude tracking

-automatically change parameters of sound based on darkness/lightness (alpha values) or colors of movie

-improvise live with a sampler (preloaded samples, effects)  **I'm told UCI doesn't have a sampler to be checked out--also, if there were, I wouldn't be the only person wanting it--I'd have to go ahead and get a SoftStep**    **go to AMC for MIDI controllers w/ appropriate software**    **look into other MIDI pedals/pedal boards**

-design my own loop station for live performance, with a few extra features (pitch shift, time shift)

-solo piece for voice and Ableton/Max

-solo sample piece using only sounds from saxophone



I'll have to see if I can buy Ableton before the quarter ends.  If I buy it soon, I'll look into possibilities for customizing a loop setup, and I can perform with voice and/or saxophone that way.  If I'm unable to buy Ableton soon, I'll work on having Max analyze pitch input.  That would be valuable for future projects.

Problems with Patcher 5

I managed to get my movie to play, and to control it the way I want.  I tried to set up timepoint objects to trigger a series of sounds as the suitcases fall off the conveyor belt, but for some reason, although Max can load the sound files, it doesn't play them.  It definitely doesn't play them in time, as I've indicated with the timepoint objects, but it doesn't seem to play any of them period, except the first one I loaded, which is the background sound.  I hope I can figure it out tomorrow.

*I'm told I need a transport object to get time point objects to work.  That explains part of it.

*I have to figure out polyphony w/ audio files

Tuesday, February 3, 2015

Class notes 2/3: Aiyun Huang concert

patcherargs--put it in a sub patch, append it with arguments, that way the patcher knows how many arguments to expect; when patch loaded, arguments come out as a list; this allows you to use the file in another place with reconfigurable arguments

***connecting gesture to sound in electronic music***  electronic music can be disembodied    ***charisma factor human vs computer***        ***scale factor: human sounds can't get as loud as a speaker***      performance by Aiyun Huang challenges instruments-plus-electronics model of electro-acoustic performance

How do you write a whole piece in Max?
He has 3 strategies:
-standard, formal, sectional structure (using a patch for each one, maybe)
-going through a set of processors (like pedals) all the time, can turn them on and off--serial technique
-parallel technique--everything happening all at once, windowing only some of those things

_______
VISUALS: you need four #'s to express a color on a pixel: red, green, blue, and alpha (transparency/opacity)

37 million numbers per second of video at low, standard-TV resolution (640 x 480 pixels x 4 planes x 30 frames/sec ≈ 36.9 million)

Tuesday, January 27, 2015

Class notes 1/27: more useful MIDI objects

kslider object--gives you little visual keyboard, it'll tell you the number of the pitch when you click on the keys--or you can get these numbers from the little keyboard in the documentation

mtof--MIDI pitch number to frequency

Velocity signals are usually interpreted by receiving device as amplitude, but some synthesizers may be more sophisticated and emphasize different frequencies depending on velocity to mimic different acoustic instruments.

4n is quarter note, 4nd is dotted quarter note, 8nt eighth note triplet, or you can calculate triplets by number of ticks--to have both duple and triple feel in same piece, send each of those messages differently, or different metros

timepoint object--specify moment that you want something to happen, can be 29.1.0 (bar 29, beat 1, tick 0)

metro has attribute called @active, which means whenever the transport is on, that metro will be on--another attribute called @autostarttime (or something similar), which allows you to specify when metro turns on

drunk object--kind of random generator, not quite as random as random object

urn object--generates numbers like the random object but won't repeat numbers--if you keep banging it once it's used them all up, it won't change its number but will send bangs out its right outlet--can input the message "clear" to restart it--the Dobrian trick is to connect urn's right outlet to the inlet of a "clear, bang" message so it restarts automatically after it gets to the end

random object--randomness is maybe impossible(?) to define, b/c we actually have very little or no experience with things that are truly random in the universe--in computers, it's pseudorandom--uses a deterministic system; different programming languages have different ways of doing it and expressing it--can constrain the number of random possibilities in Max, so you can say random 12 (gives you twelve possibilities) or send 12 in the right inlet--every time it's banged, it sends out an integer between 0 and 11--if you don't give it an argument, it only chooses 0--random only deals with non-negative integers

decide object--random object, but just for 0 and 1, same as random 2

noise~ object--generates white noise, can filter it to create different effects, can sample it

sah~--sample and hold--samples the noise, but only when it receives a trigger in the R inlet--the trigger can be a cycle~ object (a cycle~ at 8 Hz samples the white noise 8 times per second and sends it out)--add one (so it's not negative), and you can mess with it from there (multiply it by a frequency, etc.)

prepend object--takes whatever you type in as argument, puts that argument before whatever comes in the inlet and sends them out together

append object--same as prepend, but puts argument on end of whatever comes in inlet

text button--can have button mode or toggle mode, if toggle mode, can have diff't messages depending on whether it's toggled on or off

table object--only stores integer numbers, whereas coll object stores floating point numbers

if object--works like an if/then statement

when object--reports the time in measure, beat, and ticks, have to unpack it though

MIDI deals with equal temperament, although some synthesizers you could send to might be able to handle other tunings--MSP gives you really specific control over pitch, so that would be a better way to work--but you can use Max to map onto a tuning system you've devised yourself, using a coll object would be one way

Middle C is 60--if you divide the pitch number of any C by 12, the remainder will be zero.  Therefore:
modulo 12 operation, %: gives you the remainder when divided by 12, so C's are 0, C#'s are 1, D's are 2, etc.--gives you the pitch class of each note--so you could find a way to multiply each pitch class by a certain number to detune them
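In code:

function pitchClass(midiPitch) { return midiPitch % 12; }
// pitchClass(60) -> 0 (C);  pitchClass(61) -> 1 (C#);  pitchClass(74) -> 2 (D)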

TO LOOK INTO: umenu, drunk

Thursday, January 22, 2015

Class notes 1/22: debugging, critique of rhythm patcher, sub patchers, transport object

stack overflow--prevents outlets from sending out more information until overload problem is fixed

Critique of Rhythm Patcher:
I'm adding the modulating frequency to the carrier frequency, pushing it all to clipping levels.  Have to put the +~ object BEFORE the cycle~ (carrier frequency) object and do +~ 82 so that you have the carrier frequency at the proper place.

Add something down at bottom that multiplies them all by a smaller number, because adding all 6 together will still clip otherwise.

Modulating with a sawtooth wave will still clip, despite the smoothing formula.  To use a sawtooth-type wave, phasor~ is fine, or rampsmooth~, which adds in some filtering.

Embedded patchers--When you have the sub-patcher made, highlight all things you want to be made into one object, select Edit, Encapsulate--makes single object starting with p, click on it to open sub patcher.  Also creates inlet and outlet object in sub patcher, which show up as regular inlets & outlets in main patcher.  Can also do this by saving a patcher normally with a title like "chrisosc~," and then you can use that as a sub patcher in all your other patchers.

pack object--send several number boxes into it, it turns them into a list and sends them out as a unified list

unpack object--gets list in inlet, sends messages out through separate outlets

function object--makes shape you want using little dots that are connected, to delete dots, shift-click

Ways to think about time: absolute time, frequency (kind of the inverse), metronome time

transport object--separate clock/timekeeper that thinks in musical terms, tempo, bars, beats, etc.--turned on or off with 1 or 0, but it's not a metro, which thinks in relation to absolute time--make transport object with toggle, then metro 4n (to make quarter notes, for example), and metro will now know what 4n means--default tempo of transport is 120--input tempo message

tempo message--tells tempo to transport object, type "tempo 48" or whatever tempo into message box

Tuesday, January 20, 2015

Class notes 1/20: MIDI history, messages

Musical Instrument Digital Interface (MIDI), since late '70s, early '80s.

BINARY
1 byte = 8 bits = 256 ways to represent information  (2^8)
10101101-->  first 1 is MSb (Most Significant bit) and last 1 is LSb (Least Significant bit)

HEXADECIMAL  (colors in HTML use this, a few other things)
0123456789ABCDEF
A0 is hexadecimal for 160 (A is 10*16^1 plus 0)
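The same conversions in code:

parseInt("A0", 16);     // -> 160
(160).toString(16);     // -> "a0"
(160).toString(2);      // -> "10100000" (MSb first)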

Dave Smith suggested to synthesizer manufacturers who were considering using computers to control their products to create a language they could all use.  Receiving device actually creates the sound.

Macs have CoreAudio and CoreMIDI built in to their operating system, and Macs include the AudioMidiSetup application automatically (hidden in a subfolder; search for it).  Macs also have a MIDI device built in.

midiin object--when patcher locked, can select input device--also digital ways to send MIDI to another program, like GarageBand

128 possible gradations of velocity in MIDI
"note on" message with 0 velocity does same thing as "note off" message

SERIES OF BYTES IN A MIDI MESSAGE
status byte (tells you what kind of message it is; its first bit is 1, so its value is 128-255 and can't be mistaken for a data byte)
data byte (first bit is 0, so 0-127; the actual content of the message--the number 60 means middle C, then move chromatically from there)
velocity (0-127, where 0 means the key has been released--or you can send a dedicated note-off message, and then the third byte will be release velocity--we don't use this very often)
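
A rough byte-level sketch in Python (the particular values are examples, not anything MIDI requires):

    channel  = 0                  # channels are 0-15 inside the machine
    status   = 0x90 | channel     # 0x90 = note on; first bit is 1, so 144-159
    pitch    = 60                 # middle C (a data byte, first bit is 0)
    velocity = 100                # 0-127; a velocity of 0 acts as note off
    print([status, pitch, velocity])   # [144, 60, 100]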

-MIDI sends no information about duration-  Inventors thought this would be used in real time, in performance, so people would be pressing and holding keys to determine duration.  Receiving device can contain internal processing to measure duration.  That's how a sequencer works--it has a tempo going, uses tempo info to figure out where beats & durations are.

Pitch bend messages can also be sent--224 is the status byte meaning pitch bend, then the fine pitch bend value, then the coarse pitch bend value--some devices put 0 in that middle slot and don't use it, some devices combine fine and coarse bend into one 14-bit number so there's much finer pitch bend resolution--8192 is the centered, no-bend value--the receiving device decides how much bend the pitch wheel will actually give you; a lot of devices use two semitones up and down as a standard
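
Combining the two data bytes into one 14-bit bend value, sketched in Python:

    fine   = 0                    # LSB; some devices always send 0 here
    coarse = 64                   # MSB; 64 in this slot lands on the center
    bend   = (coarse << 7) | fine
    print(bend)                   # 8192, the no-bend center of 0-16383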

Program change message: changes sound (piano, for example) to some other sound

pgmout object--sends out easy MIDI message to change sound--also internally prevents you from sending out messages outside the 0-127 range

midiout object--considers every number you input to be a byte, you have to program it all manually

notein--looks at MIDI coming in, only responds to note on and note off messages--sends out channel, velocity, and pitch information, just the stuff you care about--or you can type in the device/channel you want after that, and it'll get rid of that outlet for you

noteout--takes in pitch and velocity info and sends the note message out to the MIDI device; can specify channel

*when programming, make sure you program a way to end a note or sound

channel information: last four bits of status byte--first channel is 0000, second channel is 0001, etc.--computers have channels 0-15, we humans call it 1-16
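
Pulling the channel out of a status byte, as a quick Python sketch (the status value is an example):

    status = 145                 # note on, channel bits 0001
    channel = status & 0x0F      # low four bits -> 1
    print(channel + 1)           # humans would call this channel 2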

bendin object--gives you a simple 0-127 readout of pitch bend, with 64 being middle spot

ctlin object--Mod wheel and other controllers, can select them from the object

ctlout 123 1--input 127 as a message; that's reserved in MIDI as the "panic" message that turns everything off

makenote object--expects to receive pitch, velocity, and duration--sends out pitches & velocities--tags each note with a duration value, schedules a note off for that pitch in the future
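
A rough Python analog of that scheduling idea (not how Max implements it; the send callback is hypothetical):

    import threading

    def make_note(pitch, velocity, duration_ms, send):
        # Send note on now; schedule the matching note off for later.
        send(pitch, velocity)
        threading.Timer(duration_ms / 1000.0, send, args=(pitch, 0)).start()

    # make_note(60, 100, 500, lambda p, v: print("note", p, v))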

*counter object is good for making chromatic scales, just specify which pitches to start & stop

An array is a stored list of information, use index (#s starting at 0)--can use to store preferred pitches, etc.

coll object--lock, double click, and it opens an editing window to type in what to store, in the format "1, 60;"--as in, when you receive the message 1, output middle C--like line-by-line programming--the array doesn't automatically get saved with the patcher; use the inspector to "embed," or else type "coll @embed 1" right into the object IMMEDIATELY--otherwise it'll erase the whole array you've already typed when you try to save, so embed it FIRST
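
Those "1, 60;" lines behave like a key-to-value lookup; a Python dict is a loose analog:

    coll = {1: 60, 2: 64, 3: 67}   # index -> stored output
    print(coll[1])                 # 60: receive the message 1, output middle C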

also objects called table (stores numbers, very simple) and buffer~ (stores audio sample data)

Monday, January 19, 2015

Problems & notes while making "rhythm" patcher

I've managed to set up six LED buttons, each of which controls a sine wave that's being modulated by another wave to vibrato a few times a second.  The pitches form a not-quite-harmonic series, which makes it sound nice and crunchy.  I'm hoping that, when I put them all together, the result will be rhythmically interesting, and the user would be able to turn each of the six tones on or off to change the sound.

Unfortunately, now that I've got the buttons to do what I want them to do and added a gain control for each of the six sounds, I can't hear anything coming out of my speakers.  The dac~ is on, and the level meters in Max itself are registering mostly what I'd expect.  I just can't hear the signals aloud, except occasionally in unexpected ways.  Replacing the dac~ with ezdac~ makes no difference.

*I've now realized the cycle~ object doesn't understand "start/stop" or "0/1" messages.  I have to figure out how to turn each of the six sounds' audio on and off independently.  Either I'll have to put in six different dac~ objects (doubt it will work), or I'll have to find a different way to start and stop each sound going.

*Lizzy helped me figure this out.  First of all, I had to add a line~ object to each sound so that I didn't get nasty clicks when starting and stopping the sounds.  You have to send it floating-point messages (0. and 1.) to get it to ramp up (1.) and ramp down (0.).  Then I added a route object right below the LED buttons, with the choices of 0 and 1, which then bangs the line~ object with either a 0. or a 1. message to ramp the signal down or up to full volume, and then the master volume of each sound is controlled by the live.gain~ faders.  Very nice of Lizzy to teach me that.

Thursday, January 15, 2015

Class notes 1/15: modulating signals, time & computers

For controlling volume, use numbers between 0 and 128 (max volume).

Shift key allows you to make multiple connections at once.

If sound is too soft, how do we figure out the multiplier to get it to the correct volume?  In Audacity, there's a normalization function that fixes things.
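
The multiplier itself is simple; a sketch assuming samples in the -1 to 1 range:

    def normalize_gain(samples, target_peak=1.0):
        # The gain that brings the loudest sample up to the target peak.
        peak = max(abs(s) for s in samples)
        return target_peak / peak

    print(normalize_gain([0.1, -0.25, 0.2]))   # 4.0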

loadmess  When the patcher loads, it sends out this (short) message.

live.gain~   Slider that scales volume for moderate loudness, shows dB of output.

Modulating one signal by another:
This is why you'd scale a signal to a smaller amplitude and then use an add function to push it up towards +1.  It can be used to modulate the amplitude of another signal, say a cycle~ 440, to create tremolo (and eventually other effects).  Send the modulating signal into a multiplier and then to the signal you want to modulate.  If you send it directly into the frequency you want to modulate (the cycle~), it will create vibrato (pitch variation) instead.
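
A Python sketch of the two cases (the rates and depths are arbitrary; note that a frequency modulator has to be integrated into phase, which cycle~ handles for you):

    import math

    def tremolo(t, carrier=440.0, rate=5.0, depth=0.5):
        # Modulator scaled down and pushed up toward +1, then
        # multiplied into the carrier's amplitude.
        mod = (1.0 - depth) + depth * (0.5 + 0.5 * math.sin(2 * math.pi * rate * t))
        return mod * math.sin(2 * math.pi * carrier * t)

    def vibrato(t, carrier=440.0, rate=5.0, depth=12.0):
        # Modulator added to the carrier's frequency; integrating that
        # frequency gives the phase, where the modulator becomes a cosine.
        phase = 2 * math.pi * carrier * t - (depth / rate) * math.cos(2 * math.pi * rate * t)
        return math.sin(phase)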

Modulating signal can be amplified by a lot (times 12, for example), to change "depth" of modulating oscillator.  (Think of that as the amount of the modulator's effect.)

If you make the modulating signal really wide and bring its frequency up into the audio range, you can get a whole host of strange harmonics.


Time:
CPU clock, crystal vibrating at known frequency installed in each computer--each vibration allows circuit(s) to be closed and opened.
CPU speed is like 2.4 GHz now, it's really fast.
Computer tries to do tasks you give it as fast as possible.  What if you want a computer to do something at a slower, pre-determined rate?

metro  Outputs a bang message at regular intervals.  (Time expressed in milliseconds.)  The right inlet can be connected to a number box that will change the interval.  If you want to set a tempo in beats per minute, use !/ 60000 to divide the bpm into 60,000, and it will calculate how many ms between beats for you.
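
The division itself, as a Python one-liner:

    bpm = 90
    print(60000 / bpm)   # ~666.67 ms between bangs -- what !/ 60000 computes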

counter  Counts the number of times something has happened--connect the left outlet to select 7 (or whatever number) and connect select 7's output to a zero message that stops the metro.  *It starts counting at zero, so select 7 will stop it after 8 bangs.

"ABCDE" Patch Critique

-ezdac~ doesn't understand "bang," just "start" or "1," so you need to bang a message box that sends that message to ezdac~

-open in presentation mode--from Patcher Inspector

-think about where and how graphics should look when it opens, position, window size (Max will save window size, too)

-add a comment to presentation mode to specify that those are pitches, not the keys to press on the keyboard

Tuesday, January 13, 2015

Class notes 1/13: cycle, scope, plot, meter, line

cycle~ is a single sine wave oscillator, can choose the frequency.  Sends out a full-amplitude wave.

scope~ draws the waveform for you, a visual representation of the signal

plot~ will plot audio information, but you have to define the axes and scale in an input message

meter~ gives you a green-to-red sound meter, if you put a number box at output, gives you numerical level

*~ multiplies--helpful for adjusting volume, since the oscillator's automatic output is at full amplitude (1).  Shouldn't adjust this instantaneously, as it will create a click.  Need to adjust the volume gradually.  Do this with -linear interpolation-.  Instead of a number box with a multiplier in it, replace it with:

line~  its whole job is to do linear interpolation, and it sends out an audio signal; in the R inlet it wants the amount of time to get to the new value (in milliseconds), in the L inlet it wants the level it should go to, or you can input both as a list into the L inlet
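
What line~'s output looks like, sketched in Python (the sample rate is an assumption):

    def ramp(start, target, ms, sr=44100):
        # One value per sample, linearly interpolated over ms milliseconds.
        n = max(1, int(sr * ms / 1000.0))
        step = (target - start) / n
        return [start + step * i for i in range(n + 1)]

    # ramp(0.0, 1.0, 10) is a click-free 10 ms fade-in.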

None of the meters will work unless the DAC is turned on.  Think of the DAC being on as Max being able to process audio information.

Apparently, I can't hear much above 18,000 Hz.

Monday, January 12, 2015

First Patcher Problems

I managed to create a patcher that plays five thumb piano samples (A, B, C, D, and E) when five keys on the keyboard are depressed (A, S, D, F, and spacebar).  That took some trial and error.  The big difficulty, though, is inserting image files appropriately to represent that visually.  I've tried a couple of things, and fpic seems like it will be the best way to get pictures in there.  Right now, I can't get the "auto fit" attribute to change, which is frustrating, because that's what's standing between me and a completed patch.

I noticed that Max's path for searching for audio files reverted back to the original after I closed my session last time.  I thought that, once I set a search pathway, Max would retain that, but I guess I'll have to re-input it each time?  I hope there's a way around that.

*Revision 1/15  Max somehow started retaining its search pathway.  Weird bug.

Wednesday, January 7, 2015

Digital Audio article by CD

-It would be cool to have a listening session with audio examples recorded at various sample and bit rates, as well as examples recorded with some analog technologies, to hear the differences.  It would also be good to hear clipping and aliasing, so that I can listen for these things in my own work.  Access to different speakers?  Different sets of headphones?

-I'd like to get more in-depth on the mathematics of synthesis.  Are we going to do FFTs in this class?  Analysis of acoustic sounds?  Designing synths from that?

-A-weighting sounds for human hearing