Steve Duda AMA roundup

---

Hey folks, it’s fiyarburst! I’ve been organizing these and stripping out non-production-relevant questions as I see fit, but if you have any suggestions for improvement PM me!

---

http://www.reddit.com/r/edmproduction/comments/yry3t/i_am_steve_duda_ama_about_edm_production/

“I hope to see a software which is somewhere between Quake and Ableton Live, which empowers more people to connect and collaborate on creating music.”

Table of Contents (yeah, he answered nearly a hundred questions.)

Learning

Workflow

Songwriting (and Doppelgangers)

Sound design

VSTs

More advanced synth stuff?

Recording

Drum Processing

FX

Headroom/Mixing/Mastering

Compression-Bussing

Producing equipment

Live performance/controller setups

Learning

Well, might be a dumb question, but how did you get started?

violin/cello/piano; took "MIDI lessons", believe it or not, at 15; worked at a music store selling synthesizers through high school; and took Jazz piano lessons (I didn't really want to be a professional Jazz piano player, but I wanted to know how to improvise well).

In terms of sequencing software, I had Opcode Vision 1.1 when I was 15 on a black+white mac, learned everything it could do.. got Logic before it could even do digital audio.. Picked up ProTools at v3.0, Fruityloops at version 2.0, Ableton Live at 1.5. The only manual I've ever read was for ProTools (and that's just because I was working tech support on it and figured I should probably make sure I'm not overlooking any features). I made tutorials in Flash that shipped with ProTools4, but there wasn't even a WWW when I was starting out.. (I almost registered com.com back in the day, because I thought com@com.com would be an amusing email address. I checked and it was available. I checked a month later and it wasn't). :(

What college did you study music at?

I went to UCSC and majored in music composition (David Cope was my main prof.) with a focus in electronic music (Gordon Mumma and Peter Elsea).

As far as your education goes, what would you place more importance on: college, or your first music job? Or is it really just trivial at this point, because they were both so important?

I think my first music job had much more influence on me.

Working in a music instrument store, learning every synth and effect module, being surrounded by musicians (customers and employees, who were all older than me), and spending my paychecks entirely on gear. I amassed a small studio and brought it to college. I completed most of my electronic music assignments in my dorm (short of the tape-loop ones), instead of bothering to go to the facilities where you had to book time.

The main thing college gave me was a strong theory background; before that I always did everything by ear. It took about 2 years of the 4-year program before my ear and the theory began to overlap and grow together. That was a pretty exciting time, because theory had been pretty boring to me prior, as it wasn't the thing that I considered music.

I can't say I'd do things differently starting over as I've had a great ride, but I will say plenty of people do just fine without much theory knowledge, so it clearly isn't that essential.

What are one or a few things that really helped you make the push from a "pretty good" producer to an excellent, label-worthy producer?

Some guys have the natural knack for it, but really it is about allowing your brain to mature and develop, learning what it is that makes other people great - and it is what they don't do as much as it is what they do. I'm all about restraint, personally..

For someone who is into making electronic music, but not really into EDM necessarily, where would you suggest I start? I joined this sub, but I would like a list of influences, if you wanna give 'em.

I'm influenced from all over the board. My early influences (in order of influencing me) were Art of Noise, Depeche Mode, YES (explains my hair), Pink Floyd, Squarepusher, Skinny Puppy, Nine Inch Nails, Aphex Twin, Prince, Orbital, Boards of Canada, Prodigy .. and countless others that I can't quite cite as influences..

What are some of the biggest mistakes intermediate producers make?

imitating their influences too closely (sometimes to the point of plagiarism), and spending too much time on mix-type stuff instead of getting good sounds and notes/parts which work well together (both of which make the mix 100 times easier)

How much has programming helped your music?

I don't think it helps much. Understanding DSP is interesting. There are many technical aspects to sound and sound engineering (frequency/phase/harmonic series/psychoacoustics/artifacts, to name a few), and the more I learn about these, the more benefit they have had. IT and CS, I don't think that will help at all with music. Music is an art form; I would think that studying dance, painting, or theater would help more with music than technical fields would. I'm not trying to discourage you at all, as programming can be a pretty good career.

[deleted question]

I come from a time before video tutorials. I think the resources of the web are amazing, you can search any topic and learn more than you would ever possibly want to know, instantly. However I think it is better to be a little bit ignorant and not get too sucked in to the trap of feeling like you are lacking knowledge needed to make music. More than anything you are lacking the experience, which you can give to yourself, with time.

Just like some people can pick up a guitar and mess around with it for a few years, learn how to cover some songs, start writing their own, etc. you can do the exact same thing with electronic production. It is the better road to take, if you are the type of person who isn't afraid to mess around and simultaneously try to get a result out.

Ultimately music is an art form, approaching it with too much of a technical mindset makes for uninspired music. It is much better to create your own solutions as needed, and just focus on getting the music to sound like you want it to. Easier said than done maybe, but creating your own solutions to any situation is perfectly acceptable.

How did you develop your ear?

I'm probably just talking semantics, but by "ear" I mean "musical ear", which is of course the brain. My hearing has declined, though I try to take good care of my ears.. nonetheless I can "hear" things better than ever, as my brain has developed.
I rely on metering seldom, if at all. It's nice to check your waveform (especially the final mix) for DC offset (the waveform sitting noticeably more on the positive or negative side of center) just to make sure you aren't accidentally compromising your headroom. I've noticed this in other people's mixes and I point it out to them when that is the case.
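
If you'd rather check for DC offset in code than by eye, here is a minimal sketch (assuming the Python soundfile and numpy packages; not anything Steve describes using): the per-channel mean of the samples should sit very close to zero.

```python
# Minimal DC-offset check for a rendered mix (assumes the soundfile package).
import numpy as np
import soundfile as sf

def dc_offset(path):
    data, sr = sf.read(path)          # data: (frames,) or (frames, channels)
    data = np.atleast_2d(data.T)      # shape (channels, frames)
    return data.mean(axis=1)          # per-channel mean; ~0.0 means no DC offset

# Example: anything beyond roughly +/-0.01 full scale is worth investigating.
# print(dc_offset("final_mix.wav"))
```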

I developed my ear from just years of experimenting, I guess. There are a number of facets to it, of course (musical ear, and engineering ears). I developed perfect pitch by having to tune walls of acoustic guitars when I worked at a music store; E A D G B E just burned into my memory, and I can recall any of those pitches at will. I learned to play music by ear at a very young age, which made learning intervals easy - I practiced interval training, which helps me transcribe parts very quickly. You can practice yourself online with pages such as this. I would recommend getting yourself to at least 95% accuracy (not to gloat, but I'll only miss one in a thousand, because it is just second nature now).

Engineering ears take more time, but for your own music you don't need to be a master of all things technical - you really just have to trust your ears and be sensitive to when things are not right. My blanket statement would be: select the right sounds and don't worry about EQ.

How important do you believe learning Music Theory and/or playing an instrument is to producing really good music?

Ears tend to beat training, whether in realms such as theory, or even in sound engineering. After all it's music, so you need to be in a creative and musical head-space. Theory is mostly good for analysis, not for creativity. However it is handy for knowing your options for a chord change, a harmony, etc.

Workflow

Often I'll start with the drums and work up, but I really prefer to start with chord/melody and keep the drums out of the equation for a long time, to get something interesting going on. It's pretty easy to imagine a 4/4 beat so I'd rather save it for dessert!

When I start a project, of course it depends, but I'm usually just trying to get something down that excites me, no more, no less. The ones that are still interesting to me when I re-open them a few days later, I try to finish...

What attention to mix or composition do you find is most lacking in amateur (or pro) tunes?

Most lacking in amateur tunes is more on the composition side. I can listen "through" a fairly-bad mix and hear the underlying music, but generally there isn't anything there to hear. So my biggest gripe is the one-note-droning riffs with nothing at all original/unique/memorable to offer.

Music to me is very exposing, to a trained listener, someone's strengths and weaknesses show through. Listening to music I made 15 years ago is a painful experience. At the time I thought it was great. People send me demos which are terrible ("IVE BEEN PRODUCING 6 MONTHS NOW SIGN ME"), but I'm sure they think they are great, though they too will probably be embarrassed by them 10 years from now, if they stick with making music and improve (which I hope everyone does, because just like guitar or some instrument, it just takes a lot of time to get good, more than it takes reading on forums or even knowing what tools your favorite producer uses).

Think of an instrument like Guitar.. reading magazines won't make you play guitar better. Producing is no different - the most talented guys, for the most part, just hole away and create their own solutions, wrangling with whatever tools they have until they get a result that they like.

Do you have any tips for staying motivated and completing projects when you DON'T have a lot of spare time?

I would recommend to impose limits. Assuming you have the luxury of occasional odd hours, I'd say "I'm not going to sleep until this song is finished" or even better "I'm going to make/finish this song in the next 4 hours". The power of deadlines can be amazing. Of course it might not be a masterpiece, but at least you can call it done and move on to greener pastures.

Do you know of any great online resources for intermediate producers that might help them get to that pro level (free or not)?

Disconnect from the internet. Since you mention time is limited, I would suggest just trying to create your own solutions. There is no one way to do anything; there is no 'right' and 'wrong'. Focus on trying to find/steal/make interesting sounds/notes/melodies and not on what 2 kHz sounds like.

What advice would you give someone just starting out on electronic music production?

electronic production is an interesting blend of so many facets - software operation / music composition / arrangement / drum programming / engineering / sound design etc.

I think it is best to focus on music in a traditional sense - melody / harmony / chord progression. This is what matters most, and is often undervalued. It is easy and a common mistake to dive in and try to get mastery over all these various tools, and overlook that it is music that you're trying to make.

Second to that is sound selection, getting the right sounds to fit the notes/parts/mood. If you have good notes, parts, and sounds, then the music will "mix itself" to some degree.

Songwriting (and Doppelgangers)

What is the one single piece of production-related advice you were given that you feel has made the most impact on you as a producer?

My composition teacher in college said something which I'll never forget. There are 3 rules to composition. 1) steal, steal, steal!! 2) less is more. 3) simple is best.

Are there any particular scales you find yourself drawn to again and again or are you experimental with them?

Dorian all the way.. it has the minor 3rd and flat 7 so you can use it ambiguously as a minor key, and then gently bring in the major 6th. I used it in this manner in Generation.

Hey Steve, big fan. I was wondering if you have any tips for writing catchy and effective basslines (particularly for House music)? Also, I saw someone on the train who looked just like you.

Haven't been on a train recently - if you see my doppelganger again, let me know, since I must kill him!

I think the trick to catchy basslines is placing accents (upbeat accents, whether it's the upbeat 8th or one or more upbeat 16ths). It's also good to put the 'root' note changes off the bar lines (before or after the 1) so the bass 'chord changes' are not moving too block-y. I would recommend transcribing the Daft Punk - Voyager bassline (which will likely take you a few minutes even if you are good at transcribing) and analyzing it.. it is one of the funkier/catchier ones I know!

What do you do to make your songs longer without just repeating? That's the hardest part of production, in my opinion.

The easiest thing to do there: analyze some other songs, break them into 8-bar segments and list what enters/exits every 8 bars, new parts etc. getting introduced. It should hopefully inspire you, and maybe give you some structure templates..!

how do you finish a track?

open them up, any one which you like or excites you.. stop right there and make yourself finish it! Tear it off the page and call it done. Yeah, it could always be different. Yeah, it could probably be better. But until you do that, you'll be stuck in that pattern.

Once you have finished one (and maybe another), then the next time you start something new, you'll be able to visualize it getting finished! They'll just get better each time, and you'll no longer be so stuck.

Are pieces like [some prototyperaptor song] what you think are too complex or is there something that is being done which makes this sort of complexity work?

if music is too unpredictable, it becomes noise: the brain can't identify patterns and loses interest / starts to tune out, like the chaos of sounds at a busy cafe..

On the other side, if the music is too predictable, the brain is never surprised, thus not engaged, and starts to tune it out, like you would tune out the sound of a motorboat.

So, things need to be in the middle. Really complex stuff (complextro) tends to have very unexpected sounds happen, but where those sounds happen tends to be predictable. Then your subconscious says, "aha, I knew something was about to happen right here, but I didn't know it would sound like that, wheee this is fun! Hey, drink another coffee!".

Any advice on making a drum sequence?

Keep it simple; it is more about sound selection and the mix than the actual programming. Of course it is hard to make blanket statements, but in general the drums are the time-keeper, the sounds help set the mood, and the programming itself shouldn't interfere with/distract from the note-based material.

How do I make the jump from just messing around on FL Studio to actually making full blown tracks?

build up a pattern to include a lot of parts, break these parts to separate patterns and then paste them all for 4 minutes on the arrange window, then start erasing things.. analyze some other songs for inspiration for how to make things change over time. Often you'll want some significant "B section" or contrasting material so the piece of music isn't just a "stuff entering and dropping out" arrangement all the way through.

Did you have a memorable "Eureka" moment as a producer that made your music more professional sounding?

mostly when I realized I should go through and remove unnecessary notes that are just babble (e.g. thoughtless 16th hi-hats), and try to keep what is in the music there for a reason. If it doesn't work as a foreground part, and it doesn't work as a background part, maybe just get rid of it!

What're some tricks to writing good and simple melodies?

Melodies are all about the harmony (chord progression). A Russian piano teacher I had said "the harmony does all the work, and the melody gets all the credit". I think this is pretty true, if you think of melodies, for the most part what makes them interesting is the "other stuff" that you aren't humming/singing. You can take a very simple melody (such as 2 or 3 notes) and do interesting things harmonically against it. But to try to answer the question better, phrasing is everything. Think of blowing a saxophone or singing the melody, it needs breaths (rests) and you need certain notes to feel emphasized one way or another (with rhythm/duration, for instance).

Sound design

How can I get professional sounding synths and such?

A lot of it is just finding the right sound to fit the part/notes/mood. A lot of people use presets, and depending on the genre, not even much processing. So "sounding professional" in terms of synth sounds is much easier than "sounding interesting" or "sounding unique". Your best bet is probably demoing a bunch of synths and/or preset surfing when you have a part idea, and write down candidates as you go through, then go back to the top candidates and settle on one.

Do you have any advice on improving sound design skills, being a sound engineer and everything?

Being unique is a very tough thing. There are a few different directions I can take this answer, so I'll try just to lightly touch on a few of them:

1) look at presets more, flip through some until something interests your ear. Then figure out what gives it that sound e.g. "ooh sounds like pleasant bells because of the square wave +2 octaves"

2) try to create other sounds. It is incredibly useful to listen to recordings and try to recreate the sound.. I did this in a Prince cover band, figuring out all of the keyboard parts to perform, as well as re-create all of the patches. I also have done this "covering" songs just for fun. Sometimes it can be frustrating, and it is a very different thought process than the original song creator took, but it is still educational.

3) As for unique - the trickiest thing of all. Try to transcribe whatever sound you can imagine in your head. Try recording and processing audio. I'm pretty in awe of Ben Burtt and his sound design on Star Wars. It was the early 1970's and the technology was much more primitive, but it really shows me that inventing your own solutions trumps working inside the lines.

Favorite Prince synth patch?

At the time, I thought the 'bass' sound on Kiss was cool, but later I found out it wasn't a synth sound at all, but rather an AMS delay getting triggered off the hi-hat! I believe Kiss was the first top-40 #1 song to technically have no bass line! I think he repeated that approach on 'When Doves Cry'.

What piano did you use in your remix to Quadcore?

I used Kontakt factory sounds for that piano, can't remember which one it was, one of the pianos that comes in Komplete.

Where’s My Keys- how did you guys get those earth shaking kicks going while having the bass still ride with it?

just look at the waveform; you'll see that the bass is relatively quiet and buzzy, and the kick is carrying the low-end. The other way around can be very effective as well and is popular in minimal styles (small kick, and subby bass). The general rule is either a subby bass or a subby kick, not both. If you really want both (I've only done that a couple times) you probably will want to sidechain the bass off the kick.
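
For reference, sidechaining the bass off the kick boils down to using the kick's level envelope to duck the bass. A rough numpy sketch of the idea follows; the parameter values are purely illustrative, and this isn't Steve's actual chain.

```python
# Minimal sidechain ducking sketch: an envelope follower on the kick
# drives gain reduction on the bass (numpy arrays, same sample rate/length).
import numpy as np

def sidechain_duck(bass, kick, sr, depth_db=-9.0, attack_ms=2.0, release_ms=120.0):
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(kick))
    level = 0.0
    for i, x in enumerate(np.abs(kick)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Map the kick envelope (0..1 after normalizing) to a gain between full depth and unity.
    gain = 10 ** ((depth_db * np.clip(env / (env.max() + 1e-12), 0, 1)) / 20.0)
    return bass * gain
```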

VSTs

Do you have anything while producing that you get absolutely giddy over? Any plugin that you love using or technique that really makes you excited beyond the "regular" fun of writing music?

I do love "eating my own dog food", I use Nerve, LFOTool, DimensionExpander, 8-Bit Shaper, and some older plugins I've made (Lucifer, vMinion) on tracks often. It makes me pleased to get creative with things that I made and thus of course know well (especially when I don't hit a single bug, snag, or wish!) as opposed to simply testing them.

Other than that, it's pretty much the regular fun of writing music.. not too much in the way of 'tricks' and no real 'secret weapons'. Just using ears / making countless decisions..

Are there any little-known MUST HAVE VSTs that you use?

I don't label any software as MUST HAVE. It's a common tendency amongst people to focus on gear/tools: when I was young and working at a music instrument store it was "what gauge guitar strings does Eric Johnson use" and "I hear he likes Energizer brand batteries in his guitar pedals". Plug-ins are pretty much the same. There are some cool obscure ones out there (and some fairly amazing free ones on PC, which is where I do my production), but no secret weapons / nothing I use all the time.

As far as synthesis goes... Is there one in particular that really has done a great job at expanding the horizon as far as sound design goes?

Additive "spectral" synthesis would be that one such as found in Camel Audio Alchemy, Image-Line Harmor, or the Roland V-Synth. The ability to mess with the frequency and time domains using complex source audio files can yield near-infinite results.

More advanced synth stuff?

How can I start getting into modular synths?

You could start with something like a Doepfer Dark Energy I or II. The Arturia Brute looks pretty cool too; I've heard good things about it. Either of those will give you analog sound as well as some sort of computer interfacing (MIDI->CV). The other option is to dive in to an A-100 rack, but that doesn't really scream affordable to me. I'm not really an analog snob, so you might be better off asking other people's opinions.

Any tips on how to get started writing my own VSTs? Are there any 3rd party libraries that make it easy?

I jumped into VSTSDK and VSTGUI, just compiling examples and hacking from there. I knew absolutely nothing about C++ and almost nothing about DSP. It was the long road. I'd recommend looking at JUCE; if I were starting over I would probably go the JUCE route (although it isn't free for commercial use, it is still a bargain in that respect). Personally I'm content with just VSTSDK and VSTGUI, however - it is fairly lightweight, I can do things extremely fast, and probably most important for me it is familiar and comfortable.

I'm curious how helpful your programming skills have been to you, both in allowing you to express your musical production better (with interfaces/VSTs you designed), and in allowing you to be financially stable as a musician.

I've spent about the last 15 months on my first synthesizer VSTi/AU, and although I'm very pleased with the near-finished results, in some ways I regret it - I could have made an album or two in that time. I could have easily financed someone else to make it "for me". Nonetheless, I've learned a ton in the process and feel like twice the programmer I once was. Hopefully it will pay me back for my year..

Do you recommend any specific books or sites for learning to program vsts? I know some c++ and would love to give it a shot.

regarding making VSTs, I would start by reading this.

Even more specifically, do you know much about the circuit modeling that Cytomic and Fxpansion have been doing? Is it possible for someone to do that themselves?

Component modelling is not the place to start - that's great you know some C++, but it's very heavy stuff. I know a bit about it conceptually (take schematics into a program such as SPICE, and then cut corners wherever you can get away with it, to try and make it run realtime). You'll probably want an EE degree for starters.

Recording

Mid/Side processing: How do you make use of it in your productions? Do you do anything more than the basic Mid/Side EQ? Any special applications for this technique you might have?

I find M/S processing useful for processing of stereo-correlated signals, like a pair of drum mics/overheads. I've done it a bit in my rock days.. In my productions, I don't tend to have a lot of anything that is true stereo (just delays or synths that are stereo from FX mostly), hence not much use for M/S anything. It might be useful on the master bus, I've watched others do that- but personally I've never had a need (I don't do anything 'corrective' in a mastering stage, as I don't usually do a mastering stage).

Drum Processing


Can you give a basic overview of how you process your drums?

Processing drums is done in various places:

1- creating new samples, in a variety of ways - usually sample layering, using frequency-splitting (HP one, LP the other) or time-splitting (transient/tail or transient/body); see the sketch after this list

2- bussing related sounds together (e.g. hats and cymbals, snares and claps etc) and treating (processing) them as a group to give a bit of 'unity'

3- bussing busses together (usually sans-kick) mostly for compression, occasionally EQ, and level control over all drums.
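
Here is the frequency-splitting sketch referenced in point 1: low-pass one sample, high-pass the other, and sum them into a new hit. It assumes scipy/numpy and two sample arrays at the same rate; the crossover value is just illustrative.

```python
# Frequency-split layering sketch: keep the lows of one sample and the
# highs of another, then sum them into a new hit (assumes scipy + numpy).
import numpy as np
from scipy.signal import butter, sosfilt

def freq_split_layer(low_source, high_source, sr, crossover_hz=200.0):
    n = min(len(low_source), len(high_source))
    lp = butter(4, crossover_hz, btype="low", fs=sr, output="sos")
    hp = butter(4, crossover_hz, btype="high", fs=sr, output="sos")
    layered = sosfilt(lp, low_source[:n]) + sosfilt(hp, high_source[:n])
    return layered / (np.max(np.abs(layered)) + 1e-12)   # normalize the result
```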

How important do you feel tuning your samples is?

re: tuning samples, I just use my ears, but I don't try to get everything in the key of the song.

Coming from a rock world, you'd have a drummer who would almost-never tune his drums (on albums even) to be in the key of the song. I'm content with drums being "non-pitched" elements.

It's one of my pet peeves; I see and hear it all the time: people talking about tuning kick drums when they're calling up a 909 or some sound which is essentially a frequency sweep. The pitch of such a drum is completely subjective to the listener. In such a case, if you want to tune it to where it sounds more in-key to you, that's fine, but it is really an illusion. I go for where the drum sounds good to my ears, when the drum(s) are soloed. Typically I'll never pitch a sample more than +/- 2 semitones, and very often the sample sounds best with no pitch change applied. Of course there are exceptions to this, but that is a rule-of-thumb for me.

what are your thoughts on drum samples? Should we have good samples and do minimum processing to them, or can we get by with okay samples and a lot of good processing?

In my opinion it is great to start at the source of the history of drum samples - drum machines. Download sounds from every drum machine you can find (606/808/909/Linn/Sequential/R8/Ry30/etc etc etc). If you get to know these sounds, you'll start recognizing them all over the place. Many sample CD's use these as source (directly, or layered/processed, or indirectly ripped from other people's tracks).

What's a way to practice making good drums?

A good practice for making good drums would be to start by defining what you think 'good' is. Sample other people's tracks and look at the source audio. You can see a lot once you learn to look at waveforms: the amount of transient material in milliseconds, the pitch/pitch bend, the duration.

In Nerve I included a Duda Kicks folder. This is a folder of kick drums I made a while ago, which prior to Nerve I gave to a bunch of producers who have used them on releases (deadmau5, Avicii, and others). I made the whole folder by analyzing three kick drums I thought were great, and then created my own trying to stay within those parameters. Of course to make a whole folder I had to get creative on various approaches (including applying LPF to the source kicks I like, and create my own new transient) but most of them were generated out of a sine wave with pitch bend (in Adobe Audition) and additional processing/transient paste/draw/process/design/whatever.
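
The 'sine wave with pitch bend' approach can be sketched in a few lines of numpy. The values below are purely illustrative, and his actual kicks were drawn and processed in Audition rather than generated like this.

```python
# Sine-with-pitch-bend kick sketch: an exponential pitch sweep plus an
# amplitude envelope, roughly the approach described above (values illustrative).
import numpy as np

def sine_kick(sr=44100, dur=0.5, f_start=180.0, f_end=45.0, pitch_decay=0.07):
    t = np.arange(int(dur * sr)) / sr
    freq = f_end + (f_start - f_end) * np.exp(-t / pitch_decay)   # Hz over time
    phase = 2 * np.pi * np.cumsum(freq) / sr                      # integrate frequency
    amp = np.exp(-t / 0.18)                                       # body decay
    return np.sin(phase) * amp
```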

Triangle wave for kick drum?

Triangle is a little extreme sometimes (LPF triangle may be better). Nerve has a Triangle "resynthesis effect" in the precalc section which allows you to blend in triangle with the original sample 'pitch' automatically.

Why Adobe Audition for designing those kick drums over something like ProTools or any other DAW?

Adobe Audition has some nice tools such as destructive pitch-bend, auto-crossfading on copy/paste (without dealing with separate regions), scientific filters (for e.g. brickwall hipass or lowpass), the ability to draw on samples, etc.

I do like ProTools a lot, it is my choice for doing a ton of editing, however when dealing with 2-track files (e.g. trimming silence off of samples) I like Audition, as you don't need to do a separate export process.

Snares. I always fail to get really punchy and chunky snares (i.e. BSOD- Game Over). What tips do you have to get snares like that?

Keep them very short, use a decaying envelope. Experiment with layering two different sounds. It's more about the space in the music which gives it that sound.

can you go a little in-depth on the making of the xfer deadmau5 sample cd/nerve sample library? who made what and from where did most of the samples come from/originate? specifically, on the loops folders. how were most of these constructed?

most of the loops were constructed from the one-shots (thrown in drum racks in Ableton and programmed + processed). I made about 15-20 a day for three weeks. I tended to make a simple bass-line and kickdrum and then fill in the rest (make the 'loop'), then mute the kick+bass and render it out.

The one-shots were constructed from tons of sound sources, old DAT recordings of bands/rehearsals, BFD samples and other band recordings, drum machines, drum synths, analog synths, and some recordings I did just for the CD, some in Joel's old loft, and some in my living room here in Los Angeles (junk percussion, claps, shakers, etc).

so how do you get those loops to sound so groovy?

It has a lot to do with variation. I avoid using the same sample repeatedly (at least some pitch variation, or a different sample, or different layer, effects). There's also often some swing, and velocity adjustments to keep things less than perfect / non-mechanical. I spent a good 20-30 minutes on each loop, and I work quick so that equates to a fair amount of time on each.

There was nothing I always did; just 'messing around until it sounds interesting, then bouncing it down before it becomes bad again'.

Maybe the 'groovy' part is just the fact there's a kick drum and bassline that you aren't hearing, so I'm placing accents around something musical rather than just mindlessly throwing stuff on 16ths (though I did that on some of them too, nothing wrong with lucky accidents!)

When you recorded the samples for the Xfer sample pack with deadmau5, did you intend them to be for one shots, or layered with each other and heavily processed?

The one-shots were intentionally unprocessed, I thought that rather than EQ and color them I would leave them less adulterated. The advantage is that if you choose to EQ them, you won't be "un-eq'ing" something that we did. Also they are full-spectrum that way (not hipass/lowpassed). How you use them is really up to you, I made most of the loops from the one-shots and in that case I did generally heavy layering, EQ, Reverb, panning or pseudo-stereo processing, or other extreme processing (waveshaper/resynthesis/etc). Most of them are unrecognizable at that point.

In a song context it will vary, but I would consider them to be 'raw' and you'll likely want to layer if you want a thicker sound (e.g. Snare+Clap left+Clap right). Also don't be afraid of very short reverbs (maybe with a gate) as for the most part they are intentionally 'dry' sounding.

What's the story behind DL_dudafunk.wav?

that was before BFD, 2000 or 2001.. but the same drumkit recording (one-shots). At that point I had bounced them out to stereo.. made a little simple funky beat with some ghost snares and didn't put much thought into the wav name. I think Gol compressed it to 4 bits using ADPCM compression. I remember being impressed it retained most of the sound.

When I load samples into Logic from your sample pack, Logic tells me I do not have permission to access the samples and will not play them back. They work fine in Ableton, but some just won't work in Logic. Is there any fix to this? Before you ask, I did pay for them and have the CD right in front of me.

Logic doesn't load 32-bit audio files :-( Someone needs to remove "Pro" from Logic's name. I made a "patch" for this by batch-converting the offending samples / I email it to people who ask. Contact me through the Xfer site and I will send you a link to the patches.
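
For anyone facing the same issue with their own 32-bit float WAVs, a batch conversion to 24-bit PCM is straightforward. Below is a sketch assuming the Python soundfile package; it is not the actual patch he distributes.

```python
# Batch-convert 32-bit float WAVs to 24-bit PCM so hosts that reject float
# files can load them (a sketch of the idea, not Duda's actual patch).
import pathlib
import soundfile as sf

def convert_folder(folder):
    for path in pathlib.Path(folder).rglob("*.wav"):
        if sf.info(path).subtype == "FLOAT":          # only touch 32-bit float files
            data, sr = sf.read(path)
            sf.write(path, data, sr, subtype="PCM_24")   # overwrites in place

# convert_folder("Xfer_Samples")
```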

FX

What are some ways to give your productions more "width"? Although I'm currently at a point where my tracks sound quite solid, I can never get a "wide" sound, even with panning, adding a chorus to one channel of a pad, for instance, etc. Also, what's the best way to get your tracks published, should I send my tracks to various labels until one of them likes what they hear, or is there a better way?

You can expand the stereo field on the master with something like Waves S1, or even the Ableton Utility. Of course this is effectively reducing anything in the center channel (anything mono) and you don't want to go too extreme (maybe 120% to 140% in an extreme case).

It is indeed best to treat individual parts at the source, like you are doing, to get the varying widths that you want. Reverbs (both algorithmic and convolution), delays (cross or ping-pong delays), chorus (as you mention), short zero-feedback delays (e.g. 60 ms L, 90 ms R), autopan (used subtly). There are some pseudo-stereo plugins that are handy in some cases too..
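
Two of the simpler width tools mentioned above, sketched in numpy for illustration only: mid/side width scaling on a stereo signal, and short unequal L/R delays on a mono source for a pseudo-stereo effect.

```python
# Two simple width tools, as numpy sketches:
# 1) mid/side width scaling on a stereo signal,
# 2) short unequal L/R delays on a mono source ("pseudo-stereo").
import numpy as np

def ms_width(left, right, width=1.2):          # width 1.0 = unchanged
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side

def short_delay_stereo(mono, sr, delay_l_ms=60.0, delay_r_ms=90.0):
    def delayed(sig, ms):
        d = int(sr * ms / 1000.0)
        return np.concatenate([np.zeros(d), sig])
    left, right = delayed(mono, delay_l_ms), delayed(mono, delay_r_ms)
    n = max(len(left), len(right))
    return np.pad(left, (0, n - len(left))), np.pad(right, (0, n - len(right)))
```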

Generally I wouldn't worry too much about width. There are much more important things, like the right sounds/parts/notes/level balances.

[another question about stereo separation]

Stereo separation is just having a different signal in the left and right channel. The less they have in common, the more separation you have. Too much separation makes things sound like they are behind you, or inside your head. Real sounds don't have incredible separation (mono source, bouncing off a couple walls/floor/shoulders and wrapping around your skull to the other ear). If you have too much separation you'll also have mono-compatibility problems (an L/R signal out of phase will become silence when summed to mono).

Personally I would worry about stereo separation very little. You're better off with a good mix which is mostly mono.

[Some more babbling about stereo on bass?]

What is all this hype around "stereo imaging"? I feel like it is wholly unimportant for music destined to be put through PAs. It's a pretty big deal in symphonic, jazz, and acoustic drum kits, perhaps..

It's also pretty bizarre to mention it in the same breath as sub bass.. low frequencies are by and large omnidirectional, and I wouldn't recommend having out-of-phase sub-bass ever..

When applying reverb, do you prefer applying it surgically to individual tracks, or doing the bulk of your reverb on send tracks? Is it a mix of both?

Reverb has a lot of uses of course so it's tough to make blanket statements, but in general I place reverb where it is felt-but-not-heard. In other words, turn it all the way down to silent, then slowly keep turning it up until you distinctly can hear it, then turn it back down a little :)

Then of course there are "effect verbs" where you very-much hear them (like the type which 'fade in' on a dry sound as the riff repeats), and of course those are typically louder.

I place my reverb on both sends and on individual parts, so mix of both, yes.

Other reverb tips:

I don't personally use the "soundstage" approach when mixing, the instruments don't really exist in a room together, and if you've been in a room with instruments playing together, it doesn't usually sound all that great. So, I'm going for hyper-real approaches.. I don't use reverb to sound like a real acoustic space usually.

I use the DJMfilter on most of my tracks and it sounds great and is so easy to play with but I had one question: What does the drive knob actually do? I try to play with it and can't figure out what it's changing.

The drive param is a boost (gain) in the filter feedback path. It adds a bit of soft distortion. I didn't make the drive range very extreme, I just noticed the DJM had a touch of drive and the old version (before I added the knob) had a fixed amount of drive (around 50% on the knob). I decided to make it variable so you could have a much-cleaner (relatively speaking) or slightly-dirtier filtering.

If you run a sine tone into it I'm pretty sure you'll hear it.

btw your 8-bit shaper is a fucking work of art, however it would be nice if there was a "reset to default settings" button on it so I don't have to delete it and reopen it every time I want to start shaping from scratch.

protip: you can alt-click on the graph to reset it, or command-click (control-click on windows) to reset knobs/sliders.

Any tips on how to use your 8-bit shaper more effectively?

maybe just the understanding that on the diagonal graph, X is input level and Y is output level. So, the lower-left represents the zero-crossing of audio, and the top-right represents the maximum. The default diagonal line is "bypassed" (output = input).

If you want to really mess up a sound, you could raise pixels on the left side very high, this means that where the source waveform is getting quiet, it is now very loud. This is known (to me) as zero-cross distortion and is a pretty good way to ruin any signal :)

I mostly use 8BS on bass sounds to 'dirty' them up- we used it on a lot of BSOD tracks, just a little bit wet/audible - just to make your speakers sound like they can't quite handle the signal :)
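
For context, a transfer-curve waveshaper of this kind can be sketched generically as follows. This is not the 8-Bit Shaper's actual code; it only illustrates the X-in/Y-out mapping described above, applied symmetrically around the zero crossing.

```python
# Generic transfer-curve waveshaper sketch (not the 8-Bit Shaper's code):
# X axis = input level, Y axis = output level, applied symmetrically so the
# left of the curve acts on material near the zero crossing.
import numpy as np

def shape(signal, curve_y):
    curve_x = np.linspace(0.0, 1.0, len(curve_y))       # input levels 0..1
    mag = np.interp(np.abs(signal), curve_x, curve_y)   # look up output level
    return np.sign(signal) * mag

# Identity ("bypassed") curve = the diagonal; raising the left-hand points
# boosts quiet material, i.e. the zero-cross distortion mentioned above.
identity = np.linspace(0.0, 1.0, 16)
```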

What are your thoughts on the built in ableton tools, especially eq/filter/compressor?

Ableton plugs are functional, not wild on the EQ.. recommend to put it in HQ mode at the least.

FXPansion?? DCAM synthsquad thoughts

I'd been using it for about 4 years before it was released :-) Andrew Simper is a good friend of mine; he's left FXpansion and started his own company, Cytomic. His next Cytomic plugin (The Drop) is jaw-droppingly good proof / more proof.

Why should I buy Nerve? What makes it stand out from other drum plugins like Reaktor?

It offers a lot of unique sound-sculpting, drag-and-drop export of Audio or MIDI parts directly to host, quick switching (or randomizing) of drum samples within a folder(s) on one or all pads.

In my mind it is a rapid+focused way to get beats made.. it comes with 2 GB of sounds but will also breathe new life into the samples you may already have.

It does a lot, I'd check out the videos on YouTube and download the demo version (there's a 1.01 demo beta in the general forum, if you're on OSX Lion or greater).

I'm always happy to answer questions on it, you can reach me via www.xferrecords.com

Headroom/Mixing/Mastering

I have issues with the loudness of a finished track. If I leave the headroom and dynamic range, the song keeps its true energy but is very quiet. I spend a lot of time on mixing the track and I make sure to have around -6 dB headroom. My mastering must be very terrible because I can't manage to get the loudness up and still keep all the instruments in the song as clear as I want them to be... Do you have any advice on how to solve this problem?

it is probably the low frequencies. Loudness is perceived more in the mid-range. It is a common mistake to have too much bass; I've done it myself. I would suggest lowering the LF content, or finding the sounds you want louder and applying gentle, wide midrange boosts. Use a spectrum analyzer on your track versus a similar one and look at the overall LF-HF contour of both (rather than very specific frequency bands).

It could also be the dynamic range, you could experiment with multiband compression/distortion on individual parts or busses which you want to sound more "in your face". You can also experiment with distortion, which the ear perceives as loudness (because loud sounds distort the eardrum, it is hard for the brain to differentiate).
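
One way to do the contour comparison offline is to sum an averaged spectrum into octave-wide bands for both your mix and a reference track. A rough sketch assuming scipy/numpy and mono arrays; a regular spectrum analyzer plugin does the same job visually.

```python
# Rough LF-to-HF contour comparison between your mix and a reference:
# averaged spectrum summed into octave-wide bands (assumes scipy + numpy, mono input).
import numpy as np
from scipy.signal import welch

def octave_contour(signal, sr):
    freqs, psd = welch(signal, fs=sr, nperseg=8192)
    edges = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(10 * np.log10(psd[mask].sum() + 1e-20))
    return np.array(bands)   # dB per octave band; compare mix vs. reference

# diff = octave_contour(my_mix, sr) - octave_contour(reference, sr)
```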

I was just wondering how you handle sub-bass frequencies under 50-40hz. Some people say keep them in, others say take them out altogether, what's your procedure?

50-40 Hz is still very, very important - when it is important. Mixing is a bit of a compromise: you want it to sound great in a club/arena but you also want it to sound great in headphones/earbuds/cars. The only playback medium I ignore is laptop speakers :) 40 Hz is very important where there are walls of subwoofers; you just don't need very much of it. I would recommend removing that range on anything that might have 'junk' down there, which would be basically any distorted non-bass sound (Nyquist reflections can put a lot of garbage down there), vocals and the like. I'd also recommend a good EQ, because bad ones might in fact be adding garbage (noise) down there..

How can I make my basslines really shine in the midrange?

reduce the amount of low-end on the bassline first. People think that "low sounds need more lows", whereas it is important to keep in mind that effectively boosting anything is like cutting everything else, and vice-versa. If there isn't midrange there, you can try waveshapers or distortion (if applicable at all to your style/sound) to bring out/create material in that frequency range. Otherwise, obviously, you could try another sound. Filtered sawtooth (or square) is probably the most common bass waveform, as without the filtering you have harmonics all the way up the spectrum.

What would be your top little-known mixing techniques?

1) try having parts loud when they enter for the first time, then slowly turn them down over the next few seconds. 2) waveshaper or other strange, short, static (non-modulating) effects on the bass, mixed just barely wet/audible. I do this a lot. 3) using EQ on the detection circuit (sidechain) of compressors. Some have this built in (usually as just a highpass, such as Cytomic The Glue). You can really narrow the compression in to target a certain frequency range, or be lenient on a certain frequency range (though you can do this with multiband compressors as well and get a similar result).
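
A minimal sketch of tip 3, the EQ'd detector: gain reduction is computed from a high-passed copy of the signal and then applied to the unfiltered signal. This is a crude peak-detecting compressor in numpy/scipy for illustration, not any particular plugin.

```python
# Sketch of EQ'ing a compressor's detector (tip 3): gain reduction is computed
# from a high-passed copy of the signal, then applied to the unfiltered signal.
import numpy as np
from scipy.signal import butter, sosfilt

def hp_detector_compress(signal, sr, hp_hz=150.0, thresh_db=-18.0,
                         ratio=4.0, release_ms=100.0):
    det = sosfilt(butter(2, hp_hz, btype="high", fs=sr, output="sos"), signal)
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    level, gain = 0.0, np.ones(len(signal))
    for i, x in enumerate(np.abs(det)):
        level = max(x, rel * level)                         # peak detector with release
        level_db = 20 * np.log10(level + 1e-12)
        over = max(0.0, level_db - thresh_db)
        gain[i] = 10 ** (-(over * (1 - 1 / ratio)) / 20.0)  # downward compression
    return signal * gain
```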

What are your thoughts on / uses of upward compression? Any good links to info on the topic? Same thing but for multi-band compression.

1) upward compression can give sounds more "punch", where the loudest part becomes even louder. I've used it to make block-sustain synths or samples have a bit more 'kick' or 'bite' to the front (almost spelled that byte).

Multiband compression can have a variety of uses; traditional engineers will often consider it a crutch. There are 3 main use categories in my mind - making something in-your-face loud, keeping a 'consistent sound' amongst varying parts (e.g. a bus which has different loops at different times), or frequency-dependent compression (e.g. only one band is really active).

once I start adding compression over the top, the actual sonic quality of the drums reduces significantly and they become these flat, lame "womp" kind of hits, as opposed to the magnificent Pendulum/Spor/Sub Focus-esque punchy, good-sounding, tight hits. How on earth are these artists achieving a mix where everything is so loud but still so clear?

It sounds to me like you're trying to shortcut the situation. Keep in mind the artists you mention aren't just slamming a plugin on the master. They're getting individual parts to sound right, probably VERY much like they sound to you when you are listening to the final product. So I think you need to go 'upstream' a bit and try to individually compress sounds and parts.

Putting a limiter on the master with extreme settings causes a 'mushy' sound because the limiter is reducing gain, and then slowly releasing when the threshold is being crossed. See this and notice the quiet area on the 'limiting' waveforms about 60% to the right.

You could try hard-clipping instead, I'm not personally a fan of this approach (it is traditionally a no-no and I'm very against it personally - makes my ears fatigue quickly, and hurts my ears to listen loud) but you can get a less mushy sound that way.

The Izotope Ozone maximizer is another option for aggressive 'limiting' however for me, I don't find this particularly pleasant either.

I would strongly suggest to refer to my first paragraph in this response, try to get individual sounds to have the impact you want, and do much less on the master bus.

Elaborate on “I don't do anything 'corrective' in a mastering stage, as I don't usually do a mastering stage”

I mix into a reasonably aggressive (perhaps tame by some standards) master chain and often tweak it a little on the way. I don't have a default chain, though I do have 'templates' (Live racks) stored from every song I've ever done, and often start with one arbitrarily.

I don't send tracks out to be mastered, I'd rather have the ability to hear it all the way to the finish-line myself.

I don't do a second 'mastering stage' either, I see no reason to not make the session contain the final product.

Mastering is over-hyped in my opinion, they can work magic considering they only have a 2-track mix, but I'd rather get it sounding right myself - Mixdown is where the real 'magic' happens.

what are your favourite techniques for mixing?

don't EQ or compress anything; compress the master very heavily and mix into that (hopefully backing it off a little later). Just focus on level balances and ask yourself things like "is that the best sound for that part? Is that the right part for that sound? Are there notes I can remove?"

How do you get drums to punch through the mix, and not be covered up and muffled, besides sc compression on subs

mix them loud (and keep them short so they don't eat the whole mix, they'll sound longer in a room environment). Master compression will help remove this loudness, but the master comp will compress every time the drums hit. The net result is very similar to ext. sidechain compression.

I would like to know what your whole approach to mixing is.

it's a pretty broad subject; I've covered some facets elsewhere here, but the vague concept is "Foreground / Middleground / Background" and getting things to be at the appropriate levels. I think of it like being a tour guide: I'm showing the listener what to pay attention to at what moments, in order to help guide them through the music and to keep it hopefully interesting.

Compression-Bussing

If you set up two sends, one for synth compression and one for percussion compression, do you then not use a compressor on the master bus? This is something mau5 talks about a fair amount and I've never understood if it entails having a master bus compressor. Care to elaborate on this method?

It might just be arguing semantics, but I never send to a compressor, rather I bus to it.. a send would create a wet/dry situation (which can be done in better ways if that is truly what you want, at least with Racks in Ableton Live).

Master compression is something I always use. Keep in mind that separate bus compressors can be doing different things based on settings and input signals, such as bringing up quiet tails, or just evening out levels, or adding a bit of attack.

Compressors often seem to get equated to "make things sound phatter" when in a certain sense they are doing the opposite (if you were to A/B with a matched RMS). In my mind it is more like the previous sentence though, they can do a few different things. I get a little bothered when people say "throw a compressor on that" in a mindless sense.

I also happen to like some dynamics :-) Expanders FTW!

Could you expand on the A/B with matched RMS compression?

compressors offer makeup gain (either as a separate control, or sometimes automatic). The change in gain (loudness) makes the ear perceive things differently in the frequency domain (Fletcher-Munson). Bypassing a compressor is not a very effective method to 'compare' or hear what the compressor is doing. You need an appropriate amount of gain compensation, so that you can hear what is really changing besides loudness. When you do this, you may often find the compressor is in fact making the sound 'worse', and it is simply the loudness (volume) you perceive as improving things.
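
The matched comparison itself is simple: measure the RMS of the dry and processed versions and gain-compensate the processed one, so only the tonal/dynamic change remains audible. A tiny numpy sketch of that idea:

```python
# Matched-RMS A/B sketch: gain-compensate the processed version so you compare
# the compressor's effect on the sound rather than its loudness change.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def match_rms(processed, dry):
    return processed * (rms(dry) / (rms(processed) + 1e-12))
```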

Could you tell us a bit more about how you process the whole thing? Do you have some compressor trick that you use on some types of busses? You said the kick was by itself, not in the same bus as the bass? Do you happen not to compress some instruments?

I have the kick going straight to master, everything else goes to a bus for one reason or another (even if it is just to be able to control levels on a group of sounds without having to adjust several sounds at once). So, the kick is the 'anchor' in terms of levels, and everything else is mixed relative to it.

I don't have a compressor trick; there are a whole bunch of tricks, I suppose, depending on your point of view. However I'll now try to downplay that, because I think there is too much emphasis put on compressors. Levels are what matters. Mixing is 90 to 95% about getting the correct level balances. I don't mind things being up-and-down a bit.. if there is one offending loud part on a sound, I'll automate it down. If there are many, then maybe I'll compress. If I want a sound to be boldface and italic, then I'll multiband compress it... but personally I don't want every sound to be boldface and italic. If I was making more aggressive music, then maybe I would multiband compress/expand all the things.

A lot of stuff is uncompressed, except everything gets compressed by the master bus. I won't throw compressors on things mindlessly, I have to want it for some reason before I'll add it (dynamics are not a bad thing!)

Producing equipment

What piece of producing equipment do you value the most? I know a lot of people say studio monitors are the most important in producing but I've also heard some people say midi controllers are a must.

a decent PC, good speakers, and a place where the outside world isn't bothering you and vice-versa.

that's all that matters.

Hardware vs. software?

Hardware and Software are tough to make blanket statements about. FL Studio runs on hardware.

For those who don't have access to hardware to make their own determinations, I'll say this - there is a big psychological factor at play. Take analog tape for example - the goal of tape recording was simply transparency, to 'capture' an audio signal (a performance/microphone/etc.) so it could be played back. Once people started recording digitally, which is much more transparent, many people started deciding they preferred the "analog sound" of tape. Now you see "tape emulation" plugins (I never use them personally, to each their own). Many younger engineers will rave about analog tape but they don't really have the experience; they just listen to Led Zeppelin or Pink Floyd (as examples) and mistakenly attribute the awesomeness to analog tape. Many veteran engineers who have used tape for years don't miss it at all, because many will say it sounds bad, never mind the unpleasant experience of rewinding/tape splicing/destructive recording/etc.

Analog circuits are often designed with cost-saving corners cut, sometimes these flaws become considered desirable. DSP ('plugins') are very often built with corners cut, to save CPU use.

Analog circuits can also exhibit complex behavior (feedback/properties of various components in the circuit) which aren't cheap (CPU-wise) to imitate in the digital realm...

however anything you can do in the analog realm can be done in the digital realm, provided you know what it is you are trying to model/accomplish, and you have enough DSP/horsepower to do it.

Anyway - shorter summary: Analog is mostly overrated, by people who suffer from visual bias. There are times where analog gear might prevail, but I wouldn't be concerned with that unless you are seeking a 'retro' sound, in which case that is probably the easy way. Mostly it is about what results you want, it is a different set of tools, which will steer you down different paths.

I've been considering purchasing a Virus TI and an Elektron MachineDrum for better sound quality and was wondering if those are good choices for an increase in sound quality.

I think the Virus TI will increase your sonic palette, it is probably a good option for progressive. MachineDrum has a quirky interface which might lead to unexpected results, which I suppose has a place. I've only spent a short amount of time on one, but decided pretty quick it wasn't for me. I personally like to keep everything in software, especially when it comes to digital gear (there is no reason the MachineDrum or Virus couldn't be plug-ins instead of hardware).

Live performance/controller setups

I've known you for the cool custom software and hardware setups you do. I've had an idea for a while for some music stuff, and I can program a bit, but I don't really know where to start (especially for the non-electronic hardware stuff). Any tips?

It really depends on what your net goal is, e.g. personal live setup, or sell a commercial product. Reaktor and Max/MSP/Max4Live come to mind for creative live options. Either of these would be huge time-savers, in the sense that C programming is very low-level.

For commercial products there are some intermediate solutions if you don't mind being restricted to Windows (SynthEdit / SynthMaker, which just changed names to something else recently).

what is the least expensive custom solution you can think of?

an iPad (or android tablet, I suppose) is incredibly versatile and incredibly customizable.

I'm not sure why custom is important - are you trying to give the audience a unique presentation? You might want to look into various sensors + Arduino, since you mention you can program a bit.

Do you still use your Monome with live setups? Could you talk a little about your setup?

I'm actually selling the Monome 256. I've had a good run with it, taken it around the world, but I never use it in the studio - not that it isn't creative.. mostly it has gotten a little heavy on my back going through airports. I realized I can do what I do on it in other ways, so I bought a Native Instruments F1 and recently coded a custom build of Nerve (www.xferrecords.com/products/nerve/) with the relevant features I use mapped onto it, with the added bonus of knobs and sliders for the FX.

My setup has gone from Lemur to TouchOSC software on the iPad (again, less weight/size, less wires, and if it breaks on a tour much easier to replace).

But conceptually speaking, nothing has changed much since this video, except new controllers.

I use TouchOSC on the iPad but I've never felt like I can get the precise controls of hardware buttons with a touch screen. Is that something you've been able to work around after enough time? Also, why TouchOSC and not the Lemur app?

I bought the Lemur app too, but it didn't exist when I made my port to the iPad, and I'm quite comfortable with my TouchOSC setup now, so many gigs later. While I use it mostly for 'button pushes', it isn't great for replacing faders/knobs by any means. However, there are two advantages over conventional knobs/sliders: 1) the visual feedback allows you to see the current position without some sort of 'latching/takeover', and 2) you can jump to a discrete position (by pressing anywhere on the fader throw) without having to slide to it like you do with a physical knob/slider. It is probably most annoying for filtering, but I have the NI F1 for that now, and I simply use the DJ mixer for a lot of that in any case.

I'm assuming you're using the OSCeditor to make your own templates. Would you ever consider releasing them so we could take a look at how you've managed to take advantage of TouchOSC?

There's nothing magic in what I'm doing with the TouchOSC editor. Just a bunch of controls (faders/pads/text labels) with OSC tags assigned. I think of it like a 'dumb remote control'. All of the "smarts" are happening in my VSTPlugin which is interfacing between LiveAPI and the iPad (as well as grabbing timeposition info from the VST interface).

As a vague example:

"when I push X, I want Y to happen". I hardcode this function into the plugin and re-compile.

So I push X, and a simple little OSC message is sent from the iPad to my plugin, signalling that X was pushed. Then the plugin makes Y happen by in turn sending a message to the LiveAPI (or messages at intervals, until X sends notice that it is released).
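
His actual rig is a custom VST plugin talking to the LiveAPI, but the 'receive an OSC message and react' half can be sketched in a few lines with the python-osc package. The /duda/push_x address below is made up for illustration.

```python
# Minimal "push X on the tablet, make Y happen" receiver sketch using the
# python-osc package. The /duda/push_x address is invented for illustration;
# the real setup is a custom VST plugin talking to the LiveAPI, not this.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_push_x(address, *args):
    print(f"{address} pressed with {args} -> trigger Y here")

dispatcher = Dispatcher()
dispatcher.map("/duda/push_x", on_push_x)

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()   # listens for OSC messages from the tablet
```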

How do you feel about stuff like this, all of these custom setups, like MIDIFighters and custom controllers, in comparison to traditional DJing? re: the debate about "pushing play"

Triggering samples and loops over existing music is a great way to expand the live experience, doing something a little more unique. The effectiveness of such an approach depends a lot on the genre of music / how much empty space is available. Of course Ableton Live is capable of much more than just triggering samples and loops, but beat-matching and the former are just about all I tend to see it getting used for.

The more the performer is an artist/producer (and hopefully known for their music) then the more freedom there is to create alternate mixes, or stems, to either create space for improvisation, or at least create something unlike the album version for the audience to recognize (and recognize the difference). Even this preparation makes a live show more of a non-album-track experience.

Said conversely, not to be captain obvious, and this applies to music as well: if everything is predictable, then nothing is exciting.

As for custom controllers, I don't think the audience really cares what is used (they can't usually see it anyway). I've watched people build custom hardware for basically no benefit: nobody notices, it costs more, it breaks more, it takes time away from more important things, and it doesn't make the resulting sound any better in most circumstances. However if the layout offers some mental benefit and allows the performance to go better, I can't argue with it. As for how using controllers, or laptops, compares to traditional DJ'ing, I see no fundamental difference. Of course software/controllers have the potential to open up more options, but the majority of people don't seem to want those options and end up using the system more-or-less like a pair of turntables.

In any case I'm very thankful that it is no longer taboo to have laptops in a DJ booth, because it is a silly distinction to make (a CDJ is a computer too, in my mind).

As long as guys are getting paid handsomely to play back music, they aren't going to make the time investment to create and practice more complex performances.. but I expect this will gradually change, and people will eventually feel second-rate doing a traditional DJ set on a stage full of performers. How long this will take is anyone's guess, I give it at least 5 years and probably more like 20.

could you at least explain why you choose to use a Mac?

Joel gave me that Macbook, it was his old touring laptop.

Really it's a debate I prefer to ignore. I'm pretty anti-Apple, but it's a very slick user experience for the most part. I like to say "Macs are like supermodels - they're great to look at, but if you get to know one intimately you'll probably end up miserable".

my biggest gripe with Apple is their policies. They aren't the "don't be evil" company. Charging developers for the 'right' to develop, and taking a 30% margin running the virtual storefront, is pretty evil. Just my opinion!