This document will show you some tips for surround mixing, mainly targeting ITU/SMPTE 5.1 mixing in REAPER, with a bit of Audition and comments on other DAWs. It was vastly expanded in 2015 over the 2014 version, with more depth, implementation details, and a dedicated Ambisonics section.
Although written with ITU in mind, Ambisonics readers can take suggestions from the ITU sections where appropriate.
Ambisonics is an alternative to conventional surround. It has its ups and downs; the downsides are mainly support & prevalence.
Conventional surround/multichannel is 1 ch = 1 speaker. Ambisonics works by transmitting differences between speakers, so there is a slight decoupling between the stream and your speaker setup. This is nice, but it requires decoders to be smarter and more knowledgeable. Rear sounds may be perceived as "brighter" (via the EEtimes link, later), which is why 'proper' ambisonic decoders EQ the HFs down at the back.
In recent years, apart from 7.1 (back) for BD, there's also been the Auro family, 10.2, and 22.2 surround (all stream-based), plus Dolby Atmos and DTS-UHD (both object-based). Auro & Atmos use proprietary schemes to encode the extra data in 5.1/7.1, and because Atmos is object-based, it goes somewhat towards Ambisonics.
I highly recommend REAPER. While you'll need to turn to 3rd-party free or paid addons to do surround panning, it more than makes up for it with its generic multichannel and flexible routing abilities.
You could use Audition. As of V5, the only multichannel format it supports is 5.1. Audition V3 does MIDI and surround, but the surround panner isn't integrated into the mixing session, so I recommend against V3.
I also recommend against using Logic, as it fills the LFE with upscaled content. You can often tell that a mix was (possibly) done in Logic because the LFE almost always has upscaled content.
You should know this by now if you're mixing for ITU surround: imagine a circle viewed from above with the center speaker straight ahead at 0deg (angles clockwise); the front speakers are at +-30deg and the surround speakers at +-115deg. SMPTE/WAV ordering is L R C LFE LS RS.
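To make the ordering concrete, here's a small Python sketch of the SMPTE/WAV 5.1 channel order with the nominal ITU azimuths mentioned above (the constant and function names are mine, not from any particular API):

```python
# Channel order for SMPTE/WAV 5.1 with nominal ITU azimuths in degrees
# (clockwise positive, 0 = straight ahead). The spec actually allows
# the surrounds anywhere in roughly the 100-120 deg arc.
SMPTE_51 = [
    ("L",   -30.0),
    ("R",    30.0),
    ("C",     0.0),
    ("LFE",  None),   # no nominal position, effects-only channel
    ("LS", -115.0),
    ("RS",  115.0),
]

def channel_index(name):
    """Interleaved sample index of a channel in a 5.1 SMPTE/WAV stream."""
    return [n for n, _ in SMPTE_51].index(name)

print(channel_index("C"), channel_index("LS"))  # 2 4
```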
There is this list of things to consider [ https://uod-true-multi-channel-mixing.wikispaces.com/Composing+Music+in+Surround ] for mixing multichannel audio. I don't go through it every time I do something in surround (too much time, not worth the effort, and some things are just assumed or the usual), though I may use some of the techniques listed there: continuous panning in a circle, secondary melodies at the back, splitting chords, splitting an ostinato, or alternating 1/16th notes between the speakers to keep things interesting. Yes, you can make a static surround mix, but why not make it worthwhile for surround? Even if you don't mix or deliver in surround, see if you can port or apply these techniques (to stereo).
Reaper has 3rd-party free JSFX for splitting MIDI chords, and you can use parameter modulation (LFO: Saw R shape, centered offset, tempo-synced to say 1 or 2 bars) and link that to the pan slider of a surround panner.
If you mix sensibly, a fine stereo downmix should follow naturally and you'll be fine. Unfortunately, the (music) studios sometimes find this quite hard to do: too much similarity between front/back, delays between groups, etc. And regardless of whether you like it or not, your mix WILL (may) be downmixed to stereo at some point. inb4 stereo is the new mono
When you start a project in Audition, you choose your sample rate and channel config (currently, the only multichannel format is 5.1 as of V4+). You need to choose carefully, as any SR mismatches will make Audition reencode the imported media. You also need to keep in mind your delivery SR (usually 48k for surround). This should apply to most other apps. You can mix and match SRs in Reaper.
Sonar, Logic, and MOTU DP have more flexible options for multichannel, including quad, 7.1, and 5.1. Reaper has generic multichannel capabilities (no surround panner built in, but free/paid 3rd-party ones exist, see later), meaning you'll need to manage the channel config yourself, or use VSTs.
I define 3 classes of media format support:
1st class - first time: slow to import, makes a small index file. Thereafter, fast to open project
2nd class - slow to import, doesn't get any faster on subsequent opens, deletes temporary files after exit.
3rd class - slow to import, doesn't get any faster on subsequent opens, temporary files exist until deleted and will be rebuilt if they don't exist.
Most DAWs have 1st class support for WAV - no surprises there. REAPER has 1st class support for everything it supports: FLAC, WV, MP3. For M4A, you need to rename to MP4, and for DTS/AC3, you need to remux or rename to MKV. If FFmpeg or DirectShow is used, this may make seeking laggy, CPU usage high, etc.
Logic has 1st class support for ALAC.
Audition has 2nd class support for MP3, M4A, FLAC, and WV.
Sonar manages to have 3rd class support for FLAC, which really isn't support at all. It's worse than having a WAV file: the FLAC is supposed to save space over the WAV (which was the whole point of using FLAC), but then its purpose is defeated by having a persistent WAV just to load it in the DAW. So now you have FLAC + WAV >_>
Downmixing is almost inevitable, and monitoring the downmix also makes inspection & mixing easier.
In Audition, insert the channel mixer FX into the master. It has a preset to downmix to stereo. It is correct and a valid spec, but the rears are downmixed too quiet imo (boost them to 100% into L & R respectively). You can also make your own presets for downmixing to quad or monitoring the rear speakers on the front. You can have multiple copies of the same FX on the same chain; I typically have 2 CMs downmixing to stereo & quad on the master.
In Reaper, you can insert Mixer8xM and tweak the sliders to downmix to 2ch. Alternatively, use the hardware sends, assuming you can output multichannel (many mid/low-end mobos these days have 8ch or at least 6ch analog out). Add a hardware send to 1/2, picking 1/2 as the source. Do the same with 3/4 and 5/6. If you're downmixing to, say, quad, add 1/2 as out, but pick 3 as a source. Remember hardware outs aren't pan-compensated, so you'll need to gain by -3dB. This is more tedious to set up, but it's more flexible than Audition as you can solo channel pairs without instantiating a new FX.
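For reference, a common stereo downmix (the kind of thing Mixer8xM's sliders or Audition's channel mixer preset implement) looks like this as a minimal Python sketch; coefficient choices vary by spec, and the 'boost rears to 100%' tweak maps to surround_gain=1.0:

```python
# A minimal 5.1 -> stereo downmix for one sample frame. The 0.707
# (-3 dB) coefficients for C and the surrounds are the common choice;
# exact values vary by spec. LFE is discarded, as most downmix
# recommendations do.
DB_MINUS_3 = 10 ** (-3 / 20)  # ~0.707

def downmix_51_to_stereo(l, r, c, lfe, ls, rs, surround_gain=DB_MINUS_3):
    lo = l + DB_MINUS_3 * c + surround_gain * ls
    ro = r + DB_MINUS_3 * c + surround_gain * rs
    return lo, ro

print(downmix_51_to_stereo(1.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # (1.0, 0.0)
```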
Just make sure you get the opportunity to test your mix in multichannel at some point(s)
If you have virtual 5.1 or 7.1 ch headphones (with only 1 driver per ear), skip this bit; they do the HRTF for you. If you have multi-driver headphones, do this.
As with monitoring stereo mixes on headphones, you get a false sense of imaging, which is why HRTF/VRM VSTs exist. ToneBoosters' IsoneSurround and Beyerdynamic's Virtual Studio [ http://www.dontcrack.com/freeware/downloads.php/id/7200/software/Virtual-Studio/ ] are free HRTF VSTs which take in surround.
I prefer Beyerdynamic's VS simply because it's more convincing for me, even though it has too much reverb for my tastes. Press the 2nd button for 5.1 surround downmixing.
NB: For some reason, the center channel is mixed in too loudly in both of the above. Keep this in mind. Downmix to 4ch quad first and then feed them the sound, or downmix to 2ch (since the sense of 'rear' doesn't work well for me using HRTF).
How to use the center channel, or whether to use it at all, is a very debatable topic.
I've had someone say lead vocals in the physical center are fine, then say the exact opposite a year later. On some mixes, the lead vocal is in all 3 front speakers, and some instruments are in the physical center too (probably good practice for sounding good; I thought this was because people were used to the hack that DPL is). Some mixes leave the center (almost) silent and use the phantom center only.
I mix with lead vocals as much as possible in the physical center. It's easier to deduce levels from metering, and you might be able to get a usable instrumental/karaoke track if you dissect my exported mix. This may actually sound bad on a true 5/6ch system, but based on my limited testing, you can't easily tell the difference between a physical and a phantom center unless you're really close, the center is very differently sized, very differently placed, etc.
Recommendation: so, why not have a discrete C ch and stick something in there?
The LFE is supposed to be for Low Frequency Effects: selected booms and bangs for movies. You shouldn't need to use it at all for music. But nowadays, many playback systems just lowpass all the other channels and feed that to the sub too (ever hear of BMS, i.e. bass management?), so it matters little what you do with the LFE (at least in music). In addition, most subs are miscalibrated (to cope with this).
For digital delivery (including DVD/BD), all you need to worry about is that the LFE is turned up by 10dB, lowpassed, and fed to the sub. BMS may feed the bass from the other channels into the sub if you don't have full-size speakers. (via http://www.avsforum.com/t/748147/lfe-subwoofers-and-interconnects-explained )
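The +10dB figure is just a linear gain applied at playback by the decoder/receiver, not by you; a quick sketch:

```python
# dB <-> linear for the LFE in-band gain: +10 dB at playback is a
# ~3.16x linear gain applied on the reproduction side.
def db_to_gain(db):
    return 10 ** (db / 20)

print(round(db_to_gain(10), 3))  # 3.162
```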
In my early days of mixing, I used to use the LFE badly (lowpassing everything into it, etc.), but nowadays, I hardly use it for music.
http://www.ambisonic.net/dvda.html#bass < This page has some more info on LFE, surround & ambisonics.
Recommendation: don't use it, or use it 'the movie way'
You will get INSTANT artefacts when downmixing.
Groups: L R <> C LFE <> LS RS, etc.
The Marvin Gaye Collection DVDA/SACD and Forever Yours DTS-CD 5.1 mixes: there is a delay between the C and L R groups. When downmixing to 4ch quad or stereo, INSTANT combing/flanging artefacts.
Disney's Lion King BD 7.1 audio: there is a delay between the side & back groups on (some) songs. INSTANT combing/flanging artefacts when downmixing to 6/4/2ch.
If you must, please make sure that the artefacts are not too audible.
Delays within a group are OK, however.
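Why the artefacts are 'instant': summing a channel with a delayed copy of itself during downmix is a comb filter, with complete nulls at odd multiples of 1/(2*delay). A quick sketch (not tied to any DAW):

```python
# When a downmix sums two groups carrying the same material offset by
# 'delay', the result is a comb filter with complete nulls at
# f = (2k+1) / (2*delay).
def comb_null_freqs_hz(delay_ms, count=4):
    return [(2 * k + 1) * 1000 / (2 * delay_ms) for k in range(count)]

print(comb_null_freqs_hz(1))  # 1 ms delay -> [500.0, 1500.0, 2500.0, 3500.0]
```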
juzplanecrazie linked http://www.grammy.org/files/pages/SurroundRecommendations.pdf via [ www.reddit.com/r/audioengineering/comments/26zauj/do_any_of_you_guys_know_of_good_books ]. It has some general points on surround, mixing, and a brief bit about music. While I generally agree, I'm not so keen on the use of delay, and delay compensation for speaker placement. What if you have some delays on for monitoring and forget to take them off when you export?
If there is an existing stereo mix, try to respect their panning if sensible in surround.
I start with material in stereo, and then I move stuff into the other speakers to make a surround mix from the ground up that should sound good. Some people treat the additional speakers as 'extensions' of stereo: have most stuff in L&R and copy select material into C & LS RS or similar. This way offers compatibility for broken setups and more resilient imaging, but it may sound bad on a proper setup, and I don't cater for broken setups.
Keeping stuff panned around the edges & front/back makes sounds easiest to pinpoint. If you have stuff panned halfway between front/back or near the middle of the panning circle, it will sound disorienting: the brain can't resolve where the sound is coming from because human hearing is weakest around the sides. Panning like this will not create the impression the sound is in the 'middle' of the speakers. It will just sound bad.
"Instruments panned partway between front and surround channels in 5.1-channel sound are subject to image instability and sounding split in two spectrally, so this is not generally a good position to use for primary sources."
Try to place discrete content in the back speakers, preferably instruments with long notes/sustains like strings, pads, etc. You usually don't want the back speakers to steal attention from the front with lots of transients (but feel free to do this if you have good reasons). This also means they should mostly not be louder than the front. If the rear speakers just 'blend in', that's fine; that's what they're supposed to do.
Try to keep stuff centered and balanced in the rear (but not necessarily mono), otherwise it may sound weird.
Use reverb if you're spreading a little, or a dedicated surround upscaler like DTS Neural Upmix with width=100% if you're spreading a lot.
Doing reverb? DO CHECK on a 4+ch system that it doesn't sound too unpleasant with sound 'travelling' to the opposite side as a result of the reverb.
If you have, say, a 4-note MIDI chord from a synth or string section, consider putting 1 note in each side speaker. Don't do this with a solo instrument that doesn't make sense to be exploded, like a piano. It sounds almost as bad as upscaling it to 4 speakers (even though it is discrete, the brain is too focused on other things to tell the difference in a full mix).
You will need to separate the chord first, like into separate MIDI channels. This is where exploding via JS, this JS router (updated version here), Sibelius, or Finale (the most HQ/flexible exploding) comes in. Audio-wise, you can have a multi-out VSTi rompler/sampler, 1 synth per speaker; it's up to you how you implement it.
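The splitting itself is trivial once you have the note data; a minimal sketch (the note numbers and channel mapping are illustrative, not how any particular JS/router numbers its channels):

```python
# 'Exploding' a chord: give each note its own MIDI channel so each
# downstream instrument/speaker gets one voice.
def explode_chord(notes, first_channel=0):
    """Map each note (sorted low to high) to its own channel."""
    return [(note, first_channel + i) for i, note in enumerate(sorted(notes))]

print(explode_chord([67, 60, 64]))  # [(60, 0), (64, 1), (67, 2)]
```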
You can also do chords or notes alternating between speaker pairs. Much work to set up. There'll be some effect, but again, the brain may be too busy to focus on the effect, especially if you alternate too fast (but I did this for a few projects anyway because I thought my initial idea was cool).
Don't do too much, or too little. Many commercial surround mixes sound crap because there's too much similarity between front & back; they're too lazy to make the back discrete. Though there are a few genuine cases of songs with few (sensible) instruments to place in the back.
If there's not enough similarity between front & back via notes or instruments, it could sound like "the back speakers just happen to be playing something that doesn't clash musically with the front"
Better to have not enough stuff at the back, since you can reprocess it later.
In Reaper, you can use this JS [ http://sonic.supermaailma.net/plugins ] or FX I/O routing to do panning, or route via I/O pins. I suggest NOT using ReaSurround since there's too much leakage by default and it's hard to use. I have (routing) presets set up in reastream to move front stuff to the back/sides.
As much as I hate Waves stuff, I'll have to admit that their 360 stereo-to-surround panner uses 1/3rd of the CPU of the above JS, which is why I use it almost exclusively for 5.1 panning in Reaper.
I advise against spat tools for panning because they're not conventional, and they're not cross-platform.
The panner in Audition 4+ does width compensation (lol) since in ITU 5.1, the back speakers are 90deg apart, but the front speakers are 60deg apart. This is a nice thought, but can be annoying sometimes if you just want to stick something in the back speakers, untouched.
I try to have 6-9dB of headroom for all channels so when downmixed to 2ch, it doesn't clip too much.
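Where does 6-9dB come from? Worst case, a stereo downmix sums L, C, and Ls coherently, so the stereo bus can see roughly +7.7dB over a single full-scale channel (a sketch assuming the usual 0.707 downmix coefficients):

```python
import math

# Worst case for Lo = L + 0.707*C + 0.707*Ls: the same full-scale
# signal, in phase, in all three channels sums coherently.
k = 10 ** (-3 / 20)  # the usual 0.707 downmix coefficient
worst = 1 + k + k    # ~2.416x linear
print(round(20 * math.log10(worst), 2))  # 7.66 dB over a single channel
```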
Maaya's 15th Anniv. concert audio: mixwise, it's fine. Some separation between front/back, LFE sparingly used to good effect. Unfortunately, the mix is ruined by waveforms that look like solid bricks, ie, heavy DRC.
I will be talking about the 7.1 (back) layout for BD. Speaker order for SMPTE (you should know this): L R C LFE RL RR SL SR. Speaker placements (approximately ideal): Center at 0deg, then starting at +-30deg, one speaker every 60deg (so +-30, +-90, +-150). I think I've got these right. When downmixing 7.1 to 5.1, the back 4 speakers fold down into the two 5.1 surrounds, with no attenuation.
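The folddown described above is trivial to express; a sketch assuming the L R C LFE RL RR SL SR order given above:

```python
# 7.1 (back) -> 5.1 folddown: the four rear/side speakers collapse
# into the two 5.1 surrounds with no attenuation.
# Input order assumed: L R C LFE RL RR SL SR (one sample frame).
def fold_71_to_51(l, r, c, lfe, rl, rr, sl, sr):
    ls = sl + rl  # left side + left rear -> left surround
    rs = sr + rr
    return (l, r, c, lfe, ls, rs)

print(fold_71_to_51(0, 0, 0, 0, 0.5, 0, 0.5, 0))  # (0, 0, 0, 0, 1.0, 0)
```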
For 5.1, it was found that +-90deg for the rear speakers offered the best envelopment but +-135deg offered the most stable rear imaging, so a compromise of +-115deg was made. For 7.1, there is NO COMPROMISE between envelopment and stable rear imaging, because now you have a pair of side speakers AND back speakers at +-90 & +-135deg (depending on arrangement). Some people have a native 7.1 layout; some people place the speakers quite close together (to make listening to 5.1 without 7.1 speakers sound less bad).
If you're working in 7.1, keep in mind that a lot of tools that support 5.1 don't support 7.1. I'm talking about Audition, Waves 360, and some other DAWs. In 7.1, you have more speakers, so your mix is even more exposed, and you need to spend more effort to make it sound good, find appropriate things to place in the speakers, etc. I usually place trumpets and backup vocals in the sides, and spread some things from back to side.
7.1 is a nice format to work with, once you get past the downsides like speaker placements, assignments, software support, etc. With 7.1 in a small place and speakers placed correctly, I noticed that the speakers 'just disappear' and there is only a soundfield. But only on well-mixed stuff. If your side & back speakers are too discrete, then the soundfield sounds a bit disparate; you should have some similarity between back & sides. Pans sound smoother with a discrete 7.1 panner.
Exporting in 7.1 is another matter. It's mostly about channel ordering. Since 7.1 (back) for BD was developed after other standards (such as 22.2, MS WAVEFORMATEXTENSIBLE, etc), you may find channel ordering issues. As much as I like WavPack, it follows MS WFE order, so FLAC is best for use without tinkering. For beyond 7.1, WavPack is a must.
For 7.1 mixing, I sometimes have 2 buses: a backfocus bus and a backspread bus. On the backfocus bus I have something to direct 2ch into the back speakers. On the backspread bus I have something to spread 2ch to the back 4 speakers (back & side). Because speaker duplication is occurring, you need to gain by -3dB to avoid increasing the volume. You can do this at the bus fader level. You may have 1 or 2 instruments panned directly to the side speakers to give discreteness.
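The -3dB trim is a power argument: one signal duplicated into two speakers is roughly +3dB of acoustic power, so trimming each copy by -3dB (1/sqrt(2) linear) keeps the level roughly neutral:

```python
# Duplicating one signal into two speakers roughly doubles acoustic
# power (+3 dB); a -3 dB trim on each copy compensates.
trim = 10 ** (-3 / 20)
print(round(trim, 4))           # 0.7079
print(round(2 * trim ** 2, 3))  # 1.002 (two trimmed copies ~ unity power)
```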
Sometimes you might want more control over this, and you can stick on something with volume knobs to control which pair gets what gain (eg, back gets -1.5dB while side gets -4.5dB).
Sometimes I do complex stuff like having a multi-out VSTi outputting on the left pair & a different thing on the right pair (or even pairing up side left + back right & vice versa), manually managing the pans (keeping in mind that you're panning left vs more-left, etc), and reducing the width of the back pair. It's more complex, but it gives more discreteness, & from preliminary tests it could be worth it.
If you're layering instruments, say a synth, you can have different layers in each different pair. This is more discrete, while giving the illusion of spread. For example, there was a synth that had a pad preset, but it was lacking highs and EQing it or exciting it didn't really sound good. I lowpassed it, stuck it in the back only and put on a hipassed detuned saw synth at the sides. This synth sound is now worth 4 discrete channels, a very nice rich sound, and an enveloping back.
Multi-out VSTis are nice for surround.
Most of your stems should be mono or stereo to start with; then you pan them in surround. If you would like to do custom FX routing, you can use Audition's I/O pin mapper (since V5), but it's not as flexible as Reaper's. If you would like to upscale stereo stuff to 4ch or more (like pads, strings, etc), please use DTS Neural Upmix http://www.dts.com/dts-store/products/software/neural-upmix-by-dts.aspx (use only on selected stems, not to be abused on full mixes).
Multichannel plugs are below. These should only be applied to master or instrument buses.
More stuff: http://www.soundonsound.com/sos/aug01/articles/surroundsound1.asp You can skim read this. Keep in mind that these articles are OVER a decade old, and might not be too relevant. Ignore the matrix stuff.
Reaper & Audition can export WAV & FLAC. FLAC supports up to 8ch, and WV & WAV support more.
WavPack support comes through 3rd-party plugins.
AC3 supports up to 5.1 ch. Free AC3 encoders exist, but the best AC3 encoder is DD Pro, available in Mainconcept/Rovi Reference V2+. Audition might have that.
The Best DTS encoder is probably DTS MAS. DTS supports up to 6.1, and DTSHD/HR supports 7.1 and many other nonstandard layouts.
LC-AAC can do up to 8ch fine. HE-AAC may have problems with channel layouts & decoders. Keep checking the news for information on xHE-AAC.
Some of the text below I have written from my (brief) experience. Some will be guesses and deductions, but they should be correct. I get the impression that ambisonics is all about the pan. Ambisonics material is also more commonly acoustic experiences rather than music or film; not that you can't do those, but it's less common, based on my impressions.
I was disappointed with ambisonics in that it was not what I was expecting, and not what I wanted. Conventional Ambisonics targets a full-sphere soundfield, and so discreteness near the top and bottom is low. I wasn't able to achieve the discreteness I wanted, and separation is quite poor at times. It might be possible to achieve this with a custom Ambisonic-like system, but conventional surround comes closer (though I'm biased because I used conventional surround first). Regardless, some people may find some use and merit in Ambisonics.
Along with the usual file format & hardware/software support issues and general lack of prevalence, there are two important details that weren't well known to me initially:
If you add speakers in conventional surround, you get silent speakers. If you add speakers in Ambisonics, you get… sound in your speakers but the spatial resolution won't be great.
1st order WXYZ is only good for about 4 speakers in the horizontal plane. You can decode to more, but it won't sound good. You'll want 2nd order for 6 horizontal speakers, and 3rd for 8. If you have a specific speaker layout in mind, you'll have less leakage with conventional surround because you can place things exactly, and only, in particular speakers.
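To see the leakage concretely, here's a minimal 1st-order 2D encode/decode sketch (FuMa-style W weighting and a basic 'sample' decoder assumed; real decoders are fancier). A source panned exactly at one of 4 square speakers still leaks into the adjacent speakers at about -6dB:

```python
import math

# Minimal 1st-order 2D ambisonic sketch. Assumptions: FuMa-style
# W weighting (W = S/sqrt(2)) and a basic 'sample' decoder; real
# decoders use shelf filters etc. Illustrative only.

def encode(s, theta):
    """Pan mono sample s to azimuth theta (radians)."""
    return (s / math.sqrt(2),     # W (omni)
            s * math.cos(theta),  # X (front-back)
            s * math.sin(theta))  # Y (left-right)

def decode(w, x, y, speaker_angles):
    """Basic decode: g_i = 0.5 * (sqrt(2)*W + X*cos(a) + Y*sin(a))."""
    return [0.5 * (math.sqrt(2) * w + x * math.cos(a) + y * math.sin(a))
            for a in speaker_angles]

# Square of 4 speakers; source panned exactly at the 45-degree speaker.
square = [math.radians(a) for a in (45, 135, 225, 315)]
w, x, y = encode(1.0, math.radians(45))
feeds = decode(w, x, y, square)
print(round(feeds[0], 3))  # the on-source speaker gets 1.0
# the two adjacent speakers get ~0.5 (-6 dB leakage); the opposite one ~0
```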
You can easily make your own decoder in reaper's JSFX using a spreadsheet and a text editor with these coefficient tables [ http://www.blueripplesound.com/decoding ]
Ambisonics doesn't emphasize a particular direction, and since conventional surround initially had little consideration for height, I would say height is a strength of ambisonics. Height apparently makes almost as much of a difference as rear speakers [ http://www.ambisonic.net/dvda.html#height ]. It also takes slightly fewer channels to encode something for n speakers. There's also the modular nature of higher-order ambisonics for mixing & decoding: you can simply ignore the channels you're not capable of using (whereas this could be disastrous for conventional surround).
The go-to software for Ambisonics seems to be Cockos REAPER. With its generic multichannel capabilities (not tied to a specific speaker layout or ordering), up to 64 discrete channels, and extremely flexible routing, you can theoretically do up to 5th order (but I'm not aware of any VSTs that support this). And free JSFX.
You can probably get away with something which supports specific multichannel formats such as quad, 5.1, 7.1, etc (Audition, MOTU DP, Sonar, etc), but your DAW may do some channel mangling (special LFE treatment, Apple Logic especially), and you may have to constantly work around the default routings on every FX. You may be able to get away with 2nd order 2D Ambisonics in a 5.1 host (WXYUV).
Apart from panning, there isn't much (in free effects, anyway). With conventional surround, you can apply stereo effects to surround anytime, or build your own with multiples. With ambisonics, you need to do it before panning/encoding. There's a 1st-order ambisonic Freeverb port, but it has no focus. I want a reverb which glues my mix together, not one that destroys the panning.
Here is a list of cross-platform ambi VSTs:
There are enough free plugs that if one has deficiencies, others can make up for it. For example, the ambiX kit doesn't have any speaker decoders; instead you need to generate presets (how user-unfriendly), so you can use the Wig, or Blue Ripple decoders, or write your own.
Google has a handy guide to using the nifty & small THRIVE HRIRs in Reaper.
You can use VSTs otherwise designed for ITU 5.1/7.1, but keep in mind the LFE may be handled differently, and some may destroy or do weird things with your ambisonic imaging. Simple plugins like gain and EQ applied equally to all channels should be fine. With imaging VSTs like reverbs, you may get strange (or interesting) effects. A delay should result in comb imaging (this is actually used to widen mono into stereo). DRC may wreck your panning, and you should not be using DRC on ambisonic compositions.
You can mix & match ambisonics & conventional surround in the same project (but why would you want to do that?). For applying any stereo/mono effects, you'll need to do that before any Ambisonic panning/encoding occurs.
For a surround host project, pan/encode selected tracks to ambisonics, then decode.
For an Ambisonics host project, have surround stems split into separate mono channels, then pan them with an Ambisonics panner/encoder.
You can have mixed-order ambisonics in your project. This is one great thing about ambisonics: the modular nature of higher-order Ambisonics. If you have a 1st-order effect and a 3rd-order something else, that's fine (though of course the 1st-order parts won't sound as good). You just need to be careful about which pins are which channels, since not all VSTs label their pins (see also http://www.blueripplesound.com/b-format ). Another downside of ambisonics: you may not be able to tell just by listening which channel does what (especially for static sections/mixes). It should be fairly easy with conventional surround.
q66: Ambisonics might be technically better than conventional surround, but IN PRACTICE & MIXING etc, there are a lot of issues which degrade it to only marginally better.
E.g., with surround you can easily tell where a sound is based on which channel is active. With ambi, it's harder, especially with HOA where a lot of channels are active. This doesn't really change with ambi visualizers: e.g., if I stick on a globe or area visualizer and pan something to the top, I can see that top or bottom is active, but I cannot tell WHICH. Meters don't help either, as I know Y (or whatever height is) is active, but positioning is dependent on polarity, which is relative.
Channel ordering issues are harder to fix in ambi. There are 2 main conventions, FuMa & ACN. FuMa was more popular in 2015, but ACN became more popular in 2016, meaning there are lots of existing things which don't work in ACN. Not many FuMa > ACN converters for HOA exist.
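For 1st order at least, the FuMa > ACN/ambiX fix is mechanical: reorder W X Y Z to W Y Z X and scale W by sqrt(2) (FuMa carries W at -3dB). This sketch covers only the 1st-order case; higher orders also need per-channel normalization changes:

```python
import math

# 1st-order FuMa (order W X Y Z, W at -3 dB) -> ambiX (ACN order
# W Y Z X, SN3D). For 1st order the X/Y/Z scaling happens to match,
# so only W needs the sqrt(2) boost. Higher orders are not this simple.
def fuma_to_ambix_1oa(w, x, y, z):
    return (w * math.sqrt(2), y, z, x)

out = fuma_to_ambix_1oa(1.0, 2.0, 3.0, 4.0)
# W scaled by sqrt(2) (~1.414); order becomes W Y Z X, i.e. (~1.414, 3.0, 4.0, 2.0)
```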
If you want to fix an ambi mix with the wrong channel order, it's very difficult to do via waveform inspection alone, especially for HOA & MOA, as it's very non-intuitive. You can do this with surround, though.
Also, 1oA's soundfield is actually less stable than 5.1's, and even 5.1 sucks.
Also, with ambi, you NEED to stick a panner/encoder on EVERY track if you want to do a fully native mix. For surround, you only need to if a track goes anywhere other than front L R, which is only about half of all tracks.
Quad failed for (one of) the same reasons as 1oA did in the 70s: the soundfield is simply not very stable.
All these small reasons make making ambi mixes manually not very nice. It is nice, though, if you generate it, say, in a game.
Stock 1oA has very high leakage between speakers (8>4>8 gives ~6-15dB of SQ). A custom 4>3 compressor based on ambi principles (whole + depth & width differences, http://pastebin.com/WZiLqtmT ) achieves a bit better (~10-20dB). Figures aren't accurate (tested on upscaled stereo > quad), but they give a rough real-world figure.
[17:50] <junh1024-XD> prunedtree , can you compress 4ch into 3ch, and then losslessly decode back to 4ch? I made a custom ambisonic principles-inspired thing quad > W (whole), X (Width DIffrence) , Z (Depth difference) thing, but when I decode back to 4ch, there's a bit of leakage from the back to the front. Is this expected / howfix ?
[17:55] <atomnuker> junh1024-XD: assuming completely random full range distrubtion of frequencies all the way up to nyquist it ain't gonna be lossless unless you break the laws of entropy
[17:57] <junh1024-XD> oh, so is it gonna be like, 85% lossless?
[17:58] <junh1024-XD> what about 6>5 or 4 compression ?
[17:58] <atomnuker> no, depends on what's in that channel, what the other channels have in terms of frequencies, the similarities of the signals across all channels, how you split it across the channels, etc.
[17:59] <junh1024-XD> basically full range
[17:59] <junh1024-XD> not doing anything special with frequency filtering, just simple + - maths
"Matrix encoding does not allow one to encode several channels in fewer channels without losing information: one cannot fit 5 channels into 2 (or even 3 into 2) without losing information, as this loses dimensions: the decoded signals are not independent. The idea is rather to encode something that will both be an acceptable approximation of the surround sound when decoded"
So it's known that 1oA has low SQ. How to increase SQ? Things which didn't work:
Only the Harpex thing works.
At 46/16/3 1oA 2D, you get ~180s (2.9min) 288bps,264.6
32/16/3 260s (4.2 min) 192bps
Need to make an MSED double cross? MSED does actually work (cross front & back, but don't do it too much; 1.5-6? of mid ungain / side gain works best).