edit 04.05.24

deton24’s

Instrumental and vocal & stems separation & mastering guide

(UVR 5 GUI - VR/MDX-Net/MDX23C/Demucs 1-4, BS/Mel-Roformer

MVSEP-MDX23-Colab/KaraFan/drumsep/LarsNet/Ripple/GSEP/Dango.ai/Audioshake/Music.ai)

General reading advice | Discord | Table of contents (or open document outline)

Straight to the current best models list 

___

Last updates and news

- (x-minus) max_mag of (?-)Roformer and Demucs (drums only) added

“now the synths and everything else feels muddy

noticed the drums in some places (mainly louder-ish bits) sound a bit weird

mostly lower end like bass drum instead of hi hats

great improvement overall” isling

- Doubledouble might have occasional hiccups when downloading. If you encounter a very slow download, don’t retry the same download; generate a new download query instead.

- (x-minus) “Added max_mag ensemble for Mel-RoFormer model!! It combines Mel and BS results, making the instrumentals even less muddy, while better preserving saxophone and other instruments.”

- New Mel-Roformer model trained by Kimberley Jensen on Aufr33 dataset dropped exclusively on x-minus.

“This model will now be used by default and in ensemble with MDX23C (avg).”

It’s less muddy than the viperx model, but it can have more vocal residues (e.g. in quiet parts of instrumentals), can be more problematic with wind instruments (putting them in vocals), and it might leave more instrumental residues in the vocal stem.

“godsend for voice modulated in synth/electronic songs”

Its SDR is higher than the viperx model’s (UVR/MVSEP), but lower than the fine-tuned 04.24 model’s on MVSEP.

- New UVR patch has been released. It fixes using OpenCL on AMD and Intel GPUs (just make sure you have GPU processing turned on in the main window and (perhaps only in some cases) OpenCL turned on in the settings).

Plus, it fixes errors when the notification chime in options is turned on.

https://github.com/TRvlvr/model_repo/releases/download/uvr_update_patches/UVR_Patch_4_14_24_18_7_BETA_full_Roformer.exe (be aware that you can lose your current UVR settings after the update)

To use BS-Roformer models, go to the Download Center and download them from the MDX-Net menu (probably a temporary solution).

For GPUs with 4GB VRAM (at least AMD/Intel ones), you can try out segments 32, overlap 2, and dim_t 201 with num_o 2 (dim_t is at the bottom of e.g. model_bs_roformer_ep_368_sdr_12.9628.yaml) to avoid crashes.

You might want to check a new recommended ensemble:

1296+1297+MDX23C HQ

Instead of 1297, and for faster processing with a similar result, make a manual ensemble with a copy of the 1296 result. It might work in a similar fashion to weighting in the 2.4 Colab and the model ensemble on MVSEP (source); see the sketch below.
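Below is a minimal sketch (not UVR's internal code) of what such a manual “avg” ensemble amounts to, assuming two already-exported instrumental WAVs with the same sample rate and channel count; the file names are made up. Averaging a duplicated result is the same as giving that model a 2:1 weight:

import numpy as np
import soundfile as sf

a, sr = sf.read("1296_instrumental.wav")        # assumed, illustrative file names
b, _ = sf.read("1297_or_mdx23c_instrumental.wav")
n = min(len(a), len(b))                         # crude length alignment
avg = (2 * a[:n] + b[:n]) / 3                   # averaging [A, A, B] = weighting A twice
sf.write("manual_ensemble_avg.wav", avg, sr)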

- The VIP code allowing access to extra models in UVR currently doesn’t work for some people using the beta patch, and the MDX23C Inst Voc HQ 2 models disappeared from the Download Center and GH. You can try to download the VIP model files manually from these links and place them in the Ultimate Vocal Remover\models\MDX_Net_Models directory:

https://github.com/deton24/Colab-for-new-MDX_UVR_models/releases/download/v1.0.0/UVR-MDX-NET_Main_406.onnx

https://github.com/deton24/Colab-for-new-MDX_UVR_models/releases/download/v1.0.0/UVR-MDX-NET_Main_427.onnx

https://github.com/deton24/Colab-for-new-MDX_UVR_models/releases/download/v1.0.0/MDX23C-8KFFT-InstVoc_HQ_2.ckpt

Of course, that’s not all of them. E.g. the 390 and 340 models and the old beta MDX-Net v2 fullband instrumental model epochs are not reuploaded. This situation might cause errors when attempting to use Inst Voc HQ 2 in the AI Hub fork of KaraFan.

The decrypted VIP repo leads to links which are offline, and it doesn’t contain all the models either. Possibly the only way to access all the VIP models in beta UVR is to roll back to the stable 5.6 version from the official UVR repo and, after downloading all desired VIP models, update to the latest patch.

- According to a leak on their forum, iZotope RX 11 might be released between May and July and contain some “pretty big changes”; among others, a novel separation arch is rumored, along with a lot of reworked options. (cali_tay98)

Official announcement is out:

https://www.izotope.com/en/learn/rx-11-coming-soon.html

(overhauled repair assistant, real time dialogue isolation for better separation of noise and reverb from voice recording)

- GSEP announced an update on May 9th with a WAV download option and a redesigned UI.

The site will be unavailable on May 8th.

The Noraebang (karaoke) service will be shut down “due to low usage”, and your separated files will be deleted (you can back them up beforehand).

A paid plan will be offered with faster processing times and “additional features”.

No model changes are announced so far. The update schedule might change.

- MDX23-Colab Fork v2.4 is out. Changes:

“BS-Roformer models from viperx added, MDX-InstHQ4 model added as optional, FLAC output, control input volume gain, filter vocals below 50Hz option, better chunking algo (no clicks), some code cleaning” - jarredou

- (x-minus) “Added mixing of MDX23C and BS-RoFormer results (avg/bs-roformer option). So far, it works only for MDX23C.” Aufr33

- “Output has released a free AI-based generator that creates multitrack stem packs”

https://coproducer.output.com/pack-generator

12-second-long audio, fullband, 8 stems (drums in one stem, electric and rhythm guitar, Hammond organ, trumpet, vocals) with 8 variations

“this looks more like it's mixing different real instruments, rather than actually making up songs (like a diffusion based generator)” ~jarredou/becruily

- Ensemble on MVSEP updated

- The site is up and running after some outage

- ZFTurbo released a fine-tuned viperx model (“ver. 2024.04”) on MVSEP (further trained from the checkpoint on a different dataset). Ensembles will be updated tomorrow. The clicking issue has been fixed.

SDR vocals: 11.24, instrumental: 17.55 (from 17.17 in the base model)

Whether it’s better depends on the song. Some vocals can be worse vs. the previous model.

- BS-Roformer UVR beta patch for MacOS ARM (link | invite)

- Some directions to use the patch on Linux

- Test out ensemble 1296 + 1143 (BS-Roformer in beta UVR) + Inst HQ4 (dopfunk)

Ensembles with BS-Roformer models might not work for everyone, use manual ensemble if needed.

- The viperx model was also added to the beta Colab by jarredou. It gives only vocals, so perform inversion on your own to get the instrumental (see the sketch below this entry).

https://colab.research.google.com/drive/1pd5Eonbre-khKK_gn5kQPFtB1T1a-27p?usp=sharing

Update: now BS-Roformer is also added in the newer v.2.4 Colab
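If you need the instrumental from a vocals-only output, here is a minimal inversion sketch; it assumes the mixture and the vocal stem share sample rate and alignment, and the file names are made up:

import soundfile as sf

mix, sr = sf.read("mixture.wav")
voc, _ = sf.read("vocals_bs_roformer.wav")
n = min(len(mix), len(voc))                        # crude alignment to the shorter file
sf.write("instrumental_inverted.wav", mix[:n] - voc[:n], sr)   # mixture minus vocals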

- Viperx’ BS-RoFormer models have been implemented by Anjok to UVR

Windows beta update (Anjok's alt repo):

https://github.com/TRvlvr/model_repo/releases/download/uvr_update_patches/UVR_Patch_4_14_24_18_7_BETA_full_Roformer.exe (new)

https://github.com/TRvlvr/model_repo/releases/download/uvr_update_patches/UVR_Patch_3_29_24_5_11_BETA_full_roformer.exe (old)

(fixed in the new patch) If you have playsound.py errors, disable notification chimes in Settings > Additional Settings

The best measured SDR for both the ep368 and ep317 models is when the “inference.dim_t” parameter at the bottom of e.g. the model_bs_roformer_ep_368_sdr_12.9628.yaml file in the models folder is set to “1101” (it’s the part of the file at the bottom, not at the top). Be aware that it will increase separation time.

“There's probably an issue with config file of ep368 BS-Roformer model:

ep317 uses inference.dim_t = 801, while ep368 is set to 901, but it should be 801 too (which is the right value when computed from chunk_size & hop length)

It's “model_bs_roformer_ep_368_sdr_12.9628.yaml” file to edit” jarredou.

Even before the SDR measurement, though, for some people using the default 901 instead of 801 for ep_317 could give better results. Jarredou later said he was wrong about that particular measurement. (See the config-editing sketch below.)
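If you prefer not to edit the file by hand, a small sketch that changes the parameter with PyYAML; the inference.dim_t key path is assumed from the description above, the file path depends on your install, and PyYAML drops comments and may reflow the file, so back it up first (a plain text editor works just as well):

import yaml

path = "model_bs_roformer_ep_368_sdr_12.9628.yaml"   # adjust to where UVR stores the config
with open(path) as f:
    cfg = yaml.safe_load(f)
cfg["inference"]["dim_t"] = 1101                      # or 801/201, per the tips above
with open(path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)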

If you have problems with content suddenly being cut out of your stem and moved into another, use overlap 4 and segments 384.

Be aware that it increases separation time substantially.

It can also lead occasionally to some effects or synths missing from the instrumental stem.

Also, the problems with clicks are alleviated with overlap 2.

Sometimes overlap 8 can be good enough too.

A 1602 segment size might lead to less wateriness, but at the cost of a bit more vocal residue.

1053 model separates drums and bass in one stem, and it's very good at it.

It might depend on a song whether 1296 or 1297 is better for instrumentals (it can be 1296 more often, and 1297 for vocals).

Versus MVSEP model trained by ZFTurbo, the viperx’ model(s) tend to have more problems with recognizing instruments. Other than that, they're very good for vocals.

Be aware that the names of these models in UVR refer to SDR measurements of vocals conducted on viperx’s private dataset (not even the older Synthetic dataset) rather than on the multisong dataset on MVSEP, hence the numbers are higher than in the multisong chart.

The update caused some stability and performance issues with other archs for some people, with specific parameters starting to take more time than before.

Roll back to stable 5.6 in these cases if necessary. Possibly make a copy of the old installation. Your configuration files might be lost.

(both fixed in the newer beta) Using OpenCL GPU acceleration (AMD) for BS-Roformer doesn’t work at the moment (or at least not for everyone).

- The model was also added on MVSEP

- New ensembles with higher SDR were added on MVSEP

- The BS-Roformer model trained by viperx was added on x-minus (it's different from the v2 model on MVSEP and has higher SDR; it's the “1.0” one). Whether it's better vs. v2 might depend on the song.

It struggles with saxophone and e.g. some Arabic guitars.

- (x-minus - aufr33) “I have just completed training a new UVR De-noise model. Unlike the previous version, it is less aggressive and does not remove SFX.

It was trained on a modified dataset. I reduced the noise level and made it more uniform, removed footsteps, crowd, cars and so on from the noise stems. On the contrary, the crowd is now a useful / dry signal. (...) The new model is designed mainly to remove hiss, such as preamp noise.”

For vocals that have pops or clipping crackles or other audio irregularities, use the old denoise model.

- Dango.ai updated their model, also applying some kind of demudder to the instrumentals, enhancing their results. Results might be better than MDX23C and BS-Roformer v2. Still, it’s pretty pricey ($8 for 10 separations). Five 30-second fragments per IP can be obtained for free, and usually the limit doesn’t reset. “It’s $8 for 10 tracks x 6 minutes, all aggressiveness modes included (but vocal and inst models are separate). The entire multisong dataset for proper SDR check would cost around $133.” becruily

- Be aware that queues on https://doubledouble.top/ are much shorter for Deezer than Qobuz links. If there are no 24-bit versions of your music, use Deezer instead. Also, avoid Tidal and 16-bit FLACs from “Max” quality, which is slightly lossy MQA. Use 24-bit MQA from Tidal only when there’s no 24-bit on Qobuz. Most older albums from before 2020 are 16-bit MQA instead of 24-bit MQA on Tidal, and are lossy compared to Deezer and Qobuz, which don’t use MQA (so doubledouble doesn’t convert MQA to FLAC like on Tidal). MQA is only “slightly” lossy, because it mainly affects frequencies from 18kHz and up, and not greatly.

- Members of neighboring AI Hub server made a fork of KaraFan Colab updated with the new HQ_4 and InstVoc HQ2 models. It has slow separation fix applied. Click

- HQ_4 and Crowd models added to HV Colab temp fork before merge with main GH repo

- (MVSEP) “We have added longer filenames disabling option to mvsep, you can access it from Profile page

20240312034817-b3f2ef51cb-ballin_bs_roformer_v2_vocals_[mvsep.com].wav -> ballin_bs_roformer_v2_vocals.wav

Due to browser caching, you might want to hard refresh the page if you have downloaded onc”

- The ensembles for 2 and 5 stems on MVSEP have been updated with a bigger-SDR bag of models, now containing the new BS-Roformer v2 (with MDX23C, VitLarge23, and, for multistem, the old demucsht_ft, demucs_ht, demucs_6s and demucs_mmi models)

- All the Discord direct links leading to images in this document have expired. I already reuploaded some more important stuff. Please ping me on Discord if you need access to some specific image. Provide page and expired link.

- https://free-mp3-download.net has been shut down. Check out alternatives here.

New Apple Music ALAC/Atmos downloader added, but its installation is a bit twisted and subscription is required. Murglar added.

- MDX-Net HQ_4 model (SDR 15.86) released for UVR 5 GUI! Go to Models list > Download Center > MDX-Net and pick HQ_4 for download. It is improved and faster compared to HQ_3, trained to epoch 1149 (only in rare cases is there more vocal bleeding; more often there is instrumental bleeding in vocals, but the model is made with instrumentals in mind).

Along with it, also UVR-MDX-NET Crowd HQ 1 has been added in download center.

- HQ_4 model added to the Colab:

https://colab.research.google.com/github/kae0-0/Colab-for-MDX_B/blob/main/MDX_Colab.ipynb

- New BS-Roformer v2 model released on MVSEP. It’s a more aggressive model than the above.

- Fixed KaraFan Colab with the fix for slow non-MDX23 models. You'll no longer get stuck on voc_ft using any preset other than 1, but be aware that it will take 8 minutes more to initialize (the same fix as suggested before, but without console, as it wasn't defined, and the faster ORT nightly fix doesn't work here).

It turns out an official non-nightly package has been released, and it works with KaraFan correctly (no need to wait 8 minutes anymore):

!python -m pip -q install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

- (x-minus.pro) “Since Boosty is temporarily not accepting PayPal and generally working sucks, I made the decision to go back to Patreon. Please be aware that automatic charges will resume on March 22, 2024. If you have Boosty working correctly and do not intend to use Patreon, please cancel your Patreon subscription to avoid being charged.

If you wish to switch from Boosty to Patreon, please wait for further instructions in March.” Aufr33

- If you suffer from bleeding in the other stem of 4-stem Ripple, besides decreasing the volume by e.g. 3/4 dB, also “when u throw the 'other stem' back into ripple 4 track split a second time, it works pretty well [to cancel the bleeding]”. If it's still not enough, put the other stem through Bandlab Splitter.

- If you suffer from vocal residues using Ensemble 4 models on MVSEP.com, decrease the volume of the input file by -8 dB: “now it's silent. No more residue”. Usually 3 or 4 dB did the trick for Ripple, but here it’s different. It might depend on the song too.

- Image Line “released an update for FL Studio, and they improved the stem separation and it's better, but it has quite a bit of bleeding still, but it also seems they may have improved the vocal clarity”

- (probably fixed in the new HV MDX) Our newly fixed VR and newer HV MDX Colabs started to have issues with very slow initialization for some people (even 18+ minutes instead of the normal 3). It’s probably due to a very slow download of some dependencies. Possible solutions: use another Google account, use a VPN, or make another Google account (maybe using a Polish VPN). Let us know if it happens only for some specific dependency or for all of them. You can try to comment out the ORT nightly line in the mounting cell (add # before it), as it triggers more dependencies to be installed, which can be slow in that case. The downside is that there won't be GPU acceleration, and one song will be processed in 6-8 minutes instead of ~20 seconds.

- New paid drum separation service:

https://remuse.online/remusekit

It uses the free drumsep model (same model hash: 9C18131DA7368E3A76EF4A632CD11551)
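If you want to verify such a claim yourself, a small hashing sketch (the checkpoint filename is hypothetical, and MD5 is assumed from the 32-character hash quoted above):

import hashlib

with open("drumsep_model.th", "rb") as f:              # hypothetical checkpoint filename
    digest = hashlib.md5(f.read()).hexdigest().upper()
print(digest == "9C18131DA7368E3A76EF4A632CD11551")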

- The MDX Colab seems to not work due to NumPy issues. I already fixed them in the Similarity Colab, and will hopefully reimplement the fixes elsewhere soon. The VR Colab is fixed too.

Tech details about the introduced changes are described below the Similarity Extractor section.

- Music AI surfaced. Paid - $25 per month or pay as you go (pricing chart). No free trial. Good selection of models and interesting module stacking feature. To upload files instead of using URLs “you make the workflow, and you start a job from the main page using that custom workflow” [~ D I O ~].

Allegedly it’s made by Moises team, but the results seem to be better than those on Moises.

“Bass was a fair bit better than Demucs HT, Drums about the same. Guitars were very good though. Vocal was almost the same as my cleaned up work. (...) I'd say a little clearer than mvsep 4 ensemble. It seems to get the instrument bleed out quite well, (...) An engineer I've worked with demixed to almost the same results, it took me a few hours and achieve it [in] 39 seconds” Sam Hocking

- “I just got an email from Myxt saying they're going to limit stem creation to 1 track per month. For creator plan users (the $8 a month one) and 2 per month for the highest plan.

So I may assume with that logic, they're gonna take it away for free users?”

- (probably fixed) For all jarredou's MDX23 v. 2.3 Colab fork users:

“Components of VitLarge arch are hosted on Huggingface... when their maintenance will be finished it will work again. I can't do anything about it in the meantime.”

2.2 and 2.1 and MVSEP.com 4-8 models ensemble (premium users) should work fine.

- Ripple now has fade in and clicking issues fixed. Also, there's less bleeding in the other stem (but Bas Curtiz’ trick for -3dB/-4dB input volume decreasing can be still necessary).

“Ripple’s lossless outputs are weird, some stems like the drums are semi full band (kicks go full band, snares not etc) and the “other” stem looks like fake full band”. These fixes are applied also for old versions of the app.

Also, the lossless option fixes the offset issue to some extent, so the output is more similar to the input now, but not identical (the lossless option might require updating). Also, no more abrupt endings.

Ripple = better than CapCut as of now (and fullband).

Plus, Ripple fixed the clicks/artifacts using a cross-fade technique between the chunks.

- ViperX currently doesn't plan to release his BS-Roformer model

- New “uvr de-crowd (beta)” model added on x-minus. Seems to provide better results than the MVSEP model. Also, an MDX arch model version is planned for training.

“At minimum aggressiveness value, a second model is now used, which removes less crowd but preserves other sounds/instruments better.”

- Ripple seems to have a lossless export option now. “First make sure the app is updated then click the folder then click the magnet icon then export and change it to lossless”

- Seems like CapCut now has added separation inside Android Capcut app in unlocked Pro version

https://play.google.com/store/apps/details?id=com.lemon.lvoverseas (made by ByteDance)

Seems like there is no other Pro variant for this app.

At least the unlocked version on apklite.me has a link to the regular version, so it doesn't seem to be a Pro app behind any regional block. But -

"Indian users - Use VPN for Pro" as they say, so similar situation like we had on PC Capcut before. Can't guarantee that unlocked version on apklite.me is clean. I've never downloaded anything from there.

- Mega, GDrive, and direct link support for input files has been added on MVSep. If you want to apply an MVSep algorithm to the result of another algorithm, you can use the "Direct link" upload and point it at the https link of the separated audio file on MVSep.

- If you have an issue with Demucs module not found in e.g. MDX23 v.2.3 Colab (now fixed there and also in VR Colab), here's a solution:

“In the installation code, I added `!pip install samplerate==0.1.0` right before the `!pip install -r requirements.txt &> /dev/null` and I managed to get all the dependencies from the requirements.txt installed properly.” (derichtech15)

- If you repost your images or files from Discord elsewhere while cutting the link after "ex=" for all newly posted files, it will make your files expire pretty soon (17.02.24). If you leave the full link with "ex=" and so on, it won't expire so fast, but who knows about later.

So far, all the old Discord images shared elsewhere with "ex=" cut, work (also in incognito without Discord logged in), but it's not certain that it will be that way forever.

Discord announced at the end of 2023 that they'll update their link-sharing mechanisms so that links expire some time after they're shared, to avoid security vulnerabilities allowing scams. Or they just want to offload the servers.

- OpenVINO™ AI Plugins for Audacity 3.4.2 64-bit introduced.

4-stem separation, noise suppression,

Music Style Remix - uses Stable Diffusion to alter a mono or stereo track using a text prompt,

Music Generation - uses Stable Diffusion to generate snippets of music from a text prompt,

Whisper Transcription - uses whisper.cpp to generate a label track containing the transcription or translation for a given selection of spoken audio or vocals.

Not bad results. They use Demucs.

- For people with low VRAM GPUs (e.g. 4GB or less), you can test out Replay app, which provides voc_ft model and tends to crash less than UVR. Sadly, the choice of models is much smaller, but it has some de-reverb solution. Screenshot

- Latest MVSep changes:

1) All ensembles now have option to output intermediate waveforms from independent algorithms + additional max_mag, min_mag.

2) Ensemble All-In now includes DrumSep results extracted from Drum stem.

- resemble-enhance (GH) model added on x-minus in denoise mode. It can work better than the latest denoise model on x-minus. It is intended only for vocals. For music use UVR De-noise model on x-minus.

- (fixed in kae, 2.1, 2.2 [and KaraFan irc] Colabs) All Colabs using MDX-Net models are currently very slow. GPU acceleration is broken and separations now only work on CPU with onnxruntime warnings.

To work around the issue, go to Tools>Command palette>Use fallback runtime version (while it's still available).

Downgrading CUDA to version 11.8 fixes the issue too, but it takes 9 minutes to install that dependency, so it’s faster to use the fallback runtime while it’s still available. After that period, just execute this line after the initialisation cell:

console('apt-get install cuda-11-8') and GPU acceleration will start to work as usual.

>“Better fix [than CUDA 11.8]  until final version is released, using that onnxruntime-gpu nightly build for cuda12:

!python -m pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/

(no need to install cuda 11.8)” jarredou

- LarsNet model was added on MVSep. It's used to separate drums tracks into 5 stems: kick, snare, cymbals, toms, hihat. Source: https://github.com/polimi-ispl/larsnet

It’s worse than Drumsep as it uses Spleeter-like architecture, but “at least they have an extra output, so they separate hihats and cymbals.”. Colab

“Baseline models don't seem better quality than drumsep, but the provided checkpoints are trained with only 22 epochs, it doesn't seem much. (and STEMGMD dataset was limited by the only 10 drumkits), so it could probably be better with better dataset & training”

“ it separates the toms so much better [than Drumsep]”

Similar situation as with Drumsep - you should provide drums separated from e.g. Demucs model.

- Captain FLAM from KaraFan asks for some help due to some recent repercussions.

You can support him on https://ko-fi.com/captain_flam

- To preserve instruments which are counted as vocals by other MDXv2 models in KaraFan, use these preset 5 modified settings (dca100fb8).

- Added more remarks from testing these settings against sax preset and others.

- drumsep added on MVSEP!

(separation of drums from e.g. Demucs 4 stem or “Ensemble 8 models”/+)

- New BandIt Plus model added on MVSEP

“I trained BandIt for vocals. But it's too far away from MDX23C” -ZFTurbo

“I loved this bandit plus model!! It has great potential.”

- UVR De-noise model by FoxJoy added on x-minus. It’s helpful for light noise, e.g. vinyl. (de-reverb and de-echo are up already)

New MDX de-noise model is in the works and beta model was also added!

“the instruments in the background are preserved much better than the FoxJoy model”

It works for hiss, interference, crackle, rustles and soft footsteps, technical noise.

- New hifi-gan-bwe Colab fork made by jarredou:

https://colab.research.google.com/github/jarredou/hifi-gan-bwe/blob/main/HIFIGAN_BWE.ipynb

- New AI speech enhancer - https://www.resemble.ai/introducing-resemble-enhance

- Reason 12.5 (a DAW) was released with VST3 plugin support

- jazzpear94 “I made a new model with a modified version of my SFX and Music dataset with the addition of other/ambient sound and speech. It's a multistem model and should even work in UVR GUI as it is MDX23C.

Note: You may want to rename the config to .yaml as UVR doesn't read .yml and I didn't notice till after sending. Renaming it fixes that, however”

“You put config in models\mdx_net_models\model_data\mdx_c_configs. Then when you use it in UVR it'll ask you for parameters, so you locate the newly placed config file.”

“Keep in mind that the cinematic model focus is mainly on sfx vs instruments

voice stems are supplemental. Usually I remove voices first”

- https://github.com/karnwatcharasupat/bandit

Better SDR for Cinematic Audio Source Separation (dialogue, effect, music) than Demucs 4 DNR model on MVSEP (mean SDR 10.16>11.47)

- "Demucs+CC_Stereo_to_5.1" - it's a script where you can convert Stereo 2.0 to 5.1 surround sound. Full discussion about script. They use MVSep to get steams and after use script on them.

- Colab by jazzpear96 for using ZFTurbo's MSS training script. “I will add inference later on, but for now you can only do the training process with this!”

- New djay Pro 5.0 has “very good realtime stems with low CPU”. Allegedly “faster and better than Demucs, similar”, although “They are not realtime, they are buffered and cached.” It uses AudioShake. It can be better for instrumentals than UVR at times.

- AudiosourceRE Demix Pro new version has lead/backing vocals separation

- New crowd model added on MVSEP (applause, clapping, whistling, noise) (it has since been updated, 5.57 -> 6.06; Hollywood laughs added, old models also available)

- VitLarge23 model on MVSEP got updated (9.78>9.90 for instrumentals)

- MelBand RoFormer (9.07 for vocals) model added on MVSEP for testing purposes

“The model is really good at removing the hi-hat leftovers. These e.g. in the Jarredou colab sometimes when you can hear the hi-hats from the acapella. And Melband roformer can almost remove all the hi-hat leftovers from the acapella.”

“are the stems not inverted result? for me it sounds like there is insane instrument loss in the instrumental stem and vocals loss in the vocal stem, yet there is no vocal bleed in instrumental stem and vice versa” “I also think that the vocals are surprisingly clean considering the instrumentals sound quite suppressed but also clean”

- The Goyo Beta plugin for dereverb stopped working on December 2nd (as it required an internet connection and silent authorization on every initialization). They transitioned to the paid Supertone Clear. They send a BETA29 coupon over email (with it, it’s $29).

- New MVSep-MDX23 Colab Fork v2.3 by jarredou published under new Colab link here

Now it has Vitlarge23 model (previously used exclusively on MVSEP) instead of HQ3-Instr, also improved BigShifts and MDXv2 processing.

Doesn't seem to be better than RipX which is better in preserving some instruments, and also removes vocals completely

- Check out new Karaoke recommendations (dca100fb8)

- Dango.ai finally received English web interface translation

- New SFX model based on Mel roformer was released by jazzpear94. More info

- User friendly Colab made by jarredou and forked by jazzpear94 with new feature. In case of some problems, use WAV file.

- Seems like Ripple got updated: "it sounds a lot better and less muddied". It doesn’t seem to give better results for all songs, though. It might be a similar case with CapCut too.

- Hit 'n' Mix RipX DAW Pro 7 released. For GPU acceleration, min. requirement is 8GB VRAM and NVIDIA 10XX card or newer (mentioned by the official document are: 1070, 1080, 2070, 2080, 3070, 3080, 3090, 40XX, so with min. 8GB VRAM). Additionally, for GPU acceleration to work, exactly “Nvidia CUDA Toolkit v.11.0” is necessary. Occasionally, during transition from some older versions, separation quality of harmonies can increase. Separation time with GPU acceleration can decrease from even 40 minutes on CPU to 2 minutes on decent GPU.

- UVR BVE v2 beta has been updated on x-minus

“It now performs better on songs with 2 people singing the lead

No longer separates the second lead along with it”

-dca100fb8 found out new settings for KaraFan which give good results for some difficult songs (e.g. Juice WRLD) for both instrumental and acapella. It’s now added as preset 5.

Debug mode and God mode can stay disabled, as they are by default.

"It's like an improved version of Max Spec ensemble algorithm [from UVR]"

Processing time for 6:16 track on medium setting is 22 minutes.

- New MDX23C model added exclusively on MVSEP:

vocals SDR 10.17 -> 10.36

instrum SDR 16.48 -> 16.66

Also ensemble 4 got updated by new model (10.32>10.44 for vocals)

- For some people using mitmproxy scripts for Capcut (but not everyone), they “changed their security to reject all incoming packet which was run through mitmproxy. I saw the mitmproxy log said the certificate for TLS not allowed to connect to their site to get their API. And there are some errors on mitmproxy such as events.py or bla bla bla... and capcut always warning unstable network, then processing stop to 60% without finish.” ~hendry.setiadi

“At 60% it looks like the progress isn't going up, but give it idk, 1 min tops, and it splits fine.” - Bas

-ZFTurbo published his training code:

https://github.com/ZFTurbo/Music-Source-Separation-Training

"It gives the ability to train 5 types of models: mdx23c, htdemucs, vitlarge23, bs_roformer and mel_band_roformer.

I also put some weights there to not start training from the beginning."

It contains checkpoint of e.g. 1648 (1017 for vocals) MDX23C model to train it further.

Be aware that the older bs_roformer implementation is very slow to train, IIRC.

Vitlarge23 “is running 2 times faster than MDX models, it's not the best quality available, but it's the fastest inference”

“change the batch size in config tho

I think zfturbo sets the default config suited for a single a6000 (48gb)

and chunksize”

-"A small update to the backing vocals extractor [on X-Minus]

Now you can more accurately specify the panning of the lead vocal." ~Aufr33 Screen

- IntroC created a script for mitmproxy for Capcut allowing fullband output, by slowing down the track. Video

- Jazzpear created new VR SFX model. Sometimes it’s better, sometimes it’s worse than Forte’s model. Download

For UVR 5.x GUI, use these parameters (IIRC the same as Forte):

User input stem name: SFX

Do NOT check inverse stem!

1band sr44100 hl 1024

- Now KaraFan should work locally on 4GB GTX GPUs (e.g. a laptop 1060), on presets 2 or 3 and with chunk 500K; speed can be at its slowest. On GitHub, download via Code > ZIP

-Bas Curtiz' new video on how to install and use Capcut for separation incl. exporting:

https://www.youtube.com/watch?v=ppfyl91bJIw

and saving directly as FLAC, although the core source of FLAC is still AAC in this case:
https://www.youtube.com/watch?v=gEQFzj6-5pk

"It's a bit of a hassle to set it up, but do realize:

- This is the only way (besides Ripple on iOS) to run ByteDance's model (best based on SDR).

- Only the Chinese version has these VIP features; now u will have it in English

- Exporting is a paid feature (normally); now u get it for free

The instructions displayed in the video are also in the YouTube description."

CapCut normalizes the input, so you cannot use Bas’ trick of decreasing the volume by -3dB like in Ripple to work around the bleeding issue (unless you trick CapCut, possibly by adding some loud sound to the volume-decreased song, something like presented here).

- (fixed) KaraFan Colab will be fixed on 27th at morning.

- There’s a workaround for people not able to split using CapCut. The app discriminates based on country (poor/rich) and paywalls the Pro option.

A video demonstration of the steps below

0. Go offline.

1. Install the Chinese version from capcut.cn

2. Use these files copied over your current Chinese installation, and don’t use English patch.

3. Open CapCut, go online after closing welcome screen, happy converting!

4. Before you close the app, go offline again (or the separation option will be gone later).

Before reopening the app, go offline again, open the app, close the welcome screen, go online, separate, go offline, close. If you happen to miss that step, you need to start from the beginning of the instructions.

(replacing SettingsSDK folder no longer works after transition from 4.6 to 4.7, it freezes the app)

FYI - the app doesn’t separate files locally.

- Bas Curtiz found out that decreasing volume of mixtures for Ripple by -3dB eliminates problems with vocal residues in instrumentals in Ripple. Video.

This is the most balanced value, which still doesn't take too many details out of the song due to volume attenuation.

Other good values purely SDR-wise are -20dB > -8dB > -30dB > -6dB > -4dB > w/o vol. decr.

The method might be potentially beneficial for other models and probably work best for the loudest tracks with brickwalled waveforms.
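A minimal sketch of the attenuation step itself, if you want to prepare the file outside a DAW (file names are assumed; the same idea applies to the -8 dB suggestion for MVSEP's Ensemble 4 above):

import soundfile as sf

gain_db = -3.0                                        # try -8.0 for MVSEP Ensemble 4
mix, sr = sf.read("mixture.wav")
sf.write("mixture_attenuated.wav", mix * 10 ** (gain_db / 20), sr)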

- OpenCL version of UVR 5 GUI supporting AMD (and probably Intel) GPUs acceleration, released on GH.

Download

The initial OpenCL build requires 8GB VRAM for 3:00/3:30 tracks using the MDX23C HQ model, with 12GB VRAM probably enough for a 5:00 track, which is more than with CUDA.

Now the issue should be mitigated, and fewer memory crashes should occur.

Ensembles might require more memory due to memory allocation issues not met in CUDA before. Also, VRAM is fully freed only after closing the application.

Acceleration is not supported only for the Demucs 2 (and 1?) arch on AMD. All other archs should work.

- “MDX23C-InstVoc HQ 2 is out as a VIP model [for UVR 5]! It's a slightly fine-tuned version of MDX23C-InstVoc HQ. The SDR is a tiny bit lower, but I found that it leaves less vocal bleeding.” ~Anjok

That’s not always the case; sometimes it can even be the opposite, but as always, it can all depend on the specific song.

- Be aware that full MPS (GPU) acceleration was also introduced for Mac M1 for all MDX-Net original models (HQ3, etc.), all MDX23C models, and all Demucs v4 models (no GPU acceleration for VR models). So don’t run UVR in a Windows VM; use the separate dmg installer from releases (ARM) instead. GPU acceleration is 3x faster than separation was on CPU before.

- jarredou’s MDX23 2.2 Colab should allow separating faster, and also longer files now (tech details)

- All-in ensemble added for premium users of MVSEP - it has vocals, vocals lead, vocals back, drums, bass, piano, guitar, other. Basically 8 stems (and from drums stem you can further separate single percussion instruments using drumsep - up to 4 instruments, so it will give 10 stems in total).

- https://www.capcut.cn/ (outdated section: read)

Is a new Windows app which contains Ripple/SAMI-Bytedance inst/vocal model (not 4 stems like in Ripple).

“At the moment the separation is only available in Chinese version which is jianyingpro, download at capcut.cn [probably here - it’s where you’re redirected after you click “Alternate download link” on the main page, where download might not work at all]

Separation doesn't require sign up/login, but exporting does, and requires VIP.

Separated vocal file is encrypted and located in C:\Users\yourusername\AppData\Local\JianyingPro\User Data\Cache\audioWave”

The unencrypted audio file in AAC format is located at \JianyingPro Drafts\yourprojectname\Resources\audioAlg (ends with download.aac)

Drag and drop it in Audacity or convert to WAV (https://cloudconvert.com/aac-to-wav)
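As an alternative to online converters, a one-liner sketch with pydub (it needs ffmpeg installed and on PATH; the input name is the file mentioned above, the output name is made up):

from pydub import AudioSegment

AudioSegment.from_file("download.aac").export("download.wav", format="wav")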

“To get the full playable audio in mp3 format a trick that you can do is drag and drop the download.aac file into capcut and then go to export and select mp3. It will output the original file without randomisation or skipping parts”

“Trying out Capcut, the quality seems the same as the Ripple app (low bitrate mp3 quality)

at least the voice leftover bug is fixed, lol”

Random vocal pops from Ripple are fixed here.

Also, it still has the same clicks every 25 seconds as before in Ripple.

Some people cannot find the settings on this screen in order to separate. Maybe it’s due to lack of Chinese IP, or Chinese regional settings in Windows, but logging wasn’t necessary from what someone told.

- Looks like the guitar model on MVSEP can pick up piano better than the piano model available there in lots of cases (isling)

- AudioSep has been released

https://github.com/Audio-AGI/AudioSep

(separate anything you describe)

https://replicate.com/cjwbw/audiosep?prediction=j7dsrvtbyxfm3gjax3vfzbf7py

(use short fragments as input)

https://colab.research.google.com/github/badayvedat/AudioSep/blob/main/AudioSep_Colab.ipynb (basic Colab)

https://huggingface.co/spaces/badayvedat/AudioSep (it’s down)

"so far it's ranged from mediocre to absolutely horrible from samples I've tried"

"So far[,] it does [a] great job with crowd noise/cheering."

Didn't pick piano.

Output is mono 32kHz. Where input is 30s, the output can be 5s.

- UVR started to process slower for some people using Nvidia 532 and 535 drivers (at least Studio ones on at least W11). More about the issue. Consider rolling back to 531.79.

“Took 10 seconds to run Karaoke 2 on a full song (~5 mins), with the latest drivers it took like 20 minutes”. The problem may occur once you reboot your system.

- AMD GPU acceleration has been introduced in the official UVR repo under a new branch on GH. A beta exe patch will be released in the following days. Currently, it supports only MDX-Net (but not MDX23C), Demucs 4 models (not 3), and the VR arch (5.0, but not 5.1).

Currently, GPU memory is not clearing, so you need a lot of VRAM in order to use ensembles.

- (x-minus) "Added additional download buttons when using UVR BVE model.**

Now you can download:

- song without backing vocals

- backing vocals

- instrumental without vocals

- all vocals" Anjok

- MacOS UVR versions should be fixed now - redownload the latest 5.6 patches. GPU processing on M1 is fully functioning with MacOS min. Monterey 12.3/7 (only VR models will crash with GPU processing). It’s very fast for the latest MDX23C fullband model - 11 minutes vs 1 hour on CPU previously.

- Cyrus version of MedleyVox Colab with chunking introduced, so you don't need to perform this step manually

https://colab.research.google.com/drive/1StFd0QVZcv3Kn4V-DXeppMk8Zcbr5u5s?usp=sharing

“Run the 1st cell, upload the song to the folder infer_file, run the 2nd cell, get results from the folder results = profit”

“one annoying thing is that it always converts the output to mono 28k”

- Separation times since the UVR 5.6 update doubled for some people. Almost the same goes for RAM usage.

Having lots of space on your system disk or additional partition assigned for pagefile can be vital in fixing some crashes, especially for long tracks. Be aware that CPU processing tends to crash less, but it's much slower in most cases.

"I realized that with 2-3h long audio files, I was able to use Demucs, after I added another 32GB of RAM. In Total my system got 64GB and I increased the swap file to 128GB, which is located on an NVME drive.... so just in case the 64GB RAM are not enough, which I experienced with the "Winds" model, it's not crashing UVR, instead using the swap."

- Segments set to the default 256 instead of 512 is ⅓ faster for the new MDX23C fullband model, at least for 4GB cards. But it's still very slow on e.g. an RTX 3050 mobile variant (20 minutes for a 3:40 song).

- Sometimes inverting the vocals against the mixture using MDX23C, instead of using the instrumental output, can give better results, and vice versa.

“Differences were more significant with D1581 [than fullband], but secondary vocals stem has "a bit" higher score” (click). Generally, inversion of these MDX23C models (but not spectral inversion) sometimes gave better results.

- MedleyVox Colab preconfigured to use with Cyrus model

Newer model epochs can be found here:

https://huggingface.co/Cyru5/MedleyVox/tree/main

Q: What is isrnet?

A: It's basically just another model that builds on top of what I've built so far that performs better. That's the surface level explanation, at least.

- Settings for v2.2.2 Colab

https://colab.research.google.com/github/jarredou/MVSEP-MDX23-Colab_v2/blob/v2.2/MVSep-MDX23-Colab.ipynb

If you suffer from some vocal residues, try out these settings

BigShifts_MDX: 0

overlap_MDX: 0.65

overlap_MDXv3: 10

overlap demucs: 0.96

output_format: float

vocals_instru_only: disabled

Also, you can experiment with the weights.

E.g. different weight balance, with less MDXv3 and more VOC-FT.

- As an addition to the AI-killing tracks section, and in response to the deletion of the "your poor results" channel, a Gsheet with your problematic tracks to fill in was recently created. It is open for everyone to contribute.

- Video tutorial by Bas Curtiz on how to install MedleyVox (based on Vinctekan's fixed source). Cyrus trained a model. MedleyVox serves to separate different singers from a track. It sometimes does a better job than BVE models in general.

Sadly, it has 24kHz output sample rate, but AudioSR works pretty good for upscaling the results.

https://github.com/haoheliu/versatile_audio_super_resolution

https://replicate.com/nateraw/audio-super-resolution

https://colab.research.google.com/drive/1ILUj1JLvrP0PyMxyKTflDJ--o2Nrk8w7?usp=sharing

Be aware that it may not work with full length songs - you might need to divide them into smaller 30 seconds pieces.
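A minimal chunking sketch for that case (file names assumed; it writes consecutive ~30-second WAV pieces you can feed to AudioSR one by one):

import soundfile as sf

data, sr = sf.read("medleyvox_result.wav")
chunk = 30 * sr                                       # 30 seconds of samples
for i in range(0, len(data), chunk):
    sf.write(f"chunk_{i // chunk:03d}.wav", data[i:i + chunk], sr)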

- "Ensemble 4/8 algorithms were updated on MVSep with new VitLarge23 model. All quality metrics were increased:

Multisong Vocals: 10.26 -> 10.32

Multisong Instrumental: 16.52 -> 16.63

Synth Vocals: 12.42 -> 12.67

Synth Instrumental 12.12 -> 12.38

MDX23 Leaderboard: 11.063 -> 11.098

I added Ensemble All-In algorithm which includes additionally piano, guitar, lead/back vocals. Piano and guitar has better metrics comparing to standard models, because they are extracted from high quality "other" stem. Lead/back vocals also has slightly better metrics.

piano: 7.31 -> 7.69

guitar: 7.77 -> 8.95" ZFTurbo

- New vocal model added on MVSEP:

"VitLarge23" it's based on new transformers arch. SDR wise (9.78 vs 10.17) it's not better than MDX23C, but works "great" for ensemble consisting of two models with weights 2, 1.

- MVSEP-MDX23-Colab fork v2.2.2 is out.

It is now using the new InstVocHQ model instead of D1581:

https://github.com/jarredou/MVSEP-MDX23-Colab_v2/

Memory issues with 5:33 songs fixed (even 19 minutes long with 500K chunks supported)

It should be slightly faster than the previous version, as the extra processing for the fullband trick is not needed anymore with the new model.

Q: Why is "overlap_MDX" set to 0.0 by default in MVSEP-MDX23-Colab_v2 ?

A: because it's a "doublon" with MDX Big Shifts (that is better)

- Stable final version of UVR v5.6.0 has been released along with MDX23C fullband model (the same as on MVSEP) - SDR is 10.17 for vocals & 16.48 for instrumentals.

It’s called MDX23C-InstVoc HQ.

https://github.com/Anjok07/ultimatevocalremovergui/releases/

Be aware it takes much more time to process a song with it than with all previous models. Also, it doesn’t require a volume compensation setting. It can leave more vocal residues than the HQ_3 model for some songs. On the other hand, it can give very good results with songs with a “super dense mix like Au5 - Snowblind”, but also for older tracks like Queen - March Of The Black Queen (it always caused issues, but this gave the best result so far, although a lot of BV is still missed).

Performance:

- A 3:30 track with HQ_3 takes up to 24 minutes on an i3-3217U, while the new model takes 737 minutes (precisely 1:34 vs 41:00 for a 15-second song).

- RTX 3060 12 GB - takes around 15 minutes to process a 25 minutes file with the new model.

- A GTX 1080 Ti took about 4 minutes to process an approximately 5:30 song

- If you upgraded from beta, Matchering might not work correctly. In order to fix the error:

Go to the Align tool.

Select another option under "Volume Adjustment", it can be anything.

Now, matchering should work. The fix may not apply for Linux installations.

- The original KaraFan Colab seems to work now (v. 3.1), but one track with default settings takes 30 minutes for a 3:37 track on a free T4 (the last files processed are called Final), and it can get you disconnected from the runtime quickly (especially if you miss multiple captcha prompts). V. 3.1 can have more vocal residues than the 1.x version, and even more than the HQ_3 model on its own.

You might want to consider using older versions of KF with Kubinka Colab.

- Now version 3.2 has been released with fewer vocal residues.

As mentioned before, after a runtime disconnection error, the output folder is still constantly populated with new files, while the progress bar is not refreshed after clicking close, or even after closing your tab with the Colab open.

-"Image-Line the company that made Fl Studio 21 took to instagram announcing a beta build that allows the end users to separate stems from the actual program itself, this is in beta and isn’t final product"

People say it's Demucs 4, but maybe not ft model and/or with low parameters applied or/and it's their own model.

"Nothing spectacular, but not bad."

"- FL Studio bleeds beats, just like Demucs 4 FT

- FL Studio sounds worse than Demucs 4 FT

- Ripple clearly wins"

-Org. KaraFan Colab with v. 3.0 should work with the large GPU option disabled (now done by default).

- You may experience issues with KaraFan 3.0 alpha (e.g. the lack of 5_F-music, with which the result was better before), and the Kubinka Colab, which uses the older version for now, has some problems with GPU acceleration. Maybe the previous KF commit will work, or even the one before (2.x is used here).

-New UVR beta patches for Windows/Mac/M1 at the bottom of the release note

https://github.com/Anjok07/ultimatevocalremovergui/releases/

Usually check for newer versions above, but this one currently fixes long error on using the new BVE model

https://github.com/Anjok07/ultimatevocalremovergui/releases/download/v5.5.0/UVR_Patch_9_20_23_20_40_BETA.exe

- “The new BVE (Background Vocal Extractor) model [in UVR 5 GUI] has been released!

To use the BVE model, please make sure you use the UVR_Patch_9_18_23_18_50_BETA patch (Mac). Remember, it's designed to be used in a chain ensemble, not on its own. It's better to utilize it via "Vocal Splitter Options". ~Anjok”

Using Lead vocal placement = stereo 80% is still available on X-Minus only. The UVR GUI doesn't support this yet - it’s for the situation when your main vocals are confused with backing vocals.

- In the latest UVR GUI beta patch, vocal stems of MDX instrumental models have polarity flipped. You might want to flip it back in your DAW.
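Outside a DAW, flipping polarity back is just multiplying the samples by -1; a tiny sketch with an assumed file name:

import soundfile as sf

voc, sr = sf.read("vocals_from_inst_model.wav")       # assumed file name
sf.write("vocals_polarity_fixed.wav", -voc, sr)       # same as a DAW's invert-phase switch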

- Investigating KaraFan shapes issue > link

- New piano and guitar models added on MVSEP. Use other stem from e.g. “Ensemble 8 models” or MDX23 Colab or htdemucs_ft for better results.

- To separate electric and acoustic guitar, you can run a song (e.g. other stem) through the Demucs guitar model and then process the guitar stem with GSEP (or MVSEP model instead of one of these).

Gsep only can separate electric guitar so far, so the acoustic one will stay in the "other" stem.

- New UVR beta patch implements chain ensemble from x-minus for splitting backing and lead vocals. To use it:

1. Enable "Help Hints" (so you can see a description of the options),

2. Go to any option menu

3. Click the "*Vocal Splitter Options*"

4. From there you will see the new chain ensemble options.

Patch (patching from the app may cause startup issues)

- "New MDX23C model improved on [MVSEP] Leaderboard from 10.858 up to 11.042"

- "For those of you who were running into errors related to missing *"msvcp140d.dll"* and *"VCRUNTIME140D.dll"* after installing the latest patch, it's been fixed." -Anjok

UVR_Patch_9_13_23_17_17_BETA

- UVR's latest beta 9 patch causes a startup issue for lots of people, even on clean Windows 10. There's no fix for it. Copying libraries manually or installing all possible redistributables doesn't work. In such a case, use the beta 8 patch.

- If you see an error that you're disconnected from KaraFan Colab, it can still separate files in the background and consume free "credits" till you click Environment>Terminate session. It happens even if you close the Colab.

So, you can see your GDrive output folder still constantly populated with new files, while the progress bar is not refreshed after a runtime disconnection error, or even after closing your tab with the Colab.

- KaraFan got updated to 1.2 (eg. model picking was added). Deleting your old KaraFan folder on GDrive can be necessary to avoid an error now in Colab.

- KaraFan - the next version of the MDX23 fork (originally developed by ZFTurbo, enhanced and forked by jarredou) has been created by Captain FLAM (with jarredou’s assistance on tweaks).

Official Colab (video guide in case of problems)

Colab forked by Kubinka (can show error now after 1.2 update)

GUI for offline use: https://github.com/Captain-FLAM/KaraFan/tree/master

It gives very clean instrumentals with much less of consistent vocal residues than in MDX23 2.0-2.2 and Ripple/Bytedance.

(might have been changed) You can also disable SRS there to get a bit cleaner result, but at the cost of more vocal residues. How detectable it will be without SRS depends on the track - e.g. whether it has heavily compressed modern vocals and lots of places with a not-so-busy mix (when not a lot of instruments play). Disabled SRS adds a substantial amount of information above 17.7kHz.

One of our users had problems seemingly caused by an empty Colab Notebooks folder, which they needed to delete. It could have been something else they did, though.

- New epoch of new BVE model has been added to x-minus

“In some parts the new BVE is better, in some it's worse. Still a great model”

> To get better results, you can downmix the result to mono and repeat the separation

- For people having issues with Boosty x-minus payment:

https://boosty.to/uvr/posts/5d88402e-9eb1-4046-a00a-cf8b09e27561

- Sometimes, for instrumental residues in vocals, AIs intended for voice recorded with a home microphone can be used (e.g. Goyo, or even Krisp, RTX Voice, AMD Noise Suppression, or Adobe Podcast as a last resort). It all depends on the type of vocals and how destructive the AI can get.

- iZotope Ozone 11 has been released. It’s $1200 for the Advanced edition, the only version possessing Spectral Recovery. Music Rebalance is said to use Demucs instead of Spleeter now.

https://www.izotope.com/en/products/ozone.html

- Acon Digital has released Remix, their first plug-in capable of real-time separation to five stems: Vocals, Piano, Bass, Drums, and Other.

“Just listened to the demo, not great but still”

-RemFX for detection and removal of the following effects: chorus, delay, distortion, dynamic range compression, and reverb. Huggingface | Samples

The Colab is currently slow while downloading checkpoints from zenodo (400KB/s for 1GB file out of 6).

Outputs on Huggingface are mono and may not work in every case; the website in general doesn't work well with big files, so keep them short, 0-30 seconds.

It's not better than our dereverb model in UVR.

- Beta UVR patch also released for x86_64 & M1 Macs:

https://github.com/TRvlvr/model_repo/releases/download/uvr_update_patches/UVR_Patch_8_28_23_2_9_BETA_MacOS_x86_64.zip

“If you have any trouble running the application, and you've already followed the "MacOS Users: Having Trouble Opening UVR?" instructions here, try the following:

Right-click the "Ultimate Vocal Remover" file and select "Show Package Contents".

Go to -> Contents -> MacOS ->

Open the "UVR" binary file.”

In case of further issues, check this out:
https://www.youtube.com/watch?v=HQsazeOd2Iw&feature=youtu.be

Looks like e.g. with Denoise Lite models it can ask for parameters. Set 4band_v3 and 16 channels, press yes on empty window.

“The Mac beta is not stable yet.” - Anjok

-"The new beta [UVR] patch has been released! I made a lot of changes and fixed a ton of bugs. A public release that includes the newest MDX23 model will be released very soon. Please see the change log via the following message - https://discord.com/channels/708579735583588363/785664354427076648/1145622961039101982"

Patch:

https://github.com/TRvlvr/model_repo/releases/download/uvr_update_patches/UVR_Patch_8_28_23_2_9_BETA.exe

-"I found a way to bypass the free sample limits of Dango.ai. With VPN and incognito, when the limit appears, change the date on the computer or other device (I set the next day) and close and re-open the incognito tab. Sometimes it can show network error, in such case restart the VPN and re-enter in incognito again" Tachoe Bell

- Bas' guide to change region to US for Ripple on iOS

https://media.discordapp.net/attachments/708595418400817162/1146727313963237406/Ripple_iOS_iPad_mini_2_-_demo.mp4

- Another way to use Ripple without Apple device

Sign up at https://saucelabs.com/sign-up

Verify your email, upload this as the IPA: https://decrypt.day/app/id6447522624/dl/cllm55sbo01nfoj7yjfiyucaa

The rotating puzzle captcha for the TikTok account can be taxing due to the low framerate. Some people can do it after two tries; others will sooner run out of credits or be completely unable to do it.

- Every 8 seconds there is a chunking artifact in Ripple. The Heal feature in Adobe Audition works really well for it:

https://www.youtube.com/watch?v=Qqd8Wjqtx-8

-The same explained on RX 10 example and its Declick feature:

https://www.youtube.com/watch?v=pD3D7f3ungk

- Ripple/SAMI Bytedance's API was found. If you're Chinese, you can go through it easier.

The sami-api-bs-4track (the one with 10.8696 SDR Vocals) - you need to pass the Volcengine facial/document recognition apparently only available to Chinese people

https://www.volcengine.com/docs/6489/72011

We already evaluated its SDR, and it even scored a bit better than Ripple itself.

This is the Ripple audio uploading API:

https://github.com/bitelchux/TikTokUploder/blob/2a0f0241a91b558a7574e6689f39f9dd9c39e295/uploader.py

there's a sample script on the volcengine SAMI page

"API from volcengine only return 1 stem result from 1 request, and it offers vocal+inst only, other stems not provided. So making a quality checker result on vocal + instrument will cost 2x of its API charging

something good is that volcengine API offers 100 min free for new users"

API is paid 0.2 CNY per minute.

It takes around 30 seconds for one song.

It was 1.272 USD for separating 1 stem out MVSEP's multisong dataset (100 tracks x 1 minute).

- (outdated) Using Ripple on an M1 remote machine turned out to be successful but very convoluted.

https://discord.com/channels/708579735583588363/708579735583588366/1143710971798507520

-It is possible that "a particular song that an older version of mdx23 (mdx23cmodel3.ckpt) has a much better extraction than D1581 and the current 4 model ensemble on MVSEP for preserving the instruments (also organ-like instruments)"

-Seems like Google raised Colab limit for free users from 1 hour to 5 hours. It depends on a session, but in most cases you should be able to perform tasks taking above 4 hours now.

-How to change region to US in Apple App Store to make "Ripple - Music Creation Tool" (SAMI-Bytedance) work.

https://support.apple.com/en-gb/HT201389

https://www.bestrandoms.com/random-address-in-us

Or use this Walmart address in Texas, the number belongs to an airport.

Do it in App Store (where you have the person-icon in top right).

You don't have to fill in credit card details; when you are rejected,

reboot, check the region/country... and it can be set to the US already.

Although, it can happen for some users that it won't let you download anything forcing your real country.

"I got an error because the zip code was wrong (I did enter random numbers) and it got stuck even after changing it.

So I started from the beginning, typed in all the correct info, and voilà"

If “you have a store credit balance; you must spend your balance before you can change stores”.

It needs (an old?) a sim card to log your old account out if necessary.

- The long-awaited app by ByteDance, built on one of their SAMI variants from the MDX23 competition (which holds the top of our MVSEP leaderboard), was published on iOS for the US region only

(with a separate sign-up for beta testing, likewise unavailable to people outside the US; the app is in the official store already anyway, but that sign-up predates the official release at the end of June, so it's older news).

It's a multifunctional app for audio editing, which also contains a separation model.

It's free, called:

"Ripple - Music Creation Tool"

https://apps.apple.com/us/app/ripple-music-creation-tool/id6447522624

The app requires iOS 14.1

(it's only for iOS).

Output files are 4 stems 256kbps M4A (320 max).

Currently, the best SDR for public model/AI, but it gives the best results for vocals in general. For instrumentals, it rather doesn’t beat paid Dango.ai (and rather not KaraFan too).

"My only thought is trying an iOS Emulator, but every single free one I've tried isn't far-fetched where you can actually download apps, or import files that is"

Sideloading of this mobile iOS app is possible on at least M1 Macs.

"If you're desperate, you can rent an M1 Mac on Scaleway and run the app through that for $0.11 an hour using this https://github.com/PlayCover/PlayCover"

IPA file:

https://www.dropbox.com/s/z766tfysix5gt04/com.ripple.ios.appstore_1.9.1_und3fined.ipa?dl=0

"been working like a dream for me on an M1 Pro… I've separated 20+ songs in the last hour"

"bitrise.com claims to have M1s and has a free trial"

Scaleway method:
https://cdn.discordapp.com/attachments/708579735583588366/1146136170342920302/image.png

“keep in mind that the vm has to be up for 24 hours before you can remove it, so it'll be a couple bucks in total to use it”

"I used decrypted ipa + sideloadly

seems that it doesn't have internet access or something"

So far, Ripple didn't beat voc_ft (although there might be cases when it's better) and Dango. Samples we got months ago are very similar to those from the app, also *.models files have a SAMI header and MSS in model files (which use their own encryption), although processing is probably fully reliant on external servers as the app doesn't work offline (also the model files are suspiciously small - a few megabytes, although that is specific to mobilenet models). It's probably not the final iteration of their model, as they allegedly told someone they were afraid that their model would leak, but it's better than the first iteration judging by SDR, even with lossy input files.

Later they said that it's a different model from the one they previously evaluated, and that at the time it was trained with lossy 128 kbps files due to some “copyright issues”.

Most importantly, it's good for vocals, also for cleaning vocal inverts, and surprisingly good for e.g. Christmas songs (it handled hip-hop, e.g. Drake, pretty well). It's better for vocals than instrumentals due to residues in the other stem - bass is “so” good, drums also decent. Vocals can be used for inversion to get instrumentals, and the result may sound clean, but rather not as good as what the 2 stem option or a 3 stem mixdown gives.

Other-stem residues appear because, as they said, the other stem is taken from the difference of all remaining stems - they didn't train a dedicated other-stem model, to save on separation time.

"One thing you will notice is that in the Strings & Other stem there is a good chunk of residue/bleed from the other stems, the drum/vocal/bass stems all have very little to no residue/bleed" doesn't exist in all songs.

It's fully server-based, so they may be afraid of heavy traffic publishing the app worldwide, and it's not certain that it will happen.

Thanks to Jorashii, Chris, Cyclcrclicly, anvuew and Bas.

Press information:

https://twitter.com/AppAdsai/status/1675692821603549187/photo/1

https://techcrunch.com/2023/06/30/tiktok-parent-bytedance-launches-music-creation-audio-editing-app/

Beta testing

https://www.ripple.club/

- Following models added on MVSep:

UVR-De-Echo-Aggressive

UVR-De-Echo-Normal

UVR-DeNoise

UVR-DeEcho-DeReverb

They are all available under the "Ultimate Vocal Remover HQ (vocals, music)" option (MDX FoxJoy MDX Reverb Removal model is available as a separate category).

- If you were looking for a way to pay for Dango using Alipay: they recently introduced the option to link foreign cards, and if that fails (it sometimes does), you can open a 6-month “tourcard” (and open a new one later if necessary), but only Visa, Mastercard, Diners Club and JCB cards are supported for topping the tourcard up

https://ltl-beijing.com/alipay-for-foreigners/

 

- Dango no longer supports Gmail email accounts

- New piano model added on MVSEP. SDR-wise it’s better than GSep, but GSep is probably also using some kind of processing in order to get better separation results, but e.g. Dango instrumentals can be inverted to get just vocals despite the fact they claim to use some recovery technology.

- arigato78 method for main vocals

-Captain Curvy method for instrumentals added in instrumentals models list section (the top link)

- For canceling room reverb check out:

Reverb HQ

then

De-echo model (J2)

- Sometimes voc_ft can pick up SFX

- Install UVR5 GUI only in the default location picked by the installer. Otherwise, you might get python39.dll error on startup. If you see that error after installing the beta patch, reinstall the whole app.

- A few of our users finally evaluated the new dango.ai 9.0 models sonically. Turns out the models are not UVR's (or no longer), and actually give results pretty close to the original instrumentals, but not so good vocals.

"It's slightly better but still voc ft keeps more reverb/delays

but again, it's 99% close, Dango has maybe more noise reduction" maybe even less instrumental residues (can be a result of noise reduction).

"A bit cleaner than voc_ft in terms of having synths/instruments, but they do sound a bit filtered at times. [In] overall it's close tho"

"I discovered Dango's conservative mode keeps instrumentals even fuller, but might introduce some background vocals

still quite better than what we have.

I'm still surprised how it's so clean, as if not having vocal residues like any other MDX model. Sometimes the Dango sounds like a blend of VR's architecture, but I'm probably wrong, it could be the recovery technology" - becruily

https://tuanziai.com/vocal-remover/upload

You must use the built-in site translate option in e.g. Google Chrome, because it's Chinese.

On Android, it may not work correctly. In case of further issues, use Google Translate or one of Yandex apps with image to text translators.

You are able to pay for it using Alipay outside China.

Dango redirects to Tuanziai site - it's the same.

https://tuanziai.com/encouragement

Here you might get 30 free points (for 2 samples) and 60 paid points (for 1 full song) "easily".

Dango.ai scores badly in SDR leaderboards due to the recovery algorithms applied. Probably a similar situation to GSep.

- New BVE model on X-Minus for premium users. One of, if not the best so far. It uses voc_ft as a preprocessor.

"BVE sounds good for now but being an (u)vr model the vocals are soft (it doesn’t extract hard sounds like K, T, S etc. very well)"

"Pretty good, if still [in] training. Seems to begin a phrase with a bit of confusion between lead and backing, but then kicks in with better separation later in the phrase. Might just be the sample I used, though."

- Jarredou published the final 2.2 version of the MDX23 Colab (don't confuse it with MDX23C single models v3 arch) - it gives more vocal residues than 2.0/2.1, but better SDR. Now it has the SRS trick, big shifts, new fine-tuning, separate overlap parameters for MDX, MDXv3 and Demucs models, and also includes one narrowband MDX23C model, D1581, among the other MDX ones, which constitutes a new set of models now (it's also said to use VOC-FT Fullband SRS instead of UVR-MDX-Instr-HQ3, although HQ3 is still listed during processing). You can also use a faster optional 2-stem-only output (the demucs_ft vocal stem is used here only). The float parameter returns 32-bit WAV. Don't set overlap v3 to more than 10, or you'll get an error; errors can be way more frequent with odd values.

Changing weights added: “For residues, I would first try a different weight balance, with less MDXv3 and more VOC-FT, as model D1581, and current MDXv3 models in general tend to have more residues than VOC-FT.”

- New "8K FFT full band" model published on MVSEP. Currently, a better score than only 2.2 Colab above from commonly available solutions, although more vocal residues than current default on MVSEP at least in some cases, and “voice sounded more natural [in default] than the new 10 SDR model” but in some problematic songs it can even give the best results so far.

"Sometimes 8K FFT model is false detect the vocals, in the vocal stem synth was treated as vocal. On instrumental stem, mostly are blur result compared with 12K FFT. But 12K FFT seems to be some vocal residue but very less heard (like a whisper) and happened for several songs, not all songs."

- "The karaoke ensemble works best with isolated vocals rather than the full track itself" Kashi

- Center isolation method further explained in Tips to enhance separation, step 19

- VR Kara models freeze on files over ~6 minutes in UVR beta 2 (GTX 1080).

>Divide your song into two parts.
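
A minimal sketch of splitting a track into two halves with Python (soundfile; file names are placeholders):

import soundfile as sf

audio, sr = sf.read("long_song.wav")               # placeholder file name
half = len(audio) // 2                             # split point in samples
sf.write("long_song_part1.wav", audio[:half], sr)
sf.write("long_song_part2.wav", audio[half:], sr)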

- New public dataset published by Moises (MoisesDB). There are some problems with downloading it now: it's 82.7 GB and the link expires after 600 seconds of downloading - not enough at 30 MB/s, but fine on a 10 Gbps connection. The Moises team is working on the issue. It's probably fixed already.

- RipX inside the app uses UVR for gathering stems now. Consider also comparing its stem cleanup feature to RX 10 debleed in RX Editor.

- “RipX is badass for removing residues and harmonics from vocals. The ability to remove harmonics & BGVs using RipX is amazing but is very tedious but so far so good” (Kashi)

- Sometimes using vocal model like voc_ft on the result from instrumental model might give less vocal residues or sometimes even none (Henry)

- From now on, mvsep1.ru contains the content of the old mvsep.com (so without MDX23/C and login features), while mvsep.com now has the richer content previously on mvsep1.ru

The old leaderboard link has changed and is now:

https://mvsep1.ru/quality_checker/leaderboard2.php?sort=instrum

- old domain is also fixed now, redirecting leaderboard links.

If your upload in the quality checker gets stuck, clear your browser data and start over.

- The dereverb and denoise models for the VR arch are not compatible with any VR Colab, and manual installation of such a model will fail with errors. It requires modifying nets and layers. More

- New best ensemble (all Avg/Avg)

(read entries details on the chart for settings - they can have very time-consuming parameters and differ in that aspect)

#1 MDX23C_D1581 + Voc FT | #2 MDX23C_D1581 + Inst HQ3 + Voc FT  | #3

MDX23C_D1581 + Inst HQ3 + Voc FT

Be aware that the above can sound noisy or have vocal leaks at times; in that case consider using HQ_3 or kim inst. Also:

- The best ensembles so far in Kashi's testing for general use:

Kim Vocal 2 + Kim FT other + Inst Main + 406 + 427 + htdemucs_ft avg/avg, or:

Voc FT, inst HQ3, and Kim FT other (kim inst)

“This one's much faster than the first ensemble and sometimes produces better results”

It all depends on a song. Also, sometimes "running one model after another in the right order can yield much better results than ensembling them".

- Disable "stem combining" for vocal inverted against the source. Might be less muddy, possibly better SDR.

It's there in MDX23C because now the new arch supports multiple stems separation in one model file.

- Disabling "match freq cutoff" in advanced MDX settings seems to fix issues with 10kHz cutoff in vocals of HQ3 model.

- New explanations on Demucs parameters added in Demucs 4 section

(shifts 0, overlap 0.99 won in SDR vs shifts 1, overlap 0.99 and even shifts 10, overlap 0.95)

- "Last update of Neutone VST plugin has now a Demucs model to use in realtime in a DAW

(it's a 'light' version of Demucs_mmi)

https://neutone.space/models/1a36cd599cd0c44ec7ccb63e77fe8efc/

It doesn't use GPU, and it's configured to be fast with very low parameters, also the model is not the best on its own. It doesn't give decent results, so it's better to stick to other realtime alternatives (see document outline)

- Turns out that with a GPU with lots of VRAM e.g. 24GB, you can run two instances of UVR, so the processing will be faster. You only need to use 4096 segmentation instead of 8192.

SDR difference between overlap 0.95 and 0.99 for voc_ft MDX model in (new/beta) UVR is 0.02.

0.8 seems to be the best point for ensembles

12K segmentation performed worse than 4K SDR-wise

- Recommended balanced values between quality and time for 6GB graphic cards in the latest beta:

VR Architecture:

Window Size: 320

MDX-Net:

Segment Size: 2752 (1024 if it’s taking too long)

Overlap: 0.7/0.8

Demucs:

Segment: Default

Shifts: 2 (def)

Overlap: 0.5

(exp. 0.75,

def. 0.25)

"Overlap can reduce/remove artifacts at audio chunks/segments boundaries, and improve a little bit the results the same way the shift trick works (merging multiple passes with slightly different results, each with good and bad).

But it can't fix the model flaws or change its characteristics"
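
If you want to try the shift-trick idea outside UVR, here is a minimal sketch - `separate` is a placeholder for whatever model call you actually use, not a real UVR/Demucs function:

import numpy as np

def shift_trick(mixture, separate, shifts=4, max_shift=22050):
    # mixture: float array shaped (samples, channels); separate: placeholder callable
    rng = np.random.default_rng(0)
    acc = np.zeros_like(mixture)
    for _ in range(shifts):
        offset = int(rng.integers(1, max_shift))
        padded = np.pad(mixture, ((offset, 0), (0, 0)))        # delay the input
        acc += separate(padded)[offset:offset + len(mixture)]  # undo the delay
    return acc / shifts                                        # average the passes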

“Best SDR is a hair more SDR and a shitload of more time.

In case of Voc_FT it's more nuanced... there it seems to make a substantial difference SDR-wise.

The question is: how long do u wanna wait vs. quality (SDR-based quality, tho)”

- A script with guide for separating multiple speakers in a recording added

- If you're stuck at 5% of separation in UVR beta, try to divide your audio into smaller pieces (that's beta's regression)

- A new separation site appeared, giving seemingly better results than Audioshake:

https://stemz.mwm.io/

“Guitar stem seems better than Demucs, piano maybe too. Drums sound like Spleeter. Vocal bleeds in most of the stems, or not vocals are picked up, so they end up in the synths. But that's just from one song test” becruily

- Drumsep Colab now has GPU acceleration and much better max quality optional settings

- 1620 MDX23C model added on x-minus. Unlike the model in UVR, it's fullband and not released yet (16.2 SDR).

"Even if the separations have more bleeding than VOC-FT (and it's an issue), the voice sound itself is much fuller, "in your face" compared to VOC-FT, that I now find it like blurry sounding compared to MDXv3 models.

I think that's why the new MDXv3 models are scoring better despite having more bleeding (at the moment, like I said before, trainers/finetuners have to get familiar with new arch, and that will probably help with that new bleed issue)."

- New MDX23C model added on MVSEP (better SDR - 16.17)

- UVR beta patch 2 repairing no audio issue with GPU separation on the GTX 1600 series using MDX23C arch. Fixes some other bugs too.

- A narrowband MDX23C vocal model (MDX23C_D1581, a.k.a. model_2_stem_061321) trained by the UVR team has been released. SDR is said to be better than voc_ft (but the latter was evaluated with the older non-beta patch). Be aware that CPU processing returns errors for MDX23C models, at least on some configs (“deserialize model on CUDA” error). Fullband models will be released in a few weeks (and, as usual, on x-minus first, a few weeks earlier). Download (install the beta patch first and drop it into the MDX-Net models folder). The patch is Windows-only for now, with a Mac patch planned later. For Linux, there's probably a source version of the patch already out.

MDX23C_D1581 parameters are set up by its yaml config file, and its n_fft value is 12288, not 7680. It has a cutoff at 14.7 kHz (while the VOC-FT cutoff is 17.5 kHz)

- "(Probably all) models are stereo and can't handle mono audio. You have to create a fake stereo file with the same audio content on the L and R channel if the software doesn't make it by itself." Make sure that the other channel is not empty when isolation is executed - it can produce silent bleeding of vocals in the opposite channel (happens in e.g. MDX23 and GSEP, and errors with mono in MDX-Net)

- For an ”Unbound local” error whenever you do anything in UVR since the new model installation, you might be forced to roll back the update

- Clear the Auto-Set Cache in the MDX-Net menu if you set wrong parameter and end up with error

- Pitch shift is the same as soprano mode except in the GUI beta you can choose how many semitones to pitch the conversion

- Dango.ai released a 9.0 model. We received a very positive report on it so far.

- UVR beta patch released. Potentially new SDR increases with the same models.

Added segmentation, overlap for MDX models, batch mode changes.

Soprano trick added. Basically, you can set it by semi-tones.

Support for the MDX-NET23 arch. For now, it uses only the basic models attached by Kuielab (low SDR, so don't bother for now), but the UVR team has already trained their own model for that arch, which will be released later, a few weeks after x-minus and MVSep. And it's performing well already. Wait™. Don't exceed an overlap of 0.93-0.95 for MDX models - it gets tremendously long with not much of a difference; 0.8 might be a good choice as well. Also, segment size can tank the performance badly. 2560 might still be a high but balanced value.

Sadly, it looks like max mag for single models is no longer available - you can use it only under Ensemble Mode for now.

Q: What is Demucs Pre-process model?

A: You can process the input with another model that could do a better job at removing vocals for it to separate into the other 3 stems

Beta UVR patch link

- "Post-Process [for VR] has been fixed, the very end bits of vocals don't bleed anymore no matter which threshold value is used"

- New BVE model will be ready at the beginning of August (Aufr33).

- MDX23C by ZFTurbo model(s) added on mvsep.com. They're trained by him using the new 2023 MDX-Net V3 arch.

Slightly worse SDR than MDX23 2.1 Colab on its own.

Might be good for rock; the best when all three models are weighted/ensembled.

- MDX23C ensemble/weighted available on mvsep1.ru (now mvsep.com) for premium users (best SDR for public 2 stem model).

It might still leave some instrumental residues in the vocals of some tracks (which can be cleaned up with the MDX-UVR HQ_3 model), but it can also be vice versa - the same issue as the kim vocal models, where vocals are slightly left in the instrumentals [vs e.g. MDX23 2.1, which is free of the issue]

On some Modern Talking and CC tracks it can give the best results so far.

- If you have problems with “Error when uploading file” on MVSEP, use VPN. Similar issues can happen for free X-Minus for users in Turkey.

- lalal.ai cooperation with MVSEP was fake news. Go along.

- As for Drumsep, besides in fixed Colab, you can also use it (the separation of single percussion instruments from drums stem) in UVR GUI. How to do this:

Go to UVR settings and open the application directory.

Find the folder "models" and go to "demucs models" then "v3_v4"

Copy and paste both the .th and .yaml files, and it's good to go.

Overlap above 0.6 or 0.7 becomes placebo, at least for dry track, with no effects.

- Drumsep benefits from shifts a lot (you can use even 20).

- For better results, test out potentially also -6 semitones in UVR beta, or with 31183Hz sample rate with changed tempo.

12 semitones down from 44100 Hz is 22050 Hz, which should be rather less usable in most cases; the same goes for keeping tempo preservation on.
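
A minimal sketch of the sample-rate relabeling idea (soundfile; the samples are untouched, only the declared rate changes, so the track plays back slower and about 6 semitones lower at 31183 Hz; file names are placeholders):

import soundfile as sf

audio, sr = sf.read("song_44100.wav")      # assumed to be 44100 Hz
sf.write("song_slowed.wav", audio, 31183)  # 44100 / 2**(6/12) ≈ 31183, i.e. -6 semitones
# ...separate "song_slowed.wav" with your model of choice...
stem, _ = sf.read("separated_stem.wav")    # placeholder for the model's output
sf.write("stem_restored.wav", stem, sr)    # relabel back to the original rate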

- If you have a long band_net error log while using DeNoise model by Fox Joy in UVR, reinstall the app.

- It can happen that every second separation using MDX Colab will fail due to memory issues, at least with Karaoke 2 model.

- A new fine-tuned vocal model was added to the UVR5 GUI download center and HV Colab (slightly better SDR than Kim Vocal 2); it's called "UVR-MDX-Net-Voc_FT" and is narrowband (because it's based on previous models).

- Audioshake's 3 stem model is added to https://myxt.com/ for free demo accounts. Unfortunately, its WAVs have a 16 kHz cutoff, which Audioshake normally doesn't have. No other stem. Results are maybe slightly better than Demucs.

Might be good for vocals.

- Spectralayers 10 received an AI update; they no longer use Spleeter but Demucs 4, and they now also have good kick, snare and cymbal separation. Good opinions so far. Compared to drumsep, sometimes it's better, sometimes it's not. Versus MDX23 Colab V2, instrumentals sometimes sound much worse. “SpectraLayers is great for taking Stems from UVR and then carrying on separating further and editing down. (...) Receives a GPU processing patch soon”

- (? some) MDX Colabs started causing errors of insufficient driver version.

> "As a temp workaround you can go to "Tools" in the main menu, and "Command Palette", and search for "Use fallback runtime version", and click on it, this will restart the notebook with the previous Ubuntu version in Colab, and things should works as they were before (at least till mid July or earlier [how it was once] where it is currently scheduled to be deleted)" probably it will be fixed.

X: Some people have an error that fallback runtime is unavailable.

- New v2 version of ZFTurbo's MDX23 Colab released by jarredou (now also with the denoiser-off memory fix added). Now it should have less bleeding in general.

It includes models changed for better ones (Kim Vocal 2 and HQ_3), volume compensation, fullband of vocals, higher frequency bleeding fix. It all manifests in increased SDR.

Instrum is inverted of vocals stem

Instrum2 is the sum of drums+bass+other stems (I used to prefer it, but most people rarely see any difference between both, and it also depends on specific fragments, although instrum gets better SDR and is less muddy, so it’s rather better to stick with instrum)
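
For reference, a minimal sketch of how the two variants can be computed from the stems (soundfile; file names are placeholders and stem lengths are assumed to match):

import soundfile as sf

mix, sr = sf.read("mixture.wav")
vocals, _ = sf.read("vocals.wav")
drums, _ = sf.read("drums.wav")
bass, _ = sf.read("bass.wav")
other, _ = sf.read("other.wav")

instrum = mix - vocals            # instrum: mixture with the vocals stem subtracted/inverted
instrum2 = drums + bass + other   # instrum2: mixdown of the remaining stems
sf.write("instrum.wav", instrum, sr)
sf.write("instrum2.wav", instrum2, sr)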

If your separation ends instantly with the path printed below, you wrote the path incorrectly in the cell.

Simply remove the `file - name.flac` at the end and leave only the path leading to the file.

It's organized in a way that it catches all files within that path/folder.

Suggestion: go to drive.google.com and create a folder `input`,

and drop the tracks you want to process in there.

When the process is done, delete them, and add others you want to process.

Overlap large and small are the main settings, higher values = slightly higher score, but way longer processing.

Colab doesn't allow much higher values for chunk size, but you can try slightly higher ones and see when it crashes because of memory. A higher chunk size gives better results.

- Updated inference with voc_ft model (Colab v2.1 has denoiser now on, but updated inference not and is essentially what 2.2 currently is).

- Volume compensation fine-tuning - it is in line 359 (voc_ft), 388 (for ensembling the vocals stem), 394 (for HQ_3 instrumental stem inversion).

- chunk_size = 500000 will fail with 5:30 track, decrease it to at least 300K in such case.

Overlap 0.8 is a good balance between duration and quality.

- In case of system error wav not found, simply retry separation.

Nice instruction how to use the Colab.

The v2.1 Colab was first evaluated with lower parameters, hence it received a slightly worse SDR. Then it was evaluated again and got a better score than v2.

WiP Colabs

- 2.2 Beta 1 (no voc_ft yet)

- 2.2 Beta 1.5

- 2.2 Beta (1.5.1, inference with voc_ft, replace in the Colab above; no fine-tuning)

- v2.2 beta 2/3 (working inference) (MDX bigshifts, overlap added, fine-tuning, no 4 stems > experimental, no support for now, 22 minutes for vocals only, mdx: bsf 21, ov 0.15, 500k, 5:30 track)

- v2.2 (w/ voc_ft inference) pre beta 3 w/o MDX v3 yet - comment out both bigshifts in the cell - they won’t work

- current beta link (WiP, might be unstable at times; e.g. here for 19.07 bigshifts doesn’t work, and you need to look for working inference in history or delete the two bigshifts references in the cell; doesn’t seem that MDX v3 model is here yet)

In general -

MDX23 is quite an improvement over htdemucs_ft (...).

Drum stem makes htdemucs_ft sound like lossy in comparison, absolutely beautiful

Bass is significantly more accurate, identifies and retains actual bass guitar frequencies with clarity and accuracy

"Other", equally impressive improvement over htdemucs_ft, much more clarity in guitars"

And problems with vocals they originally described are probably fixed in V2 Colab.

- “I just added 2 new denoise models that were made by FoxJoy. They are both very good at removing any residual noise left by MDX-Net models. You can find them both in the "Download Center". - Anjok

Be aware that they're narrowband (17.7kHz cutoff). Good results.

To download models from Download Center -

In UVR5 GUI, click the tools icon > click Download Center tab > Click radio button of VR architecture > click dropdown > select the model > hit Download button > wait for it to download... Profit.

- New MDX-UVR “HQ_3” model released in UVR5 GUI! The best SDR for a single instrumental model so far. Model file (but visiting download center is enough). On X-Minus I think too.

-HQ_3 model added to MDX Colab (old)

- HV just made a new version of her own updated MDX Colab with all the new models, including HQ_3. It lacks e.g. Demucs 2 for instrumentals of vocal models, but in return it allows using YouTube and Deezer links for lossless tracks (by providing an ARL), and allows manually specifying more than one file name to process at the same time. Also, for any new models in the future, there's an optional input for model settings, to bypass the parameter autoloader. IIRC, the Colab stores its files in a different path, so be aware of it when uploading tracks for separation on GDrive.

- she has added volume compensation in new revision (they’re applied automatically for each model)

In previous MDX Colabs there were also min, avg, max, and chunks, but they're gone in HV Colab.

- HV also made a new VR Colab which, IIRC, no longer clutters your whole GDrive but only downloads the models you actually use, and it probably might work without GDrive mounting, but it lacks VR ensemble.

- New MDX models added to both variants of MVSep (Kim inst, Vocal 1/2, Main [vocal model], HQ_2)

- ZFTurbo's MDX23 code now requires less GPU memory. “I was able to process a file on an 8 GB card. Now it's the default mode.” 6GB VRAM is not enough; lowering the chunk size (e.g. 500000 instead of 1000000) or chunking the track manually might be necessary in this case. Also, now you can control everything from the options: you can set chunk_size 200000 and single ONNX. It can possibly work with 6GB VRAM that way.

Overlap large and small control the overlap of the song during processing. The larger the value, the slower the processing, but the better the quality (for both).

If you get a failed-to-allocate-memory error, use the --large_gpu parameter

Sometimes turning off use large GPU and reducing chunk size from 1000000 to 500000 helps

- Models/AIs of the 1st and 2nd place winners in MDX23 music challenge (ByteDance’s and quickpepper947’s) sadly won’t be released to the public (at least won’t be open-sourced). Maybe in June, ByteDance will be released as an app in worse quality.

Judging by the few snippets we had:

"the vocal output, yes, better than what can be achieved right now by any other model, it seems.

the instrumental output... meh. I can hear vocals in it, on a low volume level." but be aware that improved their model by the time by a lot.

- MDX23 4 stem model and source code with dedicated app by ZFTurbo (3rd place) was released publicly with the whole AI and instructions how to run it locally. No longer requires minimum 16GB VRAM Nvidia GPU. It even has a neat GUI (3rd place in leaderboard C, better SDR than demucs ft). You can still use the model online on mvsep1.ru (now mvsep.com).

The command:

"conda install -c intel icc_rt"

SOLVES the LLVM ERROR

For the above, you can get fewer vocal residues by manually replacing the Kim Vocal 1 model with the newer Kim Vocal 2, and Kim Inst with UVR Inst HQ 292 (“full 292 is a lot more aggressive than kim_inst”).

jarredou forked it with better models and settings already.

Short technical summary of ZFTurbo about what is under the hood and small paper.

From what I see in the code, for instrumentals it uses the inverted vocals output from Demucs ft, hdemucs_mmi, Kim Vocal 1 and Kim Inst (ft other). More explanations in the MDX23 dedicated section of this doc.

- jarredou made a Colab version of ZFTurbo's MDX23:

"(It's working with `chunk_size = 500000` as default, no memory error at this value after few tests with Colab free)

Output files are saved on Colab drive, in the "results" folder inside MVSep installation folder, not in *your* GDrive."

On 19.05 its SDR was tested, and it had a better score for instrumentals than the UVR5 ensemble at the time. Currently it doesn't, but new versions of the Colab are planned.

- ByteDance-USS was released with a Colab by jazzpear. It works better than zero-shot for SFX and “user-friendly wise”, while zero-shot is still better for instruments.

"https://www.dropbox.com/sh/fel3hunq4eb83rs/AAA1WoK3d85W4S4N5HObxhQGa?dl=0

Queries for ByteDance USS taken from the DNR dataset. Just DL and put these on your drive to use them in the Colab as queries."

QA section added.

- The modified MDX Colab - now with automatic models downloading (no more manual GDrive models installations) and Karaoke 2 model.

> Separate inputs for the 3 models' parameters were added, so you don't need to change models.py every time you switch to some other model. Settings for all models are listed in the Colab. From now on, it uses reworked main.py and models.py (made by jarredou), downloaded automatically. Don't replace models.py from the packages with the ones from here now. A denoiser is now optionally added!

- MDX Colab with newer models is now reworked to use with current Python 3.10 runtime which all Colabs now use.

- Since 28.04 lots of Colabs started having errors like "onnxruntime module not found". Probably only MDX Colab (was) affected.

(not needed anymore)

> "As a temp workaround you can go to "Tools" in the main menu, and "Command Palette", and search for "Use fallback runtime version", and click on it, this will restart the notebook with the previous python version, and things should works as they were before"

- OG MDX HV Colab is (also) broken due to torch related issues (reported to HV). To fix it, add new code row with:

!pip install torch==1.13.1

below mounting and execute it after mounting

> or use fixed MDX Colab with newer models and fix added (now with also old Karaoke models).

- While using OG HV VR Colab, people are currently encountering issues related to librosa. The issues are already reported to HV (the author of the Colab).

>  use this fixed VR Colab for now (04.04.23). (the issue itself was fixed by uncommenting librosa line and setting 0.9.1 version  -  deleted "#" before the lines in Mount to Drive cell, now also fresh installation issues are fixed - probably the previous fix was based on too old HV Colab revision). VR Colab is not affected by May/April runtime issues.

- If you have fast CPU, consider using it for ensemble if you have only 4GB VRAM, otherwise you can encounter more vocal residues in instrumentals. 11GB VRAM is good enough, maybe even 8GB.

- New Kim's instrumental "ft other" model. Already added to UVR's download center with parameters.

Manual settings - dim_f = 3072 n_fft = 7680 https://drive.google.com/drive/folders/19-jUNQJwols7UyuWO5PWWVUlJQEwpn78

(Unlike HQ models, it has cutoff, but better SDR than even inst3/464, added to Colab)

- Anjok (UVR5) "I released an additional HQ model to the Download Center today. **UVR-MDX-NET Inst HQ 2**  (epoch 498) is better at removing long drawn out vocals than UVR-MDX-NET Inst HQ 1." It has already evaluated slightly better SDR vs HQ_1 both for vocals and instrumentals (HQ_1 evaluation was made once more since introducing Batch Mode which slightly decreases SDR for only single models vs previous versions incl. beta, but mitigates an issue when there are sudden vocal pop-ins using <11GB VRAM cards)

- Anjok (UVR5, non-beta) “So I fixed MDX-Net to always use Batch Mode, even when chunks are on. This means setting the chunk and margin size will solely be for audio output quality. Regardless of PC specs, users will be able to set any chunk or margin size they wish. Resource usage for MDX-Net will solely depend on Batch Size.”

Edit. Batch size set to default instead of chunks enabled on 11GB cards for ensemble achieves better SDR, but separation time is longer.

- Public UVR5 patch with batch mode and final full band model was released (MDX HQ_1)

- 293/403 and 450/498 (HQ_1 and 2) full band MDX-UVR models added to Colab and (also in UVR) (PyTorch fix added for Colab)

- Wind model (trumpet, sax) beside x-minus, added also to UVR5 GUI

You'll find it in UVR5 in Download Center -> VR Models -> select model 17

(10 seconds of audio separated with Wind model, from a 7-min track, takes 29 minutes to isolate on a 3rd gen i7 - might be your last resort if it crashes your 4GB VRAM GPU as some people reported)

- (x-minus/Aufr33) "1. **Batch mode** is now enabled. This greatly speeds up processing without degrading quality.

2. The **b.v.** models have been renamed to **kar**.

3. A new **Soprano voice** setting has been added for songs with the high-pitched vocals.

*This only works with mdx models so far.*"

It slows down the input file similarly to the method we described in our tip section below.

- New MDX23 vocal model added to beta MVSEP site.

- (no longer necessary) Fork of UVR GUI and How to install - support for AMD and Intel GPUs appeared (works only for the VR and MDX architectures). Besides W11, W10 is also confirmed working. MDX achieves the speed of an i5-4460 using a 6700 XT, while for VR, speeds are very fast and comparable to CUDA; so CPU processing might be slower in VR, but for MDX you might want to stick with the official UVR5 GUI.

- Batch mode seems to fix problems with vocal popping using low chunks values in MDX models, and also enhance separation quality while eliminating lots of out of memory issues. It decreases SDR very slightly for single models, and increases SDR in ensemble.

- (outdated) New beta MDX model “Inst_full_292” without 14.7kHz cutoff released (performs better than Demucs 4 ft). If the model didn’t appear on your list in UVR 5 GUI, make sure you’ve redeemed your code https://www.buymeacoffee.com/uvr5/vip-model-download-instructions

Or use Colab.

Newer epochs available for paid users of https://x-minus.pro/ai?hp&test-mdx

(older news/update logs)

- To use Colabs in mobile browsers, you need to switch your browser to PC Mode first.

General reading advice

- If you found this document elsewhere (e.g. as a PDF), here is the always up-to-date version of the doc:

https://docs.google.com/document/d/17fjNvJzj8ZGSer7c7OFe_CNfUKbAxEh_OBv94ZdRG5c/

- If you have anything to add to this doc, ping me (deton24) on our Discord server from the footer

- You can use Table of content section or go to options and show “document outline” to see a clickable table of content too. If you don't have Google Docs installed, and you opened the doc in a mobile browser and no such option appear, use Table of content or go to options of the mobile browser and run the site in PC mode (but it's better to have Google Docs installed on your phone instead, but be aware that the document can hang on loading during the attempt of accessing specific section of the document - it doesn't happen on PC browser - it’s the most stable form of reading the doc).

- Use the search option in Google Documents, not in the browser (browser search won’t find everything unless it has been shown before - the doc is huge).

If you search for a specific keyword and if the result doesn't show up in the mobile app, you need to go to the document outline and open the last section and search again (so the whole document will be loaded first, otherwise you won't get all the search results)

- Make sure you've joined our Discord server to open some of the Discord links attached below (those without any file extension at the end). 

- If you have a crash on opening the doc e.g. on Android - reset the app cache and data.

- If it loads 4 minutes/infinitely in the doc app, update your Google Docs app and reset the app cache/data, e.g. if you started to have crashes after the app update

- You can share a specific section of this document by opening it on PC or in PC mode by clicking on one of the entries in the document outline. Now it will add a reference to the section in the link in your address bar which you can copy and paste, so opening this link will straight redirect someone to the section after opening the link (in some specific cases, some people won’t be redirected).

 - Search function in the document won't work correctly in the app until all the document is opened to the end after some short freeze (on mobile you will have to wait a while to load it, and sometimes tap “wait” a few times when the app freezes). Afterwards, searching will start working all the time. The doc is huge and the GDoc app on at least Android is cursed (desktop version on PC behaves the most stable). You've been warned.

___________________________________________________

The best models

for specific stems

2 stems:

> for instrumentals

0) KaraFan (e.g. preset 5; fork of original ZFTurbo's MDX23 fork with new features by Captain FLAM with jarredou's help on some tweaks), offline version, org. Colab and Kubinka Colab (older version, less vocal residues vs. v.3.1, although v.3.2-4.2/+ were released with fewer residues).

One of the best free solutions for instrumentals at the moment, with not big amounts of vocal residues and clear outputs. But no 4 stems, unlike below:

0a) MDX23 by ZFTurbo (weighted UVR/ZF models), free modified Colab v. 2.1 - 2.4 with fixes and enhancements - fork by jarredou

(one of the best SDR scores for a publicly available 2-4 stem separator; check out also v2.2.2 with the fullband MDX23C model - it might have more residues in instrumentals, but better SDR; video how to use it. A better 2.3 version is now available with better SDR, and 2.4 also adds BS-Roformer (default settings can already be good and balanced).)

0a) dango.ai (tuanziai.com) - paid, currently one of (if not) the best instrumental separator so far

0b) Models ensembled - available only for premium users on mvsep.com

(one of the best SDR scores for publicly available 2 and 4 stem separator, a bit higher SDR than v.2.4 Colab;

shorter queues for single model separation for registered users).

Possibly shorter queues between 10:00PM - 1:00 AM UTC.

Ensembles fix some issues with muddiness of Roformer models.

0b) Models ensembled - available only for premium users on x–minus.pro

(Mel-Roformer + MDX23C)

0b) UVR 5: Ensemble of models 1296 + 1143 (BS-Roformer in beta UVR) + Inst HQ4 (dopfunk)

0b) Manual ensemble in UVR of models BS-Roformer 1296 + copy of the result + MDX23C HQ (jarredou, src)

or just 1296 + 1297 + MDX23C HQ for slower separation and similar result

0c) MDX23C 1666 model exclusively on mvsep.com

0c) MDX23C 1648 model in UVR 5 GUI (a.k.a. MDX23C-InstVoc HQ) and mvsep.com, also on x-minus.

Both have sometimes more bleeding than HQ_3, but less muddiness.

0c) MDX23C-InstVoc HQ 2 - VIP model for UVR 5. It's a slightly fine-tuned version of MDX23C-InstVoc HQ. “The SDR is a tiny bit lower, but I found that it leaves less vocal bleeding.” ~Anjok

It’s not always the case, sometimes it can be even the opposite, but as always, all may depend on a specific song.

0d) MDX-Net HQ_4/3/2 (UVR/MVSEP (no HQ_4)/x-minus/Colab/alt) - small amounts of vocal residues at times, while not muffling the sound too much like in BS-Roformer v2, but it still can be muddy at times (esp. vs MDX23C HQ models)

0e) Other single MDX23C full band models on mvsep.com (queues for free unregistered users can be long)

(SDR is better when three or more of these models are ensembled on MVSEP; alternatively in UVR 5 GUI via “manual ensemble” of single models (worse SDR), or at best weighted manually e.g. in a DAW, but the MVSEP “ensemble” option is a specific method - not all fullband MDX23C models on MVSEP, including the 04.24 BS-Roformer model, are available in UVR)

- BS-Roformer model ver. 2024.04 on MVSEP (further trained from checkpoint on a different dataset). SDR vocals: 11.24, instrumental: 17.55 (from 17.17 in the base viperx model). Bad on sax. Less muddy than the three below.

Though, all might share same advantages and problems (filtered results, muddiness, but the least of residues)

- Mel-Roformer model by Kim exclusively on x-minus (trained on the Aufr33 dataset). It's less muddy than the viperx model, but can have more vocal residues e.g. in silent parts of instrumentals, plus it can be more problematic with wind instruments, putting them in vocals, and it might leave more instrumental residues in vocals.

SDR is higher than viperx model (UVR/MVSEP) but lower than fine-tuned 04.24 model on MVSEP.

- BS-Roformer model by viperx in UVR beta/MVSEP and x-minus (struggles with saxophone too, but less; also struggles with some Arabic guitars, bad on vocoders)

- Older BS-Roformer v2 model on MVSEP (2024.02) (a bit lower SDR, all Roformer models may sound clean, but filtered at the same time - a bit artificial [it tends to be characteristic of the arch], but great for instrumentals with heavy compressed vocals and no bass and drums - the least amount of residues and noise - very aggressive)

- old MelBand Roformer model on MVSEP (don’t confuse with the one x-minus - they’re different)

- GSEP - check out also 4-6 stem separation option and perform mixdown for instrumental manually, as it can contain less noise vs 2 stem in light mix without bass and drums too (although more than BS-Roformer v2. Regular 2 stem option can be good for e.g. hip-hop, and 4/+ stems a bit too filtered for instrumentals with busy mix. GSEP tends to preserve flute or similar instruments better than the models above (for this use cases, check out also kim inst and inst 3 models in UVR) and is not so aggressive in taking out vocal chops and loops from hip-hop beats. Sometimes will be the best for instrumentals of more lo-fi hip-hop of the pre 2000s era, e.g. where vocals are not so bright and compressed/heavily processed/loud or when instrumental sound more specific to that era. For newer stuff above ~2014, it produces vocal bleeding in instrumentals much sooner than the above. "gsep loves to show off with loud synths and orchestra elements, every other mdx/demucs model fail with those types of things".

Older ensembles for UVR 5 GUI from leaderboard

Decent NVIDIA/AMD/Intel Arc/M1 GPU required (use OpenCL UVR exe for non-CUDA GPUs)

0f. #4626:

MDX23C_D1581 + Voc FT

0g) #4595:

MDX23C_D1581 + HQ_3 (or HQ_4)

0h) Kim Vocal 2 + Kim Inst (a.k.a. Kim FT/other) + Inst Main + 406 + 427 + htdemucs_ft (avg/avg)

0i) Voc FT, inst HQ3, and Kim Inst

0j) Kim Inst + Kim Vocal 1 + Kim Vocal 2 + hq3 + voc ft + htdemucs ft (avg/avg).

0k) MDX23C InstVoc HQ + MDX23C InstVoc HQ 2 + MDX23C InstVoc D1581 + UVR-MDX-NET-Inst HQ 3

“A lot of that guitar/bass/drum/etc reverb ends up being preserved with Max Spec [in this ensemble]. The drawback is possible vocal bleed.” ~Anjok

0l) MDX23C InstVoc HQ + MDX23C InstVoc HQ 2 + UVR-MDX-Net Inst Main (496) + UVR-MDX-Net HQ 1

"This ensemble with Avg/Avg seems good to keep the instruments which are counted as vocals by other MDXv2/Demucs/VR models in the instrumental (like saxophone, harmonica) [but not flute in every case]" ~dca100fb8

0m) MDX23C-instvoc HQ with HQ4

0n) Ripple / Capcut.cn (uses SAMI-ByteDance/BS-Roformer arch) - Ripple is for iOS 14.1 and US region set only - despite high SDR, it's better for vocals than instrumentals which are not so good due to noise in other stem (can be alleviated by decreasing volume by -3dB).

0n) Capcut (for Windows) allows separation only for the Chinese version above (and returns stems in worse quality). See more for a workaround. Sadly, it normalizes input already, so -3dB trick won’t work in Capcut. Also, it has worse quality than Ripple

The best single MDX-UVR models for instrumentals (UVR 5 GUI / Colab / MVSEP / x-minus):

0. full band MDX-Net HQ_4 - faster, and an improvement over HQ_3, trained for epoch 1149. In rare cases there’s more vocal bleeding vs HQ_3 (sometimes “at points where only the vocal part starts without music then you can hear vocal residue, when the music starts then the voice disappears altogether”). Also, it can leave some vocal residues in fadeouts. More often instrumental bleeding in vocals, but the model is made mainly for instrumentals (like HQ_3 in general)

1. full band MDX-Net HQ_3 - like above, might be sometimes simply the best, pretty aggressive as for instrumental model, but still leaving small amounts of vocal residues at times - but not like BS-Roformer v2/viperx, so results are not so filtered like in these ones.

HQ_3 filters out flute into vocals.

It all depends on a song what’s the best - e.g. the one below might give better clarity:

2. full band MDX23C-InstVoc HQ (since UVR 5.60; 22kHz as well) - tends to have more vocal residues in instrumentals, but can give the best results for a lot of songs.

Added also in MDX23 2.2.2 Colab, possibly when weights include only that model, but UVR's implementation might be more correct for only that single model. Available also in KaraFan so it can be used there only as a solo model.

2b) MDX23C-InstVoc HQ 2 - worse SDR, sometimes less vocal residues

2c. narrowband MDX23C_D1581 (model_2_stem_061321, 14.7kHz) - better SDR vs HQ3 and voc_ft (single model file download [just for archiving purposes])

"really good, but (...) it filters some string and electric guitar sounds into the vocals output" also has more vocal residues vs HQ_3.

*. narrowband Kim inst (a.k.a. “ft other”, 17.7kHz) - fewer vocal residues than both above in some cases, and sometimes even vs HQ_3

*. narrowband inst 3 - similar results, a bit more muddy results, but also a bit more balanced in some cases

*. narrowband inst 1 (418) - might preserve hihats a bit better than in inst 3.

3. narrowband voc_ft - sometimes can give better results with more clarity than even HQ_3 and kim inst for instrumentals, but it can produce more vocal residues, as it’s typically a vocal model.

*. less often - inst main (496) [less aggressive vs inst3, but gives more vocal residues]

*. or also try out HQ_1 (epoch 403)/HQ_2 (epoch 450) or the earlier 338 epoch, or even 292, which is also used from time to time.

Recommended MDX and Demucs parameters in UVR

- Ensemble of only models without bleeding in single models results for specific song

- DAW ensemble of various separation models - import the results of the best models into a DAW session and set custom weights by changing their volume proportions

- Mateus Contini's method e.g. #2 or #4

- Captain Curvy method:

"I just usually get the instrumentals [with MDX23C] to phase invert with the original song, and later [I] clean up [the result using] with voc ft"

How to check whether a model in UVR5 GUI is vocal or instrumental?

(but since MDX23C and BS-Roformer models there is no longer clear boundary in that regard)

___


> for vocals

- MDX23 by ZFTurbo v 2.4 jarredou fork

- Ensemble on MVSEP (for premium users)

Ensembles in UVR 5 GUI

- 1296 + 1143 (BS-Roformer in beta UVR) + Inst HQ4 (dopfunk)

- 1296 + 1297 + MDX23C HQ

- Manual ensemble in UVR of models BS-Roformer 1296 + copy of the result + MDX23C HQ (jarredou) - for faster result and similar quality vs the one above

- KaraFan (preset 4)

Single models available in UVR 5 | Colab | MVSEP

- BS-Roformer viperx 1296 model (UVR beta/MVSEP a.k.a. SDR 17.17)

- fine-tuned “ver. 2024.04” SDR 17.55 on MVSEP (can pick up adlibs better, occasionally picks up some SFX; sometimes one, sometimes the other is “slightly worse at pulling out difficult vocals”)

- Mel-Roformer model on x-minus.pro (it’s different from the MVSEP one)

“godsend for voice modulated in synth/electronic songs” vs 1296 can be more problematic with wind instruments putting them in vocals, plus it might leave more instrumental residues in instrumentals.

- older BS-Roformer 2024.02 on MVSEP (generally BS-Roformer models “can be slappy with choir-like vocals and background vocals” but “hot on pre-2000 rock”)

- UVR-MDX-Net-Voc_FT (narrowband, further trained, fine-tuned version of the Kim vocal model)

>If you still have instrumental bleeding, process the result with Kim vocal 2

>Alternatively use MDX23C narrowband (D1581) then Voc-FT, "great combination" (or MDX23C-InstVoc HQ instead of D1581)

- Kim Vocal 1 (can bleed less than 2, but more than voc_ft, might depend on a song)

- Kim Vocal 2

>MDX-Net HQ_3/4 (HQ_4 can sometimes be not bad on vocals too; e.g. HQ_3 has more vocal residues than Kim Vocal 2 in general)

>MDX23C-InstVoc HQ (can have some instruments residues at times, but it’s fullband unlike voc_ft and Kim Vocal 1/2 -

“This new model is [vs the narrowband vocal models], by far, the best in removing the most non-vocal information from an audio and recovering formants from buried passages... But in some cases, also removes some airy parts from specific words, and some non-verbal sounds (breathing, moaning).”

- newer MDX23C epochs available on MVSEP like 16.66.

MDX23C models are go-to models for live recorded vocals

(available also in MDX23 Colab v2.3/2.4 when weight set only for InstVoc model)

Older ensembles (before Roformer models release)

>Voc FT + MDX23C_D1581 (avg/avg)

>292, 496, 406, 427, Kim Vocal 1, Kim Inst + Demucs ft (#1449)

>Kim Inst, Kim Vocal 1 (or/and voc_ft), Kim Vocal 2, UVR-MDX-NET Inst HQ 2, UVR-MDX-NET_Main_427, htdemucs_ft (avg/avg IRC)

>Kim Vocal 1+2, MDX23C-InstVoc HQ, UVR-MDX-NET-Voc_FT

(jarredou)

>Your choice of the best vocal models only (up to 4-5 max for the best SDR)

If your separation still bleeds, consider processing it further with BS-Roformer elsewhere or even GSEP. Debleeding section further below.

(BS-Roformer models might be the best pick for RVC training)

Other services

- Ripple (since BS-Roformer models release it might be obsolete; it's very good at recognizing what is vocals and what's not and tends to not bleed instrumental into vocal stem; very good if not the best solutions for vocals)

- music.ai (paid; presumably in-house BS-Roformer models)

“almost the same as my cleaned up work (...) It seems to get the instrument bleed out quite well”)

“Beware, I've experienced some very weird phase issues with music.ai. I use it for bass, but vocals are too filtered/denoised IMO, and you can't choose to not filter it all so heavily. ” - Sam Hocking

- https://myxt.com/ (paid; uses Audioshake)

- ZFTurbo's VitLarge23, e.g. on MVSEP or the 2.3/2.4 Colab (it's based on a new transformers arch. SDR-wise it's not better than MDX23C (9.78 vs 10.17), but works "great" in an ensemble of the two models with weights 2, 1. It's been added to the 4 models ensemble on MVSEP (although the bag of current models is subject to change at any time))

- ZFTurbo’s Bandit Plus (MVSEP)

Other decent single UVR models

- Main (427) or 406, 340, MDXNET_2_9682 - all available in UVR5 (some appear in the download center after entering the VIP code)

- or also instrumental models: Kim Inst and HQ_3 (via applied inversion automatically)

Other models

- ZFTurbo's Demucs v4 vocals 2023 (on MVSEP, unavailable in Colab, good when everything else fails)

- MDX23 Colab fork 2.1 / 2.2 (this might be slow) / 2.3 / 2.4 (it's generally better than UVR ensembles SDR-wise, but it's not available in UVR5) (MDX23 Colab is good also for instrumentals and 4 stems, very clean, sometimes more vocal residues in specific places vs single MDX-UVR inst3/Kim inst/HQ models, but it sounds better in overall, especially the Colab modification/fork with fixes made by jarredou)

- HQ_3 (inverted result giving vocals from the instrumental in the 2nd stem) - more instrumental residues than e.g. Kim Vocal 2, but no 17.7 kHz cutoff

- Narrowband MDX23C_D1581 “Leaves too much instrumental bleeding / non-vocal sounds behind the vocals. Formants are less refined than on any of the top vocal models (Voc FT, Kim 1, Kim 2 and MDX23C-InstVoc HQ).”

- Kavas' methods for HQ vocals:

Ensemble (Max/Max) - Low pass filter (brickwall) at 2k:

- MDX23C

- Voc FT

Voc FT - High Pass Filter (brickwall) at 2k

(“Sometimes it leaves some synth bleeding in the mids" then try out min/min)

Or:

Multiband EQ split at 2kHz with a low & high pass brickwall filter with:

-MDX23C-InstVoc from 0 to 2kHz and:

-Voc_FT from 2kHz onwards

(InstVoc gives fuller mids, but leaves transients from hats in the high end, whereas Voc ft lacks the mids, but gets rid of most transients. Combine the best of both for optimal results.)
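
A minimal sketch of that 2 kHz split outside a DAW (scipy + soundfile; steep Butterworth filters stand in for the brickwalls, and file names are placeholders):

import soundfile as sf
from scipy.signal import butter, sosfiltfilt

lows, sr = sf.read("mdx23c_instvoc_vocals.wav")    # fuller mids, used below 2 kHz
highs, sr2 = sf.read("voc_ft_vocals.wav")          # cleaner highs, used above 2 kHz
assert sr == sr2
n = min(len(lows), len(highs))

sos_lp = butter(10, 2000, btype="lowpass", fs=sr, output="sos")
sos_hp = butter(10, 2000, btype="highpass", fs=sr, output="sos")
combined = sosfiltfilt(sos_lp, lows[:n], axis=0) + sosfiltfilt(sos_hp, highs[:n], axis=0)
sf.write("vocals_2k_split.wav", combined, sr)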

- Any top ensemble or AI appearing on the MVSEP leaderboard (but it depends - sometimes it can be better for instrumentals, sometimes vocals)

Ensembles are resource consuming, no cutoff if one model is fullband and the other is narrowband. Random ensembles can result in more vocal or instrumental residues, as mentioned above.

Models not exclusive to MVSEP are all available in UVR5 GUI, or optionally you can separate MDX models in Colab and perform a manual ensemble in UVR5 (no GPU or fast CPU required for this task), or use manual ensemble in Colab [may not work anymore], or also in a DAW by importing all the stems together and decreasing the volume (you might want to turn on a limiter on the sum).
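
If you'd rather do that weighting without a DAW, here is a minimal numpy/soundfile sketch (file names and weights are placeholders; stem lengths are assumed to match):

import soundfile as sf

stems = [("bs_roformer_inst.wav", 2.0),   # (file, weight) pairs - placeholders
         ("mdx23c_inst.wav", 1.0),
         ("hq_3_inst.wav", 1.0)]

mix, sr, total = None, None, 0.0
for path, weight in stems:
    audio, sr = sf.read(path)
    mix = audio * weight if mix is None else mix + audio * weight
    total += weight
sf.write("weighted_ensemble.wav", mix / total, sr)  # divide by total weight so the sum doesn't clip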

Can’t find some of these models in UVR?

Results containing models in e.g. #946 or other ensembles mentioned above still use public models; you can access them by entering the download/VIP code in UVR if you can't see them in the download center, so more models will pop up.

But be aware that MDX23C Inst Voc HQ_2 got deleted, and is no longer available when VIP code is inserted. You need to copy the model file manually.

You cannot use MVSEP Ensemble of 4 and 8 models in UVR, as they contain models not available in UVR. You can only perform manual ensemble of single models processed by MVSEP, in UVR, but it will not give the same result as MVSEP uses code more similar to MDX23 code (don’t confuse with MDX23C arch models) instead of simple ensemble using avg/avg available in UVR.

E.g. for 16.10.23, “MVSep Ensemble of 4” consists of 1648 previous epoch (maybe later updated to 16.66), VitLarge, and Demucs 2023 Vocals and beside the first, none of these models work in UVR, even if downloaded manually. 1648 on MVSEP is MDX23C HQ1 model.

Also, as for 4/8 models ensemble on MVSEP - they’re all only for premium users, as many resources and models are being used to output these results.

Q: Why not to use more than 4-5 models for ensemble in UVR - click

Others

- GSEP AI - instrumentals, vocals, karaoke, 4-6 stem (it applies additional denoiser for 4/6 stems), piano and guitar (free). As for 2 stems, it gives very good instrumentals for songs with very loud and harsh vocals and a bit lo-fi hip-hop beats, as it can remove vocals very aggressively. Sometimes even more than HQ_3.

In specific cases (can have more vocal residues in instrumentals vs HQ_3 at times - less in jarredou's Colab):

- original MDX23 by ZFTurbo (only this OG version of MDX23 still works in the offline app, min. 8GB Nvidia card required [6GB with specific parameters]) - sounds very clean though, and not that muddy like inst MDX models, in this means, comparable with even VR arch or better (because of much less vocal residues).

- Demucs_ft model (both 3 stems to mix in e.g. Audacity for instrumental) / sometimes 6s model gives better results, or in very specific cases when vocals are easy to filter out - even the old 4 stem mdx_extra model (but SDR wise full band MDX 292 is already better than even ft model). The 6s model is worth checking with shifts 20.

Might be still usable in some specific cases, despite the fact that MDX23 uses demucs_ft and other models combined.

- VR models settings + VR-only ensemble settings (generally deprecated, but sometimes more clarity vs MDX v1, though frequently more vocal residues. Some people still uses it e.g. for some rock, when it can still can give better results than other models, and also for fun dubs, but for it if you have two language tracks of the same movie, you can test out Similarity Extractor instead, but Audacity center extraction works better than that linked Colab)

- Alternatively, you can consider using narrowband Kim other ft model with fullband model settings parameters in this or the new HV Colab instead. Useful in some specific parts of songs like chorus, where there are still no persistent vocal residues using this method (clearer results than even Max-Spec) or e.g. MDX23 still doesn't give you enough clarity in such places to maybe merge fragments manually of results from different models.

Paid

- Audioshake (non-copyrighted music only; can be more aggressive than the above and pick up some lo-fi vocals where others fail [a bit in the manner of the HQ models])

How to bypass the non-copyright music restriction (1, 2).

"They also reserve themselves the right to keep your money and not let you download the song you split if they discover that you are using a commercially released song and that you don't have the rights to it." but generally we didn't have such a case with slowed down songs (otherwise they might not pass anyway)

4 stems might be better at times than the Demucs ft model.

- Dango.AI (a.k.a. tuanziai.com; free 30-second samples) - can be the most aggressive for instrumentals vs e.g. inst 3 (tested on Childish Gambino - Algorithm). Since then, models/arch were updated, and instrumentals in 9.0 seem to be the cleanest or the closest to the original instrumentals as of 12.08.23, at least in some cases (despite low SDR).

> If you care only about a specific snippet in a song: since the 30-second samples to separate are taken randomly from the whole song, you can copy the same fragment over and over to make a full-length track out of it, and the service will eventually pick up the whole snippet for separation (see the short sketch after this list).

X Uploading a snippet shorter than or exactly 30 seconds will not result in the whole fragment being processed from beginning to end.

>Sometimes using other devices or virtual machine in addition to incognito/VPN/new email might even be necessary to reset free credits. It's pretty persistent.

https://tuanziai.com/encouragement

Here you might get 30 free points (for 2 samples) and 60 paid points (for 1 full song) "easily".
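
A minimal sketch of the fragment-tiling trick mentioned above (filenames and target length are hypothetical; assumes numpy/soundfile):

# Tile a short snippet until it reaches full-song length, so whichever random
# 30-second window the service picks still contains the fragment
import numpy as np
import soundfile as sf
snippet, sr = sf.read("my_fragment.wav")
target_seconds = 3 * 60 + 30                      # roughly a full-song length
repeats = int(np.ceil(target_seconds * sr / len(snippet)))
reps = (repeats, 1) if snippet.ndim == 2 else repeats
tiled = np.tile(snippet, reps)[: target_seconds * sr]
sf.write("my_fragment_tiled.wav", tiled, sr)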

>>>

Everything else for 2 or 4 stems, other than the above, is worse for separation tasks:

Lalal, RipX (although now it supposedly uses some UVR models), Moises, Demix, RX Editor 8-10, Spleeter and its online derivatives.

Debleeding/cleaning inverts

- Ripple “is AWESOME to use after inverting songs with the official instrumental”

Instrumentals can be also further cleaned with Ripple, and then with Bandlab Splitter

- Top ensemble in UVR5 (starting from point 0d)

- GSEP

Very minor difference between both for cleaning vocals (maybe GSEP is better by a pinch).

You can try separating e.g. the vocal result twice using different models/settings (e.g. voc_ft > Kim Vocal 2).

- MDX23 jarredou's fork Colab (maybe this one at first)

- use voc_ft model on the result you got (so separate twice if you already used that model)

(cleaning inverts - so cleaning up residues - e.g. left by the instrumental after an imperfect phase cancellation, e.g. when audio is lossy, or maybe even not from the same mixing session)

Aligning

"Utagoe bruteforces alignment every few ms or so to make sure it's aligned in the case that you're trying to get the instrumental of a song that was on vinyl."

"[The previous] UVR's align tool is handy just for digital recordings… [so those] which don't suffer from that [issue] at all."

Utagoe will not fix fluctuating speed issues, only the constant ones.

Anjok already "cracked" how that specific Utagoe feature works and planned to introduce it to UVR, maybe by September 2023. Edit: it's done and introduced.

“Updated "Align Tool" to align inputs with timing variations, like Utagoe. ”

For problematic inverts, you can also try out azimuth correction in e.g. iZotope RX.

Removing bleeding of vocals in instrumentals

- RX10 De-bleed feature

Video

- Kim vocal model first, then separate with an instrumental model (e.g. HQ_3). You might want to perform additional separation steps to clean the vocal of instrumental residues first, then invert it manually against the mixture to get a cleaner instrumental, and finally separate that with an instrumental model to get rid of vocal residues (see the sketch below).
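
A minimal sketch of that manual inversion step (assuming both files are time-aligned with the same sample rate and channel count; filenames are placeholders):

# Subtract the cleaned-up vocal from the original mixture to get an instrumental,
# which can then be fed to an instrumental model
import soundfile as sf
mixture, sr = sf.read("original_song.wav")
vocals, _ = sf.read("cleaned_vocals.wav")
n = min(len(mixture), len(vocals))
instrumental = mixture[:n] - vocals[:n]   # phase inversion + sum == plain subtraction
sf.write("inverted_instrumental.wav", instrumental, sr)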

Removing bleeding of hi-hats in vocals

- Use MelBand RoFormer v2 on MVSEP (e.g. after using MDX23C Inst HQ)

Bleeding in other stems

- RipX Stem cleanup feature (possibly)

- SpectraLayers 10 (eliminates lots of bleeding and noise from MDX23 Colab ensembles)

"You debleed the layer to debleed from using the debleed source. Results vary. Usually it's better to debleed using Unmix and then moving the bleed to where it belongs" Sam Hocking

Video

Bleeding of claps in vocals

- KaraFan (for general drum artefacts, but it doesn’t work well for inverts, try out modded preset 5 here)

- Remove drums with e.g. demucs_ft first, then separate the drumless mixture from inversion

- Settings for VR Colab

- Kim Vocal 2 (but it has a cutoff and creates a lot of noise in the output)

- Denoise model with 0.1-0.2 aggressiveness

- Sam Hocking method

Bleeding of guitars/winds/synths in vocals

- BVE (Karaoke) models

Overlapped/misrecognized stems

- SpectraLayers 9+ Cast & Mold

Bleeding of instruments in vocals

- Denoise model with 0.1-0.2 aggressiveness

_______

Debleeding guide by Bas Curtis (other methods, e.g. Audacity)

Denoising and dereverberation later below.

See also “Vinyl noise/white noise” from the end of the list.

_______

How to check whether a model in UVR5 GUI is vocal or instrumental?

  • Read carefully the models list above - they're categorized
  • If you want to experiment with others:

The moment you see "Instrumental" on top (and "Vocal" below) in the list where GPU conversion is mentioned, you know it's an instrumental model.

When it flips the sequence, so Vocal on top, you know it's a vocal model.

Same happens for MDX and VR archs.

  • “Be aware that MDX23/MDXv3 models can be multisources, it depends on the training, so it can be only vocals, or only instrumental, or vocals+instrumental, or vocals+drums+bass+other (like baseline models are), or whatever else.
  • You can know it looking at the config file of the model, for example InstVocHQ,

https://github.com/Anjok07/ultimatevocalremovergui/blob/master/models/MDX_Net_Models/model_data/mdx_c_configs/model_2_stem_full_band_8k.yaml

Judging by the instruments line in the config above, the D1581 and InstVocHQ models are instrumental+vocal.

Config for the rest of the models:

https://github.com/Anjok07/ultimatevocalremovergui/blob/master/models/MDX_Net_Models/model_data/model_data.json 

(decoded hashes)

____________________________________________________________________

Keeping only backing vocals in a song: 
> Karaoke

- New BVE model on X-Minus for premium users (it's also added in UVR, but without the stereo width feature which fixes some issues when BVs are confused with other vocals). One of the best so far, if not the best. It uses voc_ft as a preprocessor.

"BVE sounds good for now but being an (u)vr model the vocals are soft (it doesn’t extract hard sounds like K, T, S etc. very well)"

"Seems to begin a phrase with a bit of confusion between lead and backing, but then kicks in with better separation later in the phrase."

- Chain ensemble mode for B.V. models (available on x-minus.pro for premium users, added in UVR beta 9.15 patch already).

It is possible to recreate this approach using non-BVE v2 models in UVR by processing the output of one Karaoke model by another (possibly VR model as the latter) with Additional Settings>Vocal Splitter Options.

So you might experiment with using voc_ft or Kim Vocal 2 as the model in the main UVR window, and HP5 or HP6 as the Vocal Splitter model - this way you won't have to do the process manually in 2 steps (i.e. separating the result with another model after the first separation is done).

Recommended ensemble settings for Karaoke in UVR 5 GUI (instrumentals with backing vocals):

- 5_HP-Karaoke-UVR, 6_HP-Karaoke-UVR, UVR-MDX-NET Karaoke 2 (Max Spec)

(in e.g. “min/max” the latter is for instrumental)

- Alternatively, use Manual Ensemble with UVR with Max Spec using x-minus’ UVR BVE v2 result and the UVR ensemble result from the above.

Or single model:

- HP_KAROKEE-MSB2-3BAND-3090 (a.k.a. VR's 6HP-Karaoke-UVR)

- UVR BV v2 on x-minus (and download "Song without L.V.". Better solution, newer, different model.)

- 5HP can be sometimes better than 6HP

(UVR5 GUI / x-minus.pro / Colab) - you might want to use Kim Vocal 2 or voc_ft or MDX23C first for better results.

- De-echo VR model in UVR5 GUI set to maximum aggression

- MedleyVox with our trained model (more coherent results than current BV models)

Or ensemble in UVR:

"The karaoke ensemble works best with isolated vocals rather than the full track itself"

- VR Arc: 6HP-Karaoke-UVR

- MDX-Net: UVR-MDX-NET Karaoke 2

- Demucs: v4 | htdemucs_ft

Or:

- VR Arc: 5HP-Karaoke-UVR

- VR Arc: 6HP-Karaoke-UVR

- MDX-Net: UVR-MDX-NET Karaoke 2

(Max Spec, aggression 0, high-end process)

Or:

- VR arc: 5_HP-Karaoke

- MDX-Net: UVR-MDX Karaoke 1

- MDX-Net: UVR-MDX Karaoke 2

(you might want to turn off high-end process and post process)

Or:

- VR Arc: 5HP-Karaoke-UVR

- VR Arc: 6HP-Karaoke-UVR

- MDX-Net: UVR-MDX-NET Karaoke 1

- MDX-Net: UVR-MDX-NET Karaoke 2

(Min/Min Spec, Window Size 512, Aggression 100, TTA On)

If your main vocals are confused with backing vocals, use X-Minus and set "Lead vocal placement" to center (not in UVR5 at the moment).

Or Mateus Contini's method.

How to extract backing vocals X-Minus Guide (can be executed in UVR5 as well)

Vinctekan Q&A

Q: Which BVE aggression setting (for the VR model, e.g. uvr-bve-4b-sn-44100-1) is good for backing removal?

A: “I recommend starting exactly from 0 and working from there to either - or +.

0 is the baseline for BVE that are almost perfectly center.

If it's off to the left or right a little bit, I would start from 50”

Q: How do I tell what side BVs are panned or if they are Stereo 50 % or 80 % without extracting them?

A: “It's more about listening to the track. The way I used to do it is to invert the left channel with the right channel. In most cases this should only leave the reverb of the vocals in place, but if there are backing vocals panned either left or right, they should be a bit louder than the reverb. Audacity's [Vocal Reduction and Isolation>Analyze] feature can usually give a rough estimate of how related the two channels are, but that does not tell where the backing vocal actually is. I would only recommend doing the above with a vocal output, though.”

Q: Does anyone know how to tell what side BV's (backing Vocals) are panned similar to this? Like, is there a way to tell using RipX? Or another tool. In my case I think mine might be Stereo 20 30 percent or lower

A: “Your ears [probably the least effective]

If you have Audacity, select your entire track, and select [Vocal reduction and Isolation] and select the [Analyze] but it won't tell you which direction the panning is in.

Or use it to isolate the sides, and just take a look at the output levels of each channel.

Spectralayers's [Unmix>Multichannel Content] tab can measure the output of frequencies in the spectrogram and can tell you when certain elements are not equal in loudness, which you can restore.”

- Dango.ai has also BVE model

- AudiosourceRE Demix Pro has BVE/lead vocals model

Keeping only lead vocals in a song

- karokee_4band_v2_sn a.k.a. UVR-MDX-NET Karaoke 2 (MVSEP [MDX B (Karaoke)] / Colab / UVR5 GUI / x-minus.pro) - “best for keeping lead vocal detail” on its own - removes backing vocals from a track, but when we use min_mag_k it can return similar results to:

- Demix Pro (paid) - “keeps more backing vocals [than Karaoke 2] (and somehow the lead vocals are also better most of the time, with fuller sound)”

“Demix is better for keeping background vocals yes, but for the lead ones they tend to sound weaker (the spectrum isn’t as full and has more holes than karaoke 2), but this isn’t always a bad thing because the lead vocals themselves are cleaner; the mdx karaoke 2 might produce fuller lead vocals, but you will most certainly have some background vocals left too”

- MDX B Karaoke on mvsep.com (exclusive) - good but as an alternative you could use MDX Karaoke 2 in UVR 5 (they are different)

“i personally wouldn't recommend 5/6_hp karaoke, except for using 5_hp karaoke as a last resort, you could also use the x minus bve model in uvr which sometimes is good with lead vocals”

- UVR-BVE-4B_SN-44100-1

(doesn’t work for everyone)

- MDX-UVR Inst HQ_3 - new, best in removing background vocals from a song (e.g. from Kim Vocal 2)

Or consecutive models processing:

- Vocals (a good vocal stem from e.g. voc_ft or MDX23C single models, or ensembles of MDX23C / MDX23 2.2 / UVR top/near-top SDR / an ensemble of only vocal models: Kim 1, 2, voc_ft, MDX23C_D1581, optionally with demucs_ft)

>Separated with->

Karaoke model -> Lead_Voc & Backing_Voc

Tutorial 

 (+ experimentally split stereo channels and separate them on their own, then join channels back)
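
A minimal sketch of that channel-splitting experiment (filenames are placeholders; run each mono file through whatever separation model you prefer in between):

# Split a stereo file into two mono files, separate each on its own, then rejoin
import numpy as np
import soundfile as sf
audio, sr = sf.read("vocals.wav")             # expects a stereo (frames, 2) array
sf.write("vocals_L.wav", audio[:, 0], sr)
sf.write("vocals_R.wav", audio[:, 1], sr)
# ... separate vocals_L.wav and vocals_R.wav with your model of choice ...
left, _ = sf.read("vocals_L_separated.wav")
right, _ = sf.read("vocals_R_separated.wav")
n = min(len(left), len(right))
sf.write("vocals_separated_stereo.wav", np.stack([left[:n], right[:n]], axis=1), sr)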

- arigato78 method

“Karaoke 2 really won't pick up any chorus lead vocals EXCEPT for ad-libs

6-HP will pick up the melody, although it's usually muffled as hell”

“Q: is mdx karaoke 2 still the best for lead and back vocals' separation?

A: I'm finding it's the best for "fullness" but 6-HP picks up chorus melody while K2 only usually picks up ad-libs

I personally like mixing K2, 6-HP (and sometimes 5-HP if 6-HP sounds very thin) together

also, let's say a verse has back vocals that are just the melody behind the lead vocal (instead of harmonies) for a doubling effect, sometimes K2 will still pick up both the lead and double.”

Harmonies

For more layers from the above (e.g. starting with voc_ft>Karaoke 2)

- Medley Vox (free, model trained by Cyrus, tutorial, more info here)

- Melodyne (paid, 30 days trial) - “the best way to ensure it’s the correct voice”, or

- eventually Hit 'n' Mix RipX DAW Pro 7 (paid/trial)

In Melodyne it is "harder to do but can be cleaner since you can more easily deal with the incorrect harmonics than RipX sometimes choses"

"every time I’d run a song through RipX I was only able to separate 4-5" harmonies

(or also)

- Choral Quartets F0 Extractor - “midi outputs, but it works”

For research

https://c4dm.eecs.qmul.ac.uk/ChoralSep/

https://c4dm.eecs.qmul.ac.uk/EnsembleSet/ (similar results to MedleyVox)

> Separating two singers in a duet from one song

(use on already separated vocals)

- MedleyVox (vocals 238 model)

- MDX-UVR Karaoke models

- VR's 5_HP (ev. 6_HP, or BVE v2 on x-minus [already uses voc_ft as preprocessor for separating vocals])

(it might still not be enough - then continue and/or look for a Dolby Atmos rip and retry)

- RipX (paid)

- ISSE (free; you can figure out which voice is whose just by frequencies alone; use on e.g. separated vocals too)

If artists sing the same notes, Karaoke models will rather not work in this case.

If BVs are heard in the center, don't use the MDX karaoke model but the VR karaoke model instead.

Use the chain algorithm with mdx (kar) v2 on x-minus which will use uvr (kar) v2 to solve the issue. (Aufr33/dca)

It will be available after you process the song with MDX.

“The MDX models seem to have a cleaner separation between lead and backing/background vocals, but they often don't do any actual separation, meanwhile the VR models are less clean, but they seem to be better at detecting lead and background”

“MDX models basically require the lead to be completely center and the BV to be stereo

whereas VR ones don't really care as much about stereo placement”

For vocals with vocoder 

- voc_ft

Alternatively, you can use:

- 5HP Karaoke (e.g. with aggression settings raised up) or

- Karaoke 2 model (UVR5 or Colabs). Try out separating the result obtained with voc_ft as well.

- BS-Roformer model ver. 2024.04 on MVSEP (better on vocoder than the viperx’ model).

"If you have a track with 3 different vocal layers at different parts, it's better to only isolate the parts with 'two voices at once' so to speak"

Various speakers' isolation (from e.g. podcast)

- Guide and script for WhisperX

- https://github.com/alexlnkp/Easy-Audio-Diarisation

- Spectralayers

(for further research) - some of these tools might get useful:

https://paperswithcode.com/task/speaker-separation/latest

https://arxiv.org/abs/2301.13341

https://paperswithcode.com/task/multi-speaker-source-separation/latest

____________________________________________________________________

> 4-6 stems (drums, bass, others, vocals + opt. guitar, piano):

You might want to first use an already well-sounding instrumental obtained with a 2-stem model from the section above, and then separate it using the following models.
Furthermore, you can slow down your song to x0.75 speed - the result can be more elements in the other stem and better snaps and human claps with 4 stems.

Read the Tips to enhance separation for more.

- MDX23 v.2.4 fork by jarredou (Colab; 4 stems when it's enabled - weighted ensemble of various 4 stem models)

~"compared to this, demucs_ft drums sound like compressed"

- Ensemble 4/8 models (MVSEP) - similar or better results

- 1053 BS-Roformer model in UVR beta (very good drums + bass in one stem model)

- Demucs_ft - the best single Demucs 4 stem model (Colab / MVSEP / UVR5 GUI)

(better drums and vocals than in the Demucs 6-stem model; decent acoustic guitar results in 6s; for 4 stems alternatively check mdx_extra; generally the Demucs 6-stem model is worse than the MDX-B (a.k.a. Leaderboard B) 4-stem model released with the MDX-Net arch from the MDX21 competition (kuielab_b_x.onnx in this Colab), which is also faster than Demucs 6s). For Demucs, use overlap 0.1 if you have an instrumental instead of a mixture with vocals as input; at least it works with the ft model (for the normal use case, 0.75 is the max reasonable overlap speed-wise, 0.95 as a last resort). A runnable sketch with these overlap/shifts flags follows after this list.

- GSEP AI (sonically it used to have the best other stem vs Demucs, also piano in Demucs is worse, and it picks up e-piano more frequently, GSEP electric guitar model doesn't include acoustic, it's only electric). In general, it has a very good piano model

- Ripple (for US iOS only, 4 stems, best SDR (besides the other stem), bad bleedy other stem, can be the best kick, not the best drums overall vs Demucs_ft - “you need something to get the rest of the drums out of the ‘other’ stem and at that point might as well use a proper drum model” - good vocals)

- Bandlab Splitter (4 stem, web and iOS/Android app) - not bad, can be used e.g. for cleaning stems from other services

- Audioshake (paid, only non-copyrighted music, or slowed down [see workaround in "Paid" above]) - sometimes better results than the Demucs ft model.

- Spectralayers 10 - mainly for bass and drum separation -

“I think I've got some really comparable samples out of jarredou's MDX23 Colab fork”, but for vocals and instrumentals it’s mediocre [in Spectralayers 10].

- music.ai - “Bass was a fair bit better than Demucs HT, Drums about the same. Guitars were very good though. Vocal was almost the same as my cleaned up work. (...) I'd say a little clearer than MVSEP 4 ensemble. It seems to get the instrument bleed out quite well, (...) An engineer I've worked with demixed to almost the same results, it took me a few hours and achieve it [in] 39 seconds” Sam Hocking

- dango.ai - also has 4 or more stems separation

- (old) MDX23 1.0 by ZFTurbo 4 stems (Colab, desktop app, as above, much cleaner vs demucs_ft, less aggressive, but in 1.0 more low volume vocal residues in completely quiet places in instrumentals vs e.g HQ_3, instrumentals as input should sound similar to the current v. 2.4 fork)

- MVSEP has also single piano and guitar models (in many cases, guitar model can pick up piano better than piano model;

"works great for songs with grand piano, but only grand piano, since that’s what it was trained on.

Same with guitar, which catches more piano than piano model does, ironically")

- Lalal.ai has also good piano model (paid; no other stem with piano stem attached)

To enhance 4 stem results, you can use good instrumental obtained from other source as input for the above (e.g. KaraFan, and its different presets ensembled in UVR5 app with Audio Tools>Manual Ensemble)

For the best results for piano or guitar models, use other stem from 4 stems from e.g. “Ensemble 8 models” or MDX23 Colab or htdemucs_ft as input.
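
As mentioned in the Demucs entry above, here is a minimal sketch of running Demucs with the discussed shifts/overlap and summing the non-vocal stems into an instrumental (assumes the Demucs v4 CLI is installed and writes to its default separated/<model>/<track> folder; filenames are placeholders):

# Run Demucs from Python, then sum drums + bass + other into an instrumental
import subprocess
import soundfile as sf
track, model = "song.wav", "htdemucs_ft"
subprocess.run(["demucs", "-n", model, "--shifts", "2", "--overlap", "0.75", track], check=True)
stems_dir = f"separated/{model}/song"
drums, sr = sf.read(f"{stems_dir}/drums.wav")
bass, _ = sf.read(f"{stems_dir}/bass.wav")
other, _ = sf.read(f"{stems_dir}/other.wav")
sf.write(f"{stems_dir}/instrumental.wav", drums + bass + other, sr)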

Separating electric and acoustic guitar

- “To separate electric and acoustic guitar, you can run a song [e.g. other stem] through the Demucs guitar model and then process the guitar stem with GSEP [or MVSEP model instead of one of these].

GSEP only can separate electric guitar so far, so the acoustic one will stay in the "other" stem.”

- “⁠medley-vox main vs rest model has worked for me to separate two guitars before”

Separating parts of drums stem from e.g. Demucs/GSEP/MDX23

(kick/hi-hat/snare/toms)

drumsep > FactorSynth (depending on how far you want to unmix the drums) > Regroover > UnMixingStation (the last three paid) >

Virtual DJ (Stems 2.0; barely picks up those instruments, if at all).

- LarsNet (vs drumsep, it also allows separating hi-hats and cymbals, but toms might be better)

- RipX (paid)

- SpectraLayers 10 (paid, sometimes worse, sometimes better than Drumsep. IDK if it was added in update or main version) "drumsep works a f* ton better when separating on this one song I've tested with the pitch shifted down 2"

- FADR.com (in paid subscription)

Compared to drumsep, Regroover allows more separations, especially when used multiple times, so it allows removing parts of kicks, parts of snares, noises etc. Deeper control. Plus, it nulls easily. But drumsep sounds better on its own, especially with higher parameters like e.g. shifts 20 and overlap 0.75-0.98.

Strings

- Demix Pro (paid, free trial)

- RipX DeepRemix (paid; once said to be the best bass model, but it doesn't score that well SDR-wise - probably it's Demucs 3 (demucs_extra), and is worse than Demucs ft and likely also vs MDX23 above; could have been updated since)

- Dango.ai (e.g. violin, erhu; paid, free 30 seconds fragments) - "impressive results" for at least violin

- Music.ai (paid)

Electric guitar

Audioshake > RipX > Demix Pro > lalal.ai (e.g. lead guitars)

(paid ones) >

> GSEP > Demucs 6s (free)

> Moises.ai (paid)

> Dango.AI (paid)

> Music.ai

Acoustic guitar

- Demucs 6s - sometimes, when it picks it up

- GSEP - when the guitar model works at all (it usually grabs the electric), the remaining 'other' stem is often a great way to hear acoustic guitar layers that are otherwise hidden.

- lalal.ai > moises.ai (both paid; the latter picks up acoustic and electric guitar together)

- dango.ai (paid)

- Audioshake (both electric and acoustic)

Trumpet/saxophone

- "Wind" model on x-minus.pro (for premium users, paid) and UVR5 (Download Center -> VR Models -> select model 17)

You might have to use it on instrumental separation first (e.g. with HQ_3)

- Audioshake

- Music.ai

- karaoke 4band_v2_sn on e.g. MVSEP (worse than Wind model in UVR)

Piano

- GSEP

- MVSEP

- Music.ai (paid)

- Dango.ai (paid)

- htdemucs_6s (not too good)

Crowd

- UVR-MDX-NET Crowd HQ 1 (UVR/x-minus.pro) (can be more effective than MVSEP’s sometimes)

- MVSEP model (applause, clapping, whistling, noise)

- AudioSep

- USS Bytedance

- Zero Shot Audio Source Separation

- GSEP (sometimes), and e.g. its drums stem is able to remove applause

- Chant model (by HV, VR arch, works e.g. for applause; may leave some echo to separate with other models or tools below). For Colab usage, you need to copy that model to models/v5 and then use the 1 band 44100 param, turn off auto-detect arch and set it to "default". In UVR, pick one of the 44100 1-band parameters, possibly 512.

SFX

“You do need to first get an instrumental with a different model, because this isn't really trained to remove vocals. Just SFX”

- USS ByteDance (while providing proper sample)

- https://github.com/karnwatcharasupat/bandit model by joowon (Colab)

Better SDR for Cinematic Audio Source Separation (dialogue, effect, music) than the Demucs 4 DNR model on MVSEP below (mean SDR 11.47 vs 10.16)

- DNR Demucs 4 model on MVSEP (CDX23) - it used to output fake stereo.

"I noticed doesn't do well and doesn't detect water sounds, and fire sounds"

Can be used in UVR - it'll always complain there about only being 3 outputs, but it will work.

You have to add it manually: just put it in the Demucs model folder (Ultimate Vocal Remover\models\Demucs_Models\v3_v4_repo) alongside a yaml file like this one

- jazzpear94 Mel RoFormer model (Colab, files, instruction, user-friendly Colab, and new better fork) - it has also ability to separate specific SFX groups (Ambiance, Foley, Explosions, Toon, Footsteps, Fighting and General for all in one stem)

- jazzpear94 MDX23C model (files) -  rename the config to .yaml as UVR GUI doesn't read .yml. You put config in UVR’s models\mdx_net_models\model_data\mdx_c_configs. Then when you use it in UVR it'll ask you for params, so you locate the newly placed config file.

- myxt.com (uses Audioshake)

- AudioSep (you can try it to get e.g. the birds sfx and then use as a source to debleed or maybe try to invert phase and cancel out)

- Older DNR model on MVSEP from ‘22

- voc_ft - sometimes it can be better than Demucs DNR model (although still not perfect)

- jazzpear94 model (VR-arch) - config: 1band sr44100 hl 1024, stem name: SFX, Do NOT check inverse stem in UVR5

- (dl) source by Forte (VR)  (probably setting to: instrumental/1band_44100_hl1024 is the proper config) Might work in Colab “I tried it with the SFX models, and I just uploaded them in the models folder and then placed the model name, and it processed them” and may even work in UVR.

- Zero Shot (currently worse for SFX vs Bytedance)

- Or GSEP (sometimes)

Any other stem/instrument/sample if not listed above

- Zero Shot Audio Source Separation

- Bytedance-USS (might be worse for instruments, but better for SFX)

- Spectral removers (software or VST):

Quick Quack MashTactic (VST), Peel (VST, they say it’s worse alternative of MT), Bitwig (DAW), RipX (soft), iZotope Iris (VST/app), SpectraLayers (DAW, “Problem with RX [Editor's spectral editing] is it doesn't support working in layers non-destructively.”), R-Mix (old 32 bit 2010 Sonar plugin), free ISSE (soft, showcase), Zplane Copycat "but MashTactic also has a dynamics parameter that is really useful (you can isolate attack from longer sounds, or the opposite, coupled with the stereo placement and EQ isolation)"

RipX is “not as good as UVR5 for actual separation, but RipX is very good if you need to edit what's already separated more musically. SpectraLayers is a nicer spectral editor, RipX spectral editor is not as usable”

Consecutive multi-AI separation for not listed instruments

- Extract all other instruments "one by one" using other models in the chain (e.g. remove vocals with voc_ft, use what's left to remove drums/bass with htdemucs_ft, use what's left to remove guitars/piano with GSEP/demucs_6s, then use what's left to remove wind instruments with UVR wind model, etc.)

De-reverb

VR models

(added to UVR5 GUI, don’t work in Colab [config for all: 4band_v3, but currently unknown layers and nets])

- UVR-DeEcho-DeReverb (213 MB, might be the best)

__

(below the old ones, which might only work with vocal-remover 5.0.2 by tsurumeso’s default arch settings, [maybe 1band_sr44100_hl1024 or 512? and his nets and layers])

- VR dereverb - only works on tracks with stereo reverb (j48qny.pth, 56,5MB) (dl) (source)

- VR reverb and echo removal model (j48qny.pth, 56,5MB) (dl), works with mono/stereo)

De-reverb - MDX models (less aggressive than at the top)

“I use it when there's not so much reverb, but if it's more intense I will choose VR-Arch DeEcho”

> FoxyJoy's dereverb V2 - works only with stereo (available in UVR's download center and Colab (eventually via this dl link); it can spoil singing in acapellas or sometimes removes delay too). “I do think [that] MDX is noticeably more accurate [vs VR DeEcho-DeReverb]”

"(the model is also on X-minus) Note that this model works differently from the UVR GUI. I use the rate change (but unlike Soprano mode only by 1 semitone). This extends the frequency response and shifts the MDX noise to a higher frequency range." It's 11/12 of the speed so x 0.917, but actually something else goes on here:

(Anjok)

"The input audio is stretched to 106%, and lowered by 1 semitone using resampling. After AI processing, the speed and pitch of the result are restored."

You'll find slowing down method explained further in "tips to enhance separation"
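
A minimal sketch of that rate-change trick done by hand (assumes librosa/soundfile; the de-reverb/separation step in the middle is whatever model you use; filenames are placeholders):

# Resample so the audio plays ~6% longer and one semitone lower, process it,
# then resample the result back to the original speed and pitch
import librosa
import soundfile as sf
factor = 2 ** (1 / 12)                              # one semitone ~= 1.0595 (about 106% length)
y, sr = librosa.load("vocals.wav", sr=None, mono=False)
down = librosa.resample(y, orig_sr=sr, target_sr=int(sr * factor))
sf.write("vocals_down1st.wav", down.T, sr)          # written at the original sr -> slower & lower
# ... run the de-reverb / separation model on vocals_down1st.wav ...
proc, _ = librosa.load("vocals_down1st_processed.wav", sr=None, mono=False)
restored = librosa.resample(proc, orig_sr=int(sr * factor), target_sr=sr)
sf.write("vocals_restored.wav", restored.T, sr)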

"De-echo is superior to de-reverb in every way in my experience"

“VR DeEcho DeReverb model removes both echo and reverb and can also remove mono reverb while MDX reverb model can only remove stereo reverb”

"You have to switch [main stem/pair] to other/no other instead of vocal/inst" in order to ensemble de-echo and de-reverb models.

- older V1 de-reverb HQ MDX model by FoxyJoy (dl) (source) (also decent results, but most likely worse).

(“It uses the default older architecture with the fft size of 6144”

“After separation, UVR cuts off the frequencies at 15 kHz, so I found that to fix that is to invert the "Vocals" and mix that with the original audio file.”

Demonstration Original Dereverbed Detected reverb)

- To enhance the result if necessary, you can use more layers of models to dereverb vocals, e.g.:

Demucs + karaoke model + De-reverb HQ (by FoxyJoy)

"works wonders on some of this stuff".

“Originally I inverted with instrumentals then I ran through deecho dereverb at 10 aggression then demucs_ft then kim vocal 2 then uvr 6_ at 10 aggression and finally deecho normal” (isling)

- For room reverb check out:

Reverb HQ

then

De-echo models (J2)

“from my experience, De-Reverb HQ specifically only really works when the sound is panned in the center of the stereo field perfectly with no phase differences or effects or anything that could cause the sound to be out of phase in certain frequencies.

If the sound doesn't fit that criteria, it only accurately produces the output of whatever’s in the mid”

“I noticed that in some cases the DeEcho normal worked better than the aggressive, which was weird. That's why I ran through both, so to remove as much as possible.”

- For removing reverb bleed left over in the left and right channels of a 5.1 mix from TV shows/movies, check out:

Melband Roformer on MVSEP

Free apps/VSTs for de-reverb/de-echo/denoise

- Voicefixer (CML, only for voice, online)

- RemFX (de: chorus, delay, distortion, dynamic range compression, and reverb or custom)

- Supertone Clear (previously known as Supertone Voice Clarity and defunct GOYO.AI)

- Noise Suppression for Voice (a.k.a. RNNoise; worse; various plugin types; available in OBS)

- Krisp app (paid, free 60 minutes per day) - better (same for RTX Voice); free on Discord

- Adobe Podcast (online, a.k.a. Adobe Podcast Enhance Speech, only for narration, changes the tone of voice, so you might want to use only frequencies from it above 16kHz)

- CrystalSound.AI (app)

- Noise Blocker (paid, 60 minutes free per day)

- Steelseries GG (app, classic noise gate with EQ and optional paid AI module, activating by voice in noisy environment may not always work correctly)

- RTX Voice (in NVIDIA Broadcast app, currently for any GTX or RTX GPU)

- AMD Noise Suppression (for RX 6000 series cards, or for older ones using unofficial Amernime Drivers)

- AI SWB Noise Suppression (free, currently they give away that Mac/Windows driver only on email requests)

- Audio Magic Eraser shipped with new Google Pixel phones (separate options for cancellation of: noise, wind, crowd, speech, music)

The best paid de-reverb plugins for vocal tracks/stems/separations:

- Izotope RX 8-10 (possibly earlier versions too) and its RX Editor (paid), both for voice and mixtures

(more possible free solutions). Good results not only for room reflections, but also regular reverb in vocals. It picks reverb where even FoxyJoy's model fails (“De-reverb” and “Dialogue de-reverb” options).

- Clear by Supertone “equally good to RX imho. Smoother imho. It's only good on vocals though” Simple 3 knob plugin - “the cleverest / least-manual to get good results and is AI-based.”

- Waves Clarity Vx DeReverb (paid; simpler than RX, models updated in 12/17/2023 Build)

- DeVerberate by Acon Digital (someone while comparing said it might be even better than RX10) "I find it's useful to take the reverb only track and unreverbed track and mix them to a nice level" “Acon is probably best if you can tweak to each stem separated. RX is imo too rough.” Comparison

- Accentize DeRoom Pro ("great", but expensive, available in DxRevive Pro, now 1.1.0)

- Accusonus ERA (was good too but discontinued when Facebook bought them)

Others:

- SPL De-Verb Plus

- Audio Damage Deverb

- Zynaptiq UnVeil

- Zynaptiq Intensity

- Thimeo Stereo Tool (one of its modules)

If you want to use some of these DAW plugins for your microphone in real-time, you can use Equalizer APO.

Go to "Recording devices" -> "Recording" -> "Properties" of the target mic -> "Advanced".

To enable a plugin in Equalizer APO select "Plugins" -> "VST Plugin" and specify the plugin dll. AFAIK, VST3 is unsupported.

Alternatively, to run a plugin for a microphone in a simple app and send it to any output device, you can download savihost3x64, rename the downloaded exe to the name of the plugin you want to use (placed in the same folder), and run the app. Then go to settings and set the input and output device (it can be a virtual card, though maybe not necessarily). Contrary to Equalizer APO (iirc), it supports VST3 plugins too. Of course, you can also use DAWs for the same purpose (Reaper, Cakewalk etc. - but not Audacity, iirc).

De-echo

- UVR-De-Echo-Aggressive (121 MB)

- UVR-De-Echo-Normal (121 MB)

- UVR-DeEcho-DeReverb (213 MB)

(now added in UVR and MVSEP, won't be in Colab for now, but the first two are on HuggingFace)

- delay_v2_nf2048_hl512.pth (by FoxyJoy, all VR arch, source, can't remember if it was one of the above), decent results.

“works in UVR 5 too. Just need to select the 1band_sr44100_hl512.json when the GUI asks for the parameters”

“You [also] can use this command to run it: python inference.py -P models\delay_v2_nf2048_hl512.pth --n_fft 2048 --hop_length 512 --input audio.wav --tta --gpu 0”

They’re also on X-Minus now:

“The "minimum" and "average" aggressiveness settings use the Normal version of the model. The Aggressive one is used only at the "maximum" aggressiveness.”

“What's crazy is maximum aggressiveness sometimes does better at removing bgvox than actual karaoke models”

Denoising (vinyl noise/white noise/general)

- Denoise standard or denoise model in UVR

(Options>Choose Advanced Menu>Advanced MDX-Net Options>Denoise output)

for the noise which almost all MDX-Net models leave in silent or quiet parts; the denoise model has potentially more applications

- UVR De-Noise by aufr33 on x-minus (for premium users, less aggressive than the model above “The new model is designed mainly to remove hiss, such as preamp noise. For vocals that have pops or clipping crackles or other audio irregularities, use the old denoise model.“)

- BS-Roformer model (UVR beta (1296)/MVSEP (04.24)/x-minus) - its denoising and derumbling works most efficiently on vocals

- resemble-enhance (available on x-minus, but only as denoiser for voice/vocals, and on HuggingFace)

- https://tape.it/denoiser - (“great tool for removing tape hiss. Seems to be free without limitation at this point in time, though it seems to have issues with very large files [20 mins etc])”

- UVR-DeNoise.pth & UVR-DeNoise-Lite.pth in UVR 5 GUI download center

“It's decent, but it needs a little work compared to RX 10's Spectral De-noise, I think RX 10's Spectral De-noise is better at removing the noise MDX makes

Actually, the new UVR De-noise model is really good when you combine it with RX 10's Spectral De Noise”

- voc_ft - works as a good denoiser for old vocal recordings

- GSEP 4-6 stem ("noise reduction is too damn good. It's on by default, but it's the best I've heard every other noise reduction algorithm makes the overall sound mushier", it’s also good when GSEP gives too noisy instrumentals with 2 stem option, it can even cancel some louder vocal residues completely)

- https://github.com/eloimoliner/denoising-historical-recordings (mono, old 78rpm vinyls)

- UVR-MDX-NET Crowd HQ 1 (UVR/x-minus) can remove vinyl noises

- https://audo.ai/

Different types of noise

- Guide for classic denoiser tools in DAW, e.g. for debleeding: https://docs.google.com/spreadsheets/d/1XIbyHwzTrbs6LbShEO-MeC36Z2scu-7qjLb-NiVt09I/edit?usp=sharing

- SOUND FORGE Audio Cleaning Lab 4 (formerly Magix Audio & Music Lab Premium 22

[2016/2017] or MAGIX Video Sound Cleaning Lab)

- possibly Bytedance-USS (when similar sample provided)

- Unchirp VST (musical noise, artefacts of lossy compression)

- This VR ensemble in Colab (for creaking sounds, process your separation output more than once till you get there)

- Izotope Spectral DeNoise (better than Lab 4 and current models, also more tweakable)

- Izotope Dialogue Dereverb is also denoiser

- Resemble Enhance

- Bertom Denoiser Classic (or paid Pro)

Bird sounds

- https://blog.research.google/2022/01/separating-birdsong-in-wild-for.html?m=1

https://github.com/google-research/sound-separation/tree/master/models/bird_mixit

Google has released code & checkpoint for their bird sound separation algo last year.

De-reverb models:

- UVR-DeEcho-DeReverb (doesn't work for all songs)

Technically, if bird noises are in vocals, then equally:

- RTX Voice,

- AMD Noise Suppression or even

- Krisp and

- Adobe Podcast

might get rid of them, but at least the last one changes the tone of voice, and the previous ones may work well only with speech rather than vocals.

Decompression (for loud or brickwalled songs with overly used compressor)

Free declipper plugins:

- KClip Zero

- FreeClip (you can use both in the same session for interesting results)

- ReLife 1.42 by Terry West (newer versions paid, works best for stereo tracks divided to mono)

- Limiter6 by vladg (Clipper module)

- GClip

- Apogee Soft Limit

- Hornet Magnus Lite (Clipper module)

Paid: KClip 3, SIR Standard Clip (popular, though KClip 3 may give better results), GClip, Izotope Trash 2, DMG Tracklimit, TR5 Classic Clipper, KNOCK, Boz Little Clipper 2, Flatline, Newfangled/Eventide Saturate, Declipper in Magix/Sound Forge Cleaning Lab,

AI tools:

- RemFX

- Neutone plugin

De-expliciter (removes explicit lyrics from songs)

https://github.com/tejasramdas/CleanBeats (more recent fork)

_________

Manipulate various MDX settings to get better results

_________

Final resort - specific tips to enhance separation if you still fail in certain fragments or tracks

_________

Get VIP models in UVR5 GUI (optional donation) - it's if you can't find some of the listed above or in top ensembles chart:

https://www.buymeacoffee.com/uvr5/vip-model-download-instructions

List of VR models in UVR5 when VIP code is entered (w/o two denoise by FoxyJoy yet):

https://cdn.discordapp.com/attachments/708595418400817162/1104424304927592568/VR-Arch.png

List of MDX models when VIP Code is entered (w/o HQ_3 and voc_ft yet and MDX23C):

https://cdn.discordapp.com/attachments/708595418400817162/1103830880839008296/AO5jKyQ.png

Models repository backup of all UVR5 models in separate links

https://github.com/TRvlvr/model_repo/releases/tag/all_public_uvr_models

Some models might not be available in the repository above, e.g. the 427 model, which is available only after entering the VIP code.

(just in case, here's the link for 427:

https://drive.google.com/drive/folders/16sEox9Z_rGTngFUtJceQ63O5S9hhjjDk?usp=drive_link

Copy it to the UVR folder\models\MDX_Net_Models folder and rename the model to:
UVR-MDX-NET_Main_427)

_____________

If you already did your best in separating your track, but it still lacks original track clarity, you can use:

AI Mastering services

Mixing/mastering

Mixing track from scratch using various AIs/models

If you're not afraid of mixing, e.g. if you already have a clear instrumental or a whole track to remaster, I used the following for such a task:

- a very quiet mixture (the original file with mixed vocals)

- demucs_ft stems (both MDX23 Colab or Ensemble of 4 models on MVSEP can be even better), with also:

- drumsep result (you can also test out stems from LarsNet)

- GSEP result for piano or guitars (MVSEP models can be handy too)

- Demucs 6s only guitar stems, and for

- bass both GSEP and Demucs ft/MDX23 aligned and mixed together

- I think "other" stem could have been paired that way too (but drums remained only from e.g. Demucs_ft - they were cleaner than GSEP and good enough)

- Actually in one of those guitars weren't recognized in guitar stem, but were in other stem, so I mixed that all together (it wasn't busy mix then)

- If it's not an instrumental, mixing more than one vocal model might do the job, e.g. voc_ft and something else (it's essentially what MDX23 and ensemble in UVR do, but not exactly the same - you can add different effects to each of those tracks for a fuller sound and change their volume manually).

All the above gave me the opportunity for a very clean mix and instruments using various plugins while setting correct volume proportions, vs mastering just an instrumental separation result or plain 3 stems from Demucs.

Usually, demucs_ft provides much higher quality drums than drumsep during mixing, so you won't use drumsep stems on their own; rather, you will use drumsep to overdub specific parts of instruments (e.g. snares - that's the most useful part of using drumsep, as it's easy to bury a snare in a busy mix when hi-hats kick in in an overly processed track - now you won't have to push the drums from demucs_ft or MDX23 so drastically).

Another option by Sam Hocking for enhancing separated instrumentals from mixture with vocals

“I think the looking at spectrally significant things like snares can work. We can already do it manually by isolating the transient audio/snare pattern as midi and then triggering a sample from the track itself to reinforce, but it's time-consuming and requires a lot of sound engineering to get it invisible sounding.”

It will work the best in songs with samples instead of live recordings (if the same sounds repeat across the whole beat).

AI audio upscalers list

AI mastering services

Make your own remaster:

More clarity/better quality/general audio restoration of separated stem(s)

- have complete freedom over the result using (among others) spectral restoration plugins.

E.g. you can start by using Thimeo Stereo Tool, which has a fantastic re/mastering chain feasible for spectral restoration, useful for instrumentals sounding too filtered from vocals and lacking clarity. Also use Unchirp, which makes a great complement to Thimeo Stereo Tool.

You can also play with free Airwindows Energy/Energy2 and Air/Air2 (or Air3, MIA Thin) plugins for restoration, and furthermore some compressors or other plugins and effects mentioned in the link above.

If you're not afraid of learning a new DAW, Sound Forge Cleaning Lab 4 has great and easy built-in restoration plugins too (Brilliance, Sound Clone>Brighten Internet Sources) with complete mastering chain to push even further what you already got with Unchirp and Stereo Tool.

Izotope RX Editor and its Spectral Recovery may turn out to be just not enough, but the rest of the RX plugins, also available as VSTs, can come in handy. Cleaning Lab has lots of substitutes for filtering various kinds of noise, and they work comfortably in real-time while all are opened simultaneously and combined. You can also use some plugins from RX Editor as separate VSTs in other DAWs, including Lab 4.

Actually, once you finish using the plugins above, you can try out some of the mastering services - not the other way around (although you might want to meet some basic requirements of AI mastering services first to get the best results, e.g. in terms of volume).

Q: The AI vocal remover did not "normalize" (I don't think it's the right word) the track at the moment where the vocal was removed, so it's noticeable, especially in instrument-heavy moments.

I made things better by creating a backup echo track (combining the stereo tracks with inverted ones) and adding it to the main track at -5 dB, but it's still not good enough. Are there any techniques that separate a track without noticeable effects, or maybe there is some good restoration algorithm I can use?

A: If vocals are cancelled by AI, such a moment stands out from the instrumental parts of the song.

Sometimes you can rearrange your track in a way that it will use instrumental parts of the song when there are no vocals, instead of leaving AI separated fragments. Sometimes it's not possible, because it will lack some fragments (then you can use only filtered moments at times), and even then, you will need to take care about coherence of the final result in the matter of sound as you said.

At times, even fade outs at the ends of tracks can have decent amounts of instrumentals which you can normalize and then use in rearrangement of the track. E.g. you normalize every snare or kick and everything later in fade out, and then till the end, so it will sound completely clean.

Generally it's all time-consuming, not always possible, and then you really have to be creative using normal mastering chain to fit filtered fragments to regular unfiltered fragments of the track.

You can also try out layering, e.g. specific snare found in a good quality in the track. May work easier for tracks made with quantization, so when the pattern of drums is consistent throughout the track. Also, you can use 4 stem Demucs ft or MDX23 and overlap drums from a fragment where you don’t hear vocals yet, so drums are still crispy there.

- Nice chart describing process for creating AI cover (replace kim with voc ft there, or MDX23 vocals/UVR top ensemble).

More descriptions of models

and AIs, with troubleshooting and tips

Everyone asks which service and/or model is the best for instrumentals or vocals. The answer is - we have already listed above a few models and services which behave the best in most cases, but the truth is - the result also strictly depends on the genre, specific song, and how aggressive and heavily processed vocals it has. Sometimes one specific album gets the best results with one specific tool/AI/model, but there might be some exceptions for specific tracks, so just feel free to experiment with each track to get the best result possible using various models and services/AIs from those listed above. SDR on MVSEP doesn't always reflect bleeding well.

“Some people don't realize that if you want something to sound as clean as possible, you'll have to work for it.  Making an instrumental/acapella sounding good takes time and effort. It's not something that can be rushed. Think of it like making love to a woman. You wouldn't want to just rush through it, would you? Running your song through different models/algos, then manually filtering, EQ'ing, noise/bleed removing the rest is a start. You can't just run a song through one of these models and expect it to immediately sound like this” rAN

A good starting point is to have a lossless track. 

Then from free separation AIs to get a decent instrumental, you can start from these solutions:

- Inst fullband (fb) HQ_3/4 on paid x-minus, or on MVSEP or Colabs

HQ_4 vs 3 has some problems with fadeouts, where it can occasionally leave some vocal residues.

HQ_3 generally has problems with strings. mdx_extra from Demucs 3/4 had better result with strings here, sometimes 6s model can be good compensation in ensemble for these lost instruments, but HQ3 gives some extra details compared to those.

HQ_3/4 are generally muddy models at times, but with not much of vocal residues (near Gsep at times, but more than BS-Roformer v2).

For more clarity, use MDX23C HQ model (HQ 2 can have less vocal residues at times).

Other possibly problematic instruments are the wind ones (flute, trumpet etc.)

- in that case, use Kim inst or inst 3

HQ3 has worse SDR vs:

- voc_ft, but given that HQ_3 is an instrumental model, the latter can leave less vocal residues at times.

https://mvsep.com/quality_checker/leaderboard2.php?id=4029

https://mvsep.com/quality_checker/leaderboard2.php?id=3710

These are SDR results from the same patch, so the voc_ft vs HQ_3 comparison is valid.

- MDX23C_D1581 (narrowband) - usually worse results than voc_ft and probably worse SDR if evaluation for both models was made on the same patch

Can be a bit better for instrumentals

“The new model is very promising

although having noise, seems to pick vocals more accurately and the instrumentals don't have that much of the filtering effect (where entire frequencies are being muted).”

While others say it’s worse than demucs_ft

- GSEP AI - an online closed-source service (cannot be installed on your computer or your own site). mp3 only, 20kHz cutoff.

Decent results in some cases; click on the link above to read more about GSEP in its specific section below. The SDR leaderboard underestimates it very much, probably due to some kind of post-processing used in GSEP (probably a noise gate and/or slight reverb or chunking). As a last resort, you can use the 4-6 stems option and perform a mixdown without the vocal stem in e.g. Audacity or another DAW. The 4-6 stem option has additional noise cancellation vs 2 stem.

GSEP is good with some tracks with not busy mix or acoustic songs where everything else simply fails, or you’re forced to use the RX10 De-bleed feature.

- GSEP is also better than MDX-UVR instrumental models on at least tracks with flute and possibly duduk/clarinet or oriental tracks, and possibly tracks with only piano, as it has a decent dedicated piano model.

- To address the issue with flute using MDX-UVR, use the following ensemble: Kim_Inst, HQ1, HQ2, INST 3, Max Spec/Max Spec (Anjok).

- Sometimes kim inst and inst3 models are less vulnerable to the issue (not in all cases).

- Also, main 406 vocal model keeps most of these trumpets/saxes or other similar instruments

- Passing through a Karaoke model may help a bit with this issue (Mateus Contini method).

- inst HQ_1 (450)/HQ_2 (498)/HQ_3 MDX-UVR fullband models in the download center of UVR5 - great high-quality models to use in most cases. The latter has slightly better SDR and possibly slightly fewer vocal residues. Not as few as inst3 or Kim ft other in specific cases, but a good starting point.

What you need to know about MDX-UVR models is that they're divided into instrumental and vocal models: instrumental models will always leave some instrumental residues in vocals, and vice versa - vocal models are more likely to leave some vocal residues in instrumentals. But you can still encounter specific songs where breaking that rule will benefit you. Usually, an instrumental model should give a better instrumental if you're fighting with vocal residues.

Also, MDX-UVR models can sometimes pick up sound midi effects which won’t be recovered.

- kim inst (a.k.a. ft other) - has a cutoff; cleaner results and better SDR than inst3/464, but tends to be noisier than inst3 at times. Use:

- inst3/464 - to get muddier, but less noisy results, although it all depends on the song, and sometimes the HQ_1/2/3 models provide generally fewer vocal residues (or more detectable ones).

- MDX23 by ZFTurbo v1 - the third place in the newest MDX challenge. 4 stem. Already much better SDR than Demucs ft (4) model. More vocal residues than e.g. HQ_2 or Kim inst, but very clean results, if not the cleanest among all at the time. Jarredou in his fork fixed lots of those issues and further enhanced the SDR so it’s comparable with Ensemble on MVSEP, which was also further enhanced since the first version of the code released in 2023, and also has newer models and various enhancements.

- Demucs 4 (especially the ft 4-stem model; UVR5, Colab, MVSEP, 6s available) - Demucs models don't have such aggressive noise cancellation or the missing-instruments issue found in GSEP. Check it out too in some cases (but it tends to have more vocal bleeding than GSEP and MDX-UVR inst3/464 and HQ_3 (not always, though), and the 6-stem model has more bleeding than the 4-stem one, but not as much as the old mdx_extra 4-stem model).

- Models ensemble in UVR5 GUI (one of the best results so far for both instrumentals and vocals SDR-wise). Decent Nvidia GPU required, or brace for 4 hours processing on 2/4 Sandy Bridge per whole ensemble of one song. How to set up ensemble video.

General video guide about UVR5.

"UVR-MDX still struggles with acoustic songs (with a lot of pianos, guitars, soft drums etc.)" so in this case use e.g. GSEP instead.

Description of vocal models by Erosunica

"That's my list of useful MDX-NET models (vocal primary), best to worst:

- MDX23C-8KFFT-InstVoc_HQ (Attenuates some non-verbal vocalizations: short low-level and/or high-frequency sounds)

- Kim Vocal 2

- UVR-MDX-NET-Voc_FT

- Kim Vocal 1

- Main (Attenuates some low level non-verbal vocalizations)

- Main_340 (Attenuates some non-verbal vocalizations)

- Main_406 (Attenuates some non-verbal vocalizations)

- Kim Inst (Attenuates some non-verbal vocalizations)

- Inst_HQ_3 (Attenuates some non-verbal vocalizations)

- MDXNET_2_9682 (Attenuates some non-verbal vocalizations)"

It's also worth checking HQ_4.

“UVR BVE v2 model [currently on x-minus] is actually full band. There is, however, a small nuance. This model uses MDX VocFT preprocessing, which is not full band. MDX VocFT model is rebalancing the song. The music is slightly mixed with the vocals (25% music + 100% vocals). This mix is then processed by the BVE model. A small amount of music can help the model better understand the context (it's important for harmony separation). We train the model on a rebalanced dataset. It contains 25% of music.” aufr33

_____

Misc section tips moved to last points of Tips to enhance separation section

_____

Screenshot and video showcase

MDX settings in UVR5

For vocal popping in the instrumental, read about chunks, or update UVR to use a better option applied automatically, called batch mode (if you haven't updated already).

In the latest GUI update, the following min/avg/max features for single models got replaced by a better alternative. Now it’s only applicable for ensemble and manual ensemble in Audio Tools.

Ensemble algorithm explanations

These are rules to be broken, but generally:

Max Spec is generally for vocals

Min Spec is for instrumentals in most cases (it keeps only what the results have in common)

Avg is something in between

E.g. following the above, we get the following setting:

“Max Spec / Min Spec”

Left side = about the Vocal stem/output

Right side  = about the Instrumental stem/output

These three algos in newer UVR versions might be no longer available for single models, and only for Ensemble Mode. So you might still get cleaner results of e.g. voc_ft with max_mag on X-Minus or Colab (or rollback your UVR version).

Further explanations

For ensemble, avg/avg got the highest SDR, then worse results for respectively max/max, min/max and min/min.

For single MDX model, min spec was the safest for instrumental models and gave the most consistent results with less vocal residues than others.

Max spec - is the cleanest - but can leave some artifacts (if you don't have them in your file, then Max Spec for your instrumental like now might be a good solution).

Avg - the best of both worlds, and the only one for which SDR could be tested, at least for ensembles - maybe even to this day if it wasn't patched

“Max Spec/Min Spec” option

For at least a single instrumental model, it's the safest approach for instrumentals and universal for vocals. E.g. Min Mag/Spec in Colab gives me the only acceptable results with the hip-hop I usually separate using a single model, but I cannot guarantee that Min Spec here necessarily works exactly like Min Mag in Colab. The explanation is the same, though. The best option might even depend on the song.
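
For illustration only, a minimal sketch of how a magnitude ("spec") ensemble can be implemented - not necessarily identical to UVR's internal code (assumes librosa/soundfile and mono files for simplicity; flip the comparison for a Min Spec-like result, or average the complex spectrograms for something closer to Avg):

# Per STFT bin, keep whichever result has the larger magnitude (its phase comes along with it)
import numpy as np
import librosa
import soundfile as sf
a, sr = librosa.load("result_A.wav", sr=None, mono=True)
b, _ = librosa.load("result_B.wav", sr=None, mono=True)
n = min(len(a), len(b))
A, B = librosa.stft(a[:n]), librosa.stft(b[:n])
out = np.where(np.abs(A) >= np.abs(B), A, B)   # "max spec"-style pick; use <= for a "min spec"-style pick
sf.write("ensemble_max_spec.wav", librosa.istft(out, length=n), sr)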

TL;DR

For vocals bleeding in instrumentals

You can use Spectral Inversion for alleviating problems with bleeding in instrumentals.

Max Spec/Min Spec is also useful in such scenario.

You want less bleed of Vocal in Instrumental stem?

Use Max-Min

For bleeding instruments in vocals

Enabling Phase Inversion helps to get rid of kick transients that might still be audible in vocals.

Set Ensemble Algorithm: Min/Avg when you still hear bleeding.

If still the same, try Min/Max instead of Avg/Avg when doing an ensemble with Vocals/Instrumental output.

Also, you can skip the ensemble altogether and simply use only the Kim vocal model if the result is still not satisfactory.

_______

MDX v2 parameters (HQ_1-4, Kim inst, Inst 1-3, NET)

Don't exceed an overlap of 0.93 for MDX models, it's getting tremendously long with not much of a difference.

Overlap 0.7-0.8 might be a good choice as well.

Also, segment size can tank performance badly -

segments 2560 and 2752 (for 6GB VRAM) might still be a high but balanced value, although not fully justified SDR-wise, as 512 or 640 can be better than higher values in many cases.

Overlap: 0.93-0.95 (0.7-0.8 seems to be the best compromise for ensembles, with the biggest measured SDR for 0.99)

The best measured SDR on the MVSEP leaderboard currently comes from the following settings:

Segment Size: 4096

Overlap: 0.99

with 512/0.95 worse by a hair (0.001 SDR), and 0.9 overlap giving long, but not tremendously long, processing times (1h30m31s vs 0h46m22s for the multisong dataset on a GTX 1080 Ti).

Segments 512 had better SDR than many higher values on various occasions (while 256 has lower SDR, and has almost the same separation time).

Also, segments 12K performed worse than 4K SDR-wise (counterintuitive to the common claim that higher means a better result; there are likely diminishing returns at some point, so too-big values may cause an SDR drop in some cases).

It seemed to be correlated with set overlap.

For overlap 0.75, segments 512 were better than 1024,

but for overlap 0.5, 1024 was better. The best SDR out of these four results came from the 0.75/512 setting (although it's a bit slower than 1024), while for 0.99 overlap, 4096 segments were better than 512.

SDR difference between overlap 0.95 and 0.99 for voc_ft in UVR is 0.02.

Segment size 4096 with overlap 0.99 (here) vs 512/0.95 (here) showed only 0.001 SDR difference for voc_ft and vocals in favour of the first result.

Difference between segment size 512 with overlap 0.25 (here) vs 0.95 (here) is 0.1231 SDR in favour of the latter.

The difference between default segment size 256 with overlap 0.25 (here) vs 512/0.95 (here) is 0.1948 SDR for vocals, and 0.1969 with the denoiser on (standard, not the model), and 0.95 takes about three times longer.

1024/0.25 vs 256 has not much longer processing time (7 vs 6 mins) than default settings, and better SDR by 0.0865

For overlap 0.75, segments 512 were better than 1024 (at least on 1 minute audio).

Segments 1024 and 0.5 overlap are the last options before processing time increases very much.

The measurement is in decibels (logarithmic): a 10 dB difference corresponds to a 10x power ratio, so a 1-point SDR difference is roughly a 1.26x ratio.
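For reference, the usual definition (a standard textbook formula, not anything UVR-specific):

SDR = 10 · log10( ||s_target||² / ||s_target − s_estimate||² ) [dB]

where s_target is the ground-truth stem and s_estimate is the separated result; higher means the estimate is closer to the target.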

_____

MDX v3 parameters (MDX23C-InstVoc HQ and 2 and MDX23C_D1581)

(Biggest measured SDR)

Segment Size: 512

Overlap: 16

“512/16 is slightly better for big cost of time” vs default 256/8.

- Turns out that with a GPU with lots of VRAM e.g. 24GB, you can run two instances of UVR, so the processing will be faster. You only need to use 4096 segmentation instead of 8192.

It might not be fully correct to judge segment and overlap settings by measurements obtained on the multisong dataset, as every file in the dataset is shorter than an average track. That may lead to a different number of segments and different overlaps than with normal tracks, so the results won't fully reflect normal separation use cases (if e.g. the number of segments depends on the input file). Potentially, the problem could be solved by increasing overlap and segments for a full-length song to achieve the same SDR as with its fragment from the multisong dataset.

Recommended balanced values for various archs

between quality and time for 6GB graphic cards:

VR Architecture:

Window Size: 320

MDX-Net:

Segment Size: 2752 (or 1024 if it's taking too long, as it's the last value before processing time increases really steeply; SDR-wise, 512 is better in every case than the default 256 unless overlap is increased, and still gets good SDR results)

Overlap: 0.7-0.8

Demucs:

Segment: Default

Shifts: 2 (def)

Overlap: 0.5

(experimental: 0.75,

default: 0.25)

The best SDR for the least time for Demucs (more a compromise, as it takes longer than default settings ofc):

Segments: Default

Shifts: 0

Overlap: 0.99 (max can be 0.999 or even more, but it’s getting tremendously long)

"Overlap can reduce/remove artifacts at audio chunks/segments boundaries, and improve a little bit the results the same way the shift trick works (merging multiple passes with slightly different results, each with good and bad).

But it can't fix the model flaws or change its characteristics"

“Best SDR is a hair more SDR and a sh*load of more time.

In case of Voc_FT it's more nuanced... there it seems to make a substantial difference SDR-wise.

The question is: how long do you wanna wait vs. quality (SDR-based quality, tho)”

For lack of spectrum above 14.7kHz

E.g. in such ensemble:

5_HP-Karaoke-UVR, 6_HP-Karaoke-UVR, UVR-MDX-NET Karaoke, UVR-MDX-NET Karaoke 2

Set Max Spec/Max Spec instead of Min Spec/Min Spec, and also hi-end process (both need to be enabled for fuller spectrum).

Karaoke models are not full band; even VR ones are 17.7kHz and MDX ones are 14.7kHz IIRC. Setting Max Spec with hi-end process will give around 21kHz output in this case.

Cutoff with min spec in narrowband models is a feature introduced at some point in UVR5 GUI for even single MDX models in general, and doesn't exist in CLI version. It's to filter out some noise in e.g. instrumental from inversion. Cutoff then matches model training frequency (in CLI MDX, vocal model after inversion with mixture gives full band instrumental). Also, similar filtering/cutoff is done in ensemble with min spec.

More explanations

Why not always go for Min-Max when you want the best acapella?

Why not always go for Max-Min when you want the best Instrumental?

So far, I hear Max-Min on Instrumental sounds more 'muddy/muffled' compared to Avg-Avg.

I bet this will be the same for acapella, but it's less noticeable (I don't hear it).

Hence, I think the best approach would be always going with Avg-Avg.

Then based on the outcome - after reviewing, tweak it based on your desired outcome,

and process again with either Min-Max or Max-Min.”

Min = less bleeding of the other side/stem (into this side/stem), but could get sound muddy/muffled

Max = more full sound, but potentially more bleeding

Avg = average, so a bit of all models combined

Average/Average is currently the best for ensemble (the best SDR - compared with Min/Max, Max/Min, Max/Max).

“Ensemble is not the same as chopping/cutting off and stitching, it blends/removes frequencies. If song 1 has high vocals in the chorus, and song 2 has deep vocals in the chorus, max will mash them together, so the final song will have both high and deep vocals

while min will remove both vocals”

"If I ensembled with max, it would add a lot of noise and hiss, if I ensemble with min it would make the overall sound muted gsep."

Max - keeps the frequencies that are the same and adds the different ones

“Max spec tends to give more artifacts as it's always selecting the loudest spectrogram frequency bins in each stft frames. So if one of the input have artifacts when it should be silent, and even if all other inputs are silent at the same instant, max spec will select the artifacts, as it's the max loud part of spectrogram here.” jarredou

Min - keeps the frequencies that are the same and removes any different ones

"if the phases of the frequencies are not similar enough min spec and max spec algorithms for ensembles will create noisy artifacts (idk how to explain them, it just kinda sounds washy), so it's often safer to go with average"

by Vinctekan

"Min = Detects the common frequencies between outputs, and deletes the different ones, keeps the same ones.

Max = Detects the common frequencies between outputs, and adds the difference to them.

Now you would think that Max-Spec would be perfect since it should combine all of the strengths of every model, therefore it's probably the best option.

That would be the case if it wasn't for the fact that the algorithms that are used are not perfect, and I posted multiples tests to confirm this.

However, it still probably gives the cleanest results, though there are a few issues with said Max_Spec:

1. A lot of instrumental is going to be left within the output

2. If you are looking to measure quality by SDR, don't expect it to be better than AVG/AVG

The average algorithm basically combines all the outputs and averages them, like the AVERAGE function in Excel.

The reason why it works best is that it does not destroy the sound of any of the present outputs compared to Max_Spec and Min_Spec

The 2 algorithms still have potential for testing, though."
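As a rough illustration of the idea only (this is not UVR's actual implementation - the file names, STFT settings and phase handling below are assumptions), a minimal Python sketch of spectrogram-domain max/min/avg ensembling of two results:

import numpy as np
import librosa
import soundfile as sf

# load two separation results of the same song (mono here, for simplicity)
a, sr = librosa.load("result_model_A.wav", sr=None, mono=True)   # placeholder names
b, _  = librosa.load("result_model_B.wav", sr=sr, mono=True)
n = min(len(a), len(b))
a, b = a[:n], b[:n]

A, B = librosa.stft(a), librosa.stft(b)
mags = np.stack([np.abs(A), np.abs(B)])

combined_mag = mags.max(axis=0)   # "Max Spec"; use .min(axis=0) or .mean(axis=0) for Min/Avg
# reuse the phase of whichever input was louder in each bin (an assumption, not UVR's exact method)
phase = np.where(np.abs(A) >= np.abs(B), np.angle(A), np.angle(B))

out = librosa.istft(combined_mag * np.exp(1j * phase), length=n)
sf.write("ensemble_max.wav", out, sr)

Min and avg are the same one-line change, and the stack can hold more than two results.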

Compensation values

“Volume compensation compensates the audio of the primary stems to allow for a better secondary stem.”

For the last Kim's ft other instrumental model, 1.03 or auto seems to do the best job.

For Kim vocal 1 and NET-X (and probably other vocal models), 1.035 was the best, while 1.05 was once calculated to be the best for inst 3/464 model, but the values might slightly differ in the same branch (and compensation value in UVR5 only changes secondary stem - changing compensation value in at least UVR GUI for inst models doesn't change SDR of instruments metric)
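Judging only from the quotes above (this is an interpretation, not UVR's actual code), the effect is roughly: the secondary stem is the mixture minus the scaled primary stem, while the saved primary stem itself is unchanged. A minimal sketch with hypothetical file names:

import soundfile as sf

mix, sr = sf.read("mixture.wav")
vocals, _ = sf.read("vocals_primary.wav")   # primary stem from a vocal model (placeholder names)
compensate = 1.035                          # e.g. the value mentioned above for Kim vocal 1

n = min(len(mix), len(vocals))              # assumes both files have the same channel count
instrumental = mix[:n] - compensate * vocals[:n]   # only the secondary stem is affected
sf.write("instrumental_secondary.wav", instrumental, sr)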

self.n_fft /  dim_f / dim_t

These parameters directly correspond with how models were trained. In most cases they shouldn't be changed, and automatic parameter detection should be enabled.

- Fullband models:

self.n_fft = 6144 dim_f = 3072 dim_t = 8

- kim vocal 1/2, kim ft other (inst), inst 1-3 (415-464), 406, 427:

self.n_fft = 7680 dim_f = 3072 dim_t = 8

- 496, Karaoke, 9.X (NET-X)

self.n_fft = 6144 dim_f = 2048 dim_t = 8 (dim_t = 9 for kuielab_a_vocals only)

- Karaoke 2

self.n_fft = 5120 dim_f = 2048 dim_t = 8

- De-reverb by FoxyJoy

self.n_fft = 7680 dim_f = 3072 dim_t = 9

Denoising

Denoise option used to increase SDR for MDX-Net v2, but instrumentals get a bit muddier (result).

Denoise model has slightly lower SDR (result).

For MDX23C models it somehow changed and using standard denoiser doesn’t change SDR.

Spectral Inversion on bigger dataset like Multisong Leaderboard decreases SDR, but sometimes you can avoid some e.g. instrumental residues using it - can be helpful when you hear instruments in silent parts of vocals.

Explanation:

"When you turn on spectral inversion, the SDR algorithm is forced to invert the spectrum of the signal. This can cause the SDR to lose signal strength, because the inverse of a spectrum is not always a valid signal. The amount of signal loss depends on the quality of the signal and the algorithm used for spectral inversion.

In some cases, spectral inversion can actually improve the signal strength of the SDR. This is because the inverse of a spectrum can sometimes be a more accurate representation of the original signal than the original signal itself. However, this is not always the case, and it is important to experiment with different settings to find the best results.

Here are some tips for improving the signal strength of the SDR when using spectral inversion:

* Use a high-quality input. The better the quality of the signal, the less likely it is that the SDR will lose signal strength when the spectrum is inverted. (...)"

The source also goes on about picking a good inversion algorithm and experimenting with different ones, but UVR seems to offer only one anyway.

Q: I noticed https://mvsep.com/quality_checker/leaderboard2.php?id=2967

has Spectral Inversion off for MDX but on for Demucs. The Spectral Inversion toggle seems to apply to both models, so should it be on or off?

A: Good catch.

Once u put it on for one or the other, both will be affected indeed.

I've enabled it (so for both, actually) [for this result].

Tips to enhance separation

1. De-bass

Turn down all the bass to stabilize the voice frequencies of your input song (example EQ curves: 1 and 2).

Male setting: cut all below 100 Hz + cut all above 8 kHz.

Female setting: cut all below 350 Hz + cut all above 17 kHz.

This works, because jitter is reduced a lot.

2. De-reverb

You can also test out the de-reverb e.g. in RX Advanced 8-10 on your input song. One or both combined in some cases may help you get rid of some synth leftovers in vocals. Alternatively (not tested for this purpose), you can also try out this or this (dl is in UVR's Download Center) de-reverb model (decent results). Currently, the VR dereverb/de-echo model in UVR5 GUI seems to give the best results out of the available models (but RX or others described in the models list section at the top can be more aggressive and effective with more customizable settings).

3. Unmix drums (mainly tested on instrumentals)

Separate an input song using 4 stem model, then mix the result tracks together without drums and separate the result using strong ensemble or single vocal or instrumental model (doesn't always give better results).

Alternatively, unmix bass as well. There's a great bass+drums BS-Roformer model released for UVR (currently in beta).

4. Pitch it down/up - soprano/tenor voice trick + ensemble of both

(already implemented in newer versions of UVR as the “Shift Conversion Pitch” option)

Slow down the track before separation, so e.g. model with cut-off will be compensated for its band lost a bit after speeding up again.

If you slow down the input file, it may allow you to separate more elements in the “other” stem of 4-6 stems separations of Demucs or GSEP.

It also works when you need an improvement in instruments like snaps, human claps, etc. The soprano feature on x-minus works similarly (or even the same way); it's also good for high-pitched vocals.

Be aware that low deep male vocals might not get separated while using this method (then use tenor voice trick instead - pitch up instead of pitch down).

Also, it works best for hard-panned songs (e.g. the 1970s and earlier - The Beatles, etc.). On the multisong dataset, it decreases SDR by around 1.

"Basically lossless speed conversion [a.k.a. soprano voice trick]:

Do it in Audacity by changing sample rate of a track, and track only (track > rate), it won't resample, so there won't be any loss of quality, just remember to calculate your numbers

44100 > 33075 > 58800

48000 > 36000 > 64000

(both would result in x0.75 speed)

etc." (by BubbleG)

If you have a mix of soprano and baritone voices, you possibly can do:

"1. Soprano mode (slow down sample rate), then bring back to normal

after that

2. Tenor mode (speed up sample rate), then bring back to normal

and finally combine the two with max algorithm"

Making an ensemble of such results can also increase the quality of separation.

5. Better 4 stem result -

Use 2 stem model result as input for 4-6 stem separation

You may get better results in Demucs/GSEP/MDX23C Colab using previously separated good instrumental result from UVR5 or elsewhere (e.g. MDX HQ3 fullband or Kim inst narrowband in case of vocal residues, or BS-Roformer 1296)

6. Debleed

If you did your best, but you still get some bleeding here and there in instrumentals, check RX 10 Editor with its new De-bleed feature. Showcase

7. Vocal model>karaoke model

You might want to separate the vocal result achieved with a vocal model with MDX B Karaoke afterwards to get different vocals.

8. The same goes for unsatisfactory result of instrumental model - you can use MDX-UVR Karaoke 2 model to clean up the result, or top ensemble or GSEP like for cleaning inverts

9. Mixdown of 4 stems with vocal volume decreased for final separation 

An old trick of mine. Used in times of Spleeter to minimize vocal residues.

Separate the mixture into 4 stems, then mix the stems back so that the vocal is still there but quieter (lower its volume) and the drums are louder. Then send that mixdown to one good isolation model/ensemble - as a result, the drums after separation will be less muddy, and possible vocal residues will be less persistent.

But it was in times when there wasn't even Demucs (4) ft or MDX-UVR instrumental models, where such issues are much less prevalent.

10. If you use UVR5 GUI with a 4GB GPU, you may hear more vocal residues using GPU processing than e.g. with an 11GB GPU. In this case, use CPU processing instead.

11. Fake stereo trick

Aufr33: “process the left channel, then the right channel, then combine the two. [Hence] the backing vocals in the verses are removed” (it still may be poor, but better). “I'm having to process as L / R mono files otherwise I get about 3-5% bleed into each channel from the other channel, but processing individually, totally fixes that” -A5

Using Audacity as an example: import your file, click the down arrow in the track header near its label, click Split Stereo Track, then go to Tracks>Add New>Stereo Track.

Select the whole channel, copy it, and paste it onto one of the tracks you divided before.

This overlays the same mono track within a stereo track, so it's identical across both channels.

Do the same for L and R separately. Then separate both results with some model. Finally, import both separated files and join their respective channels using the method above. Don't mix up the L and R channels while joining them.
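The channel splitting and final rejoin can also be scripted; a minimal sketch with numpy/soundfile (file names are placeholders, and the separation itself is done in between with whatever tool you use):

import numpy as np
import soundfile as sf

mix, sr = sf.read("song.wav")          # assumes a stereo input, shape (samples, 2)
left, right = mix[:, 0], mix[:, 1]

# fake stereo: the same mono channel duplicated on both sides
sf.write("song_L.wav", np.column_stack([left, left]), sr)
sf.write("song_R.wav", np.column_stack([right, right]), sr)

# ...separate song_L.wav and song_R.wav with the same model...

voc_l, _ = sf.read("vocals_L.wav")
voc_r, _ = sf.read("vocals_R.wav")
n = min(len(voc_l), len(voc_r))
# rejoin: take one channel from each result, keeping L and R in the right order
vocals = np.column_stack([voc_l[:n, 0], voc_r[:n, 0]])
sf.write("vocals_stereo.wav", vocals, sr)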

12. Turn on Spectral Inversion in UVR 

It can be helpful when you hear instruments in silent parts of vocals; sometimes the denoiser also helps (although it can make results slightly muddier).

13. For vocal residues in the instrumental, you can experimentally separate it with e.g. the Kim vocal (or inst 3) model first and then with an instrumental model. You might want to perform additional steps to clean the vocal from instrumental residues first, invert it manually against the mixture to get a cleaner instrumental, and then separate that with an instrumental model to get rid of vocal residues. Tutorial

14. To not clean silences from instrumental residues in the vocal stem manually, you can use a noise gate in even Audacity. Video

In some cases, using noise reduction tool and picking noise profile might be necessary. Video

15. Choice of good models for ensemble

Use only instrumental models for an ensemble if you get vocal residues (and possibly vice versa - use only vocal models in a vocal ensemble to get fewer instrumental residues). This was mainly used when there was still a strong division between vocal and instrumental models (before the MDX23C release). Now it narrows down to picking only models which don't have bleeding - listen to all the individual model results carefully and pick the best 2-5 to make an ensemble.

16. For vocals with vocoder

You can use 5HP Karaoke (e.g. with aggression settings raised up) or Karaoke 2 model (UVR5 or Colabs). Try out separating the result as well.

"If you have a track with 3 different vocal layers at different parts, it's better to only isolate the parts with 'two voices at once' so to speak"

Be aware that BS-Roformer model ver. 2024.04 on MVSEP is better on vocoder than the viperx’ model.

17. Find some leaked or official instrumental for inversion

 

To get better vocals

If you're struggling hard getting some of the vocals:

"I used an instrumental that I don't remember where I found it (I'm assuming most likely somewhere on YouTube) and inverted it and then used MDX (KAR v2) on x-minus and then RX 10 after.

I Just tried the one-off Bandcamp and funnily enough it didn't work with an invert as good as the remake that I used from YouTube, but I don't remember which remake it was I downloaded because it was a while ago"

18. Fix for ~"ah ha hah ah" vocal residues

Try some L/R inverting, and try separating multiple times to get rid of vocal pop-ins like this.

19. Center channel extraction method 

by BubbleG using Adobe Audition:

"The idea is that you shift the track just enough where for example if you have a hip hop track, and the same instrumental tracks the drums will overlap again in rhythm, but they will be shifted in time so basically Center Extract will extract similar sounds. You can use that similarity to further invert/clean tracks... It works on tracks where samples are not necessarily the same, too…”


Step-by-step guide by Vinctekan (video)

1. You take your desired audio file

2. Open it in Audacity

3. Split Stereo to Mono

4. Click the left speaker channel (now mono), and duplicate it with Ctrl+D.

*: If the original and the duplicate are not beside each other, move them so that they're next to each other

5: Select the original left speaker channel and its duplicate, and click "Make Stereo Track"

6: Solo it.

7. Export it in Audacity, preferably at 44100 Hz since UVR doesn't output higher sample rates. Format and bit depth don't really matter; I always prefer wav.

8: Do the same thing for the right speaker channel.

9: Open UVR

10: Navigate to Audio Tools>Manual Ensemble.

11: Make sure to choose Min Spec (since that function is supposed to isolate the common frequencies of 2 outputs)

12: Select the 2 exported fake stereo files of both the left and right speaker channels.

13: Hit process

___

20. Q&A for the above

Q: For the right channel are you doing the same with the duplicate and moving the file next to the original or just duplicating and making that stereo?

A: Those 2 steps go hand in hand. The reason I mentioned it is that if you try to make a Stereo Track with those 2 (the left/right speaker channel and its mono duplicate) when there is a track between them, it doesn't work. Even if you select those 2 with Ctrl held down.

Take that 1 channel (left/right), Ctrl+C, Ctrl+V, now you have 2 of the exact same audio. Hold Ctrl select the 2, click "Make Stereo Track". Finally, export.

21. Passing through lot of models one by one

"I usually do ensemble to make an instrumental first, then demucs 4_ft… sometimes I do it once, then take that rendered file and pass it back through the algo a few more times, depends until it strips out artifacts."

It can also be beneficial when MDX23 or the Demucs ft model leaves more vocal residues compared to current MDX models or their ensembles.

22. If you still have instrumental bleeding in vocals using voc_ft, process the result further with Kim vocal 2

23. Rearrange cleaner parts

When a verse starts and the drums become muddy but their pattern is consistent (e.g. in some hip-hop), and you have cleaner drums from fragments before the verse starts, you can rearrange the drums manually: separate with a 4-stem model and paste the cleaner fragments throughout the track. Sometimes fade-outs or intros have clean loops without vocals, which can be rearranged without even needing separation. Listen carefully to the track - such moments can appear even briefly in the middle of the song.

24. arigato78 method for lead vocal acapella

1) Try to make the best acapella (using mvsep.com site or using UVR GUI). I recommend the MDXB Voc FT model for this with an overlap setting set to at least 0.80 (I used 0.95 for this example). The overlap for this model at mvsep.com is set to 0.80. Speaking of the "segment size" parameter in UVR GUI - changing it from 320 to 1024 doesn't make much of a difference. It acts randomly, but we're working on a beta version of UVR GUI - remember that. (...)

I noticed all the "vocal-alike" instruments still remaining on the acapella track, but wait...

2) The second part is to process the acapella thru the mdx karaoke model (I did it using mvsep.com). I prefer the file with "vocalsaggr" in the name. It has more details than the file with "vocals" in it. The same goes to the background vocals in this case - I prefer the "instrumentalaggr" one.

One important thing - all (maybe almost) of the residue instrumental sounds were taken by mdx karaoke model to the backing vocals stem, leaving the lead vocal almost studio quality ("studio"). But - it may be helpful for all you guys trying to make good acapellas. I was just playing with all the models and parameters and I accidentally came across this. Please, let me know what you think about it. I'm gonna try this on some tracks with flutes, etc. And I realize that this method is not perfect - we get nice lead vocals, but the backing vocals are left with all that sh*tty residues.

So the track is called "Reward" by Polish singer Basia Trzetrzelewska from her 1989 album "London, Warsaw, New York".

___

25. Uneven quality of separated vocals

You can downmix your separated vocal result to mono and repeat the separation (works for e.g. BVE model on x-minus).

26. Experimental vocal debleed with AI for voice

Sometimes, for instrumental residues in vocals, AIs meant for voice recorded with a home microphone can be used (e.g. Goyo, or even Krisp, RTX Voice, AMD Noise Suppression, or Adobe Podcast as a last resort). It all depends on the type of vocals and how destructive the AI gets.

27. Minimize vocal residues for very loud songs

For very loud tracks, between -2.5 and -4 LUFS (integrated), try decreasing the volume of your track before separation. E.g. for Ripple, -3dB for loud tracks is a good choice. If the track you're trying to separate is already quiet and around -3dB, this step is not necessary.

28. Brief models summary

MDX-Net HQ_3 or 4 is a more aggressive model for instrumentals, usually with fewer residues vs the MDX23C HQ models, or sometimes even vs KaraFan or jarredou's MDX23 Colab v2.3. HQ_3 can give muddier results than the competition, though.

The most aggressive are the BS-Roformer models; they can sound filtered and even muddier at times, but cleaner. It's good to use them in an ensemble with e.g. an MDX23C model.

voc_ft is pretty universal for vocals (with residues in the instrumental, but not less muddy results), while people also liked Ripple/Capcut, although they give more artefacts (use the released BS-Roformer models for vocals now instead). Consider using the MDX23C HQ model(s) as well, but they tend to have more instrumental residues.

29. Cleaning up bleeding between mics in multitracks

(by SeniorPositive)

"Demucs bleed "pro" tip that I figured out now, and I didn't see mentioned, that I will probably try to use every time I hear some bleed between. (...) I was cleaning multitrack from bleed between microphones in conga track, and used demucs for separation drums/rest pair, and [the] other [stem] had some of those bongos still, very very low, but it existed, and I heard it just enough.

- So I took the rest signal and boosted it +20dB (do NOT normalise! You can use another value, but note how much you boosted, and stay a few dB below the 0dB threshold). If you do not boost it to sensible levels, the algorithm will skip it.

- Do the separation once again (this time I did it with the SpectraLayers one, but it's also Demucs)

- Lower the result by -20dB and add it to the first separation result.

[The] result [is -] better separation, fewer data in other/bleed and with proper proportions.

It looks like AI is not yet perfect with low volume information and, as seen in ripple Bas Curtiz discovery, too hot content also."

Showcase
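A minimal sketch of the gain bookkeeping in Python (the boost value and file names are just examples; the separation step in the middle is done with your tool of choice, and adding the cleaned result back to the first separation is left out):

import numpy as np
import soundfile as sf

quiet_bleed, sr = sf.read("other_with_bleed.wav")

boost_db = 20.0                                      # as in the quote above
gain = 10 ** (boost_db / 20)
peak = np.max(np.abs(quiet_bleed))
if peak > 0:
    gain = min(gain, 0.99 / peak)                    # stay below the 0 dBFS threshold
sf.write("other_boosted.wav", quiet_bleed * gain, sr)

# ...separate other_boosted.wav again...

sep, _ = sf.read("other_boosted_separated.wav")
sf.write("other_cleaned.wav", sep / gain, sr)        # bring it back down by the same amount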

30. For clap leftovers in vocal stem

Methods suggested in debleeding

31. (paraphrase of point 17)

If you managed to find any official instrumental or vocal, use the traditional phase inversion method and then feed the results to the UVR models, even though it won't invert perfectly. This way, the models will have less noisy data to work with. But it sometimes happens that the official instrumental and vocal versions of tracks have slightly different phasing. This makes isolating vocals via phase inversion difficult, or sometimes even impossible. ~Ryan_TTC

Sometimes only specific fragments of song will align, and in further parts of the track it will stop and require manual aligning. You may try to use utagoe or possibly UVR with Aligning in Audio Tools as it shares some similar functionalities.

Why official stems don’t invert?

“Very rarely will the vocal or instrumental fully invert out of the master. This is because of master bus processing and non-linear nature of that processing. I.e. part of the masters sound is the processing reacting to the vocal and instrumental passing through the same chain.

Sidechaining and many limiters are also looking ahead to the signal. Also, some processing is non-linear so even if you set it up identically re. settings, each bounce will be slightly different in nature. Stuff like saturation/distortion. Some reverbs, limiters and transient shapers etc are not outputting the same signal / samples every time you bounce, so instrumental bounce is not the same as the master bounce in terms of phase inversion.” - Sam Hocking

32. Muddiness in instrumentals of some BS-Roformer models

Invert the (ideally lossless) mixture (the original song, i.e. instrumental mixed with vocals) with the vocal result of the separation. It might increase vocal residues outside busy parts of the mix.

Inverting the vocals instead of the mixture will result in fewer residues, but more artificial results in busy parts of the mix.

A similar trick might even increase SDR for MDX23C models IIRC.

How to perform inversion is explained somewhere in this doc by Bas Curtiz.

It might be unnecessary to use in UVR - it might use this trick for BS-Roformer models already, but for 2024.02 on MVSEP it was beneficial.

The trick is not necessary for the 04.2024 BS-Roformer model (it sounds worse after inverting). Furthermore, for some muddiness in this model, you can use the premium feature - ensemble. The default output without intermediates should be enough (min_fft is very muddy, and max_fft very noisy). Strangely, the Roformer result from the intermediates might sound very slightly better (maybe it was something random). The ensemble is kind of mimicked in jarredou's MDX23 v2.4 Colab, and to some extent it can be mimicked in UVR by using the 1296+1297+MDX23C HQ ensemble (or a copy of the 1296 result via Manual Ensemble instead, for faster processing).
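A minimal sketch of that inversion in Python, assuming both files are already time-aligned and share the same sample rate (file names are placeholders):

import numpy as np
import soundfile as sf

mix, sr = sf.read("mixture_lossless.wav")
voc, _  = sf.read("vocals_from_roformer.wav")        # vocal output of the separation

n = min(len(mix), len(voc))
instrumental = mix[:n] - voc[:n]                     # phase inversion: mixture minus vocals
sf.write("instrumental_inverted.wav", instrumental.astype(np.float32), sr, subtype="FLOAT")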

33. Descriptions of models, pt. 2

Muddiness of instrumentals in specific archs

Besides changing min/avg/max spec for MDX ensembling (or in the Colab for single models), plus aggression for VR models, or manipulating shifts and overlap for Demucs models, you need to know that some models or AIs usually sound less muddy than others. E.g. VR tends to have less muddiness vs the MDX-Net v2 arch, but the former tends to have more vocal residues. Consider using HQ2/3/4/inst3/Kim inst for fewer residues than in the VR arch or BS-Roformer.

For less muddiness than in MDX-Net, consider using MDX23 Colab 2.0/2.1 or 2.2 (more residues) or KaraFan (e.g. preset 5).

34. Muddiness of 4/+ stem results after mixdown

UVR5 supports even 64-bit output for Demucs; alternatively, you can use the Colab or CLI version for 32-bit float, while mvsep.com supports 32-bit output for the MDX23 model when you choose WAV. It has better SDR vs Demucs anyway, but sometimes more vocal residues.

Then, on MVSEP, besides the 4 stems, you also get an instrumental - a ready-made 32-bit mixture of the three non-vocal stems - which is not bad. But you can go to the extreme: download e.g. Cakewalk and the 3 stems separately, and then in Cakewalk:

1) Don't use splash screen project creation tool, close it

2) Go to new

3) Pick 44100 and 64 bit

4) Make sure that double 64 bit precision is enabled in options

5) Import MDX23 3 stems (without vocals)

6) Go to file>Export

7) Pick WAV 64

Output files of a 64-bit mixdown are huge, but that way you get the least amount of muddiness possible - provided the MDX23 model doesn't give you much more vocal residues vs MDX-UVR inst models or a top ensemble than you'd accept.

Be aware that 32-bit float outputs can sound muddier than 16-bit ones. This is probably because most sound cards/DACs don't have native 32-bit float output support in their drivers, so additional conversion must be done on the fly during playback, probably even if some drivers allow using 32-bit output in Sound settings in Control Panel for the same device (while another version might not).

Spectrum-wise, instrumentals downloaded from MVSEP vs manual mixdowns are nearly identical. The only difference in one case I saw was in an instrumental intro in the song where the site's instrumental had more high end, maybe noise, but besides, spectrum looks identical at first glance without zooming it. Still, when I performed mixdown to anything lower than 64 bit, I didn't get comparable clarity to the site's instrumental. Maybe I'd need to change some settings, e.g. change project bit depth to the same 32 bits as stems and later perform mixdown to 64 bit. Haven't tested it yet.
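If you'd rather skip the DAW, here's a minimal Python sketch of the same idea - summing the non-vocal stems in double precision and writing a float WAV (file names are placeholders; whether this matches a Cakewalk 64-bit mixdown exactly is not verified):

import numpy as np
import soundfile as sf

stems = ["bass.wav", "drums.wav", "other.wav"]       # everything except vocals
mixdown, sr = None, None
for path in stems:
    data, sr = sf.read(path, dtype="float64")        # sum in double precision
    mixdown = data if mixdown is None else mixdown[:len(data)] + data[:len(mixdown)]

# subtype "DOUBLE" writes 64-bit float WAV; use "FLOAT" for 32-bit float instead
sf.write("instrumental_mixdown.wav", mixdown, sr, subtype="DOUBLE")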

35. Debleeding of drums in vocals by Sam Hocking

For drums, I usually try to do some kind of sidechained denoise using the demixed drum stem itself as the signal to invert with. If you 'shape' the sidechained input using spectral tools/filters/transient tools etc., you can often null more of the drum out of the vocal. My favourite tool for this is Bitwig Spectral Split, but there are several FFT spectral VSTs out there. The key is that the tool has smoothing to extend the transients in time a bit so they null more.

Difficult to audibly hear on a video, but here's a vocal stem with a lot of residue i've exaggerated in a passage without singing. I turn on a sidechain bass, drums and other stem to phase invert them out the vocal a bit via the spectral transient split in Bitwig. I then take a spectral noiseprint in Acon Digital of what's left and that works as a mild denoiser, but only after the inversion has done its thing. Don't take the noise print until you're happy everything else is inverting out as much as you can get it, and it's not noticeable.

36. Manual MDX23 stems mixdown issues

It can happen that after importing three stems from MDX23 or another arch into the same session, combined they sound so loud that they clip on the master fader. I'd suggest that in many cases this can be ignored, as after mixdown it will usually be fine, and better than using a limiter; it also depends on the song's loudness how much clipping even the instrumental from a single model will have:

37. Q: Why sometimes separated instrumentals have clipping?

A: “Mixture doesn't clip, but instrumental is clipping.

This is because where the instrumental is clipping in positive values, the vocals are in negative values, and so vocals are lowering instrumental peak value when mixed together.

If you separate a song peaking at 0 with high loudness, the instrumental will probably clip because of this (and the more loudness, the more chances this clipping can happen, as waveform is brickwalled toward boundaries values). It's the laws of physics, as that's because of these laws that audio phase/polarity inversion works.

That's why Demucs is using the "clamp" thing, or can also lower the volume of the separated stem to avoid that clipping.

- Most of the time, lowering your input by 3dB solves that issue.  

- Saving your audio to float32 can be a solution, as "clipped" audio data is not lost in this case” (jarredou)

So theoretically in 32-bit float, the volume can be decreased after separation and still nothing is lost, and clipping should be fixed.
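A minimal sketch of both workarounds mentioned above - giving the input some headroom before separation, and turning a float32 stem down afterwards without losing anything (file names are placeholders):

import numpy as np
import soundfile as sf

# 1) give the separation some headroom: lower the input by 3 dB
mix, sr = sf.read("loud_song.wav")
sf.write("loud_song_-3db.wav", mix * 10 ** (-3 / 20), sr)

# 2) with a float32 stem, you can turn it down after separation with nothing lost
inst, sr = sf.read("instrumental_float32_output.wav", dtype="float32")
peak = np.max(np.abs(inst))
if peak > 1.0:
    inst = inst / peak                               # now it fits back under 0 dBFS
sf.write("instrumental_no_clip.wav", inst, sr, subtype="FLOAT")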

38. Separated audio using MDX-Net arch has noise when mixture has no audio and is silent

Use denoise standard (or denoise model) in Options>Choose Advanced Menu>Advanced MDX-Net Options>Denoise output

39. MDX23C/BS-Roformer models ringing issue

“It was reported that maybe DC offset can amplify it. Fixing it with RX before separation was said to alleviate the issue.” See the screenshot for how to do it;

“don't forget to use "mix" pasting mode” - jarredou

It serves to alleviate the issue of horizontal lines at specific frequencies across the whole track, caused most likely by band-splitting neural network artifacts. The problem is presented above.

Q: Mine is 0.047% for the DC offset, so I would just do 0.047 or 0.04

A: “0.047% is kind of normal value, it's even a great one. No need to fix that.

I don't know at what value it could become problematic for source separation models.

On some raw instrument recordings, I have seen 20%~30% DC offset sometimes, which can become a real issue for mixing then, as it's reducing headroom” - jarredou
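If you don't have RX at hand, removing DC offset is essentially subtracting the per-channel mean; a minimal sketch (file names are placeholders, and whether this matches RX's DC offset filter exactly is not verified):

import numpy as np
import soundfile as sf

data, sr = sf.read("song.wav")
offset = data.mean(axis=0)                           # per-channel DC offset
print("DC offset (% of full scale):", 100 * offset)
sf.write("song_dc_fixed.wav", data - offset, sr)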

40. Ensemble of pitch shifted results

After you follow the point 4, so “you change sample rate before each separation and restore it after for each, then ensemble them all”

“on drums it was really working great: where sometimes you have a sudden muffled snare because something else masked it, the SRS ensemble [IIRC in MDX23 and KaraFan] was helping a lot with that, making separation more coherent across the track.”

41. A5 method for clean separations

Consider the fake stereo trick first from point 11, separate with BS-Roformer 1296, clean the residues in vocals manually, put the vocals back into the mixture (i.e. perform a mixdown to have a mixture again), and then separate this mixture with demucs_ft.

42. Using surround versions of songs

Sometimes you can get vocals separated more easily from the center channel of a surround version of the song. You might also get different instrumental separations from such versions, with the possibility of manipulating the volume of specific channels before mixdown to a 2.0 file. The mixdown might be necessary anyway, because otherwise you might run into errors when trying to separate a 5.1 file or one with more channels.

Visit this section for more.

43. Matchering as substitute of ensemble (UVR>Audio Tools)

If the result of some separation is too noisy, but it preserved the mood and clarity of the instrumental much better than some cleaner but muddy result, you can use that noisy result as the reference for the muddier target file. E.g. voc_ft used as reference for a GSEP 2-stem instrumental output.
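For reference, the standalone matchering Python package exposes the same target/reference idea; a minimal sketch with placeholder file names (whether UVR's Matchering tool behaves identically under the hood is not verified):

import matchering as mg

mg.process(
    target="muddy_but_clean_instrumental.wav",       # the muddier result you want to improve
    reference="noisy_but_vivid_instrumental.wav",    # e.g. a voc_ft-based result used as reference
    results=[mg.pcm24("matched_instrumental.wav")],
)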

44. Retry separation 2–3 times

At least for MDX23C models, it happened to someone that every separation made in UVR differed in terms of muddiness and residues, and they got a satisfactory result after the second or third attempt. Consider turning on Test Mode in UVR, so a few-digit number will be added to the output file name and results won't be overwritten.

45. Use Dolby Atmos/360 Reality Audio/5.1 version of the song

Multichannel mixes can give better results for separation. For more read

Be aware that center may contain not only vocal, but also some effects.

Consider separating every channel separately, or one pair of channels at the time (rear, front, center, sides separately) or only separate center channel separately and all the rest separately.

______________________

Current SDR leaderboard:

https://mvsep.com/quality_checker/leaderboard2.php?&sort=instrum

(some models/AIs/methods are not public or are available only on MVSEP; all the others you will find in UVR's Download Center - some only after using the VIP code - or somewhere in this doc if they're public)

An older dataset with more of the older models; a bit less reliable, and no longer updated with the results of new models:

https://mvsep.com/quality_checker/leaderboard.php?sort=insrum

The biggest SDR doesn't automatically mean that your result will be the best for your song and your use case (inst/voc/stem). Read the list of all the best models and methods, and experiment.

More settings explanation

Leaving both shifts and overlap at default vs shifts 10 decreases SDR by only 0.01 in an ensemble, but processing is much faster - 1.7x for each shift. Also, 0.75 overlap increases SDR, at least for a single model, even when shift is set to 1.

 

It takes around 1 hour 36 minutes on a GTX 1080 Ti for 100 1-minute files.

“And 18 hours on i5-2410M @2.8 for 5:04 track.

Rating 1 Ensemble on a 7-min song to compare.

Time elapsed:

1080Ti = 5m45s = 345s = 100%

4070Ti = 4m49s = 289s = 83.8%

4070Ti = ~16% faster

1080Ti = ~€250 (2nd hand)

4070Ti = €909 (new)

Conclusion: for every 1% gain in performance, u pay €41 extra (€659 extra in total).” Bas

Get VIP models (optional donation)

https://www.buymeacoffee.com/uvr5/vip-model-download-instructions

If you still see some missing models in UVR5 GUI, which are mentioned in this document, get them from download center (or here, expansion pack) and click refresh in model list if you don't see some models.

- (old) For specific songs, other ensemble configurations can give better results.

E.g. check Kim model + Inst 3 Max avg/avg (Max Spec/Min Spec as the last resort).

"Since the SDR [on MVSEP] is flawed from the get-go due to the dataset being used isn't really music, but sample-based, don't get your hopes up too much." (it's about the previous synth dataset in particular).

But it generally reflects the differences between models to a greater extent, e.g. as used in the Demixing Challenge 2021, so it's not totally bad, and the multisong dataset might be even better - just be aware that different settings can give you better results for your particular song than the average best combination of models on the chart.

About SDR evaluation on MVSEP and how important factor is that to the final result -

It still depends on the specific song which bag of models, or which specific model, will come out best in a given scenario. Going by the SDR of at least the multisong dataset can be misleading, as the metric doesn't really reflect the differences between HQ_3 and the MDX23C fullband model when it comes to bleeding in instrumentals, which occurs in lots of contemporary songs. The bleeding issue doesn't always occur, though, and then HQ_3 results can be more muffled; in that case the SDR metric matches human listening better, and MDX23C models get the better score.
“The thing is that SDR evaluates at the same time how "full" the stem separation is and how much bleed there is in the separated stem. You can't know, only based on SDR score, which of "fullness" or "bleedless" is impacting the score the more” jarredou

Also, according to some SDR evaluations conducted by Bas, it's not true that permanent bleeding has way more impact on SDR than bursts of bleed here and there.

Still, in some scenarios SDR metric of multisong dataset on MVSEP can be a safe approach, giving you some reassurance that the result in a strict test scenario will be at least decent in some respects, although you can (or even should when some instruments are missing) still experiment trying to get a better result, but it doesn't have to be reflected in SDR. To sum up, SDR evaluation is only kind of averaging toward a specific dataset of songs. For example, if you could measure SDR for a specific song by its official, perfectly inverting instrumental, then it may not get the best result by the settings of the best ensemble combination measured by SDR for the time being. Suggesting by SDR means there’s just higher chance to hit a good result in a certain spectrum of sonic changes - it’s a good starting point to experiment further.

Judging by the 9.7 NET 1 models, the MVSEP synth dataset usually gives ~0.7 higher scores than the Demixing Challenge 2021 leaderboard.

“A calculation by a computer isn't a human ear”.

Another way to at least sonically evaluate a model/ensemble, is to test it on a set of AI killing tracks which tend to have specific issues after separation with most if not all models, and to see how better or worse it got. Childish Gambino – Algorhythm is a good starting point to chase differences in vocal bleeding in instrumentals among various models.

How the ensemble in UVR works

"Max takes the highest values between each separation to create the new one (fuller sounding, more bleed).

Min takes the lowest values between each separation to create the new one (filtered sounding, less bleed).

Avg is the average of each separation."

“[E.g.] HQ 1 would be better if ensemble algorithm worked how I thought it did.

It was explained to me that [the ensemble algorithm] tries to find common frequencies across all the outputs and combines them into the result, which to me doesn't actually seem to happen when HQ1 manages to bring vocals to the mix in an 8-model ensemble. How is it not like "okay A, those are vocals, and B, you're the only model bringing those frequencies to me trying to imply that they are not vocals" and discard them? I mean I am running max/max, but I swear all avg/avg and min/min do is lower the volumes [see ensemble in DAW]. It's hard to know without days of testing”

“If you try avg/avg, the instrumental result will get muddier than with max/max. But with some songs, if you include Kim vocal 1, you will get vocal residue in the result.”

Other ensembles for UVR5

Best newer ensembles on the list at the top of the doc. Older configurations follow after the listed hidden results below.

For reference, read SDR evaluation chart (UVR ensembles will appear later in the chart).

Be aware that some of the results on the chart above/at the top are not from UVR5, or use various forks or methods to achieve better results, and might not be public / still WIP, e.g. the following:

Hidden leaderboard results (all SDR results provided for instrumentals):

- Bas’ unreleased fullband vocal model epoch 299 + voc_ft - SDR 16.32

- this older viperx unreleased custom-weights code (a newer one is up already); besides, the “instrumental vX” entries are his (it rather utilizes public models with his own non-public weighted inference, and he has been gatekeeping it ever since the MDX23 results were published).

BTW, ebright is probably the 2nd place from MDX23; at least the result appeared around the same time as ByteDance's. The 2nd place team decided not to publish their work.

- 32-bit result of original dataset uploaded as output, opposed to previous 16-bit.

- ByteDance v0.2 - inst. SDR 17.26; now it's outperformed by v0.3 at 17.28, now called 1.0,

-"MSS" - is probably ByteDance 2.0, not multi source stable diffusion, as BD's test files which were published were starting with MSS name before, but the first doesn't necessarily contradict the latter, although they said to use novel arch - SDR 18.13, and probably another one by ByteDance - SDR 18.75, let's call it 2.1, but seeing inconsistent vocal result vs previous one here, we have some suspicions that the result was manipulated at least for vocals (or stems were given from different model).

- Ripple app/SAMI-Bytedance on the chart is 16.59, also input files weren't lossless.

- BS-Roformer results by viperx posted in Training

____

Some of these models in the download center are visible after using the VIP code.

The best ensembles for UVR by SDR (old):

(some newer/better ones than these can be at the top of the doc)

For 28.07.23

Kim Vocal 2 + MDX23C_D1581 + Inst HQ3 + Voc FT | Avg/Avg

For 28.07.23 (#4563)

Kim Vocal 1 + Kim Vocal 2 + MDX23C_D1581 + Inst HQ3 + Voc FT + htdemucs_ft | Avg/Avg

For 27.07.23 (#4561)

Kim Vocal 1 + Kim Vocal 2 + Kim Inst + MDX23C_D1581 + Inst HQ3 + Voc FT + htdemucs_ft | Avg/Avg (beta UVR)

For 24.06.23 (#3842)

Kim Vocal 1 + 2 + Kim Inst + HQ3 + Voc FT + htdemucs_ft | Avg/Avg | Chunks: ON

(but for ensembles instead of single models it can score better with chunks disabled)

[Consider using the MDX23C_D1581 vocal model above as well, if ensemble in this arch works correctly; if not, perform a manual ensemble - not sure here]

As for the very big ensemble from older synth leaderboard (2023-04-30):
MDX-Net: 292, 496, 406, 427, Kim Vocal 1, Kim Inst + Demucs ft

Optionally, with later released models - voc_ft and Kim Vocal 2 -

It doesn't score too well SDR-wise on the newer synth dataset, since it uses older models which already have better counterparts. The synth dataset hasn't been used for evaluations in a long time.

For 13.06.23 (#3322)

Inst HQ2 + 427 + Inst Main + Kim Inst + Kim Vocal 1 + 2 + Demucs FT | Avg/Avg | Chunks Batch | Spectral inversion OFF

Most probably you can safely replace Inst HQ2 with HQ3 (better SDR) getting a slightly better SDR in ensemble (it’s just not tested in ensemble yet).

But be aware that “The moment you introduce Instrumental models, there will be a bit of residue in the vocal output.

However, the SDR scores higher.

I'd say go with Vocal models only, if you care about your vocal output.”

The same is vice versa for instrumentals.

- Older ensemble configurations or custom settings with lower SDR

(but might be useful for some specific songs or genres if further info is given)

From public models, the best SDR on 14.04.23:

Ensemble | Kim vocal 1 + Inst HQ 2 + Main 427 + htdemucs_ft | Avg/Avg | Chunks Batch | Denoise Output ON | Spectral Inversion OFF | WAV

For instrumentals

And

Ensemble | Kim vocal 1 + Inst 3 + Inst HQ 2 + Inst Main + htdemucs_ft | Avg/Avg | Chunks Batch | Denoise Output ON | Spectral Inversion OFF | WAV

For vocals

As of 01.01.23 the best SDR for vocals/instrumentals has:

-UVR-MDX-NET INST MAIN + UVR-MDX-NET Inst 3` + `kim vocal model fine tuned (old)` + `Demucs: v4 | htdemucs_ft - Shifts: 2 - Ensemble Algorithm: Avg/Avg`, chunk margin: 44100 (better SDR compared to 22050), denoise output on (-||- off), spectral inversion off (-||- on)

- MDX-Net: Kim vocal model fine-tuned (old) + UVR-MDX-NET_Main_427 + Demucs: v4 | htdemucs_ft - Ensemble Algorithm: Avg/Avg, Volume Compensation: Auto

(it sets `1.035` - the best for Kim (old) model vs other options)

Shifts: 10 - Overlap: 0.25

- a bit worse ensemble settings than both ensemble settings above SDR-wise:

UVR-MDX-NET Inst 3 (464) and “UVR-MDX-NET_Main_438” vocal model (main) and htdemucs_ft - Ensemble Algorithm: Average/Average

- Also good combo (for instrumentals, vocals in half of the cases):

MDX-Net: UVR-MDX-NET Inst Main

VR Arc: 7_HP2_UVR

Demucs: v4 | htdemucs_ft

Max Spec/Max Spec

- UVR-MDX-NET Inst 3 as a main model and 7_HP2-UVR as a secondary with the scale set to 75%

(Anjok 21.12.22: "Personally, I found that using [it] produces the cleanest instrumental.")

“It means the final track will be 25% hp2 model and 75% inst 3 (similar to ensemble feature, but you have more control over how strong you want the secondary model to be)”

- MDX-NET inst3 model (464) with secondary model 9_HP2_UVR 71% (hendrysetiadi: seems to get the best results with e.g. disco songs).

- Inst Main + 427 + Net 1 (CyPha-SaRin: was a pretty good combo. One big model, one medium, one small, pretty decent results across the board. If a song going to have problematic parts, it's going to have regardless of what combo you picked, it seems.)

- kim vocal 1 + instr 3 + full 403 + inst HQ 1 + full 292 + instr main with MAX/MAX (hendrysetiadi: i think that's the best combination of ensemble that i found)

- For Rock/Metal - The MDX-Net/VR Architecture ensemble with the Noise Reduction set between 5-10 (depending on the track) and Aggression to 10.

- For Pop - The MDX-Net/VR Architecture ensemble with the Noise Reduction set between 0-4 and Aggression to 10. (Anjok, 13.05.22)

- Here is another ensemble that I have tried myself "VR Arc: 1_HP-UVR x MDX-Net:  Kim Vocal 1 x MDX-Net: UVR-MDX-NET: Inst HQ 1 x MDX-Net: UVR-MDX-NET: Inst HQ 2" All with the average/average ensemble (Mikey/K-Pop Filters)

- Inst HQ 1 & Main 427 are best for India

-VR: 7_HP2-UVR, MDX: Kim vocal 1, Inst 3, Inst Main, Main, htdemucs_ft

Max/Max, main pair: vocals/instrumental

"Instrumentals sound so good using these settings also. I can’t believe this is possible. What an amazing software. Thank you to whoever made this." StepsFan

- I got an ensemble that works well for loud and crazy tracks (this instance it's dariacore lol) - by knock:

Models: Inst HQ 3, Main, Voc FT

Ensemble Algorithm: Avg/Avg

MDX-Net settings:

Vol Comp: Auto

Segment Size: 4096 (you can go up to 6144 if you want to wait longer, 4096 has seemed to be perfect for me)

Overlap: Default (which I believe is 0.5)

Shift Conversion Pitch: -6 (semitones)

Match Freq Cut-off: Off

Denoise Output: Yes

Spectral Inversion: No

Mateus Contini's methods
 #1 (old)

-“TIP! For busy songs: I was testing some ensembles trying to get Instrumental Stems with less volume variation (muddy), preserving guitar solos, pads the most and I had great results doing the following, for anyone interested:

Ensemble (Demucs + 5_HP-Karaoke with Max for Instrumental stem) - The result will be the Instruments + Backing Vocals and this preserves most of the guitar solos, pads and things that MDX struggles.

Instrumental Stem Output > Demucs to remove the Backing Vocals from the track - This pass will remove the rest of the Vocals. In some cases will be some minor leftovers that you can clean later with other methods.

I find the results better than Demucs alone/ MDX models or other ensembles for what I'm looking for. I'm not evaluating noise, but fuller instrumental Stems, trying to preserve most of it and also the cost (time) to do it.

Since I'm not interested, for this case, in doing manual work song by song and just use these stems to sing over it, I find the results great.” - Mateus Contini

Q: Do you mean that you process Demucs 2 times? Once for ensemble with VR then the result was processed using Demucs again?

A: You can add other models with the ensemble, like Demucs, VR_5-Karaoke and HQ3 for an extra, before processing again with Demucs.

Also, this method is very good for leaving good backing vocals in the instrumentals (only the ensemble result). I find extracting BV from the Vocal Stem to be less effective, giving you less material (compared to joining the BV with the instrumentals later).

M.Contini Method #2 (newer)

Well, I tried to improve the results of the method I posted, so here it is, for **anyone interested in getting fuller Instrumentals**, with a bit of bleed in some songs, yielding great results overall.

I'm doing this in the UVR GUI. The idea behind it is to scoop out the vocals little by little, so the instrumental is preserved the most. The process requires 3 extractions. Here are the Ensembles:

1. pass Ensemble: 5_HP-Karaoke-UVR + Inst HQ3 + htdemucs - Min/ Max

                     - If the song doesn't have BV, this will already give you good Instrumental Stem results. If you have Vocals bleeding into the Instr, continue to pass 2, but sometimes jumping straight to pass3 will produce better results.

                     - If the song has BV, this keeps a fuller **Instrumental Stem with BV** in it. If you want to keep the BV, but there are some Main Vocals bleeding through the Instr, continue to pass 2.

2. pass Ensemble: Kim Vocal 2 + Inst HQ3 + MDX Karaoke 2 - Min/Max

                     - This pass will try to preserve the BV in the Instrumental Stem while removing Main Vocal bleed. You can stop here if you want the **Instrumental Stem with BV**

3. pass Ensemble: Kim Inst + Inst HQ3 + htdemucs - Min/Max

                     - This pass will try to remove BV from the instrumental Stem and other Main Vocal Bleed while keep the Instrumental fuller.

The idea behind it, is to have less volume variation where the vocals are extracted, leaving the Instrumental Stem less muddy. Since the extraction of the vocals is done little by little using the Min/Max, the Models will not be so aggressive. This is a great starting point if you want to improve further in a DAW or just sing over it. The Con is that, sometimes, the track will have tiny bleeds. If you try this method, please post the results here.

#3

- Try this ensemble: 9_HP (10 aggression) + HQ3 (chunks on) + demucs_ft, Min/Max

- it preserves most of the instruments.

M. Contini method #4 (new)

Another ensemble suggestion for good instrumentals with minimized vocal bleeding and a bit of noise in some cases:

Ensemble: 9_HP + HQ3 + Demucs_6s (secondary full_292) - Algorithm [min/max]

Configs:

9_HP Window[512], Agress[10], TTA[on], Post[off], High-End [off])

HQ3 Chunks[on] [auto], Denoise[on], Spectral[off]

Demucs_6s Chunks[on] [auto], Split[off], Combine[off], Spectral[off], Mixer[off], Secondary Model - Vocals/Instr [MDX-Inst_full_292] [50%]

Why Demucs_6s and not _ft - I compared them on some songs and 6s has less vocal bleed in the instrumental track.

Description:

The idea is to take the good bits of the models using only one from each Group (VR, MDX and Demucs). The secondary model on Demucs is to minimize some vocal bleeding with sustained notes that was happening in some songs.

Comparing the results from multiple models I find that Chunks enabled on MDX and Demucs removes some bleeding vocals from the Instrumental track and gives better results overall. This ensemble in my machine completes in about 5 min per song (GTX 1070 8GB, 16GB ram, Ryzen 1600x).

____________

- “The best combo is the HQ instrument models ensemble average/average including HQ3/Main/Main Inst/Kim1/2/Kim Inst/demucs3 (mdx_extra)/htdemucs_ft/htdemucs_6s” (MohammedMehdiTBER)

"Wow, I tried out the ensemble with all those models you said, and it actually sounds pretty good. There's a definitely more vocal bleed but in a saturated/detailed distortion type of way. I can't tell which one I like better, the ensemble sounds more full and has more detailed frequencies, but the vocal bleed is a lot more obvious. The HQ_3 by itself has almost no vocal bleed but sounds more thin and watery."

- Kim instr + mdx net instr3 + HQ2 + HQ3 + voc ft max/max

The result is so amazing… Now can hear more detail on instrumental result where before I cannot hear a bit of music parts. (Henry)

- "I am very much enjoying making an ensemble of HQ3 and MDX23C_D1581, then inverting the vocals into the instrumental and running that through hq3 with 0.5 overlap" (Rosé)

__________________________________

Ensembles for specific genres

By Bas Curtiz

Evaluation based on public models available at 23.04.23 and multisong dataset on MVSEP. The list might be outdated, as it doesn’t take all the current models into account.

SDR sorted by genre

"If we remove **Kim vocal 2**, so only those that are available right now will be taken into account:

- Ensemble Rating 1 scores highest on average overall

[Probably this one https://mvsep.com/quality_checker/entry/974

At least it was the best for the given date.

But now we have ensembles which score better.]

- Kim vocal 1 is best for Rock

- Kim vocal 1 & Ensemble Rating 1 are best for RnB/Latin/Soul/Funk

- MDX'23 Best Model is best for Pop

- Main 427 & MDX'23 Best Model are best for Other

- Main 427 & MDX'23 Best Model are best for Blues/Country

- Main 427 & Ensemble Rating 1 are best for Jazz

- Main 427 & Ensemble Rating 1 are best for Acoustic genres

- Ensemble Rating 1 is best for Beats

- Ensemble Rating 1 is best for Hip Hop

- Ensemble Rating 1 is best for House

Sheet where **Kim vocal 2** is removed:

https://docs.google.com/spreadsheets/d/1ceXA7XKmECwnsQvs7a0S81XZOUokIXUN8ndsUDcYRcc/edit?usp=sharing"

Further single MDX-UVR models descriptions

E.g. used for the ensembles above. If a model has a cutoff, using an ensemble with models/AIs without a cutoff, like Demucs 2-4, will fill the gap above it. But a single model is still a good alternative for people without decent Nvidia GPUs or who are forced to use Colab.

UVR-MDX models naming scheme

All models called "main" are vocal models.

All models called "inst" and "inst main" are instrumentals.

NET-X [9.X/9.XXX in Colab] are vocal models

Kim vocal 1/2 (self-explanatory)

Inst main is 496

Kim other ft is Kim inst

The model labelled just ‘main’ is a vocal model, and was reported to have the same checksum as 427 and 423, but that doesn't seem to be true, as 427 and main have different SDR (427 has better SDR than main, so apparently main is 423 [CRC32: E3C998A6]).

- MDX HQ_1/2 models - excellent, vivid snares, no cutoff (22kHz), high quality, rarely worse results than the narrowband inst 1-3 models; HQ_2 might have slightly quieter snares, but can have fewer problems with removing some vocals from instrumentals

- MDX-UVR Inst 3 model (464) - 17.7kHz cutoff (the same cutoff as Inst 1, 2 and inst main, but maybe not applicable for vocals after inversion in Colab). It was the third-best single model in our SDR chart, available in a Colab update and in UVR5 GUI with the VIP models package - now available for free.

- The fourth-best single model for instrumentals is currently inst main (496, MDX 2.1), then inst 1 and inst 2.

- There was some confusion about MDX 2.1 model being vocal 438 (even 411), but it’s currently inst main.

- Beta full band MDX models without cutoff (better SDR than Demucs 4 ft)

As for SDR, the epoch scores rank as follows: 292 < 403 < 386 < (inst 1) < 338 < 382 < 309 < 337 < 450 (first final, HQ_1) < 498 (HQ_2) < (inst 3) < (Kim inst) < HQ_3

Epochs 292, 403 and 450 are also in Colab (and the latter in UVR5)

- (currently the best for vocals, though it's a custom ensemble rather than a single model) MDX23 on MVSEP beta,

and in UVR5 - the Kim vocal model -

It's a further-trained MDX-UVR vocal model from their last epoch (probably UVR-MDX-NET Main). It's based on a higher n_fft scale, which uses more resources.

It doesn't always give results for instrumentals as good as its SDR may suggest, and more people share that opinion (both Colab and UVR users, so it's not due to the lack of cutoff in Colab).

In UVR5, generally use vocal models for the best vocal result, and instrumental models (or possibly 4 stem Demucs 4 ft) for the best instrumental result.

"[Kim_Vocal_1] is an older model (November), than Kim uploaded at 2022-12-04 to" https://mvsep.com/quality_checker/leaderboard.php?sort=insrum&ensemble=0

(steps below no longer necessary, the model is added to GUI and these are the same models)

You can download her (so-called “old”) model from here (it still gets better results for vocals than inst 3 and main): https://drive.google.com/drive/folders/1exdP1CkpYHUuKsaz-gApS-0O1EtB0S82?usp=sharing

When you copy/paste the model into `C:\Users\YOURUSERNAME\AppData\Local\Programs\Ultimate Vocal Remover\models\MDX_Net_Models`, it asks you to configure it - hit Yes.

Then change `n_fft` to `7680`."

For instrumentals, it gets worse results, frequently with more bleeding. UVR manually applies a cutoff above the training frequency to instrumentals after inversion, to avoid some noise and possibly bleeding. The Colab version of the Kim model doesn't have that cutoff, so instrumentals resulting from inversion reach a maximum of 22kHz (while UVR applies the cutoff to prevent some noise).

- (generally outperformed by models above) MDX-UVR 9.7 vocal model a.k.a. UVR-MDX-NET 1 (instrumental is done by inversion, older model) - available in Google Colab/mvsep (here 24 bit for instrumentals)/UVR5 GUI.

Compared to 9.682 NET 2 model, it might have better results on vocals, where 9.682 NET might have better results for instrumentals, but everything might still depend on a song. Generally, 9.7 model got better SDR both in Sony Demixing Challenge and on MVSEP. Generally, 438 vocal, or 464 inst_3 should give better results for instrumentals. 427 vocal model tends to give worse results for instrumentals than even this older 9.7/NET1 model.

More about MDX-UVR models -

If they don't have more vocal bleeding than GSEP, they're better at filtering the vocal leftovers which GSEP sometimes tends to leave (scratches, additional vocal-like sounds, and so-called “cuts” [short, multiple lo-fi vocal parts] which GSEP doesn't catch, but MDX-UVR does, probably due to a bigger dataset). But using single instrumental MDX-UVR models instead of an ensemble will result in a cutoff at the training frequency (e.g. 17.7kHz or lower).

Also, MDX-UVR, like GSEP, may not have the weird constant "fuzz" which VR models tend to leave as vocal leftovers (but in other cases, the 9.7 model can leave very audible vocal residues, so test everything on this list until you get the best result).

The 9.7 model (or currently newer models) is also good for cleaning inverts (e.g. when having lossy a cappella and regular song).

If you've tested all the alternatives and you stick with MDX-UVR 9.7 for some song, and it doesn't have (too much) bleeding, you can fine-tune the results by trying the two 9.6 models to check whether either is better for you than 9.7 in this specific case (they're available at least in the HV Colab and UVR5 GUI).

Newer MDX-UVR 423 vocal model usually provides more audible leftovers than 9.7 model.

To experiment further with MDX-UVR results when you're stuck with Colab, you can enable the Demucs 2 model in the Colab to "ensemble" it with the MDX-UVR model (although metrics say it slightly decreases SDR, I like what it does in the high end - at some point it was suspected the SDR-decreasing problems may come from enabling chunking).

________________

- Demucs 4 (htdemucs_ft) - no cutoff; it's 4 stem, but you can perform a mixdown without vocals in Audacity to get an instrumental. Sometimes it may give you a louder snare than GSEP, but usually muffled shakers compared to GSEP. Also, it will give you more vocal residues than GSEP and MDX-UVR 464 (Inst 3). The 6 stem model gives more vocal residues than the 4 stem model (ft is the best one, and it also outperforms the mdx_extra model [which is better than the quantized mdx_extra_q]), but in some cases it might be worth checking the old mdx_extra model as well.

- (outperformed in many cases when used as single models)

VR-architecture models (Colab, CLI or UVR5 GUI) sometimes provide cleaner and less muddy results for instrumentals than single narrowband MDX models or even GSEP, but only if they do not output too much vocal bleeding (which happens frequently for VR models - especially for heavily processed vocals in contemporary music); bleeding also depends on the specific model:

- E.g. the 500m_1 (9_HP2-UVR) and MSB2 (7_HP2-UVR) models are the most aggressive at filtering vocals among VR models, but other, less aggressive VR models may provide better sounding, less spoiled instrumentals (only if it's not paid for with worse vocal bleeding; BTW I haven't heard the newest 2022 VR model yet [available at least in UVR5 GUI, maybe for Patreons, not sure]).

All parameters and settings corresponding to specific models you’ll find in “VR architecture models settings” section.

- VR models-only ensemble settings - if your track doesn’t have too many problems with bleeding using VR-models above, to fine-tune the results achieved with VR, and to get rid of some mud, and e.g. get better sounding drums in the mix, I generally recommend VR-architecture models ensemble with settings I described in the linked section above.

I'd say it's pretty universal, though the most time/resource-consuming method.

Also, these ensemble settings from the UVR HV Colab seem to do a decent job at extracting vocals in some cases when the above solutions failed (e.g. clap leftovers).

Check also demucs_6s with 9 HP UVR and gsep in min-specs mode

Also, UVR5 GUI has rewritten MDX, so it can use their Demucs-UVR models from Demucs 3 (I think mvsep doesn't provide ensembling for any MDX models):

- (generally outperformed by MDX-UVR 4xx models) Demucs-UVR models - models 1 and 2, besides the "bag", are worth trying out (mainly 1) on their own if the results achieved with the above methods still have too much bleeding - better results than e.g. bare MDX-UVR 9.7 or VR models, or even GSEP, in some specific cases (available on mvsep and UVR5 GUI). They're Demucs 3, 2 stem, better-trained models by the UVR team. No cutoff - 22kHz.

_______________________________

- As for extracting -

Karaoke / Backing Vocals

(more up-to date, but less descriptive list at the top)

check the MDX-UVR Karaoke 2 model (available on MVSEP, UVR 5 GUI)

TL;DR - "Usually MDX B Karaoke has really good lead vocals and UVR Karaoke has really good backing vox”

"There are 3 good karaoke models (the ones I'm referring to are on mvsep.com [they seem to be no longer available there]). "MDX B (Karaoke)" seems to be the best at getting lead vocals from karaoke while "karokee_4band_v2_sn" (UVR) and "HP_KAROKEE-MSB2-3BAND-3090" (UVR) seem to be best for backing vocals. I recommend using a mix of the 3 to get as many layers as possible, and then use Melodyne to extract layers as best as possible. Then combine the filter results and Melodyne and you should have smthn that sounds pretty good" karokee_4band_v2_sn model might be not compatible with Colab (check mvsep or UVR5 GUI)

- Demix Pro may do a better job in B.V. than models on x-minus.

Even than the new model on x-minus since 01.02.23, but might be worth trying out on some songs (the problem is probably bound to MDX architecture itself).

"MDX in its pure form is too aggressive and removes a lot of backing vocals. However, if we apply min_mag_k processing, the results become closer to Demix Pro"

- Medley Vox

(installation tutorial)

For separating different voices, including harmonies or backing vocals check out this vocal separator, the demos sound quite good and Cyrus model has pretty similar results.

It's for already separated or original acapellas. Sometimes it gives better results than BVE models. The output sample rate is 24kHz, but it can be upscaled well with AudioSR.

Org. repository

https://github.com/jeonchangbin49/medleyvox

Old info:

https://media.discordapp.net/attachments/900904142669754399/1050444866464784384/Screenshot_81.jpg

How to get vocals stems by using specific models:

Song -> vocal model -> Voc & Inst

Vocal model -> Karaoke model -> Lead_Voc & Backing_Voc

Lead_Voc + Inst = Lead_Inst
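In practice, the "+" in the last step is just a sample-wise sum of two aligned WAV files. A minimal sketch with soundfile (file names are placeholders for your own model outputs; both stems must come from the same source so they're aligned and equally long):

import soundfile as sf

# Placeholder file names - use your own model outputs
lead_voc, sr = sf.read("lead_vocals.wav")
inst, sr2 = sf.read("instrumental.wav")
assert sr == sr2 and lead_voc.shape == inst.shape  # stems must match in rate and length

# Lead_Voc + Inst = Lead_Inst: a plain sample-wise sum, like a 0 dB mixdown in a DAW
sf.write("lead_inst.wav", lead_voc + inst, sr)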

- How to get backing vocals using x-minus

https://x-minus.pro/page/bv-isolation?locale=en_US

- If you have an x-minus subscription, you can use chain mode for Karaoke, as it currently gives the best results

How does it probably work under the hood?

"On sitting down and reading  https://discord.com/channels/708579735583588363/900904142669754399/1071599186350440540

It's a multistep process where it mixes a little bit from MDX's split vocals and instruments.

Then passes that mixture through the UVR v2 karaoke/backing vocals model.

Then with those results, it inverts the separated lead vocal, and adds it to the instrumental result"
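Based on those descriptions (see also aufr33's own explanation in the updates section further down), the chain boils down to sequential model passes plus simple mixing and inversion. A rough, hypothetical sketch of the arithmetic only - separate_mdx and separate_bv are placeholders standing in for the actual models, and the real mixing amount used by x-minus is unknown:

# song: numpy array of the full mix (e.g. loaded with soundfile)
# Hypothetical helpers standing in for the actual models:
#   separate_mdx(song, sr) -> (vocals, instrumental)
#   separate_bv(mix, sr)   -> (lead_vocal, backing_vocals)
def chain_karaoke(song, sr, separate_mdx, separate_bv, inst_amount=0.1):
    vocals, inst = separate_mdx(song, sr)           # 1. MDX split into vocals / instrumental
    mixture = vocals + inst_amount * inst           # 2. vocals plus "a little bit" of instruments
    lead_voc, _backing = separate_bv(mixture, sr)   # 3. UVR (b.v.) v2 model cleans the lead vocal
    return song - lead_voc                          # 4. invert the cleaned lead against the input mix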

- As for 4 stem separation, check GSEP or Demucs 4 (now check better MDX23)

(other stem is usually the best in GSEP, bass in Demucs 4, rest depends also on a song, and as for drums, if you further process them in DAW using plugins, then Demucs 4 is usually better as it's lossless and supports up to 32-bit float output).

Demucs 4 also has an experimental 6 stem model: guitar (can give good results) and piano (it's bad and worse than GSEP).

- As for free electric guitar and piano stems, currently GSEP and MVSEP models are the best, but paid Audioshake provides better results than GSEP. Also in GSEP "when the guitar model works (and it grabs the electric), the remaining 'other' stem often is a great way to hear acoustic guitar layers that are otherwise hidden.". LALAL.AI also has piano model and is “drastically” better than Demucs.

- From paid solutions for separating drums' sections, there are FactorSynth, UnmixingStation, or free Drumsep.

- As for specific sounds separation, check Zero Shot Audio.

______

Cutoffs examination with spectrograms for various models and AIs, available in UVR5 GUI, along with examined times needed for each model to process on CPU or GPU (1700x/1080 Ti) by Bas Curtiz (cutoffs examination not applicable for MDX Colab where there is none unlike in UVR [it's to prevent noise]):
https://docs.google.com/spreadsheets/d/1R_pOURv8z9GmVkCt-x1wwApgAnplM9SHiPO_ViHWl1Q/edit#gid=23473506

Spreadsheet of songs that use vocals as a melody, with snippets of how they separate on various models/AIs

http://vocalisolationtesting.x10.mx/

___

Below you’ll find more details, links, Colabs, all tools/AIs listed, more information about specific models as alternatives to experiment further (mostly MDX-UVR instrumental and vocal models available in UVR5 GUI and https://x-minus.pro/ai). I also provide some technicalities/troubleshooting everywhere when necessary.

_________________________________________________________________________

Table of content

(click on an entry to be moved to a specific section; available in document outline too)

Last updates and news        1

General reading advice        30

Instrumental, vocal, stems separation & mastering guide

The best models

for specific stems

for instrumentals        31

for vocals        34

How to check whether a model in UVR5 GUI is vocal or instrumental?        39

for karaoke        39

for 4-6 stems (drums, bass, others, vocals + opt. guitar, piano):        43

SFX        45

De-reverb        46

Vinyl noise/white noise (or simply noise)        50

Mixing and mastering        51

Audio upscalers list        52

More descriptions of models        53

      MDX settings in UVR5 explained        57

Tips to enhance separation        63

Other ensembles in UVR5 - list        71

50 models sorted by SDR        87

Separating speakers in recording        93

General section of UVR5 GUI (MDX-Net, VR, Demucs 2-4, MDX23 with UVR team models)        95

GUI FAQ & troubleshooting        96

Chunks may alter separation results        99

Q: Why I shouldn’t use more than 4-5 models for UVR ensemble (in most cases)        100

(older) UVR & x-minus.pro updates        101

MVSEP models from UVR5 GUI        107

Manual ensemble Colab for various AI/models        108

Joining frequencies from two models        109

DAW ensemble        110

Manual ensemble in UVR5 GUI of single models from e.g. Colabs        110

UVR’s VR architecture models (settings and recommendations)        110

            VR Colab by HV        110

VR settings        111

VR models settings and list        113

VR ensemble settings        116

VR Colab troubleshooting        123

First vocal models trained by UVR for MDX-Net arch:        125

(the old) Google Colab by HV        126

Upd. by KoD & DtN & Crusty Crab & jarredou, HV (12.06.23)        126

Other archs general section

Demucs 3        134

Demucs 4 (+ Colab) (4, 6 stem)        135

Gsep (2, 4, 5, 6 stem, karaoke)        139

Dango.ai        144

MDX23 by ZFTurbo /w jarredou fork (2, 4 stems)        145

KaraFan by Captain FLAM (2 stems)        149

Ripple/SAMI-Bytedance/Volcengine/Capcut (Jianying)/BS-RoFormer (2-4 stem)        152

Single percussion instruments separation (from drums stem)        159

drumsep (free)        159

FactorSynth        160

Regroover        161

UnMixingStation        161

VirtualDJ 2023/Stems 2.0 (kick, hi-hat)        162

RipX DeepAudio (-||-) (6 stems [piano, guitar])        162

Spectralayers 10        162

USS-Bytedance (any; esp. SFX)        163

Zero Shot (any sample; esp. instruments)        164

Medley Vox (different voices)        165

About other services:        167

Spleeter        167

Izotope RX-8/9/10        167

moises.ai (3 EU/month)        167

phonicmind        167

melody.ml        167

ByteDance        167

Real-time separation

Serato        167

Stems 2.0        168

Acon Digital Remix        168

      Misc

FL Studio (Demucs)        168

Fadr.com from SongtoStems.com        168

Apple Music Sing        168

Music to MIDI transcribers/converters        169

Piano2Notes        169

Audioshake        169

Lalal.ai        170

DeMIX Pro V3        171

Hit'n'Mix RipX DeepAudio        171

Moises.ai        172

How to remove artefacts from an inverted acapella? (can be outdated)        174

Sources of FLACs for the best quality for separation process        175

Dolby Atmos ripping        184

AI mastering services        186

How to get the best quality on YouTube for your audio uploads        192

How to get the best quality from YouTube and Soundcloud - squeeze out the most from the music taken from YT for separation        193

Custom UVR models        195

Repository of other Colab notebooks        196

Google Colab troubleshooting (old)        199

Repository of stems/multitracks from music - for creating your own dataset        200

List of cloud services with a lot of space        205

AI killing tracks - difficult songs to get instrumentals        211

Training models guides        215

            Volume compensation for MDX models        229

            UVR hashes decoded by Bas Curtiz        231

Local SDR testing script        233

Best ensemble finder for a song script        233

_________________________________________________________________________

Models master list

50 models sorted by SDR

(from the public ones - so available to download and offline use)

(07.04.2023)

These are basically the top single models for now

(conventionally after these, additional vocal residues kick in, especially if not a vocal model)

Based on Multisong dataset evaluation on MVSEP chart.

model_bs_roformer_ep_317_sdr_12.9755

model_bs_roformer_ep_368_sdr_12.9628 (UVR beta)

0) MDX23C HQ (fullband a.k.a. 1648, 8K FFT)

0b) MDX23C HQ 2 (fullband)

1) voc_ft

1b) UVR-MDX-NET HQ_4 (inst)

2) MDX23C_D1581 (a.k.a. narrowband)

3) Kim Vocal 2

4) Kim Vocal 1

5) UVR-MDX-NET_Main_427 (voc)

6) UVR-MDX-NET_Main_406 (voc)

7) UVR-MDX-NET HQ3 (inst)

8) UVR-MDX-NET_Main_438 (voc)

9) UVR-MDX-NET_Main_390 (voc)

10) Kim inst (a.k.a. other)

11) UVR-MDX-NET_Main_340 (voc)

12) Inst 3 (a.k.a. 464)

13) HQ2 (inst)

(from here on, vocal models leave more vocal residues in instrumentals - they can still be handy for specific songs)

+4 pos.

9) Inst Main (496)

10) Inst 2

11) HQ1

12) HQ 337 >382>338 epoch

13) Inst 1

14) HQ 386>403>292 epoch

15) NET2>NET3>NET1>9482 (NET 3 a.k.a. 9.7)

16) htdemucs_ft (4 stem) (S 10/O 0.95)

17) hdemucs_mmi (4 stem)

18) htdemucs_6s (6 stem)

19) UVR-MDX-NET_Inst_82_beta

20) Demucs3 Model B (4 stem)

21) UVR-MDX-NET_Inst_187_beta

(dango.ai, Audioshake, Bandlab not evaluated)

Somewhere here, trash begins (excluding GSEP)

22) Moises.ai

23) DeMIX Pro 4.1.0

24) Myxt (AudioShake 128kbps)

25) UVR-MDX-NET_Inst_90_beta

26) RipX DeepRemix 6.0.3

27) kuielab_b (4 stem) (MDX Model B from 2021 MDX Challenge)

28) kuielab_a (4 stem)

29) LALAL.AI

30) GSEP (6 stem) (although it sometimes gives much better results than its SDR)

VR arch

31) 7_HP2-UVR (a.k.a. HP2-MAIN-MSB2-3BAND-3090_arch-500m)

32) 3_HP-Vocal-UVR

33) 2_HP-UVR (HP-4BAND-V2_arch-124m)

34) 9_HP2-UVR (HP2-4BAND-3090_4band_arch-500m_1)

35) 1_HP-UVR (HP_4BAND_3090_arch-124m)

36) 8_HP2-UVR (HP2-4BAND-3090_4band_arch-500m_2)

37) 14_SP-UVR-4B-44100-2 (4 band beta 2)

38)  4_HP-Vocal-UVR

39) 13_SP-UVR-4B-44100-1 (4 band beta 1)

39) 15_SP-UVR-MID-44100-1

40) 16_SP-UVR-MID-44100-2

41) 14_HP-Vocal-UVR

42) VR | MGM_LOWEND_A_v4

43) 12_SP-UVR-3B-44100

44) Demucs 2 (4 stem)

(6 other old VR models follow)

50) Spleeter 4 stems

51) Spleeter 2 stems

52) GSEP after mixdown from 4 stems separation

Only instrumental models listed

(4 stem and MDX23C models lie in all categories):

Tier 1

MDX-Net models (trained by UVR team)

0) MDX23C HQ 1648 fullband

1) MDX23C HQ 2 fullband

1b) UVR-MDX-NET HQ_4 (inst)

2) MDX23C_D1581 narrowband

7) HQ3

10) Kim inst (other)

12) Inst 3

13) HQ2

Tier 2

+4 pos.

9) Inst Main (496)

10) Inst 2

11) HQ1

12) HQ 337 >382>338 epoch

13) Inst 1

14) HQ 386>403>292 epoch

Demucs 4

16) htdemucs_ft (S 10/O 0.95)

17) hdemucs_mmi

18) htdemucs_6s

20) Demucs 3 Model B (mdx_extra)

Tier 3

(somewhere between place 9-20 might be dango.ai, Audioshake, later maybe Bandlab)

22) Moises.ai

23) DeMIX Pro 4.1.0

24) Myxt (AudioShake 128kbps)

26) RipX DeepRemix 6.0.3

27) MDX-Net Model B from 2021 MDX Challenge (kuielab_b)

28) kuielab_a

29) LALAL.AI

30) GSEP (although it sometimes gives much better results than its SDR)

Tier 4

VR arch

31) 7_HP2-UVR (a.k.a. HP2-MAIN-MSB2-3BAND-3090_arch-500m)

33) 2_HP-UVR (HP-4BAND-V2_arch-124m)

34) 9_HP2-UVR (HP2-4BAND-3090_4band_arch-500m_1)

35) 1_HP-UVR (HP_4BAND_3090_arch-124m)

36) 8_HP2-UVR (HP2-4BAND-3090_4band_arch-500m_2)

Tier 5

37) 14_SP-UVR-4B-44100-2 (4 band beta 2)

38) 13_SP-UVR-4B-44100-1 (4 band beta 1)

Tier 6

39) 15_SP-UVR-MID-44100-1

40) 16_SP-UVR-MID-44100-2

42) VR | MGM_LOWEND_A_v4

43) 12_SP-UVR-3B-44100

44) Demucs 2

(6 other old VR models follow)

Tier 7

50) Spleeter 4 stems

51) Spleeter 2 stems

52) GSEP after mixdown from 4 stems separation

Separate SDR rankings for vocals and instrumentals matter, I think, only for ensembles. In all other cases, if a model's SDR is higher for instrumentals, it will also be higher for vocals compared to the same model. Only for ensembles were the differences so small that we ended up with two separate top ensembles, one for vocals and one for instrumentals.

__________________________________

Great thanks to Anjok, Aufr33 (creators of UVR), KimberleyJSN a.k.a. Kim (model contributor and MDX support), tsurumeso (the creator of VR arch base code), BoskanDilan (creator of the old UVR GUI), IELab a.k.a Kuielab & Woosung Choi (MDX-Net arch creators), GAudio (GSEP creators), Alexandre Deffosez a.k.a. Adefossez (Demucs creator), ZFTurbo (creator of MVSEP, MDX23, and many models), jarredou (MDX23 fork, tons of support), Captain FLAM (KaraFan), Bytedance (BS-Roformer), lucidrains (for recreating the BS-Roformer from the paper), FoxyJoy (de-reverb, de-echo, denoise models) - thanks to all of these people for the best freely available AI separation technologies and models so far.

Special thanks to users of our Discord:

HV (MDX and VR Colabs creator and UVR contributor), txmutt (Demucs Colab), CyberWaifu (lots of testing, some older Colabs), KoD (first HV MDX Colab fork), becruily (tons of advice), viperx (our former heavy user, supporter and model creator), Bas Curtiz (insane amount of testing and UVR5 settings guidance, tutorials with SDR evaluating, private models creator), dca100fb1 (a.k.a dca100fb8) (VR ppr bug, finding tons of UVR bugs and models testing and feedback), CyPha-SaRin (lots of models/UVR testing), BubbleG, ᗩรρเєг, Joe, santilli, RC, Matteoki (a.k.a. Albacore Tuna, our “upscaling” guru), Syrkov, ryrycd, Mikeyyyyy/K-Kop Filters, Mr. Crusty ᶜʳᵃᵇ (our mod; compensation values finding, MDX Colab mods and testing), knock (ZF’s MDX23 fine-tuning), A5 (lots of feedback on existing models), Infisrael (MDX23 guide), Pashahlis/ai_characters (WhisperX guide and script), Sam Hocking (our most valuable pro sound engineer on the server)

 - thanks to all of these people for their knowledge, help and testing, and to everyone whose advice, quotes and material appear in this doc. This guide wouldn't have been created without you. If I forgot someone, forgive me.

__________________________________

You can support UVR team by these links:

https://www.buymeacoffee.com/uvr5/vip-model-download-instructions

and

https://boosty.to/uvr

(subscription to https://x-minus.pro/ai to process some VIP models there online)

If you see duplicated models on the list in UVR5, click refresh.

X-minus FAQ

Q: how come level 1 will be eliminated? is it possible to leave it since i use this site very little and paying ( 2.79$ ) per month is too much and anyway 360 minutes of audio per week is a lot. i do 5/ 6 per week. it is a waste of minutes.

A: If you renew your subscription several months in advance, you can use Level 1 even after removal. In addition, once your subscription Level 1 expires, you can use it for another month for free (after removing it in February).

Similarity Extractor

(OG broken)

(fixed 16.02.24)

https://colab.research.google.com/drive/1WP5IjduTcc-RRsvfaFFIhnZadRhw-8ig?usp=sharing

Don’t forget to run the cell with dependencies after mounting

"If you have two language track it'll remove the vocals, but not its adlibs"

"It works like invert but instead of mixing the inverts together, it removes the difference and leaves the ones that sound the same"

It uses a specifically trained model on 100 pairs.

Sadly, “It's like a downgrade of Audacity Vocal and Center Isolation feature” - it's muddier

Audacity can be used in browser at:

https://wavacity.com/

(Effect>Special>Vocal Reduction and isolation)

“Adobe Audition works in a similar way, but you can actually tweak a lot of settings. But the difference is pretty much non-existent, and it isn't any better for that matter. Even with Audacity, Adobe Audition, and PEEL [3D Audio Visualizer], we are still not quite there yet.

Currently, Audacity, and maybe the Waves Stereo Center plugin, have the best capabilities, but they still aren't perfect.” Vinctekan

Sadly, it turns out that all three solutions can sound worse than current models for the use case of getting rid of dubbing in movies.

It can be used with window size 768 on CPU as well. Probably the lowest supported for GPU is 272 (352 was set, and 320 is possible too), but probably it won't change here much.

One use case of the Audacity method to get lead vocals (in 2021) was obtaining e.g. main vocals from a vocal or BVE model, and processing that stem with these settings:

Audacity>Effect>Vocal reduction and isolation>

for Action, choose Isolate Center

Strength: 1.1 or 1.6

Then click OK. The effect must be applied to the vocal part. If you use center isolation, the low/high cut settings will be ignored

______

The OG Colab is broken for now.

It was fixed by adding these lines to it:

!apt-get install python3.8

!apt-get install python3.8-distutils

!apt-get install python3.8 pip

!python3.8 -m pip install librosa==0.9.1

!python3.8 -m pip install numpy==1.19.5

!python3.8 -m pip install numba==0.55.0

!python3.8 -m pip install tqdm

!python3.8 -m pip install torch==1.13.1

and renaming the inference line in the Colab to use python3.8

(not necessary)

! pip install soundfile==0.11.0

distutils was necessary to fix a numpy wheel error, but installing regular 3.8 first was necessary for Colab to recognize the !python3.8 commands. Because that 3.8 install was bare, pip had to be installed separately for it. Then the rest of the necessary packages are installed for 3.8 - the old librosa fix, numpy for 3.8, and the broken dependencies numba and tqdm. The last torch working in the HV Colabs was 1.13.1; 1.4 didn't work even though it's compatible with 3.8 - maybe a CUDA or generally an upgraded Ubuntu problem, can't tell. Installing torch was necessary anyway because it wasn't installed for 3.8.

Additionally, to fix the regular VR Colab, this line was necessary:

!python3.8 -m pip install opencv-python

And for some reason, I needed to install these both with normal pip, like below, and with python 3.8 - so basically twice - otherwise it gave a module-not-found error

! pip install pathvalidate

! pip install yt_dlp

All that hassle with Python 3.8 is necessary because numpy on Colab got a newer version, and newer versions no longer support the functions used in the HV Colabs, as they got deprecated.

Separating speakers in recording

Guide and script for WhisperX by Pashahlis/ai_characters

“A script on the AI hub discord for automatically separating specific voices from an audio file, like separating a main character's voice from an anime episode.

I massively updated this script now, and I am also posting it here now, since this discord is literally about that kinda stuff.

Script to automatically isolate specific voices from audio files

(e.g. isolating the main character's voice from an anime episode where many different characters are speaking).

After literal hours of work directing ChatGPT, fixing errors, etc, there is now a heavily updated and upgraded script available:

I encountered some transcription errors (musical notes, missing speaker or start and end times) that would result in the entire script failing to work. So the updated script now skips such audio. That is not a problem, however, as for a 22-min file it skipped only 16s of audio and the errored audio is just music or silence anyway.

It now also automatically merges all your audio files into one if you provide multiple, so that the speaker diarization remains consistent. This increases diarization time by quite a lot, but is necessary. The merged file will be temporarily saved as a .flac file, as .wav files have a maximum file size of 4gb. The resulting speaker files at the end of the script are created as .wav again, though, as it is unlikely they will reach 4gb in size.

I also added helpful messages that tell you at which state of the script it currently is at and which audio files it is processing at the start with the total length of audio being processed.

I also made sure that it saves the speaker files in the original stereo or mono and 16 bit or 32 bit format.

At the end of the script execution, it also lists all the speakers that were identified in order of and with the audio length for each speaker. It also lists the total amount of audio length that had to be skipped due to processing errors, as well as the total time it took to execute the script.

Last but not least, I ran this script on a vast.ai rented Ubuntu based VM with a 4090 GPU and it worked. I did this to test Linux as well as because I was processing over 4h of audio, so I wanted this to be fast. Keep in mind that if you are running this script on your home PC with a bad GPU and are processing a lot of audio, it can take quite a while to complete.

Script is attached.

https://cdn.discordapp.com/attachments/708579735583588366/1132503488610455624/diarization_example.py

example console output:

https://cdn.discordapp.com/attachments/708579735583588366/1132503651672408094/message.txt

example speaker output:

https://cdn.discordapp.com/attachments/708579735583588366/1132503868194967684/speaker-SPEAKER_33_combined.wav

Usage instructions:

install whisperx and its additional dependencies such as FFmpeg as per the instructions on the GitHub page https://github.com/m-bain/whisperX

Additionally, install pydub (and any other dependencies you might be missing if the script gives an error message indicating you are missing a dependency)

install ffmpeg-python, make sure to use the following command instead of pip install if you're running this in a conda environment, otherwise it won't work: conda install -c conda-forge ffmpeg-python

edit the script to include your huggingface token and path to the folder containing the audio files you want to process

run the script simply by python your_filename_here.py

Results are quite good for what it is, but you'll definitely need to do some additional editing in audacity and ultimate vocal remover or whatever afterwards to cut out music, noise, and other speakers that were wrongfully included. It definitely works best with speakers that appear a lot in the audio file, like main characters. It does a very good job at separating those.

I won't provide tech support beyond this, as I am no programmer and did this all by just directing ChatGPT.”
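In case the attached script links die, the core of such a workflow with WhisperX looks roughly like this - a minimal sketch following the WhisperX README (the Hugging Face token, file names and the chosen speaker label are placeholders, and the full script above does much more, like merging files and skipping errored segments):

import whisperx
from pydub import AudioSegment

device = "cuda"                 # or "cpu" (then use compute_type="int8")
audio_file = "episode.flac"     # placeholder input

# Transcribe, then align words to get accurate timestamps
model = whisperx.load_model("large-v2", device, compute_type="float16")
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=16)
align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device, return_char_alignments=False)

# Speaker diarization (needs a Hugging Face token) and speaker assignment
diarize_model = whisperx.DiarizationPipeline(use_auth_token="HF_TOKEN_HERE", device=device)
result = whisperx.assign_word_speakers(diarize_model(audio), result)

# Glue together every segment of one chosen speaker into a single file
full = AudioSegment.from_file(audio_file)
speaker_audio = AudioSegment.empty()
for seg in result["segments"]:
    if seg.get("speaker") == "SPEAKER_00":
        speaker_audio += full[int(seg["start"] * 1000):int(seg["end"] * 1000)]
speaker_audio.export("speaker_00.wav", format="wav")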

Or check alternatives

UVR5 GUI (MDX, VR, Demucs 2-4 and UVR team models)

(more options and models/AIs compared to Colabs):

https://github.com/Anjok07/ultimatevocalremovergui/releases

AIs: MDX-Net, MDX23C, VR, Demucs, BS-Roformer (beta) all with models trained by UVR team (besides 4 stem).

It has also a new feature of ensemble MDX model with UVR's trained Demucs 3 model and Demucs 4 (or UVR 2 stem or stock 4 stem option) along with UVR models (V4 and V5). In new/beta versions also MDXC new v3 MDX-Net arch.

Official app Win 11 installation tutorial:

https://youtu.be/u8faZW7mzYs

MacOS build:

https://github.com/Anjok07/ultimatevocalremovergui/tree/uvr_5_5_MacOS

MacOS Catalina tutorial (outdated at this point):

https://www.youtube.com/watch?v=u8faZW7mzYs

(better not to run the Windows build in a W10 VM, or you will get processing times of around 3 hours)

Windows 7 users:

"To use the newest python 3.8+ with Windows 7 install VxKex API extensions and in case of problems select Windows 10 compatibility in EXE installer properties."

Here you can find a searchable PDF guide by the devs for UVR5 GUI describing functions and parameters (can be outdated)

https://cdn.discordapp.com/attachments/767947630403387393/1002916955679891548/UVR5GUIGUIDE.pdf

Video guide:

https://youtu.be/jQE3oHXfc7g

If you don't have all the models in UVR5 GUI described in this guide, download the expansion pack:

https://github.com/Anjok07/ultimatevocalremovergui/releases/download/v5.3.0/v5_model_expansion_pack.zip

and VIP models (free now)

https://www.buymeacoffee.com/uvr5/vip-model-download-instructions

For settings for the GUI, check:

https://photos.app.goo.gl/EUNMxm1XwnjMHKmW6

(though it's mostly outdated).

Web version from aufr33 (UVR5 co-author) -

https://x-minus.pro/ai

Free users get only one UVR model, without parameters, as the "lo-fi" option (an unreleased model, mp3, 17kHz cutoff). Registered users also get Demucs V3 (2 stem) (or 6 stems?) (the site is by one of the authors). Premium users get Demucs 4 (4 stem), with its better htdemucs_ft model for songs shorter than 5 minutes [a better equivalent of the previous demucs_extra model, which wasn't quantized], and 7-8 minutes in the future (not sure if it also got replaced by the 6s model for premium users as well).

Besides WAV, paid users get exclusive unreleased VR model when aggressiveness is set to minimum.

(no longer needed as UVR now has separate DirectML branch and executable)

Optional fork of UVR GUI for AMD and Intel cards, currently supporting only VR Architecture and MDX using DirectML (Demucs currently not supported). If you have Nvidia card, then use official app above since CUDA is supposed to be faster.

“A four minute and 20 second audio takes about 30 seconds (including saving) using 1_HP-UVR on an Intel Arc A770 16GB. It takes up approximately 6GB of VRAM.”

If you only use MDX models, in most cases it won't be faster than processing with CPU - i5 4460 has similar performance to RX 6700 XT here, so better stick to official app.

https://github.com/Aloereed/ultimatevocalremovergui-directml

Python command line fork of UVR 5 with current models support:

https://github.com/karaokenerds/python-audio-separator

GUI FAQ & troubleshooting

- It's not guaranteed to run on older versions of Windows than 10, so do it at your own risk.

“3.8.10 is the last [Python] official installer that works on Win7, however I was able to find an unofficial [Python] installer from GitHub for 3.10.13 on Win7 and that seemed to do the trick! No more error on load of UVR”

“If anyone needs the solution to running it on [MacOS] Mojave+ go to the Releases page on GitHub scroll down to 5.5, under assets grab UVR 5.5 x86_64_9_29.dmg. Confirmed working now on my Mojave machine. Thanks to @ambresakura on GH”

- Installing the GUI outside the default location on C: drive may result in e.g. startup issues. If you lack space on C: drive, create your folders using Symlink Creator to redirect the content to some other disk, keeping the C: location in the Windows file system logic.

Alternatively, copying only the Ultimate Vocal Remover\gui_data folder to the C: drive while keeping the GUI installation on another drive might work as well

- MDX-Net HQ3 in UVR with CPU takes 2 minutes with Ryzen 5 3600

- HQ_4 takes ~13 minutes on C2Q @3.6 DDR2 with default settings (and it’s faster than HQ_3)

- For faster GPU processing with UVR5 with CUDA, you need an Nvidia card with min. CUDA 11 (Late Kepler GPUs with compute capability v. 3.5 or newer - Wikipedia article, Nvidia list) - min. GeForce GT 640 (GDDR5) and all GT/GTX 700 series (although 2GB cards will probably cause issues, and 4GB on certain models and settings).

Or recently, you can also use AMD/Intel GPU, with separate installer with OpenCL support (most likely min. requirement is GCN or Polaris architectures and up - HD 7XXX and RX 4XX, but even 4GB variants may crash on certain settings).

You can also use Mac M1 for GPU acceleration (MPS GPU support in separate installer).

If you don't meet these requirements, you're forced to use CPU processing, which is very slow (even a Ryzen 5950X is slower than a 1050 Ti in this application, and a 1700X is twice as slow as even a 940M). Intel Pentium might be unsupported, but AVX or SSE4.2 is not required.

An OCed Q9650 is fine for CPU processing of the HQ_3 model and VR models with 512 window size, but MDX23C and Demucs ht/ft cannot be processed in under ~5-17 hours without GPU acceleration. Be aware that your system may occasionally become unresponsive while separation is in progress (although you can set all the priorities in Process Lasso to Idle, and it will be saved for future use).

As for new Nvidia GPUs, something like an RTX 3050 (8GB) is a good choice for even the heaviest processing and is (theoretically) equivalent to Colab's Tesla T4 in CUDA computing power (but it's not really enough for training, of course, and in Colab it's like 3 times slower). But watch out for the smaller 4GB laptop variants, as they can be more problematic.

(probably fixed) 4GB GPUs will sometimes force you to reopen UVR after each separation to free up VRAM or else separation might be unsuccessful (setting chunks in old versions of UVR to 10 or lower might alleviate the issue).

AMD and Intel GPUs using OpenCL are slower in this separation task vs CUDA.

Vocal chops using MDX models are more likely to appear on 4GB VRAM cards (use CPU processing with e.g. 12GB of RAM to get rid of the issue). MDX HQ_1 (or later) model can cause errors on some 4GB VRAM laptop GPUs at least with wrong parameters (you might want to use CPU processing instead, then min. 8GB RAM recommended).

Official requirements from GH page:

Nvidia GTX 1060 6GB is the minimum requirement for GPU conversions.

Nvidia GPUs with at least 8GBs of VRAM are recommended.

- If you want a fast 2nd hand GPU with more VRAM, consider 1080 Ti or 2080 Ti or even 3080 Ti (16GB). Pretty fast ones for separations.

1080 Ti is much faster in this task than 3060 12GB.

https://media.discordapp.net/attachments/767947630403387393/1133164474749169864/image.png

The higher the total amount of CUDA cores, the better.

- As for very weak CPU processing -

On an AMD A6-9225 dual-core CPU (2/2) with 4GB RAM, a three-model ensemble (MDX, MDX, Demucs 4) took almost 17 hours.

On i3 3xxx it took around 8 hours.

Inference time of HQ_3 on a server Quad counterpart - E5450, OC @3.6 DDR2 @~800Mhz for 4:19 track is 20 minutes and 22 seconds (default overlap and 256 segments) - HQ_4 is faster

- 2GB VRAM cards had some issues even on CPU, maybe it's fixed already

- (no longer needed) 4GB VRAM cards should be supported out of the box with chunks set to auto (6GB may be required for longer tracks for auto setting or batch processing for chunks higher than at least 10).

-  Minimum RAM requirement is 8GB. With 4GB RAM you can run out of memory on longer tracks (probably fixed in many cases in the v 5.5). 6GB RAM works correctly, at least on single MDX-Net models.

- UVR5 GUI instead of old CML “mirroring” has now “hi-end process” for VR models which is actual mirroring (no mirroring2, not sure about possible automatic bypass from CML while using ensemble of VR models) but don’t confuse it with old “hi-end process” from CML version which was dedicated for 16kHz models.

Q: If you run a single model with the default configuration, it completes successfully... the problem is when ensembling 2 models - it does not have enough resources to complete the process, unless using a manual ensemble. It also throws an error if the chunk size was changed, even with a single model. It seems there is not enough VRAM for processing the song.

A: I had the same issue the other day running ensemble 4 models.

Turned out - as the error msg showed, the chunk size was too big...

I prolly must have changed it by accident to `Full` - when I set it back to `Auto` - it was able to process.

U can find this setting under Settings > Advanced MDX Options.

- (probably fixed) For 4GB VRAM GPUs and VR 3090 models (e.g. 1, 2, 9_HP-UVR), you may need to split e.g. a 2:34 song into two parts (I recommend lossless-cut) or alternatively use the chunks option if you encounter a CUDA out-of-memory error. Splitting with lossless-cut effectively does the chunking, so it won't be necessary to set chunks in UVR in case of some problems (not in all cases on 4GB VRAM).

-  UVR GUI will only process files with English characters (maybe fixed)

- (fixed irc)  "When choosing to save either vocals or instrumentals only, the app saves the exact opposite (if I want to save vocals only, it will save instrumental, and vice versa)"

-  A value of 10 for aggressiveness in VR models is equivalent to 0.1 (10/100=0.1) in Colab

-  Hidden feature of the GUI:

"All the old v5 beta models that weren't part of the main package are compatible as well. Only thing is, you need to append the name of the model parameter to the end of the model name"

Also, V4 models are compatible using this method.

- The GUI also has full stock 4 stem Demucs 4 implemented. For 4 stem, simply pick up _ft model since it's the best for 4 stems. Demucs-UVR model from Demucs 3 gets worse results than newer Demucs 4 ft model.

- You might consider using Nvidia Studio Drivers for UVR5. Versus Game Ready normal drivers, they can be more stable, but less often updated. You can check your current type of drivers in GeForce Experience (but if you don’t know which ones you have, they’re probably Game Ready)

Q: Is there a way I can remove models I already have downloaded?

I want to remove all the HP models, but I don't want to delete them from the directory, I want to be able to get them back if I need them

A: Check the current download center and if all the models you want are there, then you can delete them and redownload from there later

1. Delete the models from the directory, or

2. Move the models to a separate folder out of the directory

- At least since the introduction of batch mode, the stability of the app on lower-VRAM GPUs has improved, but you can see more vocal residues when processing on a 4GB GPU vs on CPU, while an 11GB GPU doesn't really have that problem.

Maybe something changed since batch mode was introduced, but some vocal pop-ups could be fixed only with chunks set to 50-60 (11 and 16GB VRAM cards only) in the old versions and CML.

Some low values were still culprits of vocal pop-ups in chunks mode (at least before the patch).

- Chart showing separation times for various MDX models and different chunks settings on desktop GTX 960 4GB click and click

- (no chunks in 5.6 anymore) On 4GB VRAM cards, you can encounter crashes with the newest instrumental and Kim vocal models while using batch processing. Lowering chunks to 10 or less (better even lower; sometimes it still crashes) should help

-  If something is suddenly eating your disk space on system disk, check: C:\Users\User\AppData\Local\CrashDumps because UVR can create even few gigabyte crash dumps. Consider turning on compression in properties for that folder.

-  You should have around 20GB of free space on C: drive after UVR installation on 12GB RAM configurations for separating top ensemble settings (it uses a lot of pagefile) and at least 10GB for 24GB RAM for long songs on 4GB VRAM cards. You can enable pagefile on another drive as well if you run out of space on the system drive (better if it was an SSD as well).

If your disk space is not freed after separation, check in PowerShell whether you have Memory Compression and Page Combining enabled:

1) Type: Get-MMAgent 2) If Memory Compression is not enabled, type: Enable-MMAgent -mc (video tutorial)

-  Q: When ensembling and having settings test mode enabled, UVR keeps all the different outputs before ensembling in a folder. If you're not careful, these quickly can stack up.

Possible to have a feature where UVR automatically deletes those after ensembling?

A: Disable '*Save all outputs*' in *Ensemble Customization Options* > *Advanced Option Menu* is what you ask for.

- Performance of GPU per dollar in training and inference (running a model): click

- How to check whether the model is instrumental or vocal?

Q: Are VR Arc models also grouped between instrumentals/vocal models, or it's just MDX-Net models?

A: The moment you see Instrumental on top (and Vocal below) in the list where GPU conversion is mentioned, you know it's an instrumental model.

When it flips the sequence, so Vocal on top, you know it's a vocal model.

Same happens for MDX and VR archs.

Q: [How to] have UVR automatically deleting the ensemble result folder after processing a song.

A: Go to settings, ensemble options, uncheck "Save all outputs".

- You can perform the manual ensemble on your own already separated files (e.g. from Colab) in UVR5 under "Audio Tools”. Just ensure that files are aligned (begin in the same place). Sometimes using lossy files can mess with offset and file alignment.

- Furthermore, you can use Matchering in Audio Tools, e.g. to match a muddy result without residues to a separation with more clarity but containing residues you want to get rid of. Just use the file without residues as the target.
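The same Matchering engine also exists as a standalone Python package if you prefer doing this outside UVR; a minimal sketch (file names are placeholders, and mg.pcm24 just sets the output format):

import matchering as mg

# Placeholder file names: the "muddy but clean" result is the target,
# the "clearer but with residues" result is the reference to match against
mg.process(
    target="separation_muddy_no_residues.wav",
    reference="separation_clear_with_residues.wav",
    results=[mg.pcm24("matched_result.wav")],
)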

- If you have crashes on “saving stem”, uninstall odrive

- Q: an option to add the model's name to the output files (this existed in a previous version of UVR but now it's gone) it was really useful when you needed to test multiple models on the same song

A: It's still there under additional settings "Model Test Mode"

- Q: I want to separate an audio from a video (input is still empty when I choose a file)

A: Go to General Process Settings>Accept Any Input

- Q: First time trying the ensemble mode and I used the VR models: "De-Echo-Aggressive, De-Echo-Normal, DeEcho-DeReverb, DeNoise", and now the outputs confuse me. In the folder called "Ensembled-Outputs" there are many files, which are from each of the models. Outside that directory are 2 wav files, one says Echo, the other No Echo. Isn't the ensemble mode basically a wav file going through each model and saving a final wav file after it went through all the models listed?

A: The two files outside the ensemble folder are the final ensembled files.

The folder is all the separate outputs from each model (you've enabled that in settings)

Q: Those files are final after they went through all the models, right? Not just the DeEcho model.

A: Yes

Q: I am just suspicious of the naming, I see at the time, and it makes sense that the files outside the directory are the final version although are they after all the models or just 1 model.

A: the naming is just whatever stem is assigned to the models, in your case all the models output echo and no echo file

so the final ensemble files will have that in the name

- Q: What is this "band" that I keep seeing in the spectrograph of tracks that I've isolated with x-minus?

A: MDX noise - a noise it produces no matter what. You can either use UVR De-noise model or isolate the track twice. Once normal one and already inverted,

then u add the results of normal-inst, inverted-inst, reinvert the inverted-inst, merge both normal and reinverted-inst.

The merged will be without noise, but 6db's higher - so lower the gain accordingly, and u'll get the same, just no noise. Repeat for vocals obv
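A minimal sketch of that double-pass trick with soundfile - the two separation passes themselves still have to be done in UVR/Colab, and the file names are placeholders:

import soundfile as sf

# Pass 1: instrumental separated from the normal song
# Pass 2: instrumental separated from the polarity-inverted song
#   (make the inverted input with e.g.: song, sr = sf.read("song.wav"); sf.write("song_inverted.wav", -song, sr))
inst_normal, sr = sf.read("inst_from_normal.wav")
inst_inverted, _ = sf.read("inst_from_inverted.wav")

# Re-invert the second pass and sum: the music adds up (+6 dB) while the MDX noise cancels,
# then pull the level back down by 6 dB
merged = (inst_normal - inst_inverted) * 0.5
sf.write("inst_denoised.wav", merged, sr)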

- Q: voc_ft doesn’t have any spectrum above 17.7kHz. How to restore it, and have e.g. 48kHz or 96kHz output like input file has?

A: Turn off “Match Freq Cut-off” but it copies the remaining frequencies from the original, leading to possibly more noise.

“if you want true 96khz you need to manually lower the rate for 44100 or less since the models themselves are 44100”
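What turning off “Match Freq Cut-off” effectively does can also be reproduced manually: keep the separated result below the model's cutoff and copy everything above it from the original mix. A rough FFT-based sketch, assuming both files have the same length and sample rate (cutoff value and file names are placeholders; note this also copies any vocal content above the cutoff):

import numpy as np
import soundfile as sf

cutoff_hz = 17700                       # approximate voc_ft training cutoff
sep, sr = sf.read("separated.wav")      # band-limited separation result
orig, _ = sf.read("original.wav")       # full-band original mix, same length and sample rate

def split_bands(x, sr, cutoff):
    # Return (below-cutoff part, above-cutoff part) of a mono or multichannel signal
    spec = np.fft.rfft(x, axis=0)
    freqs = np.fft.rfftfreq(x.shape[0], d=1.0 / sr)
    low, high = spec.copy(), spec.copy()
    low[freqs >= cutoff] = 0
    high[freqs < cutoff] = 0
    n = x.shape[0]
    return np.fft.irfft(low, n=n, axis=0), np.fft.irfft(high, n=n, axis=0)

sep_low, _ = split_bands(sep, sr, cutoff_hz)
_, orig_high = split_bands(orig, sr, cutoff_hz)
sf.write("separated_fullband.wav", sep_low + orig_high, sr)

This is essentially a manual version of the "Joining frequencies from two models" approach listed in the table of contents.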

- It can happen that VR models using 512 window size can crash on 4GB cards, but 272 will be fine, although it will take more time

Q: “I have tried everything and also googled a lot, but UVR with MDX-Net is producing me this type of noise in every sample I have tried, that was not in the recording before. Anybody have an idea what can cause it?”

A: “It’s just part of the architecture. Either run it through a denoise model or run it through it twice with the second time the sound being phase-inverted”

“Enabling Denoise Output should do the trick. I use the Denoise Model option, seems to work quite well, to my ears, at least”

Q: “Is there any way to fix the uvr bve model saying "vocals" on the bgv and "instrumental" on the lead vocal file? It's unbelievably annoying”

A: Change primary stem from whatever it is set to the opposite in model options (screenshot)

Q: Matchering gives errors with long files.

A: A 14:44 input length limit is set for both target and reference audio, and something slightly above it caused an error (probably a bit above 15 mins, so maybe 15 minutes is the limit).

If you see the error log, it will specify whether reference or target file is too long, but the limit is the same for both.

___

- Q: Why I shouldn’t use more than 4-5 models for UVR ensemble (in most cases)

A: It's easier to get, when you separate the same song using some models. Get the best 4-5 models out of the most recommended currently, plus make some more separations, using some random ones. Then try to reflect avg spec from UVR by importing all of these results to your DAW.

You'll do it by decreasing the volume of each result by 20·log10(N) dB for N results - so for a pair, decrease the volume of both stems by about 6 dB (6.02 dB to be exact), and decrease all stems further accordingly for more than a pair (roughly 9.5 dB for three, 12 dB for four), so you'll get pretty much the same result as avg spec in UVR (see the sketch after this answer).

You can also maybe apply a limiter on the master. In the second variant, adjust the volume of each stem to your taste instead of keeping them equal. Doing this, you can observe that the more results you import beyond the best 4-5, the worse the outcome gets when you don't turn down the volume of the worse results. When you have control over the volume of individual results, you'll end up lowering the volume of bad results (or deleting them completely). You don't have that opportunity in UVR's avg spec - just like in the first DAW variant where every result stays at the same volume. The only way not to deteriorate the final result further is to delete such worse results from the bag entirely. Without the possibility of lowering the volume of a bad result when all volumes are equal, the more results you add on top of the 4-5 best models, the worse the final result you'll get - you cannot compensate for bad results in the bag like you can manually; all tracks are equally loud in an avg ensemble, so the good models are effectively drowned out if they are in the minority and the final outcome is worse.

The 4-5 max models ensemble rule is taken from long-conducted tests of SDR on MVSEP multisong leaderboard. When various ensembles were tested in UVR, most of these combinations didn't consist of more than 4-5 models, because above that, SDR was usually dropping. Usually due to all the reasons I mentioned.

Even the clever methods of using only certain frequencies from specific models, like in ZFTurbo's, jarredou's and Captain FLAM's code from MDX23 (don't confuse it with the MDX23C arch) and its derivations, which minimize the "diminishing returns" of using too many models, never used more than 4-5 models in their bags, I think - and they conducted an impressive amount of testing, and jarredou even focused on SDR while developing his fork (as did the original ZFTurbo code).
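To make the volume math above concrete: an equal-weight average of N results scales each one by 1/N, i.e. attenuates it by 20·log10(N) dB (≈6 dB for a pair, ≈12 dB for four). A tiny sketch of the DAW-style manual average (file names are placeholders; swap the equal weights for your own to down-weight bad results):

import numpy as np
import soundfile as sf

# Placeholder list of aligned, equally long separation results from different models
files = ["result_model_a.wav", "result_model_b.wav", "result_model_c.wav"]
stems, sr = [], None
for f in files:
    data, sr = sf.read(f)
    stems.append(data)

n = len(stems)
print(f"each result is effectively attenuated by {20 * np.log10(n):.2f} dB")  # ~9.54 dB for three

# Equal-weight waveform average; this mirrors the DAW method described above,
# which the guide says is pretty close to UVR's avg spec
avg = sum(stems) / n
sf.write("manual_avg_ensemble.wav", avg, sr)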

- Chunks may alter separation results

(update: chunks have now been replaced with batch mode even on 4GB cards; the feature was introduced in one of the beta patches and is available in v. 5.6. In this version you can no longer fall back to chunks experimentally if batch mode gives you some vocal pop-ups compared to 10GB GPUs, which is a pretty common issue in 5.6; the old text follows).

E.g. a bigger chunk value is less likely to cause instruments to disappear.

Chunks 1 is not the same as chunks full (disabled). Also, chunks may briefly distort some vocals in the middle, where the split is made. Chunks “auto” is calculated individually for your VRAM and RAM configuration (and the song length), so the result will differ between users for the same song. The maximum chunks value differs between MDX models (e.g. NET 1 will allow bigger values than the newer Inst models with a higher training frequency). You can test what the maximum supported chunk size is for your computer specs until you encounter a crash (e.g. for a 5:11 song and inst main 496 - chunks 15 (20 for a 30 s song) for a 4GB desktop card, 38 for a 6GB laptop card (50 for the NET 1 model), and around 50 for 11GB). The sweet spot for a 3:39 track is chunks 55 (works at least on 16GB VRAM) - more than that gives worse results. Also, on some GPUs/configurations you may notice variations in very short vocal bleeding not (fully) associated with chunks, which don't happen on e.g. x-minus or other configurations (1660 Ti vs 1080 Ti and 960 - we don't know what causes it). In this case, you can only alleviate the issue by changing chunks. Be aware that the low maximum chunks on 4GB cards, besides more sudden vocal residues and cuts in the result, may also cause specific artefacts like beeping which don't occur on e.g. an 11GB card (the issue happened with the Kim vocal model).

(older) UVR & x-minus.pro updates

Q: What is the segment size/overlap for voc_ft processing for the uvr bve models on x-minus, aufr33?

A: --segments_mdx 384

--overlap_mdx 0.1

uvr bve v1

-0.2, -0.05 and 0.15

Average aggressiveness is 0.0 (for v2)

- Anjok (UVR5) “I made a few more fixes to batch mode for MDX-Net before I release it publicly to GitHub later this week. This install also includes a new full band model that will be included in this week's public patch. Please let me know if you run into any bugs or issues.”

Link (not needed anymore):

(the model is called UVR-MDX-NET-Inst_HQ_1 - it’s epoch 450, better SDR than 337 and 403 models, only sometimes worse than narrowband inst3 [464])

- Anjok: "I decided to make a public beta for everyone here who wants to try the new patch with **batch mode for MDX-Net** before I release it publicly to GitHub next week. This install also includes a **new full band beta model**! [full_403] Please let me know if you run into any bugs or issues.” Patch download link

If you don't have the new model on the list, make sure you have "Download more models" on your list.

- The beta patches are currently only for Windows (but just the fb 403/450 models can be used in the older UVR version, and it works correctly - the patch itself is an exe installer which has the model inside and doesn't check for current UVR installation)

Update 14.02.23

"I found a bug in the MDX-NET.

If the input song contains a DC offset,

there will be a lot of noise in the output!

It has already been fixed on the XM.

It will also be fixed soon in the next UVR GUI update." Examples

Update 11/12.02.23

"I will soon add a new setting to fine tune the Karokee / B.V. model. This will help remove **even wide stereo lead vocals**.

"You can now specify the placement of the lead vocal. The percentages are approximate vocal wideness."

Here is the current result. As you can hear, the lead vocals are hardly removed [in the old setting]."

"this is super cool, if you invert the 2 results you can actually get the stereo width vocals isolated

1 step closer to more than just 1 track bgvox separation"

"Ooo that's very interesting, stereo lead vocals always get confused for background ones"

Update 4.02.23

New chain ensemble mode for B.V. models available on x-minus

"the chain is the best bg vox filtering I've ever heard"

"It mixes MDX lead vocal and a little bit of instruments. The resulting mix is then processed by the UVR (b.v.) v2 model and the cleaned lead vocal is inverted with the input mix (song).

Unlike min_mag and other methods, when using chain, processing is sequential. One model processes the result of another model. That's why I called it a "chain"." Aufr33

Update 31.01.23

"**The new MDX Karokee model is ready and will be added to [x-minus.com] tomorrow!***" aufr33

New Demucs 4 (probably instrumental) model is in training. edit. training stopped due to technical issues and full band MDX models were trained instead.

"Throwing a Demix Pro karaoke model for comparison... I think the bgv parts still sound better for this song, but demix has more noise on the lead parts

Demix keeps more backing [background] (and somehow the lead vocals are also better most of the time, with fuller sound)"

"MDX in its pure form is too aggressive and removes a lot of backing vocals. However, if we apply min_mag_k processing, the results become closer to Demix Pro.”

“In the future, we will create a [b.v.] model for Demucs V4. The MDX-NET is not really well suited for such a purpose."

Update 24.12.22

Wind instruments model (saxophone, trumpet, etc.) added to x-minus for premium users (since March now also in UVR5).

"I tested. Maximum aggressiveness extracts the most amount of instrument, while minimum the least. The model is not bad at all, but has hiccups often (maybe it needs a much larger dataset)"

Maximum aggressiveness "gives you more wind".

Update 20/19.12.22

New UVR5 GUI 5.5.0 rewrite was released. Lots of changes and faster processing.

MDX 2.1 model added as inst main (inst main 496) in UVR5 GUI.

- There was some confusion about MDX 2.1 model being vocal 438, but it’s inst main.

MacOS native build available on GitHub.

VIP models are now available for free with a donation option.

More changes:

"Pre-process mode for Demucs is actually very useful. Basically, you can choose a solid mdx-net or VR model to do the heavy lifting in removing vocals and Demucs can get the rest with far less vocal bleed"

"Secondary Models are a massive expansion of the old "Demucs Model" checkbutton MDX-Net used to have. You'll want to play around with those to find what works for the tracks your processing."

There was also Spectral Inversion added, but it seems to decrease SDR slightly.

There was an additional cutoff to MDX models introduced - “Just a heads up, for mdx-net, the secondary stem frequencies have the same cut-off as the primary stems now

There were complaints about lingering vocals (or instrumentals depending on the model) in the upper frequencies that was audible and very bothersome”

Update 04.12.2022

"**A new MDX model has been added!**

This model uses non-standard FFT settings optimized for high temporal resolution: 2048 / 5210

https://x-minus.pro/ai?hp&test

[results are very promising]

edit. 19.12. Final main model sometimes leaves more vocal leftovers.

Update 16.11.2022

"Due to anti-Russian sanctions, I will no longer be able to receive your donations from December 9th. All available withdrawal methods are no longer available to me. I will try to solve this issue, and probably move to another country such as Kazakhstan or Uzbekistan, but it will take some time, and servers must be paid for monthly.

As a temporary solution, I will use Boosty. I ask everyone who is subscribed to Patreon to cancel your subscription and subscribe to Boosty: https://boosty.to/uvr

**Just a reminder that I'm switching from Patreon to Boosty.**

If you want to renew your subscription but don't want to mess with Boosty, I've found an alternative for *European* users!

https://www.profee.com"

If you have any questions, DM aufr33 on Discord.

Update September 2022

New VR model added to UVR5 GUI for patreons.

Update 31.10.22

The release of the new instrumental models for patreons -

optimised for better hi-end (lower FFT parameter), not so big cutoff during training and possibly better results for hip-hop (and possibly more genres).

https://www.patreon.com/uvr

UVR-MDX-NET-Inst_1 is Epoch 415

UVR-MDX-NET-Inst_2 is Epoch 418

UVR-MDX-NET-Inst_3 is Epoch 464

The last one is the best model (at least out of these three) so far, although -

“I like it 50/50. In some cases it does a really good job, but on others it's worse than 418.”

“New models are great! I'm having a little issue on higher frequencies hanging in the vocals, but I found I can remove that by processing again”

"Anyone else still uses inst 464? I've been testing it and my conclusion is that it's a great model alongside 418

the pros of it are that it sounds fuller and doesn't have a lot of vocal residues, but it falls short when picking up some vocals, there might be occasions where it misses some bits, or you can hear some very low or very high-pitched vocals (though this is mostly fixed by using other models)"

"I've only tested one track so far, with 468 (My usual first test; Rush - The Pass). First off, it's the cleanest vocal removal of the track yet. First model to really deal with the reverb/echo and faint residuals ... but also the first model to trap a ton of instrumentation in the vocal stem.

Fascinatingly again, the UVR Karaokee model was able to almost perfectly remove the trapped instrumentation from the vocal line, creating a much more perfect result. I don't know if the new models were trained with this in mind, but the Karaokee model has proven to be extremely effective at this. The two almost work as a necessary pair."

(UVR Karaoke model should be available on MVSEP or maybe also x-minus, and of course UVR5 GUI and it's free and public)

September update 

of MDX vocal models added only for premium users (more models available in GUI, to be redeemed with code). They're available online exclusively for our Discord server via this link:

https://x-minus.pro/ai?hp&test-mdx

(probably not needed or working anymore as training is finished and final models are already released from this training period, but I'll leave it just in case).

edit. Be aware that the models below are outdated, and the newer ones above are supposed to already outperform them

(outdated, as some old models got deleted from x-minus)

mdx v2 (inst) = 418 epoch (inst model)

mdx v2 (voc) = 340 epoch (voc model)

Description for new MDX VIP vocal ones (instrumental based on inversion) and instrumental models (vocal models 9.7 (NET 1) and 423 available on MVSEP under MDX-B option):

Vocal models:

- beta 340 is better for vocals, while -

- 390 has better quality for instrumentals, though it has more vocal residues.

- "423 is really nice for extracting vocals, but is not good for instrumentals

- 427 is not good for me."

- “In the last 438 vocals are really nice, also backing vocals. Unfortunately, we can hear more music noises, but voices are amazing” (it's good for cleaning artifacts from inverts). (No longer available, at least on x-minus.)

- Beta 390 is better than 340. Instruments are cleaner but have more vocal disturbances.

- I've tried a combination of MDX 390 - UVR min_mag_k. Not really bad at all”.

- "406 keeps most of these trumpets/saxes or other similar instruments, and ensembling with max_mag means it combines it with UVR instrumental which already keeps such instruments, so you get best of both worlds".

Instrumental models:

- 430 or 418 are worth checking.

Update 17.11.2021 - older public UVR 9.6 and 9.7 vocal models (but still decent) for MDX are described in "MDX-Net with UVR team" section.

Upcoming UVR5 updates (outdated)

Since the training of MDX September models is completed, some older beta models might not be available anymore.

As of the middle of September a new VR model was in training, but cancelled due to not "conclusive" results, although later a new VR model was released.

"these models will be next:

1. Saxophone model for UVR.

2. "Karokee" model for MDX v2."

Also, completely rewritten UVR5 GUI version.

Among many new features - new denoiser for MDX models available and new Demucs 4 models (SDR 9).

Alternative online site, completely free:

https://mvsep.com/ (FLAC, WAV, 24-bit for MDX instrumentals and Demucs 3, 100MB per file limit, MP3 320kbps available, 512 window size for VR models (all leading UVR models, including the WiP piano model [better than Spleeter, worse than GSEP]), a big choice of various AIs including Demucs 3-UVR instrumental models (as for the free ones, worth trying out especially if you suffer bleeding in regular UVR5 or MDX and GSEP - model 1 is less aggressive, model 2 more destructive [though the opposite can happen], "bag" leaks even more), also the regular 4-stem model B - mdx_extra from Demucs 4 - plus HT Demucs 4 (the better ft model), and the MDX-UVR 423 and 9.7 vocal models (choose MDX model B and the new field will appear) - but in this case, for an instrumental, you need to ensemble with a UVR model to get rid of vocal bleeding). The biggest queue is in the evenings till around 10 PM CEST, close to none around 15:00 (working days).

MVSEP models from UVR5 GUI

- MDX23 on https://mvsep1.ru/ (not in UVR) - custom tech, consisting of various models from UVR.

- Demucs 4 ft - self-explanatory (might be shifts 1 and overlap 0.75 as he tested once)

MDX B (not sure about whether min, avg, max is set):

- Newest MDX models added - Kimberley - Kim inst (ft other), Kim Vocal 1 & 2, HQ_2

- 8.62 2022.01.01 - is NET 1 (9.7) with Demucs 2; this one has a new name now. It had a slightly bigger SDR on the same multisong dataset than the newer model below - the discrepancy vs UVR5 SDR results might be on the server side (e.g. different chunks), so it might still be the same model. The dates probably only relate to when the model was added to the site and nothing more (not sure here, but it might be so - NET 1 is indeed an older model than the one below). It looks like the model is used with Demucs 2 enabled (at least he said it was configured like this at some point)

- 8.51 2022.07.25 - might be vocal 423 a.k.a. main, not sure if with Demucs 2 (judging by how the instrumental from inversion of 423 looked - it cannot be any inst model yet, since those were released at the end of 2022, epoch 418 in September to be precise). It was tested on the multisong dataset on page 2 as MDX B (UVR 2022.07.250 - the date is the same as before, so nothing new here); can't say now if Demucs 2 is used here. In the times of 9.7/NET 1, Demucs 2 decreased SDR a bit (I don't know on which dataset), but instrumentals usually sounded somewhat richer with it enabled. Now it's better to ensemble with other models instead.

The change in MDX-B models scheme was probably to unify SDR metrics to multisong dataset.

- Demucs 3 Model B - mdx_extra (and rather not mdx_extra_q as ZFTurbo said it's "original" and used mdx_extra name on the channel referring to this model; in most cases the one below should be better)

- Ultimate Vocal Remover HQ

Window size used - 512

Here all VR arch model names

- UVRv5 Demucs - rather the same names

- MVSEP models - unavailable in UVR5

- MDX B Karaoke - possibly MDX-UVR Karokee or MDX-UVR Karokee 2 (karokee_4band_v2_sn, iirc), maybe the latter

The rest is outdated and not really recommended to use anymore

Issues using mvsep

- A NaN error during upload is usually caused by an unstable internet connection, and it usually happens on mobile connections when you are already uploading more than one file elsewhere.

If you have NaN error, just retry uploading your file.

- Rarely, an error about the file not being uploaded can occur after the upload - you need to upload your file again.

- If you finish a separation and click back, the model list can disappear until you click on another algorithm and pick yours again. But if you click Separate instead, it will process with the first model previously on the list (at least if it was also your previous choice).

- Slow download issues. Separation was complete, and I was listening to the preview when playback on the preview page simply stopped and couldn't be started. The main page didn't load (other sites worked).

Also, I couldn't download anything. It showed 0 B/s during the download attempt.

Two solutions:

- close all mvsep tabs completely and reopen

- Connect to a VPN and preview some track; after a short time the same can happen again and nothing plays or buffers. Then fire up Free Download Manager, simply copy the download link there, and it will start downloading. Later, the browser can also start downloading something you clicked a moment ago. Crazy.

Compared to MDX with the 14.7 kHz cutoff, and depending on the track, VR models alone (not MDX/Demucs) might leave or cut more instruments, or leave more constant vocal residues. In general, VR models are trained at 20 kHz, with possible mirroring covering 20-22 kHz, and their vocal removal is generally less aggressive (with exceptions). Most importantly, compared to MDX, VR tends to leave a specific noise in the form of leftover vocal bleeding artifacts; on the other hand, MDX, especially models with a cutoff, can be muddier and recall the original track's mastering less.

______

Manual ensemble

Colab for various AI/models

If you want to combine results from various MDX models with e.g. Demucs, and optionally VR architecture or different AIs, using Google Colab, here’s a notebook for you:

https://colab.research.google.com/drive/1fmLUYC5P1hPcycI00F_TFYuh9-R2d_ap?usp=sharing (you can perform manual ensemble on your own files in UVR5 under "Audio Tools")

The Colab got deleted. Here’s a mirror
https://cdn.discordapp.com/attachments/708912741980569691/1102706707207032833/Copy_of_Ensemble.ipynb

(we got two reports that it throws out some errors now, and could stop working due to some changes Google made into Colabs this year)

You should be able to modify it to use three models with different weights like 3, 2, 2, as in the Ensemble MDX-B (ONNX) + MVSep Vocal Model + Demucs4 HT example on the SDR chart (so it rather does not work like avg/avg in the GUI).
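
For the plain arithmetic of such a weighted ensemble (not the notebook's exact spectrogram-domain code), a minimal time-domain sketch could look like the following; the file names and the 3/2/2 weights are just placeholders mirroring the example above:

import soundfile as sf

# Hypothetical inputs: (file, weight) pairs following the 3, 2, 2 example above.
inputs = [("mdx_b.wav", 3.0), ("mvsep_vocal_model.wav", 2.0), ("demucs4_ht.wav", 2.0)]

stems, weights, sr = [], [], None
for path, weight in inputs:
    data, sr = sf.read(path)      # (frames, channels) float array
    stems.append(data)
    weights.append(weight)

length = min(len(s) for s in stems)   # trim to the shortest result
mix = sum(w * s[:length] for s, w in zip(stems, weights)) / sum(weights)
sf.write("weighted_ensemble.wav", mix, sr)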

Joining frequencies from two models

Sometimes a regular ensemble, even with min spec, doesn't give you complete freedom over what you want to achieve: you have a cleaner narrowband model result and a fullband model result with more vocal residues, but you still want a full spectrum.

Instead of using the ensemble Colab, you can also mix in some DAW the MDX-UVR 464/inst3 or Kim inst model result, which has a 17.7 kHz cutoff, with an HQ_1/2 or Demucs 4 result, which comes from a model with the full 22 kHz training frequency.

First, import both tracks. The most correct approach to avoid any noise or frequency overlapping is to use a brickwall highpass EQ at 17680 Hz on all Demucs 4 stems and leave MDX untouched - and that's it. You can use GSEP instead of Demucs 4 (possibly fewer vocal residues).

If you want to experiment further, as for a cut-off, once I ended up with 17725.00 flat high pass with -12dB slope for "drums" in Izotope Elements EQ Analog and I left MDX untouched. “Bass” stem set to 17680.00 in mono and "other" in stereo at 17680.00 with Maximiser with IRC 1 -0.2, -2, th. 1.55, st. 0, te 0. But it might produce hissy hi-hat in places with less busy mix or when hi-hat is very fast, so tweak it to your liking.

For free EQ you can use e.g. TDR Nova - click LP and set 17.7 and slope -72dB.

As a free DAW you can use free Audacity (new versions support VST) or Cakewalk, Pro Tools Intro, or Ableton Lite.

The result of the above will probably leave a small hole in the spectrum and a slight lack of clarity. Alternatively, you can apply a resonant high pass instead of a brickwall, so the hole will be filled without overlapping frequencies.

Similar method to this can also be used for joining YT Opus frequencies above 15750Hz with AAC (m4a) files, which gives more clarity compared to normal Opus on YT. Read this.
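
If you prefer to do this joining offline rather than in a DAW, here is a rough FFT "brickwall" sketch of the same idea: everything under 17680 Hz comes from the narrowband result, everything above it from the fullband result. The file names and the exact split point are only examples, and a real EQ with a slope will behave differently around the seam:

import numpy as np
import soundfile as sf

CROSSOVER_HZ = 17680  # split point used in this section

low, sr = sf.read("mdx_inst3_result.wav")    # narrowband result, kept below the split
high, sr2 = sf.read("fullband_result.wav")   # fullband result, used above the split
assert sr == sr2, "sample rates must match"
n = min(len(low), len(high))
low, high = low[:n], high[:n]

low_spec = np.fft.rfft(low, axis=0)
high_spec = np.fft.rfft(high, axis=0)
band = np.fft.rfftfreq(n, d=1.0 / sr) >= CROSSOVER_HZ   # bins above the split

joined_spec = low_spec.copy()
joined_spec[band] = high_spec[band]          # take the high band from the fullband result
joined = np.fft.irfft(joined_spec, n=n, axis=0)
sf.write("joined_fullband.wav", joined, sr)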

DAW ensemble

The counterpart of avg ensemble can also be made in DAW. When you drag and align all stems you want to ensemble in your DAW (Audacity is enough), you simply need to lower the volume of stems according to the number of imported stems to ensemble.

For a pair, you need to decrease the volume of both stems by about 6 dB (6.02 dB to be exact).

So, when you add another stem (for a 3-model ensemble), you need to decrease the volume of all stems by about 9 dB (9.54 dB to be exact - the total reduction for N stems is 20·log10(N) dB), and so on.

The other way round: it's roughly a 3 dB decrease every time you import a new track.

"I've made some tests by simply overlaying each audio above each other and reducing their volume proportionally of the number of audio overlays (like you would do in a daw), it scores like ~0.0002 better SDR than UVR's average."

Manual ensemble in UVR5 GUI of single models from Colabs

"You can use Colabs to make individual model separations and then use the manual ensemble tool from UVR5 GUI to merge them together (you don't need special CPU/GPU to use that tool and it's fast! 15-year-old computers can handle it).

In UVR GUI > process method = Audio Tools, then choose "Manual Ensemble" and the desired ensembling method."

Combine input is even more aggressive than Max Spec.

It takes two e.g. -15 LUFS songs and makes a pretty loud -10 LUFS result.

To potentially deal with the harshness of such output, you can set the quality in options to 64-bit (sic!), or manually decrease the volume of the ensembled files before passing them through UVR Combine Inputs.

Combine Inputs was good for ensembling KaraFan results with the least amount of residues, and preset 5 with more clarity but a bit more residues. The instrumental result had a fuller sound, better snares and clarity.

UVR’s VR architecture models

(settings and recommendations)

VR Colab by HV

(old) https://colab.research.google.com/github/NaJeongMo/Colaboratory-Notebook-for-Ultimate-Vocal-Remover/blob/main/Vocal%20Remover%205_arch.ipynb 

Use this fixed notebook for now (04.04.23)

(Since 17.03.23 the official link above for the HV Colab stopped working (librosa, and later pysound-related issues, again with YT links, but somehow fixed). Adding “!pip install librosa==0.9.1” in the OG Colab fixes the issue; it is necessary for both YT links and local files, and a clean installation works too.)

- HV also made a new VR Colab which, iirc, no longer clutters your whole GDrive but only downloads the models you use (though without VR ensemble), and it probably works without GDrive mounting.

(Google Colab in general allows separating on free virtual machine with decent Nvidia GPUs - it's for all those who don't want to use their personal computer for such GPU/CPU-intensive tasks, or don’t have Nvidia GPU or decent CPU, or you don’t want to use online services - e.g. frequently wait in queues, etc.)

Video tutorial how to use the VR Colab (it’s very easy to use): https://www.youtube.com/channel/UC0NiSV1jLMH-9E09wiDVFYw

You can use VR models in UVR5 GUI or

To use the above tool locally (old command line branch for VR models only):

https://github.com/Anjok07/ultimatevocalremovergui/tree/v5-beta-cml

Installation tutorial: https://www.youtube.com/watch?v=ps7GRvI1X80

In case of a CUDA out of memory error due to too-long files, use Lossless-cut to divide your song into two parts,

or use this Colab which includes chunks option turned on by default (no ensemble feature here):

https://colab.research.google.com/drive/1UA1aEw8flXJ_JqGalgzkwNIGw4I0gFmV?usp=sharing#scrollTo=I4B1u_fLuzXE

_________

Below, I'll explain Ultimate Vocal Remover 5 (VR architecture) models only (fork of vocal remover by tsurumeso).

For more information on VR arch, see here for official documentation and settings:

https://github.com/Anjok07/ultimatevocalremovergui/tree/v5-beta-cml

https://github.com/Anjok07/ultimatevocalremovergui

The best

VR settings

explained in detail

(These settings are available in the Colab and the CLI branch, and also in UVR 5 GUI, but without at least mirroring2; also, mirroring in UVR5 GUI for VR arch got replaced entirely by High End Process, which works as mirroring now, and not like the original High End Process, which was dedicated to the very old 16 kHz VR models only.)

These models can be used in 1) this Colab, 2) UVR5 GUI, 3) mvsep.com (512 window size, aggressiveness option, various models), or 4) x-minus.pro (for free: one UVR (unreleased) model without parameters ("lo-fi" option, mp3, 17.7 kHz cutoff); Demucs 4 for registered users, iirc (the site is by the author(s) of UVR5)).

I had at least one report that results for just VR models are better using the Colab above / old CLI branch instead of the newest UVR5 GUI, so be aware (besides both mirroring settings - only mirroring works, under High-End Process, and no mirroring2 [the 272 window size is added back as user input] - all settings should be available in the GUI). Interestingly, I received a similar report for MDX models in UVR5 GUI compared to the Colab (be aware, just in case). The problems might also be bound to VRAM, and don't exist on 11GB and up or in CPU mode.

Before we start -

Issue with additional vocal residues when postprocess is enabled

“--postprocess option masks instrumental part based on the vocals volume to improve the separation quality." (https://github.com/tsurumeso/vocal-remover) 

where HV in the Colab says: “Mute low volume vocals”. So, if it enhances separation quality, then maybe it should cancel some vocal residues ("low volume vocals"), so that's maybe not too bad an explanation.

But that setting, enabled at least in the Colab, may leave some vocal residues:

(it’s fixed in UVR GUI "the very end bits of vocals don't bleed anymore no matter which threshold value is used")

Customizable postprocess settings (threshold, min range and fade size) in HV's Colab were deleted, and were last time available in this revision:

https://colab.research.google.com/github/NaJeongMo/Colaboratory-Notebook-for-Ultimate-Vocal-Remover/blob/b072ad7418f6b1825d3dcff7cef70c5b0985d540/Vocal%20Remover%205_arch.ipynb#scrollTo=CT8TuXWLBrXF

So change default 0.3 or 0.2 threshold value (depending on revision) and set it to 0.01 if you have the issue when using postprocess.

Setting the threshold parameter to 0.01 fixes the issue (so with the default settings, quite the opposite of what this option is supposed to do happened, I believe).

Also, default threshold values for postprocess changed from 0.3 to 0.2 in later revisions of the Colab.

- the window size option set to anything other than 512 somehow decreases SDR, although most people prefer lower values (at least 320, me even 272; 352 is also possible, but anything above changes the tone of the sound more noticeably) - we don’t know yet why lower window sizes mess with SDR (a similar situation to GSEP). 512 might be a good setting for ensembling with models other than VR ones or for further mastering. Sometimes, compared to a 512 window size, 272 can lead to slightly more noticeable vocal residues. You might find bigger window sizes less noisy in general, but also blurrier for some people.

- aggressiveness - “A value of 10 is equivalent to 0.1 (10/100=0.1) in Colab”.

Strangely, the best SDR for aggressiveness using the MSB2 instrumental model turned out to be 100 in the GUI, 10 in the Colab, while we usually used 0.3 for this model, and for 500m_x as well; HP models usually behave best with lower values than HP2 models (0.09/10 in GUI).

- mirroring turned out to enhance SDR. It adds to the spectrum above 20kHz which is base training frequency of VR models.

    none - No processing (default)

    bypass - This copies the missing frequencies from the input.

    mirroring - This algorithm is more advanced than correlation. It uses the high frequencies from the input and mirrored instrumental's frequencies. More aggressive.

    mirroring2 - This version of mirroring is optimized for better performance.

--high_end_process - In the old CLI VR, this argument restored the high frequencies of the output audio. It was intended for models with a narrow bandwidth - 16 kHz and below (the oldest “lowend” and “32000” ones, none more). But now, in UVR5 GUI, High-End Process is the counterpart of mirroring (a rough sketch of the simpler "bypass" idea follows after this list).

(current 500MB models don’t have full 22kHz coverage, but 20kHz, so consider using mirroring instead, or none, if you want a fuller spectrum)

- Be aware that even for VR arch, the same rule for GPUs with less than 8GB VRAM applies (inb4 - the Colab T4 has 16GB): separations on 6GB VRAM have worse quality with the same parameters. To work around the issue, you can split your audio into specific parts (e.g. all choruses, verses, etc.).
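
As announced above, here is a minimal sketch of the simpler "bypass" idea: copy the band above the model's training bandwidth straight from the input mixture into the separated output. The 20 kHz cutoff and the file names are assumptions for illustration only; the actual mirroring options in UVR do more than this:

import numpy as np
import soundfile as sf

CUTOFF_HZ = 20000  # assumed VR training bandwidth

mix, sr = sf.read("mixture.wav")
inst, sr2 = sf.read("vr_instrumental.wav")
assert sr == sr2, "sample rates must match"
n = min(len(mix), len(inst))
mix, inst = mix[:n], inst[:n]

mix_spec = np.fft.rfft(mix, axis=0)
inst_spec = np.fft.rfft(inst, axis=0)
band = np.fft.rfftfreq(n, d=1.0 / sr) >= CUTOFF_HZ

inst_spec[band] = mix_spec[band]       # copy the missing band straight from the input
restored = np.fft.irfft(inst_spec, n=n, axis=0)
sf.write("vr_instrumental_bypass.wav", restored, sr)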

VR models settings and list

For VR architecture models, you can start with these two fast models:

Model: HP_4BAND_3090_arch-124m (1_HP-UVR)

1) Fast and reliable. V2 below has more “polished” drums, while here they're more aggressive and louder. Sometimes V2 might be safer and fits more cases where it's not hip-hop and the music is not drum-oriented, though in certain busier mixes (e.g. with a repetitive synth) it may, rarely, harm some instruments more. You may want to isolate using both models and pick the best results, even on the same album.

Windows size: 272

Aggressiveness: 0.09 (9 in GUI)

TTA: ON (OFF if snare is too harsh)

Post-processing: OFF (at least for this model - in some cases it can muffle background instruments besides the drums, e.g. guitar)

"Mirroring" (Hi-end process in GUI) (rarely "Mirroring2" here, since the model itself is less smooth and usually have better drums, but it sometimes leads to overkill - in that case check mirroring2 in CLI or V2 model above)

Better yet, to increase the quality of the separation (when drums in e.g. hip-hop can be frequently damaged too much during the process) go now straight to the Demucs section and read the "Anjok's tip".

If you have too many vocal residues vs 500m_1 model, increase aggressiveness from 0.09 to 0.2 or even 0.3, but it’s destructive for some instruments (at least without Demucs trick above).

Model: HP-4BAND-V2_arch-124m (2_HP-UVR)

2) Fast and nice model, but sometimes gives lots of vocal residues compared to the above; thanks to this, it may sometimes harm the snare less in some cases (still 4 times faster than 500m_1). It's ~55/45 which model is better, and it depends on the album, even within the same genre:

 

Window size: 272 (the lowest possible; in some very rare cases it can spoil the result on 4 band models, then check 320)

Aggressiveness: 0.09 (9 in GUI)

TTA: ON (instr. separation of a better quality)

Postprocess: (sometimes on; it rather complements the sound of this model, especially when the result sounds a bit too harsh, but it can also spoil drums in some places when e.g. strong synths suddenly appear briefly in the mix, probably being misidentified as vocals, so be aware)

Mirroring (it fits this model pretty well compared to mirroring2, which is not “aggressive” enough here) [mirroring doesn’t seem to be present in the GUI, so be aware]

Processing time for this model is 10 minutes using the weakest GPU in Colab (but currently you should be getting better Tesla T4).

(for users of x-minus) “slightly different models [than in GUI] are used for minimum aggressiveness. When we train models, we get many epochs. Some of these models differ in that they better preserve instruments such as the saxophone. These versions of the models don't get into the release, but are used exclusively on the XM website.”

Model: HP2-4BAND-3090_4band_arch-500m_1 (9_HP2-UVR)

3) An older but good model, though resource-heavy - check it if you get too many vocal residues, or in other cases when your drums are too muffled. Rarely there might be more bleeding and generally more spoiled instruments in comparison to those above; it depends on the track. In some cases it bleeds vocals less than HP_4BAND_3090_arch-124m

Window size: 272

Aggressiveness: 0.3-0.32 (30-32 in GUI)

TTA: ON

Postprocess: (turned ON in most cases with exceptions (it’s polishing high-end), and the problem with muffling instruments using ppr doesn’t seem to exist in this model)

Mirroring2 (I find mirroring[1] too aggressive for this model, but with exceptions)

! Be aware these settings are very slow (40 minutes per track in Colab on the former default K80 GPU, but it's faster now), so you might want to experiment with a 320/384, or at worst even 512, window size if you want to increase processing speed at the cost of isolation precision.

Colab’s former default Tesla K80 processes slower than even GTX 1050 Ti, so if you have a decent Nvidia GPU, consider using UVR locally. Since May 2022 there is faster Tesla T4 available as default, so there shouldn't be any problem.

HP2-4BAND-3090_4band_arch-500m_2 (8_HP2-UVR)

was worse in I think every case I tested, but it’s good for a pair for ensemble (more about ensemble in section below).

Model: HP2-MAIN-MSB2-3BAND-3090_arch-500m (7_HP2-UVR.pth)

4) Last resort, e.g. when you have a lot of artifacts (heavily filtered vocal residues), some instruments spoiled, and uneven sound across the track. A last resort because it's 3-band instead of 4-band and lacks some hi-end/clarity, but if your track is very demanding in filtering out vocal residues, then it's a good choice. The best SDR among VR-arch models.

Window size: 272

Aggressiveness: 0.3

TTA: ON

Postprocess: ON

Mirroring

It’s similarly nightmarishly slow in Colab, just like 500m_1/2 with these settings (1 hour per track on a K80), when you accidentally get the slower Tesla K80 assigned in Colab instead of a Tesla T4.

HighPrecison_4band_arch-124m_1

*)

May sometimes harm instruments less than HP_4BAND_3090_arch-124m, but may leak vocals more in many cases; generally the instrumentals lack some clarity, though it sounds more neutral vs 500m_1 with mirroring (not always an upside). It's not available in the GUI by default due to its not fully satisfactory results vs the models above.

Window size: 272

Aggressiveness: 0.2

TTA: ON

Postprocess: off

mirroring

SP in the GUI model names stands for "Standard Precision". Those models use the least amount of computing resources of any models in the application. HP, on the other hand, stands for "Higher Precision"; those models use more resources but have better performance.

So, what's the best VR arch model?

I'd stick to HP_4BAND_3090_arch-124m (1_HP-UVR) if only it gives a good result for your song (e.g. hip-hop). If you're forced to use any other VR model for a specific song due to unsatisfactory results with this model, then current MDX models will probably achieve better results.

The second most usable model for me was 500m_1 (9_HP2), and then HP-4BAND-V2_arch-124m (2_HP-UVR), or something in between, but compared to MDX-UVR models it might not be worth using anymore due to the possibility of more vocal residues.

Old VR models by UVR team (less aggressive) -

13/14 (4 band beta 1 or 2) - less aggressive than above

VR ensemble settings

As for VR architecture, ensemble is the most universal and versatile solution for lots of tracks. It delivers, when results achieved with single models fail - e.g. when snare is too muffled or distorted along with some instruments, but sometimes a single model can still provide more clarity, so it’s not universal for every track.

In most cases, an ensemble of only VR models is meant for tracks where, in the most prevalent busy-mix moments, you don't get major bleeding with single VR model(s), because VR rarely removes vocal residues from instrumentals as well as current MDX models do, or, with high aggressiveness, it becomes too destructive.

Order of models is crucial (at least in the Colab)! Set the model with the best results as the first one. Usually, using more than 4 models has a negative impact on the quality. Be aware that you cannot use postprocess in the HV Colab in this mode, otherwise you'll encounter an error. Please note that UVR 5 GUI now allows an ensemble of UVR and MDX models exclusively in the app, so feel free to check it too. Here you will find settings for a VR-models-only ensemble.

- HP2-4BAND-3090_4band_arch-500m_1.pth (9_HP2-UVR)

- **HP2-4BAND-3090_4band_arch-500m_2.pth (8_HP2-UVR)

- HighPrecison_4band_arch-124m_1.pth (probably deleted from GUI, and you’d need to copy this model from here to your GUI folder manually - if it will only work)

- HP_4BAND_3090_arch-124m.pth (1_HP-UVR)

(order in Colab is important, keep it that way!)

Or for less bleeding, but a bit more muffled snare, use this one instead:

HP-4BAND-V2_arch-124m.pth (model available only in Colab, recommended)

*on slower Tesla K80 you can run out of time due to runtime disconnection, but you should get faster Tesla T4 by default on first Colab connection on the account in 24h.

Aggressiveness: 0.1 (pretty universal in most cases, 0.09 rarely fits).

Or for more vivid snare if bleeding won’t kick in too much: 0.01 (in cases when it’s more singing than rapping - for the latter it can result in more unpleasant bleeding (or just in some parts of the track). Suggested very low aggressiveness here doesn’t leak as much as it could using the same settings on a single model, but it leaks more in general vs single models’ suggested settings).

0.05 is not good enough for anything AFAIK.

high_end_process: mirroring2 (just ON in GUI)

(for a less vivid snare check “bypass” (not “mirroring” for ensemble - for some reason both make the sound more muffled); be aware that bypass on an ensemble results in fewer vocal leftovers)

ensembling_parameter: 4band_44100.json

TTA: ON

Window size: 272

FlipVocalModels: ON

Other ensemble settings

  • For clap leftovers in vocal stem, check out this ensemble settings.
  • For creaking sounds, process your separation output more than once till you get there with this setting
  • Also reported clean instrumentals with this setting

Make sure you check the separated file after the process and that the file length agrees with the original file. Occasionally, the result file can be cut in the middle, and you’ll need to start the isolation again. Also, you can accidentally start isolation before the upload of the source file is finished. In that case, it will be cut as well.

It takes 45 minutes using a Tesla T4 (~RTX 3050 in CUDA benchmarks) for these 4-model settings. Change your songs for processing FAST after finishing the task, otherwise you’ll be disconnected from the runtime when the notebook stays idle for some time (it can even freeze in the middle).

In reality, the Tesla T4 may have much more memory, but what takes 30 minutes on a real RTX 3050 might here take even more than 2 hours - sometimes slower, sometimes slightly faster (usually slower). So you're warned.

**Be aware that this 4-model ensemble setting with both 500m models in most cases won’t suffice on the slowest (and no longer available in 2023) Tesla K80, due to its time and performance limits, to finish such a long operation, which exceeds 2 hours (it takes around 02:25h). Tasks going too much over 2 hours end up with a runtime disconnection, so you're warned.

Also be aware that the working icon of Colab on the opened tab sometimes doesn’t refresh when operation is done.

Furthermore, it can happen that the Colab hangs near 01:45-02:17h into the operation. To proceed, you can press F5 and click cancel on the prompt asking whether to refresh. The site will be functional again, but the process will stop without any notice. It is most likely the same case as when you suddenly lose your internet connection and the process still runs virtually until you reconnect to the session; but here, you just don’t have to click the reconnect button in the top right. Most likely you have very limited time to re-establish the connection before the process stops permanently if you don't reconnect after the connection is lost (or if the progress tracker/Colab stops responding). So in the worst case, you need to check whether the process is still working between 01:45 and 02:17h of processing. If you see that your GPU shows 0.84GB used instead of ~2GB, you’re too late - the process is permanently interrupted, and the result is gone. It’s harder to track how long it has been processing when you've already used the workaround once and the timer stopped, so you don't know how long it has been separating already.

The limit for the faster Tesla T4 is between 1:45 and 2:00h+ (sometimes 2:25, but it can disconnect sooner, so try not to exceed two hours) of constant batch operation, which suffices for 2 tracks isolated using the ensemble settings above with both 500m models (rarely 3 tracks).

HP2-4BAND-3090_4band_arch-500m_1 (9_HP2-UVR) - I think it tends to give the most consistent results for various songs (at least for songs when vocal residues are not too prevalent here)

HP-4BAND-V2_arch-124m (2_HP-UVR) - much faster and can give crisp results, but with too many vocal residues for some songs (like VR arch generally tends to)

HP_4BAND_3090_arch-124m (1_HP-UVR) - something between the two above, and can give the best results for some song too (out of other VR models)

HP2-MAIN-MSB2-3BAND-3090_arch-500m (7_HP2-UVR.pth) - tends to have the least vocal residues out of the VR models listed above, but at the cost of instrumentals not sounding so "full"

HighPrecison_4band_arch-124m_1 (I think not available in UVR, you'd need to install it manually) - can be a good companion if you only have VR models for ensemble

HP2-4BAND-3090_4band_arch-500m_2 (8_HP2-UVR) - the same situation; I think it rarely gives any better results than 500m_1 (if in any case at all), but it's good for a purely VR ensemble

_______VR algorithms of ensemble _______

by サナ(Hv#3868)

“np_min takes the highest value out, np_max does vice versa

it's also similar to min_mag and max_mag

So the min_mag is better for instrumental as you could remove artefacts.

comb_norm simply mixes and normalizes the tracks. I use this for acapella as you won't lose any data this way”
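
For reference, here is a loose sketch of the min_mag/max_mag idea described above: per time-frequency bin, keep the smaller (or larger) magnitude of two results and reuse one result's phase. The STFT settings and file names are arbitrary, and this is not the exact code used by UVR or x-minus:

import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

a, sr = sf.read("instrumental_model_a.wav")
b, sr2 = sf.read("instrumental_model_b.wav")
assert sr == sr2, "sample rates must match"
n = min(len(a), len(b))
a, b = a[:n].T, b[:n].T                       # (channels, samples) for scipy

_, _, A = stft(a, fs=sr, nperseg=4096)
_, _, B = stft(b, fs=sr, nperseg=4096)

mag = np.minimum(np.abs(A), np.abs(B))        # np.maximum would be the max_mag-style blend
blend = mag * np.exp(1j * np.angle(A))        # reuse the first result's phase
_, out = istft(blend, fs=sr, nperseg=4096)

sf.write("min_mag_blend.wav", out[..., :n].T, sr)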

Batch conversion on UVR Colab

There’s a “ConvertAll” batch option available in the Colab. You can search for “idle check” in this document to prevent disconnections on long Colab sessions, but at least if you get the slowest K80 GPU, the limit is currently 2 hours of constant work, and it simply terminates the session with a GPU limit error. The limit is enough for 5 tracks - 22 minutes with ~+/-3m17s overhead (HP_4BAND_3090_arch-124m/TTA/272ws/noppr/~2it/s) - so it's better to execute bigger operations in smaller instances using various accounts, and/or after 3-5 attempts you may also finally hit a better GPU than the K80.

To get a faster GPU, simply go to Runtime > Manage sessions > Close, then connect and execute the Colab until you get the faster Tesla T4 (up to 5 times). But be aware that 5 reconnections will reach the limit on your account, and you will need to change it. It’s easier to get a T4 without reaching the reconnection limit around 12:00 CET on working days. At 14:30 it was impossible to get a T4, but that probably depended on me having already used a T4 that day, since I received one immediately on another account.

For single files isolation instead of batch convert I think it took me 6-7 hours till the GPU limit was reached, and I processed 19 tracks using 272 ws in that session.

JFI: Even 5800X is slower than the slowest Colab GPU.

Shared UVR installation folder among various Google accounts

Since we can no longer use the old GDrive mounting method that allowed mounting the same drive across various Colab sessions - to avoid cluttering all of your accounts with the UVR installation, simply share a folder with editing privileges and create a shortcut from it to your new account. Sadly, the trick works for only one session at a time.

Firstly - sometimes you can have problems opening the shared folder on the proper account despite changing it after opening the link (it may leave you on the old account anyway). In that case, you need to manually insert the ID of the account where you want to open your link, e.g. https://drive.google.com/drive/u/9/folders/xxxxxxxx (where 9 is an example of your account ID, which shows right after you switch your account on the main Google Drive page).

After you open the shared UVR link on your desired account, you need to add the shortcut to your disk (arrow near the folder’s name) and, when it's done, create the “tracks” and “separated” folders on your own - so delete/rename the shared “tracks” and “separated” folders and create them manually, otherwise you will get an error during separation. If you still get an error anyway, refresh the file browser on the left of the Colab and/or retry running the separation up to three times until the error disappears (from now on it shows an error occasionally, and you need to retry from time to time and/or click the refresh button in the file manager view on the left, or even navigate manually to the tracks folder in order to refresh it) - Colab picks up changes like moving files and folders on your disk with a certain delay. Also be aware that this way of installing UVR will most likely prevent any further updates from the account with the shared UVR files, and on the account you shared the UVR files from, you need to repeat the folder operations if you use it on Colab again.

Comparing 500m_1 and arch_124m above, in some cases you may notice that the snare is louder in the former, but you can easily make up for it using mirroring instead of mirroring2. The downside of normal mirroring might be more pronounced vocal residues due to the higher output frequency.

Also, in 500m_1 more instruments are damaged or muffled, though the higher aggressiveness in the default setting of 500m_1 sometimes gives the impression that more vocal residues are cancelled.

(evaluation tests window size 272 vs 320 -

it’s much slower, doesn’t give noticeable difference on all sound systems, 272 got slightly worse score, but based on my personal experience I insist on using 272 anyway)

(evaluation tests aggressiveness 0.3 vs 0.275 -

doesn’t apply for all models - e.g. MGM - 0.09)

(evaluation tests TTA ON vs OFF -

in some cases, people disable it)

5a) (haven’t tested thoroughly these aggressiveness parameters yet)

HP2-4BAND-3090_4band_arch-500m_1.pth

w 272 ag 0.01, TTA, Mirroring

5c)

HP2-4BAND-3090_4band_arch-500m_1.pth

w 272, ag 0.0, TTA, Mirroring 2

Low or 0.0 aggressiveness leaves more noise; sometimes it makes the instrumental cleaner, if you don't mind more vocal bleeding (how well you can catch it also depends on your sound system, e.g. whether you listen on headphones or speakers).

But be aware that:

“A 272 window size in v5 isn't recommended [in all cases]. Because of the differing bands. In some cases it can make conversions slightly worse. 272 is better for single band models (v4 models) and even then the difference is tiny” Anjok (developer)

(so on some tracks it might be better to use 320 and not below 352, but personally I haven’t found such case yet)

DeepExtraction is very destructive, and I wouldn’t recommend it with current good models.

Karokee V2 model for UVR v5 (MDX arch)

(leaves backing vocals, 4band, not in Colab yet, but available on MVSep)

Model:

https://mega.nz/file/yJIBXKxR#10vw6lRJmHRe3CMnab2-w6gAk-Htk1kEhIp_qQGCG3Y

Be sure to update your scripts (if you use older command line version instead of GUI):

https://github.com/Anjok07/ultimatevocalremovergui/tree/v5-beta-cml

Run:

python inference.py -g 0 -m modelparams\4band_v2_sn.json -P models\karokee_4band_v2_sn.pth -i <input>

5d) Web version for UVR/MDX/Demucs (alternative, no window size parameter for better quality):

https://mvsep.com/

How to use this free online stem splitter with a variety of quality algorithms -

1. Put your audio file in.

2. Choose an algorithm. Usually, you really only need to choose one of two algorithms:

- The best algorithm for getting clean vocals/instrumental is selecting Ultimate Vocal Remover. Once you selected Ultimate Vocal Remover, select HP-4BAND-V2 as the "Model type".

- The best algorithm for getting clean separate instrument tracks, like bass, drums and other, is Demucs 3 Model B.

3. Hit Separate, and mvsep will load it for you. This means you can do everything yourself, no need to ask for other people's isolations if you can't find them.

6) VR 3 band model (gives better results on some songs like K Pop)

HP2-MAIN-MSB2-3BAND-3090

(I think the default aggressiveness was 0.3)

7) deprecated - in many cases a lot of bleeding (not every time), but in some cases it hurts some instruments less than all the above models (e.g. quiet claps).

MGM-v5-4Band-44100-BETA2/

(MGM-v5-4Band-44100-_arch-default-BETA2)

/BETA1

Agg 0.9, TTA, WS: 272

Sometimes I use Lossless-Cut to merge certain fragments from beta1 and beta2.

The model from point 4 surpasses an ensemble of both the BETA1 and BETA2 models.

(!) Interesting results (back in 2021)

“Whoever wants to know the HP1, HP2 plus v4 STACKED model method, I have a [...] group explaining it"

https://discord.gg/PHbVxrV4yS

Long story short - you need to ensemble HP1 and HP2 models, then on top of it, apply stacked model from v4.

Be aware that ensemble with postprocessing in Colab doesn't work.

Instruction:

1 Open this link

https://colab.research.google.com/drive/189nHyAUfHIfTAXbm15Aj1Onlog2qcCp0?usp=sharing

2. Proceed through all the steps

3. After mounting GDrive, upload your (ideally lossless) song to GDrive\MDX\tracks

4. Uncheck download as MP3, begin isolation step

5. Download the track from "separated" folder on your GDrive. You can use GDrive preview on the left.

1*. Alternatively if you have a paid account here, upload your song to: https://x-minus.pro/ai?hp

Make sure you have "mdx" selected for the AI Model option. Wait for it to finish processing.

2*. Set the download format to "wav" then click "DL Music." Store the resulting file in the ROOT of your UVR installation.

6. Use a combination of UVR models to remove the vocals. Experiment to see what works with what. Here's a good starting point:

HP2-4BAND-3090_4band_arch-500m_1.pth

HP2-4BAND-3090_4band_arch-500m_2.pth

HP_4BAND_3090_arch-124m.pth    

HP-4BAND-V2_arch-124m.pth

7. Store the resulting file in the ROOT of your UVR installation alongside your MDX result.

8. Finally, ensemble the two outputs together. cd into the root of your UVR installation and invoke spec_utils.py like so:

$ python lib/spec_utils.py -a crossover <input1> <input2>

the output will be stored in the ensembled folder

9* (optional). Ensemble the output from spec_utils with the output from UVR 4 stacked models using the same algorithm

Ensemble

spec_utils.py, which allows ensembling, is standalone and doesn't require UVR to be installed in order to work. It accepts any audio files.

mul - multiplies two spectrograms

crossover - mixes the high frequencies of one spectrogram with the low frequencies of another spectrogram

Default usage from aufr33:

python lib/spec_utils.py -o inst_co -a crossover UVR_inst.wav MDX_inst.wav

https://github.com/Anjok07/ultimatevocalremovergui/blob/v5-beta-cml/lib/spec_utils.py
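
If you have many tracks to ensemble this way, a small hypothetical wrapper can loop the same spec_utils.py call over matching files from two folders; the folder names, the matching by file name and the output naming are assumptions, not part of UVR:

import subprocess
from pathlib import Path

uvr_dir = Path("uvr_results")   # assumed folder of UVR instrumentals
mdx_dir = Path("mdx_results")   # assumed folder of MDX instrumentals with the same file names

for uvr_file in sorted(uvr_dir.glob("*.wav")):
    mdx_file = mdx_dir / uvr_file.name
    if not mdx_file.exists():
        continue                 # skip tracks that only one model produced
    subprocess.run([
        "python", "lib/spec_utils.py",
        "-o", f"inst_co_{uvr_file.stem}",
        "-a", "crossover",
        str(uvr_file), str(mdx_file),
    ], check=True)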

Custom UVR Piano Model:

https://drive.google.com/file/d/1_GEEhvZj1qyIod1d1MX2lM6u65CTpbml/view?usp=s

______________

VR Colab troubleshooting

If you somehow can't mount GDrive in the VR Colab because you have errors or your separation fails:

- Use the same account for Colab and for mounting GDrive (or you’ll get an error)

- If you’re on mobile, you might be unable to use Colab without PC mode checked in your browser settings (although now it works in Chrome Android)

- In some cases, you won’t be able to write “Y” in empty box to continue on first mounting on some Google account. In that case, e.g. change browser to Chrome and check PC mode.

- In some cases, you won’t be able to paste text from clipboard into Colab if necessary, when being in PC mode on Android, if some opened on-screen applications will prevent the access - you’ll need to close them, or use mobile mode (PC mode unchecked)

- (probably fixed) If you started having problems with logging into Colabs.

> Actually, it doesn't show that you're logged in while the button says to log in.

So, it should respect redirections in Colab links to specific accounts, but if you're mounting to GDrive, and it fails with Colab error, simply click the button in the top right corner to log in. It will. Just won't show that you did that. Then Colab will start working.

- Don't use postprocess in ensemble, or you'll encounter error

- You can try checking force update in case of errors

- Go to runtime>manage sessions>terminate session and then try again with Trigger force update checked (ForceUpdate may not work before terminating session after Colab was launched already).

- Make sure you got 4.5GB free space on GDrive and mounting method is set to "new". You can try out "old" but it shouldn't work.

Try out a few times.

- If still nothing, delete VocalRemover5-COLAB_arch folder from GDrive, and retry without Trigger update.

On a fresh installation, make sure you still have 4.5GB of space on GDrive (empty the recycle bin - a successful automatic models installation will leave files there as well, so you can easily run out of space on a cluttered GDrive)

- If still nothing (e.g. when models can’t be found on separation attempt), then download that thing, and extract that folder to the root (main) directory of Gdrive, so it looks like following: Gdrive\VocalRemover5-COLAB_arch and files are inside, like in the following link:

https://drive.google.com/drive/folders/1UnjwPlX1uc9yrqE-L64ofJ5EP_a8X407?usp=sharing

and then try again running the Colab:

https://colab.research.google.com/drive/16Q44VBJiIrXOgTINztVDVeb0XKhLKHwl

- if you cannot connect with GPU anymore and/or you exceeded your GPU limit

try to log into another Google account.

- Try not to exceed 1 hour when processing one file or one batch of files, otherwise you'll get disconnected.

- Always close the environment in Environment before you close the tab with the Colab.

That way, you will be able to connect to the Colab again after some time, even if you previously connected to the runtime and stopped using it. Not shutting down the runtime before exiting makes it wait idle and hit the timeout; then a limit-reached error will appear when you try to connect to Colab again if it wasn't closed before. You'll then need to wait up to 24h or switch the Colab account, while using the same Google account for Colab as in the mounting cell (otherwise, it will end up with an error when you use a different account for Colab than for GDrive mounting).

- New layer models may not work with 272 window size causing following error:

“raise ValueError('h1_shape[3] must be greater than h2_shape[3]')

ValueError: h1_shape[3] must be greater than h2_shape[3]”

- (fixed) Sometimes on running mounting cell you can have short “~from Google Colab error” on startup. It will happen if you didn’t log into any account in the top right corner of the Colab. Sometimes it will show a blue “log in” button, but actually it’s logged in, and Colab will work.

- A network error occurred, and the request could not be completed.

GapiError: A network error occurred and the request could not be completed.

In order to fix this error in Colabs, go to the hosts file at c:\Windows\System32\Drivers\etc\hosts and check whether you have any lines looking like:

127.0.0.1 clients.google.com

127.0.0.1 clients1.google.com etc.

It can be introduced by RipX Pro DAW.

- These are all the lines which fix problems in our Colabs since the beginning of the year, when new versions of these dependencies became incompatible (but usually the linked Colab is a fork, where noted, already up to date with these necessary fixes):

!pip install soundfile==0.11.0

!pip install librosa==0.9.1

!pip install torch==1.13.1

!pip install yt-dlp==2022.11.11

!pip install git+https://github.com/ytdl-org/ytdl-nightly.git@2023.08.07

Later, in February 2024, we needed to switch to the older Python 3.8 in order to make numpy work correctly with the deprecated functions used. More details and the lines used are below the Similarity Extractor section (all those fixes should already be applied in the latest fixed Colab at the top).

MDX-Net trained by UVR team models (aufr33 & Anjok)

First vocal models trained by UVR for MDX-Net arch:

(9.703 model is UVR-MDX-NET 1, UVR-MDX-NET 2 is UVR_MDXNET_2_9682, NET 3 is 9662, all trained at 14.7kHz)

(instrumental based on processed phase inversion)

List of all (newer) available MDX models at the very top.

I think main was 438 in UVR 5 GUI at some point. At least now it's simply main_438 (if it wasn't from the beginning, but it was easy to confuse it with simply main model or even inst main)

(MDX is the way to go now over VR.) Generally, use MDX when the results achieved with the VR architecture are not satisfactory - e.g. too much vocal bleeding (e.g. in deep and low voices) or damaged instruments. If you only want an a cappella, it's currently the best solution. Actually, it's the best in most cases now.

MDX-UVR models are also great for cleaning artifacts from inverts (e.g. mixture (regular track) minus official instrumental or acappella).

(outdated) 9.682 might be better for instrumentals and inversion in some cases, while 9.7 for vocals, but better check already also newer models like 464 from KoD update (should be better in most cases) and also check Kim Model in GUI.

Generally on MVSEP's multisong dataset, these models received different SDR than on MDX21 dataset back in the days.

On MVSEP there’s the 9.7 (NET 1) model, and it doesn't have any cutoff above the training frequency for inverted instrumentals like the GUI currently has. The (new) model is the vocal 423 model, possibly with Demucs 2 enabled like in the Colab, but it doesn’t have the specific jaggy spectrum above the MDX training frequency which is typical of inverted vocal 4XX models from that period, including Kim’s model.

Non-onnx version of voc_ft model in (pth) - 20x faster on MPS devices

https://cdn.discordapp.com/attachments/887455924845944873/1204148727820984370/UVR-MDX-NET-Voc_FT.pth

code https://drive.google.com/file/d/1aSe0bwgIWhR7vvF1aoHQlCHpj39Kd-YK/view?usp=sharing

(the old) Google Colab by HV

https://colab.research.google.com/drive/189nHyAUfHIfTAXbm15Aj1Onlog2qcCp0?usp=sharing

Add a separate cell as follows, or else it won’t work:

!pip install torch==1.13.1

If you're still getting errors, delete the whole MDX_Colab folder, terminate your session, make a clean installation afterwards, and don't forget to execute this torch line after mounting (errors might happen in case you manually replaced models.py with one of the ones below and didn't restore the correct old one).

(The Colab to use MDX easily in Google’s cloud. Newer models not included, and it gives error if you add other models manually - custom models.py necessary, only 9.7 [NET 1-3] and karaoke models included above)

(In case of “RuntimeError: Error opening 'separated/(trackname)/vocals.wav': System error.” simply retry)

More MDX models explained in UVR section in the beginning of the document since they're a part of UVR GUI now.

Optionally, 423 model can be downloaded separately here (just in case, it’s main). It is on MVSEP as well.

Upd. by KoD & DtN & Crusty Crab & jarredou, HV (12.06.23)

____________________________________________________________________

The newest MDX Colabs - now with automatic model downloading (no more manual GDrive model installation). Consider everything in the divided section further below unnecessary.

https://colab.research.google.com/github/NaJeongMo/Colab-for-MDX_B/blob/main/MDX-Net_Colab.ipynb

(new, by HV 2023)

https://colab.research.google.com/github/jarredou/Colab-for-MDX_B/blob/main/MDX_Colab.ipynb (Beta. Might lack HQ_3 and voc_ft. It supports batch processing. Works with a folder as input and will process all files in it.

In "tracks_path" must be a folder containing (only) audio files (not the direct link to a file).

But the below might still work.)

https://colab.research.google.com/github/kae0-0/Colab-for-MDX_B/blob/main/MDX_Colab.ipynb (stable, lacks voc_ft batch process + also manual parameters loading per model like in the two above)

https://colab.research.google.com/drive/1CO3KRvcFc1EuRh7YJea6DtMM6Tj8NHoB?usp=sharing (older revision with also auto models downloader, but with manual n_fft dim_f dim_t parameters setting like HV added)

____________________________________________________________________

(old) May update

New MDX Colab with separate inputs for 3 model parameters, so you don’t need to change models.py every time you switch to another model. Settings for all models are listed in the Colab. From now on, it uses a reworked main.py and models.py downloaded automatically (made by jarredou). Don’t replace models.py with the ones from the packages below from now on. A denoiser has also been optionally added.

___________________________

(older Colab instruction)

To use more recent MDX-UVR models in Google Colab:

  1. Use and install this Colab (new) to GDrive at least once, run all the cells, nothing more - if you used MDX HV Colab (the one in the section above) on your specific Google Drive account before, ignore this step.
  2. Copy these files (inst1-3, 427) to the onnx folder in MDX_Colab on your GDrive: https://drive.google.com/drive/folders/13SsV7b_kC6SqkICeX5wKhx-Z05uC8dLl
  3. Overwrite models.py in the MDX_Colab folder with the one provided below (not for the new Colab)

(compatible with inst1-3, 427, Kim vocal and other)

https://cdn.discordapp.com/attachments/945913897033023559/1036947933536473159/models.py (completely different one with self.n_fft set to 7680 - incompatible with NET-1/9.x and 496 models)

  4. Use this notebook with added models

(the same as the link in point 1):

https://colab.research.google.com/drive/1zx7DQM-W9i7MJuEu6VTYz1xRG6lKRKVL?usp=sharing

  5. For Kim’s vocal model (poor instrumentals on Colab and no cutoff after inversion), copy vocals.onnx

(use the same models.py from point 3): https://drive.google.com/drive/folders/1exdP1CkpYHUuKsaz-gApS-0O1EtB0S82?usp=sharing

to onnx subfolder named "MDX-UVR-Kim Vocal Model (old)"

  6. For the 496 inst model (inst main/MDX 2.1), go to the link below and put the model into the onnx subfolder named “MDX-UVR Ins Model 496 - inst main-MDX 2.1”, but you must replace models.py with the one attached in the link in your GDrive (it’s from the OG HV Colab), and it is incompatible with the rest of the models in this new Colab - make a copy/rename the previous models.py in order to go back to it

(496 model is not as effective as 464/inst3 leaving more vocal residues in some cases, but might work well in specific scenarios). 496 is the only model requiring the old models.py from 9.7/NET1-3 models (attached below). https://drive.google.com/drive/folders/1iI_Zvc506xUv_58_GPHfVKpxmCIDfGhx?usp=share_link (if you place model in the wrong place, you’ll get missing vocals.onnx error [e.g. wrong folder structure or name] or “Got invalid dimensions for input: input for the following indices index: 2 Got: 3072 Expected: 2048.” [when having wrong models.py])

  7. Demucs turned on works only with the default mixing algorithm and vocal models (or else you’ll get “ValueError: operands could not be broadcast together with shapes (8886272,2) (8886528,2)”). Also, chunks might have to be decreased.
  8. Be aware that after following these steps, if you launch the old HV Colab above, it may overwrite models.py with the old one from point 6, which is compatible only with inst main/496 or full band models, so you'll need to repeat step 3 or 10 in case of an invalid dimensions error or a cutoff of a full band model.
  9. In case of a runtime error, to use the Kim model decrease chunks from 55 to 50, and with Demucs on, decrease it to 40 (or respectively even lower)
  10. (beta) Full band beta 292 model (with a new models.py file, only working for that model, with self.n_fft changed to 6144).

Go to the link below, copy model file to onnx subfolder called “MDX-UVR Ins Model Full Band 292” as in the link, and replace models.py (ideally make a backup/rename the old one in order to use previous models)

Thanks for help to Kim

https://drive.google.com/drive/folders/1CTJ6ctldr_avwudua1qJJMPAd7OrS2yO?usp=sharing

  11. (beta) Full band beta 403 model (with the same modified models.py for these two models)

Copy model file to:

Gdrive\MDX_Colab\onnx\MDX-UVR Ins Model Full Band 403\” as in the link below, and replace models.py in Gdrive\MDX_Colab

https://drive.google.com/drive/folders/1UXPxQMVAocpyDVb3agXu0Ho_vqFowHpA?usp=sharing

  12. (final) Full band 450/HQ_1 model (with the same modified models.py for the full band models)

Copy model file to:

Gdrive\MDX_Colab\onnx\MDX-UVR Ins Model Full Band 450 (HQ_1)\” as in the link below, and replace models.py in Gdrive\MDX_Colab (if you didn’t already for full band models)

https://drive.google.com/drive/folders/126ErYgKw7DwCl07WprAXWPD_uX6hUz-e?usp=sharing

  13. From now on, you’re forced to separately run the newly added torch cell to fix PyTorch issues
  14. Newer full band 498/HQ_2 model (with the same modified models.py for the full band models)

Copy model file to:

Gdrive\MDX_Colab\onnx\MDX-UVR Ins Model Full Band 498 (HQ_2)\” as in the link below, and replace models.py in Gdrive\MDX_Colab (if you didn’t already for full band models)

https://drive.google.com/drive/folders/1O5b-uBbRTn_A9B2QkefklCT41YR9voMq?usp=sharing

  15. For full band models, use only the modified models.py attached above, or you’ll get a cutoff at 14.7kHz instead of 22kHz in spectrograms while using the 427 models.py file.
  16. For the Kim FT other instrumental model with a cutoff but the highest SDR (even higher than inst3)

Copy both (vocals and other) model files to:

Gdrive\MDX_Colab\onnx\Kim ft other instrumental model\” as in the link below, and replace models.py in Gdrive\MDX_Colab (if you didn’t already for full band models)

https://drive.google.com/drive/folders/1v2Hy4AgFOJ9KysebGuOgn0rIveu510j6?usp=sharing (it will give only 1 stem output, models duplicated fixes errors in Colab, models.py is from inst3 model)

  17. If you use models.py from a fullband model, it will output fullband for the ft other model, but with much more vocal residues (it still might be even better in some busy mix parts than VR models, while having less vocal residues only in those busy parts like choruses) - definitely use min_mag here.
  18. To fix the following error, make sure both vocals and invert vocals are always checked:

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Intel MKL FATAL ERROR: Cannot load /usr/local/lib/python3.9/dist-packages/torch/lib/libtorch_cpu.so.

Above error can also mean you need to terminate your session and start over. It randomly happens after using the Colab:

  19. I've reverted the old "Karokee" and "Karokee_AGGR" models to use with the oldest HV models.py file, but these are old models (maybe they will do the trick, though).
  20. ModuleNotFoundError: No module named 'models'

Sometimes switching models.py doesn’t work correctly (especially while working on a previously shared Colab folder with editing privileges) - in that case, check in Colab’s file manager whether models.py is actually present after you’ve made the change on GDrive. If not, rename it back to models.py (it might have been renamed to something else).

  21. Collection of all three models.py files for all models, for your convenience:

https://drive.google.com/drive/folders/1J35h9RYhPFk8dH-vShSW_AUharXY1YsN?usp=sharing

  22. Main_406 vocal model

https://mega.nz/file/dcREzKTR#PYKk3s1NPicC3mBBYH8ejC2rK_Im3sAj0p9xcOi1cpE

        "compensate": 1.075,

        "mdx_dim_f_set": 3072,

        "mdx_dim_t_set": 8,

        "mdx_n_fft_scale_set": 7680,

   

Models included here are only: baseline, instrumental models 415 (inst_1), 418 (inst_2), 464 (inst_3) trained at 17.7kHz, vocal model 427, Kim’s vocal model (old) (an instrumental should be automatically made by the inversion option, but it’s not very good at it), and the 292 and 403 full band models. If you want to use the older 9.7 models, use the old HV Colab above.

464/inst3 should be better than the previous 9.x models for instrumentals and vocals in most cases, but depending on the song, 418 can achieve better results in up to half of the cases, while full band 403 might give better results than inst3/464 in half of the cases.

Settings

max_mag is for vocals

min_mag for instrumentals

default

(deleted from the new HV Colab, still in Kae Colab above)

But "min mag solve some unwanted vocal soundings, but instrumental [is] more muffled and less detailed."

Also check out “default” setting (whatever is that, compare checksums if not one of these).

Chunks

As low as possible, or disabled.

Equivalent of min_mag in UVR is min_spec.

Be aware that UVR5, as opposed to the MDX Google Colab, applies a cutoff to the inverted output matching the training frequency, e.g. 17.7kHz for the inst 1 and 3 models. It was done to avoid some noise and vocal leftovers. In the Colab you might have to apply it manually.
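If you want to reproduce that cutoff manually on a Colab result, here is a minimal sketch (assuming scipy and soundfile are installed; the file names and the 17.7kHz value are placeholders matching the inst 1/3 example above):

```
# Minimal sketch: low-pass the inverted instrumental so it matches the model's
# training bandwidth (e.g. ~17.7kHz for inst 1/3). File names are placeholders.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("inverted_instrumental.wav")      # (samples, channels)
cutoff_hz = 17700                                      # match the model's training frequency

# 8th-order Butterworth low-pass, zero-phase so timing isn't shifted
sos = butter(8, cutoff_hz, btype="low", fs=sr, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)

sf.write("inverted_instrumental_cutoff.wav", filtered, sr, subtype="FLOAT")
```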

Also, you can uncomment visibility of compensation value in Colab, and change it to e.g. 1.08 to experiment.

Compensation value for 464 MDX-UVR inst. model is 1.0568175092136585

Default 1.03597672895 is for 9.7 model, and it also does the trick with at least Kim (old) model in GUI (where 1.08 had worse SDR).

Or check + 3.07 in DAW (it worked on Karokee model).
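For illustration, here is a rough sketch of how such a compensation value can be applied when inverting a vocal output against the original mixture (file names are placeholders; the exact way UVR or the Colab applies it internally may differ):

```
# Sketch of inversion with volume compensation: scale the model's vocal output
# by the compensation value before subtracting it from the mixture.
import soundfile as sf

mixture, sr = sf.read("mixture.wav")
vocals, _ = sf.read("vocals_from_model.wav")

compensate = 1.0568175092136585        # model-dependent, see the values above
n = min(len(mixture), len(vocals))     # guard against slightly different lengths

instrumental = mixture[:n] - vocals[:n] * compensate
sf.write("instrumental_inverted.wav", instrumental, sr, subtype="FLOAT")
```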

In the Colab above, I also enabled visibility of the max_mag (for vocals) and min_mag (for instrumentals) settings (mixing_algoritm).

Also, if you want to use Demucs option (ensemble) in Kae Colab, it uses stock Demucs 2, which in UVR5 was rewritten to use Demucs-UVR models with Demucs 3 or even currently better Demucs 4.

According to MVSEP SDR measurements, for ensemble Max Spec/Min Spec was better than Min Spec/Max Spec, but Avg/Avg was still better than these both.

Also for ensemble, Avg/Avg is better compared to e.g. Max Spec/Max Spec - it's 10.84 v 10.56 SDR in other result.

How the denoiser works

It's not frequency based, it processes “the audio in 2 passes, one pass with inverted phase, then after processing the phase is restored on that pass, and both passes mixed together with gain * 0.5. So only the MDX noise is phase cancelling itself.”

Or the other way round:

“it's only processing the input 2 times, one time normal and one time phase inverted, then phased restored after separation, so when both passes are mixed back together only the noise in attenuated. There's no other processing involved”

Denoise serves to fix so called MDX noise existing in all inst/voc MDX-NET (v2) models.
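As a rough sketch of that trick (separate() is a hypothetical stand-in for the actual MDX inference call, not real UVR/Colab code):

```
# The MDX denoise trick described above: run the separation twice, once with
# the input phase-inverted, restore the phase on that pass, then mix both
# passes with gain * 0.5 so the model's own noise phase-cancels.
import numpy as np

def denoised_separation(mixture: np.ndarray, separate) -> np.ndarray:
    pass_normal = separate(mixture)
    pass_inverted = -separate(-mixture)   # invert input, then restore the output's phase
    return 0.5 * (pass_normal + pass_inverted)
```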

______

Web version (32 bit float WAV as output for instrumentals; just use MDX-B for single MDX-Net models).

It was 9.682 MDX-UVR model in 2021, but in the end of 2022 it's probably inst 1 judging by SDR (not sure, as results are not exactly the same), then more models were added (e.g. HQ_3):

https://mvsep.com/

Web version (paid for MDX, lossless):

https://x-minus.pro/ 

In kae Colab, you can keep the option Demucs: off (ONNX only), it may provide better results in some cases even with the old MDX narrowband models (non-HQ).

In Colab you can change chunks to 10 if your track is below 5:00 minutes. It will take a bit more time, but the quality will be a bit cleaner, but more vocal residues can kick in (esp. short sudden ones).

Be aware that MDX Colabs for single models have 16 bit output.

Also, the noise cancellation implementation for MDX models in the kae and HV Colabs can differ a bit, plus there is also a separate denoise method available as a separate model.

Code for denoise method in HV Colab here.

As for any other settings, just use defaults since they're the best and updated.

Just for a vocal it’s one of the best free solutions on the market, very close to the result of paid and (partly) closed Audioshake service (#1 AI in a Sony separation contest; SDRs are from the contest evaluation based on private dataset). Very effective, high quality instrumental isolation AI and custom model (but the old models are trained at 14.7 kHz [NET-X a.k.a. 9.x] in comparison to VR models, and 17.7kHz in newer models like inst X and kim inst). 

In most cases MDX-UVR inverted models give less bleeding than VR (especially on bassy voices), while occasionally the result can be worse compared to VR above, especially in terms of high-end frequency quality, but in general, MDX with the UVR team models behaves the best for vocals and instrumentals.

Even instrumental from inverted vocals from vocal models gets less impaired than in VR, since vocal filtering is less aggressive, but with even more bleeding in some cases. Depends on a song.

You can support the creators of UVR on https://www.patreon.com/uvr or https://boosty.to/uvr, and the newest MDX models are also available on https://x-minus.pro/ as an online version of MDX (with exclusive paid models).

At least paid x-minus subscription allows you to use MDX HQ_2 498 (or HQ_3 already) instrumental model and for VR arch - 2_HP-UVR (HP-4BAND-V2_arch-124m), and Demucs 6s on their website. Feel free to listen and download lots of uploaded instrumentals on x-minus already. Dozens of instrumentals available.

Outdated

Alternatively you can experiment with 9662 model and ensemble it with the latest UVR 5's 4 band V2 with -a min_mag as Anjok suggested (but it was when new models weren't released yet).

For remote use, I only know about an old Colab which ensembles any two audio files, but it uses the old algorithm if I'm not mistaken, so it is not as good (better use the ensemble Colab linked at the very top of the document):

https://colab.research.google.com/drive/1eK4h-13SmbjwYPecW2-PdMoEbJcpqzDt?usp=sharing

_____

Note

Don’t disable invert_vocals in Colab even if you only need vocal instead of instrumental, otherwise the Colab will end up with error.

MDX noise

There is a noise when using all MDX-UVR inst/vocal models, and it’s model dependent (iirc 4-stem models don’t have it). It's fixed in Colabs using the denoiser: "however by using my method, conversions will be 2x slower as it needs to predict twice.

I see no quality degradation at all, and I can't believe it actually worked rofl" -HV

Also, UVR 5 GUI has the same noise filtering implemented (if not better, also with alternative model).

Current MDX Colab has normalization feature “normalizes all input at first and then changes the wave peak back to original. This makes the separation process better, also less noise. IDK if you guys have tried this, but if you split a quiet track, and normalize it after MDX inference the noise sounds more audible than normalizing it and changing the peak back to original.”

If you want to experiment with MDX sound, the Colab from before that change is below:

https://colab.research.google.com/drive/1EXlh--o34-rzAFNEKn8dAkqYqBvhVDsH?usp=sharing (might no longer work due to changes made by Google to Colab environment, the last maintained are kae and HV (new) Colabs)

Furthermore, you can also try manually mixing the vocal with the original track using phase inversion and adding a specific gain to the vocal track (+1.03597672895 or +3.07) for the 9.7 model (or other values for other models), using both this and the below Colab, and save the result as 32 bit float (this might have more bleeding, but it uses 32 bit while chunking):

https://colab.research.google.com/drive/1R32s9M50tn_TRUGIkfnjNPYdbUvQOcfh?usp=sharing#scrollTo=lkTLtOvyBuxc

(for e.g. the best compensation value for 464 MDX-UVR inst. model is 1.0568175092136585

and it's not constant)

Also be aware that MVSEP uses 32 bit for MDX-UVR models for ready inversion of any model too.

If you're looking to eliminate the noise from MDX-UVR instrumentals, the method described in Zero Shot below might also work.

"I just run the mdx vocals thru UVR to remove any remaining buzz noises and synths, it works great so far" (probably meant one of VR models)

Average track in Colab is being processed in 1:00-1:30 minute using slower Tesla K80 (much faster than even UVR’s HP-4BAND-V2_arch-124m model).

If you want to get rid of some artifacts, you can further process output vocal track from MDX through Demucs 3.

Options in the old HV MDX Colab/or kae fork Colab (from the very top)

Demucs model

When it's enabled, it sounds better to me when used with the old narrowband 9.X and newer vocal models, as the Demucs 2 model is fullband, but opinions on the superiority of this option are divided, and the MVSEP dev made some SDR calculations where it achieved worse results with Demucs enabled. But be aware that inverted results from narrowband models are still fullband despite the narrowband training frequency, as there’s no cutoff matching present in the Colab (it’s implemented in the UVR GUI as a separate option). Using such a cutoff matching the training frequency (which can be observed in the non-inverted stem) might lead to less noise and fewer residues in the results. The Demucs model will work correctly only with vocal models in the Colabs (we didn’t have any MDX instrumental models back then, so the naming scheme is reversed for these models; hence the Demucs model with an instrumental model produces distorted sound - it mixes vocals with the instrumental in a weird way).

“The --shifts=SHIFTS performs multiple predictions with random shifts (a.k.a. the shift trick) of the input and average them. This makes prediction SHIFTS times slower but improves the accuracy of Demucs by 0.2 points of SDR. It has limited impact on Conv-Tasnet as the model is by nature almost time equivariant. The value of 10 was used on the original paper, although 5 yields mostly the same gain. It is deactivated by default, but it does make vocals a bit smoother.

The --overlap option controls the amount of overlap between prediction windows (for Demucs one window is 10 seconds). Default is 0.25 (i.e. 25%) which is probably fine.”

You can even try out 0.1, but for Demucs 4 it decreases SDR in ensemble if you’re trying to separate a track containing vocals. If it’s instrumental, then 0.1 is the best (e.g. for drums).

(outdated/for offline use/added to Colab)

Here's the new MDX-B Karokee model! https://mega.nz/file/iZgiURwL#jDKiAkGyG1Ru6sn21MkIwF90C-fGD0o-Ws58Mn3O7y8

The archive contains two versions: normal and aggressive. The second removes the lead vocals more. The model was trained using a dataset that I completely created from scratch. There are 610 songs in total. We ask that you please credit us if you decide to use these models in your projects (Anjok, aufr33).

__________________________________________________________________

Demucs 3 

for 4 stems

(SDR 7.7 for 4 stems, it’s better than Spleeter (which is SDR 6.5-7), or better than MDX 4 stem. In most cases, it’s even better than Audioshake - at least on tracks without leading guitar)

Accompanied by MDX-UVR 9.7 vocal model, it gives very good 4 stem separation results

(For Demucs 4 a.k.a "htdemucs" check below)

https://colab.research.google.com/drive/1yyEe0m8t5b3i9FQkCl_iy6c9maF2brGx?usp=sharing (by txmutt), alternatively with float32 here

Or https://huggingface.co/spaces/akhaliq/demucs

Or https://mvsep.com/

Pick up from the list Demucs Model B there.

You can export result files in MP3 320kbps, WAV and FLAC. File limit is 100MB and has a 10 minute audio length limit.

To use Demucs 3 locally: https://discord.com/channels/708579735583588363/777727772008251433/909145349426384917

Currently, all the code uses now main branch which is Demucs 4 (previously HT) but these Colabs use old mdx_extra model.

Demucs 3 UVR models (2-stem) are only available on MVSEP.com or in UVR5 GUI (nice results in cases when you suffer vocal bleeding in regular UVR5, GSEP, or MDX 9.7 - model 1 is less aggressive, model 2 more destructive, and the model bag has the most bleeding of the three).

In Colab, judging by quality of drums track, I prefer using overlap 0.1 (only for instrumentals), but default set by the author is 0.25 and is better for sound of instrumental as a whole.

But it still provides decent results with instrumentals.

Also, HV had overall better separation quality results using shifts=10, but it increases separation time (it's also reflected by MVSEP's SDR calculations). Later we found out it can be further increased to 20.

Also, I have a report that you may get better results in Demucs using previously separated instrumental from e.g. UVR.

Anjok’s tip for better instrumentals: “I recommend removing the drums with the Demucs, then removing the vocals and then mixing the drums back in”. Yields much better results than simple ensemble.

It works best in cases when drums get muffled after isolation, e.g. in hip-hop. You need to ensure that the tracks are aligned correctly. E.g. if you isolate the drumless UVR track, also isolate the regular track to more easily align the drumless UVR track with the drums track from Demucs, otherwise it will be hard to find the same peaks. Then simply align the drumless UVR track the same way the regular track is aligned and mute/delete the UVR regular (instrumental) track.

Be aware! This is not a universal solution for the best isolation in every case. E.g. in tracks with a busy mix like Eminem - Almost Famous, the guitar in the background can get impaired, and so can the drums (UVR tends to impair guitars in general, but on the drumless track it was even more prevalent - in that case the normal UVR separation did a better job).
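A minimal sketch of the final mixdown step of the tip above (file names are placeholders, both files are assumed to share the same sample rate, and offset_samples is whatever alignment you found in your DAW, often 0):

```
# Mix the Demucs drums stem back into the vocal-removed (drumless) instrumental.
import soundfile as sf

drums, sr = sf.read("demucs_drums.wav")
drumless_inst, _ = sf.read("uvr_instrumental_no_drums.wav")

offset_samples = 0                       # shift if the tracks don't start together
n = min(len(drums), len(drumless_inst) - offset_samples)
mix = drums[:n] + drumless_inst[offset_samples:offset_samples + n]

sf.write("instrumental_with_demucs_drums.wav", mix, sr, subtype="FLOAT")
```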

Also, if you slow down the input file, it may allow you to separate more elements in the “other” stem.

It works either when you need an improvement in such instruments like snaps, human claps, etc.

Normally, the instrumental sounds choppy when you revert it to normal speed. The trick is - "do it in Audacity by changing sample rate of a track, and track only (track menu > rate), it won't resample, so there won't be any loss of quality, just remember to calculate your numbers

44100 > 33075 > 58800

48000 > 36000 >  64000

(both would result in x 0.75 speed)

etc.".

Also, there's dithering enabled in Audacity by default. Might be worth disabling it in some cases. Maybe not, but still, worth trying out. There should be less noise.

BTW. If you have some remains of drums in acapella using UVR or MDX, simply use Demucs, and invert drums track.

“The output will be a wave file encoded as int16. You can save as float32 wav files with --float32, or 24 bits integer wav with --int24” it doesn’t seem to work in Colab.

Demucs 4 (+ Colab) (4, 6 stem)

4 stem, SDR 9 for vocals on MUSDB HQ test, and SDR 9 for mixdowned instrumentals (5, 6 stem - experimental piano [bad] and guitar)

https://github.com/facebookresearch/demucs (all these models are available in UVR 5 GUI or MVSEP [just x-minus doesn’t have the ft model, at least for free users; it was the mmi model at some point, but then got replaced by MDX-B which “turned out to be not only higher quality, but also faster”])

Google Colab (all 4-6 stem models available, 16-32 bit output)

https://colab.research.google.com/drive/117SWWC0k9N2MBj7biagHjkRZpmd_ozu1

or Colab with upload script without Google Drive necessity:

https://colab.research.google.com/drive/1dC9nVxk3V_VPjUADsnFu8EiT-xnU1tGH?usp=sharing

or Colab with batch processing, but only mp3 output and no parameters beside model choice

https://colab.research.google.com/drive/15IscSKj8u6OrooR-B5GHxIvKE5YXyG_5?usp=sharing

"I'd recommend using the “htdemucs_ft” model over normal “htdemucs” since IMHO it's a bit better"

Also, SDR measurements confirm that. 6s might have more vocal residues than both, but will be a good choice in some cases (possibly songs with guitar).

All the best stock models:
htdemucs_ft (f7e0c4bc, d12395a8, 92cfc3b6, 04573f0d [drums, bass, other, vocals])

 - “fine-tuned version of htdemucs, separation will take 4 times more time but might be a bit better. Same training set as htdemucs”.

Can be obtained with UVR5 in download center (04573f0d-f3cf25b2.th, 04573f0d-f3cf25b2.th, d12395a8-e57c48e6.th, f7e0c4bc-ba3fe64a.th; not in order)

“htdemucs - first version of Hybrid Transformer Demucs. Trained on MusDB + 800 songs.”

Default Demucs model in e.g. UVR5 (955717e8-8726e21a.th)

“htdemucs_mmi = Hybrid Demucs v3, retrained on MusDB + 800 songs

htdemucs_6s =  6 sources version of htdemucs, with piano and guitar being added as sources. Note that the piano source is not working great at the moment.”

mdx_extra: The best Demucs 3 model from MDX 2021 challenge. Trained with extra training data (including MusDB test set), ranked 2nd on the track B of the MDX 2021 challenge.

mdx_extra_q: a bit worse quantized version

Be aware that the UVR team and ZFTurbo [available on MVSEP and GitHub] also trained their own Demucs models (instrumental and vocal ones, respectively), but there are some issues with the ZFTurbo model when using inference other than the one provided on his GitHub (so it’s so far not compatible with e.g. UVR).

To use the best Demucs 4 model in the official Colab (the 2nd link), change the model name to e.g. “htdemucs_ft”. It can behave better than 6 stems if you don’t need the extra stems.

In other cases, extra stems will sound better in the mix, although using 6s model, vocal residues are usually louder than in ft model (but that might depend on a song or genre).

Despite the fact that 6s is an electric guitar model, it can also pick up acoustic guitar very well in some songs.

The problem with 6s models is that “when a song has a piano because not only the piano model is not the best, but it also makes the sound itself worse

rather than just very filtered piano, it sounds like distorted filtered piano”

Sometimes GSEP can be “still better because each stem has its dedicated model", but it depends on the song (the other stem in GSEP can be better more frequently, but now the MDX23 jarredou fork or Ensemble models on MVSEP return good other stems as well)

GSEP, instead of inverting the whole result among stems like Demucs, occasionally won’t preserve all the instruments.

"htdemucs (demucs 4) comes a bit closer [vs Gsep], most of the time the bass is better and there are few instances where demucs picks up drums better"

“From my experience and testing: If you decide to process an isolated track through Demucs, it has no trouble identifying what is bass guitar and what isn't bass guitar [does not matter if it's finger/pick/slap, it works on all of them for me, except distorted wah-wah bass]. The leftover noise [the part's that demucs did not pick up, and left it in the (No Bass) stem] is usually lower than minus 40 - 45 DB, and it's either noise, or hisses usually.

The problem comes when there are instruments besides the bass guitar that are playing beside it [a.k.a. music], since these are separation models, not identification models. It starts having trouble grabbing all the upper harmonics [which is the multiple of the root note frequency], and the transients, potentially starts mis-detecting, or in extreme cases, it does not pick up the bass at all.”

“When used with "--shifts" > 0, demucs gives slightly different results each time you use it, that can also explain some little score differences”

https://github.com/facebookresearch/demucs/issues/381#issuecomment-1262848601

Initially, Shifts 10 was considered as max, but it turned out 20 can be used.

Overlap 0.75 is max before it gets very slow (and 0.95 when it becomes overkill).

While we also thought overlap 0.99 is max, it turned out you can use 0.99999 in UVR, and 0.999999 in CLI mode, but both make separations tremendously long, even 0.999 much longer than 0.99.

On GTX 1080 Ti on 1 minute song:

`0.99`  = Time Elapsed: `00:09:45`

`0.999` = Time Elapsed: `01:36:45`

Also, shifts can be set to 0.

With htdemucs_ft, shifts doesn't matter nearly as much as overlap, I recommend keeping (shifts) at 2 [for weaker GPUs].

The drum SDR with 1 and 10 shifts difference is about 0.005

So overlap impacts SDR a bit more than shifts.

“The best way to judge optimum settings is to take a 10-second sample of a vocal extraction where there's evident bleeding and just keep trying higher overlaps etc until you're happy, or you lose patience, then you'll arrive at what I call the 'Patience Ratio'. For me, it's 2x song length.”

Installation of only Demucs for Windows

Use UVR, or:

Download the git repo (https://github.com/facebookresearch/demucs), extract it, then open PowerShell and write

"pip install *insert the directory of the extracted repo here*"

Alternatively, execute this command:

pip install git+https://github.com/facebookresearch/demucs#egg=demucs

In case of a “norm_first” error, run this line or update torch to 1.13.1:

python.exe -m pip install -U torch torchaudio
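Once it’s installed, a typical command line run looks something like this (the model name and values are only examples; check the Demucs readme for all flags):

demucs -n htdemucs_ft --shifts=2 --overlap=0.25 --float32 -o separated "song.wav"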

In Colab, judging by quality of drums track, I prefer using overlap 0.1 (better only for instrumentals) with shifts 10 (actually can be set to even 20), but default set by the author is 0.25 and is better for sound of instrumental as a whole.

Also, we have overall better separation quality results using shifts=10, but it increases separation time (it's also reflected by MVSEP's SDR calculations). Overlaps also increase general separation quality for instrumentals/vocals, at least up to 0.75, but everything above starts being tremendously slow (few hours for 0.99 max setting).

If you use particularly high overlap like 0.96 for a full length song, you can run out of Colab time limit if it’s not your first file being processed during this session (for cases when processing takes more than 1 hour). If you exceed the limit, you can change Google account in the right top (don’t use other account during mounting, or you’ll end up with error). The limit is reset after 12 hours (maybe sooner). It’s capable of processing one file for two hours, at least only if it’s the first file being processed for a longer time during this day. Also, rarely, it can happen that your file is being processed faster than usual despite the same T4 GPU.

If you get the “something has gone terribly wrong” error right at the separation start, simply retry. If it appears at the end of a long separation, ignore it and don’t retry - your result is in the folder.

- *clipclamp* - uncheck it to disable hard limiter, but it may cause separation artifacts on some loud input files or will change volume proportions of the stems. I like it enabled somehow.

Parameters explained by jarredou

- “Overlap is the percentage of the audio chunk that will be overlapped by the next audio chunk. So it's basically merging and averaging different audio chunk that have different start (& end) points.

For example, if audio chunk is `|---|` with overlap=0.5, each audio chunk will be half overlapped by next audio chunk:

```

        |---|

      |---|

    |---| etc...

  |---| (2nd audio chunk half overlapping previous one)

|---| (1st audio chunk)

```

-shifts is a random value between 0 and 0.5 seconds that will be used to pad the full audio track, changing its start(&end) point. When all "shifts" are processed, they are merged and average. (...)

It's to pad the full song with a silent of a random length between 0 and 0.5 sec. Each shift add a pass with a different random length of silence added before the song. When all shifts are done (and silences removed), the results are merged and averaged.

Shifts is performing lower than overlap because it is limited to that 0.5 seconds max value of shifting, when overlap is shifting progressively across the whole song. Both works because they are shifting the starting point of the separations. (Don't ask me why that works!)

But overlap with high values is kinda biased towards the end of the audio, it's caricatural here but first (chunk - overlap) will be 1 pass, 2nd (chunk - overlap) will be 2 passes, 3rd (chunk - overlap) will be 3 passes, etc…”
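To make the chunk/overlap relationship concrete, here is a tiny toy example (the numbers are arbitrary) of how the chunk start points move as overlap grows:

```
# With overlap=0.5 each new chunk starts halfway through the previous one,
# so most samples end up covered by several passes that are then averaged.
def chunk_starts(total_len: int, chunk_len: int, overlap: float) -> list[int]:
    hop = max(1, int(chunk_len * (1 - overlap)))   # step between chunk start points
    return list(range(0, max(1, total_len - chunk_len + 1), hop))

print(chunk_starts(total_len=40, chunk_len=10, overlap=0.5))
# [0, 5, 10, 15, 20, 25, 30]
```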

So Overlap has more impact on the results than shift.

“Side-note: Demucs overlap and MVSEP-MDX23 by ZFTurbo overlap features are not working in the same way. (...)

Demucs is kinda crossfading the chunks in their overlapping regions, while MVSep-MDX23 is doing avg/avg to mix them together”

Why is overlapping advantageous?

Because changing the starting point of the separation give slightly different results (I can't explain why!). The more you move the starting point, the more different the results are. That's why overlap performs better than shifts limited to 0-0.5sec range, like I said before.

Overlap in Demucs (and now UVR) is also crossfading overlapping chunks, that is probably also reducing the artifacts at audio chunks/segments boundaries.

[So technically, if you could load the entire track in at once, you wouldn't need overlap]

Shifts=10 vs 2 gives +0.2 SDR with overlap=0.25 (the setting they've used in their original paper), if you use higher value for overlap, the gain will be lower, as they both rely on the same "trick" to work.

Shifts=X can give little extra SDR as it's doing multiple passes, but will not degrade "baseline" quality (even with shifts=0)

Lower than recommended values for segment will degrade "baseline" quality.

So in theory, you can equally set shifts to 0 and max out overlap.

Segments optimum (in UVR beta/new) is 256.

Gsep (2, 4, 5, 6 stem, karaoke)

https://studio.gaudiolab.io/gsep

Electric guitar (occasionally bad), good piano, mp3 320kbps output (20kHz cutoff); input: wav 16-32, flac 16, mp3 accepted; don’t upload files over 100MB (also, 11 minutes may fail on some devices with a Chrome "aw snap" error); capable of isolating crowd in some cases, and sound effects. Ideally, upload 44kHz files with min. 320kbps bitrate to always get the maximum mp3 320kbps output.

About its SDR

10.02 SDR for the vocal model (vs Byte Dance 8.079) on seemingly the MDX21 chart, but non-SDR-rated newer model(s) were available from 09.06.22 and later by the end of July, and a new model has been released since 6 September (there were 4 or 5 different vocal/instrumental models in total so far, the last introduced somewhere in September, and no model update was performed with the later UI update). The MVSEP SDR comparison chart on their dataset shows it's currently around SDR 9 for both instrumentals and vocals, but I think the evaluation done on the demixing challenge (first model) was more precise. Be aware that GSEP has an issue of cancelling some sounds which then cannot be found in any stem.

Instruction

Log in, and re-enter into the link above if you feel lost on the landing page.

For instrumental with vocals, simply uncheck drums, choose vocal, and two stems will be available for download.

As for using the 4/5 stem option for instrumentals: if you mix the stems and save them in 24 bit in a DAW like Audacity, it currently produces fewer voice leftovers, but the instrumental has worse quality and spectrum, probably due to noise cancellation (which is a possible cause of missing sounds in the other stem). Use 5 stems, but cut silence in places where there is no guitar in the stem to get quality comparable to 4 stems in such places.

For 3-6 stems, you'd better not use the dedicated stems mixing option - yes, it respects muting stems to get an instrumental as well, but the output is always mp3 128kbps, while you can perform a mixdown from the mp3s to even lossless 64 bit in free DAWs like Audacity or Cakewalk.

In some very specific cases you can get a bit better results for some songs by converting your input FLAC/WAV 16 to WAV 32 in e.g. Foobar2000.

Troubleshooting

- (fixed for me) Sometimes very long "Waiting" or recently “Waiting” - can disappear after refreshing the site after some time (July 2023) - e.g. if you see “SSG complete” message, you can refresh the site to change from waiting to waveform view immediately. I had that on a fresh account once when uploading the very first file on that account, and then it stopped happening (later it happened for me on an old account as well).

- (might be fixed too) If you don’t see all stems after separation (e.g. while choosing 2 stems, only vocals or only instrumental is shown) and only one stem can be downloaded (can’t be done on mobile browser) - workaround:

> log out, log in again and go to Chrome DevTools>Network (necessarily before clicking on your track’s waveform) to get it.

Now both stems should be shown in DevTools starting with the input file name, the instrumental with the ending name “rest of targets”, usually marked as “fail” in the State column (the stems in this table won’t appear if DevTools was opened after clicking on your song’s waveform - it must be opened before clicking on your song’s waveform, or you’ll need to log out again, otherwise the files won’t appear in DevTools; sometimes CTRL+F5 might be needed on the songs list after opening DevTools - rarely, if at all now). Don’t open files called thumbnails.mp3.

inb4 - stems downloaded that way and from mixer

- "Aw snap" error on mobile Chrome can happen on regular FLACs as well as an attempt to download a song. Simply go back to the main page and try to load the song again and download it.

- If nothing happens when you press download button on PC, also go to Chrome DevTools>Network>All and click download again. Then new files will appear on the list. Right click and open mp3 file in a new tab to begin download. Alternatively, log into your account in incognito mode.

- If you have "An error has occurred. Please reload the page and try again." try deleting Chrome on mobile (cleaning cache wasn't enough in one case).

- (fixed?) If you have “no audio” error all the time when separation is done, or preview loading is infinite, or you have only one stem, also -

In PC Chrome go to DevTools>Network>All and refresh this audio preview site, and new entries will show up on the right, which among others will list filenames with your input file name with stems names e.g. "rest of targets" in the end.

Double click it or click RBM on it and press open on new tab, and download will start.

If no filenames to download appear on the list, press CTRL+R to refresh the site, and now they should appear.

In specific cases, files in the list won’t show up, and you will be forced to log in to GSEP using incognito mode (the same account and result can be used). Also, make sure you have enough of disk space on C:.

Alternatively, clean site/browser cache (but the latter didn't help me at some point in the past, don't know how now).

If still the same, use VPN and/or new account (all three at the same time only in very specific cases when everything fails). You can also use different browser.

- When you see a loop of redirections right after you’ve logged in, and you see Sign In (?~go to main page), simply enter the main link https://studio.gaudiolab.io/gsep

- If you’re getting mp3 with bitrate lower than 320kbps which is base maximum quality in this service (but you get 112/128/224 output mp3 instead)

> Probably your input file is lossy 48kHz or/and in lower bitrate than 320kbps > your file must be at least mp3 320kbps 44kHz (and not 48kHz). The same issue exists for URL option and for Opus file downloaded from YouTube when you rename it to m4a to process it in GSEP. To sum up - GSEP will always match bitrate of the input file to the output file if it’s lower than 320kbps. To avoid this, use lossless 44kHz file or if you can’t, convert your lossy file to WAV 32 bit (resample Opus to 44kHz as well - it’s always 48kHz, for YT files, don’t download AAC/m4a files - they have cutoff at 16kHz while Opus at 20kHz). Now you should get 320kbps mp3 as usual without any worse cutoff than 20kHz for mp3 320kbps.

If you still don’t get 320kbps, try using incognito mode/VPN/a new account (at best all three at the same time).

You can use Foobar2000 for resampling e.g. Opus file (RBM on file in playlist>convert>processing>resampler>44100. And in output file format>WAV>32 bit). Don’t download from YT in any other audio than Opus, otherwise it will have 16kHz cutoff and separation result will be worse.
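If you prefer a command line instead of Foobar2000, an ffmpeg equivalent (assuming ffmpeg is installed) would be something like:

ffmpeg -i input.opus -ar 44100 -c:a pcm_f32le output.wav

(44.1kHz, 32-bit float WAV output, ready for GSEP.)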

- (fixed) Also on mobile, the file may not appear on your list after upload, and you need to refresh the site.

- If FLAC persists to be stuck in the "Uploading" screen, try converting it to WAV (32-bit float at best)

- Check this video for fixing issues in missing sounds in stems (known issue with GSEP)

- GSEP separation results don't begin at the same time position as UVR results.

> In order to fix it, convert the mp3 to WAV or align the stems manually if you need it for some comparisons or a manual ensemble. Also, some DAWs can correct it automatically on import.

Eventually hit their Discord server and report any issues (but they’re pretty much inactive lately).

Remarks about quality of separation

“The main difference (vs old model) is the vocals. I can't say for sure if they're better than before, but there is a difference, the "others" and "bass" are also different. Only the drums remain the same. Generally better, but the difference is not massive, depends on the song” (becruily)

GSEP is generally good for tracks where using all the previous methods you had bleeding (e.g. low-pitched hip-hop vocals) or got flute sounds removed, although it struggles with “cuts” and heavily processed vocals in e.g. choruses. Though, it has more bleeding in some cases when the very first model didn't, so new MDX-UVR models can achieve generally better results now.

"GSEP is good at piano extraction, but it still lacks in vocal separation, in many times the instruments come out together with the voices, this is annoying sometimes."

Electric guitar model got worse in the last update in some cases. Also, bass & drums also not so loud since the first release of gsep.

"Electric guitar model barely picks up guitars, it doesn't compare to Demix/lalal/Audioshake".

“I kinda like it. When it works (that's maybe 50-60% of time), it's got merit.”
The issue happens (also?) when you process a (GSEP) instrumental via 5 stems. If you process a regular song with vocals, it picks up guitar correctly. It happens only in places where a vocal was previously removed by GSEP 2-stem.

I only tested GSEP instrumental so far, I don’t know whether it happens on official instrumentals too (maybe not).

The cool thing is that when the guitar model works (and it grabs the electric), the remaining 'other' stem often is a great way to hear acoustic guitar layers that are otherwise hidden.

The biggest thing I'd like to see work done on is the bass training. At present, it can't detect the higher notes played up high... whereas Demucs3/B can do it extremely well.”

It has “much superior” other stem than Demucs or even better than Audioshake. It has changed since 6 September 2022, but probably got updated since then and is probably fine.

As of 14.10.22, the piano model sounds “very impressive”.

As for the first version of the model - a vocal stem comparable to MDX-UVR 9.7, but with the current limitation to mp3 320kbps and worse drums and bass than Demucs (not in all cases). Usually less bleeding in instrumentals than VR architecture models.

“Gsep sounds like a mix between Demucs 3 and Spleeter/lalal, because the drums are kind of muffled, but it's so confident when removing vocals, there aren't as many noticeable dips like other filtered instrumentals, and it picks up drums more robustly than Demucs. [it can be better in isolating hihats then Demucs 4 ft model too]

It removes vocals more steadily and takes away some song's atmospheres, rather than UVR approach which tries to preserve the atmosphere, but [in UVR] you end up with vocal artefacts”

As for tracks with more complicated drums sections: “GSEP sounds much fuller, Demucs 3 still has this "issue" with not preserving complex drums' dynamics” it refers to e.g. not cancelling some hi-hats even in instrumentals.

It happens that some instruments can be deleted from all stems. “From what I've heard, [it] gets the results by separating each stem individually (rather than subtractive / inverting etc.), but this means some sounds get lost in between the cracks you can get those bits by inverting the gsep stems and lining up with the original source, you should then be left with all the stuff gsep didn't catch”.

Also, I'd experiment with the result achieved with Demucs ft model, and apply inversion for just the specific stem you have your sounds missing.

As of June 2023, GSEP is still the best in most cases for stems, not anywhere close to being dead.

GSEP loves to show off with loud synths and orchestra elements; every other MDX/Demucs model fails with those types of things.

Processing

After your track is uploaded (when the 5 moving bars disappear), it’s very fast - it takes 3-4 minutes for one track to be separated using the 2-stem option (processing takes around 20 seconds). If the 5 bars are moving longer than the expected track upload time, and you see that nothing uses your internet upload, simply press CTRL+R and retry; if it’s still the same, log off and log in again. It can rarely happen that the upload gets stuck (e.g. when you minimize the browser on mobile or switch tabs).

Generally it’s very fast; long after the very first GSEP days, I needed to wait briefly in the queue only twice at 6-9 PM CEST, and I think once, on the Sunday of the weekend when a new model was added, I waited around 7 minutes - once in my whole life. Usually you wait in the queue longer than processing takes, so it’s bloody fast.

___

(outdated)

If your stems can’t be downloaded after you click the download button, go to Tools for Developers in your browser and open the console and retry. Now you should see an error with file address and your file name in it. You can simply copy the address to the address bar and start downloading it.

(Outdated - 3rd model changes) The quality of hi-hats is enhanced, sometimes at the cost of a less vivid snare in less busy mixes, while it’s usually better in a busy mix now, but it sometimes confuses the snare in tracks where it sounds similar to a hi-hat, making it worse than it was. So trap with lots of repetitive hi-hats and also tracks with a busy mix should sound better now.

dango.ai

(2 or more [up to 6+] stems, paid only, 30 seconds free preview of mp3 320 output, 20kHz cutoff)

drums, vocal, bass guitar, electric guitar, acoustic guitar, violin, erhu

“10 tracks = €6.33 + needs Alipay or WeChat Pay”

max 12 minutes input files allowed

Now the site has English interface

Currently, one of the best instrumental results (if not the best). Not so good vocals.

(for older models) The combination of 3 different aggression settings (mostly the most aggressive in busy mix parts) gives the best results for Childish Gambino - Algorithm vs our top ensemble settings so far. But it's still far from ideal (and [not only] the most aggressive one makes instruments very muffled [but vocals are better cancelled too], although our separation makes it even a bit worse in more busy mix fragment).

As for drums - better than GSEP, worse than Demucs 4 ft 32, although a bit better hi-hat. A not-too-easy track already shows some differences between just GSEP and Demucs, where the latter has more muffled hi-hats but a better snare, and that happens a lot of the time.

(old) Samples:

Instrumental

Drums

Also, it automatically picks the first fragment where vocals appear for the preview, so it is difficult to write something like AS Tool for that (probably manipulation by manually mixing in fake vocals would be needed). Actually, smudge wrote one.

Very promising results even for earlier version.

They wrote once somewhere about limited previews for stem mode (for more than 2 stems) and free credits, but I haven’t encountered it yet.

They’re accused by aufr33 of using some of the UVR models for 2 stems without crediting the source (and taking money for that).

Now new, better models are released. Better instrumentals than in UVR/MVSep, and rather not the same models.

It used to be possible to get free 30 seconds samples on dango.ai, but recently 5 samples are available for free (?also) here:

https://tuanziai.com/vocal-remover/upload

You must use the built-in site translate option in e.g. Google Chrome, because it's Chinese only. You are able to pay for it using Alipay outside China.

music.ai

Paid - $25 per month or pay as you go (pricing chart). In fact, no free trial.

Good selection of models and interesting module stacking feature.

To upload files instead of using URLs “you make the workflow, and you start a job from the main page using that custom workflow” [~ D I O ~].

Allegedly it’s made by Moises team, but the results seem to be better than those on Moises.

“Bass was a fair bit better than Demucs HT, Drums about the same. Guitars were very good though. Vocal was almost the same as my cleaned up work. (...) An engineer I've worked with demixed to almost the same results, it took me a few hours and achieve it 39 seconds” (...) I'd say a little clearer than MVSEP 4 Ensemble. It seems to get the instrument bleed out quite well,”

“Beware, I've experienced some very weird phase issues with music.ai. I use if for bass, but vocals are too filtered / denoised imo and you can't choose to not filter it all so heavily.”

Sam Hocking

MDX23 by ZFTurbo /w jarredou fork (2, 4 stems)

(4 stems, 32-bit float output)

v2.4 (with BS-Roformer model), 2.3 (Kubinka fork of jarredou’s Colab /w FLAC conversion, ZIP unpacking, new fullband preservation), 2.1 (enhanced jarredou’s fork of ZFTurbo base code, a bit better SDR over 2.0), 2.2 (with MDX23C model, may have more vocal residues), org. 2.3 (with VitLarge model instead instr-HQ3), GUI/CML (GUI only for older original release by ZFTurbo)

The ZFTurbo Colab from initial v. 1.0 was further modified by jarredou to alleviate vocal residues. It adds better models and volume compensation, fullband of vocals, higher frequency bleeding fix and much more. Currently, it achieves similar or not much worse SDR as current “Ensemble 4 models” on MVSEP.

“I have successfully processed a ~30min track with vocals_instru_only mode [on Colab] while I was working on that 2.3 version, but it was probably with minimal settings.

[Errors/freezes are] already happening during demucs separation when you do 4-stem separation with files longer than 10~15min” jarredou

The MDX23 Colab combines the results of currently the best UVR models (and the VitLarge model in 2.3) using weights for each, instead of the usual ensemble as in UVR. The initially released code by ZFTurbo received 3rd place in the latest MDX 2023 challenge. Very clean results for instrumentals, although it can rarely fail in getting rid of some vocals in quiet fragments of a track, but it has a bigger SDR than the best ensembles in UVR. One of the best SDR scores for 4 stems (maybe with a slightly better implementation on MVSEP as the “Ensemble” of 5 or more models, although it could be the 24 or 32 bit output used for that evaluation which increases SDR; jarredou’s v2.3 evaluation was made using 16 bit).

Since v.2.4 there’s an option to omit using VitLarge model (the model increases some noise at times)

How MDX23 Colab works under the hood (more or less):

- MDX models’ vocal outputs (so an inversion of one inst model there) + Demucs vocals only > inversion of these to get the instrumental > demucs_ft + demucs 6s + demucs + mmi to get the remaining 3 stems (all steps weighted). Something in this recipe could have changed since then.

Or differently - “The process is:

1. Separate vocals independently with InstVocHQ, VitLarge (and VOC-FT as opt)

2. Mix the vocals stems together as a weighted ensemble to create final vocals stem

3. Create instrumental by inverting vocals stem against source

4. Save vocals & instrumental stems

5 (if 5). Take the instrumental to create the 3 others stems with the multiple demucs models weighted ensembles + phase inversion trick and save them.”
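In rough numpy terms, steps 1-3 boil down to something like this (the weights and function names are placeholders, not the Colab’s actual values or code):

```
# Weighted ensemble of several models' vocal outputs, then inversion against
# the source to get the instrumental.
import numpy as np

def weighted_vocals(vocal_outputs: list, weights: list) -> np.ndarray:
    stacked = np.stack([w * v for v, w in zip(vocal_outputs, weights)])
    return stacked.sum(axis=0) / sum(weights)

# vocals_a, vocals_b, vocals_c = outputs of InstVocHQ, VitLarge, VOC-FT (same shape)
# vocals = weighted_vocals([vocals_a, vocals_b, vocals_c], [3.0, 2.0, 1.0])
# instrumental = mixture - vocals
```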

Modified inference will probably work locally too, e.g. if you use that 2.1 repo locally (and probably newer too), but the modified inferences from jarredou crashes the GUI, so you can only use CML version locally in that case.

Usage:

python inference.py --input_audio mixture1.wav mixture2.wav --output_folder ./results/

For instrumentals, I’d rather stick to instrum2 results (so the sum of all 3 stems instead of inversion with e.g. inst only enabled), but some fragments can sound better in instrum and it also has slightly better SDR; e.g. instrum can give louder snares at times, while instrum2 is muddier but sometimes less noisy/harsh. It can all depend on a track. Most people can’t tell a difference between the two.

If you suffer from some vocal residues in 2.2.2, try out these settings

BigShifts_MDX: 0

overlap_MDX: 0.65

overlap_MDXv3: 10

overlap demucs: 0.96

output_format: float

vocals_instru_only: disabled (it will additionally give instrum2 output file for less vocal residues in some cases)

Also, you can manipulate with weights.

E.g. different weight balance, in 2.2 with less MDXv3 and more VOC-FT.

For vocals in 2.2 you can test out these settings (21, 0, 20, 6, 5, 2, 0.8)

File names with brackets can fail to separate.

To separate locally, it generally requires a 8GB VRAM Nvidia card. 6GB VRAM is rather not enough but lowering overlaps (e.g. 500000 instead of 1000000) or chunking track manually might be necessary in this case. Also, now you can control everything from options: so you can set chunk_size 200000 and single ONNX. It can possibly work with 6GB VRAM that way.

If you have fail to allocate memory error, use --large_gpu parameter

Overlap large and small - controls overlap of song during processing. The larger value, the slower processing but better quality (both).

Jarredou made some fixes in 2.2.2.x version in order to handle memory better with MDX23C fullband model.

jarredou:

“The only colab with MDX23C handling currently is my MVSEP-MDX23 fork, you can use a workaround to have MDX23C InstVoc-HQ results only with these settings:

(all weights beside MDXv3 set to 0, BigShifts_MDX set to min. of 1, and demucs overlap 0 [at least for vocal_instru_only)

You can use a higher "overlap_MDXv3" value than in the screenshot to get slightly better results.

(and also, as it's only a workaround, it will still process the audio with other models, but they will not be used for final result as their weights = 0)

(MDX23C InstVoc-HQ = MDXv3 here)

You can also use the defaults settings & weights, as it scores a bit higher SDR than InstVoc alone ”

Be aware that 2.0 version wasn’t updated with:

!python -m pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/

Hence, it’s slow (so use 2.1-2.3 instead as they work as intended or add the line at the end of the first cell yourself)

Explanations on features added in 2.2 Colab (2.2 might have more residues vs 2.1) by jarredou

What are BigShifts?

It's based on Demucs' shift trick, but for Demucs it is limited to 0.5 second shifting max (with a randomly chosen value).

Each BigShifts here shifts the audio by 1 second, no more random shifting.

f.e. bigshifts=2, it will do 1 pass with 0 shifting, and a second pass with 1 second shifting, then merge the results

bigshifts=3 means 1 pass with 0 shifting + 1 pass with 1 sec shift + 1 pass with 2 sec shift, etc...

Overlap is doing almost the same thing but at audio chunks level, instead of full audio, and the way overlap is implemented (in MVSEP-MDX23), f.e. with overlap=0.99, first audio chunk will have 1 pass, 2nd audio chunk will have 2 passes, etc... until 99th audio chunk and following ones will have 99 passes. With BigShifts, the whole audio is processed with the same number of passes.

So bigshifts shifts the audio forward one second each time.

Overlap computing is different between MDXv2 models and the other ones in the fork:

For MDXv2 models (like VOC-FT), it uses the new code from UVR and goes from 0 to 0.99.

For MDXv3 (InstVoc) & VitLarge models [introduced in v2.3] it uses code from ZFTurbo (based on MDXv3 code from KUIELab, https://arxiv.org/abs/2306.09382) and it goes from 1 to whatever.

I'm using low overlap values in the fork because it's kinda redundant with the BigShifts experimental feature I've added and which is based on Demucs' "shift trick" (described here https://arxiv.org/pdf/1911.13254.pdf, chapter 4.4). But instead of doing shifts between 0 and 0.5 sec like Demucs by adding silence before input, BigShifts are much larger (and related  to input length). Having larger time shifting gives more amplitude in possible results.

Instead of adding silence before input to shift it, which would be a waste of time & resources as BigShifts can be above 30s or 1 min of shifting, instead, it changes the shifted part position in audio input (like move the 1st minutes of audio at the end of the file before processing and restores it after processing).

Then like Demucs original trick all shifted & restored results are merged together and averaged.

From my tests, it can influence results from -2SDR to +2SDR for each shifted results, depending of input and BigShifts value. It's not linear !

Using BigShifts=1 (disabled) and high overlap value probably gives more stable results, in the other end, but maybe not always as high and/or fast as what BigShifts can give.
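A rough sketch of that idea (separate() is a hypothetical stand-in for the model inference, not the fork’s actual code):

```
# BigShifts as described above: rotate the audio by whole seconds before
# separation, undo the rotation afterwards, and average all passes.
import numpy as np

def bigshifts_separation(mixture: np.ndarray, sr: int, separate, bigshifts: int = 3) -> np.ndarray:
    results = []
    for i in range(bigshifts):
        shift = i * sr                               # shift by i whole seconds
        shifted = np.roll(mixture, -shift, axis=0)   # move the first i seconds to the end
        out = separate(shifted)
        results.append(np.roll(out, shift, axis=0))  # restore the original position
    return np.mean(results, axis=0)
```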

Weights have been indeed evaluated on MVSep's multisong dataset. I haven't tried every possible settings, but default values should be not far away from optimal settings, if not optimal [already].

There are few other "tricks" used in the fork:

The phase inversion denoise trick (was already in original code from ZFTurbo, also used in UVR):

Some archs (MDXv2 mostly, so VOC-FT here) are adding noise to output signals. So to attenuate it, we process the input 2 times, including one time with phase polarity inverted before processing, and restored after processing. So, only the model noise is phase cancelled when the 2 passes are mixed together. (It doesn't cancel 100%, but it's attenuated). This is also applied to Demucs processing (since original code).

MDXv3 & VitLarge don't seem to add noise (or at insignificant volume) so this trick is not used with these models.
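A minimal sketch of that phase-inversion denoise trick, again with a hypothetical separate() model call on numpy arrays:

def denoised_separate(mixture, separate):
    out_a = separate(mixture)        # normal pass
    out_b = -separate(-mixture)      # pass with polarity inverted on input, then restored
    # The wanted signal is identical in both passes, but the model's additive noise
    # is not phase-coherent between them, so averaging attenuates it.
    return (out_a + out_b) / 2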

Segment_size (dim_t) original model value is doubled since v2.3 of the fork.

Some benchmarks done by Bas Curtiz showed that it gives a little bit better results (here with VOC-FT, there's the same benchmark with InstVocHQ model here).

Multiband ensembling:

I'm using a 2-band ensemble, with a different ensemble for frequencies below and above 10 kHz. This is a workaround to get fullband final results even when non-fullband models are part of the ensemble (like VOC-FT). Without it, the instrumental stem, obtained by phase inversion of the vocals against the input audio, would have small permanent vocal bleeding above VOC-FT's cutoff, as phase cancellation would be biased there.

It was a much more essential feature in previous versions, when most of the models were not fullband.

VitLarge is not used in the high-frequency band either, but that's more of a personal taste (so in the end there are only InstVoc model results above the crossover region)
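A rough sketch of such a 2-band ensemble with a ~10 kHz crossover (the filter type and order here are assumptions, not the fork's exact implementation):

import numpy as np
from scipy.signal import butter, sosfiltfilt

def two_band_ensemble(low_band_sources, high_band_sources, sr, crossover=10000):
    # low_band_sources / high_band_sources: lists of (samples, channels) arrays
    # from the models used below / above the crossover
    sos = butter(8, crossover, btype="lowpass", fs=sr, output="sos")
    low = np.mean(low_band_sources, axis=0)
    high = np.mean(high_band_sources, axis=0)
    low_band = sosfiltfilt(sos, low, axis=0)            # keep only lows from the first ensemble
    high_band = high - sosfiltfilt(sos, high, axis=0)   # complementary highs from the second
    return low_band + high_band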

Troubleshooting

“just got hit by another "no such file" error in MDX 23 Colab v2.3 for batch processing

this time it's square brackets [ ]

when it sees a [ in the filename it then thinks there's two additional [ ] in the name

changing to regular parentheses does work”

Comparisons of MDX23 (2.1 or 2.0?) vs single demucs_ft model by A5

The Beatles - She Loves You - 2009 Remaster (24-bit - 44.1khz)

So I tried out the MDX23 Colab with She Loves You, which is easily the most ratty sounding of all the Beatles recordings, as it is pure mono and the current master was derived from a clean vinyl copy of the single circa 1980. So if it can handle that, it can handle anything. And well, MDX23 is very nice, certainly on par with htdemucs_ft, and maybe even better. I'm surprised. You can hear the air around the drums. Something that is relatively rare with demucs. And the bass is solid, some bleed but the tone and the air, the plucking etc is all there. Plus, the vocals are nicer, less drift into the 'other' stem.

John Lennon - Now and Then (Demo) - Source unknown (16-bit - 44.1khz)

OK, another test, this time on a John Lennon demo, Now and Then. The vocals are solid, MDX23 at 0.95 overlap is catching vocals that were previously in htdemucs_ft being lost to the piano. So, yeah, it's pretty good. MDX23 is now my favored model. In fact, upon listening to the vocals, it's picking up, from a demo, from a poor recording, on a compact cassette, lip smacks, breathing and other little non-singing quirks. It's like literally going back and having John record in multitrack.

Queen - Innuendo - CD Edition TOCP-6480 (16-bit 44.1khz)

Every single model fell down with Freddie Mercury's vocals, not anymore. (...) I've heard true vocal stems from his vocals and the MDX23 separation sounds essentially like that. We're now approaching the 'transparent' era of audio extraction.

NOTE: [voc_ft not tested] for Innuendo, will be tested by 07/07/2023

Colab instruction by Infisrael

Install it, click on the play button and wait until it's finished (it should show a green checkmark on the side).

It will ask you for permission for this notebook to access your Google Drive files, you can either accept or deny it (it is recommended to accept it if you want to use Google Drive as i/o for your files).

After you've finished installing it, go to the configuration; it's below the 'Separation' tab.

https://i.imgur.com/qD9jsYG.png

 (Recommended settings)

Set "`overlap_large`" & "`overlap_slow`" to what you desire; at the highest (1.0) it will process slower but give you better quality. The default value for large is 0.6, and for small 0.5 [with 0.8 still being balanced in terms of speed and quality].

Input "`folder_path`" with the folder destination where you have uploaded the audio file you'd like to separate

Input "`output_folder`" with the folder where you'd like the stems to be saved

Change your desired path after `/content/drive/MyDrive/`, so for example:

> `folder_path: /content/drive/MyDrive/input`

> `output_folder: /content/drive/MyDrive/output`

You can also make use of "`chunk_size`" and raise its value a little, but if you experience memory issues, lower it; the default value is 500000.

Afterwards, click on the play button to start the separation, **make sure** you uploaded the audio file in the `folder_path` you provided.

After it's done, it will output the stems in the `output_folder`.

Also note, "`filename_instrum`" is the inversion of the separated vocals stems against the original audio.

"`filename_instrum2`" is the sum of the Drums + Bass + Other stems that are obtained by processing "`instrum`" with multiple Demucs models.

So "`instrum`" is the most untouched and "`instrum2`" can have fewer vocals residues.
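A hedged sketch of what that inversion looks like (file names are placeholders; it assumes the vocals are sample-aligned with the mixture):

import soundfile as sf

mix, sr = sf.read("song.wav")
vocals, _ = sf.read("song_vocals.wav")
instrumental = mix - vocals                      # invert the vocals against the original mixture
sf.write("song_instrum.wav", instrumental, sr)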

KaraFan by Captain FLAM

(2 stems)

Colab w/ more models (AI Hub fork, also fixed), fixed org. Colab, org. Colab (slow), GUI, GH documentation

How to install it locally (advanced), alt. tutorial,

or easy instruction

Should work on Mac with Silicon or AMD GPU (although not for everyone)

& Linux with Nvidia or AMD GPU

& Windows probably with at least Nvidia GPU, or with CPU (v. slow)

- For Colab users - create “Music” in the main GDrive directory and upload your files for separation there (the code won’t create the folder on the first launch).

- Sometimes you’ll encounter soundfile errors during separation. Just retry, and it will work

KaraFan (don’t confuse with KaraFun) is a direct derivative of ZFTurbo’s MDX23 code forked by jarredou, but with further tweaks and tricks in order to get the best sonic quality of instrumentals and vocals, without overfocusing on SDR alone - the overall sound matters.

Its aim is to avoid making instrumentals too muddy (like e.g. the HQ_3 model sometimes does), while not having as many vocal residues as the MDX23C fullband model (but it depends on the chosen preset).

Since v. 4.4 and 5.x you have five presets to test out.

Presets 3 and 4 are more aggressive in canceling vocal residues (P4 can be good for vocals).

Preset 5 (takes 12 minutes+ on the slowest setting for a 3:25 track on T4) has more clarity of instrumentals over presets 3 and 4, but also more vocal residues (although fewer than P1 and P2, which take 8 minutes for a 3:24 track on the slowest setting).

On 23.11.24 “Preset 5 was corrected to be less aggressive as possible”. All the below Preset 5 descriptions refer to the old P5. The original preset 5 is here, and is less muddy, but has more vocal residues (at least the OG contains more models and is slower).

Speed and chunks affect quality. The slower, the muddier, but also slightly fewer vocal residues, although they’ll still be there (just slightly quieter). I’d recommend the “fastest” Speed setting and 400K chunks for the current P5 (tested on a 4:07 song, may not work for longer tracks). If you replace the Inst Voc HQ1 model with HQ2 using the AI Hub fork in the current P5, the instrumental will be muddier.

- To preserve instruments which are counted as vocals by other MDXv2 models, use these modified preset 5 settings - they have more clarity than P5 and preserve hi-hats better. Keeping the same processing time as in P5 by setting the “Speed” slider to medium will, in this case, result in more constant vocal residues vs P5 with the slowest setting (too much at times, but it might serve well for specific song fragments). It will take 12 minutes+ for a 3:24 track on medium. Debug and God mode on the screenshot are unrelated and optional.

- To fix issues with saxophone in P5, use these settings. They have even more clarity than the one above, but also more audible vocal residues. They help to preserve instruments better than the setting above. They can be better than P2 - consistent vocal residues are less audible, though present in a similar amount, while for other artists the sax preset even gives more vocal residues than P2. The sax setting is worse at preserving piano than the setting above.

- Using the slowest setting here in sax fix preset will result in disconnection of runtime with free T4 after 28 minutes of processing, but it should succeed anyway (result files might be uploaded on GDrive after some time anyway).

Vs medium, the slowest setting gives more muffled sound, but not always less vocal residues. It can be heard the best in short parts with only vocals. 18 minutes for 4:07 track on Fast setting (God Mode and Debug Mode are disabled in KaraFan by default).

After 3-4 ~18-minute separations (in this case not made in batch, but with manually changed parameters in the middle), when you terminate and delete the environment, you might not be able to connect to GPUs again as the limit will be reached, unless you switch Colab accounts (mount the same GDrive account as in Colab to avoid errors)

- Preset 5 provides more muffled results than the two settings above, but with a good balance of clarity and vocal residues. Sometimes this one has fewer vocal residues, sometimes the 16.66 MDX23C model on MVSEP does (or possibly the slightly older HQ_1 model in UVR); it can even depend on the song fragment. Using the newer MDX23C HQ 2 in P5 instead of MDX23C HQ doesn’t seem to produce better results

After the 5th separation (not in batch) you must start your next separation very quickly, or you’ll run out of the free Colab limit while the GUI is in an idle state. In such a case, switch Colab accounts, and use the same account to mount GDrive (or you might encounter errors).

Comparisons above made with normalization disabled and 32-bit float setting.

The code handles mono and 48kHz files too, 6:16 (preset 3) tracks, and possibly 9 minutes tracks too (but can’t tell if with all presets). It stores models on GDrive, which takes 0,8-1,1GB (depending on how many models you’ll use). One 4:07 song in 32-bit float with debug mode enabled (all intermediate files will be kept) will take 1,1GB on GDrive. Instrumentals will be stored in files marked as Final (in the end), Music Sub (can sound a bit cleaner at times, but with more residues), and Music Extract (from specific models).

Older 1.3 version Colab fork by Kubinka was deleted.

Colab fork made by AI HUB server members also includes MDX23C Inst Voc HQ 2 and HQ_4 models, and contains slow separation fix from the “fixed Colab”.

KaraFan used to have lots of versions which differ in these aspects, with an aim to have the best result in the most recent Colab/GUI version. E.g. v.3.1 used to have more vocal residues than the 1.3 version and even more than the HQ_3 model on its own, and it got partially fixed in 3.2 (if not entirely). But 1.3, IIRC, had some overlapped frequency issue with SRS disabled, which makes the instrumentals brighter, but it got fixed later. The current version at the time of writing this excerpt is 4.2, with pretty good opinions for v.4.1 shortly before.

Colab troubleshooting

- (no longer necessary in the fixed Colab) If you suffer from very slow or unfinishable separations in the Colab using non-MDX23C models (e.g. stuck on voc_ft without any progress), use fixed Colab (the onnxruntime-gpu line added in the end of the first cell)

- Contrary to every other Colab in this document, KaraFan uses a GUI which launches after executing the inference cell. It frequently triggers Google’s timeout security checks, esp. for free Colab users, because Google behaves as if the separation is not being executed while you run it from the GUI, and it’s generally against their policies to execute code this way instead of pasting commands directly into Colab cells. The same way many RVC Colabs got blocked by Google, but this one is generally not directly for voice cloning, and is not very popular yet, so it hasn’t been targeted by Google yet.

- Once you start separation, it can get you disconnected from runtime quickly, especially if you miss some multiple captcha prompts (in 2024 captchas stopped appearing at all, so the user inactivity during separation process seems to be no longer checked).

- After a runtime disconnection error, the output folder (e.g. on GDrive) can still be constantly populated with new files, while the progress bar is not refreshed after clicking close or even after closing your tab with the Colab opened. At a certain point it can interrupt the process, leaving you without all the output files. Be aware that final files always have “Final” in their names.

- It can consume free "credits" till you click Environment>Terminate session. It happens even if you close the Colab tab. You can check “This is the end” option so the GUI will terminate the session after separation is done to not drain your free limit.

- (rather fixed) As for 4.2 version, session crashes for free Colab users can occur, due to running out of memory. You can try out shorter files.

Currently, if you rename your output folder containing a separation and retry the separation, it will look for the old separation folder to delete it and return an error, and running the GUI cell again may cause GUI elements to disappear.

It's a default behavior of Colab and the IPython core: the sync of files Colab sees is not real-time.

Two possible solutions:

  • wait until sync with Google Drive is done
  • restart & run Colab

- Sometimes shutting down your environment in Environment options and starting over might do the trick if something doesn't work. E.g. (if it wasn't fixed), when you manipulate input files on GDrive when GUI is still opened, and you just finished separation, you might run into an error when you start separating another file with input folder content changed.

In order to avoid it, you need to run the GUI cell again after you've changed the input folder content (IIRC it's the "Music" folder by default). It might also be caused by too low chunks (below 500k for very long tracks, if nothing has changed in the code). Also, first check with some other input file that you used before and that worked.

Also, be more specific about what doesn't work. Provide screenshot and/or paste the error.

- You can be logged in to a maximum of 10 Google accounts at the same time. You can’t log out of just one of these accounts on PC in a browser. The only way is to do it on your Android phone, but it might not fix the problem, as it will say “logged out” for that account on PC, while logging into another one might not work and the limit will still be exceeded. In this situation you can only log out of all accounts (but it will break the account order, so any authorizations tied to specific accounts in your bookmarked links will be messed up - e.g. those to Colab, GDrive, Gmail, etc.; I mean /u/0 and authuser= in Colab links). An easier way to access an extra Google account is to log into it from Incognito mode.

If you have lots of accounts and don’t log in to some of them for 2 years, Google can delete them. To avoid it, create a YT channel on the account and upload at least one video, and the account won’t be deleted.

Tests of four presets of KF 4.4 vs MDX-UVR HQ_3 and MDX23C HQ (1648)

(noise gate enabled a.k.a. “Silent” option)

Not a really demanding case - no modern vocal chain in the mix - but probably enough to present the general idea of how the different presets sound here. So, a song more forgiving to the MDX23C model this time, and less aggressive models with more clarity.

Genre: (older rap) Title: (O.S.T.R. - Tabasko [2002])

BEST Preset : 3

Music :

Versus P4, hi-hats are preserved better in P3.

Snare in P3 is not so muffled like in P4.

HQ_3 has even more muffled snares than in P4.

P3 still had less vocal residues than MDX23C HQ 1648 model, although the whole P3 result was more muffled, but residues are smartly muffled too.

MDX23C had like more faithfully sounding snares than P3, to the extent that they can be perceived brighter (but vocal residues, even on a more forgiving song like this, are more persistent in MDX23C than in P3).

Sometimes it depended on the specific fragment whether P4 or P3 had more vocal residues, so P3 turned out to be pretty well balanced, although P4 had less consistent vocal residues - still not as few as HQ_3, but it's not that much of a problem (HQ_3 is really muffled). If it were 4 stems, then I'd describe P3/P4 as having a very good "other" stem, but drums too, as I mentioned.

WORST Preset (in that case) : 1

Music : Too many consistent vocal residues

There's a similar situation in P2, but at least P2 has brighter snares than even MDX23C.

In other songs, P1 can be better than P2, leaving less vocal residues in specific fragments for a specific artist, but noticeably more for others.

Preset 4 with the slow setting (but not the slowest) takes 16 minutes for a 5-minute song on T4 in free Colab (performance of ~GTX 3050). For a 3:30 track, it takes 13:30 on the slowest setting. In KF 5.1 with default chunks 500K and the slowest setting, for a 4:50 song, preset 2 took <10 minutes and preset 3, 12 minutes.

VS preset 3, the one from the screenshot (now added as preset 5) is noisier and has more vocal residues, mainly in quiet places or when there is no instrumental. Processing time for a 6:16 track on the medium setting is 22:19 minutes. But it definitely has more clarity over preset 3. And there are still fewer vocal residues than in presets 1 and 2, which have more clarity but tend to have too many vocal residues in some tracks. Hence, preset 5 is the most universal for now.

Ripple/Capcut/SAMI-Bytedance/Volcengine/BS-RoFormer (2-4 stem)

Output quality in Ripple is: 256kbps M4A (320kbps max) and lossless (introduced later). 50MB upload limit, 4 stems

Min. iOS version: 14.1

Ripple is only for US region (which you can change, more below)

Ripple for iOS: https://apps.apple.com/us/app/ripple-music-creation-tool/id6447522624

Capcut for Android: https://play.google.com/store/apps/details?id=com.lemon.lvoverseas

(separation only for Pro, Indian users sometimes via VPN)

Capcut a.k.a. Jianying (2 stems) also works on Windows (the separation option is available only in Jianying Pro)

Can be used instead of Ripple if you're on unsupported iOS below 14.1 or don’t have iOS. To get Ripple you can also use a virtual machine remotely instead (instructions below). Ripple can also be run on your M1 Mac using app sideloading (instructions below).

Ripple = better quality than CapCut as of now (and fullband)

with the clicks/artifacts fixed using a cross-fade technique between the chunks.
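An illustrative sketch of the general chunk cross-fade technique (not Ripple's actual code) - overlapping chunks are blended with linear fades at the seams:

import numpy as np

def merge_with_crossfade(chunks, overlap):
    # chunks: list of (samples, channels) arrays that overlap by `overlap` samples
    fade_in = np.linspace(0.0, 1.0, overlap)[:, None]
    out = chunks[0]
    for nxt in chunks[1:]:
        seam = out[-overlap:] * (1 - fade_in) + nxt[:overlap] * fade_in
        out = np.concatenate([out[:-overlap], seam, nxt[overlap:]], axis=0)
    return out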

Capcut = “the results are really low quality but if you export the instrumental and invert it with the lossless track, you will get the vocals with the noise which is easy to remove with mdx voc ft for example, then you can invert the lossless processed vocals with the original and have it in better quality.

The vocals are very clean from cap cut, almost no drum bleed”

Ripple and Capcut use the SAMI-Bytedance arch (later known as BS-Roformer). It was developed by Bytedance (owner of TikTok) for the MDX23 competition, and holds the top of our MVSEP leaderboard. It was published on iOS and for the US region as the “Ripple - Music Creation Tool” app. Furthermore, it's a multifunctional app for audio editing, which also contains a 4 stem separation model. Similar situation with Capcut (which is 2 stems only IIRC). The model itself is not the same as for the MDX23 competition (SAMI ByteDance v1.0); as they said, models for the apps were trained on 128kbps mp3 files to avoid copyright issues, but it’s the same arch, just scores a bit lower (even when exported losslessly for evaluation). SDR for Ripple is naturally better than for Capcut.

Seems like there is no other Pro variant for Capcut Android app, so you need to unlock regular version to Pro.

At least the unlocked version on apklite.me has a link to the regular version, so it doesn't seem to be a Pro app behind any regional block. But -

"Indian users - Use VPN for Pro" as they say, so similar situation like we had on PC Capcut before. Can't guarantee that unlocked version on apklite.me is clean. I've never downloaded anything from there.

Bleeding

Bas Curtiz found out that decreasing the volume of mixtures by -3dB (sometimes -4dB) eliminates problems with vocal residues in instrumentals in Ripple. Video

This is the most balanced value, which still doesn't take too many details out of the song due to volume attenuation.

Other good values purely SDR-wise are -20dB>-8dB>-30dB>-6dB>-4dB> w/o vol. decr.

The method might potentially be beneficial for other models, and probably works best for the loudest tracks with brickwalled waveforms.
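A quick sketch of that trick with soundfile (file names are placeholders): attenuate the mixture by 3 dB before uploading, then boost the returned stems back afterwards.

import soundfile as sf

data, sr = sf.read("song.wav")
sf.write("song_-3dB.wav", data * 10 ** (-3 / 20), sr)          # -3 dB before upload

# after separation, restore the original level of a returned stem:
stem, stem_sr = sf.read("instrumental_from_ripple.wav")
sf.write("instrumental_restored.wav", stem * 10 ** (3 / 20), stem_sr)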

The other stem is gathered from inversion to speed up the separation process. The consequence is bleeding in instrumentals.

- If you suffer from bleeding in other stem of 4 stems Ripple, beside decreasing volume by e.g. 3/4dB also “when u throw the 'other stem' back into ripple 4 track split a second time, it works pretty well [to cancel the bleeding]”

The forte of the Ripple is currently vocals - the algo is very good at differentiating what is vocals and what is not, although they can sound “filtered” at times.

Currently the best SDR for a public model/AI, but it gives the best results for vocals in general. For instrumentals, it rather doesn’t beat the paid Dango.ai (and rather not KaraFan and HQ_3 or 1648/MDX23C fullband either).

It's good for vocals, also for cleaning vocal inverts, and surprisingly good for e.g. Christmas songs (it handled hip-hop, e.g. Drake, pretty well). It's better for vocals than instrumentals due to residues in the other stem - bass is very good, drums also decent, kicks even one of, if not the best out of all models; as they said, some fine-tuning was applied to the drums stem. Vocals can be used for inversion to get instrumentals, and it may sound clean, but rather not as good as what the 2 stem option or a 3 stem mixdown gives, as the output is lossy.

BS-RoFormer

Their paper was published and later reimplemented by lucidrains for possibility of training:

https://github.com/lucidrains/BS-RoFormer

ZFTurbo from MVSEP was already in the process of training his model, but it would take him a year to train, they said. Later,

Mel-Band RoFormer, based on band split, was released; it is faster, but doesn't provide as high SDR as BS. The Mel variant might require some revision of the code, and its paper might lack some features needed to keep up SDR-wise with the extremely slow original BS variant. On paper, it should be better than BS-Roformer, but for some reason models trained with Mel have worse results than with BS-Roformer (so it's probably a problem with the reimplementation from the paper).

For more information, check the training section.

Capcut (2 stems only)

https://www.capcut.cn/

It is a new Windows and Android app which contains the same arch as Ripple inst/vocal, but lower quality model, and without an option of exporting 4 stems.

It normalizes the input, so you cannot use Bas’ trick of decreasing the volume by -3dB to work around the issue of bleeding like in Ripple (unless you trick out CapCut, possibly by adding some loud sound to the song with decreased volume).

“At the moment the separation is only available in Chinese version of Windows app which is jianyingpro, download available at capcut.cn [probably here - it’s where you’re redirected after you click “Alternate download link” on the main page, where download might not work at all]

Some people cannot find the settings on this screen in order to separate.

Separation doesn't require sign up/login, but exporting does, and requires VIP, which is either paid or free depending on whether you’re from a rich or poor country.

- There’s a workaround for people not able to split using Capcut for Windows in various regions.

- Bas Curtiz' new video on how to install and use Capcut for separation incl. exporting:

https://www.youtube.com/watch?v=ppfyl91bJIw

"It's a bit of a hassle to set it up, but do realize:

- This is the only way (besides Ripple on iOS) to run ByteDance's model (best based on SDR).

- Only the Chinese version has these VIP features; now u will have it in English

- Exporting is a paid feature (normally); now u get it for free

The instructions displayed in the video are also in the YouTube description."

- mitmproxy script allowing saving to FLAC instead of AAC (although it just reencodes from AAC 113kbps with a 15.6kHz lowpass filter). It’s a bit more than a script. See the full tutorial.

- For some people using mitmproxy scripts for Capcut (but not everyone), they “changed their security to reject all incoming packet which was run through mitmproxy. I saw the mitmproxy log said the certificate for TLS not allowed to connect to their site to get their API. And there are some errors on mitmproxy such as events.py or bla bla bla... and Capcut always warning unstable network, then processing stop to 60% without finish.” ~hendry.setiadi

“At 60% it looks like the progress isn't going up, but give it idk, 1 min tops, and it splits fine.” - Bas

“in order to install pydub within mitmproxy, you additionally need to:

open up CMD

pip install mitmproxy

pip install pydub”

- IntroC created a script for mitmproxy for Capcut allowing fullband output, by slowing down the track. Video

Older Capcut instruction:

Video demonstration of the steps below:

0. Go offline.

1. Install the Chinese version from capcut.cn

2. Use these files copied over your current Chinese installation in:

C:\Users\(your account)\AppData\Local\JianyingPro

Don’t use English patch provided below (or the separation option will be gone)

3. Now open CapCut, go online after closing welcome screen, happy converting!

4. Before you close the app, go offline again (or the separation option will be gone later).

! Before reopening the app, go offline again, open the app, close the welcome screen, go online, separate, go offline, close. If you happen to miss that step, you need to start from the beginning of the instruction.

(no longer works after the 4.6 to 4.7 update, as it freezes the app) The only thing that seems to enable vocal separation without replacing everything is to replace the SettingsSDK folder contents inside User Data. It's probably the settings_json file inside that's responsible for that.

FYI - the app doesn’t separate files locally.

The quality of separation in Capcut is not exactly the same as Ripple. Judging by spectrograms, there is a bit more information in vocals in Capcut, while Ripple has a bit more information in the spectrum of instrumentals.

The separated vocal file is encrypted and located in C:\Users\yourusername\AppData\Local\JianyingPro\User Data\Cache\audioWave

The unencrypted audio file in AAC format is located at \JianyingPro Drafts\yourprojectname\Resources\audioAlg (ends with download.aac)

“To get the full playable audio in mp3 format, a trick that you can do is drag and drop the download.aac file into Capcut and then go to export and select mp3. It will output the original file without randomisation or skipping parts”

(although it resulted in the VIP option disappearing, Bas somehow managed to integrate it in his new video tutorial and it started to work; the English translation isn't the culprit of the problem on its own, but it is if you use both the language pack and the SettingsSDK folder from above)

You can replace the zh-Hans.po file with English one to have English language on Chinese version of the app possessing separation feature in:

jianyingpro/4.6.1.10576/Resources/po

If you can’t use that language pack, you can always use Google Translate to translate the Chinese into your own language on the screen of your smartphone.

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQwk_qynMHMwquSfQZFrrn30F355Ihta_GHQNo7vhnPUhfjj-kUiqSRBiLQbPlgmB5Gqro&usqp=CAU

https://support.google.com/translate/answer/6142483?hl=en&co=GENIE.Platform%3DDesktop

“Trying out capcut, the quality seems the same as the Ripple app (low bitrate mp3 quality)

at least the voice leftover bug is fixed, lol”

Random vocal pops from Ripple are fixed here.

Also, it still has the same clicks every 25 seconds as before in Ripple.

How to change region to US

in the Apple App Store, in order to make "Ripple - Music Creation Tool" (SAMI-Bytedance) work on iOS.

https://support.apple.com/en-gb/HT201389

- Bas' guide to change region to US for Ripple on iOS

https://www.bestrandoms.com/random-address-in-us

Or use this Walmart address in Texas, the number belongs to an airport.

Do it in App Store (where you have the person-icon in top right).

You don't have to fill credit cards details, when you are rejected,

reboot, check region/country... and it can be set to the US already.

Although, it can happen for some users that it won't let you download anything forcing your real country.

"I got an error because the zip code was wrong (I did enter random numbers) and it got stuck even after changing it.

So I started from the beginning, typed in all the correct info, and voilà"

If ''you have a store credit balance; you must spend your balance before you can change stores''.

It may need an (old?) SIM card to log your old account out if necessary

Ripple on Windows or MacOS

- Another way to use Ripple without Apple device -

virtual machine

Sideloading of this mobile iOS app is possible on at least M1 Macs.

- Saucelabs

Sign up at https://saucelabs.com/sign-up

Verify your email, upload this as the IPA: https://decrypt.day/app/id6447522624/dl/cllm55sbo01nfoj7yjfiyucaa

The rotating puzzle captcha for the TikTok account can be taxing due to low framerate. Some people can do it after two tries; others will sooner run out of credits or be completely unable to do it.

- https://mobiledevice.cloud/

Mobile device cloud

- Scaleway

"if you're desperate you can rent an M1 Mac on scaleway and run the app through that for $0.11 an hour using this https://github.com/PlayCover/PlayCover

IPA file:

https://www.dropbox.com/s/z766tfysix5gt04/com.ripple.ios.appstore_1.9.1_und3fined.ipa?dl=0

"been working like a dream for me on an M1 Pro… I've separated 20+ songs in the last hour"

More info:

-https://cdn.discordapp.com/attachments/708579735583588366/1146136170342920302/image.png

- “keep in mind that the vm has to be up for 24 hours before you can remove it, so it'll be a couple bucks in total to use it”

Fixing chunking artefacts (probably fixed)

- Every 8 seconds there is an artifact of chunking in Ripple. Heal feature in Adobe Audition works really well for it:

https://www.youtube.com/watch?v=Qqd8Wjqtx-8

-The same explained on RX10 example and its Declick feature:

https://www.youtube.com/watch?v=pD3D7f3ungk

Volcengine (a.k.a. The sami-api-bs-4track - 10.8696 SDR Vocals)

https://www.volcengine.com/docs/6489/72011

Ripple/SAMI Bytedance's API was found. If you're Chinese, you can go through it easier -

you need to pass the Volcengine facial/document recognition, apparently only available to Chinese people

We already evaluated its SDR, and it even scored a bit better than Ripple itself.

"API from volcengine only return 1 stem result from 1 request, and it offers vocal+inst only, other stems not provided. So making a quality checker result on vocal + instrument will cost 2x of its API charging.

Something good is that volcengine API offers 100 min free for new users"

API is paid 0.2 CNY per minute.

It takes around 30 seconds for one song.

It was 1.272 USD for separating 1 stem out MVSEP's multisong dataset (100 tracks x 1 minute).

"My only thought is trying an iOS Emulator, but every single free one I've tried isn't far-fetched where you can actually download apps, or import files that is"

So far, Ripple hasn't beaten voc_ft (although there might be cases where it's better) or Dango.

Samples we got months ago are very similar to those from the app, also *.models files have a SAMI header and MSS in the model files (which use their own encryption), although processing is probably fully reliant on external servers as the app doesn't work offline (also the model files are suspiciously small - a few megabytes, although that's typical for mobilenet models). It's probably not the final iteration of their model, as they allegedly told someone they were afraid that their model would leak, but it's better than the first iteration judging by SDR, even with lossy input files.

Later they said that it's a different model than the one they previously evaluated, and that this time it was trained with lossy 128kbps files due to some “copyright issues”.

"One thing you will notice is that in the Strings & Other stem there is a good chunk of residue/bleed from the other stems; the drum/vocal/bass stems all have very little to no residue/bleed" - this doesn't occur in all songs.

It's fully server-based, so they may be afraid of heavy traffic if they publish Ripple worldwide, and it's not certain whether that will happen.

Thanks to Jorashii, Chris, Cyclcrclicly, anvuew and Bas, Sahlofolina.

Press information:

https://twitter.com/AppAdsai/status/1675692821603549187/photo/1

https://techcrunch.com/2023/06/30/tiktok-parent-bytedance-launches-music-creation-audio-editing-app/

Site:

https://www.ripple.club/

About ByteDance

Winners of the MDX23 competition. They said at the beginning that it utilizes a novel arch (so no weighting/ensembling of existing models). In the times of v.0.1, seemingly the best vocals, not so good instrumentals, as was once said by someone who heard samples, but they have come a long way lately. It's all about their politics. It's a Chinese company responsible for TikTok, famous for d**k moves outside China - manipulating their algorithms to encourage stupidity outside China while keeping greedy, wellness-centered attitudes for users in China, manipulating their algorithms to promote only black-white relationships in western countries, spying on users by copying their clipboard, spying even on journalists to find their sources of information about the company, and also being subject to bans in some countries for bad influence on children, or data infringement by storing non-China users' data directly on their servers, which is against the law of many countries. Decompiling TikTok analysis (tons of spying and improper behavior of the app). Currently, Bytedance is only around 40% owned by founders, Chinese investors, and their employees, and the rest (60%) is held by global investors (incl. lots of American ones).

They said the CEO told them to keep this ByteDance arch to themselves for two years. Initially they had plans to release it in some kind of app, firstly at the end of June; later something was planned for the end of the year, but now they said something about two years (maybe more about open sourcing, but we can't have our hopes high). Previously, they said the case of open sourcing/releasing was stuck in their legal department. Later they said they used MUSDBHQ+500 songs for their dataset. These 500 songs could have been obtained illegally for training (although everyone does it), but they might be extremely cautious about it (or it's just an excuse). Eventually, they released Ripple and Capcut.

Later, they seemingly spread information among users privately that, despite the similarities in SDR, the 18.75 score is a result of trolling by someone other than ByteDance. Some people favoring ByteDance were rumored to show disruptive, trolling behavior on our server too, harassing other users, or just being unkind to others, etc. Besides, the same person responsible was also the most informed about ByteDance's next moves, and was also changing nicknames or accounts frequently. They also possessed great ML knowledge. Many coincidences. In the end, the same user, zmis (if you see the details of the account above), was behind a lot of newly created accounts which were banned on our server.

The same day, or in a very similar period, a new account conducting the same behavior was created when the previous one was banned.

The core of their activity was spreading misinformation about SDR metrics, claiming it is the most important thing in the world, because their own arch is good at it - hence the narrative.

So don't bother, and do your good work without feeding a troll from another company. They don't like competition, making their own moves behind the scenes to get ahead.

It’s not impossible to fake SDR results on the MVSEP leaderboard. For current public archs, you’d need to feed your dataset with the songs from the evaluation dataset (keeping your regular big dataset in place), so the leaderboard simply loses its evaluation value, or you can simply mix your result stems with the original stems. The results which are not faked are at least those uploaded by various users evaluating the same public models available for offline use, usually uploaded with different parameters which affect SDR (usually the higher the parameters, the higher the SDR, but not always); such results remain consistent across various users' evaluations with similar parameters, so scalability is correct and preserved, thus those results weren't faked. For the other scores from non-public inferences/models/methods, we simply trust ZFTurbo, and rather viperx too, as they're/were our trusted users for years. Also, the leaderboard on the current multisong dataset has tended to give better SDR to results with more residues on different occasions before, so the chart is simply not fully reliable in that regard, but it's rather not manipulated at its core either.

ViperX currently possesses his own trained, private BS-Roformer model similar to the SAMI v1.0 model, but it's not planned for public release (he was offered 5K songs by ZFTurbo for expanding his dataset, but he refused the offer). On the bright side, his model sounds similar to Ripple (although it's probably only 2 stem, while the 4 stem Ripple variant scores a bit higher than the 2 stem variant, but still lower than ViperX and v1.0).

Single percussion instr. separation

If you want to further separate single instruments from drums stem separated with e.g. MDX23 or Demucs_ft to: hihat, cymbals, kick, snare, you might want to check below solutions.

From free ones, there's a Demucs model called -

drumsep 

- Fixed Colab or Kubinka Colab (you can provide direct links there)

- Available on MVSEP.com

(Use these solutions instead of the GitHub Colab, as the model's GDrive link from the OG GitHub Colab is currently deleted, so drumsep won’t work correctly unless you replace the GDrive model link with the .th model reupload:

https://drive.google.com/file/d/1S79T3XlPFosbhXgVO8h3GeBJSu43Sk-O/view)

- Windows installation - execute the following:

demucs --repo "PATH_TO_DrumSep_MODEL_FOLDER" -n modelo_final "INPUT_FILE_PATH" -o "OUTPUT_FOLDER_PATH"

- You can also use drumsep in UVR 5 GUI

(beside using in fixed Colab or in CML):

Go to UVR settings and open application directory.

Find the folder "models" and go to "demucs models" then "v3_v4"

Copy and paste both the .th and .yaml files, and it's good to go.

Be aware that stems will be labelled wrong in the GUI using drumsep.

It's much more sensitive to shifts than overlap, where above 0.6-0.7 it can become placebo. Consider testing it with shifts 20.

But some people find using shifts 10 and overlap 0.99 better than shifts 20 and overlap 0.75.

Just be aware, that if you’re willing to wait, you can further increase shifts to 20 if you want the best of both worlds.

Also, consider testing it with -6 semitones e.g. in UVR 5.6/+, or with 31183Hz sample rate with changed tempo.

-12 semitones from 44100 Hz is 22050 Hz and should rather be less usable in most cases; the same goes for tempo preservation - it should be off.
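For reference, the 31183 Hz figure above is just 44100 Hz shifted down by 6 semitones via sample-rate relabelling:

sr = 44100
print(round(sr / 2 ** (6 / 12)))    # 31183  (-6 semitones)
print(round(sr / 2 ** (12 / 12)))   # 22050  (-12 semitones)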

Be aware that sometimes it can “consistently put hi hats in snare stem” and can contain some artefacts, and results might not null with the source.

“From what I've tested (on drums already extracted with demucs4_ft from a live band recording from the output of the soundboard... so shitty sounding!), It is quite good at separating cymbals from shells, and kick from snare, but there are parts of kick or snare sounds that can go into the toms stem (...it's easy to fix manually in a DAW)”

"Ok I did test it.

- You're right, Drumsep is good if shifts are applied, this makes a HUGE difference, first time i did test it with 0 or 1 shift and results were meh.  Shifts (from about 5/6/10 depending on source) clean it nicely.

Minuses: only 4 outputs. Not enough for a lot of drumtracks (but hey you can Regroove results, and this is what i will be doing probably from now) - It takes a long time with a lot of shifts, - it doesnt null with original tracks

- Regroove allows me more separations, especially when used multiple times, so as a producer it allows me to remove parts of kicks, parts of snares etc, noises etc. More deep control. Plus it nulls easily (it always adds the same space in front) so I can work more transparently.

But you're right, I will use drumsep in the Colab with a lot of shifts as a starting point in most cases now."

"It's trained with 7 hours of drum tracks that I made using sample-based drum software like Adictive Drums, trying to get as many different-sounding drums as I could. As everything was controlled with MIDI, I could export the isolated bodies: kick, snare, toms (all on one track), and cymbals (including hi-hat). So every dataset example is composed of kick, snare, toms, cymbals, and the mixture (the sum of all of them)." - the author

Among paid solutions for separating drum sections, there is mainly FactorSynth; other alternatives are more problematic or less polished.

Use free zero shot for separating single other instruments from e.g. others stem from Demucs or GSEP.

FactorSynth

Since version 3 it has been available in the form of a plugin for most DAWs. The demo runs for 20 minutes at a time; exporting and component editing are disabled.

Up to v2 it was an Ableton-only compatible add-on, and (probably) could be used on free Ableton Live.

Also, not for separating drums from a full mix, but for separating your already separated drums into further layers like kick, snare, transients, cymbals, etc. from Demucs or GSEP (the latter usually has better shakers and at least hi-hats when they're in fast tempo).

[Till v2 the demo version limit was 8 seconds, and there was no limit in the full version.] “It’s amazing”.

It works the same way as Regroover VST (which may have some problems with creating a trial account).

It’s comparable or better quality (both better than zero shot for at least drums).

“Factorsynth has more granularity, but drumsep is easier to work with and gets less confused between toms and kicks.”

Regroover

Regroover only works on 30-second chunks, and they require manual alignment due to phasing issues - additional silence is added at the beginning and end.

“Get your 30-second drum clip, then drag and drop it into Regroover.

Make sure to de-select the Sync option, as it will time stretch it by default.

On the right-hand side, I recommend changing the split to 6 layers instead of 4, simply for flexibility.

Once it has processed that, you can choose export -> layers."

There was a report that probably newer versions might not be feasible for this task anymore.

In other words:

It’s much more hassle to use it than drumsep, but it’s very good “if you need a particular sound and it's not about the pattern etc.

1. Separate drums from the whole track (Demucs).

2. Cut the drum track into max 30-second cuts [Regroover limit]; ideally don't cut right on a transient, some space before the kick helps.

3. Use Regroover for the first time and, for example, try to separate into 4 tracks, just for an overall separation.

4. Those separations sum exactly to what was given; sometimes they just need to be realigned by a few ms.

5. And if, for example, the kick still has some unneeded parts, you just regroove it once again.

If you are looking for something overall fast and for patterns, drumsep. Regroover is for painful but precise jobs. Also, in most cases hi-hats are trash, but for snares and kicks you can often find perfectly usable ones. I'm not sure about metal, but overall.”

UnMixingStation

"Very, very old and almost impossible to find, but the separations are 95% close to Regroover". The software is 13 years old, and their site is down, and the tool doesn’t seem to be available to buy anywhere.

LarsNet

Added on MVSep. Colab. Source: https://github.com/polimi-ispl/larsnet

It separates previously separated drums into 5 stems: kick, snare, cymbals, toms, hihat.

It’s worse than Drumsep as it uses Spleeter-like architecture, but “at least they have an extra output, so they separate hihats and cymbals.”. Colab

“Baseline models don't seem better quality than drumsep, but the provided checkpoints are trained with only 22 epochs, which doesn't seem much (and the STEMGMD dataset was limited to only 10 drumkits), so it could probably be better with a better dataset & training”

Similar situation as with Drumsep - you should provide drums separated from e.g. Demucs model.

There’s also Zynaptiq Unmix Drums, but it’s not exactly a separation tool, I’d say.

- For only kick and hi hat separation now free -

VirtualDJ 2023/Stems 2.0 (kick, hi-hat)

Using drums from Demucs 4 or GSEP first will probably give better results, but it's not perfect. In many cases it may leave a little snare bleeding in both the hi-hat and kick tracks. Sadly, it sometimes confuses these elements of a mix.

 

"If you are not using it professionally, and do not use any professional equipment like a DJ controller, or a DJ mixer, then VirtualDJ is (now) FREE".

RipX DeepAudio (-||-) (6 stems [piano, guitar])

Popular tool. Decent results for separating specific drum sections (but as for vocal/instrumental/4 stem separation, all the tools mentioned at the very top of the document outperform RipX, so use it only for specific drum section separation, at best using Demucs 4 or GSEP for the drums stem first).

"It can separate a file into a buncha things into a lot more types of instruments than just the basic 4 stems (with varying degrees of success ofc).

Might be a case that old cracked versions of RipX don't allow separating drums sections well, or just the opposite - check both the newest version and Hit'n'Mix RipX DeepAudio v5.2.6, but probably the latter doesn't support separating single drums yet.

It’s basically UVR but with their custom models + SFX single stem

It's good for guitar, but not in all cases (possibly Demucs for 4 stems).

Piano and guitar models were added recently (somewhere around January 2023)

- Hit 'n' Mix RipX DAW Pro 7 released. For GPU acceleration, the minimum requirement is 8GB VRAM and a 10XX card or newer (those mentioned in the official document are: 1070, 1080, 2070, 2080, 3070, 3080, 3090, 40XX). Additionally, for GPU acceleration to work, exactly Nvidia CUDA Toolkit v.11.0 is necessary. Occasionally, during the transition from some older versions, the separation quality of harmonies can increase. Separation time with GPU acceleration can decrease from even 40 minutes on CPU to 2 minutes on a decent GPU.

They say it uses Demucs.

Spectralayers 10

Received an AI update; they no longer use Spleeter but Demucs 4 (6s), and they now also have good kick, snare, and cymbal separation too. Good opinions so far. Compared to drumsep, sometimes it's better, sometimes it's not. Versus MDX23 Colab V2, instrumentals sometimes sound much worse, so rather don’t bother for instrumentals.

USS-Bytedance (any; esp. SFX)

https://github.com/bytedance/uss

(COMMAND: "conda install -c intel icc_rt" SOLVES the LLVM ERROR)

You provide e.g. a sample of any instrument or SFX, and the AI separates it solo from a song or movie fragment you choose to separate.

It works in mono. Process the right and left channels separately.

ByteDance USS with Colab by jazzpear94

https://colab.research.google.com/drive/1lRjlsqeBhO9B3dvW4jSWanjFLd6tuEO9?usp=share_link

Probably mirror (fixed March 2024):

https://colab.research.google.com/drive/1f2qUITs5RR6Fr3MKfQeYaaj9ciTz93B2

It works (much) better than zero-shot (not only “user-friendly wise”).

Better results, and It divides them into many categories.

Great for isolating SFX', worse for vocals than current vocal models. Even providing acapella didn't give better results than current instrumental models. It just serves well for other purposes.

"Queries [so exemplary samples] for ByteDance USS taken from the DNR dataset. Just download and put these on your drive to use them in the Colab as queries [as similarly sounding sounds from your songs to separate]."

https://www.dropbox.com/sh/fel3hunq4eb83rs/AAA1WoK3d85W4S4N5HObxhQGa?dl=0

Also, grab some crowd samples from here:

https://youtu.be/-FLgShtdxQ8

https://youtu.be/IKB3Qiglyro

https://youtu.be/Hheg88LKVDs

Q&A by Bas Curtiz and jazzpear

Q: What is the difference between running with and without the usage of reference query audio?

A: Query audio lets you input audio for it to reference and extract similar songs based upon (like zeroshot but way better) whereas without a query auto splits many stems of all kinds without needing to feed it a query.

Q: Let's say there is this annoying flute you wanna get rid off...

and keep the vocals only....

You feed a snippet of the flute as reference, so it tries to ditch it from the input?

A: Quite the reverse. It extracts the flute only which ig you could use to invert and get rid of it

Zero Shot (any sample; esp. instruments)

(as USS Bytedance is out now, zero shot can be regarded as obsolete, although zero-shot is rather better for single instruments than for SFX)

You provide e.g. sample of any trumpet or any other instrument, and AI returns it from a song you choose to separate.

Guide and troubleshooting for local installation (get Discord invitation in footer first if necessary).

Google Colab troubleshooting and notebook (though it may not work at times when GDrive link resources are out of download limit, also it returns some torch issues after Colab updates in 2023).

Check out also this Colab alternative:

https://replicate.com/retrocirce/zero_shot_audio_source_separation

It's faster (mono input required).

Official GitHub page.

Also available on https://mvsep.com/ in the form of 4 stems without custom queries, and it’s not better than Demucs in this form.

"Zero shot isn't meant to be used as a general model, that's why it accelerates on a specific class of sounds with some limitations in mind.... It mostly works the best when samples match the original input mixture, of course there are limitations"

"You don’t have to train any fancy models to get decent results [...] And it’s good at not destroying music". But it usually leaves some vocal bleeding, so process the result using MDX to get rid of these low volume vocals. Zero-shot is also capable of removing crowd noise from recordings pretty well.

As for drums separation, like for snares, it’s not so good as drumsep/FactorSynth/RipX, and it has cutoff.

"I did zero shot tests a week or two ago and it was killing it, pulling harmonica down to -40dB, synth lines gone, guitars, anything. And the input sources were literally a few seconds of audio.

I've been pulling out whole synths and whistles and all sorts.

Knocks the wind model into the wind, zero shot with the right sample to form the model backbone works really well

The key is to give it about 10 seconds of a sample with a lot of variation, full scales, that kinda thing"

Special method of separation by viperx (ACERVO DOS PLAYBACK) edited by CyberWaifu

Process music with Demucs to get drums and bass.

Process music with MDX to get vocals.

Separate left and right channels of vocals.

Process vocal channels through Zero-Shot with a noise sample from that channel.

Phase invert Zero-Shot's output to the channel to remove the noise.

Join the channels back together to get processed vocals.

Invert the processed vocals to music to get the instrumental.

Separate left and right channels of instrumental.

Process instrumental channels through Zero-Shot with a noise sample from that channel.

Phase invert Zero-Shot's output to the channel to remove the noise.

Join the channels back together to get processed instrumental.

Process instrumental with Demucs to get other.

Combine other with drums and bass to get better instrumental.

So it sounds like Zero-Shot is being used for noise removal.

As for how Zero-Shot and the noise sample works...

Medley Vox (different voices)

Local installation video tutorial by Bas Curtiz:

https://youtu.be/VbM4qp0VP8

Cyrus version of MedleyVox Colab with chunking introduced, so you don't need to perform this step manually:

https://colab.research.google.com/drive/1StFd0QVZcv3Kn4V-DXeppMk8Zcbr5u5s?usp=sharing

Use already separated vocals as input (e.g. by voc_ft or MDX23C fullband a.k.a. 1648 in UVR or 1666 on MVSEP).

“Run the 1st cell, upload song to folder infer_file, run 2nd cell, get results from folder results = profit”

Currently, we have a duet/unison model 238 (default in Colab),

and main/rest 138 to uncomment in Colab.

Recommended model is located in vocals 238 folder (non ISR-net one).

While:

“The ISR_net is basically just a different type of model that attempts to make audio super resolution and then separate it. I only trained it cuz that's what the paper's author did, but it gives worse results than just the normal fine-tuned.”

The output for 238 has a 24kHz sample rate (so a 12kHz cutoff in Spek).

You might want to upscale the results using:

https://github.com/haoheliu/versatile_audio_super_resolution (it gives decent results for this model).

https://replicate.com/nateraw/audio-super-resolution

https://colab.research.google.com/drive/1ILUj1JLvrP0PyMxyKTflDJ--o2Nrk8w7?usp=sharing

Be aware that it may not work with full length songs (you might need to divide them into smaller 30 s pieces).

The output is mono.

You might want to create a "fake stereo" as input by copying the same channel to both channels, then do the same with the other channel, and then create the stereo result from both channels processed separately in dual mono with MedleyVox.

The AI will create a downmix from both input channels instead of processing channels separately.

Be aware that “dual mono processing with AI can often create incoherencies in stereo image (like the voice will be recognized in some part only in left channel and not the other, as they are processed independently)” jarredou
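A hedged sketch of that dual-mono workflow (file names are placeholders; it assumes MedleyVox outputs mono files of roughly equal length):

import numpy as np
import soundfile as sf

vocals, sr = sf.read("separated_vocals.wav")                 # (samples, 2) stereo input
for ch, name in [(0, "left"), (1, "right")]:
    mono = vocals[:, ch]
    sf.write(f"input_{name}.wav", np.stack([mono, mono], axis=1), sr)   # "fake stereo"

# ...run MedleyVox on input_left.wav and input_right.wav...

left, out_sr = sf.read("result_left.wav")                    # mono results
right, _ = sf.read("result_right.wav")
n = min(len(left), len(right))
sf.write("result_stereo.wav", np.stack([left[:n], right[:n]], axis=1), out_sr)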

"The demos sound quite good (separating different voices, including harmonies or background [backing] vocals)"

It's for already separated or original acapellas.

Original repo (Vinctekan fixed it - the video at the top contains it)

https://github.com/jeonchangbin49/medleyvox

Old info

https://media.discordapp.net/attachments/900904142669754399/1050444866464784384/Screenshot_81.jpg

Colab old

https://colab.research.google.com/drive/17G3BPOPBPcwQdXwFiJGo0pKrz-kZ4SdU

Older Colab

https://colab.research.google.com/drive/1EHJFBSDd5QJH1FQV7z0pbDRvz8yXQvhk

(The same one, but here you need to change the .ckpt, .json and .pth files there from Cyrus [more details in the video above].)

The model is trained by Cyrus. The problem is, it was trained with 12kHz cutoff… “audiosr does almost perfect job [with upscaling it] already but the hugging page doesn’t work with full songs, it runs out of memory pretty fast”.

At some point it seemed possible that later stages of the training, which looked like overfitting, were responsible for the higher frequency output.

It’s sometimes already better than BVE models, and the model already gives results similar to the demos on their site.

Sadly, the training code is extremely messy and broken, but a fork by Cyrus with instructions is planned, along with releasing datasets, including the one behind a geo-lock. The datasets are huge and heavy.

____________________________________________________________________

About other services:

Check this chart by Bas Curtiz to see which AIs various (also online) services use, plus their pricing.

At this point, everything mentioned above this link - at least for instrumentals, vocals, and 4-6 stems - is better than the commonly known services below (with exceptions for some single stems described at the top):

Spleeter

and its implementation in:

Izotope RX-8/9/10

which just uses 22kHz models instead of 16kHz in the original Spleeter. There is no point in using these anymore. The same goes to most AIs described below (or only for specific stems):

moises.ai (3 EU/month)

voiceremover.org, lalal.ai,

phonicmind
melody.ml

RipX, Demix,

ByteDance

For reference, you can check a comparison chart on MVSEP.com,

or results of the demixing challenge from Sony (kimberley_jensen there is the 9.7 MDX-UVR model for vocals - 2nd best at the time)

and watch this comparison.

To hear 4 stems models comparison samples you can watch this video comparison (December 2022).

It all also refers to new

real-time

AI separation tools like

Serato

and

Stems 2.0

tensorflow model (which can be found in newer Virtual DJ 2023 versions, now free for home users - better than Serato and Spleeter implementations) - they do not perform better than the best offline solutions at the very top of the document. But “Esp. since it's on-the-fly [...] results are more than decent (compared to others).”

Acon Digital Remix

(Vocals, Piano, Bass, Drums, and Other)

“Just listened to the demo, not great [as for realtime] but still”

Others

FL Studio (Demucs)

It’s actually not realtime. It takes some time to process tracks first (hence maybe it’s the best out of the three).

It's Demucs 4, but maybe not the ft model and/or with low parameters applied, and/or it's their own model.

"Nothing spectacular, but not bad."

"- FL Studio bleeds beats, just like Demucs 4 FT

- FL Studio sounds worse than Demucs 4 FT

- Ripple clearly wins"

djay Pro 5.x

“very good realtime stems with low CPU” Allegedly “faster and better than Demucs, similar” although “They are not realtime, they are buffered and cached.” but it’s very fast anyway. It uses AudioShake. It can be better for instrumentals than UVR at times.

Neutone VST

Has Demucs model to use in realtime in a DAW

(it uses a light, “retrained, smaller version” of Demucs_mmi)

https://neutone.space/

https://neutone.space/models/1a36cd599cd0c44ec7ccb63e77fe8efc/

It doesn't use the GPU, and it's configured to be fast with very low parameters; also, the model is not the best on its own. It doesn't give decent results, so it's better to stick to other real-time alternatives. It won't work correctly on a low-end CPU, breaking up the audio and giving an inconsistent stream with gaps.

- Service rebranded to

Fadr.com from SongtoStems.com

is just Demucs 4 HT, but paid.

"My assumption, Fadr uses Gain Normalize [for instrumentals] was right [...].

Demucs 4 HT seems to get a cleaner result. The rest = practically identical." And someone even said that vocals in VirtualDJ with Stems 2.0 had less artifacts on vocals.

Apple Music Sing

“I heard a few snippets, and what stood out is, whether intentional or not, the vocals remained in the background just enough to actually hear them.

Now that could be great for Karaoke, so u have a kind of lead to go on.” but as for just instrumentals, it’s bad.

____________________________________________________________________

Music to MIDI transcribers/converters

https://github.com/magenta/mt3

https://colab.research.google.com/github/magenta/mt3/blob/main/mt3/colab/music_transcription_with_transformers.ipynb

https://basicpitch.spotify.com/

“Tried Basic-Pitch and It is way worse than MT3 as It produces midi tracks without an identifier.”
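
If you still want to try Basic Pitch locally, it also ships as a Python package with a one-call API (a minimal sketch, assuming pip install basic-pitch; the file names are placeholders):

# Minimal Basic Pitch sketch (pip install basic-pitch); input/output paths are placeholders.
from basic_pitch.inference import predict

# predict returns the raw model output, a PrettyMIDI object and the detected note events.
model_output, midi_data, note_events = predict("vocals.wav")
midi_data.write("vocals.mid")    # save the transcription as a MIDI file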

Good results for piano:
https://colab.research.google.com/notebooks/magenta/onsets_frames_transcription/onsets_frames_transcription.ipynb

If you have notes:

musescore

Piano2Notes

(notes and midi output, paid, 30 seconds for free, very good results)

Audioshake

Paid, $16 per WAV stem; 2 or 5 stems (6 with guitar and piano?), or 4 stems for preview (Indie creators)

Better piano model than GSEP.

"gsep piano model is very clean but sometimes fails in bigger mix, when there are a lot of instruments"

And also guitar stem

Audioshake is suspected of being just MDX with an expanded dataset, but there's no evidence at the moment. Compared to the UVR/MDX-UVR NET 1 model, the vocal stem is 9.793 vs 9.702 in free MDX-UVR, so they're close as far as vocals go.

Their researcher said they were training a UMXHQ model around the time of the 2020 Demixing Challenge.

Free Demucs 3 has a much better SDR for drums and bass than Audioshake, however the SDR for vocals and others is worse.

It accepts only non-copyrighted music for separation, but you can slow the track down to circumvent the detection (some music, like K-Pop BTS, is not detected); changing the speed to 110% yields better results, even vs reversing the track.

The upload limit is one minute, so theoretically you can cut and merge chunks, but AS will fade out each chunk, so you need to find a specific overlap to begin every next chunk with in order to merge the chunks seamlessly (I don't remember if it solves the problem of the AS watermark, though).
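
If you go the cut-and-merge route, something like the sketch below can slice a track into sub-minute chunks with a few seconds of overlap, so you have material to crossfade over AS's fade-outs when gluing the returned stems back together (a sketch only; the chunk and overlap lengths are guesses you will have to tune):

# Hypothetical chunking sketch for the cut-and-merge workaround; lengths are guesses to tune.
import soundfile as sf

CHUNK = 55.0     # seconds of unique audio per chunk (the upload limit is about one minute)
OVERLAP = 5.0    # extra seconds kept at the end of each chunk to crossfade over the fade-out

audio, sr = sf.read("song.wav")
step, tail = int(CHUNK * sr), int(OVERLAP * sr)
start, i = 0, 0
while start < len(audio):
    end = min(start + step + tail, len(audio))
    sf.write(f"chunk_{i:02d}.wav", audio[start:end], sr)
    start += step
    i += 1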

Then, you can download the preview (chunk) for free using a method similar to the one described in the allflac section (Chrome Dev Tools -> Network -> set filter to amazon), but the result file is unfortunately only a 128kbps mp3.

They are now limiting how many audio files you can upload to preview, but that can easily be mitigated by just using a temporary email provider or adding “+1” or “+2” or “.” to your gmail address, so you will still receive your email e.g. y.o.u.r.m.a.i.l.@gmail.com is the same for Google as yourmail@gmail.com.

You can also ping txmutt, Baul Plart or Bas Curtiz in #request-separation to isolate some file and make this all easily just for you (edit. 09.02.2023 - at least the Bas’ tool stopped working, so the rest like AS Tool might be dead too - at least in terms of API access).

Lalal.ai

7 stem

Acoustic and electric guitar models, piano, bass, drums and vocal with instrumental (for 2 stem UVR/MDX should do the job better)

Online service with 10 minutes/50MB per file limitation per free user.

“I love Demucs 3, although for some specific songs (with a lot of percussion and loops) I still find lalal better.

Demucs is great at keeping punchy drums, for example hip-hop, rap, house etc songs”

“lalal is[n’t] worth it anymore, most of their models like strings or synths are crap and don't work at all” ~becruily

How to… abuse it. It doesn't always work for everyone, and sometimes you'll receive only 19-second snippets.

Go to the signin/register screen and use a temp email from https://tempail.com/

When you are in, make sure you use the settings with a P icon, P meaning Phoenix, which seems to be some hybrid mvsep lalal shit they made

I'd recommend making the processing level normal, although you can play around with the settings to see what sounds better

They will later process it and since lalal has shorter queues, you get them faster. It took me like 10 seconds to get a preview for a song and 20 seconds for full which is wild.

You will get a sample and if you like it, you can submit it and get your stems!"

You can also use dots in Gmail addresses, instead of +1 (and similar) at the end, which is unsupported in lalal. You'll receive your email with dots in its username anyway, and it will be treated as a separate email by their system.

Their app uploads input files to separate on external servers.

DeMIX Pro V3

Paid, 6 stem model, trial

Official site:

https://www.audiosourcere.com/demix-pro-audio-separation-software/

https://www.demixer.com/?utm_source=audiosourcere&utm_medium=pop&utm_campaign=exit&utm_term=asre-exit-pop

Paid: $33/month, x10 for a year, or x2.5 for a permanent license; a 7-day trial is available

https://www.audiosourcere.com/demix-pro-audio-separation-software/

Vocal, Lead Vocal, Drum, Bass & Electric Guitar

https://www.demixer.com/ has the same models implemented, though they currently don't even mention that a guitar model is available - but when you log in, it's there. The guitar might be a bit worse than RipX (not confirmed)

“audioshake has the best guitar model (its combined [paid only]), second place is deemix pro (electric guitar)”

"Demix launched a new v4 beta, and it can now process songs locally + new piano and strings models

the piano model is not bad at all, it sounds a bit thin/weak, but it detects almost all notes

hadn't found good songs to test the strings model yet, but it might be good too"

Hit'n'Mix RipX DeepAudio

Moises.ai

https://moises.ai/

Not really good models; no previews for premium features.

“also has a guitar and a b.v. model, and a new strings model, but it's not that good, in my opinion it is not worth buying a premium account.

4-STEM model is something like demucs v2 or demixer.

B.V. model is worse than the old UVR b.v..

GUITAR model is not really good, it's probably MDX, it has a weird noise, and it tries to take the "guitar" where is not at all. It takes acoustic and electric guitar together.

PIANO model is just splitter, maybe better at some songs.

STRINGS model is interesting, It's good for songs with orchestra, but still not that clean

Their service is very interesting, and the appearance of their site is clear and simple, but the models have better competitors.” thx, sahlofolina.

Byte Dance

available on https://mvsep.com/

“This algorithm took second place in the vocals category on Leaderboard A in the Sony Music Demixing Challenge. It's trained only on the MUSDB18HQ data and has potential in the future if more training data is added.

Quality metrics are available here (SDR evaluated by his authorship non-aircrowd method):

https://mvsep.com/quality.php

Demos for Byte Dance: https://mvsep.com/demo.php?algo=16 “

(8.08 SDR aicrowd for vocal)

MDX-UVR SDR vocal models (kimberley_jensen a.k.a. KimberleyJSN) were evaluated on the same dataset as ByteDance above (AIcrowd):

https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/leaderboards?challenge_round_id=886&challenge_leaderboard_extra_id=869&post_challenge=true

https://discord.com/channels/708579735583588363/887455924845944873/910677893489770536

and presumably the same goes for GSEP and their very first vocal model (10 SDR), since their chart showed the same ByteDance SDR score as on AIcrowd.

___UVR settings for ensemble (section deprecated, see the section above)__

Ensemble can provide different results from one current main model, but not especially better in all cases, so it’s also a matter of taste and conscious evaluation.

  • Aggressiveness shouldn’t be set to more than 0.1

(also check 0.01)

  • high_end_process: bypass (official recommendation) or mirroring 2 (in some cases)
  • In most cases, you shouldn’t use more than 4 models to not decrease the quality (developer recommendation)

Don't use postprocessing in HV Colab for ensemble (doesn't work).

Other recommended models for ensemble:

HP2-4BAND-3090_4band_arch-500m_1.pth,

HP2-4BAND-3090_4band_arch-500m_2.pth

(+new 3 band?)

as they are currently the best (15.08.21), but feel free to experiment with more (I also used old MGM beta 1 and 2 with the two above,

some people also used vocal models, and later there was also the HP2-MAIN-MSB2-3BAND-3090_arch-500m model released, which gives good results solo).

___Good UVR acapella models______

In general, it’s better to use MDX-UVR models for clean acappellas, but for UVR, these are going to be your best bet:

- Vocal_HP_4BAND_3090 - This model will come out with less instrumental bleed.

- Vocal_HP_4BAND_3090_AGG - This is a more aggressive version of the vocal model above.

“If you wanna remove the vocals but keep the backing vocals, you can use the latest BV model”

HP-KAROKEE-MSB2-3BAND-3090.pth

(HV)

For clean vocal, you can also use ensemble with following models:

https://cdn.discordapp.com/attachments/767947630403387393/897512785536241735/unknown.png

(REUim2005)

__How to remove artefacts from an inverted acapella?_____

This section is old, and “cleaning inverts” in current models section can provide more up-to date solutions.

      0) Currently, GSEP is said to be the best in cleaning inverts. But at least for vocal you can use some MDX model like Kim, or even better MDX23 from MVSEP beta.

  1. by charm

(rather outdated) Use Vocal_HP_4BAND_3090_arch-124m.pth at 0.5 aggressiveness, tta enabled

then use any model u like with Vocal_HP_4BAND_3090_arch-124m.pth instrumental results to filter out any vocals that weren't detected as vocals with Vocal_HP_4BAND_3090_arch-124m.pth model

combine the two results

then use model ensemble with whatever models u like (i used HP2 4BAND 1 and 2)

drag both the vocal hp 4band+another model result and the ensemble result into Audacity

use the amplify effect on both tracks and set it to -6.03 dB (roughly halving each track, so their sum is an average of the two - see the sketch at the end of this section)

render

then use StackedMGM_MM_v4_1band_arch-default.pth

tbh vocal models even at 0 aggressiveness really help inverts

Or 0.3

I mostly use acapellas for mashups and remixes, so the little bit of bleed i get at 0.0 aggressiveness is fine

drums-4BAND-3090_4band.pth

0.5 optionally (less metallic sound)

2) Utagoe (English version with guide) - if the invert isn't good, then try utagoe, but it’s not the best.

Settings for Utagoe by HV:

if your tracks don't invert perfectly (even when aligned)

https://cdn.discordapp.com/attachments/708579735583588366/874693316099330058/unknown.png

if it's perfectly inverting:

https://cdn.discordapp.com/attachments/708579735583588366/874693413545607178/unknown.png

“It has a weird issue sometimes tho, even when everything is perfectly aligned and inverts perfectly, utagoe misses some places, and it won’t insert for a second or so”
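
On the side: the "amplify both tracks by about -6 dB and combine" steps in charm's recipe above are just an average of the two results (-6.02 dB is a gain of 0.5). If you prefer to do that manual ensemble outside Audacity, a minimal soundfile/numpy sketch (file names are placeholders):

# Manual "average ensemble" of two separation results; -6.02 dB on each track = 0.5 gain.
import soundfile as sf

a, sr = sf.read("vocals_model_a.wav")
b, _ = sf.read("vocals_model_b.wav")
n = min(len(a), len(b))                  # trim to the shorter result before summing
sf.write("vocals_avg.wav", 0.5 * a[:n] + 0.5 * b[:n], sr)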

__Sources of FLACs for the best quality for separation process__

Introduction

  • Don’t use YouTube or mp3 as input files. Compression decreases the quality of the output (if you're forced to use YT, download audio only as Opus if it's available for your video).
  • If you want to verify if your input file is really lossless:

https://fakinthefunk.net/en

  • You can untick in Colabs export_as_mp3 (you can export your separations as WAV)

Various versions of the same song

Sometimes the track you're trying to isolate exists in a few versions, e.g.:

0) album version (e.g. on streaming services - sometimes both explicit and non-explicit album versions are available)

1) single version
2) deluxe edition/reissue/remastered (sometimes separated instrumentals from remastered versions can be crispier than leaked multitracks which are rarely even mastered)
3) vinyl rip
4) CD version - singles and albums (sometimes recent masters are made louder on CDs than on streaming services, leaving less dynamic range; in most cases such a CD will be worse for AI separation when it's mastered to -9 LUFS vs -14 LUFS for streaming).

5) Regional CD version (certain albums in the past used to have different releases for some countries, e.g. Japan, different track order, even slightly different mastering)

Be aware that CD or vinyl singles can contain instrumental or acapella versions of some tracks.

As for good quality music on streaming services, you can get 16-bit and 24-bit FLAC on Qobuz, or on Tidal possibly Master MQA 24-bit (but most of the Max (formerly Master) quality on Tidal is 16-bit MQA, and High (formerly Hi-Fi) is always FLAC 16-bit; MQA is lossy, but a 24-bit MQA file might give better results than a 16-bit FLAC).

Most importantly -
Feel free to experiment with different versions and find the best result with a specific version of your song.
If you have a seemingly identical FLAC Audio CD rip from before the streaming era (~<2013), it can happen that a lossless file taken from a streaming service is slightly different (same length, but slight changes in Spek across the whole track - differences which normally don't exist when comparing FLACs from various streaming services that share the same Audio MD5 checksums; sometimes the track also ends in a slightly different place). Sometimes it can sound better, sometimes worse (and we're talking about the situation where it's not a 16-bit MQA "Master" quality file like on Tidal [lots are 24-bit as well, though it's better to get those from e.g. Qobuz if 24-bit is available for the track, or at least compare both, because they can give slightly different results]).
Also, it can happen that 24-bit MQA on Tidal will sound better, for whatever reason, than a seemingly better FLAC on Qobuz - possibly due to different files being sent to the streaming services by the provider/label.

How to notice the difference between MQA and FLAC on a spectrogram in e.g. Spek: look at frequencies from 18kHz up (only in certain places) and, in all cases, at frequencies from 21kHz up. Press alt-tab between the two windows - don't hover your mouse between the previews of both windows; with alt-tab you'll notice the changes more easily. That way, you'll also notice CD rip vs streaming differences, if there are any.

Generally, MQA is the least lossy of the lossy codecs - you might consider picking it where its 24-bit variant is available over a regular 16-bit FLAC (separate the track using both, and you should notice any differences more easily if you can't already hear them in the mixtures/original songs).

6) 5.1 - DTS/DVD Audio/SACD (you can search Discogs to look for multichannel versions released on discs, e.g. the whole DTS Entertainment label)

7) 360 Reality - e.g. on Tidal, Apple Music (how to download is described below).

Sometimes in surround releases the vocals are simply in the center channel, but it's not always the case - still, it can be a better source, e.g. when you manipulate the volume of specific channels, or, for vocals, when you take only the center channel with very little instrumentation, which may turn out easier for the AI to separate (for the instrumental, you can possibly invert the result of a vocal model against the center channel to recover the remaining instrumentation in the center; see the sketch after this list).

E.g. "With the Dolby Atmos release of Random Access Memories some vocals and instrumentals can be separated almost like stems"

Or alternatively, you can simply downmix everything to stereo and then separate (just to check the outcome vs regular 2.0 versions).

8) DSD - if available; they are different masters and might be worth checking
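
As mentioned above, a minimal sketch of both surround ideas - pulling the center channel out of a decoded 5.1 file, or folding it down to stereo - assuming the common L, R, C, LFE, Ls, Rs channel order (check your file first; the coefficients and file names are only illustrative):

# Hypothetical sketch: extract the center channel from a decoded 5.1 WAV, or fold it down to stereo.
# Assumes the common L, R, C, LFE, Ls, Rs channel order - verify it for your source first.
import numpy as np
import soundfile as sf

surround, sr = sf.read("album_5_1.wav")          # expects shape (samples, 6)
L, R, C, LFE, Ls, Rs = (surround[:, i] for i in range(6))

sf.write("center_only.wav", C, sr)               # often mostly vocals, if the mixer put them there

# Simple stereo fold-down with the usual ~-3 dB coefficients (not gospel, tweak to taste).
stereo = np.stack([L + 0.707 * C + 0.707 * Ls,
                   R + 0.707 * C + 0.707 * Rs], axis=1)
stereo /= max(1.0, float(np.max(np.abs(stereo))))    # avoid clipping after summing channels
sf.write("stereo_downmix.wav", stereo, sr)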

Comparisons of various versions of the FLAC files on streaming services

Use Audio MD5 in Foobar > properties of the file (or download AudioMD5Checker) to avoid running in circles looking for various versions of the same track with the same length. Some FLAC files don't have MD5 checksums shown in F2K, so for those you'll need to download AudioMD5Checker.

E.g. on Tidal, Recovery by Eminem returns the same MD5 for the Deluxe and the regular album, but via https://free-mp3-download.net (Deezer) the checksums are different for both (to differentiate - the albums on that site have various release dates), while the Deluxe on Tidal and the regular one on Deezer have the same MD5. And where the Audio MD5 checksums were different, the separation results were different too. In this case of one unique vs three identical MD5s, the unique one resulted in worse separation (but that can depend on more factors in other cases).
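
If you don't want to rely on Foobar/AudioMD5Checker, a rough equivalent is to hash the decoded samples yourself; this hashes the PCM that soundfile decodes, which serves the same "is this the same audio?" purpose, though it is not guaranteed to match FLAC's embedded MD5 (the paths are placeholders):

# Hash the decoded audio (not the file bytes) to compare "same track, different source" downloads.
import hashlib
import soundfile as sf

def audio_md5(path):
    data, _ = sf.read(path, dtype="int32")       # decode to raw PCM samples
    return hashlib.md5(data.tobytes()).hexdigest()

print(audio_md5("track_tidal.flac"))
print(audio_md5("track_deezer.flac"))            # identical hash = identical decoded audio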

Sites and rippers

List of a ways to get lossless files for separation process

0) https://doubledouble.top/ (Qobuz, Deezer, Tidal, Amazon Music, Napster, or lossy Spotify, Soundcloud, Beatport, KKBOX) - they introduced queue for Qobuz (around 4 minutes for 30 people in queue) and Apple Music, and Deezer (3 people, much fewer people in queue).

Amazon is free from regional blocks. Sometimes your file names can be Japanese, but files are valid.

1) Amazon bot - up to 24/96 FLAC

https://t.me/GlomaticoAmazonMusicBot

2) Murglar app - apk for Android - player and downloader working with Deezer, SoundCloud, VKontakte and Yandex Music (alternatively you can use it in Android virtual machine)

3) Apple Music ALAC/Atmos downloader

https://github.com/adoalin/apple-music-alac-downloader (valid subscription required, you can’t use an account that’s on a family sharing plan, more about installation below in Dolby Atmos section)

Might be less comfy to install for beginners. It requires Android (rooted at best and in Android Studio w/o Google APIs) and installing specific Frida server version (for not rooted devices it might be more complicated) and specific version of Apple Music app.

Refer to GitHub link above and Frida website for further instructions.

General bots usage instructions

Go to proper dl request channel and write

!dl

and after !dl (on Discord $dl), paste a link to your album or song from the desired streaming service and send the message, e.g.

!dl https://www.deezer.com/en/track/XXXXXX

To open the Deezer player to search for files without active subscription, log-in and just go to:

https://www.deezer.com/search/rjd2%20deadringer

And replace the search query by yours after opening the link.

If the bot doesn't respond in an instant, it probably means the track/album is region-blocked, and you should use a link from another service or another channel (UK and NZ alternative servers are available). It's capable of printing out unavailability errors as well.

Some bots rip tracks or whole albums from Qobuz, Deezer, Tidal - all losslessly, while:

Spotify, Soundcloud Go+, YouTube Music, JioSaavn are lossy.  

Providing valid links for bot

For your comfort, you should register and log into each streaming service so you can share links to specific tracks or albums from them (e.g. instead of pasting full album links), for the cases where you can't simply find a specific single track for that service via Google or share a link to just that track comfortably. So basically, go to https://play.qobuz.com/,

and from there you can share single tracks to paste for the bot to download - this is available only after logging into a free account and only via the link above; with the regular Qobuz file search you can find in Google, you cannot share single songs for the bot to download later.

Because the bot rips from Qobuz, it's the best source of 24-bit files, which I recommend whenever available (either 44, 48 or 96kHz), as it delivers FLACs to end users instead of the partly lossy MQA on Tidal, where Master quality (compulsory for 24-bit 44/48 there) is used for some albums/songs - and 16-bit MQA in Master quality is also possible for some albums (you should avoid 16-bit MQA). Of course, there might be some exceptions where 24-bit MQA on Tidal sounds better than 24-bit FLAC on Qobuz, as I mentioned above - the example is Eminem - Music To Be Murdered By (Deluxe Edition) - Volume 1 (the newer Side B, track - Book of Rhymes).

For using Deezer links with the bot, you need to find a song/album, use the option to share a link to the track or album, open the shared link so it gets redirected, and then rename the link to this form for a single song (otherwise the bot will return “processing” instead of ripping, or possibly even an error):

https://www.deezer.com/en/track/XXXXXXX

Hint: There's also something like ARL, which is a cookie's session identifier which can be shared, so everyone can log into the premium account and download FLACs with ARLs of different regions and regional locks. Might be useful for some specific tracks. ARLs are frequently shared online, though harder to find nowadays (Reddit censorship).

IIRC, Deemix might use ARLs besides the regular account login process.

5) Get Tidal Downloader Pro (the fastest method for batch and local downloading) in GUI.

A HiFi Plus subscription is no longer necessary, just Hi-Fi (at least for Hi-Fi albums; the two plans are now merged into one for the price of the cheaper one).

You won't be able to download anything better than 24/48, or Atmos, with this downloader (consider using orpheusdl_tidal instead)

Install Tidal app on Windows and log in, then open the downloader and click log, copy and paste the given code in the opened browser tab and voila.

Or if that GUI temporarily doesn’t work, go to: https://github.com/yaronzz/Tidal-Media-Downloader/releases and download the newest source code. It contains CMD version for downloading, located in: Tidal-Media-Downloader-202x.xx.xx.x\TIDALDL-PY\exe

Documentation: https://doc.yaronzz.com/post/tidal_dl_installation/

If you have problems running the app, and people also write in GitHub issues that the current version is not working, keep tracking new versions or read all the issues about this version - it may happen that someone else updates the app first.

Versions “2022.01.21.1” and ”1.2.1.9” need to be updated to newer versions, they stopped working entirely.

(not needed anymore, as current should still work)

You can alternatively grab this recompiled version by another user

With these downloaders you can easily download whole albums, including hi-res, and in a GUI (Pro); a queue for automatically downloading single tracks is also available (Pro).

You need a valid Hi-Fi Plus account for Tidal downloader.

There are cases where certain songs are behind a regional block and won't be downloaded by any Divolt or Discord bot, resulting in an error.

In such a case, you'll need the above downloader used locally, along with a Hi-Fi Plus subscription bought for your location. Accounts bought elsewhere, or paid for with a foreign currency, will most likely be region-locked to some other country, so after you log into the service, certain songs won't show up in search. The only way to show them without a proper account (at least for your region) is to log out of the region-locked account, start a new account, and visit: https://listen.tidal.com/ (you don't need a subscription to search for songs on Tidal).

Besides the trial, you can go for a cheap Tidal Hi-Fi subscription via: https://www.hotukdeals.com/vouchers/tidal.com or pepper.pl or mydealz.de, which always have some free or almost free giveaways (linked to a ready search). Then install the desktop Tidal app, log in and open the downloader. It might automate the login process in the downloader

(if you need to switch accounts, you'd better delete the Tidal-GUI folder from your Documents folder in case of any problems). A monthly Argentinian subscription is the most reliable solution now if you don't want to change your account every month or two searching for new offers.

Tidal, compared to some other streaming services, has some tracks in Master quality, which is 24-bit, and that gives better results for separation, as the dynamics are usually better. But check whether your downloaded file is really 24-bit and your downloader is configured properly (read the documentation in case of any issues).

But on Tidal there are some fake Master files which in reality are 16-bit - they're MQA to save space on their servers or to mislead people - so there is no benefit in using them vs an Audio CD 16-bit rip, since MQA alters quality in the higher frequencies (only), and that will have an influence on the separation process. So, to verify that your downloader is set up properly, check whether you can download any track from Music To Be Murdered By, by Eminem, in 24-bit. If you can, you have properly installed and authorized the downloader, and it can download 24-bit files or files with a higher sample rate than 44kHz when available.
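
To quickly verify what you actually got, you can also check the decoded subtype and sample rate with soundfile (a small sketch; the path is a placeholder):

# Quick check that a downloaded FLAC really is 24-bit / hi-res.
import soundfile as sf

info = sf.info("downloaded_track.flac")
print(info.samplerate, info.subtype)   # e.g. 44100 PCM_24 for a true 24-bit file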

You can paste links from Tidal into the GUI browser to find that track. Just delete “?u” in the end of the shared link.

6) For Deezer https://archive.org/details/deemix - it allows you to download mp3 320 and FLAC files for premium Deezer accounts, and only mp3 128kbps for free Deezer accounts.

Be aware that the deemix.pro site is unofficial, and the PC 2020 version linked there is not functional. The last 2022 version is on the archive.org page linked above, from Reddit.

Qobuz or Deezer might give better results, since Tidal has recently been removing FLAC support for 16-bit files on some albums, making all the files 16-bit MQA, which is not a fully lossless format, but close (of course Tidal Downloader converts that MQA to FLAC). They also provide some high-resolution files, but most likely fewer of them than Tidal.

Be aware that using some streaming services downloaders or even official Deezer/Tidal/Spotify apps, you might not be able to find or even play there some specific tracks or albums due to:

a) premium lock (it won't be played for free users)

b) regional lock (search will come up empty [the same applies to Tidal here])

Example: Spyair - Imagination instrumental - it shows up in search probably in Japan, though it cannot be downloaded using 2) https://free-mp3-download.net, but deemix with premium Deezer subscription did the job in downloading the song (not sure if it was Japan account).

PS. You can cancel your trial subscription of Deezer or Tidal immediately to avoid being charged in the future, but also keeping the access to premium till the previous charge date at the same time.

7) https://github.com/yarrm80s/orpheusdl

Supports Qobuz, Tidal (with this module, and unlike tidal-dl, also downloads files greater than 24/48 and Atmos) and probably more

(*May not work anymore)

7*) If you have a Qobuz subscription, you can just use qobuz-dl (last updated a year ago, probably no longer works, but not sure, there might be some alternative already).

Alternatively check:
Qobuz Downloader X

or Allavsoft (both require a subscription)

7b) https://github.com/nathom/streamrip

A scriptable stream downloader for Qobuz, Tidal, Deezer and SoundCloud.

8*) For Deezer you can use Deezloader or Deezloader Remix - it doesn't require any subscription for mp3 128kbps; just register a Deezer account for free beforehand and use the account in the app. For free users it gives only mp3 128kbps with a 16kHz cutoff, so it's worse than YT and Opus - don't bother.

9a) For Spotify, you can use Soggfy, or

9b) SpotiDown (premium subscription for 320kbps downloading and app compiling required)

9c** Seemingly you can use https://spotiflyer.app/

but it “doesn't download from Spotify, but from Saavn, in 128kbps/low-quality.

Also, since it doesn't d/l from Spotify, you can't d/l exclusives released from there.”

It doesn't require a valid subscription IIRC, and it also allows playing and sharing music inside the app.

9d** The same sadly goes to this telegram bot downloader:

https://t.me/Spotify_downloa_bot

9e) https://spotify-downloader.com/

________________

10) Go to allflac.com - it's paid, but they don't pay royalties to the artists and their labels (I spoke with at least one). They don't keep up with the content of the streaming services, but they also share stuff not available on streaming services, even including vinyl rips as the hi-res ones. Most, if not all, files on the site are CD rips taken from around the net.

I’ll explain to you how to download files for free from allflac and flacit:

0. Log in

1. Find desired album (don't press play yet!)

2. Open the Chrome Menu in the upper-right-hand corner of the browser window and select More Tools > Developer Tools>navigate to “Network”

3. Press CTRL+R as prompted

4. Play audio file

5. If it's a 16/44 FLAC, go to Media, sort by size, right-click on the entry and open it in a new tab to download (sometimes it appears only after some time of playing and only in “All” instead of “Media”)

6*. For some 24-bit files, go to All, play the file and sort by size. If it's not shown in the Media tab, you will find an entry with an increasing size, of xhr type and with a flac name.

7. Recently it happened once that step five stopped working and the FLAC link that appears is red. In that case, go to the Console, open the link with ERR_CERT_DATE_INVALID in a new tab and open the site by clicking on Advanced.

Some albums on allflac.com don't have tracks separated - the whole album is in track 1.

If you want to physically divide the audio file -

In such case, you can search for cue sheet here: https://www.regeert.nl/cuesheet/

Place it next to the file, rename it if needed, and it's ready - but that's only for playing and playlist purposes; it doesn't split the audio file physically. To cut the file losslessly you need lossless-cut https://github.com/mifi/lossless-cut/releases/ - it allows importing a cue sheet to cut the album. Once you have all the files divided, you can probably use MusicBrainz (a Foobar2000 plugin is probably available) to tag the files (but not the filenames - for that, you need mp3tag and the tagged files to copy tags to filenames with specific masks). I know that lossless-cut might not be precise, and it may create a problem with automatic content detection in MusicBrainz, but that tool (or a similar one) lets you search for the specific album you're after, rather than just doing mark files > album detection in Foobar, which may fail. So technically, cutting and tagging the files is possible, but time-consuming.

It looks like, unlike the 24/48 files, all the 24/96/192kHz ones are just vinyl rips taken from various torrents. If, again, there is only one file or two with the whole album, originally accompanied by a cue, you should be able to find the specific cue file simply by searching Google for its exact file name in quotes (the file list is below the track list there). Of course, you can also cut your album manually, or even make your own cue sheet to cut the album.
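
If you'd rather script the split than use lossless-cut, here is a rough sketch that reads the INDEX 01 times from the cue sheet and cuts the single-file album with soundfile (it re-encodes to FLAC instead of cutting the original stream, and assumes a simple single-FILE cue; paths are placeholders):

# Hypothetical sketch: split a single-file album FLAC at the INDEX 01 times of its cue sheet.
import re
import soundfile as sf

def cue_times(cue_path):
    # Return (track_number, start_in_seconds) for every INDEX 01 line of the cue sheet.
    times, track = [], 0
    for line in open(cue_path, encoding="utf-8", errors="ignore"):
        if line.strip().upper().startswith("TRACK"):
            track += 1
        m = re.search(r"INDEX 01 (\d+):(\d+):(\d+)", line)
        if m:
            mm, ss, ff = map(int, m.groups())
            times.append((track, mm * 60 + ss + ff / 75.0))   # cue frames are 1/75 s
    return times

audio, sr = sf.read("album.flac")
marks = cue_times("album.cue")
for i, (track, start) in enumerate(marks):
    end = marks[i + 1][1] if i + 1 < len(marks) else len(audio) / sr
    sf.write(f"{track:02d}.flac", audio[int(start * sr):int(end * sr)], sr)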

Also be aware that sometimes you won't be able to download the file, and it won't appear as a FLAC, if you don't press CTRL+R on the Network tab before starting to play the file; otherwise you need to close and reopen the tab and press CTRL+R in Network again.

Also, such a file can reset during downloading near the end (the maximum size of a downloaded file cannot exceed 1GB, otherwise it gets reset for some reason). To prevent it, copy the download link from your browser and paste it into some download accelerator - even free BitComet will do the trick, since it supports HTTP downloading over multiple connections. If you're lazy, to avoid losing at least that 1GB, simply open the still-downloading file in MPC-HC; Chrome then won't reset the file size when it starts to reset the whole download (because the file cannot be deleted now). Wait for the download to reset, make a copy of the file and rename its extension to FLAC from the temporary extension added by e.g. Chrome during downloading. Now you can stop the download in Chrome. The downside is that the moment the file gets reset is not when it ends, so it's not fully complete - but mostly. Of course, you can be lucky enough to find the original torrent with the files and simply finish downloading by verifying the checksums of the existing ones in a P2P client (filenames must match the torrent files; simply replace them and find the option to verify checksums).

10b. All of the above applies to https://www.flacit.com/

Looks like it has the same library taken from

adamsfile.com

which is warez also allowing playing files and downloading them using the method above.

You also need to register before playing any file there (registration is free).

11. http://flacmusicfinder.com/

But it has a small library.

12. Soulseek - but it’s simply P2P based client, so carefully with that, and better use VPN (good one at best).

13. Rutracker (the same advice as above)

14. Chomikuj.pl (use their search engine, or possibly Yandex, DuckDuckGo, Bing) - free 50MB per week for an unlimited number of accounts; free transfer from points for files uploaded or shared from other people's profiles. People upload loose separate tracks there as well, but they frequently get taken down, so search rather for full album titles in archives than for single files. Mp3 files, and those files which allow a preview, can be downloaded for free with JDownloader, but occasionally some of such files might not work in JDownloader and will have to be downloaded manually.

15. Simply search for the track on Google, or even better Yandex, DuckDuckGo, or possibly Bing, because Google frequently blacklists some sites or search entries. Also, your provider may cut the connection to some sites, so you'll be forced to use a VPN in those cases when a search engine shows a result from a site you cannot open.

16. YouTube Music - higher bitrate (256kbps) than the max 128/160kbps on regular YT for Opus/webm (20kHz) and AAC/M4A 128kbps (16kHz). Similarly to Spotify, it can have some exclusive files which are unavailable in lossless form on any other platform.

If you have YouTube Premium you apparently can download files from it if you provide your token properly to yt-dl.
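
Even without Premium, yt-dlp can at least grab the audio-only stream (usually Opus on YouTube), so you avoid an extra lossy re-encode; a minimal sketch with its Python API (the URL is a placeholder):

# Minimal yt-dlp sketch (pip install yt-dlp): download the best audio-only stream without re-encoding.
from yt_dlp import YoutubeDL

opts = {
    "format": "bestaudio",            # audio-only, usually Opus/webm on YouTube
    "outtmpl": "%(title)s.%(ext)s",   # output file name template
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=XXXXXXXXXXX"])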

Maybe logging into Google account with enabled premium in JDownloader 2 will do the trick as well.

Anyway, Divolt bot will work too.

______

Outdated/closed/defunct

(it's been closed)

0) Go to https://free-mp3-download.net (Deezer, FLAC, separate songs downloading)

Here you can find (all?) mp3/flac files from Deezer. If the site doesn't work for you, use VPN. If the site doesn't search, mark "use our VPN". Single files download and captcha. No tags for track numbers and file names, FLAC/MP3 16 bit only.

- If you see an error “file not found on this server” don't refresh, but go back and click download again.

- From time to time it happened that it didn’t show up the FLAC option, and that it's “unavailable”, and sometimes it can show up after some period of time. The site started to have some problems, but it was fixed already.

- Open every found track in a new tab, as the back button won't let you get back to the search results you were looking at

1 b) (doesn’t work anymore for 07.02.24)

Discord server with sharing bot (albums and songs)

https://discord.gg/MmE4JnUVA

-||-

https://discord.gg/2HjATw6JF

(another invite link valid till 12.11.13; needs to be renewed every month, probably current invitations will be on Divolt server here when the above will expire)

Later they required writing to the bot via DM to access the welcome channel with requests. Once I couldn't access the channel, and I needed to update Discord or wait 10-15 minutes, so the input form appeared.

To download, in welcome channel, paste:

$dl [link to the song or album on streaming service without brackets]

More detailed instruction of usage below.

(Defunct)

2) https://slavart.gamesdrive.net/tracks

(sometimes used to work, but not too often)

As of June 2023-March 2024 it is defunct and throws “There was an error processing your request!” on a track download attempt. In the past it would instead load forever, with nothing happening on multiple tries; before that, it worked once the download button stopped being gray and turned green again - you'd click it and the download could start shortly. Lately it was working again; you only needed to wait a bit.

Similar search engine for FLACs. Files are sourced from Qobuz (including hi-res when available). Songs listed double are sometimes in higher bit depth/resolution (different versions of the same track).

If you want to know which version you're downloading, go to https://play.qobuz.com/, share the track from there, and use the download bots.

1 b) Join their Divolt server directly by this link (if the above stopped working):

https://divolt.xyz/invite/Qxxstb7Q (currently the bot doesn't allow posting, containing only a Discord invite; check it again later for a valid link if necessary)

Free registration required.

If this Divolt server is also down, go here:

https://slavart.gamesdrive.net/

to get a valid Divolt invite link (it might have changed). But it had the old link for the long time later.

Dolby Atmos ripping

“Streamed Dolby Atmos is eac (5.1 Surround) and JOC (Joint Object coding) it's a hybrid file of channels and objects that decodes the 5.1 + joc to whatever your speakers are from 2.0 up to 9.1.6.

It's not a multitrack, although clearly what some mixers do is put all the vocal in the center channel, so effectively you have an a cappella in center and then the Instrumental in everything else, but many labels forbid engineers doing it and have policies that they must mix other sounds into center, so people don't rip the a cappella.

YouTube only supports FOA Ambisonics as spatial audio, but you can encode Dolby Atmos to Ambisonics” (...) by e.g. https://www.mach1.tech/

 ~Sam Hocking

Tidal only supports 5.1 or possibly 5.1.4, and Apple Music at least up to 7.1.4 (9.1.6 support could have been dropped since macOS Sonoma, not sure).

(doesn’t work anymore for Atmos)

- from Tidal (via Tidal-Media-Downloader-PRO [Tidal-DL GUI])

Just get Tidal-dl with HiFi Plus subscription - now merged into one subscription (CLI version; for one user on our server it works for 13.10.22, but for some people strangely not).

As of 30.04.24, with the Tidal app installed on Windows and tidal-gui authorized by a browser prompt or automatically, Atmos files are not downloaded (all qualities in settings checked, incl. 720/1080), at least on a subscription automatically converted into a higher plan due to the recent changes (MQA files started to play since then, so it might not be a subscription issue).

If you have problems, use tidal-dl (non-GUI) and a Tidal account with a valid subscription and proper plan, set up to the Fire TV device API (option 5 iirc).

But I cannot guarantee it will work for Atmos.

- from Tidal (with orpheus_dl_tidal installed over orpheusDL)

Downloads Atmos and high resolution files bigger than 24/48.

It’s only CLI app (valid subscription is still required).

Convoluted installation. If you have problems with using git in the Windows command line, use this ready package (works as of 30.04.24, later it can get outdated; it already has the Tidal settings and Atmos enabled) after you install python-3.9.13 or newer (it currently also works on python-3.12.3-amd64).

Or else, to install manually following GH instructions, to fix git issue, execute:

pip install gitpython

and/or install git from here 

(one or both of these should fix using git in CMD when pip install git cannot find a supported distribution and the git command is not recognized).

Once Python and the package is correctly installed, usage is:

orpheus https://tidal.com/browse/track/280733977

(always delete ?u in the end of the link)

Now it will ask you for login method (I tested the first one - TV) - now it will redirect to your browser to authorize.

MQA is disabled by default (not used by Atmos), but you can enable it in config\settings by setting "proprietary_codecs" to true in line 21.

Downloaded files are located in OrpheusDL\downloads folder


The spatial_codecs flag is enabled by default and supports Dolby Atmos and 360 Reality Audio.

"Some of the 360 stuff is impossible to split right now. Not sure what is going on. Maybe some type of new encryption. I have the MP4 to David Bowie Heroes 22 channels, and it's a brick, useless…"

The output for downloaded Atmos files is an m4a encoded in E-AC-3 JOC (Enhanced AC-3 with Joint Object Coding) - Dolby Digital Plus with Dolby Atmos, and possibly AC-4 - and FLAC for hi-res.

Downloaded hi-res and Atmos files can be played in e.g. MPC-HC or VLC Media Player, but will fail on some old players like Foobar2000 1.3 and 1.6.

- from Apple Music (Android)

https://github.com/adoalin/apple-music-alac-downloader

“Pre-Requisites:

x86_64 bit device (Intel/AMD Only)

Install Python: https://www.python.org/

Install Go: https://go.dev/doc/install

Install Android Platform Tools: https://developer.android.com/tools/releases/platform-tools

and set it to environment variables / path

Download and extract Frida Server - https://github.com/frida/frida/releases/download/16.2.1/frida-server-16.2.1-android-x86_64.xz

Download Apple Music ALAC Downloader - https://github.com/adoalin/apple-music-alac-downloader

Extract content to any folder.

1)

Install Android Studio

Create a virtual device on Android Studio with an image that doesn't have Google APIs.

2)

Install SAI - https://github.com/Aefyr/SAI

Install Apple Music 3.6.0 beta 4 - https://www.apkmirror.com/apk/apple/apple-music/apple-music-3-6-0-beta-release/apple-music-3-6-0-beta-4-android-apk-download/

Launch Apple Music and sign in to your account. Subscription required.

3)

Open Terminal

adb forward tcp:10020 tcp:10020

if you get a message that there is more than one emulator/device running, find NTKDaemonService in Task Manager/Services and stop it

adb root

cd frida-server-16.2.1-android-x86_64

adb push frida-server-16.2.1-android-x86_64 /data/local/tmp/

adb shell "chmod 755 /data/local/tmp/frida-server-16.2.1-android-x86_64"

adb shell "/data/local/tmp/frida-server-16.2.1-android-x86_64 &"

The steps above place the Frida server on your Android device and start it.

4)

Open a new Terminal window

Change directory to Apple Music ALAC Downloader folder location

pip install frida-tools

frida -U -l agent.js -f com.apple.android.music

5)

Open a new Terminal window

Change directory to Apple Music ALAC Downloader folder location

Start downloading some albums:

go run main.go https://music.apple.com/us/album/beautiful-things-single/1724488123

go run main_atmos.go "https://music.apple.com/hk/album/周杰倫地表最強世界巡迴演唱會/1721464851"

from Apple Music (MacOS, virtual soundcard recording)

(Guide by Mikeyyyyy/K-Kop Filters, source)

You will need a Mac to do this - it only works on macOS. You will need an Apple Music subscription, "Blackhole 16ch" and any DAW of your choice (I prefer FL Studio, but Audacity also works).

Step 1. Install Blackhole Audio driver (search for it in Google)

Step 2. Download the song you want in Dolby Atmos (if you don't know how to do it, go to settings in Apple Music then to general then toggle download Dolby Atmos)

Step 3. Go to your DAW and, in your mixer, select the input - it will show 16 channels. Select 1 (Mono) for the first mixer track, then on mixer track 2 do the same with channel 2, and so on until you reach 6.

Step 4. Hit record and play the track in Dolby, and you're done!

Similar tutorial based on Blackhole and Audacity on Mac (open the link in incognito in case of infinite captcha)

You won't be able to do the same on Windows with LoopBeAudio instead (paid, but the trial works for 60 minutes after every boot), because Apple Music on Windows (including the one in the MS Store) doesn't provide Dolby Atmos (7.3.1) files at all (only stereo hi-res lossless), no matter what virtual soundcard you use - so you'll need a Hackintosh or VMware.

"Vmware kinda lag

and find own seri to fix login apple services"

- ittiam-systems/libmpegh: MPEG-H 3D Audio Low Complexity Profile Decoder

Using this program, you can extract the 12 channels of the Dolby Atmos tracks.

“MPEG-H is essentially Sony360, just Sony360 licenced decoders needed. Fraunhofer allow it to be used for free, though.

All Dolby Atmos is encoded, so to play it, basically it has to be decoded to audio playback through a Dolby licenced decoder. There are ways to decode, though. Easiest is to use Cavern.

https://cavern.sbence.hu/cavern/

Atmos is a lossy format. 768kbps across 6 channels so not the highest resolution, but to decode to multichannel .wav just download cavern and put your dd+joc file through Cavernize. Streamed Atmos [is lossy]. TrueHD Atmos isn't. Atmos Music is only distributed lossy, though.

On the side, “The process of making Atmos [from an engineer standpoint] is:

DAW > 128 channel ADM_BWF > 7.1.4 >5.1(joc). So basically those 128 channels are encoded to 6, but the object audio is still known where it should exist in the space and pulls that audio out of the 5.1 channels to make up to 9.1.6 (max supported for music)”

And authoring for Atmos is not available on Windows but:

“Traditionally it's not been unless you ordered a DELL from Dolby configured to use as a Rendering machine, but today both Dolby Atmos Renderer, DAWs like Cubase and Nuendo and 3rd party VST exist to do it on Windows now. I use Fiedler Atmos Composer on a stereo DAW called Bitwig to build demix projects for Atmos engineers to then master to Atmos from Stereo (sometimes all they have left to work with as multitrack tapes lost/destroyed/politics/easier)” ~Sam Hocking

___AI mastering services___

Might be useful even for enhancing quality of instrumentals after separation (or your own mixed music)

Be aware that it may cheat Content ID, so your song won't be detected. If some label prevents their stuff from being uploaded to YT by blocking a regular file straight after upload, you may still get a copyright strike some time after uploading a mastered instrumental, as they also use YT's search to find their tracks.

If you don't find satisfying results with the services below, read that.

Paid

https://emastered.com/ (free preview, 150$ per year)

The preview is just mp3 320kbps with a 20kHz cutoff, which is claimed to have a watermark, but it cannot be heard or seen in Spek. The preview file can be downloaded by opening Developer Tools in the browser and playing the preview; then, in "Media", the proper file should appear on the list (don't confuse it with the original file) - open the proper link in a new tab, open the media player's options and simply click download.

It's the most advanced and best-sounding service vs all the free ones I tested (even if you only have access to mp3; I also listened to max 24-bit WAVs on their site with a paid account). It's also one of those which are potentially destructive if you apply the wrong settings, but leaving everything in its default state is a good starting point, and it works decently e.g. for mixtures and, to some extent, even previously mastered music - at least music which does not hit 0dB (even -1dB is fine, but it's claimed to work best between -3dB and -6dB). Generally I recommend it. Worth trying out.

Note for paid users - be aware that preview files can be mp3 files as well. So what you hear during changing various parameters, is not exactly the same as final WAV output.

https://www.masteringbox.com/ >

https://www.landr.com/ (now also plugin available)

https://masterchannel.ai (only free previews, also can convert stereo to multichannel audio)

https://ariamastering.com/en/Pricing (from 50$ per month/9.90$/Master, mastering based on fully analog gear and robotic arm to make adjustments in real time)

VST plugins

iZotope Ozone Advanced 9 and up

The Advanced version has a new AI mastering feature which automatically detects parameters that can be manually adjusted after the process. It works pretty well and repairs lots of problems with muddy mixes (especially with manual adjustments - don't be afraid to experiment; AI is never perfect).

Mastering Assistant built-in the recent versions of Logic Pro DAW (MacOS only)

It can give more natural results than iZotope above

AI Master by Exonic UK (paid)

master_me free

It contains a decent mastering chain which adjusts settings for you automatically per song (they can be changed later), and you can also change the target LUFS value manually. By default it's -14 LUFS, which can be too quiet for songs already mastered louder, and set that way it can become destructive for some songs
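
To check how far your track already is from a given target before feeding it to one of these tools, you can measure its integrated loudness with pyloudnorm (a small sketch, assuming pip install pyloudnorm soundfile; the path is a placeholder):

# Measure integrated loudness (LUFS) so you know how far the file is from e.g. -14 or -9 LUFS.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("instrumental.wav")
meter = pyln.Meter(rate)                          # ITU-R BS.1770 meter
print(round(meter.integrated_loudness(data), 1), "LUFS")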

Free (all below remarks apply when mastering AI separated instrumentals)

https://aimastering.com/

WAV, mp3 and mp4 accepted. Tons of options, but no comfortable preview while tweaking them. You can optionally specify a reference audio by uploading a file. There's also one completely automatic option. Generally, it can be destructive to the sound even using the most automatic setting - attenuation of bass, exaggeration of higher tones.

Preferred options while working with a bit muffled snare in the mix of 500m1 model for instrumental rap separation result

(automatic (easy master) is (only) good for mixtures [vocal+instr]):

  • True Peak, Oversampling 2x, AM Level 0.3, WAV 32. SAO, 0/22000 (the rest untouched)

For still too muffled sound (e.g. when lost in lots of hi-hats):

  • YouTube Loudness, OVS to 1x and AM Level 0.2 and 24 bit (+ true peak, SAO, 0/22000)

Alternative (good for mixtures and previously mastered music with a bit muddy snare):

  • YouTube Loudness, Target Loudness -8, Ceiling -0.2, OVS to 2x, True Peak and AM Level 0.3 and 32 bit, SAO, 0/22000

The most complicated tool, but the most capable among all the free ones mentioned here so far. After the first two files, it puts you into a short queue. Processing takes 2-3 minutes. You cannot upload more than one track at the same time. Great metrics, e.g. one measuring the overall "professionality" of the resulting master. At this point, it can also start exaggerating vocal leftovers from the separation process. Equalize Loudness doesn't do anything when checked just before download (probably only when you click remaster).

https://moises.ai/

16-32 bit WAV output (now WAV is premium-only), any input format. They have bad separation tools, but a great, neutral mastering AI. It works very well for vinyl rips. You can get more than 5 tracks per month for free (I don't know how many - the 5-track limit is for separation, not for the mastering feature; at least 30 worked in 2022).

The mastering feature is only available in the web version, so if you’re on the phone, run the site in PC mode.

24-bit -9 LUFS, or without the limiter, does the best job in most cases for e.g. GSEP (the latter when you don't want to smooth out the sound). -8 tends to harm the dynamics of songs, but in some cases it might be useful to get your snare louder.

The interface has a bug where you need to pick your file to upload twice, otherwise you won't be able to change the parameters and confirm the upload (also, on mobile, the parameters don't always appear immediately after you pick your file/paste a link; setting the options manually doesn't let you confirm the step to proceed to upload, and you need to retry picking the file, after which you can proceed).

Sometimes uploading gets stuck at 99% for a very long time; if you leave your phone in sleep mode and return after 15 minutes, it will start some upload again at that 99%, but eventually it will return an error. Simply retry uploading the file (it will also get stuck at 99%, but this time it will actually upload).

Also, importing the same file via GDrive may not work.

Additionally, if you pick 32-bit output quality, when mastering is done and you want to download the file, the WAV will show as 24-bit, but the file will be 32-bit as you selected.

It’s the most neutral in sound in comparison to the two below.

If you plan to master your own music, read “Preparing your tracks” here: https://moises.ai/blog/how-to-master-a-song-home/ I think these tips are pretty universal for all of these services.

https://www.mastering.studio/

Four presets with live preview, only 16 bit WAV for free, only WAV as input accepted (for the best quality convert any mp3’s to WAV 32-bit float (you can use Foobar2000), 64 bit WAV input unsupported).

If you see "upload failed", register and activate a new account in incognito mode and everything using VPN (probably a block for ISP which I had).

Judging by the 16-bit-only output quality (which is an unfair comparison to 24-bit on moises.ai) and by GSEP 320kbps files, I found it worse; even the London smooth preset is not as neutral as Moises overall, and it can be destructive to the sound quality. But if you need to get something extra out of a blurry mix, it's a good choice (while some people may find emastered too pricey).

BandLab Assistant mastering

First, you need to download their assistant here:

https://www.bandlab.com/products/desktop/assistant

Then insert the file, pick preset, listen, and then it is uploaded for further processing, and you’re redirected to the download page.

They write more about it below:

https://www.bandlab.com/mastering

Four presets - CD, enhance, bass boost; max 16-bit WAV output only. In comparison to paid emastered it's average, but in some cases it's better than free mastering.studio when you have a muffled snare in the instrumental. On GSEP, only the CD preset was usable. The sound is crustier than even LA Punch - more saturated (less neutral), a bit too bassy and compressed - but it may work for some songs where you don't have a better choice and all of the above failed.

If your file doesn’t start uploading (hangs on “Preparing Master”), make sure you don't have “Set as a metered connection” option enabled in W10/11. If yes, disable it, and restart the assistant.

Straight after your file is done uploading, it is processed, so don't bother going to the BandLab site too fast - sometimes it's still being processed even after the download button has appeared, and you may end up waiting in a queue for a few minutes after you press the WAV button; you won't make it any faster.

On the side: the audio you hear during the preview is not exactly the same as the result downloaded from the site. The preview is a bit louder and stresses vocal residues more, and the snare is less present in the mix, although the file is clearer; sadly, it's also 16-bit, so overall it doesn't seem to be better. Also, the file doesn't seem to be stored locally anywhere. But if you're desperate enough to get this preview, fasten your seatbelt. If you processed more files before, close the assistant and open it again; now process the file so the preview can be played, then pause it.

In Windows, go to Task Manager > Details, sort by CPU, right-click BandLab Assistant.exe (the one with the most memory occupied) > Create dump file. Open it in HxD (it's located in temp), enter "4000" in bytes per row instead of "16", and find the string "RIFF,". If you cannot find it, it's the wrong process - make a dump of another assistant process (one of the three most intensive). Once you find "RIFF,", delete everything above it (mark everything by dragging the mouse to the top with Page Up pressed, then keep Shift pressed and use the left arrow to also mark the first row, then press delete), then save it as a WAV. The file can be played, but it's too big. To find the end, go to Find (Ctrl+F), choose hex, enter FF 00 00 02 00 01 00 and search (it shouldn't be at the beginning of the file - press F3 more than once if necessary); mark everything up to the top by dragging the mouse with Page Up pressed, copy it (Ctrl+C), paste it into a new file and save as WAV.
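
If you don't want to do the HxD surgery by hand, the same extraction can be scripted: find the first "RIFF" marker in the dump and use the size field in the WAV header itself to cut out exactly one file. A hedged sketch, assuming the preview really sits in memory as a plain RIFF/WAVE blob with a valid size field (the dump file name is a placeholder):

# Hypothetical sketch automating the dump-file surgery: locate "RIFF" and use the WAV header's
# own size field (bytes 4-8, little-endian) to slice out the embedded preview.
import struct

with open("BandLab Assistant.DMP", "rb") as f:    # placeholder dump file name
    blob = f.read()

pos = blob.find(b"RIFF")
while pos != -1:
    size = struct.unpack_from("<I", blob, pos + 4)[0]           # RIFF chunk size
    if blob[pos + 8:pos + 12] == b"WAVE" and size > 1_000_000:  # skip tiny/unrelated RIFF chunks
        with open("preview.wav", "wb") as out:
            out.write(blob[pos:pos + 8 + size])
        break
    pos = blob.find(b"RIFF", pos + 4)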

https://masterchannel.ai

(Promo from March 2023) 20% off on Mastering at using the code CHAT_20, meaning you can master unlimited amount of songs for $20

Haven't tested it yet.

https://bakuage.com/

You can also use Matchering

New Colab:

https://colab.research.google.com/github/kubinka0505/matchering-cli/blob/master/Documents/Matchering-CLI.ipynb

Old Colab:

https://discord.com/channels/708579735583588363/814405660325969942/842132388217618442

Or in UVR5 (in Audio Tools)
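If you'd rather run Matchering locally in Python, the matchering package exposes a small API; a minimal example along the lines of the project's README (file names are placeholders):

    import matchering as mg

    # target = your track, reference = the track whose tone/loudness you want to match
    mg.process(
        target="my_mix.wav",
        reference="reference_master.wav",
        results=[
            mg.pcm16("my_mix_master_16bit.wav"),
            mg.pcm24("my_mix_master_24bit.wav"),
        ],
    )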

Reference file(s) to use

“Mastering The Mix”:

https://drive.google.com/file/d/1kqPmcVC3qvh_Mqd9vIssGUKpz3jTddPc/view?usp=sharing

You need 7zip with WavPack plugin to extract it.

Brown/Pink noise:

https://drive.google.com/file/d/1wJHKRb2SIgJZIc-J8kEDD1k4OQj_OXzp/view?usp=sharing

“Try to use this as reference track in matchering to get nice wide stereo and corrected high frequencies.” zcooger

You can also use Songmastr instead of Colab as it uses Matchering as well.

Be aware that there's a length limit in at least UVR5, and it's 14:44 (or possibly just 15 minutes). Instead of a hit-or-miss compilation of lots of reference files in one, you can also simply use the one song you think will fit your track best. You can even split it down to a smaller fragment with e.g. LosslessCut to avoid re-encoding. It can work even more efficiently that way.

Sometimes I use Matchering for different master versions of the same song when I have a few masters I like certain things in them, but none good enough on their own.

Usually, the file with the richest spectrum should be placed as the target (but feel free to experiment).

The target can be, e.g., a file after a lot of spectral restoration which has lost some warmth and fidelity, when you need something back from a previous master version.

You can also try reprocessing your result up to even 6 times, inputting a new file as the target or reference each time, until you get there. But usually 2-3 passes should do the trick, sometimes switching between target and reference.

When using Matchering in UVR5, be sure to check the "Settings Test Mode" option in the additional settings. It will add a 10-digit number to each result, preventing you from overwriting your old files during multiple experiments conducted on your files. UVR doesn't ask before overwriting!

Feel free to experiment with the WAV output quality. Probably the further you go from 24-bit, the more different your result will be after being converted back to 16-bit by some lossy codec like Opus on YT. But if you care mostly about the result file itself, simply be aware that you can use the output quality to your advantage, knowing how a specific bit depth affects the result. E.g. the muddier results start with PCM_32 (non-float); 64-bit has that too, but additionally with some grittiness. 16-bit is usually good for gluing together already well-sounding audio with loud snares, but can frequently be muddy. Usually your result will not be great in most cases, hence I'd encourage using bit depths higher than 16-bit here, but 24-bit can make your audio too bright at times; in such cases check 32-bit float and non-float. There's no single setting that works for every song.
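If you want to compare how the same render sounds at different bit depths, a quick way is to rewrite one result at several subtypes with the soundfile package (a minimal sketch; file names are placeholders):

    import soundfile as sf

    audio, sr = sf.read("matchering_result.wav", dtype="float64")

    # Write the same audio at different bit depths for A/B comparison
    for subtype, suffix in [("PCM_16", "16bit"), ("PCM_24", "24bit"), ("FLOAT", "32float")]:
        sf.write(f"matchering_result_{suffix}.wav", audio, sr, subtype=subtype)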

Some newer AI:

https://huggingface.co/spaces/nateraw/deepafx-st

Also try this one:

https://github.com/jhtonykoo/music_mixing_style_transfer

For enhancing 4 stems separations from Demucs/GSEP:

https://github.com/interactiveaudiolab/MSG (16kHz cutoff)

Services I'm yet to test: Landr, Aria, SoundCloud, Master Channel, Instant Mastering (if it wasn’t an April fools joke)

Platinum Notes

(Windows/Mac paid software)

"corrects pitch, improves volume and makes every file ready to play anywhere (...) add warm" and dynamics, remove clipping.

AI mixing services

https://automix.roexaudio.com

AI online auto-mixing service. Various instruments, genre settings, stem priority, pan priority.

1 free mix per month.

Might be useful for enhancing 4 stem separations.

"I tried 2 songs with it. Wasn't really pleased with results"

"The biggest problem I had [...] while I am trying to balance my vocals in instrumental like Hollywood style"

Other tool by Sony (open-source)

https://github.com/sony/fxnorm-automix

You can also train your own models using wet music data.

AI mixing plugins

iZotope Nectar

Sonible Pure Bundle

Creating mashups and also DJ sets (two options).

https://rave.dj/mix

It can give better results than manual mixes performed by some less experienced users (but I doubt it will work with more than 2 stems).

Generally it's good for

Ripple

iOS only app (currently for US region only)

"Ripple seems to be SpongeBand just translated into English, it was released last year: https://pandaily.com/bytedance-launches-music-creation-tool-sponge-band/

(more info about its capabilities)

Back then, the only thing it lacked was separation into 4 stems.

_______

For enhancing a vocal track you can use WSRGlow, or better yet, process it through the iZotope RX (7-9) Spectral Recovery tool (in RX 10 it's only in the more expensive version, iirc), and then master it, or send it to one of the services above.

https://replicate.com/lucataco/wsrglow

There are a lot of requests for music upscaling on our Discord. You can use online mastering services as well. Technically it's not upscaling in most cases, but the result can be satisfactory at times.

If you try out all the solutions and learn how they work and sound, you can easily get any track in better quality in a few minutes.

For very low resolution music (if you manage to run it):

AudioSR - lately used more often than the ones below (voc/inst)

Audio Super Resolution

https://github.com/olvrhhn/audio_super_resolution

hifi-gan-bwe

https://github.com/brentspell/hifi-gan-bwe/

More details and links, Colabs for these in the upscaler’s full list

If you want to start making your own remasters (even if your file is in terrible quality, especially 22kHz):

https://docs.google.com/document/d/1GLWvwNG5Ity2OpTe_HARHQxgwYuoosseYcxpzVjL_wY/edit?usp=drivesdk

Might be useful also for low-quality, crusty vocals; it is a guide for mixing music in general, but focused on audio restoration as well.

___Best quality on YouTube for your audio uploads____

  1. If you already have a finished video which is not just a single frame (e.g. a cover shown for the whole video), download MKVToolNix and replace the audio track with a lossless one instead of the rendered lossy track. You will avoid the recompression/re-encoding that happens when rendering a normal video.
  2. If you can, upscale the video to at least 1440p or greater. It will avoid the deferred transition of your AAC (16kHz) audio stream to Opus (20kHz) that otherwise happens only when your video gets popular or old enough (for the current YT audio format, check stats for nerds). QHD/+ makes your video play in the better Opus codec from the beginning, and it will sound better than after the deferred transition from AAC to Opus on an FHD clip (the Opus audio streams' checksums differ between FHD and QHD videos despite the same video source file, and most likely something is broken on YT's side during the process; both Opus files are 20kHz, so the FHD file is not recompressed from AAC - perhaps from some other audio file created during YT rendering, but not from the source video).
  3. Alternative - if you have just one image to make a video from (e.g. a cover), make sure it's at least 1440p or greater. If not, simply upscale it (e.g. XnView has some basic upscaling filters). Then place the image next to this batch FFmpeg script together with your lossless audio files (a minimal sketch of the same idea follows this list). It will render videos with the same audio streams as the original files, just muxed into the output MKV files (you can check Audio MD5 in Foobar2000 for comparison, or use AudioMD5Checker if MD5 is unavailable in F2K), so the audio won't be recompressed on your end while making a video for upload to YT (yes, YT supports MKV!). It's faster than MKVToolNix and you can convert multiple files with the same image at the same time (it's very fast, incomparable to normal video rendering, and the output is only 1 FPS, so it will also buffer very fast on YT).
  4. You don't have to wait until YT stops processing your HD version for Opus to appear. It happens at the point when the FHD resolution appears, before QHD, while processing is still in progress. So check it from time to time before you hit the publish button.
  5. Because Opus is 16-bit, and your input audio file in the Matroska container might have a higher bit depth, it's good to compress your input file to Opus VBR 128kbps for testing purposes, to check how it will sound on YT (of course don't use it later for the MKV file). The downsampling performed by the encoder can occasionally introduce some unwanted changes to the sound. It's most noticeable when the audio input is 64-bit; smaller bit depths can still be good enough.
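The linked batch script is the reference; below is only a minimal Python sketch of the idea behind it, assuming FFmpeg is on PATH (file names are placeholders; the audio is stream-copied, so it stays bit-identical):

    import subprocess

    # Mux a still image and a lossless audio file into a 1 fps MKV without re-encoding the audio
    subprocess.run([
        "ffmpeg",
        "-loop", "1", "-framerate", "1", "-i", "cover_1440p.png",  # single image as 1 fps video
        "-i", "song.flac",                                         # lossless audio, left untouched
        "-c:v", "libx264", "-tune", "stillimage",                  # cheap video encode
        "-c:a", "copy",                                            # no audio re-encoding
        "-shortest",                                               # stop when the audio ends
        "song_for_youtube.mkv",
    ], check=True)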

___Best quality from YouTube and Soundcloud - how to squeeze out the most from the music taken from YT for separation___

Sometimes a better source just doesn’t exist, and only YouTube audio can be used for separation in some cases.

Introduction

Audio on YT in most cases is available in two formats:

AAC (m4a) and Opus. As I mentioned, the latter appears for older or popular uploads, or videos uploaded in QHD or 4K. Most videos will have both formats available already.

1) AAC on YT is @128kbps with a 16kHz cutoff and 44.1kHz (that's not an artificial cutoff - that's how the codec normally behaves at this bitrate).

2) Opus on YT is 96/128/152kbps with a 20kHz cutoff (24kHz for videos uploaded before ~2020, but with some aliasing above 20kHz, probably as a result of the resampler), always 48kHz (44.1kHz audio is always upsampled by Opus's built-in resampler - that's how Opus works; its output is always 48kHz).

Both 1) and 2) can be downloaded, e.g. via JDownloader 2 (once you have downloaded one file, you must delete the previously shown entry in the LinkGrabber and add the link once more, and this time pick Opus (m4a is the default) for download).

You can also use the online tool https://cobalt.tools/, which is probably just a GUI for yt-dlp.

Opus files downloaded from JDownloader differ from the Opus in webm files, judging by the spectrum, but I can't compare it with Cobalt, as Spek doesn't cooperate with its webm files, at least in progressive mode, which is "direct vimeo stream". yt-dlp with the -x argument might be free of the issue, but I haven't checked yet.

Don't download as Opus from JDownloader 2. The quality will be affected.

Download always as webm in any quality - all qualities will contain the same Opus audio stream in the same bitrate.

Don't download in OGG from Cobalt. It's a recompression from webm/Opus. An OGG file is not on the variants list in JDownloader (and probably the same goes for CLI tools like yt-dlp, so it's simply not on YT).

However, it will have some additional information below 16kHz compared to Opus downloaded from JDownloader, probably because it was sourced from webm, and not JDownloader's Opus, but that’s it. Recompression here will add some ringing issues and compression artefacts. Details and spectrograms here.

Sometimes it happens that m4a (AAC) sounds better than Opus. It all depends on a track. It is more likely to happen if both have the same cutoff in spectrogram due to how it was uploaded on YT.
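If you prefer the command line over JDownloader/Cobalt, yt-dlp's Python API can grab both streams; a minimal sketch (format IDs 140 = AAC m4a and 251 = Opus webm are YouTube's usual itags - verify them against the format list for your video; the URL is a placeholder):

    from yt_dlp import YoutubeDL

    URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder

    # 140 = AAC (m4a), 251 = Opus (webm); keep the original containers, no re-encoding
    for fmt in ("140", "251"):
        with YoutubeDL({"format": fmt, "outtmpl": "%(title)s.%(ext)s"}) as ydl:
            ydl.download([URL])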

What to do to improve the audio gathered from YT?

#1 Joining frequencies with EQ method

1) Download both M4A and Opus audio from YT (if Opus is available for your video)

2) Upsample M4A to 48kHz (or else you won’t align the two files perfectly) with e.g. Resampler (PPHS) in Ultra mode in Foobar 1.3.20>Convert>...

3) To have frequencies above 16kHz from Opus and better sounding frequencies up to 16kHz from AAC, we will combine the best of the both worlds by:

a) applying resonant highpass on Opus file at 15750Hz in e.g. Ozone 8/9 EQ

b) aligning the track to the M4A audio file (converted to 48kHz WAV 32), added as a separate track in a free DAW like Audacity, Cakewalk, Ableton Lite, or Pro Tools Intro (or eventually Reaper with its infinite trial).

Export the mixdown as WAV24. It should be more than enough.

Using brickwall highpass instead will result in a hole in frequency in the result spectrogram (check it in Spek afterwards).
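As a code illustration of the crossover in method #1 - a minimal sketch assuming both streams were already decoded/upsampled to 48kHz WAV and trimmed so they are sample-aligned and equally long (a gentle 2nd-order filter is used instead of a brickwall, as described above; file names are placeholders):

    import soundfile as sf
    from scipy.signal import butter, sosfiltfilt

    CROSSOVER_HZ = 15750

    aac, sr = sf.read("video_aac_48k.wav")    # upsampled M4A; the encoder already cuts around 16kHz
    opus, _ = sf.read("video_opus_48k.wav")   # Opus; keeps content up to ~20kHz

    # Gentle (2nd order) high-pass instead of a brickwall, so the transition region
    # slightly overlaps the AAC cutoff and no hole is left in the spectrum
    high_sos = butter(2, CROSSOVER_HZ, btype="highpass", fs=sr, output="sos")
    combined = aac + sosfiltfilt(high_sos, opus, axis=0)

    sf.write("combined_48k.wav", combined, sr, subtype="PCM_24")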

            #2 Manual ensemble in UVR

Files ensemble with Max Spec in UVR

Instead of EQ, you can use ensemble after manual upsampling of M4A file. You can have your files aligned in UVR.

Be aware that this method is not fully transparent and produces files that are a little bit brighter, still with a cutoff, but not a brickwall one like in M4A.

Without the upsampling step, you can use the Max Spec method with great results also for SoundCloud, which provides 64kbit/s Opus, 128kbit/s MP3 and 256kbit/s AAC.

You only need to amplify the MP3 file by 3dB. The alignment step is also necessary here, but it can be performed in UVR.

(fixed in UVR 5.6) Be aware that a bug in manual ensemble exists which forces 16-bit output despite choosing e.g. 32-bit float. To work around it, run a regular separation of a song with any AI model with 32-bit set, and then return to manual ensemble without changing any settings; from then on it will retain 32-bit float in manual ensemble.

You can fix this by changing the 510th line of lib_v5/spec_utils.py to:

    sf.write(save_path, normalize(output.T, is_normalization), samplerate, subtype='FLOAT')

 then restart the program (you may not find that file if your UVR is not taken from source).

TBH, I didn’t compare directly the first EQ vs the latter Max Spec method, but the latter sounds brighter for sure than opus, and m4a.

“while it helps to make trebles more defined, it's a bit flawed, due ensembling 3 different compression methods, so 3 different compression flaws/errors and noises”.

PS. For YT I also tried downsampling Opus to 44 and to leave M4A intact, but it gave worse results (probably because of more frequencies affected by resampler in this case).

Explanation

Audio file sizes and bitrates are the same for both formats. Knowing that the cutoff in AAC is not artificial - the codec simply compresses only audio up to 16kHz efficiently, leaving everything higher blank - we can conclude that frequencies up to 16kHz in AAC may sound better than in Opus. Since the size and bitrate of both files are the same, and the AAC bitrate is most likely not spent on frequencies above 16kHz, the full 128kbps is used only for frequencies up to 16kHz in AAC, while in Opus it covers the whole spectrum up to 20kHz (or even 24kHz in some old videos from before ~2020) at the same file size - which may be more harmful to the frequencies up to 16kHz than in AAC.

PS. After some time, I received an explanation/reassurance about the purpose of this process here, saying it's generally justified and that Opus is actually better than AAC even above 9600Hz, so one more additional cutoff in AAC will be needed. Also, it might be worth using a phase-linear EQ to get rid of some coloration in the result file.

When experimenting with it, make sure that you don't run into overlapping frequencies in the bypass area (e.g. you can see it here as a slightly brighter area above 9.6kHz up to 12kHz). To avoid it in e.g. the RX editor, one filtered signal needs to be 10Hz away from the other, i.e. if the lowpass is at 12000 Hz, then the highpass is at 12010 Hz. "But there is a catch with iZotope RX. The 10Hz away I described only applies to the Copy operation (when you basically select the frequency range and just copy it with Ctrl+C). But there is also the Silence operation (when you select the freq. range and press Delete, eliminating the frequencies in that range), and it is the other way around: you need to place the other signal 10Hz inward, so they overlap, i.e.: 12000 Hz lowpass, 11990 Hz highpass. Here is the video demo: https://youtu.be/h5yE5cpqqMU"

#3 Bash script to automate the process for YT

introC eventually wrote a bash script which performs the alignment (trimming 1600 samples from the m4a), applies the cutoffs and joins the frequencies of both files for you - without the overlap issue (tested with white noise). The script works for multiple m4a and webm files with the same name. MSYS2 (or Cygwin) is probably required to run this script on Windows, or on W10/11 use WSL (read).

He also took a more conservative approach here and changed the cutoff frequency from 9600Hz to 1400Hz, since AAC didn't perform better in one song - but below 1400Hz it should win in pretty much every case. Which cutoff is actually best may depend on the song. The script is subject to change:

https://cdn.discordapp.com/attachments/1070055072706347061/1089039589483761754/combine_youtube_quality.sh

_____Custom UVR models__________

Mostly outdated models

      0)  BubbleG — 15.06.2021

Final drum model (for UVR 5 and 4band_44100.json)

  1. Dry Paint Dealer Undr — 08.07.2021

Sharing a WIP piano model trained on almost 300 songs; might continue to train it, might not. It has an issue where it also removes bass guitar.

  2. BubbleG — 16.06.2021

Temp. bass model. Must use with 4band_44100.json

  3. viperx — 04.08.2021

My simple karaoke model that I trained in month 5 up to epoch 25/28; the training isn't complete because I've been busy with other projects and left this one aside, but this simple model removes the second voice. It can be useful only in some cases - it's bad, but acceptable.

  4. centre isolation model epoch 0 inner epoch 1 - 150 pairs for UVR 4.0.1

  5. K-POP FILTERS — 02.07.2021

model_0_0_1024_2048.pth

feedback will be appreciated

Check #model-sharing for current WiP models

__Repository of other Colab notebooks__

UVR 5 (Colab by HV): https://colab.research.google.com/github/NaJeongMo/Colaboratory-Notebook-for-Ultimate-Vocal-Remover/blob/main/Vocal%20Remover%205_arch.ipynb

(On Mobile Chrome use PC mode)

Alternative UVR 5 notebook up to date (not HV’s):

https://colab.research.google.com/github/lucassantilli/UVR-Colab-GUI/blob/main/UVR_v5.ipynb#scrollTo=-KYA8iOZ8BKq

MDX (Colab by CyberWaifu, 4 stems; cannot be used in mobile Chrome even in PC mode - there's no GDrive mounting and track downloading is always at 0%. Model A is cleaner but with more bleeding; Audioshake is based on it, but with a different model trained on a larger dataset iirc. The UVR team considered training it on their own, bigger dataset to get better results - it's based on phase, unlike UVR, but tsurumeso is working on adding phase, so it might then get rewritten into UVR)

https://colab.research.google.com/drive/1R32s9M50tn_TRUGIkfnjNPYdbUvQOcfh?usp=sharing

(wait patiently, it doesn’t show the progress)

UVR 5 (old version by HV with any 2 files ensemble feature, put tracks in separated folder. As for x/z - similar results, but not the same. Put as first the one you want the result more similar to)

https://colab.research.google.com/drive/1eK4h-13SmbjwYPecW2-PdMoEbJcpqzDt?usp=sharing

https://colab.research.google.com/drive/1C6i_6pBRjdbyueVw27FuRpXmEe442n4k?usp=sharing#scrollTo=CT8TuXWLBrXF (+12 ens, no batch ens, deleted)

2021-ISMIR-MSS-Challenge-CWS-PResUNet (byteMSS) (if you run out of memory, split up the input file)

https://colab.research.google.com/drive/17m08bvihZAov_F_6Rg3luNj030t6mtyk?usp=sharing

Woosung Choi's ISMIR 2020 (Colab by CyberWaifu)

https://colab.research.google.com/drive/1jlwVgC9sRCGnZAKZTpqKgeSnzP3sIj8U

Vocal Remover 4:

https://colab.research.google.com/drive/1z0YBPfSexb4E7mhNz9LJP4Kfz3AvHf32

To fix the librosa error, try adding the line

!pip install librosa==0.8.0

(a 0.9.x version works as well), and if you still get the same error, add a similar line for pysound as well:

https://discord.com/channels/708579735583588363/767947630403387393/1089518963253317652

https://colab.research.google.com/github/burntscarr/vocal-remover/blob/main/vocal_remover_burnt.ipynb

(UVR4 + models description:

https://github.com/Anjok07/ultimatevocalremovergui/tree/v4.0.1

Search for:

"Models included" at the bottom".)

UVR 2.20 (it achieved some good results for old 70’s pop music for me where cymbals got muffled on current models, but prepare for more bleeding in some places vs VR4 and newer)

https://colab.research.google.com/drive/1gGtjAo3jK3nmHcMYTz0p8Qs8rZu8Lhb6?usp=sharing

Spleeter (11/16kHz, 2, 4, 5 stems, currently doesn’t work): https://colab.research.google.com/drive/1d-NKFQVRGCV5tvbd0GOy9spMMel6mrth?usp=sharing

In my experience, if you don't need the piano stem, the 4-stem model does a better job than the 5-stem one (and even than the 2-stem one, which is also reflected in SDR results). Use the 11kHz models only if your input files are sampled at 22kHz (they will provide a better result in this, and only this, case).

If you can, use iZotope RX 8 for 22kHz 4-stem separation, as it provides better separation quality with an aggressiveness option. It's Spleeter, but with a better (full-band) model.

Demucs 3.0

https://colab.research.google.com/drive/1yyEe0m8t5b3i9FQkCl_iy6c9maF2brGx?usp=sharing

To install it locally (by britneyjbitch):

I cracked the Da Vinci code on how to install Demucs V3 sweat_smile For anybody who struggled (on Windows) - I got you!

1. DL a zip folder of Demucs 3 from Github (link: https://github.com/facebookresearch/demucs) and extract it in a desired folder

2. Inside the extracted folder run cmd

3. If you want to simply separate tracks, run the following command:

                  python.exe -m pip install --requirement requirements_minimal.txt

4. If you want to be able to train models too, run the following command:

                  python.exe -m pip install --requirement requirements.txt

5. If a read error for incompatible versions of any of the modules appears (e.g. torch) run the following command:

                  pip install desired_module==version_of_desired_module

                  e.g.  pip install torch==1.9.0

6. Repeat step 5 for any incompatibilities that might occur

7. Separating tracks:

          python.exe -m demucs  -n "desired_model_to_run_separation" "path_to_track"

8. If you want help finding all additional options (for example overlap or shifts), run:

          python.exe -m demucs --help

At least that worked for me, feel free to let me know if this worked for others as well

exclamation Oh, and I forgot - between step 6 or 7, don't pay attention to a potential red error ''torchvision 0.9.1+cu111 has requirement torch==1.8.1, but you'll have torch 1.9.0 which is incompatible.''

Do NOT change back to torch 1.8.0 cuz you won't be able to run demucs

warning! If ''torchvision 0.9.1+cu111 has requirement torch==1.8.1, but you'll have torch 1.9.0 which is incompatible.'' is the only red error you're getting after executing the commands from step 3,4 and/or 5, you're good to go with separation!
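For example, assuming the pretrained mdx_extra model that ships with Demucs v3, a full 4-stem separation into the default "separated" folder would look like:

          python.exe -m demucs -n mdx_extra "path_to_track"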

Demucs (22khz, 4 stem):

https://colab.research.google.com/drive/1gRGRDhx9yA1KtafKhOaXZUpUoh2MuF8?usp=sharing

https://colab.research.google.com/github/facebookresearch/demucs/blob/master/Demucs.ipynb

https://colab.research.google.com/drive/1gRGRDhx9yA1KtafKhOaXZUpUoh2MuF_8?usp=sharing

Other one(s):

LaSAFT:
https://colab.research.google.com/drive/1XIngzXDi2mF_y6WwDrLLx4XZtI8_1FAz?usp=sharing


(original, cannot define model ATM)

https://github.com/ws-choi/Conditioned-Source-Separation-LaSAFT/blob/main/colab_demo/LaSAFT_with_GPoCM_(large)_Stella_Jang_Example.ipynb

If you cannot load the file, upload it manually to your Colab, or just wait patiently. Refresh Github page with CTRL+R if you can’t see the code preview.

Also check out this LaSAFT download with a message claiming the superiority of the 2020 model (said in March 2021).

Clone voice:

https://colab.research.google.com/github/tugstugi/dl-colab-notebooks/blob/master/notebooks/RealTimeVoiceCloning.ipynb

Matchering:

https://cdn.discordapp.com/attachments/814405660325969942/842133128851750952/MatcheringColabSimplified.ipynb

For more Colabs search for colab.research.google.com on our Discord server

__Google Colab troubleshooting (old)_

  • Error of authorisation during mounting:
TL;DR - you need to be logged into the same Colab account as the one whose drive you want to mount later, or just change your Colab account.

It was introduced to Colab at some point. Once, when I tried to log into another account during mounting, a new window appeared with only one account, where the wanted account didn't show up, and when I manually signed in to it, Colab showed an error about unsuccessful authorisation. When I instead changed the account in the top-right corner to the same account I wanted to choose during mounting, everything went fine as it always used to - the full list of accounts appeared. HV Colabs already have the new mount method implemented, so the old one doesn't cause the error, but in the UVR notebook you can choose between the new (default) and the old one (just in case Google changes something again).

  • Try to log into another Google account(s) if you cannot connect with GPU anymore and/or you exceeded your GPU limit

  • (cannot really say if it’s really helpful at this point)

Paste this code into the console (Chrome: Ctrl+Shift+I, or … > More tools > Developer tools > Console) to avoid disconnections from the runtime environment while you're AFK, or if you run into issues of being unable to connect to a GPU after reconnecting following idle time (possibly also after the code was executed and you were AFK for too long). It won't prevent one captcha from showing up during the session.

interval = setInterval(function() {

    console.log("working")

    var selector = "#top-toolbar > colab-connect-button"

    document.querySelector(selector).shadowRoot.querySelector("#connect").click()

    setTimeout(function() {

            document.querySelector(selector).shadowRoot.querySelector("#connect").click()

    }, 1000)

}, 60*1000)

It will constantly reclick one window to appear in Colab to prevent idle check.

Repository of stems/multitracks from music - for creating your own dataset

Dataset search engine

https://datasetsearch.research.google.com/

Up-to-date list of datasets

https://github.com/Yuan-ManX/ai-audio-datasets-list#music

33 datasets compilation list:

https://sites.google.com/site/shinnosuketakamichi/publication/corpus

musdb18-hq

https://drive.google.com/file/d/1ieGcVPPfgWg__BTDlIGi1TpntdOWwwdn/view?usp=sharing (14GB 7z)

https://zenodo.org/record/3338373#.Yr2x0aQ9eyU (mirror, 22GB zip, it can be slow at times)

(prev. mega link dead)

https://promodj.com/tools

There is a lot of filtered trash, but you can also find official acapellas.

https://zenodo.org/record/3553059

DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation

seems to be a really big dataset of instrumental-amateur vocal-mix with compression and such triplets.

https://remixpacks.ru (it's been taken down; it's under the .net domain now [not sure if the site content is the same])

Archive.org copy

https://web.archive.org/web/20230105142738/https://telegra.ph/Remixpacks-Collection-Vol-01-04-12-25

or here:

https://docs.google.com/spreadsheets/d/1_dIFNK3LC8A40YK-qCEHhxOCFIbny7Jv4qPEoOKBrIA/edit#gid=1121949630

(separate downloads)

or here:
https://docs.google.com/spreadsheets/d/1BtUSgPffbcaW4bMuGClYi8FGvaYmYyc1p4SkfpNty-U/edit

or here:

magnet:?xt=urn:btih:45a805dbd78b8dec796a0a127c4b4d2466ddbb9a

List with names:

https://docs.google.com/spreadsheets/d/1uCWmuAUfvVLonbXp9sQUb9dEODYTHmPAOyvGxulMOCA/edit?usp=sharing

Renamer - python script

https://mega.nz/file/gEgwwaaB#BCDDMpl-VcIZDnNYQziyklOV9Vpf43wuc76hsS3JTlw

Showcase

https://www.youtube.com/watch?v=95Q31HjU04E

“For those that aren't able to d/l the torrents anymore, or just want to d/l some of the remixpacks content,

I uploaded all 26 collections (~3TB) here: https://remixpacks.multimedia.workers.dev/

DM me to request username/password.” Bas Curtiz#5667

https://clubremixer.com/ - outrageously big database, probably reuploads from remixpacks too (but on slow Nitroflare or simply paid iirc)

https://songstems.net/ - lots of remixpacks stuff reuploaded from masterposts of clubremixer.com to Yandex (free Nitroflare is “20kb/s”)

https://www.acapellas4u.co.uk/

https://multitracksearch.cambridge-mt.com/ms-mtk-search.htm

A nice collection of legally available multitracks.

"I believe about 2/3rds of musdb18's tracks are taken from this."

Great as a dataset for creating stem-specific models like acoustic guitars, electric guitars, piano, etc. You just take the stem file you want and combine the rest

Slakh2100 dataset (2100 tracks), mono, guitar + piano, and a LOT of other stems, no vocals

If we were to ever train a multiple-source Demucs model, it would be greatly helpful

https://drive.google.com/file/d/1baMOSgbqogexZ5VDFsq3X6hgnIpt_bPw/view

https://github.com/ethman/slakh-utils

https://drive.google.com/file/d/1sxdNk0kekvv8FwDvzNypYe6Nf7d40Iek/view?usp=drivesdk

Collection of 40K instrumentals and acapellas (lossy - rather avoid using such files for training, and search for lossless if possible)

https://docs.google.com/spreadsheets/d/1NuQV8cfFPehvIwPBUGOMbiC4FSei2p923qC6af5tCV8/

Metapop

https://metapop.com/competitions?p=1&status=ended&type=all

“Most of them have a click through to download stems. You might need to automate downloads using Simple Mass Downloader browser extension or something. Some are remix competitions, some a production, but all have stems."

SFX

Datasets for potential SFX separation

- https://cocktail-fork.github.io/ (SPEECH-VOICE-SFX (3 stems), 174GB)

- https://www.sounds-resource.com/

- https://mixkit.co/free-sound-effects/game/

- https://opengameart.org/content/library-of-game-sounds

- https://pixabay.com/sound-effects/search/game/

- https://www.boomlibrary.com/shop/?swoof=1&pa_producttype=free-sound-effects

____

Mega collection of stems/multitracks (remixpacks - Guitar Hero, Rock Band, OG)

https://docs.google.com/spreadsheets/d/1_dIFNK3LC8A40YK-qCEHhxOCFIbny7Jv4qPEoOKBrIA

Rock Band 4 stems (free Nitroflare mirror)

https://clubremixer.com/rb4-stems/

GH stems from X360 instead of Wii for better quality https://www.fretsonfire.org/forums/viewtopic.php?f=5&t=57010&sid=3917a8e390f65097f07d69595dd5ba55

(free registration required, basically content of all zippyshare links of the PDF below:)

PDF with separate RB3-4 stems description and DL (lots of links are offline as zippyshare is down), page 6 shows some table of content with evaluation progress.

https://cdn.discordapp.com/attachments/805980610610593802/1098393004156403712/toaz.info-stemspdf-pr_7a1e446f01c9b1666a9bebe9fd51f419.pdf

Huge database (probably contains some of the above)

https://songstems.net/

Others:

https://rutracker.org/forum/tracker.php?f=2492

https://thepiratebay.org/search.php?q=multitrack&all=on&search=Pirate+Search&page=0&orderby=

https://thepiratebay.org/search.php?q=multitracks&all=on&search=Pirate+Search&page=0&orderby=

Jammit

https://cdn.discordapp.com/attachments/773763762887852072/791275338570661888/Jammit.torrent

"the audio files can't be mixed directly. You need to apply a gain reduction of 0.77499997615814209 (in dB : -2.2139662170837942) on each track to get a perfect mixdown. This factor is about to set a 0dB on the original jammit mixtable."

____

Multitracks. Looks like paid, but it has also few pages with some free ones (e.g. Fleetwood Mac, not sure if free)

https://isolated-tracks.com/

This is also paid, but it has less known music

https://www.multitracks.com/

Can be ripped. Some tracks there will have to be ruled out due to bleeding. Plenty of genres. Might be good for a diverse dataset.

https://www.epidemicsound.com/music/search/

MoisesDB

https://developer.moises.ai/blog/moises-news/introducing-moisesdb-the-ultimate-multitrack-dataset-for-source-separation-beyond-4-stems

"Total tracks in MoisesDB: 240

How often folders exists for track: ('vocals', 239), ('drums', 238), ('bass', 236), ('guitar', 222), ('other_keys', 110), ('piano', 110), ('percussion', 99), ('bowed_strings', 45), ('other', 39), ('wind', 26), ('other_plucked', 7)"

“MAESTRO” is a dataset composed of about 200 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms.

https://magenta.tensorflow.org/datasets/maestro

GiantMIDI-Piano [1] is a classical piano MIDI dataset containing 10,855 MIDI files by 2,786 composers. The curated subset, obtained by constraining composer surnames, contains 7,236 MIDI files by 1,787 composers. GiantMIDI-Piano is transcribed from live recordings with a high-resolution piano transcription system

https://github.com/bytedance/GiantMIDI-Piano

MedleyVox dataset -

for separating different singers, which they refrain from releasing the model for (and Cyrus eventually did it single-handedly):

https://github.com/CBeast25/MedleyVox (13 different singing datasets of 400 hours and 460 hours of LibriSpeech data for training)

https://zenodo.org/record/7984549

Be aware that the only MedleyVox dataset which remains unobtainable to this day is TONAS, but it's small, especially compared to the Korean datasets. Besides this one, queer and Cyrus have them all on our Discord, but they're huge - Ksinger and Ktimbre take ~300GB unzipped combined.

Jarredou’s (@rigo2) dataset with screaming, cheering, applause, whistling, mumble, etc... collected from all the sources I've found, to help model creation:

+5000 stereo wav files, 44100hz

~37 hours of audio data

(Link on Discord DM to avoid takedown issues)

- "Ultimate laugh tracks for sitcoms, game shows, talk shows, and comedy projects" (available on Amazon Music and Apple Music; ripped - the YT upload has similar-looking spectrograms)

- Laughter-Crowd Dataset #2.zip https://terabox.com/s/1xLuZWvpGX0LTQypO1p7u_g

This is also paid, but it has less known music

https://www.multitracks.com/

“those are covers from famous songs, but all in multitracks.

and from what I've listened to so far, is that they are pretty conservative.

aka the vocals all seem to be dry and none seem to contain bleed so far.

also the instrument stems are proper / not mixed up with other instrumentals.

and the stems are the exact same duration.

all in all a solid dataset right off-the-bat imo.

i should've calculated it prior, what the better subscription was, the 10gb or 20gb a day one vs. price vs. content approx. in total.

52mb (wav) * 12 (multitracks) = 624mb per song

4.766 songs * 624 = 2973984 mb = 2.97tb

weekly limit = 70gb * 4 (weeks) = 280gb = 280000mb

2973984 / 280000 = 10,6 weeks in total.

uhoh.

10,6 / 4 = 2,65 so 3 months x $30 = 90 bucks - oof. didnt expect that.”

- Maybe you find something useful on sharemania.us too (160 lossy/261 lossless)

- Seems 'acapella tools' or 'instrumental tools' are good key-words to search for.

some are covers of original tracks, but that shouldn't matter, since they represent the same.

this is on deezer, but u might find others on tidal.

Drums

StemGMD: A Large-Scale Audio Dataset of Isolated Drum Stems for Deep Drums Demixing

(although drumsep used bigger dataset consisting of MIDI sounds to avoid bleeding, with XLN only)

Virtual drumkits

The advantage is that “you can have zero bleed between elements, which is not possible with real live drums.”

You can create “more than 300 drumkits as virtual instruments (Toontrack, Kontakt, XLN, Slate, BFD - the XLN ones are nice too, from their trigger and drums VSTs) + a Reaper framework to multiply that by 10 (using heavily different mixing processes for each drum element), so potentially 3000 different sounding drumkits.”

“one could use producer sample packs/kits for more modern samples” there are tons of packs around the net.

jarredou (rigo2):

For those interested, I'm sharing on demand my drums separation dataset (send me dm)

It's not a final version. I've realised after generating 130h of audio data that I've made a mistake in routing, leading to some occasional cowbell in snare stems. So it's  [Kick/Snare-cowbell/Toms/HiHat/Ride/Crash] stems.

I've stopped its rendering and will not do the final "mastering" stage that was planned.

I will make a clean no-cowbell version, but as I'm lacking free time, I don't know when; and since this one is here and already sounds great, why not use it in the meantime.

Just don't mind the cowbell !

---

https://www.monotostereo.info/

“Helped me find not only tools but also other resources like research papers, etc on audio source separation in general. A fantastic resource for anyone into audio source separation”

For more links check #resources

List of cloud services with a lot of space

or temporary ones

50GB

https://fex.net/en/

32GB

https://www.transferfile.io/

Decentralized file hosting. If it goes down, perhaps the link can be replaced by ipfs.io

Unlimited

https://catbox.moe/

200MB max file size

(files kept forever, donations)

https://buzzheavier.com/

It optionally creates not only download link, but also torrent file

Unlimited

(at least any info in the account panel on it cannot be found)

https://qiwi.gg/

15GB

4shared.com

Max 3GB of daily traffic, 30GB per month

(Once I got my account deleted after years, maybe due to inactivity; even if I was warned, the messages were going into Gmail spam)

Expiring/problematic

Unlimited, but usually 14GB (10 day expiry date till last DL):

https://gofile.io

Don’t use it,

as certain files, e.g. 1GB in size (at least in some cases), can only be downloaded with a premium account when servers are overloaded - you can keep visiting the link for the 10 days until it expires in hope of the server being offloaded, and still not be able to download the file at all. GFY, WS.

Unlimited (till their server space is full, which sadly is often the case,

file get deleted after 6 days):

https://filebin.net/

50GB without registration (up to 14 days expiry date):

https://dropmefiles.com/

250GB:

https://filelu.com/ (expires 15/45 days after last download [un/reg], or never for 1TB 6$ per month)

1000GB:

https://www.terabox.com/

(but I have some reports that after uploading X amount of data (it depends) they block the account and tell you to pay)

100GB:

https://degoo.com/

(but Degoo has bots which look for DMCA content, and in such cases they close even paid accounts, or delete some files without any reason)

100GB:

https://filetransfer.io/

(file expires after 21 days / 50 downloads; max 6GB per file. It could mess up filenames when downloading from a direct link, which was also possible [at least in the past] and could be used e.g. in Google Colab - there was a sneaky method of extracting direct links by clicking the download buttons instead of sharing classic links)

Unlimited

10GB/file, FTP

Depositfiles/dfiles.eu

They have existed since forever (2006) and probably haven't collapsed thanks to nightmarishly slow download speeds (20 or maybe 50KB/s, can't remember) for at least non-Gold accounts.

Unlimited

Chomikuj.pl

Only 50MB of downloading for free

Also exists since forever, and hasn't collapsed despite many court cases, probably thanks to creative changes of owners from specific countries. It can also be used as a public sharing disk, with transfer points earned for content downloaded from your disk.

It has happened in the past that, over the years, someone with a very big collection of files had - I think - even some encrypted private files deleted, but usually only DMCA'd files are taken down.

Pixeldrain

Some files uploaded on Pixeldrain are only available for Pro users.

On some files it will just tell you that servers are overloaded, and the error will last for days, weeks even, and not let you download. So I'd rather refrain from using it.

Bigger popular cloud services

20 GB

https://mega.nz/

(no expiry, the usual DMCAs)

15 GB

https://fileditch.com/

(it allows sharing direct links, like from FTP, or like GitHub does when you upload a file to a release page, where bigger files can also be uploaded vs the 50MB limit for source files. I see 9-month-old links from fileditch still active. Download can however be slow, ~5 Mbit/s, on old files that have not been accessed in 30 days.)

10.2 GB

https://safe.fiery.me

(I think this has no expiration, not sure)

11GB

https://disk.yandex.com/client/disk

(rare DMCAs vs Mega; since 2022 registration is difficult without a phone, and/or when you use a public SMS receiving gate and/or a VPN, e.g. a Russian one - they can block access to the disk right after registration if they detect something suspicious during the registration process; after the war they decreased the max file size limit to 1GB iirc)

15GB

Mediafire

OneDrive

Dropbox

Unlimited

2GB/file

GitHub - on the release page of (at least) a public repository, you can upload files up to 2GB. You can split your archive into parts if necessary.

It’s perfect to use in Colab notebooks. Very reliable and fast.

Temporary file uploads that expire anytime:

https://litterbox.catbox.moe/

(1 GB, up to 3 days)

https://tmp.ninja/

(10 GB)

https://pomf.lain.la/

(512 MiB)

https://uguu.se/

(128 MiB)

https://cockfile.com/

(IK, funny name, but 128 MiB)

https://sicp.me/

(114 MB)

https://www.moepantsu.com/

(128 MiB)

Hint. In the case of some free, not very well-known services which can disappear after some longer period of time (remember RapidShare, copy.com, hotfile, megaupload, catshare, freakshare, uploaded.to, fileserve, share-online.biz, odsiebie, hostuje.net?), it's better to keep your files in more than one service, or stick to some popular big-tech companies which are unlikely to disappear soon - e.g. if another war breaks out and rising energy costs make smaller services unprofitable.

Paid:

- You can get 1TB OneDrive with an .edu email

- If you sign up for the Google Workspace (it was called G Suite until recently) version of Google Drive, you can get 1TB for ~$10 USD a month, but here's the thing... I have been way over 1tb for a couple of years now, and they have never charged me anymore. I am over 4tb now and have been for ~3 months, and it is still only ~$10. If you do it, just create a 1 user account and just keep filling it up until they say you need to add more users or pay more.

Well it looks like it is $12 now, but it's for 2tb and maybe that is what they change my plan to and are charging me now too… I thought there was some kind of surcharge and tax (never really paid attention to the exact amount) but guess it is just $12 + tax now...

https://workspace.google.com/pricing.html

it looks like they might have gotten rid of it, but it used to be $50/month for unlimited storage but I think as long as you do what I do, I think it is probably close to unlimited for $12/month

It's pretending you're in a college and college drives have infinite storage

I used to have one for 1-2 years but it suddenly got removed, so it's not safe. All of the files are gone too, without notice

BTW. For workspace you still need to have your own domain (with the possibility of changing DNS entries, so free ones are out). Yearly cost is negligible, but you have to remember about it.

- Also, if you have ProtonVPN on the Proton Unlimited plan, you get 500GB of storage on Proton Drive for free.

- Also, Google Pixel 1 phones used to have unlimited, or at least bigger, GDrive plans iirc (this was withdrawn for later Pixel phones). Some people bought these phones just for the space.

- You can get very cheap 2TB (around $16) for a year on Google Drive in Turkish lira (I think they only changed the payment methods, not necessarily the whole region), but some people say it's better to get it in Brazil due to fewer problems.

I heard it's better not to buy it in lira on your main account, because your apps can get region-locked (e.g. Tidal). Some people even had problems with the currency in their other accounts - you can change it only once a year on an account, and in case of some emergency you might be forced to use Revolut cards. There is a lot of misinformation about that promo trick, so verify it all, but there should be a reasonable amount of info scattered around the net already (e.g. hotukdeals, pepper.pl, or the German counterpart).

- 50GB on Dropbox from some Galaxy phones (e.g. S3) can no longer be redeemed (since ~2016 I believe)


https://www.multcloud.com/

Service allowing moving files across various cloud accounts and services

____________

Pitch detection and conversion to MIDI

https://discord.com/channels/708579735583588363/708579735583588366/1019280811461181510

_________________________________________________________________________

(outdated)

(for old MDX SDR 9.4 UVR model):

(input audio file on Colab can be max 44kHz and FLAC only).

The original MDX model B was updated, and to get the best instrumental you need to download the inverted instrumental from the Colab.

Model A is 4 stem, so for instrumental, mix it, e.g. in Audacity without vocals stem (import all 3 tracks underneath and render). Isolation might take up to ~1 hour in Colab, but recently it takes below 20 minutes on 3.00 min+ track.

If you want to use it locally (no auto inversion):

https://discord.com/channels/708579735583588363/887455924845944873/887464098844016650

B 9.4 model:

https://github.com/Anjok07/ultimatevocalremovergui/releases/tag/MDX-Net

Or remotely (by CyberWaifu):

https://colab.research.google.com/drive/1R32s9M50tn_TRUGIkfnjNPYdbUvQOcfh?usp=sharing

 

Site version (currently includes 9.6 SDR model):

https://mvsep.com/

You can choose between MDX A or B, Spleeter (2/4/5 stems), and UnMix (2/4 stems), but the output is MP3 only.

The new MDX model released by the UVR team is currently also available on mvsep. If you have any problems with separating in a mobile browser ("file type not supported"), add an additional extension to the file: trackname.flac.flac.

MDX is really worth checking out, even if you get some bleeding and the UVR model cuts some instruments in the background.

CyberWaifu Colab troubleshooting

If you have a problem with noise appearing after a few seconds in the result files, try using FLAC. After an unsuccessful isolation attempt, you can try restoring the runtime to its default state in the options. The bug appeared suddenly a few days after the Colab was released and persists to this day (so WAV no longer works). If you run the first cell to upload and afterwards, after opening the file view, one of the 001-00X wav files is distorted (000 after a few seconds), it means it failed, and you need to start over until all the files play correctly. After longer isolation this may lead to reaching the GPU limit, and you will not be able to connect to a GPU; to fix it, switch to another Google account. If your stems are too long and mixed with a different song, restore the default runtime settings as well, or delete them manually

_________________________________________________________________________

(outdated, feature deleted from HV Colab) Be aware that normalization turned on in the Colab for instrumentals achieved via inversion may lead to some artifacts during inversion, but the general mix quality and the snare in the mix may be louder and sound more proper with normalization on. It's not a universal solution in every case, though - the track (at least in some parts of it) may sound a bit off compared to the flatter sound with normalization turned off.

AI killing tracks - difficult ones to get instrumentals - a lot of e.g. vocal leftovers in current models

"instrument-wise, the problematic ones I can remember are:

alto sax, soprano sax, any type of flutes/whistles (including synths), trombone slides, duduk, some organ sounds (close to a sine wave sound)" plus harmonica, erhu, theremin.

"even if some models do a bit better job than others, these instruments are still problematic because their timbres are close to humain voice"

And in general - songs heavily sidechained, with robotic, heavily processed vocals, sometimes with lots of weird sounding overdubs where some are missed (e.g. in trap).

Anjok stated that the hardest genre for separation is metal and vocal-centered mixes. If the instrumental has a lot of noise, e.g. distorted guitars, the instrumental will come out muddier.

Tracks from 70-80s can separate well. 50-60s will be harder, e.g. recorded in mono. Early stereo era gets a little better.

GSheet with more tracks opened for everyone with Google account to contribute (we kinda tried not to duplicate any songs in both places too much).

  • Childish Gambino - Algorithm (robotic vocal effects, autotune, echoes, specific processing plugins on vocals, constant audible vocal residues for all current models)
  • tatu - Not gonna get us ("This song is impossible to quality separate by any model. Our dataset contains several songs by this artist, but this did not improve the result in any way. Just forget about it for a few years") - IIRC, the result was enhanced by slowing down, a.k.a. the soprano option on x-minus.
  • Eric Prydz - Call On Me (aggressive sidechain compression “It's literally ditching the vocal part [and instruments] out to make room for the kick. So yeah, good luck in getting that vocal back.“)
  • Jamiroquai - Virtual Insanity ("One of the most difficult challenges of all my experience has been that it is not very well handled even when maxing out quality in v5.")

Others:

  • Beyonce - I'm that girl
  • half•alive - Still feel
  • Queen - White Queen
  • Queen - Bohemian Rhapsody (very complicated song; mix of various vocals and guitars)
  • Queen - These Are The Days Of Our Lives - to evaluate BVE model and how it reacts with harmonies. If it works on this track, probably all the others will work.
  • J Dilla - Don't Cry (lots of so-called lo-fi “cuts” or chops of vocals from old vinyls, characteristic for hip-hop productions and harder to separate)
  • Lots of Juice WRLD (his tracks have leftovers here and there, e.g. in "Off the rip (Gamble)")
  • Eminem - No regrets (constant low-volume vocal leftovers)
  • Louis The Child - Better not (problem with vocals with currently the best MDX23 MVSEP beta model, and also Demucs ft and Kim model)
  • A$AP Rocky - Fashion Killa (same as for “Night Lovell - Dark Light” - "almost every AI can't separate the main vocals from the melody, the melody has a part that sounds like vocals, so just about every AI picks some of it up in the vocals section instead of the instrumental section")
  • Porcupine Tree - Don't Hate Me ("Quiet bits, loud bits, flutes and strings, things I can't even name plus all the usual suspects, drums etc, and Steven Wilson has a crisp clean voice a lot of the time")
  • Thomas Anders - You Will Be Mine (vocal residues in instrumentals using all current models for April 2023)
  • Modern Talking - One in a million (also minimum vocal residues)
  • Modern Talking - Mrs. Robota (too many synthesizer effects bleed in vocals of MVSEP MDX23 model)
  • Crush 40 - Live & Learn'
  • JPEGMAFIA - HAZARD DUTY PAY! (hard to get vocals from rapping section, Kim vocals 1)
  • Bjork - All Is Full Of Love (2:06, 2:12, 2:50 and throughout from that point, the vocals partially still bleed. Tested on MDX Inst 3, Inst HQ 1 and 3, Inst Main, Kim Inst, HTDemucs, HTDemucs FT and 6S, and ensembles including (Kim vocal 2, Kim Inst, Inst Main, 406, 427, HTDemucs FT) and (Kim Inst, Voc FT, Inst HQ 3)

  • South Park - Chocolate Salty Balls (bad results with most models)
  • Tally Hall - Never meant to know (the almost impossible goal for now is to remove "with" in 2:39).
  • WWE - Demon in Your Dreams - (here's a track that sounds bad - the parts where the vocals usually are sound muffled and dull, guitars are barely audible - HQ_3, Demucs 6s tested)
  • Taylor Swift - Better Than Revenge (Taylor's Version) (background vocals in all models including HQ3 and voc_ft - using Dolby Atmos version, and (I think just) muting (?vocal) channel(s) helped)
  • Bon Jovi - I believe
  • Twenty One Pilots - The Hype (“voc_ft leaves too many perc/drums/synths [in vocals] that sounds like t's and s' or just sound like vocals, and it's really annoying, also because of this nearly no other model can separate it either because they think it's part of the vocals, but it's mostly just synths, Ripple put a lot of echo into the other stem”)
  • The Weeknd - Until I Bleed Out  (vocal stem includes a bunch of drum and synth bleeding. Tested on htdemucs_ft, VocFT, Inst HQ 3, InstVoc HQ 2, Kim 1, Kim 2, Kim inst, and ensembles (htdemucs_ft, VocFT, Inst HQ 3, InstVoc HQ 2), (Kim 2, Kim inst, Inst Main, 406, 427, htdemucs_ft) and (Kim inst, VocFT, Inst HQ 3))
  • Travis Scott - Nightcrawler (vocal residues at 1:27, 2:24, 3:56, and 4:50 using BS-Roformer 1296 in UVR beta with overlap 2/8 - fewer than in other models, though it's more muddy; the 04.24 model on MVSEP leaves even less, discussion)

Duplicates from GSheet

- Moby - Porcelain (in Gsheet; in instrumental, vocal reverb bleeding at 1:00, and bleed at 2:20, all good MDX models, GSEP, MDX23 by ZFTurbo tested, still getting more or less the same results)

  • Queen - March Of The Black Queen (always causes issues, the best result on Full Band 8K FFT, as for 06.08.23, but still lot of BV is missed)
  • Night Lovell - Dark Light ("almost every AI can't separate the main vocals from the melody, the melody has a part that sounds like vocals, so just about every AI picks some of it up in the vocals section instead of the instrumental section")
  • Bob Marley - Sun is Shining (all current models bleed in the same timestamps: 1:02, 1:42, 1:54, 1:57, 2:50)
  • Daft Punk - Give back life to music (problem with vocoder in the vocals rendering bad instrumental results)
  • Daft Punk - Within (robotic voices)

Tracks to compare weaker vs more effective models in instrumentals (e.g. inst 464/Kim inst or HQ_2/3 vs all others)

  • O.S.T.R. - Incognito (non Snap Jazz version) (lo-fi Polish hip-hop with constant vocal leftovers in all models and AIs except MDX-UVR inst 1-3, main where inst 3/464 performs the best, it’s also good to test an influence of various chunks settings at 1:53. Publicly available songs for datasets usually don't include hip-hop at all, especially not from some low, weird sounding languages with loud, bassy, over processed voices. In Snap Jazz version also in 464 there are e.g. less vocal residues than on GSEP - still slightly hearable).
  • Kaz Bałagane - Stara Bida (constant vocal leftovers in all models and AIs except MDX-UVR inst 1-3 and inst main where inst 3/464 performs the best [good to test weaker models or specific epochs], flute from 1:11 gets deleted on MDX-UVR HQ models).
  • The Weeknd - Hardest To Love (htdemucs_ft did well here).
  • NNFOF - Jeśli masz nierówno pod sufitem (all MDX-UVR instrumental models will filter out inconsistently flute from the track, while GSEP handles that song well - it happens for all kinds of songs containing flute and oriental instruments)
  • Ace of Base songs (“any of them have those flute-ish synthetic instruments which have always been a nightmare in terms of getting a flawless a cappella”).
  • Różewicz Interpretacje (Sokół) - Wicher (very deep and low rap voices cause problems with weaker models, e.g. original MDX23 on mvsep1.ru (now MVSEP.com)/ZFTurbo MDX23 Colab; you can also try out also Sokół - Nic and Sokół - Wojtek Sokół albums)
  • Chaos (O.S.T.R., Hades) - Powstrzymać Cię (lots of bleeding in e.g. MDX23 model on MVSEP in 2:00. Not that much in Kim inst)
  • DJ Skee & The Game (from 2012 mixtape) or Tyler the Creator (album version from 2011) - Yonkers (same beat prod. by Tyler the Creator)
  • (the first from mixtape with more cuts/vocal chops difficult to get rid of. HQ models usually confuse vocal chops with vocals, but here it might be useful)
  • Avantasia - The Scarecrow (HQ3 generally has problems with (here bowed) strings. mdx_extra from Demucs 3 had better result, sometimes 6s model can be good compensation for these lost instruments)
  • Static Major - CEO (if someone wants to test out isolation of many vocal layers using e.g. Melodyne)
  • oikakeru yume no saki de - sumikeke (here vocal layers extraction Karaoke models and Melodyne fail)
  • Dizzy Wright - No Writer Block (hard track to keep hi-hats consistent throughout the whole output with even some snares - it can all get easily washed out, also more vocal leftovers in MDX23C ensemble on MVSEP1.ru vs MDX23 2.1 Colab [despite better SDR], not bad GSEP result, but it makes hi-hats like a bit out of rhythm probably due to some built-in processing in GSEP)
  • Dariacore - will work for food (generally that whole Dariacore album can be tasking due to its loudness and “craziness”)
  • Centrala Katowice - Reprezentowice (first version of GSEP in 192kbps was consistently failing in picking up vocals and also leaving strong vocal residues)

See #your-poor-results on our Discord server for more (also deleted).

Warning. Recently our long-term users have received warnings from Discord about possible deletion of their accounts after uploading lots of music in #your-poor-results or the (now deleted) #your-good-results, or lots of separation examples elsewhere on our server (or any other server), and the whole good results channel got deleted. If you do this very frequently, we advise sharing only links to e.g. GDrive instead of uploading music to Discord itself. So far our users have received a couple of warnings from Discord without their accounts being deleted yet; the whole good results channel got deleted even after switching to linking to uploads instead of uploading, following the last clean-up we got.

Training models guide

Introduction

Technically, it’s three files for training e.g. a vocal model - vocals, instrumental, and mixture. If you try to train without the mixture, the results will be “terrible” (Kim).

(Based on Anjok’s interview, around 0:40:00)

For training a new model, use at least 200 samples for the model to achieve any good results. Anything below that might give results you won't be happy with; anything above that will, of course, give better results.

MDX-Net turned out to be easier than VR when it comes to picking proper training parameters.

In the case of e.g. MDX-Net, you take into consideration how big your model is intended to be via the fft parameter, which determines the cutoff of the model, and also the in-out channels (the size of the channels, long story short) - they increase the size of the model and the resources needed for training.

So if you have a smaller dataset, your model doesn’t have to be that large.

If you crank up the model size too much for a small dataset, you're putting yourself at risk of overfitting. It means the model will work very well on the data it was trained on, but not so well on unknown songs it wasn’t trained on.

In the opposite situation, a large dataset with a small model size, there won’t be much learning at all - the model will basically forget features of the larger dataset. You need to find a balance here.

Batch size is the amount of samples that are being fed into the model as it’s being trained. Smaller batch sizes will take longer to learn, but you might get a better result at the end. Larger batch size will make the model not so good, because it has to learn bigger passages at once, but the model will train faster.

You need to tweak, balance out and find what works for you the best for a model you’re training. Also balancing things out might be helpful for end users with slower GPUs, or even CPUs [although bigger MDX23C models are very difficult to separate on CPU, nearly impossible on the oldest 4 cores and still noticeably slower than MDX-Net models on GPUs like 3050].

“Overfitting is when a model is still improving on training data but not on unseen data, and if training is pushed too far, it can even start to perform worse on unseen data.

It's a more important issue when you want a model that generalises well. [When, e.g., targeting only 909 hi-hats,] you want a model which targets one really precise sound (with some variation, but still 909 hi-hats), so it's not really about generalisation.” jarredou

In terms of training, Anjok currently uses an A6000 48GB and a Ryzen 7 5800, 128GB RAM, 3TB NVMe; you need an SSD for training, as the training process is intensive on a massive amount of data.

MDX23C is noticeably slower for separation than MDX-Net, even for GPUs like 3050.

For 3000 samples of 3-4 minutes length and a batch size of 8, it's going to take at least a month and a half (presumably on an A6000 and MDX-Net). Anjok didn't want to make models too big, having end users with not the best hardware in mind.

(here the interview section ends)

Everything should be trained to min. 200 epochs, and preferably 500 (e.g. MDX-Net HQ_2 was trained to 450 epochs). From e.g. 200 upward, the increase of SDR can be very low for a long time. Experimentally, HQ_4 was trained to epoch 1149, and it slowly but consistently progressed. Generally, some people train models up to 750 or 1000 epochs, but it takes longer.

Somewhere at the beginning of 2023, the UVR dataset consisted of 2K songs (maybe for voc_ft, can’t remember), probably more for MDX23C, and 700 pairs for the BVE model; but in the case of the vocal model, the one trained on 7K songs didn't achieve much better SDR results than the 2K one. It could've been a problem of overfitting, no cutoff for the vocal model, or some other problem with dataset creation we will tackle here later.

The best publicly available archs for training instrumentals/vocals which community already used, are:

BS-Roformer (very demanding), MelBand Roformer (faster variant but cannot surpass MDX23C and BS-Roformer SDR-wise), MDX23C (can produce more residues in instrumentals than v2), MDX-Net 2021 (instrumentals can get a bit muddy even in fullband models), Demucs HT a.k.a. Demucs 4, and good for specific tasks like Karaoke/BVE models or dereverb - vocal-remover (VR).

"You can train any sound you want with any architecture (mdx-net, demucs, spleeter)" ~Kim

But don’t use Spleeter, it’s deprecated since so many archs were released.

I think, based on the example of HQ_3 and the 16.xx models, it’s safe to say that MDX-Net v2 fullband models leave fewer vocal residues in instrumentals than the newer MDX23C arch, but they are also much more muffled; which arch fits best depends on the specific song.

About BS-Roformer: e.g. the model trained by ByteDance didn’t include the other stem (it is obtained by inversion), and initially the results had lots of vocal residues in instrumentals or instruments in the other stem, but it can be alleviated by decreasing the volume of the input file by 3 dB before separation (the best SDR among lots of tested values). Generally the viperx models sound similar to Ripple. The arch itself has the potential for the best SDR among the archs currently evaluated on MVSEP, that’s for sure.

There are other good archs like BSRNN, which is already better than Demucs, and the later released SCNet (we haven’t trained any models on them). It's faster than BS-Roformer, but probably due to arch differences, rather not better.

Viperx trained on far more demanding arch (BS-Roformer) with 8xA100-80GB (half of what ByteDance used), on 4500 songs, and only on epoch 74 they already surpassed all UVR and ZFTurbo’s/MVSEP models, including ensembles/weighted results (more info later below).

Viperx made a private Mel-Roformer model which reached around epoch 3100. He uploaded the SDR results to MVSEP, but they have since been taken down [presumably by viperx himself]. Even then, the result was unfortunately not above 9.7, not much better than MDX23C SDR-wise, despite a probably bigger dataset.

Preparing dataset

Let’s get started.

First, check the -

Repository of stems - section of this document.

There you will find out that most stems are not equal in terms of loudness to contemporary standards, and clip when mixed together.
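If you want to verify that claim on your own stems, here is a minimal sketch (assuming numpy and soundfile are installed; the stem file names are placeholders, and the files are assumed to have equal length and sample rate) that sums a set of stems and reports whether the result clips:

import numpy as np
import soundfile as sf

# Placeholder stem names - replace with your own files (equal length/sample rate assumed)
stems = ["vocals.wav", "drums.wav", "bass.wav", "other.wav"]
mix = None
for path in stems:
    audio, sr = sf.read(path, dtype="float32")
    mix = audio if mix is None else mix + audio
peak_db = 20 * np.log10(np.max(np.abs(mix)) + 1e-12)
print(f"Summed peak: {peak_db:.2f} dBFS", "-> clips" if peak_db > 0 else "-> no clipping")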

About sidechain stem limiting guide by Vinctekan

(there's a clipping bug here - workarounds at the end, be aware;

the sidechain limiting method might not be as beneficial for SDR as we initially thought - IIRC it’s explained in the interesting links section with the given paper)

Other useful links:

https://arxiv.org/pdf/2110.09958.pdf

https://github.com/darius522/dnr-utils/blob/main/config.py

“You can also just utilize this https://github.com/darius522/dnr-utils/blob/main/audio_utils.py

and make a script suited to your own, the one already on this repo is a bit difficult to repurpose.

I just concatenated a lot of sfx music and speech together into 1hr chunks and used audacity tho (set LUFS and mix)

oh and then further split into 60 second chunks after mixing them” - jowoon
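As a rough illustration of only the concatenate-then-split part of that workflow (not the LUFS matching or mixing done in Audacity), here is a small pydub sketch; the "sfx" folder name and the 60-second chunk length are just example values:

import glob
from pydub import AudioSegment

# Concatenate all clips from a folder into one long track, then cut it into 60-second chunks
parts = [AudioSegment.from_file(f) for f in sorted(glob.glob("sfx/*.wav"))]
long_track = sum(parts[1:], parts[0])  # "+" on AudioSegment concatenates, it does not mix
for i, start in enumerate(range(0, len(long_track), 60_000)):  # pydub lengths are in milliseconds
    long_track[start:start + 60_000].export(f"chunk_{i:04d}.wav", format="wav")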

“Aligned dataset is not a requirement to get performing models, so you can create a dataset with FL/Ableton with random beats for each stem. Or using loops (while they contain only 1 type of sound).

You create some tracks with only kick, some others with only snare, other with only...etc...

And you have your training dataset to use with random mixing dataloader (dataset type 2 in ZFTurbo script, one folder with all kick tracks, one folder with all snare tracks, one folder with...etc..).

Then you have to create a validation dataset accordingly to the type of stems used in training, preferably with a kind of music close to the kind you want to separate, or "widespread", with a more general representation of current music, but this mean it has to be way larger.

The only requirements are:

44.1kHz stereo audio.

Lossless (wav/flac)

Only 1 type of sound by file (and no bleed like it would happen with real drums)

Audio length longer than 30s (current algos use mostly ~6/12 second chunks, but better to have some margin and longer tracks so they can be used in future when longer chunks can be handled by archs & hardware).” jarredou
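A quick way to sanity-check those requirements over a dataset folder - a minimal sketch assuming soundfile is installed and that "dataset" is a placeholder path:

from pathlib import Path
import soundfile as sf

# Flag files that break the requirements above (44.1 kHz, stereo, lossless, >= 30 s)
for path in sorted(Path("dataset").rglob("*")):
    if path.suffix.lower() not in (".wav", ".flac"):
        continue
    info = sf.info(str(path))
    problems = []
    if info.samplerate != 44100:
        problems.append(f"samplerate {info.samplerate}")
    if info.channels != 2:
        problems.append(f"{info.channels} channel(s)")
    if info.duration < 30:
        problems.append(f"only {info.duration:.1f} s long")
    if problems:
        print(path, "->", ", ".join(problems))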

“A quite unknown Reaper script to randomize any automatable parameters on any VST/JS/ReaXXX plugin with MIDI notes. It's REALLY a must have for dataset creation, adding sound diversity without hassle.

https://forum.cockos.com/showthread.php?t=234194” -||-

Sidechain stem limiting guide by Vinctekan follows

Hello all, I am here to share the definitive answer to exporting sets of stems with consistent loudness and brickwall-like mixing, for when a manual mixture of pairs/stems is too loud or has been modified.

Even though pairs like this probably won't have to be used for training in the future, it's still going to be super important for evaluating said models, or techniques that any of you may discover in the future.

I discovered this through this video; the details and specifics behind it are explained there if you would like to recreate it manually:

https://www.youtube.com/watch?v=Hv8nENoNvbk&t

This is basically a Side Chain Stem Limiting method that uses FabFilter's Pro-L 2 limiter plugin in the REAPER DAW to mix your stems in a way that when you mix them together in Audacity with the "Mix and Render" option, you get a perfect waveblock like mix, with no clipping and no distortion.

Decided to help you all out and created two REAPER templates where this mixing method is used, so you don't have to make it manually. I'll give out a 4 stem template  and a 2 stem template for vocals, and instrumental that you all can use to recreate the above.

The steps to make the above happen aren't exactly the same as in the video, in addition there are a lot of things you don't need to do (since I have already done it), so here is a step-by-step guide:

Requirements:

1. REAPER (DAW) [in the video, it says you can use any DAW]

2. FabFilter Pro-L 2 limiter plugin (preferably the regular VST version, instead of VST3)

Steps:

1. Open the REAPER Project File of your choice (if you're exporting 2 stems, use the 2 stem version, if you're exporting 4 stems, use the 4 stem template)

2. Drag your stems into the corresponding channels; you also have to drag them into the channels labeled "DUPE":

-Your vocal stem to "VOCALS", and "VOCALS DUPE"

-Your drum stem to "DRUMS", and "DRUMS DUPE"

-Your other stem to "OTHER", and "OTHER DUPE"

-Your bass stem to "BASS", and "BASS DUPE"

-Your instrumental to "INSTRUMENTAL", and "INSTRUMENTAL DUPE" [For the 2 stem template]

3. Check the settings of the limiter to make sure it suits your needs.

-You can set the gain on the left side of the UI, if you think your mix still isn't loud enough.

-I used 8x oversampling as default, if you feel like your CPU can handle more or less, you can adjust it to suit your needs.

-If the exported stems have distortion (by any chance), you can set the limiting mode to SAFE, which prioritizes transients, and keeps unwanted sounds to ABSOLUTE ZERO.

-You can also think about adjusting the attack, release, and channel linking settings if it's not good enough, but I think the settings in the templates are good for any form of limiting.

-Make sure "True Peak Limiting" is always on, if it isn't, distortion might become a factor again in the final results

4. Now it's time to export the stems in the first track folder individually. You can do this by soloing them with the yellow "S" button next to the tracks.

4.5 In REAPER: File>Render... and render. Rinse and repeat for all of the stems.

These are the settings I recommend using, if you plan to further edit the results, and also for retaining the quality of the sources:

-No Tail

-44100hz or 48000hz sample rate

-Channels: Stereo

-Resample mode: r8brain free (highest quality, fast)

-Format: WAV

-WAV bit depth: 24 bit PCM

-Nothing else

Done!

+5. You can check your work by opening Audacity, importing the exported stems, and mixing them together by pressing CTRL+A, going inside: Tracks > Mix > Mix and Render.

If everything is done correctly, you should have a mix of stems which sounds nice to the ears and has absolutely zero clipping. You can check if it clips or not via: View > Show Clipping (on/off). Or you can press CTRL+A, go inside Effects > Volume and Compression > Amplify. If it's correct, the Amplification bar should show 0.0 dB.

Clipping bug workaround

https://cdn.discordapp.com/attachments/708579735583588366/1139206772092051496/Instrumental_Fix.mp4

In addition to Safe Mode, I set the release to the max, and it worked that way, but the dynamics were shite.

More:

https://discord.com/channels/708579735583588363/708579735583588366/1139189181873143869

In conclusion to below: The instrumental clipping wasn't the Fabfilter Pro-L 2 VST's fault, or any sidechain limiter for that matter. This is just how digital audio works, unfortunately.

(And PS. - 32-bit float exporting might prevent clipping).

Trivia

Ugggh, just checked out both Pedalboard, and DawDreamer, from what it looks like: It's not really possible to recreate stem limiting with a RAM loaded mixture as a reference/auxiliary input.

The only 2 remaining possibilities that I am thinking of is using pydub, librosa, scipy or pyo to do it without the use of a DAW.

If that's not possible, then the only option left is to control REAPER with reapy + reascript.

I also think I now understand why the peak amplitude of the instrumental is decreased when you re-mix the acapella back in to the mix:

Since music is basically just about 22000 different sine waves going off at the exact same time with changing amplitudes, the pressure waves of all of these sine waves interfere with each other constantly:

If at any given time the pressure waves of these sine waves have a perfectly aligned value of +1, then they add up together, creating a strong signal

On the flip side: there are times when they cancel each other out, because the amplitudes are opposite (e.g. the 1st being +1 and the 2nd being -1)

I watched a video guide on the Fourier Transform, and the concept is visually demonstrated really well:

https://www.youtube.com/watch?v=spUNpyF58BY&t=50s

In a nutshell: If you take away the vocals, certain frequencies of the instrumental get amplified, because now the vocal isn't there to dampen it/cancel it out.

You can recreate this by taking a brickwall-limited recording of your choice and lowering its level by at least 2 dBFS. Then you can process it through an MDX model, and then compare the peak amplitudes of:

1. the mixture

2. the Instrumental

3. and the separated instrumental and acapella mixed back together

https://github.com/jeonchangbin49/musdb-XL/

From what I can understand, they applied a maximizer to all the mixtures, then calculated the differences of amplitude on a sample-by-sample basis, and applied the difference to all the stems at once.

I think I could do that.
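A rough sketch of that idea (not the musdb-XL code itself): measure the per-sample gain the maximizer applied to the mixture and apply the same gain to each stem. File names are placeholders, stems are assumed to have the same length/channels as the mixture, and near-silent samples are left untouched to avoid dividing by zero:

import numpy as np
import soundfile as sf

mix, sr = sf.read("mixture_raw.wav")            # un-maximized sum of the stems
mix_max, _ = sf.read("mixture_maximized.wav")   # the same mixture after the maximizer

eps = 1e-6
safe_mix = np.where(np.abs(mix) > eps, mix, 1.0)              # avoid dividing by (near-)silence
gain = np.where(np.abs(mix) > eps, mix_max / safe_mix, 1.0)   # per-sample gain of the maximizer

for name in ("vocals", "drums", "bass", "other"):
    stem, _ = sf.read(f"{name}.wav")            # assumed aligned with the mixture
    sf.write(f"{name}_maximized.wav", stem * gain, sr)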

Update

“Even though I have found out that using Pro-C 2 [Sidechain compressor, not a limiter] totally fixes the issue of mixes clipping after turning down just about any stem, the trade-off is that the LUFS [short term] suffers by at least -2 DB“

(older techniques from before the guide above)

jarredou’s guide:

Here are 2 "proof-of-concept" Python dynamic range compressors/limiters I've made recently that work with sidechain and multiple stem inputs:

1st one "pydub_comp_fork.py" is a fork of pydub's dynamic range compressor

(line 79 to change the audio inputs)

You can set attack/release/ratio/threshold settings like any other compressor

---

2nd one "limiter.py" is a fork of this Safety Limiter: https://github.com/nhthn/safety-limiter/

(line 54 to change the audio inputs)

You have "release" and "hold_time" settings.

(no threshold here, you just gain the input)

---

Even if they sounded "ok" with normal settings, the speed performance was not satisfying for the planned use, so I will not develop them further; consider them abandonware. But they can maybe be useful for someone else.

from pydub import AudioSegment, effects  

https://cdn.discordapp.com/attachments/773763762887852072/1167555636272316467/limiter.py

https://cdn.discordapp.com/attachments/773763762887852072/1167555635928367265/pydub_comp_fork.py

You can use this technique to make the loudness of your stems consistent:

https://github.com/jeonchangbin49/musdb-XL to get better results with your model, where usually there's a problem with proper isolation of overly compressed music.

You can also read Aufr33 short guide

on his approach toward this problem (plus more explanations here)

For the problem of inconsistent volume between mixture and stems when a limiter is used, sidechain the mixture to a limiter.

Another option, closer to real-world processing:

* apply (strong) compressor/limiter to individual stems to mimic the mixing process

* and then apply (softer/lighter) compressor/limiter on mixture (with sidechain trick) to mimic the mastering process.

Because if you apply too much limiting on the mixture, it will destroy the sound. 2-stage dynamics processing is more transparent.

The only problem with the technique is that there could be clipping if we invert one or more stems over the mixture.

Unless the AIs work with [32 bit] floating point (not integer!)
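As a very rough, sidechain-free sketch of that 2-stage idea (pydub only; file names and compressor settings are example values, and this does not reproduce the sidechain trick itself):

from pydub import AudioSegment, effects

# Stage 1: strong compression on each stem ("mixing")
stems = [AudioSegment.from_file(f) for f in ("vocals.wav", "drums.wav", "bass.wav", "other.wav")]
compressed = [effects.compress_dynamic_range(s, threshold=-20.0, ratio=6.0) for s in stems]

# Sum the processed stems
mixture = compressed[0]
for s in compressed[1:]:
    mixture = mixture.overlay(s)

# Stage 2: lighter compression on the mixture ("mastering")
mixture = effects.compress_dynamic_range(mixture, threshold=-10.0, ratio=2.0)
mixture.export("mixture_2stage.wav", format="wav")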

Exemplary step-by-step guide

I used side-chain on the 2 stems with source 3 as input.

Somehow I had to set the threshold to -12db (instead of the OG -24db) i applied to the mixture (prolly coz 12 x 2 stems)

Used the same Ratio/Attack/Release settings as used with compressor prior, this time on the side-chain compressor.

Two templates for Reaper. One with the better LSP Sidechain Limiter Stereo, which is a Linux Studio Plugins plugin, and the other with the free ReaComp. The target is around -7.5 integrated LUFS, but anything between 8.5 and 9 will do fine.

https://cdn.discordapp.com/attachments/708595418400817162/1108853386608136252/Pair_Limiter.RPP

https://cdn.discordapp.com/attachments/708595418400817162/1108861182506455170/Pair_Limiter_-_ReaComp.RPP

These may not be the final files. ReaComp struggled more at some point. Consider using e.g. also iZotope RX9/10 Maximiser IRC IV for more transparent results.

E.g. Aufr33 used Voxengo and sometimes ReaXComp in 4 channel mode.

_______

Alternatively, you can experiment with:

KSHMR Chain method by Sam Hocking 

“I too get some residual that doesn't null when comparing Master Bus v Distributed Stem Mastering.”

"The way gainmatch works is it exists on your before processing chain and after processing chain and real-time communicates the difference between the two (does the part knock is showing), so the adjustment is made dynamically as a gain match calc, or you can use it as a target match too. While the loudness adjusting could all be an offline one click process, you would still have to set it all up manually in a DAW. There are some cool duplication chain-style solutions in ProTools that could achieve it more easily, however. My personal favourite is a tool called KSHMR Chain which will work in any Stereo DAW and that allows one plugin instance to be effective on hundreds of tracks at the same time but controlled from one master plugin. This way you could actually adjust every single audio to a common master LUFS dynamically and click export stems and all would be dynamically adjusted at once and offline exported."

https://www.excite-audio.com/kshmr-chain

_________

Short guide of Aufr33 approach

https://cdn.discordapp.com/attachments/900904142669754399/1090876675966894142/sm.png

"If anyone is wondering how I create pairs. Here's what my project looks like in REAPER.

Before the master bus is the limiter plugin, which works with 4 channels. After rendering a pair for one dataset (in this case, for Karaoke), I swap audio items and render the pairs for other datasets: BVE, Strings, etc."

“For training, just make sure that all pairs have a margin of about 0.3 dB. Storing pairs larger than 16 bits can be useful for further editing.” aufr33

___________

Below is just a theory for now and probably wasn't strictly tested on any model yet, but seems promising

Q: Can you not calculate the average dB of the stems and fit one limiting value to them all?

A: the stems are divide-maxed prior

meaning they are made so, that when joined together, they wont clip

but are normalized

so they will be kinda standardized already

based on that, i should be able to just go with one static value for all

Example

https://www.youtube.com/watch?v=JYwslDs-t4k

Q: This is great, I actually used this method before with a few set of stems, before I decided to try sidechain compression/ Voxengo elephant method, but I'm not too sure if I am on the right path. However, I'm pretty sure this only works best for evaluation, if the resulting mixture has consistent loudness like in today's music.

A: Yeah, it's a different approach than compression/voxengo indeed.

But the fact is it scored high in SDR, and the UVR dataset is already compressed/elephanted

I think it's a good combo to use both in the set, a bit like new style tracks and oldies [so to use both approaches inside the dataset]

some tracks in real life are compressed like fuck - some aren't

so it mimics real life situation

Q: if it's true that's awesome, with that the model basically has the potential to work in multiple mixing styles, without having to create new data, or changing it, right?

While still adding new data

A: Yeah, since UVR dataset is already compressed - and then add these one of mines with the more delicate way of mastering (incl. divdemax prior)

BTW, you shouldn't evaluate against data that is also in your training set.

E.g. you can use multisong dataset from MVSEP, and make sure you don't have any of those songs in your dataset.

Q: Does evaluation data matter for the final quality of the model?

A: Absolutely not. It's merely an indication.

The measurement is logarithmic (dB), so a 10 SDR difference corresponds to a 10x difference.

Leading architectures

MDX-Net (2021) architecture

Among public archs, before MDX v3 in 2023, it gave us the best results for various applications like vocal, instrumental and single-instrument models compared to the VR arch. But denoise and dereverb/de-echo models turned out to be better on the VR architecture; the same goes for Karaoke/BVE models, where, contrary to 5/6_HP, the MDX model sometimes does nothing.

In the times of Demucs 3 there was also e.g. a custom UVR instrumental model trained, but it didn’t achieve results as good as the MDX-UVR instrumental models.

At one point a UVR Demucs 4 model was coming up, but the training was canceled due to technical difficulties. It looks like ZFTurbo managed to train his model for the SDX23 challenge and also a vocal model, but “[the] problem is that Demucs4 HT [training is] very slow. I think there is some bug. Bug because sometimes I observe large slow-downs on inference too. And I see high memory bandwidth - something is copying without reason...”

Spleeter might seem to be a good choice, because training is pretty well documented, but it isn’t worth it seeing how these models sound (it was also the very first AI for audio separation at the time, and even the VR arch is better than Spleeter, hence the UVR team started to train on the VR arch with much greater results than Spleeter).

Your starting point to train MDX model would be here:

https://github.com/KimberleyJensen/mdx-net

(visit this repo, it has some instructions and explanations)

Also, ZFTurbo released his training code for various archs here:

https://github.com/ZFTurbo/Music-Source-Separation-Training

"It gives the ability to train 5 types of models: mdx23c, htdemucs, vitlarge23, bs_roformer and mel_band_roformer.

I also put some weights there to not start training from the beginning."

“Set up on Colab is simple:

You only have to create one cell for installation with:

from google.colab import drive

drive.mount('/content/drive')

%cd /content/drive/MyDrive

!git clone https://github.com/ZFTurbo/Music-Source-Separation-Training

%cd /content/drive/MyDrive/Music-Source-Separation-Training

!pip install -r requirements.txt

And a cell to run training:

%cd /content/drive/MyDrive/Music-Source-Separation-Training

!python train.py \

    --model_type mdx23c \

    --config_path 'configs/config_vocals_mdx23c.yaml' \

    --results_path results/ \

    --data_path '/content/drive/MyDrive/TRAININGDATASET' \

    --valid_path '/content/drive/MyDrive/VALIDATIONDATASET' \

    --num_workers 4 \

    --device_ids 0

Don't forget to edit the config file for training parameters

You can also resume training from an existing checkpoint by adding

--start_check_point 'PATH/TO/checkpoint.ckpt' \

parameter to the command in the training cell

the checkpoints are saved in the path provided by the :

--results_path results/ \ parameter of the command, so here, in "results" folder

With ZFTurbo's script, mixtures are needed for validation dataset, to evaluate epoch performance” - jarredou

“it saves every checkpoint as "last_archname.ckpt" (file is overwritten at each epoch), and also save each new best checkpoint on validation as "archname_epxx_SDRscore.ckpt".

It also lowers the learning rate when validation eval is stagnant for a chosen number of epochs (reduceonplateau), you can tweak the values in model config file.”

Q: what does this gradient accumulation step/grad clip mean exactly?

A: “Accumulation lets you train with a larger batch size than what you can fit on your GPU, your real batch size will be batch_size multiplied by gradient_accumulation_steps.

grad_clip clips the gradients, it can stop the exploding gradients problem

Exploding gradients = model ruined basically, i had this problem with Demucs training, but I used weight decay (AdamW) to solve it instead of grad_clip

I don't think grad_clip uses any resources, but accumulation uses a little bit of VRAM, i don't know the exact number” - Kim
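To make the two terms concrete, here is a minimal, self-contained PyTorch sketch (toy model and random data; accum_steps and grad_clip are example values, not recommendations) showing how accumulation multiplies the effective batch size and where gradient clipping happens:

import torch
import torch.nn as nn

# Toy stand-ins: a real source-separation model, dataloader and loss would go here
model = nn.Linear(1024, 1024)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)  # AdamW weight decay, as Kim mentions
loader = [(torch.randn(2, 1024), torch.randn(2, 1024)) for _ in range(8)]
criterion = nn.L1Loss()

accum_steps = 4   # effective batch size = loader batch size * accum_steps
grad_clip = 0.5   # max gradient norm; clipping guards against exploding gradients

optimizer.zero_grad()
for step, (mixture, target) in enumerate(loader):
    loss = criterion(model(mixture), target) / accum_steps  # scale so accumulated grads average out
    loss.backward()                                         # gradients add up across the accumulation window
    if (step + 1) % accum_steps == 0:
        torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
        optimizer.step()
        optimizer.zero_grad()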

Q: Why can’t models have like an auto stop feature or something IDK like if the model stops improving it’ll stop automatically

or overtraining, but IDK if models can overtrain

A: Nothing stopping you from adding a thing to stop training after seeing SDR (or whatever) is stagnant, some people even represent it in a chart

A: That’s easy to get it done in PyTorch, just use EarlyStopping after the overall validation loss computation and the training will stop depending on the patience you set on EarlyStopping…
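A bare-bones version of such an auto-stop, with a hypothetical validate() stand-in returning the per-epoch validation score:

import random

def validate(model):
    # Hypothetical stand-in: in reality this would compute SDR (or loss) on the validation set
    return random.random()

model = object()   # placeholder for a real model
best_sdr, patience, bad_epochs = float("-inf"), 10, 0
for epoch in range(1000):
    # ... one training epoch would run here ...
    val_sdr = validate(model)
    if val_sdr > best_sdr:
        best_sdr, bad_epochs = val_sdr, 0   # improvement: reset the patience counter
    else:
        bad_epochs += 1                     # stagnation: count it
        if bad_epochs >= patience:
            print(f"Stopping at epoch {epoch}: no improvement for {patience} epochs")
            break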

- Colab by jazzpear96 for using ZFTurbo's MSS training script. “I will add inference later on, but for now you can only do the training process with this!”

Q: how can I train a heavier MDX-NET model with a higher frequency cutoff like recent UVR MDX models?

KimberleyJSN:

A: these are the settings used for the latest MDX models you can change them at configs/model/ConvTDFNet_vocals.yaml and configs/experiment/multigpu_vocals.yaml

overlap - 3840

dim_f - 3072

g - 48

n_fft - 7680

These actually seem to be the parameters for the last Kim ft other instrumental model, while e.g. the latest MDX-UVR HQ models without cutoff have n_fft/self n_fft set to 6144.

Alternatively, see this guide:

https://github.com/kuielab/mdx-net/issues/35#issuecomment-1082007368

You also need to be aware of a few additional things:

(Bas Curtiz, and brackets mine)

Few key points:

- If you don't have a monster PC incl. a top range GPU [RTX 3080 min?] (or at work), don't even consider. [smaller models than good inst/vocs with fewer epochs of around 50 might be still in your range though]

- If you don't have money to spend renting a server instead, don't even consider.

- If you aren't tech-savvy, don't even consider.

- [If training] a particular singer, [then do you have] roughly 100 tracks with the original instrumental + vocal?

- IDK, but I don't think that will be enough input to get some great results, you could try though [good models so far have varying genres and artists in the dataset, not just one].

- If you need some help setting it up, you can ask Kimberley (yes, she's the one who created the Kim_vocal_1 model, based on an instrumental model by Anjok) at (@)KimberleyJSN.

[- Training lots of epochs on Colab might be extremely tasking - for free users they currently give only GPU with performance of around RTX 3050 in CUDA]

MDX-Net 2023 (v3) a.k.a. MDX23C

Lots of general optimizations to the quality while keeping decent training and separation performance. Theoretically the go-to architecture now (at least over V1), although currently the SAMI ByteDance reimplementation (see the BS-RoFormer section below) seems more promising. It was already used for models trained by UVR/ZFTurbo. On the same if not better dataset than the previous V1 models, it received not much worse SDR than the V1 arch for narrowband, but with much fuller vocals, although with more bleeding (also in instrumentals). For fullband, SDR was high enough to surpass previous models, but SDR stopped reflecting bleeding on the multisong dataset.

"It doesn't need pairs anymore.

This... is HUGE.

It randomizes chunks of 6 seconds from random instrumental and random vocal to learn from.

In other words, no more need to find the instrumental+vocal for track x.

Just plump in any proper acapella or instrumental u can find.

the downside so far is the validation.

it takes way longer." So you might want to perform evaluation only every e.g. 50 epochs.

Dataset structure looks like

- train folder > folders with pairs > other.wav + vocals.wav

- validation folder > folders with pairs > other.wav + vocals.wav + mixture.wav
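For illustration only (not the actual MDX23C dataloader), a sketch of that random chunk mixing: take a random 6-second chunk from a random vocal file and from a random instrumental file of different songs, and build the mixture as their sum. Files are assumed to be 44.1 kHz stereo and longer than the chunk:

import random
import soundfile as sf

def random_pair(vocal_files, inst_files, chunk_seconds=6, sr=44100):
    # Pick a random chunk from a random vocal file and a random instrumental file (different songs)
    n = chunk_seconds * sr
    voc, _ = sf.read(random.choice(vocal_files), dtype="float32")
    ins, _ = sf.read(random.choice(inst_files), dtype="float32")
    v0 = random.randint(0, len(voc) - n)
    i0 = random.randint(0, len(ins) - n)
    vocals = voc[v0:v0 + n]
    other = ins[i0:i0 + n]
    mixture = vocals + other      # no aligned "real" mixture needed
    return mixture, vocals, other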

Libsndfile can read FLACs when renamed to WAV. It can save a lot of space.

I think in the old MDX-Net we didn't get a model with better SDR than the epoch 464 one, although 496 with lower SDR also had its own unique qualities (though more vocal residues at times). Also, training is frequently ended around epoch 300, and it might not progress SDR-wise for a long time (maybe till 400+).

https://cdn.discordapp.com/attachments/911050124661227542/1136258986677645362/image.png

We may already be hitting a wall SDR-wise: Bas once conducted an experiment training a model on a dataset which included the evaluation dataset itself, and the result was only 0.31 higher than the best current ensemble (although lower parameters were used for separation). Generally, to break through that wall, we may need to utilize multi-GPU setups with a batch size of “16 or even 8”.

“if you did this experiment with batch size 16 or even 8 you would see much better performance i think” Kim

“mhm but that requires multi GPU” Bas

“yeah that is the wall i think” Kim

JFI -

Multi Source Diffusion

https://github.com/gladia-research-group/multi-source-diffusion-models

Some results posted by ByteDance were labelled as “MSS”, but it’s probably not the same arch. In the original MSS paper above, only Slakh2100 was used.

ByteDance probably expanded it further. It was said they had issues with their legal department about making their work public (so they may equally be using unauthorized stems just like us), or they were looking for ways to monetize their new discovery for TikTok, as the company has invested heavily in GPUs lately; so something might happen maybe at the end of the year, perhaps released in their exclusive service (Ripple and CapCut were indeed released later). TBH, it's hard to get a good model using only public datasets; for public archs it's even impossible. They probably know it too, so it’s kind of a grey zone, sadly, and the model trained later for Ripple was probably done from scratch and uses only lossy files for training from now on.

Bytedance was said to train on 500 songs only + MUSDBHQ

BS-RoFormer

The best, but the slowest, of all the archs tested in this doc SDR-wise. Currently considered SOTA (state-of-the-art).

https://github.com/lucidrains/BS-RoFormer

SAMI Bytedance arch reimplementation from their paper done by lucidrains.

32x V100 will require two months of training (most likely for 500 songs only + MUSDBHQ)

“It’s better to have 16-A100-80G”. ViperX trained 4500 songs on 8xA100-80GB; after 4 days he reached epoch 74, and epoch 162 achieved only 0.0467 better SDR for instrumental.

ZFTurbo having 4x A6000 gave up training on it, having to face 1 year of training time.

Later, Mel-Band RoFormer, based on a mel-scale band split, was released; it is faster but achieves worse SDR.

So instead of 16 A100s, it might take like 14 A100s to train in decent time, but at best the SDR will only be on par with the MDX23C and MDX-Net v2 archs, and BS-Roformer will achieve better SDR than Mel-Band. There might be some issue in the Mel-Band Roformer reimplementation, or maybe the paper is lacking something. Only for BS-Roformer did some of the original authors from ByteDance take part in reviewing the reimplementation code made by lucidrains.

On Mel-Band, epoch 3005 took 40 days on 2xA100-40GB.

You can use ZFTurbo code as base for training:

https://github.com/ZFTurbo/Music-Source-Separation-Training

“change the batch size in config tho

I think zfturbo sets the default config suited for a single a6000 (48gb)

and chunksize” joowon

So, to sum up, BS-Roformer is the best publicly available arch SDR-wise for now, although very, very demanding compared to MDX23C, MDX-Net v2 or VR (vocal-remover by tsurumeso); “in bs-roformer they don't do any downsampling or compression”, hence it’s so slow to train.

Viperx trained their own model, using BS-Roformer on 4500+ songs (studio stems, 270+ hours) on 8xA100-80GB, and already on epoch 74 they almost surpassed the sami-bytedance-v.0.1.1 result, achieving 16.9279 for instrumental and 10.6204 for vocals.

With epoch 162, they achieved 16.9746 and 10.6671, which for instrumental is now only a 0.0017 difference in SDR vs the v.0.1.1 result.

Training settings:

chunk_size 7.99s

dim 512 / depth 12

Total params: 159,758,796

batch_size 16

gradient_accumulation_steps: 1

Since epoch 74 there were “Added +126 songs to my dataset”

Training progress:

https://ibb.co/1zfFX82

Source:

https://web.archive.org/web/20240126220641/https://mvsep.com/quality_checker/multisong_leaderboard?sort=instrum

https://web.archive.org/web/20240126220559/https://mvsep.com/quality_checker/entry/5883

It sounds similar to the Ripple model.

"7 days training on 8xA100-80GB": 7\*24\*15.12 (runpod 8xa100 pricing) = $2540.16”

ViperX trained on Dataset type 2, meaning that he had 2 folders:

vocals and other and no augmentations

ViperX trained on the faster Mel-Roformer before; after 40 days on 2xA100-40GB with 4500 songs, at epoch 3005 he achieved only 16.0136 for instrumentals and 9.7061 for vocals, which is on par with the MDX-Net voc_ft model (2021 arch).

“Each epoch [in Mel-Roformer] with 600 steps took approximately 7 to 10 minutes, while epochs with 1000 steps took around 14 to 15 minutes. These are estimated times.

Initially, I suspected that the SDR was not improving due to using only 2 x 40GB A100 GPUs. After conducting tests with 8 x 80GB A100 GPUs, I observed that the SDR remained stagnant, suggesting that the issue might be related to an error in the implementation of the mel-roformer architecture.” More info (copy).

Later, the viperx’ BS-Roformer model was further trained from checkpoint by ZFTurbo, and it surpassed all the previously released models, and even ensembles, at least SDR-wise. Still, it might share some characteristics of BS-Roformer, like occasional muddiness, and filtered sound at times.

Arch papers:

https://arxiv.org/abs/2211.11917

https://arxiv.org/pdf/2310.01809.pdf

More insights:

https://github.com/lucidrains/BS-RoFormer/issues/4#issuecomment-1738604015

https://media.discordapp.net/attachments/708579735583588366/1156700109682262129/image.png?ex=6516952c&is=651543ac&hm=988a5acc32f075988c1701d41c2090321a25955c4ffedd64516e0062fa1002e0

https://cdn.discordapp.com/attachments/708579735583588366/1156700305069707315/image.png?ex=6516955b&is=651543db&hm=bf5737f95f3a93fd3e3a23a679e2ad0031e0feb6c622fbb85eafa053ed483e08

https://media.discordapp.net/attachments/708579735583588366/1156700453585829898/image.png?ex=6516957e&is=651543fe&hm=06ed766b39c3c7f4a8329420a22bcc572e856116a6e1cea030d158c984c46825

"1) Best [BS-Roformer] model with the best parameters can be trained only on A100, and you need several GPUs. The best is use 8. It reduces possibilities of training by enthusiasts. All other models like HTDemucs or MDX23C can be trained on single GPU. Lower parameter BSRoformers don't give the best results. But maybe it's possible. Solution:

We need to try train smaller version which will be equal to current big version. Lower depth, lower dim, less chunk_size. We need to achieve at least batch 4 for single GPU. Having such model can be useful as starting point for finetuning for other tasks/stems.

2) I also noticed a strange problem I didn't solve yet. If you try to finetune version trained on A100 on some cards other than A100 then SDR drops to zero after first epoch. Looks like "Flash attention" has some differences (???).

3) Training is extremely slow. And I noticed BSRoformer more sensitive to some errors in dataset.

[probably for 4090]

chunk_size: 131584

dim: 256

depth: 6

I think these settings can give batch_size > 4

For example, I can't finetune viperx model on my computer with 48GB A6000 because the model is too large.

chunk_size is what affect model size the most, I think. And I saw it's possible to get good result with small chunk size.

I put the table here:

https://github.com/ZFTurbo/Music-Source-Separation-Training/blob/main/docs/bs_roformer_info.md

[see also https://lambdalabs.com/gpu-benchmarks batch size chosen in Metric, fp16, but ZFTurbo said that training on fp32 is also possible]

Q: Can we change model size of existing model and fine tune it? Or it must have been trained from scratch with the same chunk size

A: 1) If you just decrease chunk size, it will work almost the same as with larger (as I remember)

2) If you decrease dim or depth, score will drop very much"

Don't forget: each time you change something in the dataset, you have to delete the metadata_x.pkl file so that a new database is created on training launch, taking the changes into account (it drove me crazy during my first tests when I forgot to delete it)

I've just checked ZFTurbo's code, and for dataset type 2,  the ".wav" extension is still required for the script to find the files (it doesn't work with any other)

“The lightest arch that still performs great seems to be vitlarge

(segm_model in the script, or something like that).

musdb configs are for 4-stem training, vocals ones are for 2-stem.

This [vitlarge] arch is more tricky than the others, even if lighter.” jarredou

Q: What’s the minimum length requirement

A: “Default segment_size in htdemucs config is 11 seconds audio chunks, so your training audio files should be longer or equal to 11 second length.

It can be lower, if there’s no other choice.”

Here, one user is being helped with training hi hat model from scratch using ZFTurbo code on an example of RTX 3060 12GB:

https://discord.com/channels/708579735583588363/708912597239332866/1225994922822467674

Experimental BS-Mamba

git clone https://github.com/mapperize/Music-Source-Separation-Training.git --branch workingmamba

SCNet: Sparse Compression Network

A paper was released: https://arxiv.org/abs/2401.13276

https://cdn.discordapp.com/attachments/708579735583588366/1200415850277130250/image.png

On the same dataset (MUSDB18-HQ), it performs a lot better than Demucs 4 (Demucs HT).

“melband is still sota cause if you increase the feature dimensions and blocks it gets better

you can't scale up scnet cause it isn't a transformer

it's a good cheap alt version tho”

There's an unofficial (not fully finished yet, it seems) implementation of SCNet: https://github.com/amanteur/SCNet-PyTorch

VR architecture (obsolete for instrumentals - bleeding)

(guide by Joe)

Not (really) recommended anymore, except for specific tasks like de-noise, de-reverb, Karaoke or BVE, where MDX V1 wasn't giving results as good.

Q: How do I train my own models?

A:  

Model Training Tutorial

#Requirements:

- Windows 10

- Nvidia GeForce Graphic card (at least 8 GB of VRAM)

- At least 16GB of Ram

- Recommend 1 - 2TB of hard drive

Setup your dataset

1. You need to know...

Attention:

- Although you can train your model with mp3, m4a or flac files, we recommend converting those files to wav.

- For high-resolution audio sources, the samples are reduced to 44.1kHz during conversion.

- If possible, match the playback position and volume of the OnVocal and OffVocal sound sources.

- The dataset requires at least 150 pairs of songs

2. Rename the file...

Attention:

Create "mixtures" folder with vocals / "instruments" folder without vocals

Please separate the sound sources with and without vocals as shown below.

There is also a rule for file names, please make the file names numbers and add "_mix" / "_inst" at the end.

Example:

Instrumental with vocal:

                    D:\dataset\mixtures\001_mix.wav

                    D:\dataset\mixtures\002_mix.wav

                    D:\dataset\mixtures\003_mix.wav

                    .

                    .

                    .

Instrumental only:

                    D:\dataset\instruments\001_inst.wav

                    D:\dataset\instruments\002_inst.wav

                    D:\dataset\instruments\003_inst.wav…

                    .

                    .

                    .

3. Download the vocal-Remover from GitHub

Link: https://github.com/tsurumeso/vocal-remover/releases/

4. Install the program (Use this command down below)...

pip install --no-cache-dir -r requirements.txt

5. Start learning

python train.py --dataset D:\dataset\ --reduction_rate 0.5 --mixup_rate 0.5 --gpu 0

Attention:

If you want to pause, press Ctrl+Shift+C

6. Continue learning

Example:

python train.py --dataset D:\dataset\ --pretrained_model .\models\model_iter(number).pth --reduction_rate 0.5 --mixup_rate 0.5 --gpu 0

MedleyVox

Excellent for training duet/unison and separately main/rest vocals.

The original code is extremely messy and broken at the same time, and dataset is big and hard to obtain. Cyrus was to publish their own repository with fixed code and complete dataset at some point.

The problem with the model trained by Cyrus was the frequency cutoff used during training.

"The ISR_net is basically just a different type of model that attempts to make audio super resolution and then separate it. I only trained it cuz that's what the paper's author did, but it gives worse results than just the normal fine-tuned" ~Cyrus

Apart from training code, there wasn't any model released by the authors. Only result snippets.

https://github.com/JusperLee/TDANet

"I think this arch should worth a try with multiple singer separation, as it's performing quite well on speaker separation, and it seems it can be trained with a custom number of voices (same usual samplerate & mono limitations tho)" jr

MossFormer2 may perform better

___

Might be useful for any training in Colab (by HV, 2021):

 

“function ConnectButton(){

    console.log("Connect pushed");

    document.querySelector("#top-toolbar > colab-connect-button").shadowRoot.querySelector("#connect").click()

}

setInterval(ConnectButton,60000);

 <- enter this on console (not cell)

and keep Colab on foreground.

It's not really good to train in Colab at all, due to its limitations.

If you're training because you want a better model than v5/v4 mgm models, stop it, you won't surpass mgm models with just Colab. However, you could subscribe to https://cloud.google.com/gcp

and watch some YouTube tutorials how to utilise its resources to colab.”

How to get fast GPUs for training

By Bas Curtiz

"Budget" option - 4090, or

Buy A6000, preferably multiple.

Or hire them in the cloud.

Best bang for your buck for now

https://vast.ai/

[https://www.tensordock.com/ similar prices

https://www.runpod.io/]

“the easiest would be colab, if you pay for the compute units the v100 is identical to training with 3090 locally, but colab can get expensive quickly” - becruily

Most in-depth and handy article: https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/

GPU performance chart:

https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?w=1703&ssl=1

tldr; https://nanx.me/gpu/

Cheaper GPUs for training

2x RTX 3090s used are cheaper than a 4090 (but IIRC the performance for multi-GPU doesn’t scale linearly, so it might not be that good a deal)

RTX 3090 24GB (CUDA cores: 10496)

4070 Ti Super 16GB (CUDA cores: 8448)

RTX 4070 Ti 12GB (CUDA cores: 7680, but it's (still) tasking to train on it)

2x GTX 1080s (if dual GPU scaling would be decent enough, it’s not linear)

Not mentioning these:

RTX 2080 Ti (CUDA cores: 4352)

RTX 3060 12GB (CUDA cores: 4864)

GTX 1080 Ti (CUDA cores: 3584)

Sign up for an account at https://sites.google.com/site/vultrfreecredit?pli=1

Get 250 bucks free.

Add 50 bucks.

Now GPU rental is unlocked. Start there and vast.ai and wait for a server that has a6000 x8 for a good price.

But if you have enough time at hand, RTX 4090 is cheaper in the long run.

Depends on your electricity costs, though, which varies per country.

Training and inference performance for GPU per dollar

https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?ssl=1

Be aware that multi GPU configurations don’t scale linearly.

How long does it take to train a model?

"Depends on input and parameters, and architecture.

MDX old version:

5k input (15k actually: inst/mixture/vocals) + 100 validation tracks (300, same deal), fullband, 300 epochs would have taken 3 months on a RTX 4090.

You can speed it up by going multiple GPUs and more memory, therefore:

A6000 (48gb) x 8 was like 14 days.

Damage on 300 epochs: ~700 bucks."

"7 days training of e.g. BS-Roformer on 8xA100-80GB": 7\*24\*15.12 (runpod 8xa100 pricing) = $2540.16”

4 days achieved epoch 74, and on epoch 162 for ~4200/4500 songs

Performance in training

NVIDIA H100>A100 (40/80GB)>RTX 4090>RTX A6000 Ada>L40>RTX 4080>3090 Ti>V100 (32/16GB)>RTX 3090

https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?w=1703&ssl=1

___

“Q: 4070 [8GB] works, but I would only use for testing IMO

A: I’ve trained some convtasnet in the past with really decent times [on 4070 8] (the new Ada Lovelace on 40 series makes faster tensor cores, which kinda compensates the less number of cores compared to 30 series)

A: [4070 8GB] is fine for non transformers.

If mamba blocks are used good it could be fine TBF.

The thing with transformers is that it is really reliant on VRAM.

A: Depends on what's inside the transformer, if it's flashatten then you need Ada.

Mamba has custom kernels, but I'm pretty sure 4090 can run it - what'll be cool is mamba + reversible net, super memory efficient in training, but it ends up being slower per step (around 2x compared to backprop).

I guess in reversible net you can have gigantic batch sizes which kinda circumvent the problem of a slow step speed”

____________________________________________________________________

Volume compensation for MDX models

How to automate calculation of volume compensation value for all MDX models

(results are not perfect and need to be fine-tuned)

by jarredou

So, I have maybe a protocol to find accurate volume compensation:

- Use a short .wav file of just noise (I've used pink noise here) and pass it through the model you wanna evaluate

- Take the resulting audio, the one that will have all the noise in it, and compare it to the original noise with this little python script that will give you the difference in dBTP and the equivalent VC ratio (you'll need to

pip install librosa

 if you don't have it installed already). The results I've found with it are coherent with the ones you've found by ears ! (1.035437 for HQ2 / 1.022099 for KimFT other)

Here's the script :

import numpy as np

import argparse

import librosa

 

 

def Diff_dBTP(file1,file2):

    y1, sr1 = librosa.load(file1)

    y2, sr2 = librosa.load(file2)

    true_peak1 = np.max(np.abs(y1))

    true_peak2 = np.max(np.abs(y2))

    difference = 20 * np.log10(true_peak1 / true_peak2)

    print(f"Diff_dBTP : The difference in true peak between the two audio files is {difference:.6f} dB.")

    ratio = 10 ** (difference / 20)

    print(f"The volume of sound2 is {ratio:.6f} times that of sound1.\n")

 

if __name__ == "__main__":

    parser = argparse.ArgumentParser(description="Find volume difference of two audio files.")

    parser.add_argument("file1", help="Path to original audio file")

    parser.add_argument("file2", help="Path to extracted audio file")

    args = parser.parse_args()

   

    Diff_dBTP(args.file1, args.file2)

Volume compensation values for various models (in reality they may differ +/- e.g. by 0.00xxxx, but maybe not much more)

All values according to the script made by jarredou

(All default but Spectral Inversion - Off; Denoise Output: On; the latter shouldn't affect the results if turned off):

   -   Kim Vocal_1   -   1.012819

   -   Kim Vocal 2 - 1.009

   -   voc_ft - 1.021  

   -   Kim ft other - 1.020 (Bas' fine-tuned and SDR-validated)

   -   UVR-MDX-NET 1    -   1.017194

   -   UVR-MDX-NET Inst 2    -   1.037748

   -   UVR-MDX-NET Inst 3    -   1.043115

   -   UVR-MDX-NET Inst HQ 1    -   1.052259

   -   UVR-MDX-NET Inst HQ 2    -   1.047476

   -   UVR-MDX-NET Inst Main    -   1.037812 (actually it turned out to be 1.025)

   -   UVR-MDX-NET Main    -   1.002124

   -   UVR-MDX-NET-Inst_full_292    -   1.056003

   -   UVR-MDX-NET_Inst_82_beta    -   1.088610

   -   UVR-MDX-NET_Inst_90_beta    -   1.151219 (wtf)

   -   UVR-MDX-NET_Main_340    -   1.002742

   -   UVR-MDX-NET_Main_406    -   1.001850

   -   UVR-MDX-NET_Main_427    -   1.002091

   -   UVR-MDX-NET_Main_438    -   1.001799

   -   UVR_MDXNET_9482    -   1.007059

"denoise is just processing twice with the second try inverted, after separation reinverted, to amplify the result, but remove the noise introduced by MDX, and then deamplified by 6dbs, so it still the same volume, just without MDX noise.

Basically HV noise removal trick"
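A sketch of that trick, with a hypothetical separate() callable standing in for the MDX model inference:

def denoised_separation(separate, mixture):
    # Run the model twice: once normally, once on the polarity-inverted input.
    # Re-invert the second result so the separated signal lines up again,
    # then average: the signal stays at the same level, the (random) MDX noise largely cancels.
    a = separate(mixture)
    b = -separate(-mixture)
    return (a + b) / 2.0   # averaging = summing (+6 dB) then de-amplifying by 6 dB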

UVR hashes decoded by Bas Curtiz

https://github.com/Anjok07/ultimatevocalremovergui/blob/master/models/MDX_Net_Models/model_data/model_data.json

the link with hashes contains the MDX models' parameters

The above probably still doesn't contain all the models added in updates, e.g. the Foxy model, but there are only 4-5 combinations of settings so far.

Be aware that all MDX model parameters in UVR consist of these combinations:

-Fullband:

self.n_fft = 6144 dim_f = 3072 dim_t = 8

-kim vocal 1/2, kim ft other (inst), inst 1-3 (415-464), 427:

self.n_fft = 7680 dim_f = 3072 dim_t = 8

-496, Karaoke, 9.X (NET-X)

self.n_fft = 6144 dim_f = 2048 dim_t = 8 (and 9 kuielab_a_vocals only)

-Karaoke 2

self.n_fft = 5120 dim_f = 2048 dim_t = 8

-De-reverb by FoxJoy

self.n_fft = 7680 dim_f = 3072 dim_t = 9

Names decode

full_band_inst_model_new_epoch_309.onnx fea6de84f625c6413d0ee920dd3ec32f

full_band_inst_model_new_epoch_337.onnx 4bc04e98b6cf5efeb581a0f382b60499

kim_ft_other.onnx b6bccda408a436db8500083ef3491e8b

Kim_Vocal_1.onnx 73492b58195c3b52d34590d5474452f6

Kim_vocal_2.onnx 970b3f9492014d18fefeedfe4773cb42

kuielab_a_bass.onnx 6703e39f36f18aa7855ee1047765621d

kuielab_a_drums.onnx dc41ede5961d50f277eb846db17f5319

kuielab_a_other.onnx 26d308f91f3423a67dc69a6d12a8793d

kuielab_a_vocals.onnx 5f6483271e1efb9bfb59e4a3e6d4d098

kuielab_b_bass.onnx c3b29bdce8c4fa17ec609e16220330ab

kuielab_b_drums.onnx 4910e7827f335048bdac11fa967772f9

kuielab_b_other.onnx 65ab5919372a128e4167f5e01a8fda85

kuielab_b_vocals.onnx 6b31de20e84392859a3d09d43f089515

Reverb_HQ_By_FoxJoy.onnx cd5b2989ad863f116c855db1dfe24e39

UVR-MDX-NET-Inst_1.onnx 2cdd429caac38f0194b133884160f2c6

UVR-MDX-NET-Inst_2.onnx ceed671467c1f64ebdfac8a2490d0d52

UVR-MDX-NET-Inst_3.onnx e5572e58abf111f80d8241d2e44e7fa4

UVR-MDX-NET-Inst_full_292.onnx b06327a00d5e5fbc7d96e1781bbdb596

UVR-MDX-NET-Inst_full_338.onnx 13819d85cad1c9d659343ba09ccf77a8

UVR-MDX-NET-Inst_full_382.onnx 734b716c193493a49f8f1ad548451c48

UVR-MDX-NET-Inst_full_386.onnx 2e4fcd9ec905f35d2b8216933b5009ff

UVR-MDX-NET-Inst_full_403.onnx 94ff780b977d3ca07c7a343dab2e25dd

UVR-MDX-NET-Inst_HQ_1.onnx 291c2049608edb52648b96e27eb80e95

UVR-MDX-NET-Inst_HQ_2.onnx cc63408db3d80b4d85b0287d1d7c9632

UVR-MDX-NET-Inst_Main.onnx 1c56ec0224f1d559c42fd6fd2a67b154

UVR-MDX-NET_Inst_187_beta.onnx d2a1376f310e4f7fa37fb9b5774eb701

UVR-MDX-NET_Inst_82_beta.onnx f2df6d6863d8f435436d8b561594ff49

UVR-MDX-NET_Inst_90_beta.onnx 488b3e6f8bd3717d9d7c428476be2d75

UVR-MDX-NET_Main_340.onnx 867595e9de46f6ab699008295df62798

UVR-MDX-NET_Main_390.onnx 398580b6d5d973af3120df54cee6759d

UVR-MDX-NET_Main_406.onnx 5d343409ef0df48c7d78cce9f0106781

UVR-MDX-NET_Main_427.onnx b33d9b3950b6cbf5fe90a32608924700

UVR-MDX-NET_Main_438.onnx e7324c873b1f615c35c1967f912db92a

UVR_MDXNET_1_9703.onnx a3cd63058945e777505c01d2507daf37

UVR_MDXNET_2_9682.onnx d94058f8c7f1fae4164868ae8ae66b20

UVR_MDXNET_3_9662.onnx d7bff498db9324db933d913388cba6be

UVR_MDXNET_9482.onnx 0ddfc0eb5792638ad5dc27850236c246

UVR_MDXNET_KARA.onnx 2f5501189a2f6db6349916fabe8c90de

UVR_MDXNET_KARA_2.onnx 1d64a6d2c30f709b8c9b4ce1366d96ee

UVR_MDXNET_Main.onnx 53c4baf4d12c3e6c3831bb8f5b532b93

VR dereverb models decode

UVR-De-Echo-Normal.pth = f200a145434efc7dcf0cd093f517ed52

UVR-De-Echo-Aggressive.pth = 6857b2972e1754913aad0c9a1678c753

UVR-DeEcho-DeReverb.pth = 0fb9249ffe4ffc38d7b16243f394c0ff

So they all use the "4band_v3.json" config file (from here)

More thorough chart by David Duchamp (a.k.a. Captain FLAM):

https://docs.google.com/spreadsheets/d/1XZAyKmgJkKE3fVKrJm9pBGIXIcSQC3GWYYI90b_ul1M

Local SDR testing script

https://cdn.discordapp.com/attachments/708579735583588366/1123414662403334215/sdr.py by Dill

https://cdn.discordapp.com/attachments/708579735583588366/1123429514798694420/sdrgui.py GUI by zmis (but it scores a bit lower for some reason)

Here's a handy little python script I made using the help of Ai that can calculate the SDR of a track based off of the actual instrumental or vocal of the song.

You can do python sdr.py --help for an explanation on how to use the script.

You just need numpy and scipy for it to work, and python ofc!

I'm not sure if you would like to pin this or not, but I've been using this script to help me improve my separation methods.
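If the linked scripts are gone, a minimal global-SDR sketch looks like this (numpy + soundfile; the real scripts may compute it differently, e.g. per-chunk, and file names are placeholders):

import numpy as np
import soundfile as sf

def sdr(reference_path, estimate_path):
    # Global SDR in dB between a ground-truth stem and a separated result (files assumed aligned)
    ref, _ = sf.read(reference_path)
    est, _ = sf.read(estimate_path)
    n = min(len(ref), len(est))          # trim to common length
    ref, est = ref[:n], est[:n]
    num = np.sum(ref ** 2)
    den = np.sum((ref - est) ** 2) + 1e-10
    return 10 * np.log10(num / den + 1e-10)

# Example: print(sdr("true_instrumental.wav", "separated_instrumental.wav"))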

https://github.com/ZFTurbo/Audio-separation-models-checker/tree/main

Based on MUSDB18-HQ dataset

Best ensemble finder for a song script

https://cdn.discordapp.com/attachments/708579735583588366/1123710507057168384/best_ensemble_finder_fast_v3.py

Currently, this optimized version can find the best combo of nine 3-minute audio files in about 2 minutes and 40 seconds in Colab.

Universal function to make different types of ensembles by ZFTurbo

https://cdn.discordapp.com/attachments/911050124661227542/1192220574982881320/ensemble.py

“In my experiments SDR for avg_wave always the max.”
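The avg_wave idea itself is simple - a sample-wise average of aligned separation results of the same song. A minimal sketch (not ZFTurbo's script; file names are placeholders and results are assumed time-aligned):

import numpy as np
import soundfile as sf

def avg_wave(paths, out_path="ensemble_avg.wav"):
    waves, sr = [], None
    for p in paths:
        w, sr = sf.read(p, dtype="float32")
        waves.append(w)
    n = min(len(w) for w in waves)                 # trim to the shortest result
    mix = np.mean([w[:n] for w in waves], axis=0)  # plain sample-wise average
    sf.write(out_path, mix, sr)

# avg_wave(["inst_model_A.wav", "inst_model_B.wav", "inst_model_C.wav"])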

Voice Cloning

“RVC and some of its forks (Applio, Mangio, etc) are genuine free, open source ones for inference and training. For realtime voice changer that uses RVC models, there's w-okada: https://rentry.co/VoiceChangerGuide” no guide for Linux though

https://www.tryreplay.io/

“Url downloads, local files, massive database of models, both huggingface and weightsgg, in built separation models, options to skip that part if you have vocals, ability to use multiple ai models for one particular result, and the option to either merge or just get multiple results at the end, plus whatever else, de-reverb and stuff” it has voc_ft vocal model from UVR5.

“even my old laptop still can inferencing using applio

i3 3217u 1.8ghz

intel hd 4000”

And you’re probably aware already that RVC Colabs to train voice cloned models are banned.

Interesting links for research

https://github.com/facebookresearch/AudioMAE

https://arxiv.org/abs/2310.02802

https://github.com/pbelcak/fastfeedforward

https://github.com/corl-team/rebased

https://github.com/bowang-lab/U-Mamba/tree/main

https://www.unite.ai/mamba-redefining-sequence-modeling-and-outforming-transformers-architecture/

https://github.com/state-spaces/mamba

https://github.com/apapiu/mamba_small_bench

(“this one is actually exciting because it runs faster and leaner than transformers and promises to surpass them in quality

>What makes Mamba truly unique is its departure from traditional attention and MLP blocks. This simplification leads to a lighter, faster model that scales linearly with the sequence length – a feat unmatched by its predecessors. Mamba has demonstrated superior performance in various domains, including language, audio, and genomics...”)

“mamba is real fucking complicated. like reaaaally complicated (...) hyper params do seem hard to adjust tho.”

“mamba is kinda sick but its early days in the SSM space, so lots of the tricks that you can do with transformers you cant do with SSMs because they havent become mainstream

but mamba has two very cool properties

it has positional information by its nature - i.e. no extra computation is required to embed positional info

linear time complexity - so in audio it's super useful because audio data hits the O(n²) complexity (if the chunk size is large enough)”

“i personally don't trust any of the mamba papers - they either say how mamba is the best thing since sliced bread or worse than 3 year old transformers

although the paper I read for that was questionable”

https://www.harvard.edu/kempner-institute/2024/02/02/repeat-after-me-transformers-are-better-than-state-space-models-at-copying-2/

“they don't even replace the mask estimator thing in bs-roformer with mamba”

https://arxiv.org/abs/2404.02063

https://arxiv.org/abs/2401.09417

https://github.com/hustvl/Vim

https://github.com/RobinBruegger/RevTorch

https://huggingface.co/blog/rwkv

Why does music source separation benefit from cacophony?

https://arxiv.org/abs/2402.18407

It makes our side chain stem limiting thing irrelevant.

“As the paper demonstrates that using only randomly mixed stems is more efficient for training than using only real paired stems (from the same song, in sync), in that random-mix config the individual stems will never be set against the mixture that was used to limit them, making that process irrelevant” jarredou

MDX23C training code by ZFTurbo has the mix randomization feature built-in - dataset type 1 is random mix, dataset type 4 is the real mix (aligned).

“I think now after reading that paper that once you have a dataset large enough and use random mixing with some simple augmentations like gain changes/channel swap/phase inversion/EQ/soft-clipping (tanh), you're good to go and can forget more resource-intensive augmentations like pitch shifting and time stretching, which can really slow down training. Maybe just reverbs can still be useful, even if they need more resources than simple math processing.

So really go fast/minimal on pre-processing in fact.” -||-
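A rough sketch of that random-mix idea with the cheap augmentations mentioned above (gain change, channel swap, phase inversion, tanh soft-clipping); the pools and loader are placeholders, not ZFTurbo's actual dataset code:

import random
import numpy as np

def cheap_augment(x):
    # x: (channels, samples) float32
    if random.random() < 0.5:
        x = x * random.uniform(0.5, 1.5)     # gain change
    if x.shape[0] == 2 and random.random() < 0.5:
        x = x[::-1]                          # channel swap
    if random.random() < 0.5:
        x = -x                               # phase inversion
    if random.random() < 0.2:
        x = np.tanh(x)                       # soft clipping
    return x

def random_mix_pair(vocal_pool, instrumental_pool, load_chunk):
    # Vocals and instrumental come from *different* songs and are simply summed
    voc = cheap_augment(load_chunk(random.choice(vocal_pool)))
    ins = cheap_augment(load_chunk(random.choice(instrumental_pool)))
    return voc + ins, (voc, ins)             # mixture (input), stems (targets)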

“The only good paper about SDR I have in mind is "SDR - half-baked or well done?" https://arxiv.org/abs/1811.02508

from 2018, but maybe there are some more recent ones on the subject

There's also that thesis that is interesting but maybe also outdated now (as based on the old OpenUnmix), about loss functions effect on source separation learning: https://discord.com/channels/708579735583588363/911050124661227542/1191134740284190750

My go-to URLs to follow publications:

https://arxiv.org/list/cs.SD/pastweek?show=2000

(weekly list)

https://arxiv.org/list/eess.AS/pastweek?show=200

(weekly list)

I've registered to https://www.scholar-inbox.com

recently (it's free), which can be handy (but lots of duplicate if you follow already arxiv publications above)

and also: https://twitter.com/csteinmetz1/

for sure”

Griffin: Mixing Gated Linear Recurrences with Local Attention for E…

https://arxiv.org/abs/2402.19427

new tweak to a modern transformer architecture improves performance

https://github.com/IAHispano/gdown

If you have some issues with downloading files from GDrive on Colab

Q: Can you recommend something to automate adding effects (and if possible randomized)

A: https://pytorch.org/audio/stable/generated/torchaudio.io.AudioEffector.html#torchaudio.io.AudioEffector

maybe even http://ccrma.stanford.edu/planetccrma/man/man1/sox.1.html

https://github.com/iver56/audiomentations

 (which uses random parameters by design)

https://github.com/spotify/pedalboard

(take a look at the augmentations in ZFTurbo script (dataset.py), it uses both libs with randomized parameters also for pedalboard)
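For example, audiomentations applies each effect with a probability and draws random parameters from the ranges you set - this is essentially the library's own README example, with placeholder audio:

import numpy as np
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift

augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
    Shift(p=0.5),
])

samples = np.random.uniform(-0.5, 0.5, 44100).astype(np.float32)   # 1 s of placeholder audio
augmented = augment(samples=samples, sample_rate=44100)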

Q: What Transformer and Mamba is

A: https://www.youtube.com/watch?v=XfpMkf4rD6E

https://www.youtube.com/watch?v=9dSkvxS2EB0

Side-note: with a bit of tweaking, ZFTurbo's training script can be edited to train a reverb model, generating randomized reverb on the fly with pedalboard.Reverb (https://spotify.github.io/pedalboard/reference/pedalboard.html#pedalboard.Reverb) and using reverb IRs to have more diversity
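A minimal sketch of generating such randomized reverb on the fly with pedalboard (the parameter ranges are just examples, not anything from an actual training script):

import random
import numpy as np
from pedalboard import Pedalboard, Reverb

def random_reverb(dry, sample_rate=44100):
    # dry: (channels, samples) float32; returns the reverberated version
    board = Pedalboard([Reverb(
        room_size=random.uniform(0.1, 0.9),
        damping=random.uniform(0.1, 0.9),
        wet_level=random.uniform(0.2, 0.6),
        dry_level=random.uniform(0.4, 0.8),
        width=random.uniform(0.5, 1.0),
    )])
    return board(dry, sample_rate)

dry = np.zeros((2, 44100), dtype=np.float32)    # placeholder stereo clip
wet = random_reverb(dry)                        # (wet, dry) pairs would be the training data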

https://openaccess.thecvf.com/content/CVPR2022/papers/Mangalam_Reversible_Vision_Transformers_CVPR_2022_paper.pdf

https://arxiv.org/abs/2306.09342

Fork of ZFTurbo training code, but I don’t know with what changes (by frazer):

https://github.com/fmac2000/Music-Source-Separation-Training-Models/tree/revnet

Another (by joowon)

https://github.com/mapperize/Music-Source-Separation-Training

Another (not so new) paper with a maybe interesting concept for improving separation quality that could perhaps be reproduced:

VocEmb4SVS: Improving Singing Voice Separation

with Vocal Embeddings

http://www.apsipa.org/proceedings/2022/APSIPA%202022/TuAM1-7/1570836845.pdf

There's also a demo site for the 4-stem version, but I haven't found any publication/code https://cathy0610.github.io/2023-SrcEmb4MSS/

"Demucs employs a combination of L1 loss and deep clustering loss to optimize source separation." (https://github.com/facebookresearch/demucs/issues/458) I've found this paper few months ago, its findings are based only on openunmix arch, the observed behaviour could be different with other archs, but it's still very interesting: https://arxiv.org/abs/2202.07968

That's not really what's used in the MDX23 code made by ZFTurbo:

“By default my code uses the loss proposed by the KUIELab team. They use MSE but skip the sample with the worst loss (to avoid problems in the dataset). MSE loss can be used directly with the --mse_loss argument.

Also, auraloss is included in my code too. I experimented with it, but it didn't give any additional gain compared to the standard loss function.”
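A rough sketch of that "skip the sample with the worst loss" idea - per-item MSE where the highest-loss item in the batch is dropped (illustrative only; see ZFTurbo's repo for the actual implementation):

import torch

def mse_skip_worst(pred, target):
    # pred/target: (batch, channels, samples)
    per_sample = ((pred - target) ** 2).mean(dim=(1, 2))   # MSE per batch item
    if per_sample.numel() > 1:
        keep = torch.ones_like(per_sample, dtype=torch.bool)
        keep[per_sample.argmax()] = False                  # drop the worst item
        per_sample = per_sample[keep]
    return per_sample.mean()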

Useful lib to experiment with different loss functions:

https://github.com/csteinmetz1/auraloss

I've seen that paper in my feed last month, doing real-time source separation (23ms latency): https://arxiv.org/abs/2402.17701

Mamba: Linear-Time Sequence Modeling with Selective State Spaces

https://www.youtube.com/watch?v=9dSkvxS2EB0

Vocal restoration research

https://github.com/facebookresearch/AudioMAE

https://carlosholivan.github.io/demos/audio-restoration-2023.html

https://google.github.io/df-conformer/miipher/

https://arxiv.org/abs/2403.05393

https://github.com/vikastokala/bcctn

https://github.com/espnet/espnet

https://github.com/manosplitsis/hifi-gan-bwe/tree/train_with_music

“new vocoder replacing hifi-gan, vocos, bigvgan etc

compared to other ones, high freq smearing practically doesn't occur”

EVA-GAN - another breakthrough over HiFi-GAN

https://arxiv.org/abs/2402.00892

https://arxiv.org/pdf/2402.00892.pdf

Anjok’s interview on YT

TL;DW: UVR’s documentary + training, archs and demudder explained

Anjok is the developer of Ultimate Vocal Remover 5 (UVR5 GUI).

Back when Spleeter was still a thing, Anjok found a VR arch made by a Japanese developer, tsurumeso, and got better results with it than with Spleeter. He started training his own model on a laptop 1060 6GB with 100 or 150 pairs and the absolute minimum parameters, and it turned out to be a better model than tsurumeso's. Later he transitioned to a faster GPU (though probably still before the 3090).

Anjok wanted a GUI for VR, found BoskanDilan on Fiverr and simply contracted him, paying him to build the foundations of what UVR is today. BoskanDilan turned out to be a very good and talented coder.

They put the work on GitHub, and Aufr33 contacted Anjok with ideas on how the models could be improved, etc.

Then BoskanDilan left in mid-2021 for personal reasons. The GUI work was taken over by Anjok, who had been mentored by BoskanDilan to improve his understanding of the code. Anjok started working on UVR exclusively, spending 10 hours a day on it in 2022.

He decided to make a simple one-package installer, as he received lots of issues on GitHub from people not knowing how to install it. He also re-coded UVR to make the code easier to maintain. Then Bas Curtiz helped Anjok with the design aspects of UVR, e.g. designed the new logo, gave some advice, and provided a good amount of feedback from a UVR user's perspective. The early-2022 phase of UVR development took a lot of advice from early users of UVR.

In May 2022 the first installer was released, to make UVR more accessible without e.g. installing Python or other dependencies and without the specialized programming knowledge needed to set up a proper environment.

Anjok intended UVR to be a Swiss army tool - to contain everything you need for separation, including models made by the community (e.g. dereverb/denoise/de-echo).

Anjok was also in charge of introducing archs other than VR into UVR, being the only one behind the process, whereas projects of that scale are normally handled by bigger teams, where e.g. different archs could be coded into UVR by different developers. It was a stressful period, because Anjok wanted to ship software free of bugs without fully relying on the community for bug reporting.

Then the Mac version came out, with M1/M2/M3 support for faster GPU acceleration. Anjok found a part of the code in the Demucs repo that made it easier to port UVR to Macs, and it is used by every model. The music community is pretty Mac-centered, so he devoted a considerable amount of time to making UVR work reliably on Macs too.

In the new UVR version a demudder is planned to be introduced (described later), and possibly translations.

Anjok is currently training a new model, coming in several weeks.

It's intended to be a little smaller so it's not so resource-intensive, but also better than the current best MDX-Net model.

Update 01.03.24

“I'm going to allow HQ4 to continue training beyond 1500+ epochs as an experiment (it's currently at 1200), and interestingly, the SDR has been steadily increasing. It has significantly surpassed HQ3 in terms of SDR and listening tests, and it also outperformed MDXC23 in listening tests, though not in SDR (yet!). The most recent evaluation on the multi-dataset showed a score of 15.85, using the default settings. Clearly, there's a limit to how much further training can enhance performance, but up to this point, improvements are still being observed. This model has been in training since October! I'm chipping away at the next GUI update as well, and the demudder will be in it.”

The model was released, with HQ_5 already scheduled for the following month(s).

The archs in UVR and their technicalities summarized

VR

VR works on spectrograms - it converts the audio into FFT (STFT) spectrograms.

VR uses only magnitude spectrograms, not phase.

Phase represents timing where the data is, while magnitude represents the intensity of each frequency.

Phase is much harder to predict.

Actually, VR uses the original phase from the mixture and saves it during the process, "and it just does the magnitude".

That's the reason why VR tends to have more artefacts. The smearing in VR instrumentals happens because the phase from the mixture is still in there.
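In other words (a rough illustrative sketch, not VR's actual code): the network only predicts a magnitude mask, and the output is rebuilt using the mixture's own phase:

import numpy as np
import librosa

sr = 44100
mixture = np.random.uniform(-0.5, 0.5, sr * 3).astype(np.float32)   # placeholder 3 s mixture
spec = librosa.stft(mixture, n_fft=2048, hop_length=512)
magnitude, phase = np.abs(spec), np.angle(spec)

predicted_mask = np.ones_like(magnitude)    # stand-in for the network's magnitude mask

est_magnitude = magnitude * predicted_mask
# Reconstruction reuses the mixture phase - which is why residues/smearing can remain
estimate = librosa.istft(est_magnitude * np.exp(1j * phase), hop_length=512)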

Aufr33 later introduced 4 bands support for UVR.

Let's say the first band, between 0-700 Hz, gets one resolution, while the other frequency ranges get different ones. E.g. knowing that vocals sit in a specific frequency range, you can optimize it further.

That feature made UVR and VR arch much better.

Later they introduced -

Ensembling

So a way to use multiple models to potentially get better results.

The three ways of ensembling:

avg - gets the average of vocals/instrumentals

max - takes the maximum result for each stem, e.g. for vocals you'll get the heaviest-weighted vocal from each model, and the same goes for the instrumental, giving a bit cleaner results, but more artefacts (a rough sketch follows after this list)

min - takes the minimum result of each stem (the opposite of max)
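A rough sketch of a max-type ensemble over magnitude spectrograms (illustrative only, not UVR's exact implementation; the inputs are placeholders for two models' results):

import numpy as np
import librosa

sr, hop = 44100, 512
stems = [np.random.uniform(-0.5, 0.5, sr * 3).astype(np.float32) for _ in range(2)]   # placeholder results

specs = [librosa.stft(s, n_fft=2048, hop_length=hop) for s in stems]
mags = np.stack([np.abs(s) for s in specs])
winner = mags.argmax(axis=0)                                   # per TF-bin: which result is loudest
ensemble = np.take_along_axis(np.stack(specs), winner[None], axis=0)[0]
out = librosa.istft(ensemble, hop_length=hop)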

MDX-Net

Uses the full spectrogram, with both phase and magnitude.

The tradeoff is muddier results, but a more natural, cleaner sound (without VR's smearing artefacts).

Training

Anjok has separated nearly every genre you can think of, and stated that the hardest material to separate is metal and vocal-centered mixes. Also, if the instrumental has a lot of noise, e.g. distorted guitars, it will come out muddier.

MDX-Net was the arch, addressing lots of VR issues in its core.

Tracks from the 70s-80s can separate well. The 50s-60s are harder, e.g. recordings made in mono. The early stereo era gets a little better.

A model can only be as good as its dataset.

There was a lot of work scraping it from the internet.

Aufr33 was the mastermind behind Karaoke model and its dataset.

The Demucs model wasn't as successful, as it was probably meant more for multi-stem separation, and MDX-Net gave better results for 2 stems.

Training details covered in this interview can be found at the top of the Training models guide section of the doc.

The biggest issue in terms of archs is phase, and it is the source of muddiness. Currently, in audio separation there's no great way to calculate phase in a model, as the phase spectrogram isn't as obvious as the magnitude spectrogram.

You take the vocal out of a heavy rock track, but the process is not perfect, so it will take some part of the instrumental with it. Even if you don't hear instrumental in the vocals, there's still instrumental data in there, in the phase of that vocal track.

At the end of the day, source separation is prediction. It's predicting where it thinks the target is, but there will always be some imperfections, e.g. the muddier sound you hear in tracks with more noise, like metal tracks.

Anjok emphasizes the (current) lack of correlation between SDR and perceived quality - a bigger SDR metric doesn't necessarily mean better. He tried some top-of-the-SDR-chart results before and wasn't quite happy with them.

Because phase is a big part of the issue, now the new upcoming -

Demudder

feature in UVR comes into play (it was also explained before on the server by Anjok - if something is not clear, try to find his messages there)

It uses a lot of phasing tricks and processes the track twice. The first pass takes the instrumental from the first go-around and compares it against the original mixture. It chops the mixture into 3-second chunks and iterates over that list of chunks; for each segment, it cuts out where that segment sits in the instrumental and searches the generated instrumental for the most similar events that aren't at the exact same position. When it finds such similar events, it phase-inverts that piece of instrumental.

(56:30) If the result is too loud, i.e. the volume doesn't fall below a certain dB threshold, it doesn't cancel out and no phase inversion is applied; if it is below the threshold, it phases that part. It then stitches together a new mixture that is partly phase-cancelled against the original instrumental output, reprocesses that stitched-together mixture through a second pass, and phase-inverts the resulting vocal against the original mixture. What you end up with is similar parts from elsewhere in the track filling in the spectral holes.
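A toy illustration of the cancellation test described above - phase-invert a candidate piece against a segment and check whether the residual falls below a dB threshold (purely illustrative, not the demudder's code):

import numpy as np

def cancels_below_threshold(segment, candidate, threshold_db=-30.0):
    # True if summing the segment with the inverted candidate leaves a quiet residual
    residual = segment - candidate
    rms = np.sqrt(np.mean(residual ** 2)) + 1e-12
    return 20 * np.log10(rms) < threshold_db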

Sam remarks that he finds some similarity with how iZotope Imager probably works.

Anjok says: I'm trying to get a similar part, but also to take it and phase it against that segment. Because it's not the exact same part as the segment, it's not going to be a perfect phase cancellation - that would require the original vocal output.

So it's kind of still finding the bit of instrumental that is still in the vocal.

Sam remarks on frequent situations where separation leads to e.g. decreased hi-hat levels in the instrumental, referring to the information the separated vocal stem can carry away. It's part of the muddiness Anjok tried to address with this feature.

Anjok didn't want to compromise vocal quality, and in some cases it makes the vocal better too, but it also depends on how the track was originally mixed. If it's an analog track recorded in one session, or even a live track, it won't work as well. The problem is also with e.g. a 10-minute track, where the demudder won't find phase similarities as effectively. It will work best on music made with samples. If the track is digital, it is more likely to work better.

Anjok is currently working on making it work for all tracks.

The more he works on it, the more breakthroughs are made, but due to his day job, he had less time to work on it lately.

Anjok gives his appreciation to the group of very talented developers who made the MDX-Net arch at Korea University. It's his favourite network. He's a big fan of Woosung Choi's work.

_____________________

For help and discussion, visit our Audio Separation Discord: https://discord.gg/ZPtAU5R6rP

For inst/voc separation, try out Colabs: MDX23 (2-4 stems) | MDX-Net | VR | Demucs 4 (2-6) | GSEP (2-6) | BS-Roformer (2-4)