DISCLAIMER: What’s written here is based on the best I could do from experience and like, random articles and documents I have found or have heard about. Some of these range from literal memes not meant to be taken seriously such as the “crunch method”, some come from deep studies within computer music such as “waveset synthesis” and some come from DSP with no real application as far as I know (around the end is where they show up). If I’m wrong please let me know and correct me so I can improve this.

Space Laces Crunch Method:
There are probs a lot of ways to get that crunch in your sound: use an OTT, use a dynamic tube distortion with the bias turned up in Ableton, ring mod a sound, use crunchy foley, use a waveshaper with a good amount of the wave @ 0 to create those pulse-like timbres, or use intermodulation distortion by distorting white noise together with a bass. This isn't a real technique lol, I like this wangleline vid.

https://www.youtube.com/watch?v=g3JxYhg7Wu8

UPDATE I finally found this really useful tutorial by dnksaus on the actual "crunch", if I'm to believe it lol, at this point I'm probs wrong 😭
https://www.youtube.com/watch?v=UhpNyn9TdRU&list=PL0GYNH5tLnlpZVsjIOgDXm4isrOofIybg&index=3

Mr. Bill Transient Method: Replace a transient from one drum with another, straight up; Mr. Bill has a whole tutorial on it

https://www.youtube.com/watch?v=rt1VbWI6ylU

He even has a plug-in called slap that does this.

https://www.yum-audio.com/Slap-by-MrBill/

UPDATE thank u to HeyImNIRE: basically what it refers to is similar to what I mentioned above, but it utilizes ShaperBox to extract the transients of other drums and then, on the master, amplifies said transients through something of a sidechain monitor to keep them consistent. Just watch this tutorial
https://youtu.be/FTvGkPhUT9I?si=GDL4FZ73P08kdyuo&t=232 

Lazer Texture Method: Do a pitch dive on sounds to give them that laser sound, or layer a laser into your sound, or maybe granular a bunch of laser zap sounds to do fun stuff. There are a lot of ways to make those sounds in general; all-pass filters such as dispersers can also do the trick.

https://youtube.com/shorts/QzJYGVHKkGE?feature=shared

Melodyne Method: Melodyne is an overly expensive plug-in used for manual pitch correction; however, the full version gives you a lot of ways to control the formant and harmonic character of a sound, used especially in botanica. Dnksaus has a good tutorial on the cool stuff you can do with it (which I'm sure Neutone and Logic's own alternative don't have, but idk)
https://www.instagram.com/reel/C8DM-LHPvZ4/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==

Selsai also has some other tricks that idk you can mess with too.

The Pitchmap Smear Method: Pitchmap is a popular polyphonic pitch corrector that probs utilizes FFT algorithms to retune audio, which means you can smear transients in it. Video by selasi

https://www.youtube.com/watch?v=QLJUk-Aj4D8

The nvctve emergence method: Emergence is a free granular plug-in that utilizes real-time granular manipulation and you can automate a lot of the parameters and do some uhhh, stuff to it.
https://x.com/nvctve/status/1813575896135287121
https://www.kvraudio.com/product/emergence-by-daniel-gergely

The vital glitch method: Ok, this is a technique I've developed myself using Vital that I posted on Twitter a while back. Basically, the free synth Vital can be especially good for glitch sound design: create a chaotic patch, right click on a wavetable to resynthesize audio into it, and then scrub through said wavetable while manipulating it.
https://x.com/kyrsive/status/1784156555614605672 

The metallic chorus method- If you turn the rate/depth of a chorus down as slow as possible and crank up the feedback, you can easily make a sound much more metallic, since a chorus is really just a bunch of delays. Bonus points if you can change the delay times themselves. Vital is exceptionally good @ this as it has a "freeze" option that can turn off any modulation.

https://x.com/kyrsive/status/1734021815809318916
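Since "a chorus is just a bunch of delays" is the whole trick here, here's a tiny numpy sketch of it (delay times, feedback amount, and function names are all my own arbitrary picks, not from any actual plug-in): freeze the modulation and the chorus collapses into parallel feedback comb filters, which is exactly the metallic ring.

```python
# Hypothetical sketch: a chorus with its LFO frozen is just a few static
# feedback delay lines, i.e. comb filters -- which is what sounds metallic.
import numpy as np

def frozen_chorus(x, delays_samples, feedback=0.85, mix=0.5):
    """Run x through parallel static delay lines with feedback (comb filters)."""
    y = np.zeros(len(x))
    for d in delays_samples:
        buf = np.zeros(len(x))
        for n in range(len(x)):
            delayed = buf[n - d] if n >= d else 0.0
            buf[n] = x[n] + feedback * delayed   # feedback path keeps it ringing
            y[n] += delayed
    return (1 - mix) * x + mix * y / len(delays_samples)

# An impulse through the frozen chorus rings at delay-spaced harmonics,
# giving the metallic comb-filter timbre.
impulse = np.zeros(2048)
impulse[0] = 1.0
out = frozen_chorus(impulse, delays_samples=[37, 53, 71])
```

Modulating `delays_samples` very slowly gets you back toward a normal chorus; leaving them fixed is the "freeze" trick.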

Chorus + disperser method- Idk how to describe this one but it was mentioned by @FloureDistrict so I thought I'd mention it. Basically, turning up the depth and the rate, setting the feedback to around 20%, and the dry/wet at around 50-75% gives sounds a very grainy-esque tail, and adding a disperser on top can give it a much more watery texture.

https://x.com/FloureDistrict/status/1822097359456919592

Concatenative synthesis method – a form of synthesis akin to granular where a set of short samples (units) are concatenated to build a sound. In contrast to granular synthesis, concatenative synthesis is driven by an analysis of the source sound in order to identify the units that best match the specified criteria; the resulting analyzed units are collected into a database we call a "corpus" - https://ismm.ircam.fr/catart/
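To make the corpus idea concrete, here's a toy Python sketch (my own naming and a single RMS feature, nothing like CataRT's real analysis): chop the corpus into units, describe each with a feature, then rebuild the target from the nearest-matching units.

```python
# Toy concatenative resynthesis: analyze corpus units into a feature
# "database", then pick the best match for each slice of the target.
import numpy as np

def make_units(signal, unit_len):
    n = len(signal) // unit_len
    return signal[:n * unit_len].reshape(n, unit_len)

def rms(units):
    return np.sqrt(np.mean(units ** 2, axis=1))

def concat_resynth(target, corpus, unit_len=256):
    t_units = make_units(target, unit_len)
    c_units = make_units(corpus, unit_len)
    c_feat = rms(c_units)                      # the analyzed "corpus" database
    out = []
    for tu in rms(t_units):                    # for each target unit...
        best = np.argmin(np.abs(c_feat - tu))  # ...find the closest corpus unit
        out.append(c_units[best])
    return np.concatenate(out)

rng = np.random.default_rng(0)
corpus = rng.standard_normal(8192) * np.linspace(0, 1, 8192)  # a noise swell
target = np.sin(np.linspace(0, 80, 4096))                     # the "score"
y = concat_resynth(target, corpus)
```

Swapping RMS for spectral centroid or MFCCs gets you much closer to what the real tools do.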

There are, however, some free alternatives to said software if you aren't willing to shell it out. Some examples include Aphex Twin's "samplebrain", which has a similar methodology, utilizing a corpus called a "brain" to reconstruct a target sound https://gitlab.com/then-try-this/samplebrain

Eargram for puredata is another popular resource I would recommend checking out, I haven’t tried it so I might be off https://sites.google.com/site/eargram/ 

Resynthesized modal filter method- referring to the "modal filter" inside of Melda's MSoundFactory; this filter allows you to analyze any wav file, extract frequencies from a snapshot of audio, and apply them to the harmonic relations of the modal filter - https://www.youtube.com/watch?v=IV-7fF_8lOE

You might also be able to build your own version of this algorithm in Max/MSP + plugdata. The Cycling '74 youtube channel has a series on FFTs which ends with an incredibly similar resynthesis technique, but sigmund~ in plugdata might help you as well. https://www.youtube.com/watch?v=9gQAHf0Sf9I&list=PLasl9I6VeCCo4ASETZ5fgcflFx09ZJ6EJ

Gen~ physmod method– physical modeling using gen~ in Max/MSP. The reason this technique might be preferred over a normal Max/MSP patch for physical modeling is that sample delays and feedback systems are much more versatile in an environment like gen~ that works per-sample instead of per-vector. Autechre have used it to make incredibly detailed and unreal sound design. You might be able to achieve this in plugdata by making sure your block size is as small as possible.

https://www.youtube.com/watch?v=eDYs2UZzhI4 https://ccrma.stanford.edu/~jos/pasp/Physical_Models.html

Rawdata crunch method – I often use Audacity's raw data import for generating interesting noise textures to then layer and use within compositions and sound design; the "crunch method" would refer to distorting a source signal together with raw-data noise on a bus. Noise artists such as Network Glass have used it to create anti-music statements, and different import algorithms can create different textures.

https://www.youtube.com/watch?v=zdSwj3Y92F4

sdrr2 method – sdrr2 is just a saturation plugin I like using on my master, shoutouts @gater for putting me on

flucoma demix method – FluCoMa (Fluid Corpus Manipulation) is a bank of external objects for Max/MSP. Demixing (or decomposition, sometimes referred to as remixing) is breaking down a signal into 3 components (sinusoidal [tonal], residual [noise], and transients) that then sum back into the original. FluCoMa is also available for puredata, so I'd experiment and see if it's possible to do there; if you ever make it, let me know lol. https://learn.flucoma.org/learn/decomposition-overview/  https://learn.flucoma.org/reference/sines/

audeka cubic crunch method – this is just the "cubic" waveshaper mode in Ohmicide being used multiband on the upper bands to create a sort of "breaking up"/"tearing" texture. I saw this in some Audeka video or course, that's why I attribute it to them. The manual describes it as having a "drying" effect (there was a waveform picture here, with the unaffected waveform @ the top and the affected waveform at the bottom, a definitely blown-out sound). Ohmicide is free so you can get it right now, albeit discontinued.

Prismfx method– prismfx is just a physical modeling modal bank/resonator in Reaktor, sounds cool. If you don't want to spend as much money, the mikroprism is available for free in Kontakt; however, it's not an effect but rather an instrument. Still fun.

nonlinear filter method – The nonlinear filters in Phase Plant add some level of distortion and coloring to the signal thanks to modeled nonlinearities in the circuitry (which could just be basic saturation). Phase Plant and Surge XT provide useful filters that can do this, even some where warping happens depending on the cutoff frequency or the resonance, and Surge XT even has a nonlinear all-pass filter?? Ezra says u should do this with cross modulation (FMing two synths back into each other, here's an example of that happening https://www.instagram.com/reel/C12HovCMxYH/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==)
Extra documentation on the signal processing behind nonlinear filters that come from Surge XT’s manual
https://jatinchowdhury18.medium.com/complex-nonlinearities-episode-4-nonlinear-biquad-filters-ae6b3f23cb0e

wave-terrain synth method– wave terrain synthesis is a form of synthesis with a 3rd added dimension to what would normally be a 2d wavetable lookup https://cycling74.com/forums/-sharing-wave-terrain-synthesis-abstractions
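A minimal sketch of the idea in Python, assuming a made-up terrain function and an elliptical orbit (both arbitrary choices of mine): the orbit scans the terrain and the sampled heights become the waveform.

```python
# Toy wave-terrain oscillator: a 2D surface z = f(x, y) is "scanned" by an
# orbiting point, and the heights it passes over are the output signal.
import numpy as np

def terrain(x, y):
    # any smooth two-variable function works as a terrain
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

def wave_terrain(freq_x, freq_y, dur, sr=44100, radius=0.5):
    t = np.arange(int(dur * sr)) / sr
    # an elliptical orbit over the terrain; changing the orbit changes timbre
    x = radius * np.cos(2 * np.pi * freq_x * t)
    y = radius * np.sin(2 * np.pi * freq_y * t)
    return terrain(x, y)

sig = wave_terrain(220, 331, dur=0.1)
```

Detuning `freq_x` against `freq_y` makes the orbit precess, which is where the evolving timbres come from.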

A good free alternative to this, however, is VCV Rack's own wave terrain synthesizer called GeneticSuperTerrain; I've had some fun evolving timbres through it ! https://library.vcvrack.com/dbRackModules/GeneticSuperTerrain

manual granular/micromontaging method – primitive form of granular synthesis where tiny slices of audio (grains) are manually placed and edited on a timeline, allowing for greater control of each individual grain at the expense of time, exerted effort and sanity. The term “manual granular” was first coined (as far as I know) by billain/aethek on his track vertebrae. https://soundcloud.com/aethek/vertebrae?si=4c9661398f44499ebc23b5a782462fbc&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

Artists like Little Snake have also been known to utilize this technique, such as on his debut album.

Ezra also utilized this technique for my song rogue myth but who cares lol hehe https://soundcloud.com/voysol/rogue-myth?si=41bd20be49ba4ad7a418bd88935ff264&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

fake harmor resynth method – “I was instructed not to give away the sauce on this one, sorry.”- Ezra
I've looked around some discord servers but there seems to be no explanation on this y'all, sorry. My best guess as to what this entails: resynthesizing a sound not via Harmor itself but via another audio-to-spectrogram software, taking an image of said spectrogram via screenshotting, and importing it into Harmor, letting any flaws that come through be noticed. But this is a very weird guess that I would assume has no results. If anyone actually does know it, hit me up (we need you so badly rn).

Free spectral sound design for garageband ios -

OK this is mostly insane but bear with me. If you are in GarageBand and are looking for some nice spectral sound design tools that don't cost any money, might I suggest plugdata? Plugdata is something we've mentioned before; it's basically puredata but as a VST. What people don't tell you is that plugdata can also work as an iOS AU device for GarageBand via the app, so you can design stuff like spectral gating, spectral delays, heck even a morpher if you have the time. I learned this a while back when I wanted to make a quick OTT on the go, so this is insane.

https://plugdata.org/

round robin sampler method – refers to importing multiple samples into a sampler and having each consecutive midi input trigger a different random sample; at very quick trigger speeds this can result in a foley-esque texture, shoutouts @sv1___ !! Combining this with stuff like exponential rhythms can lead to very botanica-esque transition sounds.

VIDEO ON EXPONENTIAL RATES
https://www.youtube.com/watch?v=WXzDesXWA3I
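Here's a hedged sketch of the round-robin idea in Python (the hit samples and the trigger rate are placeholders I made up): every trigger overlap-adds a randomly chosen sample, and at ~100 triggers a second the hits smear into a texture.

```python
# Minimal round-robin trigger sketch: each incoming trigger plays a
# different randomly chosen sample; fast retriggers read as foley.
import numpy as np

def round_robin(samples, trigger_times, sr=44100, total_len=22050):
    out = np.zeros(total_len)
    rng = np.random.default_rng(42)
    for t in trigger_times:
        s = samples[rng.integers(len(samples))]    # pick a random sample
        start = int(t * sr)
        end = min(start + len(s), total_len)
        out[start:end] += s[:end - start]          # overlap-add the hit
    return out

# three placeholder "drum hits": short windowed chirps
hits = [np.hanning(512) * np.sin(np.linspace(0, 60 * k, 512)) for k in (1, 2, 3)]
dense = round_robin(hits, trigger_times=np.arange(0, 0.4, 0.01))  # 100 Hz retrigs
```

Feeding `trigger_times` an exponential ramp instead of a constant rate is where the botanica-style transition sounds come in.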

self-oscillating eq method – using an eq bell peak (or any filter with analog-modeled noise/nonlinearity, i.e. not a clean digital one) with a very high Q factor to boost and resonate a specific frequency to the point it starts self-oscillating, sometimes due to the noise in the circuitry model. Usually works well being fed into a distortion effect.

Audacity sound design method - Audacity is a free audio editor we've talked about before, but it's able to host so many sound design possibilities, such as experimental spectral processing via noise removal, sliding stretch, and vocoding. Here's a tutorial that does it
https://x.com/_onokio_/status/1661773079687643137

slipstick synthesis method – Slipstick synthesis can mean multiple things depending on who you ask.

In the Kyma system, slipstick is a type of physical modeling developed by Knut Kaulke that produces waveforms with equations modeled after a mass dragged by a spring across a friction surface. The model alternates between two events under friction, "sticking" and "slipping", which can be used to model percussion when modulating the position of the object. The results, from what I hear, while very realistic, also have this sorta watery texture to them.

https://medias.ircam.fr/x7aa847

Some kyma systems might even work well with the “fingerboard” instrument, a tool we will discuss later

https://www.youtube.com/watch?v=eAVLrtOrcyc

However, according to Andy Farnell's interpretation of the "slipstick" phenomenon when discussing the sound of an opening door, it refers to using stuff such as exponential rhythms that accelerate from slow rates up to audio rate, coupled with resonant filters + delays. I've done this technique before and have ended up with some nice results.

And some people, well, I’ll just let the man explain for himself:

amon tobin kyma method– Amon Tobin used Kyma (pronounced "keema", crazy I know) for various forms of synthesis and digital signal processing on his album ISAM; this was used to create insane resynthesis morphings + formant shifting + performative granular stuff, and it's a tool many sound designers still swear by. Sadly you need dedicated hardware to run Kyma, but if you've got some sound design teacher in college they've probably got one you can borrow.

https://www.youtube.com/watch?v=jbJwyTkCJk0

realtime audio variational autoencoder method– RAVE is a real-time timbre transfer model that uses machine learning and can be utilized in Max (but I think some plug-ins have done it as well, such as https://neutone.ai/morpho). There are also tone transfers possible using Google's machine learning, which have been utilized by artists such as Giant Claw to create hyper-realistic orchestral sounds.

https://sites.research.google/tonetransfer 

There is also the RAVE external that can be used for interesting morphing in max/msp. https://www.youtube.com/watch?v=dMZs04TzxUI

OPN enzyme/scanned synthesis method– OPN used Enzyme while working on his Garden of Delete project, in particular utilizing the randomize feature to create sounds and glitches. The Enzyme synth utilizes something called "scanned synthesis", which you can think of as a combination of physical modeling and wavetable synthesis: an emulated hammer strikes a string, and the waves it creates are scanned to synthesize sounds. Enzyme has some pretty interesting features, such as being able to use your own samples as the hammer and a deeply complex signal chain with multiple oscillators that allows for a much more expressive and experimental sound palette.

*OPN discussing the Enzyme system from Equipboard

https://www.soundonsound.com/reviews/humanoid-sound-systems-enzyme

Sadly, the plug-in is long gone (I mean LOOK AT THAT UI). There are probably some good piracy sites, but I won’t promote that. Instead, I’ll give you some other alternatives to the scanned synth.

Nettle by Fellusive (for free!)

https://www.kvraudio.com/product/nettle-by-fellusive

Voltage Modular’s own Scanned Oscillator

https://cherryaudio.com/videos/weevil-scanned-oscillator

And that is not to mention Qu-Bit’s own Scanned Eurorack oscillator !

lorn speaker method– I don't remember what interview I heard this in, but basically Lorn sometimes records his master output going through his speakers for room tones and texture from the sub, known as reamping. If you have a broken speaker, try using that (I know I got one).

Ultrasonic metal sounds- This is a fun one to use if you can (learned from Galen Tipton): get an ultrasonic microphone (if you can lol) and record some crazy metallic sounds like your keys jangling. These sounds usually contain frequencies above what we can hear, so when you pitch them down, you can get some larger-than-life timbres. You can also do sound design sessions at extremely high sample rates and pitch those down as well; warning, your computer may never appreciate you.

emphasis de-emphasis frequency/pitch shifter method– placing two frequency shifters at the start and end of your processing chain, and having them shift the signal in opposite directions from each other, can have drastic results on the final sound, shoutout false noise. My theory as to why this happens is the phase shifting that goes on inside a frequency shifter plus the artifacts from the ring mod; you could probably do the math to figure that one out.

Video by au5 discussing building a frequency shifter: https://www.youtube.com/watch?v=Vup3KZvsazc 

Similar artists have achieved this with pitch shifters as well, and depending on how they're built (FFT or PSOLA, for example), that can lead to interesting results too !
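If you want to see the opposite-direction shifter idea in code, here's a numpy sketch of a single-sideband frequency shifter built from an FFT analytic signal (the 150 Hz shift and the tanh stage in the middle are arbitrary stand-ins for "your processing chain"):

```python
# Single-sideband frequency shifter via the analytic signal, then the
# emphasis/de-emphasis trick: shift up, process, shift back down.
import numpy as np

def freq_shift(x, shift_hz, sr=44100):
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)                       # analytic-signal filter weights
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spec * h)      # x + j*hilbert(x)
    t = np.arange(n) / sr
    # heterodyne: rotating the analytic signal shifts every partial by shift_hz
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

sr = 44100
t = np.arange(sr // 10) / sr
sig = np.sin(2 * np.pi * 440 * t)
up = freq_shift(sig, +150.0, sr)          # emphasis: everything moves up 150 Hz
processed = np.tanh(3.0 * up)             # whatever processing goes in between
back = freq_shift(processed, -150.0, sr)  # de-emphasis: shift back down
```

With nothing in between, the down-shift undoes the up-shift almost exactly; it's the nonlinear stuff in the middle that lands in smeared, inharmonic places.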

ppooll method– ppooll is a digital signal processing environment for Max/MSP that can be used for audio manipulation such as granular synthesis, live performance, ambisonics, etc. https://ppooll.klingt.org. For all you nerds, Tim Hecker uses it (akin to his workflow of utilizing Max processors) https://www.reddit.com/r/ppooll/comments/12hvaxs/tim_hecker_recently_released_an_album_and_shared/

Also this might be a good time to shout out Forester for max/msp, could help create some nice soundscapes with this one !
https://leafcutterjohn.com/software/

Color bass in vital method - Another glorkglunk tutorial I'm just copy-pasting onto here because I like it: basically, take any wavetable, set its detuning to the harmonic mode in advanced, set the spectral morph to vocode, and boom, it's tonal enough to play chords.

https://x.com/GLORKGLUNK/status/1580425445274963968

YASUNAO TONE CD SKIP METHOD, TAPE DETERIORATION METHOD, LASZLO PHONOGRAPH GLITCH METHOD, OVAL CD METHOD, MARIA CHAVEZ METHOD - these are all various techniques for producing glitches by manipulating the physical media the audio is played from: scratching CDs, cutting tape, burning CDs, hitting your computer while playing sound, etc. Here's an example of artist Maria Chavez utilizing broken vinyl to create chaotic and psychedelic loops and textures.

https://www.youtube.com/watch?v=ruDZM-mrTpA&t=0s

Picture example of the Yasunao Tone method, adding scotch tape to the CD before playing it. (SOURCE https://www.youtube.com/watch?v=BTkqfWjTJ6E )

Interactive physical modeling method - This is another kind of physical modeling, but instead of using waveguides/delays, it attempts to recreate sounds via actual real-time physics simulations, creating the sounds based on your interactions with the material and its atoms visually. This is much easier to understand when seeing it for yourself. I've linked a video of this in action, where interacting with the visual model produces sound based on said model.

https://youtu.be/BkxLji2ap1w?feature=shared

If you want to try it yourself, there’s a GitHub link to said software by Silvin Willemsen.

https://github.com/SilvinWillemsen/ModularVST

I’d also recommend checking out Anukari’s take on it!

https://anukari.com/

Emf shortwave method – recording electromagnetic fields with coil pickups/EMF microphones for sound design purposes, pretty easy to understand https://www.youtube.com/watch?v=n76Br7rijFE

Andrew Huang has utilized them a lot !
https://www.youtube.com/watch?v=lO5ehpdZ5Po

PARTICLE DISPERSION SYNTHESIS METHOD – yapping (or is she?). Particle synthesis is in fact a real umbrella term used to describe stuff like granular synthesis, pulsar synthesis, and trainlet synthesis, basically anything that manipulates incredibly small sections of sound; Microsound by Curtis Roads has a whole list of them, go read that!

CEPSTRAL FORMANTS METHOD – Cepstral processing refers to taking a further transform of the log-magnitude spectrum of a signal (the "cepstrum"), from which you can extract a smooth spectral envelope. These envelopes can then be used to perform sound morphing (as done with Alexander Panos' device, if I'm to believe) or utilized in some formant shifting algorithms like Harmor's (which might be cheating since it just uses images). These processes are time-consuming and do require some knowledge of DSP, but if you can do it, it's super fun (I haven't, but it probably is).

Paper discussing cepstral processing in puredata with a use in morphing:
https://www.researchgate.net/publication/283270327_Spectral_envelope_extraction_by_means_of_cepstrum_analysis_and_filtering_in_Pure_Data

Alexander Panos’ devices: https://alexanderpanos.com/ 
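Here's a minimal cepstral-envelope sketch in numpy, under my own choice of frame size and lifter length: take the log-magnitude spectrum, transform again, keep only the low quefrencies, and transform back to get the smooth envelope you'd morph or formant-shift with.

```python
# Minimal cepstral spectral-envelope extraction from one audio frame.
import numpy as np

def spectral_envelope(frame, n_lifter=30):
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-9)
    cepstrum = np.fft.irfft(log_mag)                   # real cepstrum
    cepstrum[n_lifter:len(cepstrum) - n_lifter] = 0.0  # low-pass "lifter"
    return np.exp(np.fft.rfft(cepstrum).real)          # smooth envelope

# a crude "voiced" test signal: a 110 Hz harmonic stack
sr = 44100
t = np.arange(2048) / sr
voiced = sum(np.sin(2 * np.pi * 110 * k * t) / k for k in range(1, 20))
env = spectral_envelope(voiced)
```

Morphing is then (roughly) dividing one sound's spectrum by its own envelope and multiplying by another's.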

microtonal manual additive method– okay this one is actually cool. U know how you can place a bunch of sine waves into the piano roll and create a "pseudo-additive synth"? This is basically just that idea, but using piano rolls with various microtonal intervals instead of 12edo. Ok this is actually right.
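The same piano-roll trick, sketched in code (the 19edo scale and the particular steps are just my example): stack sines at intervals of whatever edo you want on one fundamental.

```python
# Pseudo-additive "piano roll of sines", but microtonal: each partial sits
# on a step of an arbitrary equal division of the octave.
import numpy as np

def microtonal_additive(f0, steps, edo=19, dur=0.5, sr=44100):
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for i, step in enumerate(steps):
        freq = f0 * 2 ** (step / edo)                  # interval in `edo` steps
        out += np.sin(2 * np.pi * freq * t) / (i + 1)  # gentle amplitude rolloff
    return out / np.max(np.abs(out))                   # normalize

chord = microtonal_additive(220.0, steps=[0, 6, 11, 17])  # a 19edo voicing
```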

dereverb method– using iZotope RX's dereverb with artifact smoothing turned all the way down, where it behaves like a spectral gate with a different flavor to it (again, also right). If you want a cheaper one, does this look promising? Let me know if you find anything better.

https://www.plugin-alliance.com/en/products/spl_de-verb_plus.html

circuit bending method- Circuit bending is a popular technique among noise musicians: taking electronics that produce sound (most popularly kids' toys such as Fisher Price ones) and messing with the circuitry to produce glitchy, messed-up sounds that fit the aesthetic.

https://www.youtube.com/watch?v=I2eSBMotRjI

One of the most popular resources on the technique, however, has to be Circuit-Bending: Build Your Own Alien Instruments by Reed Ghazala, the guy who coined the term circuit bending. It's free on the Internet Archive.

https://archive.org/details/CircuitBendingBuildYourOwnAlienInstruments!

If you aren't willing to risk your nostalgia by destroying these toys, then VCV Rack's Audible Instruments Macro Oscillator (which recreates Mutable Instruments' Braids) has an oscillator that models these chaotic circuit sounds called TOY*!

https://library.vcvrack.com/AudibleInstruments/Braids

aphex twin metasynth method/graphical synthesis method- OK, this is a pretty easy one to start off with. Aphex Twin utilized this software called MetaSynth that creates experimental sounds and arrangements by using what is known as "graphical sound", aka the ability to compose sound using images and drawings.

Video by benn jordan explaining the software:
https://www.youtube.com/watch?v=4-GDOIOAuU4

The actual software
https://uisoftware.com/metasynth/

If you can't afford it by any means, then might I suggest the Virtual ANS system, which can do practically the same thing.

https://www.warmplace.ru/soft/ans/

curtis roads pulsar synthesis method - So Curtis Roads, one of the biggest people behind the microsound movement, coined a synthesis technique a while back called pulsar synthesis. The technique uses a waveform consisting of pulsarets separated by silence. The main controls are the formant frequency (the length/width of said pulsaret, aka pulsaret-width modulation) and the fundamental frequency (the rate of these pulsarets), but there is so much more you can do with them. You can overlap these pulsarets to create crazy detuned/spectral sounds, FM them against each other, and process them. Entire albums have been written using just pulsar synthesis.
https://pixelmechanics.bandcamp.com/album/pulsar-retcon
You might know of some synths that utilize them, such as Opal's pulsar machine, but for two incredibly OP versions of pulsar synthesis there is nuPG by Marcin Pietruszewski, a pulsar synth that runs entirely in supercollider !
https://www.marcinpietruszewski.com/the-new-pulsar-generator
Or there's PdPG, a pulsar synth that works entirely in puredata/plugdata by "whose body is this", which has extra tools such as cross modulation and flucoma interpolations with wavetables. I have tried both and they are extremely powerful to use !
https://github.com/whosebodyisthis/PdPG
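A bare-bones pulsaret train in numpy (my own stripped-down version, nothing like nuPG's actual feature set): one windowed sine cycle as the pulsaret, silence filling out the rest of the period, fundamental = pulsar rate, formant = pulsaret width.

```python
# Minimal pulsar train: pulsaret + silence repeated at the fundamental rate.
import numpy as np

def pulsar_train(fund_hz, formant_hz, dur=0.5, sr=44100):
    period = int(sr / fund_hz)                  # samples between pulsaret onsets
    width = int(sr / formant_hz)                # pulsaret length in samples
    width = min(width, period)                  # duty cycle can't exceed 100%
    pulsaret = np.sin(2 * np.pi * np.arange(width) / width)  # one sine cycle
    pulsaret *= np.hanning(width)               # window each pulsaret
    out = np.zeros(int(dur * sr))
    for start in range(0, len(out) - width, period):
        out[start:start + width] += pulsaret
    return out

# Sweeping formant_hz while fund_hz stays put is pulsaret-width modulation.
train = pulsar_train(fund_hz=80, formant_hz=600)
```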

polyBLAMP method - This needs some deep explanation. One of the biggest issues when it comes to virtual analogue synthesis (aka recreating analogue sounds on a digital computer) is the aliasing that occurs when partials go above the Nyquist frequency, especially in waveforms with sharp corners such as a triangle wave. The Band-Limited Ramp (BLAMP) function, and its polynomial approximation polyBLAMP, is one solution: it patches the sharp corners with a band-limited correction. I am not going to go too in-depth cause most likely you don't even need to worry about it. So here, read this DAFx paper and correct me if I'm wrong lol. https://www.dafx.de/paper-archive/2016/dafxpapers/18-DAFx-16_paper_33-PN.pdf

chaotic attractor FM injection method- OH MY GOD. Ok so this might be incredibly wrong and is a crazy guess, but what I believe this could be talking about is a type of chaotic oscillator, one generated by an equation that can't be as easily controlled as a normal oscillator, such as a Duffing oscillator. The FM part could mean injecting extra chaos via frequency modulation. This JSTOR article can help you get into chaotic synthesis.
https://www.jstor.org/stable/3680960
Another resource, if you want some good chaotic oscillators, comes from "whose body is this" again, who has a unique collection
https://github.com/whosebodyisthis/whose-chaos-modules-v0.3
One other interpretation could just be the Inspired by Nature Max for Live devices by Dillon Bastan, which utilize simulations like magnetism between charged particles to modulate parameters in a granular synth, a delay, and yes, an FM synthesizer.

https://www.ableton.com/en/packs/inspired-nature/
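Since I'm already guessing, here's that guess as code: Euler-integrate a Duffing oscillator (these parameter values are a standard chaotic regime) and inject its state into the phase of a sine carrier. All the numbers here are my own choices.

```python
# Duffing oscillator (chaotic for these parameters) used as an FM/phase
# modulation source for a sine carrier.
import numpy as np

def duffing(n, dt=0.01, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    x, v = 0.1, 0.0
    xs = np.empty(n)
    # Euler integration of: x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t)
    for i in range(n):
        a = gamma * np.cos(omega * i * dt) - delta * v - alpha * x - beta * x ** 3
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

sr = 44100
mod = duffing(sr // 10)                            # the chaotic modulator
t = np.arange(len(mod)) / sr
carrier = np.sin(2 * np.pi * 220 * t + 4.0 * mod)  # chaos injected into the phase
```

The modulation index (the 4.0) is where "extra amounts of chaos" comes in: small values warble, large values shred.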

CORDIC algorithm method- The CORDIC algorithm calculates trigonometric functions and other operations digit by digit, using only shifts and adds (no multipliers), which is why it's beloved on cheap hardware. In DSP it's heavily utilized when manipulating, shifting, and identifying phase, but I'm not a genius at this so Imma just give you this thread where I found people discussing how it's used (you can tell how bad I'm getting at this).
https://dsp.stackexchange.com/questions/73536/cordic-what-is-it#:~:text=The%20CORDIC%20algorithm%2C%20published%20by,without%20the%20use%20of%20multipliers.

UPDATE ty to Blair Arbouin for their request. One of the bigger uses of the CORDIC algo is in additive synthesizers, in order to allow for more harmonics minus the intensive CPU power.

This video goes over a conference documenting this: https://www.youtube.com/watch?v=TBCf1p7BSek
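Here's a minimal rotation-mode CORDIC in Python to show the shift-and-add trick (floats stand in for the fixed-point shifts real hardware would use):

```python
# Rotation-mode CORDIC: rotate (1, 0) toward the target angle using only
# multiplications by powers of two, then undo the accumulated gain K once.
import math

def cordic_sincos(angle, iterations=24):
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]  # lookup table
    k = 1.0
    for i in range(iterations):            # total gain of all micro-rotations
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0        # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * k, y * k                    # (cos(angle), sin(angle))

c, s = cordic_sincos(0.7)
```

For an additive oscillator the same rotation trick runs once per sample per partial, so each harmonic costs a handful of adds and shifts instead of a sin() call.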

newtone method - This is a pretty interesting method found on a lot of shitpost tracks you'll find in dariacore and "inertia williams" music (shivers), but basically Newtone (FL Studio's pitch editor) lets u stretch audio around and play with the pitch, constantly manipulating it. Recording these sounds in Edison will allow you to save those automations and controls.

trevor wishart waveset method - Oh, this is one I know (shoutout Nathan Ho's blog !!). Waveset synthesis is a branch of microsound that segments a sound at every upward zero-crossing point in the waveform; each fragment between successive crossings is a "waveset". In some cases, these fragments might help you find the pitch of a waveform. The fragments can then be manipulated, duplicated, or reversed to create unique glitches and phase-vocoder-esque sounds.

These supercollider codes utilize waveset synthesis to do some weird stuff with it.
https://github.com/bkudler/WavesetTransformer-ForTrevorWishart

https://scsynth.org/t/waveset-time-stretching-best-solution/8230
Here’s a performance in electroacoustics utilizing said method
https://madacyjazz.bandcamp.com/track/waveset-synthesis-of-variable-deceleration-figures-studies-i-viii-for-computer-generated-sound-and-clock-jitter-2021
And here's Nathan Ho using waveset synthesis with k-means clustering to create some really interesting concatenative textures.
https://nathan.ho.name/posts/wavesets-clustering/
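A small waveset mangler in numpy: split at every upward zero crossing, then reverse every other fragment (reversal is just one of the many waveset operations; substitute duplication, sorting, whatever).

```python
# Waveset splitting and a simple per-waveset transformation.
import numpy as np

def split_wavesets(x):
    # indices where the signal crosses zero going upward
    ups = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0)) + 1
    edges = [0, *ups, len(x)]
    return [x[a:b] for a, b in zip(edges[:-1], edges[1:]) if b > a]

def reverse_alternate(x):
    sets = split_wavesets(x)
    out = [w[::-1] if i % 2 else w for i, w in enumerate(sets)]
    return np.concatenate(out)

sr = 44100
t = np.arange(sr // 10) / sr
src = np.sin(2 * np.pi * 110 * t) + 0.3 * np.sin(2 * np.pi * 347 * t)
glitched = reverse_alternate(src)
```

Because every fragment still starts and ends near zero, the edits stay click-free even when the timbre gets mangled.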

serum/vital's sampler or joshua chop method - This is just more or less a reminder of the importance of utilizing the samplers in Serum and Vital. There are many ways you can do that. For example, in Serum you can take a bass loop or a song and map the key tracking to the phase (or the start position), so you can easily chop up the bass loop and add Serum's processing such as sample FM. In Vital you can't really change the phase, but you can randomize it, so you can probs get some fun stuff from it, such as creating granular clouds when you add an arpeggiator with a high gate and a long release envelope on Vital's ADSR!

A video by soup that goes through some interesting techniques where she uses the sampler in serum:

https://www.youtube.com/watch?v=Vzxe5T8DUSk

A twitter video I made turning vital into a granular synth.

https://x.com/kyrsive/status/1692346012554133805

roland kayn cybernetic synthesis method - AAAAAHHHH ok, so cybernetics basically means the study of feedback and control systems, and its methodology was utilized in Roland Kayn's electronic symphonies to create systems of machines that fold back on themselves. These modular systems employ constant looping, such as having one filter/integral control another in a feedback loop, allowing for chaotic compositions that do not need a human to interact with them. I recommend reading Nathan Ho's article on the process to see how he puts such a system together.
https://nathan.ho.name/posts/cybernetic-synthesis/
Also check out this lines thread, it has some people messing with modular synthesis to get there !
https://llllllll.co/t/cybernetic-music-roland-kayn-feedback-systems/40635

polygon synthesis method - Polygonal synthesis refers to having a phasor trace out a polygon in order to generate unique waveforms, probably the simplest technique on here. There seems to be an M4L device that can perform this, but I could not find it.
https://quod.lib.umich.edu/cgi/p/pod/dod-idx/continuous-order-polygonalwaveform-synthesis.pdf?c=icmc;idno=bbp2372.2016.104;format=pdf#:~:text=A%20method%20of%20generating%20musical,pitch%20and%20complexly%20shaped%20amplitudes.

You can also try this VCV Rack module that can perform something similar!
https://library.vcvrack.com/Sckitam/PolygonalVCO
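A toy polygonal oscillator in numpy, using the standard polar equation for a regular polygon's radius (projecting onto one axis is my simplification of the papers' complex-valued version):

```python
# Polygonal waveform synthesis: a phasor sweeps the angle, the polygon's
# radius at that angle shapes the output.
import numpy as np

def polygon_osc(freq, n_sides=5, dur=0.1, sr=44100):
    t = np.arange(int(dur * sr)) / sr
    theta = 2 * np.pi * freq * t                        # the scanning phasor
    sector = np.pi / n_sides
    # radius of a regular n-gon (unit circumradius) at angle theta
    r = np.cos(sector) / np.cos((theta % (2 * sector)) - sector)
    return r * np.cos(theta)                            # project to one axis

saw_ish = polygon_osc(220, n_sides=3)
```

More sides converge on a plain sine; fewer sides add the sharp corner harmonics (and, naively sampled like this, aliasing).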

kalman filter- Ok, this is another DSP thing, but basically a Kalman filter is a recursive estimator that tracks the state of a noisy system by constantly predicting and then correcting with new measurements (it's what makes sensor tracking and GPS work so well). There have been studies, from what I read, showing it being used to track the rhythm or pitch of a sound

https://dafx.de/paper-archive/2023/DAFx23_paper_7.pdf

https://web.stanford.edu/class/aa228/reports/2019/final147.pdf

wavefield synthesis method- This one is a super interesting thing I discovered a while back. Wave field synthesis is a multi-loudspeaker setup that creates spatial audio by reconstructing actual wavefronts, so virtual sources can be placed freely in space. There have been installations that are able to create insanely organic compositions using this technique. One resource I'll recommend checking out is Elías Merino & Daniel del Río's own "Structures for Wave Field Synthesis" album, with a book that contains some extra documentation on the subject if you are willing to pay for it.

https://eliasmerinodanieldelrio.bandcamp.com/album/structures-for-wave-field-synthesis
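
To give a flavor of the math (a drastic simplification; real WFS driving functions involve more careful filtering and weighting): each speaker plays the source delayed by its distance to the virtual source and attenuated by 1/r:

```python
import math

def wfs_driving(source_xy, speaker_positions, sr, c=343.0):
    """Per-speaker delay (in samples) and gain for a virtual point source.
    A toy sketch of the geometry, not a full WFS driving function."""
    params = []
    for sx, sy in speaker_positions:
        d = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delay = d / c * sr           # propagation delay in samples
        gain = 1.0 / max(d, 0.1)     # 1/r amplitude decay, clamped near zero
        params.append((delay, gain))
    return params
```

The speaker closest to the virtual source fires first and loudest, and the wavefront the array emits approximates the one a real source at that spot would have made.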

L-system method- The L-system is an algorithm that rewrites a string of letters according to a set of production rules, generation after generation. While created for the study of growing systems like plants and trees, we can use L-system techniques to create evolving modulation of signals and sounds, and even build synthesis systems out of them. Here’s a Nathan Ho article that can help you with that!

https://nathan.ho.name/posts/sound-synthesis-with-l-systems/
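
The core of it is tiny: rewrite every symbol in the string in parallel each generation, then map symbols to whatever you want (the note mapping below is my own arbitrary choice):

```python
def lsystem(axiom, rules, depth):
    """Apply the production rules to every symbol in parallel, `depth` times."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# arbitrary example mapping: symbols -> semitone offsets from a root note
NOTE_MAP = {"A": 0, "B": 7}

def to_notes(s, root=60):
    """Turn an L-system string into MIDI note numbers."""
    return [root + NOTE_MAP[ch] for ch in s if ch in NOTE_MAP]
```

For example, `lsystem("A", {"A": "AB", "B": "A"}, 5)` gives `"ABAABABAABAAB"`, whose lengths follow the Fibonacci numbers; fed through `to_notes` it becomes a self-similar two-note pattern.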

Neuron synthesis method- My best guess for what neuron synthesis could mean is simulating a neuron with a biological model and running it as an oscillator for sound synthesis, which has been done in the ChucK language before. These systems use mathematical equations to replicate phenomena such as “sodium channel activation”. This was first developed at the Princeton Laptop Orchestra and had plans to become a VCV Rack module, but so far I can’t find a released version, though some people have come close to it.

https://www.nime.org/proceedings/2018/nime2018_paper0088.pdf

https://github.com/spiricom/VCVrack
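
The paper uses proper biophysical models; as a stand-in, here’s the much simpler FitzHugh–Nagumo neuron model Euler-integrated into a spiking oscillator (same spirit, fewer equations):

```python
def fitzhugh_nagumo(n_samples, i_ext=0.5, dt=0.05):
    """FitzHugh-Nagumo: a two-variable caricature of a spiking neuron.
    With enough input current it settles into a periodic spiking limit cycle.
    (A simpler stand-in for the fuller models in the NIME paper above.)"""
    v, w = -1.0, 1.0   # membrane voltage and recovery variable
    out = []
    for _ in range(n_samples):
        dv = v - v ** 3 / 3 - w + i_ext     # fast voltage dynamics
        dw = 0.08 * (v + 0.7 - 0.8 * w)     # slow recovery dynamics
        v += dt * dv
        w += dt * dw
        out.append(v)
    return out
```

Scale and resample the output to audio rate and the spike train reads as a buzzy, very “biological” oscillator; modulating `i_ext` changes the firing rate.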

saphe covoder/vase phocoder method- Ok I actually thought this was a joke but no, this is another cepstral technique: it uses cepstral processing to build a “phase vocoder”, or as they call it a “saphe covoder”, that helps with “spectral crowding”. Like a more “accurate” one. Cepstral processing is an entire wormhole I’m too scared to go down (have you SEEN those words), so I’ll link this really interesting paper on different pitch-shifting algorithms, since cepstral processing (but also PSOLA) is popular in that sphere.
https://digitalcommons.bard.edu/cgi/viewcontent.cgi?article=1262&context=senproj_s2018
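
For reference, the cepstrum itself is a short formula: the inverse transform of the log magnitude spectrum. A strong peak at quefrency q means the signal repeats every q samples, which is why it shows up in pitch work. A slow, stdlib-only sketch (real code would use an FFT library):

```python
import cmath, math

def real_cepstrum(signal):
    """Real cepstrum: inverse DFT of the log magnitude spectrum.
    Plain O(n^2) DFT for clarity -- use an FFT library for real work."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
    log_mag = [math.log(abs(s) + 1e-12) for s in spec]  # floor avoids log(0)
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]
```

Feed it a click train with a period of 32 samples and the cepstrum spikes at quefrency 32.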

inharmonic comb filter - This is a pretty interesting idea. Basically, whenever you crank up the feedback on a delay, you get a comb filter with peaks at every multiple of the loop’s base frequency; that’s pretty basic. When you add something such as a filter into that feedback path (something that modules like the tuned delay in Surge XT’s VCV Rack bundle can do), the peaks of the comb get detuned away from that harmonic series. You can also utilize the frequency shifting technique mentioned earlier to detune the peaks of a comb filter while not affecting the actual sound being processed.
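
The base relationship is easy to check numerically: a feedback delay of D seconds with gain g has magnitude response 1/|1 − g·e^(−jωD)|, so peaks land at every multiple of 1/D Hz. A quick sketch:

```python
import math

def comb_response(freq, delay_s, feedback):
    """Magnitude response of y[n] = x[n] + g * y[n - D] at frequency `freq`,
    where D = delay_s seconds of delay and g = feedback (|g| < 1)."""
    w = 2 * math.pi * freq * delay_s      # phase accrued over one trip around the loop
    re = 1.0 - feedback * math.cos(w)
    im = feedback * math.sin(w)
    return 1.0 / math.hypot(re, im)
```

With a 1 ms delay and g = 0.9, the response at 1000 Hz (a peak) is 10×, while at 1500 Hz (a trough) it’s about 0.53×. A filter in the loop makes the effective D depend on frequency, which is exactly the detuning described above.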

javascript stock market sonification method- program software in javascript so you can turn the stock market into music by mapping data points to notes, end of story (IDC if this is sonic pi lol it explains it)

https://www.youtube.com/watch?v=mcwId_RA0rc&feature=youtu.be
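
The whole trick in any language (JavaScript, Sonic Pi, whatever) is one mapping function. A Python sketch that quantizes each data point onto two octaves of a major scale (the scale and range are arbitrary choices of mine):

```python
def sonify(data, scale=(0, 2, 4, 5, 7, 9, 11), root=60):
    """Map each data point to a MIDI note: normalize into [0, 1], then quantize
    onto two octaves of a (here, major) scale."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0            # avoid dividing by zero on flat data
    notes = []
    for p in data:
        idx = int((p - lo) / span * (2 * len(scale) - 1))   # 0 .. 13
        octave, degree = divmod(idx, len(scale))
        notes.append(root + 12 * octave + scale[degree])
    return notes
```

The lowest price becomes the root note, the highest lands two octaves up, and everything in between snaps to the scale.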

cross-modulation method- cross modulation is a take on frequency modulation where one oscillator modulates the frequency of the other, while that second oscillator modulates the frequency of its own modulator in turn. It’s a very interesting take that utilizes feedback to create these chaotic textures and sounds. Sarah Belle Reid has a nice video explaining the concept and applying it to VCV Rack and eurorack synths.

https://www.youtube.com/watch?v=IHzOlm_32-Q
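
A digital sketch of the patch (one sample of feedback delay, my own minimal take): two sine oscillators, each one’s output nudging the other’s frequency:

```python
import math

def cross_mod(f1, f2, index, sr, n_samples):
    """Two sine oscillators frequency-modulating each other, with one sample of
    feedback delay. `index` is the modulation depth in Hz."""
    p1 = p2 = 0.0        # phases in cycles
    y1 = y2 = 0.0        # previous outputs (the feedback terms)
    out = []
    for _ in range(n_samples):
        # each phase advances at its base frequency plus the other's output
        p1 += (f1 + index * y2) / sr
        p2 += (f2 + index * y1) / sr
        y1 = math.sin(2 * math.pi * p1)
        y2 = math.sin(2 * math.pi * p2)
        out.append(y1)
    return out
```

With `index` at zero you get a plain sine; push it up and the mutual feedback piles sidebands on sidebands into noisy, chaotic territory.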

Matrix mixing feedback- this is another feedback-based method for creating complex sounds that is popular in the worlds of eurorack and modular synthesis. Basically, it involves sending a sound to multiple effects through a matrix mixer: the first row can work like effect sends, while the last column can be used to turn up the volume of each output. Then, routing the outputs of those effects back into each other through the matrix mixer creates complex feedback and cross-feeding. This video by Monotrail Tech Talk gives a small diagram that can be replicated in VCV Rack with your personal effects.

https://youtu.be/jeWHepyO1Wk?feature=shared&t=749
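
A block-diagram-as-code sketch (one sample of feedback delay; the toy “effects” below stand in for real delays/filters): each node’s input is the dry signal plus a matrix-weighted sum of every node’s previous output:

```python
import math

def matrix_feedback(input_sig, mix, effects):
    """N effects cross-fed through a gain matrix: node i's input is the dry
    signal plus sum_j mix[i][j] * (node j's output from the previous sample)."""
    n = len(effects)
    prev = [0.0] * n
    out = []
    for x in input_sig:
        cur = [fx(x + sum(mix[i][j] * prev[j] for j in range(n)))
               for i, fx in enumerate(effects)]
        prev = cur
        out.append(sum(cur) / n)   # the "last column": mix the node outputs down
    return out

# toy "effects": a soft clipper and a simple attenuator (stand-ins for real FX)
effects = [lambda v: math.tanh(v), lambda v: 0.5 * v]
mix = [[0.0, 0.4],
       [0.3, 0.0]]   # each node hears the other, no self-feedback
```

Feed it an impulse and the cross-feedback keeps ringing after the input stops; keep the matrix gains modest or the loop runs away, which is half the fun on real hardware.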

super mega ultra cybertronic get-some-hoes ass bitch synthesis method- ….