Software options to generate movement of naturalistic auditory sound sources, where we take a recorded sound (e.g. a car or a bee) and specify direction, speed and distance/range of its movement
Responses from the Auditory List (October 2021)
Picinali, Lorenzo <l.picinali@imperial.ac.uk>
If you are aiming to then play back the signals via headphones, you can use the 3D Tune-In Toolkit binaural test application. You can hear a demonstration here: https://www.youtube.com/watch?v=osJQ0Kxv1P0&t=14s The application shown in the video can be downloaded for free here: https://github.com/3DTune-In/3dti_AudioToolkit/releases and in the same repo you can find the open-source C++ code, as well as a few other releases, such as a VST plugin, a JavaScript plugin and a Unity asset. You can't directly plan and control trajectories, but the application can be controlled remotely via Open Sound Control (OSC), so you can use other programs (e.g. Max/MSP, MATLAB, Python, etc.) to script trajectories and timing (see the OSC sketch below). There is also a version of the Toolkit for loudspeakers, but it's rather limited and works only with systems of eight loudspeakers arranged at the corners of a cube.
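As a rough illustration of that OSC approach, here is a minimal Python sketch (using the python-osc package) that computes a straight-line fly-by trajectory and streams the source position to a renderer. The port number and the /source1/pos address are assumptions for illustration only; check the 3D Tune-In documentation for the application's actual OSC address space.

    # Minimal sketch: drive a moving source over OSC (hypothetical address space).
    # Requires: pip install python-osc
    import time
    from pythonosc.udp_client import SimpleUDPClient

    HOST, PORT = "127.0.0.1", 12300   # assumption: port the renderer listens on
    ADDRESS = "/source1/pos"          # assumption: check the renderer's OSC docs

    client = SimpleUDPClient(HOST, PORT)

    speed = 5.0      # source speed in m/s (e.g. a passing car)
    offset = 2.0     # closest-approach distance to the listener, in metres
    duration = 10.0  # seconds of movement
    rate = 30.0      # position updates per second

    for i in range(int(duration * rate)):
        t = i / rate
        # Straight-line fly-by: the source moves along the x axis and
        # passes the listener (at the origin) at distance `offset`.
        x = speed * (t - duration / 2.0)
        client.send_message(ADDRESS, [x, offset, 0.0])
        time.sleep(1.0 / rate)

If the renderer expects polar coordinates instead, azimuth, elevation and distance can be derived from the same Cartesian trajectory before sending.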
Tim Ziemer <ziemer@uni-bremen.de>
You can also try https://github.com/marteroel/Binaural3D_Sound_Unity_Csound which couples Unity to Csound via OSC to add generic-HRTF binaural rendering to the sound. You can, of course, use Csound directly, especially the hrtfmove opcode: http://www.csounds.com/manual/html/hrtfmove.html (a small sketch follows below). The basic KEMAR HRTF is also implemented in earplug~ for Pure Data: https://puredata.info/downloads/earplug
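For a taste of the direct-Csound route, here is a minimal sketch that hosts Csound from Python via the ctcsound bindings and sweeps a looping source around the head with hrtfmove. The file name bee.wav is a placeholder for any mono recording; the HRTF data files are the ones described on the hrtfmove manual page, which ship with Csound.

    # Minimal sketch: full-circle azimuth sweep with Csound's hrtfmove opcode,
    # hosted from Python via ctcsound (pip install ctcsound; needs Csound installed).
    import ctcsound

    orc = """
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      ; loop a mono recording (placeholder file name)
      asrc   diskin2 "bee.wav", 1, 0, 1
      ; sweep the azimuth through a full circle over the note duration
      kaz    line 0, p3, 360
      aL, aR hrtfmove asrc, kaz, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
             outs aL, aR
    endin
    """

    cs = ctcsound.Csound()
    cs.setOption("-odac")           # real-time audio output
    cs.compileOrc(orc)
    cs.readScore("i1 0 10")         # play instrument 1 for 10 seconds
    cs.start()
    while not cs.performKsmps():    # run until the score ends
        pass
    cs.stop()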
Giso Grimm <g.grimm@uni-oldenburg.de>
In addition to Lorenzo Picinali's suggestion, you may look at TASCAR; it is primarily made to simulate arbitrary source movements in real time. It offers rendering methods for loudspeakers as well as an HRTF simulation. Examples can be found on our lab's YouTube channel: https://www.youtube.com/channel/UCAXZPzxbOJM9CM0IBfgvoNg Installation instructions (currently Linux only) are at http://tascar.org/
Brian FG Katz <brian.katz@sorbonne-universite.fr>
If you are concerned with naturally perceived rendering of very near sources, like a bee buzzing around you, loudspeaker sources are not really feasible unless you can use a WFS system (or NFC-HOA) with a high density of speakers. Note that very near source positioning (within arm's reach) requires additional processing over and above simple HRTF convolution, unless a near-field HRTF dataset is available. For very far distances, air absorption is necessary over and above the HRTF. This distance attenuation can be modelled, but should ideally be a function of atmospheric conditions, depending on how far away the source is (a crude sketch follows below). For detailed rendering of distant natural sources, one may also need to account for the general acoustic properties of the terrain. For headphone rendering, we have made our research renderer public as Anaglyph (http://anaglyph.dalembert.upmc.fr/), free for all use as a VST plug-in. It is geared very much towards realistic proximity rendering, with some basic far-distance attenuation. As with the others mentioned, you should be able to automate trajectories using various VST hosts (even MATLAB supports VST hosting now).
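As a crude illustration of the distance cues described above (this is not Anaglyph's algorithm, just a common first-order approximation): inverse-distance gain plus a low-pass filter whose cutoff falls with distance can stand in for frequency-dependent air absorption. The cutoff mapping below is an arbitrary assumption; a proper model would follow e.g. ISO 9613-1 as a function of temperature and humidity.

    # Crude sketch: 1/r distance gain plus a distance-dependent low-pass as a
    # stand-in for air absorption. Not a physical model; the cutoff mapping is
    # an arbitrary assumption for illustration.
    import numpy as np
    from scipy.signal import butter, lfilter

    def apply_distance(signal, distance_m, sr=44100, ref_dist_m=1.0):
        # Inverse-distance (1/r) amplitude law, referenced to ref_dist_m.
        gain = ref_dist_m / max(distance_m, ref_dist_m)
        # Toy air-absorption proxy: cutoff drops from ~16 kHz at the reference
        # distance towards ~2 kHz at 1 km (assumed mapping, not ISO 9613-1).
        cutoff_hz = np.interp(distance_m, [ref_dist_m, 1000.0], [16000.0, 2000.0])
        b, a = butter(1, cutoff_hz / (sr / 2), btype="low")
        return gain * lfilter(b, a, signal)

    # Example: a 1 s noise burst heard at 2 m vs 200 m.
    sr = 44100
    noise = np.random.default_rng(0).standard_normal(sr)
    near = apply_distance(noise, 2.0, sr)
    far = apply_distance(noise, 200.0, sr)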
Julien Laroche aka BOB Cooper <emo_neuro@hotmail.com>
Not public or open source at all, but personally I use Sound Trajectory from TripinLab in combination with SPAT Revolution from FLUX. I suppose you would do this over headphones, though.
Maximilian Haider <maximilian.haider@uniklinik-freiburg.de>
I would add the IEM Plug-in Suite (https://plugins.iem.at/) to the previous suggestions. It is free and open source, and it offers Ambisonics encoding tools with real-time remote parameter control, as well as flexible decoding and binaural rendering tools.
Matthias Geier <matthias.geier@gmail.com>
If you are not afraid of highly experimental (and unfinished) software, you can try the next generation of the Audio Scene Description Format (ASDF), which I'm currently working on. It allows you to define 3D trajectories with a (somewhat) simple HTML-like syntax. For documentation, see https://AudioSceneDescriptionFormat.readthedocs.io/ The format is independent of the playback software. One way to listen to the sound scene is via a Pure Data external I've implemented as part of the reference implementation: https://github.com/AudioSceneDescriptionFormat/asdf-rust/tree/master/pure-data Another possibility is to use the SoundScape Renderer (SSR, http://spatialaudio.net/ssr/), but this is even more experimental: you'll have to check out a specific branch (https://github.com/SoundScapeRenderer/ssr/pull/155) that enables ASDF support.