
Ubuntu Studio OSC Midi Keyboard Gestures

By Anthony Matarazzo

 

The ability to control digital audio workstations through MIDI keyboard gesture recognition, translated into generalized OSC signals, will advance music production during headless operation. The motivation is twofold: first, sunlight and LCD visibility make a closed-lid laptop practical; second, removing the screen supports creative, motivated productivity. Which MIDI keys and functions to include is open to discussion. As a starting point, let us embark upon a project built around typical, natural usage and what is needed to produce a quality signal.

With the USB MIDI keyboard attached to the laptop and the lid closed, you are waiting to compose. The first key press, as on most keyboards, sounds a piano. The keys at the far right and far left control the context of the start menu: when the two keys are pressed together in sequence, it is activated. Perhaps the very next key provides track add, with instruments chosen in combination, five keys to the left and five to the right. Double presses and single long presses might be appropriate. Audio VST parameter automation runs through the mod wheel. This is an acceptable interface for the Akai mini MIDI plus, a keyboard costing less than fifty dollars.
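As a minimal sketch of that extreme-key gesture, assuming the Python mido and python-osc libraries: the lowest and highest keys pressed close together in time become a single generic OSC message for the DAW to interpret. The key numbers, the /gesture/menu address, and port 3819 are placeholders rather than a confirmed mapping.

import time
import mido
from pythonosc.udp_client import SimpleUDPClient

LOWEST_KEY = 48       # leftmost key of the keyboard (adjust to the actual range)
HIGHEST_KEY = 72      # rightmost key
WINDOW = 0.25         # both keys must arrive within this many seconds

osc = SimpleUDPClient("127.0.0.1", 3819)   # OSC target; the port is assumed here
pressed = {}                               # note number -> time of note_on

with mido.open_input() as port:            # first available MIDI input
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            pressed[msg.note] = time.monotonic()
            if LOWEST_KEY in pressed and HIGHEST_KEY in pressed:
                if abs(pressed[LOWEST_KEY] - pressed[HIGHEST_KEY]) < WINDOW:
                    osc.send_message("/gesture/menu", 1)   # placeholder menu gesture
                    pressed.clear()
        elif msg.type == "note_off":
            pressed.pop(msg.note, None)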

Another type of gesture, pressing two adjacent keys simultaneously, may be a better choice, since this is not typically a playing gesture. It avoids the stray notes that the two-keys-in-sequence approach would necessarily sound.

Voice Synthesis used for Announcements

A natural voice reads file selections aloud as the files are scrolled through. Text entry is not recommended, so naming is chronological and category based: perhaps a bucket list and a rolling history list. The user may save into categories, scan preview renders by time, and organize further once a full keyboard is accessible.

Selecting audio files, MIDI clips, and video files should audition the sound while the user rolls through the list. Perhaps a pause, or another key, reads the item's name aloud without selecting it. As with a highlighted item, the contents may be scanned, cuing left and right using gestures.
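A minimal sketch of the spoken browsing idea, assuming the espeak-ng and aplay command-line tools are installed: each file name is announced and a few seconds of the preview are auditioned as the list is scrolled. The previews folder and preview length are hypothetical.

import subprocess
from pathlib import Path

def speak(text):
    subprocess.run(["espeak-ng", text], check=False)       # blocking call to the TTS engine

def audition(path, seconds=3):
    subprocess.run(["aplay", "-d", str(seconds), str(path)], check=False)   # short audio preview

def browse(folder):
    for item in sorted(Path(folder).glob("*.wav")):
        speak(item.stem)                                   # announce the name while scrolling
        audition(item)                                     # then audition the sound itself

browse("previews")                                         # hypothetical folder of preview renders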

The quality of the text-to-speech facility becomes essential when it is used to identify DSP components. The common approach is a most-used list plus categorization. However, this becomes unusable for large numbers of components, so limiting the selection to a purposeful amount is a necessity.

There may be other forms of organization, but summarizing channel types with audio previews and control parameters is also a great communication device for the interface. Typically clipping and distortion detection, compression, parametric equalization, and limiting are applied, though not always.

Compression may be applied over the entire signal or per band (multiband). Parameter selection is made by holding down the MIDI key in the corresponding numerical position while the wheel is turned. A pause causes the final value to be read aloud.
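A rough sketch of the hold-a-key-and-turn-the-wheel scheme, assuming mido, python-osc, and espeak-ng: the held key picks the parameter, the mod wheel (CC 1) sets the value, and a pause triggers the spoken readback. The key-to-parameter map and the /plugin/... OSC address are made up for illustration.

import subprocess
import time
import mido
from pythonosc.udp_client import SimpleUDPClient

PARAM_KEYS = {60: "threshold", 61: "ratio", 62: "attack"}   # example key-to-parameter map
PAUSE = 1.0                                                 # seconds of stillness before readback

osc = SimpleUDPClient("127.0.0.1", 3819)                    # OSC target (port assumed)
held = None                                                 # parameter currently being edited
value = 0
last_move = 0.0

with mido.open_input() as port:
    while True:
        for msg in port.iter_pending():                     # non-blocking poll of MIDI input
            if msg.type == "note_on" and msg.note in PARAM_KEYS:
                held = PARAM_KEYS[msg.note]
            elif msg.type == "note_off" and msg.note in PARAM_KEYS:
                held = None
            elif msg.type == "control_change" and msg.control == 1 and held:
                value = msg.value                           # 0..127 from the mod wheel
                osc.send_message("/plugin/" + held, value)  # address is a placeholder
                last_move = time.monotonic()
        if held and last_move and time.monotonic() - last_move > PAUSE:
            subprocess.run(["espeak-ng", "%s %d" % (held, value)], check=False)   # spoken readback
            last_move = 0.0
        time.sleep(0.01)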

Groups of these chains can be organized for selection. Choosing a favorite versus one of the alternative component stand-ins is done by pressing the MIDI key multiple times.

Use of the black keys may be system-input oriented, like function keys. Toggling the help audio balloon on or off, for example, would offer a voiceover, while double tapping selects it. Others handle menu forward and back.

Voice Recognition Integrated Unobtrusively

A voice database for commands does seem inviting: voice recognition against a specialized database of instrument names and VST effects. The domain of useful DSP names is within a bounded scope. However, there is also a large set of artistically motivated names, typically in the soft-synth category. These non-standard names could be learned in some fashion, but that may complicate the interface for the user, and algorithmically it changes the scope. To limit the scope, commonly reliable numerics can be used instead.

Numerical presets are already familiar to MIDI musicians from experience. General MIDI assigns specific instruments to specific numbers so that a MIDI stream can be transferred and played back elsewhere: the grand piano, flute, bass synth, and drums are always represented by the same numeric index. The actual rendering may sound vastly different, since there is no specification on rendering quality or samples. Using this system and a spoken numeric command, the user could learn the codes, though this may be misleading.
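A small sketch of the numeric-preset idea, assuming mido and espeak-ng: the spoken or keyed number is confirmed aloud and sent as a MIDI program change. The three names listed follow the General MIDI standard, which numbers programs from 1 while the wire format counts from 0.

import subprocess
import mido

GM_PROGRAMS = {1: "Acoustic Grand Piano", 39: "Synth Bass 1", 74: "Flute"}   # 1-indexed GM names

def speak(text):
    subprocess.run(["espeak-ng", text], check=False)

def select_program(number, port):
    name = GM_PROGRAMS.get(number, "program %d" % number)
    speak(name)                                                    # read the choice back aloud
    port.send(mido.Message("program_change", program=number - 1))  # zero-indexed on the wire

with mido.open_output() as out:
    select_program(1, out)                                         # the grand piano at index 1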

A focus must then lead this toward operational and environmental research for usability and processing power. The ability to operate using MIDI only should be one of the major focuses.

Optical Shape Recognition for OSC control, Machine Building, and User Designed Interfaces

A camera, cardboard, and markers could also be a bootstrapping starting point, with writing, drawing, and numerous VST adjustments made this way. A color printout from the program with a QR code ensures closer, more direct matching in object recognition. A keyboard could be created with preset buttons added dynamically by writing; editing controls and audio mixing panels could be a relaxed cartoon sketch. Audio waveform shaping by hand grasp or form provides automation controls while the hands are over the sketch on the cardboard. Plugins and functions are named through the shapes and written text within the sketch. Paper-thin instrument and VST controls played out on a poster board would be inviting, since most electronics of this nature are expensive. Spill coffee on it, no problem.

In these advanced visual shape learning processes, LLVM may be used to hone the shape recognition to a particular instrument or drawing, since the image, once learned, will be static. An effective use would be to allow guided learning of new sketches and offer the ability to print out or edit the categorized item. The main focus in heads-down operation would be the creation of labeled instrument panels.

Building VSTs has been possible in the past, yet most products are interface driven rather than algorithm-stack driven. The building blocks of signal production come from mathematical science, combining multiple algorithms in parallel or in a chain to form sound waves. The production method summary, parameters, and formation process are left to the engineer.

As a descriptive process, there are forms of diagramming that show DSP rudiments. These require some in-depth knowledge, which a novice can learn through interface controls. The rate, type of oscillation, and numerical linearity of change are slots within a storage area or an array; this can be thought of simply as rows within a spreadsheet. The functions that modify these values over time express the signature of signal production. While the values are numeric, either whole integer or floating point, their interpretation lies within the VST component.

There is adherence to common forms of rate change and control in DSP: oscillators that operate along a path at specific BPM intervals or another timing setting. The path may be a wave shape such as a saw, sine, or square wave.
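A worked sketch of such BPM-synced oscillators in plain Python, returning a value between -1 and 1 for the sine, saw, and square shapes; how a plugin interprets that value is left open, as above.

import math

def lfo(shape, bpm, beats_per_cycle, t_seconds):
    cycle = (60.0 / bpm) * beats_per_cycle          # seconds per full cycle
    phase = (t_seconds % cycle) / cycle             # 0..1 position in the cycle
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "saw":
        return 2.0 * phase - 1.0                    # rising ramp
    if shape == "square":
        return 1.0 if phase < 0.5 else -1.0
    raise ValueError("unknown shape: " + shape)

# Example: a one-bar sine sweep at 120 BPM, sampled every half beat.
for step in range(8):
    t = step * (60.0 / 120.0) / 2
    print(round(lfo("sine", 120, 4, t), 3))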

Applying these concepts through video shape recognition can supply many new forms of interaction and production not readily available in modern interface technology. Users create their own wiring diagram and hierarchy of control levels simply by sketching the connections as if they were wires. Patterns built as objects on one sheet could be referenced as a machine on another.

Video recognition does have its limitations in reading pressure levels. However, a held stick or an elastic piece of rubber band can provide analog detail from the real world. Most likely, promoting a pad-and-paper editing process that uses a pencil as a pointing device will be accurate.

Using a MIDI keyboard in conjunction with shape recognition compiled with LLVM for static recognition is the most promising endeavour. The algorithms produced are used to achieve low latency for playback: typically scanning hot areas of the base image for color changes within a tolerance and comparing against a state model that triggers events to the user interface. As pixel memory, the size of the image should remain constant within a tolerance. A steady camera stand and paper held in place are all that is needed to keep the system simple, using closed-domain shape recognition.
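A sketch of that closed-domain loop, assuming OpenCV and numpy: a reference frame of the untouched sheet, a set of pre-learned hot areas, and a per-region colour-difference check against a tolerance. Region coordinates and the threshold are placeholders to calibrate per setup.

import cv2
import numpy as np

HOT_AREAS = {                             # name -> (x, y, width, height) on the printed sheet
    "play":   (100, 200, 60, 60),
    "record": (200, 200, 60, 60),
}
THRESHOLD = 30.0                          # mean per-pixel difference that counts as a change

def changed_regions(reference, frame):
    hits = []
    for name, (x, y, w, h) in HOT_AREAS.items():
        ref_patch = reference[y:y + h, x:x + w].astype(np.float32)
        cur_patch = frame[y:y + h, x:x + w].astype(np.float32)
        if np.abs(ref_patch - cur_patch).mean() > THRESHOLD:
            hits.append(name)             # a hand or marker is covering this region
    return hits

cap = cv2.VideoCapture(0)                 # the steady camera over the sheet
ok, reference = cap.read()                # first frame of the untouched sheet is the baseline
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    for name in changed_regions(reference, frame):
        print("trigger:", name)           # here an OSC event would go to the user interface
cap.release()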

Usage at this level may take different forms of development and could be data driven only, not requiring LLVM at all. My own grasp of the extent of these algorithms is less than rudimentary at this time.

Cellphone and Tablet Applet Integration

Integrating the cellphone through an applet critically enhances visibility because of the display technology and its proximity in use: handheld, then set in a cradle. This opens the door extensively and makes for a fun exercise in user interface components. No advertising bars will exist; instead, a shopping products page is integrated for visualization of interface production.

Reintroducing a lighter take on the aesthetics and animation of audio production, the panning interfaces use SVG approaches to rendering, with image data for the interface and layout sent from the plugin. As a whole, cellphones are capable of displaying a highly animated audio signal as an abbreviated visual plot. The point data should be streamed from the laptop; whether the constant communication and rendering become burdensome is currently unknown.
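A sketch of the point-data stream under those assumptions: the laptop reduces an audio block to a short list of peak values and sends them to the phone applet over UDP, keeping the phone's rendering load light. The phone address, port, and packet format are made up for illustration.

import math
import socket
import struct

PHONE = ("192.168.1.50", 9000)            # hypothetical phone address and port
POINTS = 64                               # plot points per packet

def summarize(samples):
    step = max(1, len(samples) // POINTS)                  # reduce a block to peak values
    return [max(abs(s) for s in samples[i:i + step])
            for i in range(0, len(samples), step)][:POINTS]

def send_block(sock, samples):
    points = summarize(samples)
    sock.sendto(struct.pack("%df" % len(points), *points), PHONE)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
block = [math.sin(i / 10.0) for i in range(1024)]          # stand-in for an audio block
send_block(sock, block)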

The effectiveness of SVG also comes from its filter support and advanced API for manipulating objects as a scene graph. In some regards it is elegant, while other kinds of programming could still occur, such as graphic image production with a lightweight, event- and decision-based language. DSP itself will not be handled on the phone; only the parameters of the remotely controlled plugin and their values are presented in a paged interface.

A difficulty in production is that Android base features usually run on a particular CPU model with a common low-level API that C++ programs can link against.

Yet Java offers a library of interface components. These applications focus on JNI, Java's low-level data-type and parameter bridge to lower-level languages. There are many promises of success in using the cellphone this way, the designed way.

One might even turn the phone sideways and have a mini MIDI keyboard to play.

Most importantly, visualization of notes, objects, and loops will be possible on the cellphone. Textual input and directory searching become a very easy-to-use interface, along with stopping and starting track playback, enabling record, adding and arranging tracks, and setting the audio bus tracks for grouped percussion from the drum machine.

Integrated tutoring can come from the laptop software. The subject is broad, but reliance on applet distribution or constant network round trips is costly, so it is best shipped as an integrated source distributed with the Ubuntu Studio base.

The cellphone camera might be used as well. Some devices also have a tilt sensor, and vibration could serve as the metronome. The audio production could likewise be routed to the cellphone, broadcast, and played on connected Bluetooth audio.

The USB data interface limits the range of motion to the cord, but offers higher speed and lower latency, making it respond closer to real time. This is very important, since delay makes some playing haphazard. Inputting automation envelopes while listening is likewise a real-time necessity.

The touch screen, a multi-point-capable sensor, could let the fingers control groups of effect parameters in specialized interfaces, or stretch ADSR parameters for synthesis, along with on-screen DSP chain scratching and sound shaping.

Tapping drums is an easy win for the cellphone touch screen. Interface logic may also change to provide the composition fill and shift operations liked on hats and snare drums. Perhaps the input arrangement is four (or some other number of) squares that make a sound when tapped, with a slider along the edge giving control over velocity.
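A sketch of that pad logic, assuming mido and the common General MIDI drum map (36 kick, 38 snare, 42 and 46 hats): a tap on a square plus the slider position becomes a note_on with the chosen velocity on the drum channel.

import mido

PADS = {0: 36, 1: 38, 2: 42, 3: 46}       # pad index -> GM drum note (kick, snare, closed/open hat)

def tap(pad_index, slider_value):
    velocity = max(1, min(127, int(slider_value * 127)))    # edge slider sets velocity
    return mido.Message("note_on", channel=9, note=PADS[pad_index], velocity=velocity)

print(tap(0, 0.8))                        # kick on MIDI channel 10 (zero-indexed 9)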

Main Focus MIDI Gesture Input

Using a MIDI keyboard with basic musical sensors is an enjoyable mechanism. Seeking forward and back through the audio with the mod wheel is a common use. With step editing, it should be possible to delete a note or notes. In this way an audio wizard guides the listener through the process, repeating the audio within a perceptible frame with the other channels lowered in volume; the audio icons are bells or short signal cues. Perhaps a selected note repeats several times to point it out.

Working with an audio interface in headless mode might also summarize some parts to make note plucking and beat making possible: tuned and balanced channels, channel preloads, selection of swap-out percussion. Effect chaining presets apply detailed settings that make percussion instruments new and distinct.

The operation of presets and the knob turning of plugins must be accomplished through MIDI keyboard input automatically. Knowing the number of settings first hand is obviously daunting. The VCA, ADSR envelope, waveform, and oscillator are numerical parameters that control the output audio signal. Starting from pure waveforms, plugins are used to create complex yet intriguing listening escapes.
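A worked sketch of the ADSR envelope as plain numeric parameters: attack, decay, sustain, and release produce a gain between 0 and 1 over time, and the mapping onto an actual plugin parameter is left open.

def adsr(t, attack, decay, sustain, release, note_off_time):
    if t < attack:                                   # rising to full level
        return t / attack if attack > 0 else 1.0
    if t < attack + decay:                           # falling to the sustain level
        frac = (t - attack) / decay if decay > 0 else 1.0
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off_time:                            # held at sustain
        return sustain
    frac = (t - note_off_time) / release if release > 0 else 1.0
    return max(0.0, sustain * (1.0 - frac))          # releasing to silence

# One-second note: attack 0.05 s, decay 0.2 s, sustain 0.6, release 0.3 s.
for t in (0.0, 0.05, 0.25, 0.5, 1.1):
    print(round(adsr(t, 0.05, 0.2, 0.6, 0.3, 1.0), 3))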

Plugins that modify the wave for quality, or any other fitting adjective, include the compressor, multiband compressor, limiter, parametric equalizer, vocoder, reverb, and many others.

Quantization for drums is aligned on larger boundaries than for a keyboard.

Loop repeat, and stacking changes afterward, is a very important editing feature. A method of ten categories, with functions in a three-layer design but the most-used ones surfaced at the second branch of the menu, will satisfy usage needs. Announcing each function allows audio discovery; a double tap or hold activates it. Whether the audio menu is required can be configured, but the menu layout should be constant.
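A sketch of the spoken three-layer menu, assuming espeak-ng for the announcements: entries are read aloud as they gain focus, and activation either descends a layer or fires the function. The categories shown are illustrative only.

import subprocess

MENU = {
    "tracks": {"add track": None, "delete track": None},
    "effects": {"compressor": {"threshold": None, "ratio": None}},
    # ...expand up to the ten top-level categories
}

def speak(text):
    subprocess.run(["espeak-ng", text], check=False)   # announce for audio discovery

class AudioMenu:
    def __init__(self, tree):
        self.level = tree                 # current menu layer
        self.keys = list(tree)
        self.index = 0

    def focus(self, step):
        self.index = (self.index + step) % len(self.keys)
        speak(self.keys[self.index])      # each entry is spoken as it gains focus

    def activate(self):                   # double tap or hold would call this
        name = self.keys[self.index]
        child = self.level[name]
        if isinstance(child, dict):       # descend one layer
            self.level = child
            self.keys = list(child)
            self.index = 0
            speak(name)
        else:
            speak(name + " activated")

menu = AudioMenu(MENU)
menu.focus(+1)                            # driven by key or mod-wheel gestures in practice
menu.activate()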

Headless Booting

When the laptop is turned on, like a synthesizer, it should be ready to play: a USB MIDI keyboard attached, the JACK audio server running, the Hydrogen drum machine, and Ardour. Imagine that keys 1 through 5 under the left hand control the booting options, selecting top-level functions.

Selection of the drum machine and its granular pattern interface should perhaps be a top-level function. Yet controlling several of its audio outputs inside Ardour, grouping the tracks with an audio bus track, is often desired. Mixing levels can be modified per instrument and per track. To provide the ever-fun live MIDI input of drums, such as a hat, an Ardour track records the MIDI, sends the notes to the drum machine, and routes the audio back through the audio bus of the Ardour track. This is one example, but it applies well to a basic structure with DSP setups for composition. As the tree expands, the suite must remain identifiable by its structure, which the user will hold in their understanding.
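A hedged sketch of a headless boot sequence in Python: start the JACK server, the drum machine, and the DAW, then wait for one of the five left-hand boot keys. The command lines, the Hydrogen flag, the Ardour binary name, the session path, and the key mapping are all assumptions to adapt to the installed versions.

import subprocess
import time
import mido

def start_audio_stack():
    commands = [
        ["jackd", "-d", "alsa"],                # JACK audio server
        ["hydrogen", "--nosplash"],             # drum machine (flag assumed)
        ["ardour", "/home/user/session"],       # DAW with a default session (binary and path assumed)
    ]
    procs = [subprocess.Popen(cmd) for cmd in commands]   # launch in the background
    time.sleep(5)                                         # crude wait for services to settle
    return procs

BOOT_KEYS = {48: "drum machine", 49: "mixer", 50: "last session",
             51: "new session", 52: "shutdown"}           # five left-hand keys, example mapping

def wait_for_boot_choice():
    with mido.open_input() as port:                       # USB MIDI keyboard
        for msg in port:
            if msg.type == "note_on" and msg.note in BOOT_KEYS:
                return BOOT_KEYS[msg.note]

start_audio_stack()
print("boot option:", wait_for_boot_choice())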

No stress Conversation Shortcut Audio Interface

A better, less stressful voiceover from the system will relax the user and minimize interruptions. Take a simple conversation such as track selection. After the left-extreme and right-extreme key combination is clicked in time, the audio "Menu mode" might be spoken. Holding the left index key down allows the track focus to be selected with the mod wheel as a natural second step. "Track one" is spoken, then "two, three, four," and then the user pauses. The word "track" is not repeated but stays conversational, as if contextually related in time. The psychology of the interface should not interject, but should also provide audio training with interactive skits. Perhaps audio icons might be descriptive enough for some users.
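A sketch of that conversational track cue, assuming espeak-ng: the first announcement says "track one" and later ones speak only the bare number, so the word is not repeated.

import subprocess

def speak(text):
    subprocess.run(["espeak-ng", text], check=False)

class TrackCue:
    def __init__(self, track_count):
        self.track_count = track_count
        self.current = None
        self.said_word_track = False      # say "track" only the first time

    def wheel_moved(self, value):
        track = 1 + value * (self.track_count - 1) // 127   # map 0..127 onto the tracks
        if track != self.current:
            self.current = track
            if not self.said_word_track:
                speak("track %d" % track)                   # "Track one"
                self.said_word_track = True
            else:
                speak(str(track))                           # "two", "three", "four" ...

cue = TrackCue(track_count=8)
for wheel in (0, 20, 40, 60):             # simulated mod-wheel positions
    cue.wheel_moved(wheel)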

Network AI

AI that plays music and algorithmically creates song structure has been around for some time. There are professional tools for chord and drum beat production that some artists use regularly. Music is at times more fun with varied input from multiple creative sources, and the parts one wants to make unique over time, such as drums and hats, are also the easiest to perceive and input first. A more intelligent companion AI could give life where none exists, like a tennis match: the computer algorithm and database choose to produce several audio mixdown tracks, manufacturing a playback mix for the user to add to, with the details modifiable by the user. A computer-music tennis match where refinement of the output is possible.

A user could write the lyrics while vocal talent selected from a pool of internet singers provides the audio tracking. Enhancements suggested by listeners can be applied in a morphological blending facility and merged. Ardour projects could be stored remotely indefinitely for data mining and statistical analysis, pruning mistakes, strengthening melodic composition, and rendering additional track stacking.

Completion through to web publishing can be provided by saving the item to a published category. As an outbox for connected internet upload, the account settings track the audio as a document. Any residual income gathered streams payment to an electronic account automatically.

Age Difficulty of Use

One of the least talked-about subjects, yet a top-level reason for this discussion, is the quality produced by Ubuntu Studio. That is why the laptop hardware, NVMe drive, and extra memory are important. To make this quality accessible, well-thought-out interfaces and graduated usage levels let users enjoy the device for creating music. There are many levels that should be tailored and designed for distinct age groups and identities.

For example, musical play for a baby, toddler, or the youngest children may also reference television character voiceovers provided over the network. Avoiding tears by lowering the pitch may be emotionally important, as are the basic instruments; these allow easy discovery. Perhaps a play session expects very little input while the output is designed to be magically uplifting: telling a story, perhaps allowing choices to be made by playing or choosing from a big-key piano. The keys are easy for young ones to use. Multiple-choice story games and sound events provide rapid, arcade-style use of the keys, along with a memory game with timing built in.

Progressive wizards can be designed that teach and make a named genre the default input. A market that identifies with books, games, and movies is possible with the next age group, perhaps 4 to 15. Advancement is also provided in a different yet self-paced manner.

The purpose is to make publishing quality and fun occur at once. As with most tasks, there is the attribute of experience and desired focus. Interfacing to the audio production engine and all of its features demands a large degree of variation and control. This will change with audience feedback once the best available method is in place.

Screencasts and HDMI-Generated Visualizations

Purposeful, though not a main focus: the ever-changing lights of MIDI-reactive graphics. The palettes of complexity increase at this data-plentiful position. A derived machine could offer added detail as a library focus.

Embedded Devices of Those Yet to Be

With proper attention, an embedded device with an HDMI connection could be manufactured carrying this well-tested musical software. From a product design standpoint, effective use of the technology should include preloaded education. The audio card and instrumentation used in the music field bring extra plugs and quality audio circuitry to consider. A plate amplifier, configured with a battery and a studio reference transducer, will provide autonomous sound production for composition. XLR and RCA low-level input provides integration with audio equipment, and optical output is also available in some configurations.

The reality of this type of control is that extra buttons should be created. A panel of knobs, also known as attenuators, might be a consolidated add-on USB interface device, in addition to the MIDI keyboard. Some such devices already exist, but with fixed labels; soft, readable labels are preferable.

There are some downsides to capturing five keys that are designed to make sound instantaneously. Whether to play the note or treat the very next press as activation is a problem: sounding the note and making do is the proper default action, while the very next press pauses the process and allows configuration.

A stolen key is much too damaging for small-keyboard usage. So by default the interface allows notes to be played, relying on the extreme-distance key combination with a very short time window. But to the user, once in the mode, the keys may be repurposed for that feature, freeing them from the note-making process.

Two keys pressed simultaneously can be filtered to form the trigger. If they are next to one another, or a natural key and its flat, that can be considered a trigger. This allows the most instant activation without false notes sounding.
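A sketch of that simultaneous adjacent-key filter: two note_on events for neighbouring keys arriving within a very short window are classified as the trigger, and everything else passes through as an ordinary note. The window length is an assumption to tune against real playing.

import time

WINDOW = 0.03                 # 30 ms: shorter than a deliberate two-note phrase

class AdjacentKeyFilter:
    def __init__(self):
        self.last_note = None
        self.last_time = 0.0

    def note_on(self, note):
        # Return ("gesture", pair) for the trigger, or ("note", note) to sound normally.
        now = time.monotonic()
        if (self.last_note is not None
                and abs(note - self.last_note) == 1
                and now - self.last_time < WINDOW):
            pair = (min(note, self.last_note), max(note, self.last_note))
            self.last_note = None
            return ("gesture", pair)
        self.last_note = note
        self.last_time = now
        return ("note", note)

f = AdjacentKeyFilter()
print(f.note_on(60))          # ("note", 60)
print(f.note_on(61))          # ("gesture", (60, 61)) when both arrive within the window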

This is a powerful process for music production. Once the interface technology is combined into a product, it will advance many forms of musical creativity. The quality of the signal is a key reason to choose the technology path of general-purpose CPU sound production. Finally, game playing and audio feedback are necessary features.