Most of the year has been spent updating the IoT control system and testing audio streaming, with the aim of ultimately merging the two into an IoT-based music application. By the end of the summer I was finally ready to work on the final application.
(My Desk in the Office)
A couple of useful tests involved re-purposing the motorised potentiometers I had used early on to manipulate music. The first video shows a networked gain/volume controller for an audio source:
And the second shows the pot crossfading two audio signals:
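In both tests the pot reading boils down to a single normalized position. For a crossfade, a common choice is an equal-power law, which keeps the perceived loudness roughly constant across the sweep. A minimal sketch of that mapping (function names are mine, not from the actual patch):

```python
import math

def equal_power_crossfade(position):
    """Map a normalized pot position (0.0-1.0) to a pair of gains.

    An equal-power law keeps gain_a^2 + gain_b^2 == 1, so the summed
    signal does not dip in loudness mid-sweep.
    """
    position = min(max(position, 0.0), 1.0)  # clamp noisy ADC readings
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)  # gains for sources A and B

def crossfade(sample_a, sample_b, position):
    """Mix one sample from each source at the given fader position."""
    ga, gb = equal_power_crossfade(position)
    return ga * sample_a + gb * sample_b
```

At the halfway point both gains sit around 0.707 rather than 0.5, which is why the equal-power curve avoids the audible dip a linear fade produces.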
I finally settled on a reverb application that uses the natural acoustic properties of a physical space to create a reverb effect, as opposed to digital algorithmic or convolution reverb plug-ins. While there are benefits to all types of reverb, I thought using a physical space was a unique and creative way of making an “audio processor,” and the ability to access a space remotely could give greater value to a desirable space. A preliminary test is below:
For the time being, this consisted of a temporary setup in a file cabinet under my desk.
This demo is a round-trip test: audio is sent from a client computer to a host, the host outputs the audio into the room and re-captures it, and the newly reverberant audio is delivered back to the client computer.
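In this loop the room effectively becomes a send/return effect, so on the client the returned “wet” signal can be blended back with the original dry one. A sketch of that mix stage (plain Python over sample lists for illustration; the real system works on streaming audio buffers):

```python
def wet_dry_mix(dry, wet, mix):
    """Blend a dry buffer with the reverberant 'wet' buffer from the room.

    mix = 0.0 -> dry signal only, mix = 1.0 -> room reverb only.
    Both buffers are equal-length lists of float samples.
    """
    mix = min(max(mix, 0.0), 1.0)
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]
```

In practice the wet buffer arrives late by the network-plus-room round-trip time, so the dry signal would need a matching delay before the two are summed.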
I had the opportunity to present my first research poster at the Innovation in Music 2017 Conference at the University of Westminster in London. It was a great experience, and the response was surprisingly positive.
The poster was accompanied by a brief sample of some work found at: http://mjhardin.esy.es/innovation_music.html
One feature I wanted to add to the control interface was real-time video, allowing a user to see the device they are manipulating with minimal delay. Ultimately I settled on WebRTC, which enables Real-Time Communication (RTC) of media data through a web browser.
While the components of WebRTC aren’t difficult to master in themselves, it does require additional knowledge of signaling and NAT traversal, which, at least for me, could be somewhat daunting for the casual programmer. Luckily there are resources such as Servicelab.org to help put the pieces together. Below is real-time video streaming using WebRTC:
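WebRTC deliberately leaves signaling up to the application: before media flows peer-to-peer, the two browsers must exchange an SDP offer/answer and ICE candidates over some side channel, typically a WebSocket server. A toy illustration of that relay pattern, with made-up message payloads and no actual WebRTC stack involved:

```python
import json
from collections import defaultdict, deque

class SignalingRelay:
    """Toy signaling channel: queues JSON messages between named peers.

    Real signaling would ride on a WebSocket server, but the message
    flow (offer -> answer -> trickled ICE candidates) is the same.
    """
    def __init__(self):
        self.inboxes = defaultdict(deque)

    def send(self, to_peer, message):
        self.inboxes[to_peer].append(json.dumps(message))

    def recv(self, peer):
        return json.loads(self.inboxes[peer].popleft())

relay = SignalingRelay()
# Caller forwards its SDP offer; the payload here is a placeholder.
relay.send("callee", {"type": "offer", "sdp": "v=0 ..."})
offer = relay.recv("callee")
# Callee replies with an answer; ICE candidates travel the same way.
relay.send("caller", {"type": "answer", "sdp": "v=0 ..."})
answer = relay.recv("caller")
```

Once the exchange completes, the peers connect directly (or via a TURN relay when NAT traversal fails) and the signaling channel goes quiet.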
As a secondary mention, I did investigate popular social-media streaming services, including YouTube, but observed up to 10 seconds of delay.
Since this summer I have been experimenting with the JackTrip platform, which supports high-resolution, uncompressed, low-latency audio streaming, in an effort to send real-time audio to my IoT audio system.
Below is a test of a client computer sending audio via JackTrip to a synthesizer attached to a server computer, filtering the audio remotely with a websocket-controlled motor attached to the synthesizer’s low-pass filter, and hearing the audio on the client computer as it is being processed.
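The control message going over the websocket only has to tell the motor where to put the pot. Mapping a desired cutoff frequency to a pot position works better on a log scale, since we perceive pitch logarithmically. A sketch of that mapping (the JSON fields and the 20 Hz–20 kHz sweep range are my assumptions, not the actual protocol):

```python
import json
import math

F_MIN, F_MAX = 20.0, 20000.0  # assumed sweep range of the filter pot

def cutoff_to_position(freq_hz):
    """Map a cutoff frequency in Hz to a normalized pot position (0.0-1.0),
    using a log scale so equal pot travel gives equal musical intervals."""
    freq_hz = min(max(freq_hz, F_MIN), F_MAX)
    return math.log(freq_hz / F_MIN) / math.log(F_MAX / F_MIN)

def control_message(freq_hz):
    """Build an illustrative JSON payload for the motor controller."""
    return json.dumps({"target": "lpf_cutoff",
                       "position": round(cutoff_to_position(freq_hz), 4)})
```

On the other end, the microcontroller compares this target position with the pot’s wiper reading and drives the motor until the two agree.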
Despite some issues, I was able to give a brief demonstration with my supervisor at the SC16 Supercomputing Conference in Salt Lake City.
This is the next step from my motorized potentiometer tests: interfacing with a physical synthesizer.
Before incorporating audio
The following videos show my progression in adapting websockets to work with motorized potentiometers.
Using an mbed microcontroller to send discrete movements to a DC motor attached to a potentiometer
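The mbed firmware itself is written in C++, but the discrete-movement logic reduces to a simple bang-bang loop with a deadband: compare the pot’s ADC reading to the target and pulse the motor in the right direction until it is close enough. Sketched here in Python, with an illustrative threshold:

```python
DEADBAND = 0.02  # stop within 2% of the target to avoid the motor hunting

def motor_command(target, current):
    """Decide one discrete motor step from normalized pot readings (0.0-1.0).

    Returns "forward", "reverse", or "stop"; firmware would translate
    these into H-bridge pin states driving the DC motor on the pot shaft.
    """
    error = target - current
    if abs(error) <= DEADBAND:
        return "stop"
    return "forward" if error > 0 else "reverse"
```

The deadband matters in practice: without it, ADC noise around the target makes the motor oscillate back and forth instead of settling.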