Videos

Smart Cajón

This video demonstrates the first prototype of the Smart Cajón, a conventional acoustic cajón augmented with sensor technology, embedded sound processing and delivery, and wireless connectivity. The video documents how three professional cajón players who tried the instrument creatively exploited its embedded intelligence.

Musical Haptic Wearables for Performers

This video shows an excerpt of an electronic music performance involving two prototypes of "Musical Haptic Wearables (MHWs) for performers". These are wearable devices for music performers that combine haptic stimulation, gesture tracking, and wireless connectivity. MHWs were conceived to enhance creative communication between performers, as well as between performers and audience members, by leveraging the sense of touch in both co-located and remote settings.

Smart Mandolin

This video documents the performance of "Dialogues with Folk-rnn", held on 20 November 2017 at St. Dunstan's and All Saints, Stepney, as part of the "Being Human Festival". The music was composed by Luca Turchet using Irish folk tunes generated by artificial intelligence techniques developed by Bob Sturm and Oded Ben-Tal. The piece is conceived as a dialogue between the material generated by the Folk-rnn artificial intelligence algorithm and the player improvising over it. This performance was the world premiere not only of the composition, but also of the Smart Mandolin, a novel musical instrument conceived to extend the sonic possibilities of the mandolin and enhance its means of musical expression. The instrument augments a conventional acoustic mandolin with sensor technology, embedded sound processing and delivery, and wireless connectivity. This technology controls the computerized transformation of the mandolin's original sound in ways not achievable with current standard interfaces.

The piece unfolds as a continuous improvisation over various computationally generated folk tunes of different styles. The improvisation draws not only on the new musical language offered by integrating sensor-based gestures into the conventional playing technique of the mandolin, but also on the intelligent components of the Smart Mandolin.

Jamming with a Smart Mandolin and Freesound-based Accompaniment

This video documents the results of the paper "L. Turchet and M. Barthet. Jamming with a Smart Mandolin and Freesound-based accompaniment. In Proceedings of the IEEE Conference of Open Innovations Association (FRUCT), 2018". The paper presents an Internet of Musical Things ecosystem in which musicians and audiences interact with a smart mandolin, smartphones, and Freesound, the Audio Commons online repository. The ecosystem was devised to support performer-instrument and performer-audience interactions through the generation of musical accompaniments based on crowd-sourced sounds. Two use cases investigate how audio content retrieved from Freesound can be leveraged by performers or audiences to produce accompanying soundtracks for a smart mandolin performance. In the performer-instrument use case, the performer selects the content to be retrieved prior to performing, via a set of keywords, and structures it to create the desired accompaniment. In the performer-audience use case, a group of audience members participates in the music creation by collaboratively selecting and arranging Freesound audio content into an accompaniment. The paper discusses the advantages and limitations of the system with regard to music making and audience participation, along with its implications and challenges.
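As an illustration of the keyword-based retrieval step described above, the sketch below builds a text-search request for the public Freesound API (v2). The endpoint and parameter names follow Freesound's API; the keywords, token, and field list are placeholder assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): building a
# keyword-based text search against the Freesound API v2.
# The token and keywords below are placeholders.
from urllib.parse import urlencode

FREESOUND_SEARCH = "https://freesound.org/apiv2/search/text/"

def build_search_url(keywords, token="YOUR_API_TOKEN", page_size=10):
    """Return a Freesound text-search URL for the given keywords."""
    params = {
        "query": " ".join(keywords),   # e.g. "rain percussion"
        "fields": "id,name,previews",  # request only what an accompaniment needs
        "page_size": page_size,
        "token": token,                # Freesound API key
    }
    return FREESOUND_SEARCH + "?" + urlencode(params)

url = build_search_url(["rain", "percussion"])
```

Fetching the resulting URL (e.g. with `urllib.request`) would return a JSON page of matching sounds, whose preview files could then be arranged into an accompaniment.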
Towards a semantic architecture for the Internet of Musical Things

This video documents the results of the paper "L. Turchet, F. Viola, G. Fazekas, and M. Barthet. Towards a semantic architecture for the Internet of Musical Things. In Proceedings of the IEEE Conference of Open Innovations Association (FRUCT), 2018". The paper proposes a semantically enriched Internet of Musical Things architecture that relies on a semantic audio server and edge computing techniques. Specifically, a SPARQL Event Processing Architecture serves as an interoperability enabler, allowing multiple heterogeneous Musical Things to cooperate on the basis of a music-related ontology. The architecture is validated technically through an ecosystem built around it, in which five Musical Thing prototypes communicate with one another.