
Sunday, May 7, 2017

Cantillate

Music Visualization, Machine Learning And Artificial Intelligence.


Many standalone applications of artificial intelligence in music (AIM) already exist, but most lead rather lonely lives. How can music visualization (or visualisation), machine learning and artificial intelligence be brought together and better integrated to generate social value? We look at a pragmatic aggregator or integration framework approach.

Already the first AI-assisted music apps are appearing in app stores. These tend to focus on some specific aspect of musicianship, such as chord variety, but are seldom integrated in any way with the actual music users are trying to accompany or enhance - or, more specifically, with the enabling technology stacks.

How long before AI-assisted, near-real-time, notation- or other source-driven instrument models, theory tools and remote teaching and learning are the norm? At what points in the technology stack could they be applied, and what benefits might they bring?

Music Visualization, Machine Learning And Artificial Intelligence

The new machine age is "digital, exponential and combinatorial" and characterised by "mind, not matter; brain not brawn; and ideas, not things" (Erik Brynjolfsson "Race with the machines").

I'm not so sure about the 'brain not brawn' bit. Humans are complex, emotional, self-aware beings. Artificial intelligence (AI), machine learning or neural networks are basically just logic trees allowing brute-force siege-style evaluation of all mathematically possible outcomes, and algorithms trained to recognise which paths constitute successful outcomes. What a machine achieves through meticulous attention to detail, a human can, in some cases, achieve through leaps of intuition. How long will it take to arrive at artificial intuition?



There is a technological threshold below which what we might call 'chores' can be delegated, and above which human empathy and emotional intelligence are freed.

At the same time, we have to accept the likelihood that in a not too distant future, we will interact with highly empathetic automata equipped to replicate the finest of human motor skills, musical styles, creativity, tension, and hence emotion - and that these will be indistinguishable from the real thing. What will be, will be. Let's stay grounded :-)



Though I have misgivings (see my closing remarks below) about delegating too much of what relates to human wellbeing to them, artificial and machine intelligence, kept focussed, have the potential to greatly enrich the online and remote music learning value chain.

We perhaps have to distinguish here between machine learning, broadly understood as comprising classification, clustering, rule mining and deep learning, and artificial intelligence, comprising logic, reasoning, rule engines and the semantic web. Underlying both, however, are large collections of data. Together, they are known as 'machine intelligence' (MI).

Both have been around for a while now, and there are already many open-source machine learning libraries: Torch, Berkeley Vision's Caffe and the University of Montreal's Theano, to name just a few. Slowly but surely, our web browsers are being infused with artificial intelligence. Google open-sourced their TensorFlow AI engine in late 2015, which is claimed to combine the best of Torch, Caffe and Theano. The curious can even dabble online.
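By way of a toy illustration (and no more than that) of the kind of computation these libraries automate at scale, here is a single sigmoid 'neuron' learning a trivial rule by gradient descent, written in plain TypeScript with no framework APIs assumed or used:

    // Toy illustration only: a single sigmoid neuron trained by gradient descent
    // on a trivial dataset. Libraries such as TensorFlow automate this kind of
    // computation at vastly larger scale; none of their APIs are used here.

    type Sample = { x: number[]; y: number };        // one training example

    const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

    function train(data: Sample[], epochs = 2000, lr = 0.5): number[] {
      // w[0] is the bias; w[1..] multiply the inputs.
      const w: number[] = new Array(data[0].x.length + 1).fill(0);
      for (let e = 0; e < epochs; e++) {
        for (const { x, y } of data) {
          const z = w[0] + x.reduce((s, xi, i) => s + w[i + 1] * xi, 0);
          const err = sigmoid(z) - y;                // gradient of log-loss w.r.t. z
          w[0] -= lr * err;
          x.forEach((xi, i) => { w[i + 1] -= lr * err * xi; });
        }
      }
      return w;
    }

    // Learn a simple OR-like rule from four examples, then inspect the weights.
    console.log(train([
      { x: [0, 0], y: 0 }, { x: [0, 1], y: 1 },
      { x: [1, 0], y: 1 }, { x: [1, 1], y: 1 },
    ]));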

"Machine intelligence (MI) can do a lot of creative things; it can cope with music generation, create art from and with music, mash up existing content, reframe it to fit a new context, fill in gaps in an appropriate fashion, or generate potential solutions given a range of parameters". -Carlos E. Perez

In a musical context, they can be used to solve problems spanning from the more or less cosmetic all the way down to the logical: from musical motif recognition during work analysis, through resolving compatibility issues in music theory and aspects of musical technique, down to supporting automated tool configuration processes.
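To make the first of these a little more concrete, here is a deliberately naive sketch of motif finding. It assumes melodies arrive as plain MIDI pitch sequences (an assumption of the example, not of any particular notation source) and matches on intervals rather than absolute pitches, so that transposed statements are still found; real work analysis would also have to weigh rhythm, contour and tolerance for variation:

    // Deliberately naive motif finder: melody and motif are MIDI pitch sequences;
    // comparing intervals rather than absolute pitches makes the match
    // transposition-invariant. Rhythm, contour and tolerance are ignored here.

    function intervals(pitches: number[]): number[] {
      return pitches.slice(1).map((p, i) => p - pitches[i]);
    }

    function findMotif(melody: number[], motif: number[]): number[] {
      const m = intervals(melody);
      const target = intervals(motif);
      const hits: number[] = [];
      for (let i = 0; i + target.length <= m.length; i++) {
        if (target.every((t, j) => m[i + j] === t)) hits.push(i); // note index in melody
      }
      return hits;
    }

    // An invented twelve-note melody and a four-note motif to look for.
    const melody = [62, 64, 66, 62, 69, 67, 66, 64, 62, 64, 66, 62];
    console.log(findMotif(melody, [62, 64, 66, 62]));  // -> [0, 8]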

As the diagram above suggests, there is a wide range of potential applications in music, but for the moment (and especially until data handling and interrogation are consistent across the entire stack) these are necessarily focussed or island solutions. A general-purpose, layered music intelligence engine - and especially one suited to web applications - is still a long way off.

To my mind, the currently most pressing application of artificial intelligence is the automatic synchronization of video, audio and notation feeds from disparate, near-but-not-quite-simultaneous sources whose speed may be subject to drift or offset. This applies immediately to tasks such as synchronising webcam video with microphone audio, undertaken daily by thousands of musicians around the globe. Fully automated - and especially if open sourced - it would be of huge assistance in getting P2P teaching off the ground. One approach tolerant of variation in playing speed is based on a so-called beatmap file, but still involves several manual steps.
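For the simplest, fixed-offset case, one possible approach (a sketch only, not the platform's method) is to cross-correlate the loudness envelopes of the two recordings and take the lag with the highest score; drift and tempo variation would still need a warping approach such as the beatmap file just mentioned. The envelopes and frame rate below are invented for illustration:

    // Estimate a single fixed offset between two recordings of the same
    // performance by cross-correlating their loudness envelopes and taking the
    // lag with the highest score. Envelopes and frame rate are invented here.

    function bestLag(a: number[], b: number[], maxLag: number): number {
      let best = 0;
      let bestScore = -Infinity;
      for (let lag = -maxLag; lag <= maxLag; lag++) {
        let score = 0;
        for (let i = 0; i < a.length; i++) {
          const j = i + lag;
          if (j >= 0 && j < b.length) score += a[i] * b[j];
        }
        if (score > bestScore) { bestScore = score; best = lag; }
      }
      return best; // positive: b's events occur `best` frames later than a's
    }

    // Two loudness envelopes sampled at 100 frames per second; mic lags cam by 3 frames.
    const cam = [0, 0, 1, 5, 2, 0, 0, 4, 1, 0, 0, 0, 0];
    const mic = [0, 0, 0, 0, 0, 1, 5, 2, 0, 0, 4, 1, 0];
    console.log(bestLag(cam, mic, 5)); // -> 3, i.e. advance the mic track by 30 ms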

Other applications include the subtle visual interaction between notation, instrument models and theory tools. Tiny anticipations and delays, as in music itself, can add greatly to the overall music visualization user experience.
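A minimal sketch of the idea, with invented layer names and offsets rather than anything taken from the actual platform: each visual layer is rendered at its own slightly shifted musical time, so that, say, a fingering highlight can anticipate the note it belongs to:

    // Illustrative only: each visual layer gets a small lead or lag relative to the
    // audio clock, so a fingering highlight can anticipate the note it belongs to.
    // Layer names and offsets below are invented, not taken from the platform.

    interface Layer {
      name: string;
      offsetMs: number;                      // negative = anticipate, positive = linger
      render(musicalTimeMs: number): void;
    }

    const layers: Layer[] = [
      { name: "notation-cursor",  offsetMs: 0,   render: t => console.log(`cursor @ ${t}ms`) },
      { name: "instrument-model", offsetMs: -80, render: t => console.log(`fingering @ ${t}ms`) },
      { name: "theory-tool",      offsetMs: 40,  render: t => console.log(`theory @ ${t}ms`) },
    ];

    // One animation frame, driven by the shared audio clock.
    function tick(audioTimeMs: number): void {
      for (const layer of layers) layer.render(audioTimeMs + layer.offsetMs);
    }

    tick(1000);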

Indeed, with its clear focus on data, it is hoped the music aggregator platform will act as a honeypot for AI and machine intelligence solutions in online music teaching and visualization. It can, in this sense, be understood as an integration platform. Have a lead? Feel free to get in contact. ;-)

Avidly Seeking Sponsors #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory

Why Incorporate Artificial Intelligence?


AI has the potential to solve challenges at many levels in end-to-end online and remote or P2P music learning environments. Some of these relate to constructing the platform's own artifacts, others to supporting the learner, or, indeed, a remote teacher.
So let's see how some of these might have practical and visible impact on our aggregator platform...



Away from the browser (and with it our aggregator platform), there are further possibilities...

Finally, a cautionary word. Since the early days of the internet, we have been obliged to police the boundary between machine and man. Increasingly, both attack and policing are done by largely autonomous software built on deep learning principles. The former scours defenses for potential weaknesses, the latter monitors interaction behaviours, looking for unusual patterns. Already, our online security is passing out of our hands.

Under threat, we revert to predictable, tribal behaviours. If man-made Alt-Right Twitter bots can already endanger public debate, and billionaire-sponsored social media profile analysis straight out of military psychological warfare can decide national elections, it takes no great leap of the imagination to see the potential impact of autonomous, self-replicating and learning AI bots on the internet. How and where they turn up is anybody's guess.

However, if digital goods can be replicated in perfect quality at nearly zero cost, be delivered instantaneously and in doing so wipe out half the available jobs, then surely the goal must be -for an ever-increasing number unable to keep pace- to make work irrelevant. How? By supporting human strengths, and creating tools that drive social value into the community. Amongst these, machine intelligence will inevitably find a role - but we need to be involved.


Keywords



online music learning,
online music lessons
distance music learning,
distance music lessons
remote music lessons,
remote music learning
p2p music lessons,
p2p music learning
music visualisation
music visualization
musical instrument models
interactive music instrument models
music theory tools
musical theory
p2p music interworking
p2p musical interworking
comparative musicology
ethnomusicology
world music
international music
folk music
traditional music
P2P musical interworking,
Peer-to-peer musical interworking
WebGL, Web3D,
WebVR, WebAR
Virtual Reality,
Augmented or Mixed Reality
Artificial Intelligence,
Machine Learning
Scalable Vector Graphics,
SVG
3D Cascading Style Sheets,
CSS3D
X3Dom,
XML3D


Comments, questions and (especially) critique welcome.