World Music's DIVERSITY and Data Visualisation's EXPRESSIVE POWER collide. A galaxy of INTERACTIVE, SCORE-DRIVEN instrument model and theory tool animations is born. Entirely Graphical Toolset Supporting World Music Teaching & Learning Via Video Chat ◦ Paradigm Change ◦ Music Visualization Greenfield ◦ Crowd Funding In Ramp-Up ◦ Please Share

Tuesday, May 9, 2017


World Music Visualization Aggregator Platform vs Soundslice

The world music visualization platform in focus here takes an approach quite different from that of Soundslice, perhaps currently the fastest-growing (solo-)instrument learning environment. A comparative review.

Many instrumental learners, especially those in heritage, folk, and traditional circles, get by learning from slowed audio. This mimics, to a certain degree, the face-to-face rote learning of earlier generations (and the glorious accidents of memory likely responsible for much of the tune diversity).

Even so, the time overhead of finding, chopping up and slowing tunes alone can be daunting. This was, indeed, the motivation for 'Soundslice'. It can be thought of as a round-trip learning tool, in the sense that you start with an original recording, slow it down to a point where you can imitate it, then progressively speed it up until you are playing along in perfect synchrony at the original speed.

Soundslice's focus is on simplifying life for the notation-oriented, 'learning-through-mimicry' instrumentalist, bringing a variety of learning sources into a single workflow: clear audio; simple, single-voice notation; a tiny selection of models of the most popular instruments; and synchronised video of 'virtuoso' (professional teacher's) play.

Soundslice takes a number of similar systems (such as InstantNotation or Knowtation) just a tick further in the drift towards on-demand immersive music environments.

Soundslice's primary achievements are giving some visual context to what is essentially an audio feed, and time-synchronising the various other feeds with that audio.

Particular successes are:

  1. the synchronization of video with external (as opposed to recorded, 'in-video') notation, and the notation's robust, well-synchronised looping controls. These provide both accurate positional and fingering clues, and very useful stylistic audio cues.
  2. the reuse of fingering information gleaned from the various instrument-specific input file formats.
It is difficult to overemphasise the value of the latter, but it immediately exposes one of the major weaknesses of current music exchange file formats: the inability to map in fingerings from exchangeable, external fingering files. Even MusicXML, the W3C's musical crown jewel, uses hard-coded, inline fingerings. In a world music context (and where people are increasingly drawn to online tools), this is a strong driver of cultural extermination.
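To make the point concrete, here is a minimal sketch (Python, standard-library ElementTree) of reading MusicXML's inline fingerings, with a hypothetical external fingering table taking precedence where present. The `notations/technical/fingering` element is real MusicXML; the external table's index-keyed format is invented purely for illustration.

```python
# Sketch: MusicXML hard-codes fingerings inline on each note; an external,
# instrument-specific fingering table (hypothetical format) could override them.
import xml.etree.ElementTree as ET

MUSICXML = """<score-partwise><part id="P1"><measure number="1">
  <note><pitch><step>G</step><octave>4</octave></pitch><duration>1</duration>
    <notations><technical><fingering>2</fingering></technical></notations>
  </note>
  <note><pitch><step>A</step><octave>4</octave></pitch><duration>1</duration>
    <notations><technical><fingering>3</fingering></technical></notations>
  </note>
</measure></part></score-partwise>"""

def fingerings(xml_text, external=None):
    """One fingering per note: the external table wins, inline is the fallback."""
    root = ET.fromstring(xml_text)
    result = []
    for i, note in enumerate(root.iter("note")):
        inline = note.findtext("notations/technical/fingering")
        result.append((external or {}).get(i, inline))
    return result

print(fingerings(MUSICXML))                     # inline values: ['2', '3']
print(fingerings(MUSICXML, external={0: "1"}))  # external table overrides note 0
```

The point being: nothing in MusicXML itself provides the second, external path; it has to be bolted on.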

Soundslice's focus is ruthlessly on the single (often lead) instrument. Its success is critically dependent on time-synchronization tokens and in-exchange-file fingering indications, yet these represent only a fraction of the data available from (say) a multipart MusicXML exchange file. Given direct data bindings between the exchange file and the user interface's graphical elements, we could infer and interactively display much finer-grained information.

Nevertheless, taken together, Soundslice's synchronized video + notation approach plasters over one of the major shortcomings of any notation: its inability to adequately describe the subtler elements of style. Be it accenting, subtle rhythmic attack or delay, or the fine motor detail in ornamentation, these are all qualities notation is notoriously poor at conveying. Synchronized video provides us with both audio and visual cues. Nevertheless, audio is far from unique to Soundslice. More on that below.
For all its effectiveness, and with the exception of the looping controls, I haven't featured Soundslice's notation on this diagram, as there are many similar offerings elsewhere. (Had the notation been data-driven and the notes individually addressable and interrogable, I would have, as these qualities open up whole new interaction possibilities.) Moreover, Soundslice's approach more or less undermines the possibility of exploiting an area of huge and untapped potential: music visualisation.
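'Individually addressable' notes are easy to picture. In SVG-based notation, each notehead can be emitted with its own id and pitch class, ready for selection, recolouring or loop-region handling. A minimal sketch in Python; all names (`note-0`, `pitch-G`) are illustrative, not any real platform's markup:

```python
# Sketch: emit each notehead as its own SVG element with an id and a pitch
# class, so that scripts can later address notes individually.
def note_svg(index, pitch, x, y):
    return (f'<circle id="note-{index}" class="pitch-{pitch}" '
            f'cx="{x}" cy="{y}" r="4"/>')

notes = [("G", 10, 30), ("A", 25, 25), ("B", 40, 20)]
svg = "<svg>" + "".join(
    note_svg(i, p, x, y) for i, (p, x, y) in enumerate(notes)) + "</svg>"
print(svg)
```

Bitmap notation, by contrast, offers no such handles: a notehead is just pixels.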

What it does, though, it does extremely well, and at first glance, appears to answer pretty much all needs.

Today's needs are, however, not tomorrow's. We stand at the threshold of multiple technological paradigm changes:
  • music visualization
  • augmented and virtual reality
  • machine learning and artificial intelligence
These draw us inevitably from 'mobile first' towards 'data-driven', 'on-demand' (or, better still, 'need-of-the-moment'), immersive and ultimately, it seems, 'AI-first'. A comparison at this point seems both timely and interesting - on a number of levels.

Without detracting from Soundslice's successes, I'd like to focus on a couple of opportunities its design impedes, or even obstructs.

Big, brave, open-source, non-profit, community-provisioned, cross-cultural and batshit crazy. → Like, share, back-link, pin, tweet and mail. Hashtags? For the crowdfunding: #VisualFutureOfMusic. For the future live platform: #WorldMusicInstrumentsAndTheory. Or simply register as a potential crowdfunder..

Learning By Ear


Prior to that, however, I feel obliged to concede that for many older / 'traditional' players, even notation is overkill: slowed audio (MP3 or, more recently, AAC) is widely accepted as entirely sufficient:
  • It mirrors the older (and much revered) practice of face-to-face learning around the hearth during long winter months.
  • It is well supported by 'slow-down-while-maintaining-pitch' tools
  • Audio is the fastest and shortest route to the brain. With our brains naturally (and with impressive accuracy) able to distinguish pitch, everything put in the way of ear learning simply slows down the mapping of audio via brain to instrument. Neural pathways take time and practice to form, yet folk virtuosos around the world are stunning testimony as to the power of learning by ear. Many don't read a note of music.
  • Hands-free play-along. Once a piece of music is set to automatic looping playback on an audio device, no further intervention is required. The mind is free to concentrate, the hand to play.
  • Location independent. With any good mp3/AAC player - down by the river, in a park, perhaps during a lunch break (effectively a mini-holiday, given the switch of brain hemispheres).
  • Learners develop a simple, intuitive sense of harmonic relations - applicable both in equal temperament and just intoned music cultures.
That may seem a daft admission coming from something of a visualization 'evangelist', but where audio supports musicality, music visualisation brings deep understanding. Poles apart.

Immediate Needs

There is a sense in which all current online offerings fail to deliver: satisfying immediate needs. While it is easy enough to find a modal scale, there may be no help with fingering on an instrument of your choice, or in seeing how it fits into the broader modal landscape of the musical culture. Often, just to get at this information, you are obliged to scour online book-and-CD offerings, commit to an unnecessarily broad online course, or indeed follow something created for an entirely different instrument.

The counter-argument is that the brain develops according to the challenges we feed it. Accordingly, even a marginally immersive teaching environment should help develop multiple skills, in parallel, and across a broad front.

Yet notation's massive cognitive load has to be overcome before a note of actual music is played. This has a clear impact on the speed at which we learn. Indeed, notation could be argued to have nothing to do with the actual music.

Younger players have grown up with the internet, and so are perhaps more open to even slightly more immersive approaches. Soundslice's video and 'hard-coded' (non-configurable) instruments bridge the gap to some degree by providing limited visual orientation. These, however, remain a drop in the instrumental and music visualization ocean.

Soundslice In Detail

Business Model

Soundslice's focus is squarely on a limited range of music exchange formats, mainly those preferred by (gypsy swing) guitar players, with consequences throughout its entire business alignment. Orchestral or even small-ensemble music scores, for example, are out.

Soundslice exploits a canny mix of franchising and subscription. In providing the tools needed to synchronise professionally produced video with notation generated from a variety of exchange file formats, it not only drives its own and others' (franchised) subscriptions, but transfers the responsibility for provisioning and promotion to the individual franchisees. With a 30% commission on franchise sales, it is, beyond the initial implementation for a given instrument, a goose laying golden eggs for its owner.

At the simplest level, Soundslice allows users to load their tune in a variety of (mainly guitar- and piano-oriented) formats and have it played back with synthesised audio. This is precise, if lacking in the fine musical tensions that distinguish a human player from a mechanical plonker. This does, however, provide for solid rhythmic accuracy - which for some beginners is perhaps no bad thing.

This form of usage allows display either in conventional music notation or so-called 'tablature', a form of pictorial shorthand specific mainly (but not quite entirely) to strung instruments. For this form of use, basic subscription rates apply.

So what help in expanding the platform's functional range is offered? Soundslice's Acceptable Use Policy states:
"The main goal of Soundslice is to help you learn to play specific pieces of music. It's primarily for educational use, and we hope our tools make you become a better musician.

Beyond that, Soundslice is rather open-ended, in that it allows arbitrary annotation of sound and video. We expect users will discover and invent many wonderful, unexpected ways to use it. Those are encouraged, as long as they're consistent with this policy".

This seems to hint at some form of open, win-win further-development strategy, but as it turns out: only as long as the subscriptions are paid. → This is a closed-source, franchise model. Moreover, the most obvious area in which musicians can make a useful contribution is that of tablature glyphs for other melody instruments, and in providing fingerings. The former is reusable, the latter a unicum.

Technical Overview

Soundslice has been built using JavaScript in conjunction with recent HTML5 web multimedia elements. It reflects efficient and economical (if, from a graphics point of view, slightly dated) good practice, achieving not just responsive page elements but also responsive content.

Where a learner might earlier have worked with an audio recording, a page of ABC, tablature or a melody line and a few notes, Soundslice generates tablature and/or notation from one's own choice of (single-voiced) exchange file, simple fingering diagrams and (for an extra fee) time-synchronised video and tablature/notation of a professional mentor's play.

It integrates and fully synchronises notation, video and simple instrument animation. It appears to rely on tablature information for diagrammatic finger positioning.

All files necessary for full offline use can be retained in the browser's cache, making it useful for those times you are out of mobile or WiFi signal range.

As touched on above, for traditional or folk learners who have worked predominantly with slowed audio, this is already possibly too rich an offering.

For advanced folk musicians seeking a deeper insight into music-cultural structures and for notation-based learners interested in deeper aspects of harmony, it may prove too little.

Nevertheless, Soundslice occupies an odd niche, strangely isolated amongst technologies currently attracting a lot of attention.

In particular, the indications are that it is neither 'live', nor 'real-time', nor strictly 'data-driven'. Nor (curiously) does it rely on WebGL, which underpins much of the current work on WebAR and WebVR (the web-based forms of augmented and virtual reality, collectively known as Web3D). Nevertheless, it is reasonably immersive.

How easy it would be to integrate elements of Soundslice into these virtual worlds is moot, but it's an entertaining thought...

It has a simple, intuitive interface and reacts quickly to modification. It gets round some problems inherent in its implementation using simple but effective (the creator's own word) 'hacks'.

Though these allow Soundslice to do what it does very well, as we will see below, they will clearly hamper the product in its further development.

Soundslice's Strategic Limitations

Several Soundslice design decisions bar its further development into a comprehensive aggregator platform for all instruments and other animations. Let's try to summarise some of these.

  • Notation appears to be limited to the treble and bass clefs
  • Currently, piano is the most complex instrument (MusicXML 'Part') setting offered. Synchronization between left (bass) and right (treble) hand voices is possible only because -exceptionally- all the relevant information is present within this instrument's 'Part'.
  • With note selection apparently by offset rather than class or id, it could be technically challenging to extend the system to play multiple parts simultaneously.
  • Together, these limit harmonic exploration.
Audio And Other Libraries
  • Instrumental audio 'synthesis' is achieved not by generative techniques, but by calculating offsets into mp3 files containing fixed-length soundbites of given instruments at various pitches. A hack, but as it happens, a very effective one..
  • Because of the above, however, other than instrument choice there is no mechanism for selection of alternative audio libraries
  • There is a distinct cognitive load associated with video: mentally mapping positions and fingerings from an opposing, dynamic instrument view. Our immediate need is for orientation and detailed positional information.
  • Because controls are time-offset-based rather than linked to specific on-screen notation elements, the scope for animation and/or P2P interworking is severely limited (there seem to be hints on the Soundslice site, however, that other interworking models may be in the pipeline).
  • Instrument displays are 'one-offs': effectively hard-coded island solutions with no scope for niceties such as alternative tunings. In serving only a tiny range of already hugely popular instruments and attracting predominantly professional musicians into its franchising model, Soundslice could be said to be feeding the trend towards something of an instrumental monoculture. No doubt this will ease as a wider range of genres, instruments and styles is added.
  • Though likely to improve, the number of instruments for which tablature exists is very limited.
  • Instrument displays appear to be prescribed rather than user-selectable.
  • Tablature and instrument model fingerings and chord diagrams could be said to represent redundancy.
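The 'offsets into mp3 files' hack mentioned above is worth sketching, if only to show why it is so effective. Assuming fixed-length soundbites stored chromatically from a known lowest pitch (both constants invented here for illustration), playback of any note reduces to a simple seek:

```python
# Sketch of the soundbite-offset approach: every recorded note occupies a
# fixed-length slot in one concatenated audio file, ordered chromatically.
CLIP_SECONDS = 2.0   # assumed fixed length of each recorded soundbite
LOWEST_MIDI = 40     # assumed MIDI number of the first clip in the file

def clip_offset(midi_note):
    """Seconds into the concatenated audio file where this pitch's clip starts."""
    return (midi_note - LOWEST_MIDI) * CLIP_SECONDS

print(clip_offset(60))  # middle C (MIDI 60) -> 40.0 seconds in
```

No synthesis engine, no per-note decoding logic: just arithmetic and a seek. The cost, as noted, is that the audio library is baked in.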

Soundslice creates its notation in bitmap form using the simplest of graphical elements 'hard-coded' onto a DOM 'canvas' element. Once there, they are more or less fixed.
  1. In bitmap form, the visual mapping and transformation possibilities are severely limited. Where a bitmap is needed (for speed on a mobile device, or exported as an image to a virtual reality context), it could equally well be derived on-the-fly using SVG-to-image conversion on a server - at any size, and if desired transformed.
  2. Bitmapped notation tends to undermine the clearest motivation for parsing MusicXML using Javascript (to prepare notation glyphs for algorithmic placement, using javascript-based data visualization libraries).
  3. Given the above, there is no means of selecting one's own preferred note colouring schemes
  4. Soundslice's focus is not on data, but on processing speed. It was built to the 'Mobile First' mantra. Tomorrow's immersive applications, however, will run on dramatically faster, high-resolution augmented and virtual reality devices, and will be real-time, 'on-demand', data-driven, and in no sense pre-baked.
  5. The instrumental focus is on the popular, not the diverse. We are in an age of musical individualism, of experimentation, world music and self-expression. Yes, of group play - but preferably accommodating the exotic. Moreover, the platforms of tomorrow will connect real people learning from each other in social groups. If real-time play together over the internet is still a pipe-dream, on-line face-to-face teaching and learning are already being done. They just need the supporting toolset..
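On the note-colouring point above: with data-driven (SVG) notation, a user-selectable colouring scheme is little more than a lookup from pitch class to fill colour. A sketch, with an arbitrary palette that stands for no particular standard scheme:

```python
# Sketch: a user-swappable note colouring scheme as a pitch-class lookup.
# The palette values are arbitrary illustrations, not any standard mapping.
PALETTE = {"C": "#e6194b", "D": "#f58231", "E": "#ffe119",
           "F": "#3cb44b", "G": "#4363d8", "A": "#911eb4", "B": "#46f0f0"}

def note_colour(pitch_class, palette=PALETTE):
    """Fill colour for a notehead; black for anything outside the palette."""
    return palette.get(pitch_class, "#000000")

print(note_colour("G"))  # -> #4363d8
print(note_colour("X"))  # unknown pitch class falls back to black
```

With canvas bitmaps, by contrast, recolouring means re-rendering the whole image; with SVG it is a one-line attribute change per note.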

In sum, a tour de force in pragmatic scaling-down for the mobile app market, but one which undermines much of its potential in multi-voiced and advanced educational environments. The following diagrams more or less sum it up.

Omitting any mention of virtual / augmented reality or machine intelligence, here we see how Soundslice, our Aggregator platform and the earlier and less sophisticated InstantNotation and Knowtation products are placed relative to each other in the move towards immersive online learning.

By diversity, we mean a wide range of world music instrument and theory tool animations. But why a 'dubious' model? Because animation diversity and platform speed are not necessarily in conflict with each other. It all depends on how that diversity is achieved. More on that in a mo..

Meanwhile, as you may see:
  • Over time (and especially as devices and browser technology advance), Soundslice may quickly get left behind. It will, however, continue to serve well on legacy devices.
  • For Soundslice to achieve greater instrumental or music theory diversity, the current franchise model could lead to high organizational overhead.

Rationale For A Clear Alternative

While readily acknowledging that much of folk music is monophonic, the following will allow us to work with all manner of scores, from single voices through band works, choral pieces and chamber music to orchestral scores, and across many cultures:
  • the widespread availability of good traditional music scores, as collected by enthusiasts worldwide.
  • the appearance of powerful online notation editors such as Noteflight, which contribute to the wealth of open-source scores, and can be used to create exercises illuminating theoretical and harmonic principles.
  • new forms of microtonal notation.
  • mechanisms for finely differentiated instrument and theory tool configuration.
Our as-yet-unnamed aggregator platform focusses on notational, voicing, instrumental and theory-tool diversity. I hope this post has demonstrated that it is in no sense modelled on the Soundslice approach, but on social value, data, and the possibility of reuse across multiple technologies.

Coming back briefly to that 'dubious' model (see similar graphic earlier in this post), can we now put the aggregator platform's potential capabilities into perspective?
Music Visualization Aggregator platform vs Soundslice. #VisualFutureOfMusic #WorldMusicInstrumentAndTheory
This graph is attempting to tell us that through careful configuration of world music instruments, we can reduce the processing overheads involved in recovering and displaying any instrument to much the same level as those Soundslice requires to show one.

What is perhaps not immediately clear is that it will allow the user to populate menus with their own selection of instruments, theory tools and supporting applications, and that the automatic interworking mechanisms open up a galaxy of potential score-driven music visualizations.

Another way of looking at things is in the context of the immediacy and immersion of the user experience. Over time, immediacy has developed from simple face-to-face learning through to the point where people are already learning P2P (peer-to-peer) across the internet, albeit with little tool support; immersion, from simple audio through to immersive Web3D. In that users' acceptance of new technologies is delayed, immediacy and immersion can be viewed as alternately 'leap-frogging' each other.
If we zoom in a little on the top RHS, we can perhaps see how Soundslice and our aspiring aggregator platform might fit in. In introducing synchronised video, Soundslice is leading in an era of on-demand, media-synchronised learning.

Nevertheless: 'on-demand' is not 'immediate-need-driven', and 'synchronised' is not 'real-time'. The central distinction is simple: whether or not media is synchronised with a live teacher. A peer-to-peer system linking a live teacher and students with gamer-style controls would greatly simplify one-way synchronization, which, if not enough to allow playing together, is certainly enough to teach.
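What such a gamer-style, one-way sync message might carry is easily sketched. The schema below is entirely hypothetical: the teacher's client broadcasts which notation element is current, and each student's client scrolls and highlights locally.

```python
# Sketch of a hypothetical one-way teacher->student sync message: the teacher
# broadcasts the current notation element and tempo; students render locally.
import json
import time

def sync_message(note_id, tempo_bpm):
    """Serialise a minimal 'current position' broadcast as JSON."""
    return json.dumps({"type": "position", "note": note_id,
                       "tempo": tempo_bpm, "sent_at": time.time()})

# A student's client would parse and act on the broadcast:
msg = json.loads(sync_message("note-42", 96))
print(msg["note"], msg["tempo"])
```

Note that this only works if notes are individually addressable (the `note-42` id above); time-offset-based controls have nothing stable to broadcast.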

Entirely automating the synchronization of a video stream with a running score during playback is, however, much easier said than done. This is an area where artificial intelligence may, one day, prove really useful. Until then (and even with the help of game-protocol and back-end server tricks), any synchronization of a P2P teacher's play with notation on the student's end is likely to remain a simple, rule-of-thumb affair.

Pre-synchronised audio and video of the kind used in Soundslice is, however, perhaps not such a problem. Much of the groundwork has likely already been done for tools such as VLC (video) and Audacity (audio). Perhaps someone will take the time to investigate this soon: an open-source video (time) tokeniser program, similar to that apparently offered by Soundslice, could be very useful all round.
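What such a tokeniser might emit is easy to imagine: a sorted list of (beat, video-seconds) pairs, with playback positions interpolated between them. The format and values below are invented for illustration only:

```python
# Sketch: a hypothetical 'time token' table mapping beat positions to video
# timestamps, with linear interpolation between tokens at playback time.
import bisect

TOKENS = [(0, 0.0), (4, 2.1), (8, 4.3), (12, 6.2)]  # (beat index, video seconds)

def seconds_for_beat(beat, tokens=TOKENS):
    """Video time for a given beat, interpolated between surrounding tokens."""
    beats = [b for b, _ in tokens]
    i = bisect.bisect_right(beats, beat) - 1
    if i < 0:                       # before the first token
        return tokens[0][1]
    if i >= len(tokens) - 1:        # at or past the last token
        return tokens[-1][1]
    (b0, t0), (b1, t1) = tokens[i], tokens[i + 1]
    return t0 + (t1 - t0) * (beat - b0) / (b1 - b0)

print(seconds_for_beat(6))  # halfway between the beat-4 and beat-8 tokens
```

The interpolation is exactly the 'rule-of-thumb affair' described above: accurate enough between tokens for looped practice, but no substitute for true real-time synchronization.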

Finally, our aggregator platform, in supporting music visualization in the widest sense and in sharing its various feeds, bridges the data divide between the conventional browser DOM and the dramatic, gamified world of WebGL and Web3D.

To sum up: while Soundslice is more than enough for learning new tunes on a few mainstream instruments, viewed from the perspective of musical diversity, visual modelling and understanding, immediacy and immersion, there is abundant scope for advance. The next step is crowdfunding, for which, having read this far, I fervently hope I have your support. :-)

Finally finally, a blow-by-blow summary of the main differences between Soundslice and our anticipated aggregator platform:

Comparison between Soundslice and the Music Visualization Aggregator platform #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory


online music learning,
online music lessons
distance music learning,
distance music lessons
remote music lessons,
remote music learning
p2p music lessons,
p2p music learning
music visualisation
music visualization
musical instrument models
interactive music instrument models
music theory tools
musical theory
p2p music interworking
p2p musical interworking
comparative musicology
world music
international music
folk music
traditional music
P2P musical interworking,
Peer-to-peer musical interworking
WebGL, Web3D,
WebVR, WebAR
Virtual Reality,
Augmented or Mixed Reality
Artificial Intelligence,
Machine Learning
Scalable Vector Graphics,
3D Cascading Style Sheets,

Comments, questions and (especially) critique welcome.