World Music Visualisation Environment - Preferences and Population
Freedom of instrumental choice is woefully under-served in conventional online instrument teaching environments.
Because instrument choice directly influences learner motivation, future learning support from artificial intelligence, and worldwide person-to-person teaching opportunities via video chat, this neglect ultimately threatens instrumental survival itself: it contributes directly to an instrumental monoculture, a slow mass extinction.
This need not be. If software can manage an e-mail client's address book, it can manage the storage of instrument configuration details. These too are just text, and can be kept equally compact and human-readable.
We could track instruments using any or all classification systems, but here I'm going (for reasons that will become apparent) to focus on Hornbostel-Sachs, probably the most widely cited, and one which results in a classification tree: each branch represents an instrument family, and each twig or leaf further refines an instrument's definition.
This system classifies instruments only by their basic tone-producing, i.e. construction, characteristics, or 'form'. Whenever we use the word 'form', we are in essence referring to visual properties: by what means sound is produced, what comprises the core body elements, and some indication of their shape.
As such, Hornbostel-Sachs wholly ignores the actual musical configuration ('function'), which describes the instrument's finer-grained 'user interface'. This encompasses physical properties such as scale or channel length, temperament or intonation, number of notes or tones to the octave, number of courses or channels, tunings, and pitch modification by devices such as a capo.
In this sense, instrument classifications under the Hornbostel-Sachs system are incomplete. As it happens, however, decoupling form and function has huge advantages, especially in the context of online instrument modeling.
Critically, it allows the Hornbostel-Sachs classification system to be used as the tree-like basis for a web repository (data store) encompassing all known world instrument forms, their many musical configurations (function) being stored as 'foliage', or sub-trees.
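As a very rough sketch of how such a repository node might be typed (TypeScript here, with purely illustrative field names rather than the platform's actual schema):

    // Sketch only: illustrative names, not the platform's actual schema.
    // A node in the Hornbostel-Sachs 'form' tree; 'function' sub-trees hang
    // off the nodes as configuration 'foliage'.
    interface FormNode {
      hsCode: string;                     // e.g. "321.322" for necked box lutes
      label: string;                      // e.g. "Necked box lutes"
      children: FormNode[];               // narrower form classes
      configurations: FunctionConfig[];   // 'function' foliage at this node
    }

    interface FunctionConfig {
      name: string;                       // e.g. "guitar, standard tuning"
      scaleLengthCm: number;              // scale or channel length
      temperament: string;                // e.g. "equal"
      notesPerOctave: number;             // e.g. 12, or 24 for quarter-tone systems
      courses: number;
      stringsPerCourse: number;
      tuning: string[];                   // e.g. ["E2","A2","D3","G3","B3","E4"]
      capoFret?: number;                  // optional pitch modification
    }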
With these in place, pretty much any world music instrument can be modeled in its entirety, in the browser, from its generic family base.
This opens any and all instruments to integration into a source-driven aggregator platform for world music visualization - encompassing any score, instrument or theory tool, and in any combination.
This approach is simple, robust and perfectly aligned with good API-building practices (so-called semantic URLs). Ok, 'nuff tech.
Worldwide, there are several music classification systems, dozens of music systems, literally thousands of instruments spanning many instrument families, and certainly a good few hundred theory tools in a range of 1-, 2- and 3D virtual shapes.
Even in their current (static) form, whether in the context of social music and dance or comparative musicology, these represent a vast cultural blind spot.
Brought online as dynamic and interactive models, they can be expected to fuel a revolution in both the breadth and depth of music teaching.
A central strength of the aggregator platform proposed here lies in precisely such a 'progressive refinement tree': simple, layered visual models, customized at each level according to immediate configuration needs. Moreover, these customizations can be shared by drag-and-drop, individually or as a collection, across the user community. This post provides some pointers as to how this can be achieved.
Big, brave, open-source, non-profit, community-provisioned, cross-cultural and houdini crazy. → Like, share, back-link, pin, tweet and mail. Hashtags? For the crowdfunding: #VisualFutureOfMusic. For the future live platform: #WorldMusicInstrumentsAndTheory. Or just register as a potential crowdfunder.
User Environment Preferences
Data visualisation is concerned with the visual representation of patterns in data. In musical terms, think instrument fingerboards, keyboards, their roadmaps, exercises, chords, rhythm and musical application of colour.
Anywhere data patterns can be found, data visualisation can be applied.
As it happens, the tree structures we associate with instrument and other musical data are an excellent match for established visualisation (and storage) technologies.
A user can specify the visualizations with which they choose to populate their immediate environment.
This is as true for scores (which are also handled using data visualization techniques) as for instrument models, theory tools, physics simulations, genres, bands or any of a host of other interests.
So how might an instrument definition be addressed? The core data can be stored in JSON, and comprises two parts: a <form> part and a <function> part, i.e.:
<form><function>
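A minimal sketch of what one such record might contain, assuming illustrative field names, here for a standard guitar:

    // Illustrative only: one possible shape for a stored instrument record.
    const guitar = {
      form: {
        hs: "321.322"                // Hornbostel-Sachs: necked box lutes
      },
      function: {
        scaleLengthCm: 66,
        temperament: "equal",
        notesPerOctave: 12,
        courses: 6,
        stringsPerCourse: 1,
        tuning: ["E2", "A2", "D3", "G3", "B3", "E4"]
      }
    };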
The Hornbostel-Sachs ('form') designation for a lute family instrument such as (yawn) guitar would be 321.322.
The configuration ('function') designation might, for example, comprise elements for scale or channel length (L), temperament or intonation (T), number of notes or tones to the octave (N), number of courses (C) and number of strings to a course (S).
Depending on the data repository (storage) implementation, our target might be a reference to a position in a classification tree (as shown to the left), or indeed to a filename, something along the lines of:
HS321_322_L66_TE_N12_C6_S1
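A small sketch of how such a key might be assembled from the two parts; the field order and abbreviations here are assumptions rather than a fixed specification:

    // Sketch: building a storage key from the <form> and <function> parts.
    // Abbreviations assumed: "TE" equal temperament, "TJ" just intonation.
    function storageKey(hs: string, f: { scaleLengthCm: number; temperament: string;
        notesPerOctave: number; courses: number; stringsPerCourse: number }): string {
      const t = f.temperament === "equal" ? "TE" : "TJ";
      return [
        "HS" + hs.replace(/\./g, "_"),
        "L" + f.scaleLengthCm,
        t,
        "N" + f.notesPerOctave,
        "C" + f.courses,
        "S" + f.stringsPerCourse
      ].join("_");
    }
    // storageKey("321.322", guitar.function) -> "HS321_322_L66_TE_N12_C6_S1"

The same parts could equally map onto a semantic URL of the kind mentioned earlier, for example /instruments/321.322/L66/TE/N12/C6/S1 (again, an illustrative path rather than a fixed scheme).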
From a user's perspective, such a file could be accessed visually (as a node in a tree), or using more or less conventional search.
From a storage perspective, such a file could be accessed (depending on the storage technology, and amongst other approaches) either via a conventional drill-down, NoSQL-style query or via an ad-hoc graph database query.
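For illustration only, a drill-down over the FormNode sketch above might look like this; a graph database would express the same lookup as a single declarative query:

    // Sketch: a drill-down lookup over the FormNode tree defined earlier.
    function findByHsCode(root: FormNode, hsCode: string): FormNode | undefined {
      if (root.hsCode === hsCode) return root;
      for (const child of root.children) {
        // Only descend into branches whose code prefixes the target,
        // e.g. "3" -> "32" -> "321" -> "321.3" -> "321.32" -> "321.322".
        if (hsCode.startsWith(child.hsCode)) {
          const hit = findByHsCode(child, hsCode);
          if (hit) return hit;
        }
      }
      return undefined;
    }
    // In a graph database the same drill-down might be a single declarative
    // query, e.g. (Cypher-style): MATCH (n {hsCode: "321.322"}) RETURN n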
So, having seen how the data might be stored, what might an actual configuration session look like? With no attempt at polish, here is a clip from the proof-of-concept implementation. Keep in mind that every time a 'Save' action is undertaken, data is saved to a configuration tree, as has just been described.
In quick succession, we see the definition of an Irish bouzouki, a typical violin (fiddle), a Turkish cura, a South American charango, an Arabic oud and an equal-tempered but microtonal (24 notes or tones per octave) guitar.
Any specific instrument customization can be configured in less than a minute, saved, and (potentially) made available for use by any and all users, worldwide. The only limitations are those imposed by the source music exchange format (think audio, MIDI, ABC and so on) and the resulting music notation.
The tuning menu serves, at this point, mainly to allow cross-checking of behavior against the currently loaded score. This, score-driven fingerings and much more are demonstrated in separate videos.
What has been done for lutes can naturally be done for other stringed instrument types such as harps and zithers, or indeed any of the other high-level instrument families, such as percussion, wind, brass and keyboards.
With time it should be possible to model at least 80% of the world's instruments in this way, providing a solid base for ventures into direct, person-to-person teaching and learning online.
The only danger I see in this procedure is security. With SVG 'scriptable', there is some risk that an attacker could gain access to the system. For this reason, the SVG should be generated server-side, with user input accepted only through conventional GUI dialog elements backed by sound validation.
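One way of respecting this, sketched here as an assumption rather than the platform's actual implementation: build the SVG string entirely from validated numeric parameters, never from user-supplied markup.

    // Sketch: server-side fretboard SVG built only from validated numbers,
    // so nothing scriptable from the user ever reaches the page.
    function fretboardSvg(courses: number, frets: number): string {
      if (!Number.isInteger(courses) || courses < 1 || courses > 20) throw new Error("bad courses");
      if (!Number.isInteger(frets) || frets < 1 || frets > 36) throw new Error("bad frets");
      const w = 800, h = courses * 20;
      let lines = "";
      for (let c = 0; c < courses; c++) {
        const y = 10 + c * 20;                       // one horizontal line per course
        lines += `<line x1="0" y1="${y}" x2="${w}" y2="${y}" stroke="black"/>`;
      }
      for (let f = 1; f <= frets; f++) {
        const x = Math.round((w * f) / (frets + 1)); // fret spacing simplified (linear) for the sketch
        lines += `<line x1="${x}" y1="0" x2="${x}" y2="${h}" stroke="grey"/>`;
      }
      return `<svg xmlns="http://www.w3.org/2000/svg" width="${w}" height="${h}">${lines}</svg>`;
    }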
Ok, what we have just seen demonstrated feeds directly into the wider instrument discovery process. Once an instrument has been defined, we can allow visual selection from a data tree to replace or augment (a.k.a. 'fine-tune') traditional text search.
Assuming we have saved a few instruments as described, how can they be used to populate the user's menus? He or she will want to see featured only those instruments actually being learned. To do this, we resort to drag and drop.
Menu Population with User Preferences
Assuming the user has 'discovered' a few instruments tucked away in the visual classification tree (accessed through the upper, horizontal menu bar), it makes sense to use these to directly populate other parts of the graphical user interface (GUI), and in particular the user's menus. When populating menus, we would be free to select not just individual end points ('leaves' of the tree), but entire twigs or branches.
Assuming safeguards against 'overpopulation' of menus, this would be accurate, flexible and secure (no direct text input), and would provide immediate visual feedback.
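As a minimal sketch (reusing the FormNode shape from earlier; the menu type and leaf cap are assumptions), a drop handler might simply copy the branch's leaves into the menu, refusing drops that would flood it:

    // Sketch: copying a dropped classification branch into a user's menu,
    // with a simple cap as a safeguard against menu 'overpopulation'.
    interface MenuEntry { label: string; path: string; }
    const MAX_MENU_LEAVES = 50;

    // Collect the leaves of a branch, keeping their classification path.
    function leavesOf(node: FormNode, path: string[] = []): MenuEntry[] {
      const here = [...path, node.label];
      if (node.children.length === 0) {
        return [{ label: node.label, path: here.join(" / ") }];
      }
      return node.children.flatMap(child => leavesOf(child, here));
    }

    function onBranchDropped(branch: FormNode, menu: MenuEntry[]): MenuEntry[] {
      const leaves = leavesOf(branch);
      if (menu.length + leaves.length > MAX_MENU_LEAVES) {
        throw new Error("Too many entries: drop a smaller branch or prune the menu first.");
      }
      return [...menu, ...leaves];   // nothing typed by the user; purely structural
    }

Because every classification tree shares this shape, the same handler serves genres, instruments or theory tools alike.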
A further benefit is that of code reuse. With all data sharing the same basic tree structure, code used to manipulate one tree can be reused across all others.
In our case, then, we are likely to end up with two levels of data population: the first populating a given user's menus, the second a working selection from these menus to populate the current animation panel for actual use.
With a 1:1 classification tree to menu relationship, this allows us to populate all the menus individually. This is illustrated in overview in the diagram below. Here we see a selection from the top-level 'Genres' menu, which accesses the corresponding classification tree.
From the Genres classification tree, a 'Cantes Flamencos' branch is dragged and dropped onto the user's 'Genres' menu. The leaves of this branch now represent his or her genre preferences.
Where subsequently selected, this may be used either to pull in relevant genre-oriented contextual information, or to limit other selections to this genre.
[Figure: User Selection from Flamenco Genres to Populate Own Preferences]
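The outcome of such a drop, together with the second-level working selection drawn from it, might be stored as something like the following (again, names and values are illustrative only):

    // Illustrative only: two levels of population, as described above.
    const userPreferences = {
      genres: [                        // level 1: populated by the drag-and-drop
        "Cantes Flamencos / Soleá",
        "Cantes Flamencos / Bulerías",
        "Cantes Flamencos / Alegrías"
      ],
      instruments: ["HS321_322_L66_TE_N12_C6_S1"]
    };

    const workingSelection = {         // level 2: what the animation panel shows now
      genre: "Cantes Flamencos / Bulerías",
      instrument: "HS321_322_L66_TE_N12_C6_S1"
    };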
This approach to preferences and population unleashes a monumental flood of visualisation possibilities, building on an earlier diagram (here just a foretaste!).
This also gives us some idea of the potential for creating, for example, individual exercises (notation) touching on every aspect of music theory, yet integrated across the entire instrument and theory tool spectrum.