World Music's DIVERSITY and Data Visualisation's EXPRESSIVE POWER collide. A galaxy of INTERACTIVE, SCORE-DRIVEN instrument model and theory tool animations is born. Entirely Graphical Toolset Supporting World Music Teaching & Learning Via Video Chat ◦ Paradigm Change ◦ Music Visualization Greenfield ◦ Crowd Funding In Ramp-Up ◦ Please Share

Monday, August 8, 2016

Cantillate

Musical Instrument Finger Positions, Fingering Roadmaps, Charts And Diagrams


The lack of fingering flexibility in music exchange files such as MusicXML is a major inhibitor of innovation in online music teaching and learning.

Chief among the challenges is to get away from hard-coded fingerings - to find a mechanism that decouples music source and fingering suggestions. Fingerings are part objective, part subjective: they depend on instrument type and configuration, its key or tuning, stylistic or musical 'tension' preferences, or simply the desired quality of timbre over the practicalities of speed.

In effect, every instrumentalist has his or her own fingering preferences. Hence, an exchange file format such as MusicXML might be expected to support association with fingering sets, or one might expect external fingering definition files to be associated with a MusicXML voice, part or timeline.

Not so. All we have for the present are the hard-coded values embedded in MusicXML and other exchange formats. The impact is wide ranging and, frankly, catastrophic.
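To make the problem concrete: in MusicXML, a fingering is baked directly into the note it annotates, buried inside the note's notations. A minimal fragment looks like this:

```xml
<note>
  <pitch>
    <step>G</step>
    <octave>4</octave>
  </pitch>
  <duration>2</duration>
  <notations>
    <technical>
      <fingering>3</fingering>
    </technical>
  </notations>
</note>
```

The `fingering` value here is one teacher's choice, frozen into the file - exactly the coupling we want to escape.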

Things get more complex under advanced notation features such as the option (as with the visualization platform in focus here) of aggregating several voices into a single instrument display. In such cases we cannot rely on existing expertise, but would be more or less pushed towards algorithmic (including artificial intelligence) optimizations.

From simple removal of redundancy to note clustering within handspan or for best tone, there are several approaches available, all perhaps best placed under user control.
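As a sketch of the simplest of these - redundancy removal - assuming notes are held in the browser as plain objects (a hypothetical shape, not the platform's actual model):

```javascript
// Hypothetical note shape: { div: number, pitch: number (MIDI), fingering: number }
// Redundancy removal: drop a fingering suggestion when it merely repeats
// the previous note's fingering at the same pitch.
function stripRedundantFingerings(notes) {
  let prev = null;
  return notes.map(note => {
    const redundant = prev &&
      prev.pitch === note.pitch &&
      prev.fingering === note.fingering;
    prev = note;
    return redundant ? { ...note, fingering: null } : note;
  });
}

const voice = [
  { div: 0, pitch: 60, fingering: 1 },
  { div: 4, pitch: 60, fingering: 1 },  // repeat: fingering redundant
  { div: 8, pitch: 64, fingering: 3 }
];
```

Handspan clustering or tone-optimised variants would slot in as alternative strategies behind the same interface, selectable by the user.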

Fingering Options: Digital (Sic) Documentation

As you probably sense, in the above we have already hinted at a couple of possible approaches to fingering freedoms. Here they are listed alongside a couple more:
  • Mappings from external fingering definition files to specific music-exchange-file-internal voice, note and time token (div) positions, with no changes to the exchange file
  • Algorithmically (including artificial intelligence) generated fingerings, again with no modifications to the exchange file
  • Specific exchange-file-internal cues (effectively inviting fingering suggestions from an unspecified, external fingering file)
  • As above, but applying to every note in the exchange file
  • Hybrid solutions using an intermediary file
Forgetting the last and more complex hybrid option, let's see if we can work out some of the commonalities and disparities in each:

Mappings from external fingering definition files
  • Main challenge: n:1 fingering-to-notation-element mappings; hard to target notes accurately unless they are uniquely identified.
  • Main benefit: Potentially flexible in application; with fingerings supplied by domain experts, authenticity is high.
  • Feasibility: Reasonably good.

Algorithmically (including AI) generated fingerings
  • Main challenge: General application (tuning diversity, transpositions etc).
  • Main benefit: On-demand (in-browser, on-the-fly calculated); dedicated to the exchange file; no storage overhead.
  • Feasibility: Challenging for general application (tuning diversity, transpositions etc).

Exchange-file-internal fingering cues ('hooks') - selective (known or likely troublespots only)
  • Main challenge: More general application (tuning diversity, transpositions etc); AI assistance?
  • Main benefit: Suits standard tunings with predictable fingerings (human suggestions with some authenticity guarantee).
  • Feasibility: For all but standard tunings, challenging.

Exchange-file-internal fingering cues ('hooks') - for all notes
  • Main challenge: More general application (tuning diversity, transpositions etc); AI assistance?
  • Main benefit: 100% fingering coverage possible, but only with AI assistance.
  • Feasibility: Good, but with potentially significant impact on exchange file size.


A significant challenge across all these scenarios is simply ensuring a match between external fingering and music exchange files.
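To make the first option tangible, a minimal external fingering definition file might carry both identifying metadata and div-addressed mappings. This is purely a sketch - no such format exists yet, and every field name here is hypothetical:

```javascript
// A sketch of an external fingering definition file (as JSON), addressing
// notes in the exchange file by part, voice and time token (div) position.
const fingeringFile = {
  source:     "my-tune.musicxml",   // the exchange file this targets
  instrument: "anglo-concertina",
  tuning:     "C/G",
  mappings: [
    { part: "P1", voice: 1, div: 0,  fingering: "L2" },
    { part: "P1", voice: 1, div: 16, fingering: "R1" }
  ]
};

// Look up a fingering for a given note position; undefined if unmapped.
function lookupFingering(file, part, voice, div) {
  const hit = file.mappings.find(m =>
    m.part === part && m.voice === voice && m.div === div);
  return hit && hit.fingering;
}
```

Note how fragile the `div` addressing is: any edit to the exchange file shifts the time tokens, which is exactly the matching problem described above.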

Score Handling

At this point I want to take a brief side-step into our musical context, the score. Having abandoned legacy notation technologies such as XSLT and forgone plain vanilla javascript, we find ourselves in the realms of data-driven notation - which opens LOTS of doors.

Score Playback Selection at a Point in Time. #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory
Score Selection at a Point in Time
Because even the notation is data-driven, we have considerable freedom over score handling, and on several levels.

Not only in score playback, but in the selection of one or more voices, their aggregation or bundling as a vertical section, and their display on one or more different instrument finger- or keyboards.

Using simple selections from items stored in the browser, it will be possible to mute, hide and transpose individual voices, but also to interrogate the notation at a variety of user-selectable granularities.
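In data-driven terms, such voice-level operations reduce to simple transformations over in-browser note arrays. A sketch, again using a hypothetical voice model:

```javascript
// Hypothetical in-browser voice model:
// { id, muted, hidden, notes: [{ div, pitch (MIDI) }] }
function transposeVoice(voice, semitones) {
  return {
    ...voice,
    notes: voice.notes.map(n => ({ ...n, pitch: n.pitch + semitones }))
  };
}

function audibleVoices(voices) {
  return voices.filter(v => !v.muted);  // muting is just a playback filter
}

const melody = { id: "melody", muted: false, hidden: false,
                 notes: [{ div: 0, pitch: 62 }, { div: 4, pitch: 66 }] };
const up = transposeVoice(melody, 12);  // an octave up
```

Because each operation returns a fresh object rather than mutating the stored one, the original score data stays intact for other views.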

Big, brave, open-source, non-profit, community-provisioned, cross-cultural and crazy biscuits. → Like, share, back-link, pin, tweet and mail. Hashtags? For the crowdfunding: #VisualFutureOfMusic. For the future live platform: #WorldMusicInstrumentsAndTheory. Or simply register as a potential crowdfunder..

Instrument Layout Diversity

Even restricting ourselves to just one of the many instrument family subgroups - such as the free reed instruments - the fingering possibilities can seem limitless.

In practice, however, even these can be broken down into sub-groupings or types by (for example) fingerboard layout: whether diatonic (Club, Irish, Musette, Ländler), chromatic (B- or C-system, bayan), piano accordion, bandoneon or one of the many types of concertina (English, Anglo, Duet, Chemnitzer etc).

   
Three Different Concertina Layouts


Reuse considerations will feature prominently in building the layered, graphical hierarchies behind these layouts. As with abstract music theory lattices or tonnetze, it is also important to consider the ease with which nodes and intervals on the various fingering axes can be interrogated and graphically displayed.

Nevertheless, it is my feeling that for each configuration, clear and reasonably simple mappings to fingerings can be found, and that, where not provided by a human, artificial intelligence will help in finding optimizations for various styles of play.

Talking of artificial intelligence, our longer-term goal is generally applicable fingering intelligence, not intelligence for just one instrument.

Instrument Fingering Constraints


The main actors in an instrument fingering scenario are:
  • The specific teacher / mentor / virtuoso
  • The exact instrument configuration, including the tuning or key
  • Key- or fingerboard layout, including whether chromatic or diatonic
  • The precise musical source
  • The instrument, part or voice in the musical source
These will need to be identified and directly mapped to any fingering definition file or mechanism to ensure a unique match.
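One way to guarantee that unique match is to combine the actors into a single key shared by exchange file and fingering definitions. The key format below is an assumption, and the field values are hypothetical throughout:

```javascript
// Build a match key from the actors in a fingering scenario.
// Any fingering definition carrying the same key targets the same context.
function fingeringMatchKey({ mentor, instrument, tuning, layout, source, part }) {
  return [mentor, instrument, tuning, layout, source, part].join("::");
}

const key = fingeringMatchKey({
  mentor:     "j.doe",                  // hypothetical values throughout
  instrument: "anglo-concertina",
  tuning:     "C/G",
  layout:     "30-button-wheatstone",
  source:     "my-tune.musicxml",
  part:       "P1"
});
```

A mismatch on any one actor - say, the same tune but a different tuning - then correctly fails to match, rather than silently delivering unsuitable fingerings.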

Constraints the fingering definitions creator will have in mind are:
  • musical range in terms of number and layout of keys or other finger positions in each course or row
  • default or 'normal' hand spans / reach
  • genre- and style-related fingering conventions (modes, scales, positions)
  • musical priorities (speed, convenience, comfort, timbre or tone, possibly aggregated)
Beyond these, we might want to accommodate learner preferences or limits:
  • hand span / reach
By the way, reflecting the variety of combinations to be found out in the real world, it is useful for modeling purposes to treat the melody and bass sides of accordions as discrete entities.

A concertina's sides, on the other hand, represent a single musical unit.

Both are exceptions to the general rule - and with that, a nice challenge. Care to think about how they might be tackled? :-)

How are Fingering Roadmaps ..er.. Mapped?

Guitar Tuning Subsets (Ukelele, Mandola, Violin) #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory
Guitar Tuning Subsets (Ukelele, Mandola, Violin)
On a guitar fingerboard, one finger position delivers one note or tone: this is a 1:1 relationship. On a clarinet, several fingers acting together produce a tone: this is an n:1 relationship (depending on the build there may be a few alternatives, but in general a player tends to stick with one).

Yet a guitar has pitch redundancy. The same note can be found in several different positions. From a fingering perspective, then, this is a 1:n relationship. Confused? Good. Confusion lies on the threshold to progress.

No redundancy, no choice. Instruments with no pitch redundancy (such as simple flutes or single-stringed lute-family instruments) can get by with static (possibly bitmapped) fingering diagrams, because the note-to-fingering mappings never vary.

The instant pitch redundancy creeps in, things get more complicated. Our only real landmark or anchor is the pitch of the source-provided tone. It may map to several possible positions (e.g. guitar) or fingerings (more advanced whistles and flutes).
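For a fretted instrument, the 1:n mapping is at least easy to enumerate: given a tuning, find every string/fret pair producing the target pitch. A sketch using standard guitar tuning as MIDI note numbers (the 12-fret limit is an arbitrary choice):

```javascript
// Standard guitar tuning, low to high, as MIDI note numbers.
const GUITAR = [40, 45, 50, 55, 59, 64]; // E2 A2 D3 G3 B3 E4

// All (string, fret) positions sounding the given MIDI pitch.
function positionsFor(pitch, tuning, maxFret = 12) {
  const hits = [];
  tuning.forEach((open, string) => {
    const fret = pitch - open;
    if (fret >= 0 && fret <= maxFret) hits.push({ string, fret });
  });
  return hits;
}

// E4 (MIDI 64): open top string, 5th fret on the B string, 9th on the G string.
const e4 = positionsFor(64, GUITAR);
```

Choosing among those candidates - by handspan, position, timbre or style - is where the real fingering intelligence lives.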

Clearly there are many considerations behind a given finger placement.

Roadmaps And Reuse

Bouzouki Tuning Menu. #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory
Bouzouki Tuning Menu
Many instruments share identical tunings and fingerboard roadmaps. The guitar, for example, shares tuning subsets with other common instruments such as the ukelele, mandola, bouzouki, banjo, violin or mandolin. The standard tunings of the latter two are, indeed, identical.

From a strategic point of view, tuning (and hence fingerboard layout) reuse is critical in achieving coding economy across a variety of instrument forms. In this sense, tunings need to be decoupled (abstracted away) from specific instruments, algorithmically generated, reused, possibly AI-filtered for senseless combinations, and much more flexibly mapped to. In this way, each tuning set carries a better guarantee of uniqueness. Hard coding, needless to say, is taboo.

There is of course a combinatorial explosion as the number of courses or channels increases, but we can be sure that all that occurs on a 1-course instrument finds reuse on a 2-course instrument, and so on. Again we find ourselves in the realms of classification hierarchies, and an emerging tuning strategy.
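One workable abstraction is to reduce a tuning to its interval signature, so that identical fingerboard roadmaps can be detected and reused regardless of instrument (violin and mandolin, as noted above, coincide):

```javascript
// Represent a tuning as MIDI open-string pitches, and derive its interval
// signature: the semitone steps between adjacent courses. Identical
// signatures mean an identical fingerboard roadmap (possibly transposed).
function intervalSignature(openStrings) {
  return openStrings.slice(1).map((p, i) => p - openStrings[i]).join(",");
}

const violin   = [55, 62, 69, 76]; // G3 D4 A4 E5
const mandolin = [55, 62, 69, 76]; // identical standard tuning
const guitar   = [40, 45, 50, 55, 59, 64]; // E2 A2 D3 G3 B3 E4

const sameRoadmap = intervalSignature(violin) === intervalSignature(mandolin); // true
```

The signature, not the instrument name, then becomes the reuse key in the classification hierarchy.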

Display Using Heat Map
Coming back to layout, for most instruments, the Heat Map, a more or less standard data visualization construct, is fine.

Diatonic instruments (and their event handling) can be implemented using layered or slightly offset variants.

There are plenty of approaches to achieving layouts for more complex instruments.
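As a sketch, a heat-map fingerboard is just a generated grid of cells, each addressable by course and position. The cell size and the diatonic row offset below are arbitrary illustrative choices:

```javascript
// Generate heat-map cells for a fingerboard: one cell per (course, position),
// with an optional x-offset on alternate rows to approximate diatonic layouts.
function heatMapCells(courses, positions, cellSize = 20, rowOffset = 0) {
  const cells = [];
  for (let c = 0; c < courses; c++) {
    for (let p = 0; p < positions; p++) {
      cells.push({
        course: c,
        position: p,
        x: p * cellSize + (c % 2) * rowOffset, // stagger alternate rows
        y: c * cellSize
      });
    }
  }
  return cells;
}

// A 6-course, 13-position (open string + 12 frets) guitar-style grid:
const grid = heatMapCells(6, 13);
```

Each cell can then be bound to pitch, fingering or 'heat' (usage frequency) data and rendered by the visualization layer.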

On-The-Fly Pitch Adjustments

With several voices, however, the question arises of how to handle widely differing pitch, and especially the thorny question of recommended fingerings. Moreover, simple folk instruments are sometimes limited in range, or the notes native to the score are too far apart to be fingered.

Fingerings can be optimised (anyone up for artificial intelligence or machine learning?), but are also an important aspect of style. This suggests that -whatever means are used to find recommended fingerings- ultimately there should always be freedom to override.

Pitch Adjustment Controls. #VisualFutureOfMusic #WorldMusicInstrumentsAndTheory
Pitch Adjustment Controls
In these cases, traditional musicians (and especially those who learn by ear, where there are perhaps fewer playing taboos) use substitution:
  • pitch 'octavisation', whereby a note or tone is hiked as many octaves up or down as are necessary to bring the note into playing (or rather 'playable') range
  • substitution of another, harmonically compatible note
  • removal of effectively duplicate but octave-separated notes
  • dropping a note entirely
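The first of these - octavisation - is trivially expressible in code (MIDI pitch numbers; the range bounds are hypothetical):

```javascript
// Hike a MIDI pitch up or down by whole octaves until it falls inside
// the instrument's playable range [low, high]; null if no octave fits.
function octavise(pitch, low, high) {
  let p = pitch;
  while (p < low) p += 12;
  while (p > high) p -= 12;
  return (p >= low && p <= high) ? p : null;
}

// A whistle-like range of roughly two octaves upward from D5 (MIDI 74):
octavise(62, 74, 98);  // D4 is hiked an octave up to D5
octavise(50, 74, 98);  // D3 needs two octaves
```

The other substitutions (harmonic replacement, duplicate removal, dropping) are harder precisely because they are stylistic judgments - hence the case for user override.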
The screenshot here is of a first cut at these screen controls (this whole configuration interface is being revamped to make it more intuitive and better integrated with the instrument layout; more on that in a later update).

The screenshot also shows (in blue) the fingering history, that is to say where earlier fingering positions were found. Though early days, this is of potential help in devising practice exercises.

Ok, we've opened a lot of possible pathways. Thoughts? Which fingering mechanism do you think offers the best combination of wide applicability, efficiency and feasibility? Comments welcome..


Keywords



online music learning,
online music lessons
distance music learning,
distance music lessons
remote music lessons,
remote music learning
p2p music lessons,
p2p music learning
music visualisation
music visualization
musical instrument models
interactive music instrument models
music theory tools
musical theory
p2p music interworking
p2p musical interworking
comparative musicology
ethnomusicology
world music
international music
folk music
traditional music
P2P musical interworking,
Peer-to-peer musical interworking
WebGL, Web3D,
WebVR, WebAR
Virtual Reality,
Augmented or Mixed Reality
Artificial Intelligence,
Machine Learning
Scalable Vector Graphics,
SVG
3D Cascading Style Sheets,
CSS3D
X3Dom,
XML3D


Comments, questions and (especially) critique welcome.