A spinoff of this proof-of-concept is that the framework handling the loading and initial display of visual components is entirely domain independent, opening up the possibility of reuse in other timeline-driven applications.
Big, brave, open-source, non-profit, community-provisioned, cross-cultural and hanging-from-the-chandeliers crazy. → Like, share, back-link, pin, tweet and mail. Hashtags? For the crowdfunding: #VisualFutureOfMusic. For the future live platform: #WorldMusicInstrumentsAndTheory. Or simply register as a potential crowdfunder.
A Reusable Framework
Reuse in other timeline-driven applications?
This means that users will be able to entirely personalise their environment, yet still have full interactivity between their animation-driving source and their selected animations.
Moreover, as any synchronisation between possibly several remote nodes in a given learning cluster would likely be made with reference to a common, shared driving protocol, differing animations could be used at each end node (reflecting a common practice among traditional instrumentalists where, say, a fiddler teaches an accordionist or vice versa).
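To make the idea concrete, here is a minimal sketch (all names hypothetical, not the project's actual API) of how one instrument-neutral timeline protocol could drive a different animation at each end node:

```typescript
// Hypothetical shared timeline event: instrument-neutral, so the same
// stream can drive any animation a node has selected.
interface TimelineEvent {
  timeMs: number;     // position on the shared timeline
  pitch: number;      // MIDI note number — no instrument assumptions
  durationMs: number;
}

// Each end node supplies its own renderer for the same event stream.
type Renderer = (e: TimelineEvent) => string;

const fiddleRenderer: Renderer = (e) =>
  `fiddle: finger position for MIDI ${e.pitch}`;
const accordionRenderer: Renderer = (e) =>
  `accordion: button for MIDI ${e.pitch}`;

// The protocol layer is animation-agnostic: it just replays events.
function play(events: TimelineEvent[], render: Renderer): string[] {
  return events.map(render);
}

const stream: TimelineEvent[] = [
  { timeMs: 0, pitch: 69, durationMs: 500 },
];

// The fiddler's node and the accordionist's node consume the
// identical stream, each through its own renderer.
console.log(play(stream, fiddleRenderer)[0]);
console.log(play(stream, accordionRenderer)[0]);
```

The key design point is that the protocol carries only timing and pitch; everything instrument-specific lives in the renderer chosen at each node.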
Orchestration is another obvious target, but my feeling is that there could be many other potential synergies, especially in the area of real-time monitoring.
What do we mean by reusable?
Individual animations can be loaded into the central animation panel with no knowledge of what is being loaded; this is governed entirely by the contents of data files.
You can liken this to a festival stage, where all manner of acts can be accommodated, but with each supplying all its own needs. Moreover, the stage has space for more than one act at a time.
These models and tools are then animated by the timeline protocol entirely independently of the framework. In other words, the timeline source's data file contents are relevant only to animation interfaces, not to the loading framework.
The framework itself is extremely lightweight: the range of GUI elements needed is tiny (and so quickly built using SVG), so there was no need for a large, pre-built 'bloatware' framework.
Furthermore, with direct, physical URLs, routing is superfluous.
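The loading idea described above can be sketched as follows. This is an illustration only, assuming hypothetical names (`PanelDescriptor`, `mount`) rather than the project's real API: the framework reads descriptors from a data file and mounts each component by its direct URL, with no knowledge of what the component actually is.

```typescript
// Hypothetical descriptor, as it might appear in a data file.
interface PanelDescriptor {
  id: string;  // slot in the central animation panel
  src: string; // direct, physical URL of the SVG/model — no routing layer
}

// The loader knows only about descriptors, never about domain content;
// the same code mounts an instrument model or a theory tool unchanged.
function mount(descriptors: PanelDescriptor[]): Map<string, string> {
  const panel = new Map<string, string>();
  for (const d of descriptors) {
    panel.set(d.id, d.src); // in a browser: fetch(d.src) and inject the SVG
  }
  return panel;
}

// Example data-file contents (illustrative paths):
const config: PanelDescriptor[] = [
  { id: "left", src: "models/fiddle.svg" },
  { id: "right", src: "tools/circle-of-fifths.svg" },
];

const panel = mount(config);
```

The festival-stage analogy holds here: the loader provides slots, and each act (component) brings everything else itself via its data file.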
Scalable
If the framework is to be used on devices with less screen real estate, some control over the arrangement (and possibly scope) of information displayed will be necessary, at all levels.
This goes beyond the remit of both the proof-of-concept and of 'responsive' design. My feeling is that on the smallest devices, only one of several possible contexts will be supported, in the form of a restriction on the range or number of models.
Keywords
online music learning,
online music lessons
|
distance music learning,
distance music lessons
|
remote music lessons,
remote music learning
|
p2p music lessons,
p2p music learning
|
music visualisation,
music visualization
|
musical instrument models,
interactive music instrument models
|
music theory tools,
musical theory
|
p2p music interworking,
p2p musical interworking
|
comparative musicology,
ethnomusicology
|
world music,
international music
|
folk music,
traditional music
|
P2P musical interworking,
peer-to-peer musical interworking
|
WebGL, Web3D,
WebVR, WebAR
|
Virtual Reality,
Augmented or Mixed Reality
|
Artificial Intelligence,
Machine Learning
|
Scalable Vector Graphics,
SVG
|
3D Cascading Style Sheets,
CSS3D
|
X3Dom,
XML3D
Comments, questions and (especially) critique welcome.