MIDI: dedicated input names for common types #33
[param midi_cc1], [param midi_cc2_ch3], [param midi_bend], [param midi_bend_ch4], [param midi_drum36], [param midi_vel60], and [param midi_vel64_ch5] are now supported in the dev branch. The most obvious missing item is MIDI notes (pitches). This is tricky since a single [param] can't capture both pitch and velocity, but more so because of polyphony. We could support translating MIDI pitch to a monophonic-style CV pitch input, or even polyphony with note1, note2, etc., but this requires some scheme for allocating notes (first held, last held, highest, lowest, etc.). It seems like this might be better handed over to user-level code? The older "raw" style of MIDI handling is still available as a fallback.
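For illustration only, here is a minimal sketch of the kind of user-level, last-note-priority monophonic handler that could sit on top of the raw MIDI stream. The struct and method names are hypothetical, not part of oopsy; it just shows how note-on/off bytes could collapse into a single pitch + gate pair.

```cpp
// Hypothetical user-level mono note handler (last-note priority).
// Not oopsy code; a sketch of one possible allocation scheme.
#include <cstdint>
#include <vector>
#include <algorithm>

struct MonoNotePriority {
    std::vector<uint8_t> held;   // notes currently held, oldest first
    float pitch = 0.f;           // MIDI pitch of the selected note (0..127)
    float gate  = 0.f;           // 1 while any note is held

    void noteOn(uint8_t note, uint8_t vel) {
        if (vel == 0) { noteOff(note); return; }   // velocity-0 note-on acts as note-off
        held.erase(std::remove(held.begin(), held.end(), note), held.end());
        held.push_back(note);                      // newest note wins
        pitch = float(note);
        gate  = 1.f;
    }

    void noteOff(uint8_t note) {
        held.erase(std::remove(held.begin(), held.end(), note), held.end());
        if (held.empty()) { gate = 0.f; }                   // release
        else              { pitch = float(held.back()); }   // fall back to previously held note
    }
};
```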
[param midi_clock] and [param midi_play] are also included. Note for future questions: MIDI clock runs at 24 ppqn. Although there will be a tiny bit of slop (0.3% per beat) and latency due to the callback rate, it is almost certainly below the threshold of perception (assuming the MIDI itself is reliable...) -- but dropped clocks are possible. At 180 bpm that means 72 ticks per second, a 13-14 ms period. Daisy callbacks (param updates) happen at worst every 1.5 ms (and could be much faster), but it is still possible that clock pulses could be missed. An adaptive tap-tempo-style algorithm that can handle dropped events would be a good abstraction to add to Oopsy.
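A rough sketch of that adaptive idea, under the assumption that an interval close to a multiple of the current period estimate means ticks were dropped rather than the tempo suddenly halving. The names here (ClockFollower, tick, bpm) are made up for the example and are not existing oopsy API.

```cpp
// Sketch: follow 24 ppqn MIDI clock while tolerating dropped ticks.
#include <algorithm>
#include <cmath>

struct ClockFollower {
    double period = 0.0;   // smoothed seconds per MIDI clock tick
    double last   = -1.0;  // timestamp of previous tick, seconds

    // call on every received 0xF8 clock byte, with a timestamp in seconds
    void tick(double now) {
        if (last >= 0.0) {
            double interval = now - last;
            if (period <= 0.0) {
                period = interval;                       // first estimate
            } else {
                // how many nominal ticks fit in this interval (>= 1)?
                double n = std::max(1.0, std::round(interval / period));
                double perTick = interval / n;           // treat n-1 ticks as dropped
                period += 0.1 * (perTick - period);      // gentle smoothing
            }
        }
        last = now;
    }

    double bpm() const {
        // 24 ticks per quarter note
        return period > 0.0 ? 60.0 / (period * 24.0) : 0.0;
    }
};
```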
For allocating voices based on a kind of drum machine model, the dev branch of oopsy already has things like [param midi_drum36], which gives a velocity value scaled from 0 to 1. For allocating voices based on a pitch model, we might want to look at things in a similar way to existing MIDI voice handlers like MutantBrain, Yarns, Befaco MIDIThing, etc., which have notions of fixed polyphony modes/apps. I'd like to put something together for [param] inputs along these lines too, which would probably be a lot easier to deal with (and perhaps less overhead) than the current raw MIDI input stream. So, e.g., [param midi_note2_pitch] would give you the MIDI pitch value of the 2nd held note. The existence of a "midi_note2" would put the note handling into a duophonic mode; the existence of a "midi_note4" would put it into quadraphonic mode, etc. These could also have _chX postfixes, so that several monophonic voices on different channels could also be supported. What I'm less sure about yet is how to set the voice priority (highest/lowest/last/first/cycling?) -- perhaps just [param midi_note2_last]? For reference:
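To make the priority question concrete, here is a hedged sketch of what a fixed-polyphony allocator behind names like [param midi_note2_pitch] might look like. The Priority enum mirrors the highest/lowest/last/first options discussed above; none of this is existing oopsy code, and the voice count is just a template parameter standing in for the duophonic/quadraphonic modes.

```cpp
// Sketch: fixed-polyphony voice allocation with selectable steal priority.
#include <array>
#include <cstdint>

enum class Priority { Last, First, Highest, Lowest };

template <int NUM_VOICES>
struct PolyAllocator {
    struct Voice { uint8_t note = 0, vel = 0; bool active = false; uint32_t age = 0; };
    std::array<Voice, NUM_VOICES> voices{};
    uint32_t counter = 0;
    Priority priority = Priority::Last;

    void noteOn(uint8_t note, uint8_t vel) {
        int slot = -1;
        for (int i = 0; i < NUM_VOICES; ++i)
            if (!voices[i].active) { slot = i; break; }    // prefer a free voice
        if (slot < 0) slot = steal();                      // otherwise steal one
        voices[slot] = { note, vel, true, ++counter };
    }

    void noteOff(uint8_t note) {
        for (auto& v : voices)
            if (v.active && v.note == note) v.active = false;
    }

    // choose which voice to steal according to the priority mode
    int steal() {
        int best = 0;
        for (int i = 1; i < NUM_VOICES; ++i) {
            const Voice& a = voices[i];
            const Voice& b = voices[best];
            switch (priority) {
                case Priority::Last:    if (a.age  < b.age)  best = i; break; // oldest loses
                case Priority::First:   if (a.age  > b.age)  best = i; break; // newest loses
                case Priority::Highest: if (a.note < b.note) best = i; break; // lowest loses
                case Priority::Lowest:  if (a.note > b.note) best = i; break; // highest loses
            }
        }
        return best;
    }
};
```

For example, a "midi_note2" param set would instantiate something like PolyAllocator<2>, and a _last/_highest suffix would select the Priority mode.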
A ton of updates for both MIDI input & output are now in the dev branch. Now documented here: https://github.com/electro-smith/oopsy/wiki/MIDI-Input

Still on the (MAYBE) TODO list:

- Similar to the use of [out 5 midi_drum36], [out 6 midi_cc1], etc., add support for [param midi_cc1], [param midi_drum36], etc. (a rough sketch of the name matching is below).
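As a purely illustrative sketch (not the oopsy generator's actual parsing code), matching the naming convention could look something like this: parse a param name such as "midi_cc2_ch3" into a CC number and channel, then route incoming control-change bytes to it, scaled 0..1 like the other midi_* params.

```cpp
// Sketch: match "midi_ccN[_chM]" param names and route CC messages to them.
#include <cstdint>
#include <cstdio>
#include <string>

struct CCBinding { int cc = -1; int channel = 0; };  // channel 0 = omni

// returns true if the param name follows the midi_ccN[_chM] pattern
bool parseCCName(const std::string& name, CCBinding& out) {
    int cc = 0, ch = 0;
    if (std::sscanf(name.c_str(), "midi_cc%d_ch%d", &cc, &ch) == 2) {
        out = { cc, ch };
        return true;
    }
    if (std::sscanf(name.c_str(), "midi_cc%d", &cc) == 1) {
        out = { cc, 0 };
        return true;
    }
    return false;
}

// apply an incoming CC message to a bound param value,
// assuming the _chX suffix is 1-based while the status-byte channel is 0-based
bool applyCC(const CCBinding& b, uint8_t channel, uint8_t cc, uint8_t value, float& param) {
    if (cc != b.cc) return false;
    if (b.channel != 0 && channel != b.channel - 1) return false;
    param = value / 127.f;   // scale 0..127 to 0..1
    return true;
}
```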