<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>OMR-Research</title>
<link rel=stylesheet type="text/css" href="css/OMR-Research.css">
</head>
<body>
<section class="page-header">
<h1>Bibliography on Optical Music Recognition</h1>
<p>Last updated: 01.12.2024</p>
<a href="https://github.com/OMR-Research/omr-research.github.io" class="btn">View on GitHub</a>
<table class="page-header-table" id="navigation-table">
<tr>
<td><a href="index.html" class="btn-light">Sorted by Year</a></td>
<td><a href="omr-research-compact.html" class="btn-light">Sortey by Year (Compact)</a></td>
<td><a href="omr-research-sorted-by-key.html" class="btn-light">Sorted by Key</a> </td>
<td><a href="omr-related-research.html" class="btn-light">Related research</a></td>
<td><a href="omr-research-unverified.html" class="btn-light">Unverified research</a></td>
</tr>
</table>
</section>
<!-- This document was automatically generated with bibtex2html 1.96
(see http://www.lri.fr/~filliatr/bibtex2html/),
with the following command:
BibTeX2HTML/OSX_x86_64/bibtex2html -s omr-style --use-keys --no-keywords --nodoc -o OMR-Research-Key OMR-Research.bib -->
<table>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Achankunju2018">Achankunju2018</a>]
</td>
<td class="bibtexitem">
Sanu Pulimootil Achankunju.
Music Search Engine from Noisy OMR Data.
In Jorge Calvo-Zaragoza, Jan Hajič jr., and Alexander Pacha,
editors, <em>1st International Workshop on Reading Music Systems</em>, pages
23-24, Paris, France, 2018.
[ <a href="OMR-Research-Key_bib.html#Achankunju2018">bib</a> |
<a href="https://sites.google.com/view/worms2018/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Adamska2015">Adamska2015</a>]
</td>
<td class="bibtexitem">
Julia Adamska, Mateusz Piecuch, Mateusz Podgórski, Piotr Walkiewicz, and
Ewa Lukasik.
Mobile System for Optical Music Recognition and Music Sound
Generation.
In Khalid Saeed and Wladyslaw Homenda, editors, <em>Computer
Information Systems and Industrial Management</em>, pages 571-582, Cham, 2015.
Springer International Publishing.
ISBN 978-3-319-24369-6.
[ <a href="OMR-Research-Key_bib.html#Adamska2015">bib</a> |
<a href="http://dx.doi.org/10.1007/978-3-319-24369-6_48">DOI</a> ]
<blockquote><font size="-1">
The paper presents a mobile system for generating a melody based on a photo of a musical score. The client-server architecture was applied. The client role is designated to a mobile application responsible for taking a photo of a score, sending it to the server for further processing and playing mp3 file received from the server. The server role is to recognize notes from the image, generate mp3 file and send it to the client application. The key element of the system is the program realizing the algorithm of notes recognition. It is based on the decision trees and characteristics of the individual symbols extracted from the image. The system is implemented in the Windows Phone 8 framework and uses a cloud operating system Microsoft Azure. It enables easy archivization of photos, recognized notes in the Music XML format and generated mp3 files. An easy transition to other mobile operating systems is possible as well as processing multiple music collections scans.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="AlfaroContreras2020">AlfaroContreras2020</a>]
</td>
<td class="bibtexitem">
María Alfaro-Contreras, Jorge Calvo-Zaragoza, and José M.
Iñesta.
Reconocimiento holístico de partituras musicales.
Technical report, Departamento de Lenguajes y Sistemas Informáticos,
Universidad de Alicante, Spain, 2020.
[ <a href="OMR-Research-Key_bib.html#AlfaroContreras2020">bib</a> |
<a href="https://rua.ua.es/dspace/bitstream/10045/108270/1/Reconocimiento_holistico_de_partituras_musicales.pdf">.pdf</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="AlfaroContreras2021">AlfaroContreras2021</a>]
</td>
<td class="bibtexitem">
María Alfaro-Contreras, Jose J. Valero-Mas, and José Manuel
Iñesta.
Neural architectures for exploiting the components of Agnostic
Notation in Optical Music Recognition.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 33-37, Alicante, Spain, 2021.
[ <a href="OMR-Research-Key_bib.html#AlfaroContreras2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="AlfaroContreras2023">AlfaroContreras2023</a>]
</td>
<td class="bibtexitem">
María Alfaro-Contreras.
Few-Shot Music Symbol Classification via Self-Supervised Learning and
Nearest Neighbor.
In Jorge Calvo-Zaragoza, Alexander Pacha, and Elona Shatri, editors,
<em>Proceedings of the 5th International Workshop on Reading Music
Systems</em>, pages 39-43, Milan, Italy, 2023.
[ <a href="OMR-Research-Key_bib.html#AlfaroContreras2023">bib</a> |
<a href="http://dx.doi.org/10.48550/arXiv.2311.04091">DOI</a> |
<a href="https://sites.google.com/view/worms2023/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Alirezazadeh2014">Alirezazadeh2014</a>]
</td>
<td class="bibtexitem">
Fatemeh Alirezazadeh and Mohammad Reza Ahmadzadeh.
Effective staff line detection, restoration and removal approach for
different quality of scanned handwritten music sheets.
<em>Journal of Advanced Computer Science & Technology</em>, 3 (2):
136-142, 2014.
[ <a href="OMR-Research-Key_bib.html#Alirezazadeh2014">bib</a> |
<a href="http://dx.doi.org/10.14419/jacst.v3i2.3196">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Andronico1982">Andronico1982</a>]
</td>
<td class="bibtexitem">
Alfio Andronico and Alberto Ciampa.
On Automatic Pattern Recognition and Acquisition of Printed Music.
In <em>International Computer Music Conference</em>, Venice, Italy,
1982. Michigan Publishing.
[ <a href="OMR-Research-Key_bib.html#Andronico1982">bib</a> |
<a href="http://hdl.handle.net/2027/spo.bbp2372.1982.024">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Anquetil2000">Anquetil2000</a>]
</td>
<td class="bibtexitem">
Éric Anquetil, Bertrand Coüasnon, and Frédéric Dambreville.
A Symbol Classifier Able to Reject Wrong Shapes for Document
Recognition Systems.
In Atul K. Chhabra and Dov Dori, editors, <em>Graphics Recognition
Recent Advances</em>, pages 209-218, Berlin, Heidelberg, 2000. Springer Berlin
Heidelberg.
ISBN 978-3-540-40953-3.
[ <a href="OMR-Research-Key_bib.html#Anquetil2000">bib</a> |
<a href="https://link.springer.com/chapter/10.1007%2F3-540-40953-X_17">http</a> ]
<blockquote><font size="-1">
We propose in this paper a new framework to develop a transparent classifier able to deal with reject notions. The generated classifier can be characterized by a strong reliability without losing good properties in generalization. We show on a musical scores recognition system that this classifier is very well suited to develop a complete document recognition system. Indeed this classifier allows them firstly to extract known symbols in a document (text for example) and secondly to validate segmentation hypotheses. Tests had been successfully performed on musical and digit symbols databases.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Anstice1996">Anstice1996</a>]
</td>
<td class="bibtexitem">
Jamie Anstice, Tim Bell, Andy Cockburn, and Martin Setchell.
The design of a pen-based musical input system.
In <em>6th Australian Conference on Computer-Human Interaction</em>,
pages 260-267, 1996.
[ <a href="OMR-Research-Key_bib.html#Anstice1996">bib</a> |
<a href="http://dx.doi.org/10.1109/OZCHI.1996.560019">DOI</a> ]
<blockquote><font size="-1">
Computerising the task of music editing can avoid a considerable amount of tedious work for musicians, particularly for tasks such as key transposition, part extraction, and layout. However the task of getting the music onto the computer can still be time consuming and is usually done with the help of bulky equipment. This paper reports on the design of a pen-based input system that uses easily-learned gestures to facilitate fast input, particularly if the system must be portable. The design is based on observations of musicians writing music by hand, and an analysis of the symbols in samples of music. A preliminary evaluation of the system is presented, and the speed is compared with the alternatives of handwriting, synthesiser keyboard input, and optical music recognition. Evaluations suggest that the gesture-based system could be approximately three times as fast as other methods of music data entry reported in the literature.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Armand1993">Armand1993</a>]
</td>
<td class="bibtexitem">
Jean-Pierre Armand.
Musical score recognition: A hierarchical and recursive approach.
In <em>2nd International Conference on Document Analysis and
Recognition</em>, pages 906-909, 1993.
[ <a href="OMR-Research-Key_bib.html#Armand1993">bib</a> |
<a href="http://dx.doi.org/10.1109/ICDAR.1993.395590">DOI</a> ]
<blockquote><font size="-1">
Musical scores for live music show specific characteristics: large format, orchestral score, bad quality of (photo) copies. Moreover such music is generally handwritten. The author addresses the music recognition problem for such scores, and shows a dedicated filtering that has been developed, both for segmentation and correction of copy defects. The recognition process involves geometrical and topographical parameters evaluation. The whole process (filtering + recognition) is recursively applied on images and sub-images, in a knowledge-based way.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Audiveris">Audiveris</a>]
</td>
<td class="bibtexitem">
Hervé Bitteur.
Audiveris.
<a href="https://github.com/audiveris">https://github.com/audiveris</a>, 2004.
[ <a href="OMR-Research-Key_bib.html#Audiveris">bib</a> |
<a href="https://github.com/audiveris">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baba2012">Baba2012</a>]
</td>
<td class="bibtexitem">
Tetsuaki Baba, Yuya Kikukawa, Toshiki Yoshiike, Tatsuhiko Suzuki, Rika Shoji,
Kumiko Kushiyama, and Makoto Aoki.
Gocen: A Handwritten Notational Interface for Musical Performance and
Learning Music.
In <em>ACM SIGGRAPH 2012 Emerging Technologies</em>, pages 9-9, New
York, USA, 2012. ACM.
ISBN 978-1-4503-1680-4.
[ <a href="OMR-Research-Key_bib.html#Baba2012">bib</a> |
<a href="http://dx.doi.org/10.1145/2343456.2343465">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bacon1988">Bacon1988</a>]
</td>
<td class="bibtexitem">
Richard A. Bacon and Nicholas Paul Carter.
Recognising music automatically.
<em>Physics Bulletin</em>, 39 (7): 265, 1988.
[ <a href="OMR-Research-Key_bib.html#Bacon1988">bib</a> |
<a href="http://stacks.iop.org/0031-9112/39/i=7/a=013">http</a> ]
<blockquote><font size="-1">
Recognising characters typed in at a keyboard is a familiar task to most computers and one at which they excel, except that they (usually) insist on recognising what we have typed, rather than what we meant to type. A number of programs now on the market, however, go rather beyond merely recognising keystrokes on a keyboard, to actually recognising printed words on paper.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1991">Bainbridge1991</a>]
</td>
<td class="bibtexitem">
David Bainbridge.
Preliminary experiments in musical score recognition, 1991.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1991">bib</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1994">Bainbridge1994</a>]
</td>
<td class="bibtexitem">
David Bainbridge.
A complete optical music recognition system: Looking to the future.
Technical report, University of Canterbury, 1994a.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1994">bib</a> |
<a href="https://ir.canterbury.ac.nz/handle/10092/14874">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1994a">Bainbridge1994a</a>]
</td>
<td class="bibtexitem">
David Bainbridge.
Optical music recognition: Progress report 1.
Technical report, Department of Computer Science, University of
Canterbury, 1994b.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1994a">bib</a> |
<a href="http://hdl.handle.net/10092/9670">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1996">Bainbridge1996</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Tim Bell.
An extensible optical music recognition system.
<em>Australian Computer Science Communications</em>, 18: 308-317,
1996.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1996">bib</a> |
<a href="http://www.cs.waikato.ac.nz/~davidb/publications/acsc96/final.html">.html</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1997">Bainbridge1997</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Tim Bell.
Dealing with Superimposed Objects in Optical Music Recognition.
In <em>6th International Conference on Image Processing and its
Applications</em>, number 443, pages 756-760, 1997.
ISBN 0 85296 692 X.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1997">bib</a> |
<a href="http://dx.doi.org/10.1049/cp:19970997">DOI</a> ]
<blockquote><font size="-1">
Optical music recognition (OMR) involves identifying musical symbols on a scanned sheet of music, and interpreting them so that the music can either be played by the computer, or put into a music editor. Applications include providing an automatic accompaniment, transposing or extracting parts for individual instruments, and performing an automated musicological analysis of the music. A key problem with music recognition, compared with character recognition, is that symbols very often overlap on the page. The most significant form of this problem is that the symbols are superimposed on a five-line staff. Although the staff provides valuable positional information, it creates ambiguity because it is difficult to determine whether a pixel would be black or white if the staff line was not there. The other main difference between music recognition and character recognition is the set of permissible symbols. In text, the alphabet size is fixed. Conversely, in music notation there is no standard "alphabet" of shapes, with composers inventing new notation where necessary, and music for particular instruments using specialised notation where appropriate. The focus of this paper is on techniques we have developed to deal with superimposed objects.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1997a">Bainbridge1997a</a>]
</td>
<td class="bibtexitem">
David Bainbridge.
<em>Extensible optical music recognition</em>.
PhD thesis, University of Canterbury, 1997.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1997a">bib</a> |
<a href="http://hdl.handle.net/10092/9420">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1997b">Bainbridge1997b</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Nicholas Paul Carter.
Automatic reading of music notation.
In H. Bunke and P. Wang, editors, <em>Handbook of Character
Recognition and Document Image Analysis</em>, pages 583-603. World Scientific,
Singapore, 1997.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1997b">bib</a> |
<a href="http://dx.doi.org/10.1142/9789812830968_0022">DOI</a> ]
<blockquote><font size="-1">
The aim of Optical Music Recognition (OMR) is to convert optically scanned pages of music into a machine-readable format. In this tutorial level discussion of the topic, an historical background of work is presented, followed by a detailed explanation of the four key stages to an OMR system: stave line identification, musical object location, symbol identification, and musical understanding. The chapter also shows how recent work has addressed the issues of touching and fragmented objects—objectives that must be solved in a practical OMR system. The report concludes by discussing remaining problems, including measuring accuracy.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1998">Bainbridge1998</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Stuart Inglis.
Musical image compression.
In <em>Data Compression Conference</em>, pages 209-218, 1998.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1998">bib</a> |
<a href="http://dx.doi.org/10.1109/DCC.1998.672149">DOI</a> ]
<blockquote><font size="-1">
Optical music recognition aims to convert the vast repositories of sheet music in the world into an on-line digital format. In the near future it will be possible to assimilate music into digital libraries and users will be able to perform searches based on a sung melody in addition to typical text-based searching. An important requirement for such a system is the ability to reproduce the original score as accurately as possible. Due to the huge amount of sheet music available, the efficient storage of musical images is an important topic of study. This paper investigates whether the "knowledge" extracted from the optical music recognition (OMR) process can be exploited to gain higher compression than the JBIG international standard for bi-level image compression. We present a hybrid approach where the primitive shapes of music extracted by the optical music recognition process-note heads, note stems, staff lines and so forth-are fed into a graphical symbol based compression scheme originally designed for images containing mainly printed text. Using this hybrid approach the average compression rate for a single page is improved by 3.5% over JBIG. When multiple pages with similar typography are processed in sequence, the file size is decreased by 4-8%. The relevant background to both optical music recognition and textual image compression is presented. Experiments performed on 66 test images are described, outlining the combinations of parameters that were examined to give the best results.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge1999">Bainbridge1999</a>]
</td>
<td class="bibtexitem">
David Bainbridge and K. Wijaya.
Bulk processing of optically scanned music.
In <em>7th International Conference on Image Processing and its
Applications</em>, pages 474-478. Institution of Engineering and Technology,
1999.
[ <a href="OMR-Research-Key_bib.html#Bainbridge1999">bib</a> |
<a href="http://dx.doi.org/10.1049/cp:19990367">DOI</a> |
<a href="http://digital-library.theiet.org/content/conferences/10.1049/cp_19990367">http</a> ]
<blockquote><font size="-1">
For many years now optical music recognition (OMR) has been advocated as the leading methodology for transferring the vast repositories of music notation from paper to digital database. Other techniques exist for acquiring music on-line; however, these methods require operators with musical and computer skills. The notion, therefore, of an entirely automated process through OMR is highly attractive. It has been an active area of research since its inception in 1966 (Pruslin), and even though there has been the development of many systems with impressively high accuracy rates it is surprising to note that there is little evidence of large collections being processed with the technology - work by Carter (1994) and Bainbridge and Carter (1997) being the only known notable exception. This paper outlines some of the insights gained, and algorithms implemented, through the practical experience of converting collections in excess of 400 pages. In doing so, the work demonstrates that there are additional factors not currently considered by other research centres that are necessary for OMR to reach its full potential.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge2001">Bainbridge2001</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Tim Bell.
The Challenge of Optical Music Recognition.
<em>Computers and the Humanities</em>, 35 (2): 95-121, 2001.
ISSN 1572-8412.
[ <a href="OMR-Research-Key_bib.html#Bainbridge2001">bib</a> |
<a href="http://dx.doi.org/10.1023/A:1002485918032">DOI</a> ]
<blockquote><font size="-1">
This article describes the challenges posed by optical music recognition
- a topic in computer science that aims to convert scanned pages of
music into an on-line format. First, the problem is described; then a
generalised framework for software is presented that emphasises key
stages that must be solved: staff line identification, musical object
location, musical feature classification, and musical semantics.
Next, significant research projects in the area are reviewed, showing
how each fits the generalised framework. The article concludes by
discussing perhaps the most open question in the field: how to compare
the accuracy and success of rival systems, highlighting certain steps
that help ease the task.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge2001a">Bainbridge2001a</a>]
</td>
<td class="bibtexitem">
David Bainbridge, Gerry Bernbom, Mary Wallace Davidson, Andrew P. Dillon,
Matthey Dovey, Jon W. Dunn, Michael Fingerhut, Ichiro Fujinaga, and Eric J.
Isaacson.
Digital Music Libraries - Research and Development.
In <em>1st ACM/IEEE-CS Joint Conference on Digital Libraries</em>,
pages 446-448, Roanoke, Virginia, USA, 2001.
[ <a href="OMR-Research-Key_bib.html#Bainbridge2001a">bib</a> |
<a href="http://dx.doi.org/10.1145/379437.379765">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge2003">Bainbridge2003</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Tim Bell.
A music notation construction engine for optical music recognition.
<em>Software: Practice and Experience</em>, 33 (2): 173-200, 2003.
ISSN 1097-024X.
[ <a href="OMR-Research-Key_bib.html#Bainbridge2003">bib</a> |
<a href="http://dx.doi.org/10.1002/spe.502">DOI</a> ]
<blockquote><font size="-1">
Optical music recognition (OMR) systems are used to convert music scanned from paper into a format suitable for playing or editing on a computer. These systems generally have two phases: recognizing the graphical symbols (such as note-heads and lines) and determining the musical meaning and relationships of the symbols (such as the pitch and rhythm of the notes). In this paper we explore the second phase and give a two-step approach that admits an economical representation of the parsing rules for the system. The approach is flexible and allows the system to be extended to new notations with little effort—the current system can parse common music notation, Sacred Harp notation and plainsong. It is based on a string grammar and a customizable graph that specifies relationships between musical objects. We observe that this graph can be related to printing as well as recognizing music notation, bringing the opportunity for cross-fertilization between the two areas of research. Copyright © 2003 John Wiley & Sons, Ltd.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge2006">Bainbridge2006</a>]
</td>
<td class="bibtexitem">
David Bainbridge and Tim Bell.
Identifying music documents in a collection of images.
In <em>7th International Conference on Music Information
Retrieval</em>, pages 47-52, Victoria, Canada, 2006.
[ <a href="OMR-Research-Key_bib.html#Bainbridge2006">bib</a> |
<a href="http://hdl.handle.net/10092/141">http</a> ]
<blockquote><font size="-1">
Digital libraries and search engines are now well-equipped to find images of documents based on queries. Many images of music scores are now available, often mixed up with textual documents and images. For example, using the Google “images” search feature, a search for “Beethoven” will return a number of scores and manuscripts as well as pictures of the composer. In this paper we report on an investigation into methods to mechanically determine if a particular document is indeed a score, so that the user can specify that only musical scores should be returned. The goal is to find a minimal set of features that can be used as a quick test that will be applied to large numbers of documents.
A variety of filters were considered, and two promising ones (run-length ratios and Hough transform) were evaluated. We found that a method based around run-lengths in vertical scans (RL) out-performs a comparable algorithm using the Hough transform (HT). On a test set of 1030 images, RL achieved recall and precision of 97.8% and 88.4% respectively while HT achieved 97.8% and 73.5%. In terms of processor time, RL was more than five times as fast as HT.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bainbridge2014">Bainbridge2014</a>]
</td>
<td class="bibtexitem">
David Bainbridge, Xiao Hu, and J. Stephen Downie.
A Musical Progression with Greenstone: How Music Content Analysis and
Linked Data is Helping Redefine the Boundaries to a Music Digital Library.
In <em>1st International Workshop on Digital Libraries for
Musicology</em>. Association for Computing Machinery, 2014.
ISBN 9781450330022.
[ <a href="OMR-Research-Key_bib.html#Bainbridge2014">bib</a> |
<a href="http://dx.doi.org/10.1145/2660168.2660170">DOI</a> ]
<blockquote><font size="-1">
Despite the recasting of the web's technical capabilities through Web 2.0, conventional digital library software architectures, from which many of our leading Music Digital Libraries (MDLs) are formed, result in digital resources that are, surprisingly, disconnected from other online sources of information, and embody a "read-only" mindset. Leveraging from Music Information Retrieval (MIR) techniques and Linked Open Data (LOD), in this paper we demonstrate a new form of music digital library that encompasses management, discovery, delivery, and analysis of the musical content it contains. Utilizing open source tools such as Greenstone, audioDB, Meandre, and Apache Jena we present a series of transformations to a musical digital library sourced from audio files that steadily increases the level of support provided to the user for musicological study. While the seed for this work was motivated by better supporting musicologists in a digital library, the developed software architecture alters the boundaries to what is conventionally thought of as a digital library, and in doing so challenges core assumptions made in mainstream digital library software design. Copyright 2014 ACM.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Balke2015">Balke2015</a>]
</td>
<td class="bibtexitem">
Stefan Balke, Sanu Pulimootil Achankunju, and Meinard Müller.
Matching Musical Themes Based on Noisy OCR and OMR Input.
In <em>International Conference on Acoustics, Speech and Signal
Processing</em>, pages 703-707. Institute of Electrical and Electronics
Engineers Inc., 2015.
ISBN 9781467369978.
[ <a href="OMR-Research-Key_bib.html#Balke2015">bib</a> |
<a href="http://dx.doi.org/10.1109/ICASSP.2015.7178060">DOI</a> ]
<blockquote><font size="-1">
In the year 1948, Barlow and Morgenstern published the book 'A Dictionary of Musical Themes', which contains 9803 important musical themes from the Western classical music literature. In this paper, we deal with the problem of automatically matching these themes to other digitally available sources. To this end, we introduce a processing pipeline that automatically extracts from the scanned pages of the printed book textual metadata using Optical Character Recognition (OCR) as well as symbolic note information using Optical Music Recognition (OMR). Due to the poor printing quality of the book, the OCR and OMR results are quite noisy containing numerous extraction errors. As one main contribution, we adjust alignment techniques for matching musical themes based on the OCR and OMR input. In particular, we show how the matching quality can be substantially improved by fusing the OCR- and OMR-based matching results. Finally, we report on our experiments within the challenging Barlow and Morgenstern scenario, which also indicates the potential of our techniques when considering other sources of musical themes such as digital music archives and the world wide web.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Balke2018">Balke2018</a>]
</td>
<td class="bibtexitem">
Stefan Balke, Christian Dittmar, Jakob Abeßer, Klaus Frieler, Martin
Pfleiderer, and Meinard Müller.
Bridging the Gap: Enriching YouTube Videos with Jazz Music
Annotations.
<em>Frontiers in Digital Humanities</em>, 5: 1-11, 2018.
ISSN 2297-2668.
[ <a href="OMR-Research-Key_bib.html#Balke2018">bib</a> |
<a href="http://dx.doi.org/10.3389/fdigh.2018.00001">DOI</a> ]
<blockquote><font size="-1">
Web services allow permanent access to music from all over the world. Especially in the case of web services with user-supplied content, e.g., YouTube(TM), the available metadata is often incomplete or erroneous. On the other hand, a vast amount of high-quality and musically relevant metadata has been annotated in research areas such as Music Information Retrieval (MIR). Although they have great potential, these musical annotations are often inaccessible to users outside the academic world. With our contribution, we want to bridge this gap by enriching publicly available multimedia content with musical annotations available in research corpora, while maintaining easy access to the underlying data. Our web-based tools offer researchers and music lovers novel possibilities to interact with and navigate through the content. In this paper, we consider a research corpus called the Weimar Jazz Database (WJD) as an illustrating example scenario. The WJD contains various annotations related to famous jazz solos. First, we establish a link between the WJD annotations and corresponding YouTube videos employing existing retrieval techniques. With these techniques, we were able to identify 988 corresponding YouTube videos for 329 solos out of 456 solos contained in the WJD. We then embed the retrieved videos in a recently developed web-based platform and enrich the videos with solo transcriptions that are part of the WJD. Furthermore, we integrate publicly available data resources from the Semantic Web in order to extend the presented information, for example, with a detailed discography or artists-related information. Our contribution illustrates the potential of modern web-based technologies for the digital humanities, and novel ways for improving access and interaction with digitized multimedia content.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro2016">Baro2016</a>]
</td>
<td class="bibtexitem">
Arnau Baró, Pau Riba, and Alicia Fornés.
Towards the recognition of compound music notes in handwritten music
scores.
In <em>15th International Conference on Frontiers in Handwriting
Recognition</em>, pages 465-470. Institute of Electrical and Electronics
Engineers Inc., 2016.
ISBN 9781509009817.
[ <a href="OMR-Research-Key_bib.html#Baro2016">bib</a> |
<a href="http://dx.doi.org/10.1109/ICFHR.2016.0092">DOI</a> ]
<blockquote><font size="-1">
The recognition of handwritten music scores still remains an open
problem. The existing approaches can only deal with very simple handwritten
scores mainly because of the variability in the handwriting style
and the variability in the composition of groups of music notes (i.e.
compound music notes). In this work we focus on this second problem
and propose a method based on perceptual grouping for the recognition
of compound music notes. Our method has been tested using several
handwritten music scores of the CVC-MUSCIMA database and compared
with a commercial Optical Music Recognition (OMR) software. Given
that our method is learning-free, the obtained results are promising.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro2017">Baro2017</a>]
</td>
<td class="bibtexitem">
Arnau Baró, Pau Riba, Jorge Calvo-Zaragoza, and Alicia Fornés.
Optical Music Recognition by Recurrent Neural Networks.
In <em>14th International Conference on Document Analysis and
Recognition</em>, pages 25-26, Kyoto, Japan, 2017. IEEE.
[ <a href="OMR-Research-Key_bib.html#Baro2017">bib</a> |
<a href="http://dx.doi.org/10.1109/ICDAR.2017.260">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro2018">Baro2018</a>]
</td>
<td class="bibtexitem">
Arnau Baró, Pau Riba, and Alicia Fornés.
A Starting Point for Handwritten Music Recognition.
In Jorge Calvo-Zaragoza, Jan Hajič jr., and Alexander Pacha,
editors, <em>1st International Workshop on Reading Music Systems</em>, pages
5-6, Paris, France, 2018.
[ <a href="OMR-Research-Key_bib.html#Baro2018">bib</a> |
<a href="https://sites.google.com/view/worms2018/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro2019">Baro2019</a>]
</td>
<td class="bibtexitem">
Arnau Baró, Pau Riba, Jorge Calvo-Zaragoza, and Alicia Fornés.
From Optical Music Recognition to Handwritten Music Recognition: A
baseline.
<em>Pattern Recognition Letters</em>, 123: 1-8, 2019.
ISSN 0167-8655.
[ <a href="OMR-Research-Key_bib.html#Baro2019">bib</a> |
<a href="https://doi.org/10.1016/j.patrec.2019.02.029">DOI</a> |
<a href="http://www.sciencedirect.com/science/article/pii/S0167865518303386">http</a> ]
<blockquote><font size="-1">
Optical Music Recognition (OMR) is the branch of document image analysis that aims to convert images of musical scores into a computer-readable format. Despite decades of research, the recognition of handwritten music scores, concretely the Western notation, is still an open problem, and the few existing works only focus on a specific stage of OMR. In this work, we propose a full Handwritten Music Recognition (HMR) system based on Convolutional Recurrent Neural Networks, data augmentation and transfer learning, that can serve as a baseline for the research community.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro2021">Baro2021</a>]
</td>
<td class="bibtexitem">
Arnau Baró, Carles Badal, Pau Torras, and Alicia Fornés.
Handwritten Historical Music Recognition through Sequence-to-Sequence
with Attention Mechanism.
In Jorge Calvo-Zaragoza and Alexander Pacha, editors,
<em>Proceedings of the 3rd International Workshop on Reading Music
Systems</em>, pages 55-59, Alicante, Spain, 2021.
[ <a href="OMR-Research-Key_bib.html#Baro2021">bib</a> |
<a href="https://sites.google.com/view/worms2021/proceedings">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baro-Mas2017">Baro-Mas2017</a>]
</td>
<td class="bibtexitem">
Arnau Baró-Mas.
Optical Music Recognition by Long Short-Term Memory Recurrent Neural
Networks.
Master's thesis, Universitat Autònoma de Barcelona, 2017.
[ <a href="OMR-Research-Key_bib.html#Baro-Mas2017">bib</a> |
<a href="http://www.cvc.uab.es/people/afornes/students/Master_ABaro2017.pdf">.pdf</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Barton2002">Barton2002</a>]
</td>
<td class="bibtexitem">
Louis W. G. Barton.
The NEUMES Project: digital transcription of medieval chant
manuscripts.
In <em>2nd International Conference on Web Delivering of Music</em>,
pages 211-218, 2002.
[ <a href="OMR-Research-Key_bib.html#Barton2002">bib</a> |
<a href="http://dx.doi.org/10.1109/WDM.2002.1176213">DOI</a> ]
<blockquote><font size="-1">
This paper introduces the NEUMES Project from a top-down perspective. The purpose of the project is to design a software infrastructure for digital transcription of medieval chant manuscripts, such that transcriptions can be interoperable across many types of applications programs. Existing software for modern music does not provide an effective solution. A distributed library of chant document resources for the Web is proposed, to encompass photographic images, transcriptions, and searchable databases of manuscript descriptions. The NEUMES encoding scheme for chant transcription is presented, with NeumesXML serving as a 'wrapper' for transmission, storage, and editorial markup of transcription data. A scenario of use is given and future directions for the project are briefly discussed.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Barton2005">Barton2005</a>]
</td>
<td class="bibtexitem">
Louis W. G. Barton, John A. Caldwell, and Peter G. Jeavons.
E-library of Medieval Chant Manuscript Transcriptions.
In <em>5th ACM/IEEE-CS Joint Conference on Digital Libraries</em>,
pages 320-329, Denver, CO, USA, 2005. ACM.
ISBN 1-58113-876-8.
[ <a href="OMR-Research-Key_bib.html#Barton2005">bib</a> |
<a href="http://dx.doi.org/10.1145/1065385.1065458">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baumann1992">Baumann1992</a>]
</td>
<td class="bibtexitem">
Stephan Baumann and Andreas Dengel.
Transforming Printed Piano Music into MIDI.
In <em>Advances in Structural and Syntactic Pattern Recognition</em>,
pages 363-372. World Scientific, 1992.
[ <a href="OMR-Research-Key_bib.html#Baumann1992">bib</a> |
<a href="http://dx.doi.org/10.1142/9789812797919_0030">DOI</a> ]
<blockquote><font size="-1">
This paper describes a recognition system for transforming printed piano music into the international standard MIDI for acoustic output generation. Because the system is adapted for processing musical scores, it follows a top-down strategy in order to take advantage of the hierarchical structuring. Applying a decision tree classifier and various musical rules, the system comes up with a recognition rate of 80 to 100% depending on the musical complexity of the input. The resulting symbolic representation in terms of so called MIDI-EVENTs can be easily understood by musical devices such as synthesizers, expanders, or keyboards.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baumann1993">Baumann1993</a>]
</td>
<td class="bibtexitem">
Stephan Baumann.
Document recognition of printed scores and transformation into
MIDI.
Technical report, Deutsches Forschungszentrum für Künstliche
Intelligenz GmbH, 1993.
[ <a href="OMR-Research-Key_bib.html#Baumann1993">bib</a> |
<a href="http://dx.doi.org/10.22028/D291-24925">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baumann1995">Baumann1995</a>]
</td>
<td class="bibtexitem">
Stephan Baumann.
A Simplified Attributed Graph Grammar for High-Level Music
Recognition.
In <em>3rd International Conference on Document Analysis and
Recognition</em>, pages 1080-1083. IEEE, 1995.
ISBN 0-8186-7128-9.
[ <a href="OMR-Research-Key_bib.html#Baumann1995">bib</a> |
<a href="http://dx.doi.org/10.1109/ICDAR.1995.602096">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Baumann1995a">Baumann1995a</a>]
</td>
<td class="bibtexitem">
Stephan Baumann and Karl Tombre.
Report of the line drawing and music recognition working group.
In A. Lawrence Spitz and Andreas Dengel, editors, <em>Document
Analysis Systems</em>, pages 1080-1083, 1995.
[ <a href="OMR-Research-Key_bib.html#Baumann1995a">bib</a> |
<a href="http://dx.doi.org/10.1142/9789812797933">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bellini2001">Bellini2001</a>]
</td>
<td class="bibtexitem">
Pierfrancesco Bellini, Ivan Bruno, and Paolo Nesi.
Optical music sheet segmentation.
In <em>1st International Conference on WEB Delivering of Music</em>,
pages 183-190. Institute of Electrical & Electronics Engineers (IEEE),
2001.
ISBN 0769512844.
[ <a href="OMR-Research-Key_bib.html#Bellini2001">bib</a> |
<a href="http://dx.doi.org/10.1109/wdm.2001.990175">DOI</a> ]
<blockquote><font size="-1">
The optical music recognition problem has been addressed in several ways, obtaining suitable results only when simple music constructs are processed. The most critical phase of the optical music recognition process is the first analysis of the image sheet. The first analysis consists of segmenting the acquired sheet into smaller parts which may be processed to recognize the basic symbols. The segmentation module of the O<sup>3</sup> MR system (Object Oriented Optical Music Recognition) system is presented. The proposed approach is based on the adoption of projections for the extraction of basic symbols that constitute a graphic element of the music notation. A set of examples is also included.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bellini2004">Bellini2004</a>]
</td>
<td class="bibtexitem">
Pierfrancesco Bellini, Ivan Bruno, and Paolo Nesi.
An Off-Line Optical Music Sheet Recognition.
In <em>Visual Perception of Music Notation: On-Line and Off Line
Recognition</em>, pages 40-77. IGI Global, 2004.
[ <a href="OMR-Research-Key_bib.html#Bellini2004">bib</a> |
<a href="http://dx.doi.org/10.4018/978-1-59140-298-5.ch002">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bellini2007">Bellini2007</a>]
</td>
<td class="bibtexitem">
Pierfrancesco Bellini, Ivan Bruno, and Paolo Nesi.
Assessing Optical Music Recognition Tools.
<em>Computer Music Journal</em>, 31 (1): 68-93, 2007.
[ <a href="OMR-Research-Key_bib.html#Bellini2007">bib</a> |
<a href="http://dx.doi.org/10.1162/comj.2007.31.1.68">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bellini2008">Bellini2008</a>]
</td>
<td class="bibtexitem">
Pierfrancesco Bellini, Ivan Bruno, and Paolo Nesi.
Optical Music Recognition: Architecture and Algorithms.
In Kia Ng and Paolo Nesi, editors, <em>Interactive Multimedia Music
Technologies</em>, pages 80-110. IGI Global, Hershey, PA, USA, 2008.
[ <a href="OMR-Research-Key_bib.html#Bellini2008">bib</a> |
<a href="http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-59904-150-6.ch005">http</a> ]
<blockquote><font size="-1">
Optical music recognition is a key problem for coding western music sheets in the digital world. This problem has been addressed in several manners obtaining suitable results only when simple music constructs are processed. To this end, several different strategies have been followed, to pass from the simple music sheet image to a complete and consistent representation of music notation symbols (symbolic music notation or representation). Typically, image processing, pattern recognition and symbolic reconstruction are the technologies that have to be considered and applied in several manners in the architecture of the so called OMR (Optical Music Recognition) systems. In this chapter, the O3MR (Object Oriented Optical Music Recognition) system is presented. It allows producing from the image of a music sheet the symbolic representation and save it in XML format (WEDELMUSIC XML and MUSICXML). The algorithms used in this process are those of the image processing, image segmentation, neural network pattern recognition, and symbolic reconstruction and reasoning. Most of the solutions can be applied in other fields of image understanding. The development of the O3MR solution with all its algorithms has been partially supported by the European Commission, in the IMUTUS Research and Development project, while the related music notation editor has been partially funded by the research and development WEDELMUSIC project of the European Commission. The paper also includes a methodology for the assessment of other OMR systems. The set of metrics proposed has been used to assess the quality of results produced by the O3MR with respect to the best OMR on the market.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Beran1999">Beran1999</a>]
</td>
<td class="bibtexitem">
Tomáš Beran and Tomáš Macek.
Recognition of Printed Music Score.
In Petra Perner and Maria Petrou, editors, <em>Machine Learning and
Data Mining in Pattern Recognition</em>, pages 174-179. Springer Berlin
Heidelberg, 1999.
ISBN 978-3-540-48097-6.
[ <a href="OMR-Research-Key_bib.html#Beran1999">bib</a> |
<a href="http://dx.doi.org/10.1007/3-540-48097-8_14">DOI</a> ]
<blockquote><font size="-1">
This article describes our implementation of the Optical Music Recognition System (OMR). The system implemented in our project is based on the binary neural network ADAM. ADAM has been used for recognition of music symbols. Preprocessing was implemented by conventional techniques. We decomposed the OMR process into several phases. The results of these phases are summarized.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Blostein1990">Blostein1990</a>]
</td>
<td class="bibtexitem">
Dorothea Blostein and Lippold Haken.
Template matching for rhythmic analysis of music keyboard input.
In <em>10th International Conference on Pattern Recognition</em>, pages
767-770, 1990.
[ <a href="OMR-Research-Key_bib.html#Blostein1990">bib</a> |
<a href="http://dx.doi.org/10.1109/ICPR.1990.118213">DOI</a> ]
<blockquote><font size="-1">
A system that recognizes common rhythmic patterns through template matching is described. The use of template matching gives the user the unusual ability to modify the set of templates used for analysis. This modification effects a tradeoff between the temporal accuracy required of the input and the complexity of the recognizable rhythm patterns that happen to be common in a particular piece of music. The evolving implementation of this algorithm has received heavy use over a six-year period and has proven itself as a practical and reliable input method for fast music transcription. It is concluded that templates demonstrably provide the necessary temporal context for accurate rhythm recognition.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Blostein1991">Blostein1991</a>]
</td>
<td class="bibtexitem">
Dorothea Blostein and Lippold Haken.
Justification of Printed Music.
<em>Communications of the ACM</em>, 34 (3): 88-99, 1991.
ISSN 0001-0782.
[ <a href="OMR-Research-Key_bib.html#Blostein1991">bib</a> |
<a href="http://dx.doi.org/10.1145/102868.102874">DOI</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Blostein1992">Blostein1992</a>]
</td>
<td class="bibtexitem">
Dorothea Blostein and Henry S. Baird.
A Critical Survey of Music Image Analysis.
In <em>Structured Document Image Analysis</em>, pages 405-434.
Springer Berlin Heidelberg, 1992.
ISBN 978-3-642-77281-8.
[ <a href="OMR-Research-Key_bib.html#Blostein1992">bib</a> |
<a href="http://dx.doi.org/10.1007/978-3-642-77281-8_19">DOI</a> ]
<blockquote><font size="-1">
The research literature concerning the automatic analysis of images of printed and handwritten music notation, for the period 1966 through 1990, is surveyed and critically examined.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Blostein1992a">Blostein1992a</a>]
</td>
<td class="bibtexitem">
Dorothea Blostein and Nicholas Paul Carter.
Recognition of Music Notation: SSPR'90 Working Group Report.
In <em>Structured Document Image Analysis</em>, pages 573-574.
Springer Berlin Heidelberg, 1992.
ISBN 978-3-642-77281-8.
[ <a href="OMR-Research-Key_bib.html#Blostein1992a">bib</a> |
<a href="http://dx.doi.org/10.1007/978-3-642-77281-8_32">DOI</a> |
<a href="https://doi.org/10.1007/978-3-642-77281-8_32">http</a> ]
<blockquote><font size="-1">
This report summarizes the discussions of the Working Group on the Recognition of Music Notation, of the IAPR 1990 Workshop on Syntactic and Structural Pattern Recognition, Murray Hill, NJ, 13-15 June 1990. The participants were: D. Blostein, N. Carter, R. Haralick, T. Itagaki, H. Kato, H. Nishida, and R. Siromoney. The discussion was moderated by Nicholas Carter and recorded by Dorothea Blostein.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Blostein1999">Blostein1999</a>]
</td>
<td class="bibtexitem">
Dorothea Blostein and Lippold Haken.
Using diagram generation software to improve diagram recognition: a
case study of music notation.
<em>IEEE Transactions on Pattern Analysis and Machine
Intelligence</em>, 21 (11): 1121-1136, 1999.
ISSN 0162-8828.
[ <a href="OMR-Research-Key_bib.html#Blostein1999">bib</a> |
<a href="http://dx.doi.org/10.1109/34.809106">DOI</a> ]
<blockquote><font size="-1">
Diagrams are widely used in society to transmit information such as circuit designs, music, mathematical formulae, architectural plans, and molecular structure. Computers must process diagrams both as images (marks on paper) and as information. A diagram recognizer translates from image to information and a diagram generator translates from information to image. Current technology for diagram generation is ahead of the technology for diagram recognition. Diagram generators have extensive knowledge of notational conventions which relate to readability and aesthetics, whereas current diagram recognizers focus on the hard constraints of the notation. To create a recognizer capable of exploiting layout information, it is expedient to reuse the expertise in existing diagram generators. In particular, we discuss the use of Lime (our editor and generator for music notation) to proofread and correct the raw output of MIDIScan (a third-party commercial recognizer for music notation). Over the past several years, this combination of software has been distributed to thousands of users.
</font></blockquote>
<p>
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="Bonnici2018">Bonnici2018</a>]
</td>
<td class="bibtexitem">
Alexandra Bonnici, Julian Abela, Nicholas Zammit, and George Azzopardi.
Automatic Ornament Localisation, Recognition and Expression from
Music Sheets.
In <em>ACM Symposium on Document Engineering</em>, pages 25:1-25:11,
Halifax, NS, Canada, 2018. ACM.