Machine Musicianship: Automatic Music Transcription
Posted: November 14, 2016
Date: November 15, 10:00 AM–12:00 PM
Location: Room B102, New Main Building
Music transcription, i.e., converting music audio into music notation, is an extraordinary capability of talented musicians. Automatic music transcription is at the core of machine musicianship and is a fundamental problem in music information retrieval research. In this talk, I will review the state-of-the-art research on automatic music transcription. Specifically, I will focus on pitch transcription of polyphonic music played by harmonic musical instruments. Research on multi-pitch analysis has been performed at three levels: frame-level, note-level, and stream-level. I will discuss the progress and challenges at each of these levels and present our recent work towards addressing these challenges. Finally, I will present our work on building a complete music notation transcription system by incorporating musical knowledge.
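For readers unfamiliar with the frame-level task mentioned above, the sketch below shows a minimal, generic single-pitch estimator based on autocorrelation. This is a standard textbook technique chosen only for illustration; it is not the speaker's method, and real polyphonic multi-pitch analysis is considerably harder.

```python
# Minimal frame-level pitch estimation via autocorrelation (illustrative only;
# not the speaker's method, and handles only a single pitch per frame).
import numpy as np

def estimate_pitch(frame, sr, fmin=50.0, fmax=1000.0):
    """Return the dominant fundamental frequency (Hz) of one audio frame."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)  # smallest lag (highest pitch) to consider
    hi = int(sr / fmin)  # largest lag (lowest pitch) to consider
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Test on a synthetic 220 Hz sine tone.
sr = 16000
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220.0 * t)
print(estimate_pitch(frame, sr))  # approximately 220 Hz
```

A frame-level transcription system would run an estimator like this (extended to multiple simultaneous pitches) over short overlapping windows, leaving note-level and stream-level processing to group the frame estimates into notes and voices.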
About the Speaker:
Zhiyao Duan is an assistant professor and director of the Audio Information Research (AIR) lab in the Department of Electrical and Computer Engineering at the University of Rochester. He received his B.S. and M.S. in Automation from Tsinghua University in 2004 and 2008, respectively, and his Ph.D. in Computer Science from Northwestern University in 2013. His research interests lie in the broad area of computer audition, i.e., designing computational systems that are capable of understanding sounds, including music, speech, and environmental sounds. Specific problems he has worked on include automatic music transcription, audio-score alignment, source separation, speech enhancement, sound retrieval, and audio-visual analysis of music. He co-presented a tutorial on automatic music transcription at the ISMIR conference in 2015.