AI is playing an increasingly prominent role in how music is written and performed.
LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University of Waterloo's Natural Language Processing Lab. The lab, led by Olga Vechtomova, a Waterloo Engineering professor cross-appointed in Computer Science, has been researching creative applications of AI for several years. The lab's earlier work produced a system that learns the stylistic expressions of individual artists and generates lyrics in their style.
Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo, and instrumentation to synthesize lyrics reflecting the mood and emotions expressed by live music. As a musician or a band plays instrumental music, the system continuously receives the raw audio clips, which the neural network processes to generate new lyric lines. The artists can then use the lines to compose their song lyrics.
The neural network designed by the researchers learns what lyrical themes, words, and stylistic devices are associated with different aspects of music captured in each audio clip. For example, the researchers observed that lyrics generated for ambient music are very different from those for upbeat music. The research team conducted a user study, inviting musicians to play live instruments while using the system.
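The article describes a continuous pipeline: live audio clips stream in, the neural network extracts musical attributes (tempo, instrumentation, mood), and lyric lines conditioned on those attributes stream out. The sketch below is a minimal, hypothetical illustration of that loop, not the researchers' actual model: the feature extractors are crude summary statistics, and `generate_line` is a rule-based stand-in for the neural lyric generator, which in the real system conditions text generation on learned audio representations.

```python
import numpy as np

def extract_features(audio_chunk):
    """Toy stand-ins for the musical attributes the article mentions.

    A real system would feed a spectrogram into a neural encoder;
    here we use two crude summary statistics of the raw waveform.
    """
    energy = float(np.mean(audio_chunk ** 2))          # loudness proxy
    crossings = int(np.sum(np.abs(np.diff(np.sign(audio_chunk))) > 0))
    brightness = crossings / len(audio_chunk)          # timbre proxy
    return {"energy": energy, "brightness": brightness}

def generate_line(features):
    """Placeholder for the neural lyric generator.

    Maps the clip's "mood" to a themed line, mimicking the observation
    that ambient music yields very different lyrics from upbeat music.
    """
    if features["energy"] < 1e-4:
        return "a quiet light settles over the water"   # ambient mood
    if features["brightness"] > 0.3:
        return "we run until the morning burns away"    # upbeat mood
    return "shadows keep the rhythm of your name"       # default mood

def lyric_stream(chunks):
    """Continuous loop: each incoming audio clip yields one candidate line."""
    for chunk in chunks:
        yield generate_line(extract_features(chunk))
```

For example, a near-silent clip would route to the "ambient" line, while a loud, high-frequency clip would route to the "upbeat" one; the musician then picks lines that resonate and keeps playing.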
“One unexpected finding was that participants felt encouraged by the generated lines to improvise,” Vechtomova said. “For example, the lines inspired artists to structure chords a bit differently and take their improvisation in a direction different from the one they originally intended. Some musicians also used the lines to check whether their improvisation had the desired emotional effect.” Another finding from the study highlighted the co-creative aspect of the experience. Participants commented that they viewed the system as an uncritical jamming partner and felt encouraged to play their instruments even when they were not actively trying to write lyrics. Since LyricJam went live in June this year, over 1,500 users worldwide have tried it out.
Artificial intelligence and music have long been intertwined. Alan Turing, the father of computer science, built a machine in 1951 that generated three simple melodies. In the 1990s, David Bowie started playing around with a digital lyric randomizer for inspiration. Around the same time, a music theory professor trained a computer program to write new compositions in the style of Bach; when an audience heard its work alongside a genuine Bach piece, they couldn't tell them apart.
Progress in the AI music field has rapidly accelerated in the past few years, thanks in part to devoted research teams at universities, investments from major tech companies, and machine learning conferences like NeurIPS. In 2018, Francois Pachet, a longtime AI music pioneer, spearheaded the first pop album composed with artificial intelligence, Hello, World. Last year, the experimental singer-songwriter Holly Herndon received acclaim for Proto, an album in which she harmonized with an AI version of herself.
Source: analyticsinsight.net