Art with MI Demo. How to generate music from literature – with Hannah Davis

[MUSIC PLAYING]

HANNAH DAVIS: Hi, everyone. My name is Hannah Davis. I’m going to be talking to you about a project I’m working on called TransProse, which creates music from literature, specifically based on the emotional content of novels.

So when I first started out this project, I had two questions for myself. The first was, can I translate the emotional undertone into music? And second, could that music be beautiful, or at least listenable? And the first iterations of this were not listenable. They were lacking complexity.

[MUSIC PLAYING]

Order.

[MUSIC PLAYING]

Should I–

SPEAKER 1: [INAUDIBLE]

HANNAH DAVIS: Yeah.

SPEAKER 1: If you switch your device, it should have picked up the HDMI device.

HANNAH DAVIS: Is there something I’m supposed to–

SPEAKER 2: Give me a second.

HANNAH DAVIS: Should I try this?

SPEAKER 2: This one should–

SPEAKER 1: [INAUDIBLE]

HANNAH DAVIS: Oh. Got you. OK. Yeah. It looks good.

SPEAKER 2: [INAUDIBLE]. I mean, at least we could still hear it.

HANNAH DAVIS: No. That doesn’t work.

SPEAKER 2: Yeah. This one doesn’t fit, which usually happens if it’s in all caps. We can still hear it. So–

HANNAH DAVIS: So complexity, order, and emotional accuracy.

[MUSIC PLAYING]

Lively John in the heart of Africa. So I fixed that. See what I’ve got next.

SPEAKER 1: [INAUDIBLE]

HANNAH DAVIS: Uh, this? Oh, OK. Yeah.

SPEAKER 1: [INAUDIBLE]

HANNAH DAVIS: OK. I’ll try.
So basically, TransProse is a mapping between the text analysis variables and the music composition. So I started with the text analysis. And there were just so many variables to work with that I decided to just focus on emotional content at first.

SPEAKER 1: [INAUDIBLE]

HANNAH DAVIS: OK.

SPEAKER 2: Try again.

HANNAH DAVIS: OK. Cool. So I did that by using a corpus of the 14,000 most common English words that were tagged with eight different emotions, plus positive and negative.
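The lexicon isn’t named in the talk, but the description (roughly 14,000 common English words tagged with eight emotions plus positive/negative) matches a word–emotion association lexicon such as the NRC Emotion Lexicon. As a rough sketch of this analysis step, the snippet below computes per-section emotion densities from a lexicon of that shape; the toy lexicon entries and the section size are illustrative assumptions, not TransProse’s actual data or code.

```python
from collections import Counter
import re

# Toy stand-in for a word–emotion lexicon: each word maps to the emotions
# (and polarity labels) it is associated with. The lexicon described in the
# talk has roughly 14,000 words tagged with eight emotions plus
# positive/negative; these few entries are placeholders.
lexicon = {
    "dark":   {"fear", "sadness", "negative"},
    "murder": {"anger", "fear", "sadness", "negative"},
    "smile":  {"joy", "positive"},
    "hope":   {"anticipation", "joy", "positive"},
}

def emotion_densities(text, section_words=2000):
    """Split the text into fixed-size sections and return, for each section,
    the fraction of its words associated with each emotion."""
    words = re.findall(r"[a-z']+", text.lower())
    sections = [words[i:i + section_words]
                for i in range(0, len(words), section_words)]
    densities = []
    for section in sections:
        counts = Counter()
        for word in section:
            for emotion in lexicon.get(word, ()):
                counts[emotion] += 1
        total = len(section) or 1  # guard against empty input
        densities.append({emo: n / total for emo, n in counts.items()})
    return densities
```

These per-emotion density curves are what the rest of the piece is built from.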
For the music composition, I basically created arcs out of the emotion density data, and then the melodies followed these arcs.

So a couple of mappings. For the notes, lower emotion densities equal more consonant notes, and higher emotion densities equal more dissonant notes. So in this way the plots are represented not as literal events, but as the emotional representation of events. So spikes in emotion equal more interesting melodic movement at those points in the piece.
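As a hedged sketch of that density-to-dissonance mapping: the interval sets, the threshold, and the use of randomness below are illustrative choices, not TransProse’s actual rules.

```python
import random

# Intervals in semitones above the tonic. Which intervals count as consonant
# or dissonant here is a deliberate simplification for illustration.
CONSONANT = [0, 4, 7, 12]   # unison, major third, perfect fifth, octave
DISSONANT = [1, 6, 10, 11]  # minor second, tritone, minor/major seventh

def pick_pitch(tonic_midi, emotion_density, max_density=0.2):
    """Choose a MIDI pitch: the denser the emotion at this point of the arc,
    the more likely the melody reaches for a dissonant interval."""
    weight = min(emotion_density / max_density, 1.0)
    pool = DISSONANT if random.random() < weight else CONSONANT
    return tonic_midi + random.choice(pool)

# A spike in the emotion arc (0.15 here) becomes a spike in dissonance,
# i.e. more interesting melodic movement at that point in the piece.
melody = [pick_pitch(60, d) for d in (0.02, 0.03, 0.15, 0.04)]
```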
Additional mappings: the major/minor key is mapped to the positive-to-negative emotion ratio. The octave is also mapped to the positive-to-negative emotion ratio, both overall and per section. The note length is mapped to the amount of emotion per section, and faster notes equal more emotion. And the tempo is based on the activity levels, which I also got from an activity lexicon.
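Pulling those additional mappings together, a minimal section-level sketch might look like the following. All thresholds and scaling factors are guesses for illustration, and the activity score is assumed to come from a separate activity lexicon as described.

```python
def map_section_to_music(pos, neg, emotion_density, activity,
                         base_octave=4, base_tempo=100):
    """Sketch of the section-level mappings described in the talk.
    pos/neg are positive/negative word densities, emotion_density is the
    overall emotion density, and activity is a 0..1 score from an activity
    lexicon. Thresholds and scaling are illustrative, not TransProse's."""
    ratio = pos / max(neg, 1e-9)

    # Major/minor key and octave both follow the positive:negative ratio.
    mode = "major" if ratio >= 1.0 else "minor"
    octave = base_octave + (1 if ratio >= 1.0 else -1)

    # Note length follows the amount of emotion: more emotion, faster notes.
    if emotion_density < 0.05:
        note_length = 1.0    # long notes for calm sections
    elif emotion_density < 0.15:
        note_length = 0.5
    else:
        note_length = 0.25   # fast notes for highly emotional sections

    # Tempo follows the activity level.
    tempo = int(base_tempo + 60 * activity)

    return {"mode": mode, "octave": octave,
            "note_length": note_length, "tempo": tempo}
```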
So I’ll play you two pieces. Just something to remember here: the songs represent the novels chronologically. Four measures roughly equals 10 pages, and those sections are repeated to make it sound more musical.
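As a sketch of that structure (roughly four measures for every ten pages, with sections repeated), assuming a hypothetical repeat count of two:

```python
def section_plan(num_pages, pages_per_section=10,
                 measures_per_section=4, repeats=2):
    """Chronological plan for a piece: roughly four measures for every ten
    pages of the novel, with each section repeated to sound more musical.
    The repeat count of two is an assumption for illustration."""
    plan = []
    for start in range(0, num_pages, pages_per_section):
        plan.append({
            "pages": (start + 1, min(start + pages_per_section, num_pages)),
            "measures": measures_per_section,
            "play_count": repeats,
        })
    return plan

# A 300-page novel becomes 30 four-measure sections, each played twice.
plan = section_plan(300)
```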
[MUSIC PLAYING]

[APPLAUSE]

HANNAH DAVIS: Thanks. So for my next steps, I’m going to try to make it more complex and add motifs for characters and places, plus more instruments and genres using machine learning. I’m working on a video-to-music prototype, which will be more like scoring. And then I’m hopefully going to do an online tool so anyone can put text in and get a piece back. Thank you.

[APPLAUSE]

[MUSIC PLAYING]
