Do Androids Dream of Electronic Dance Music?
Description
RubyKaigi 2017: http://rubykaigi.org/2017/presentations/juliancheal.html

AI is everywhere in our lives these days: recommending our TV shows, planning our car trips, and running our day-to-day lives through artificially intelligent assistants like Siri and Alexa. But are machines capable of creativity? Can they write poems, paint pictures, or compose music that moves human audiences? We believe they can! In this talk, we'll use Ruby and cutting-edge machine learning tools to train a neural network on human-generated Electronic Dance Music (EDM), then see what sorts of music the machine dreams up.
Summary
In the presentation "Do Androids Dream of Electronic Dance Music?" at RubyKaigi 2017, Julian Cheal and Eric Weinstein explore the intersection of artificial intelligence, machine learning, and music generation. They investigate whether machines can create music that resonates with human audiences, using Ruby together with modern machine learning tools.

Key points discussed include:

- **Introduction to Speakers and Topic**: Taking its title from Philip K. Dick's work, the presentation opens with introductions from Julian and Eric, who share their backgrounds in Ruby programming and data management.
- **Understanding Machine Learning**: The presenters explain that machine learning is about generalizing from known information to handle unknown data. They distinguish supervised learning, which trains a model on labeled data, from unsupervised learning, which works with unlabeled data.
- **Neural Networks and Music Generation**: The talk emphasizes Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) networks, which excel at processing sequential data such as music. Because the network retains a memory of prior inputs, it can predict the next note in a composition (see the model sketch after this list).
- **Working with MIDI Files**: A vital part of the process is training the model on MIDI (Musical Instrument Digital Interface) files. The speakers explain how MIDI standardizes communication between electronic instruments and how they collected royalty-free music data to avoid copyright issues (a parsing sketch follows this list).
- **Programming and Libraries**: The implementation is roughly 95% Python, used to drive TensorFlow, with the remaining 5% in Ruby for efficient data processing.
- **Live Music Generation Demo**: A highlight of the talk is a live demonstration of music generated by the trained model. The output closely resembles the style of the EDM artist deadmau5, illustrating that neural networks can produce coherent musical sequences.
- **Conclusions and Future Work**: They close with lessons learned, including potential improvements from larger training datasets and refined data representations, and their goal of continuing the project and open-sourcing it to expand these capabilities within the Ruby ecosystem.

The overall takeaway is a compelling demonstration that machine learning can generate music and contribute to creative work, while acknowledging the nuances of data handling and model training. The speakers emphasize an open-source approach, inviting community contributions and discussion around the complexities of music generation through AI.
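The talk does not name its MIDI tooling, so as a minimal sketch, here is how note events could be pulled out of a MIDI file for training data using `mido`, a common Python MIDI library (the library choice and function names are assumptions, not the speakers' code):

```python
# Sketch: extract an ordered note sequence from a MIDI file.
# The `mido` library here is an assumption; the talk does not
# say which MIDI tooling the speakers used.
import mido

def extract_notes(path):
    """Return the ordered list of MIDI note numbers from note_on events."""
    midi = mido.MidiFile(path)
    notes = []
    for msg in midi:  # iterates messages across tracks in playback order
        # A note_on with velocity 0 conventionally means note_off, so skip it.
        if msg.type == "note_on" and msg.velocity > 0:
            notes.append(msg.note)  # MIDI note number, 0-127
    return notes

# Example: build a corpus from a folder of royalty-free EDM MIDI files,
# e.g. corpus = [extract_notes(p) for p in glob.glob("edm_midi/*.mid")]
```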
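Likewise, here is a minimal sketch of the kind of LSTM next-note predictor the talk describes, written with TensorFlow's Keras API. The fixed-window framing, layer sizes, and hyperparameters are illustrative assumptions rather than the speakers' actual implementation:

```python
# Sketch: an LSTM that predicts the next MIDI note from the previous
# `window` notes. All hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf

window = 32   # notes of context fed to the network
vocab = 128   # MIDI note numbers 0-127

def make_dataset(notes):
    """Slice one note sequence into (context window, next note) pairs."""
    xs, ys = [], []
    for i in range(len(notes) - window):
        xs.append(notes[i:i + window])
        ys.append(notes[i + window])
    return np.array(xs), np.array(ys)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 64),                # note -> vector
    tf.keras.layers.LSTM(256),                           # remembers prior notes
    tf.keras.layers.Dense(vocab, activation="softmax"),  # next-note distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# x, y = make_dataset(extract_notes("some_track.mid"))
# model.fit(x, y, epochs=20, batch_size=64)
```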
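Once trained, a model like this can "dream up" new music by sampling from its predicted distribution and feeding each new note back in as context. This generation loop, continuing the sketch above, is equally an illustrative assumption:

```python
# Sketch: generate notes by repeatedly sampling the model's next-note
# distribution and sliding the context window forward.
# Uses `np`, `model`, `vocab`, and `window` from the sketch above.
def generate(model, seed, length=128, temperature=1.0):
    """seed: a list of at least `window` note numbers from a real track."""
    notes = list(seed)
    for _ in range(length):
        context = np.array([notes[-window:]])
        probs = model.predict(context, verbose=0)[0]
        # Temperature reshapes the distribution: <1.0 plays it safer,
        # >1.0 produces wilder output.
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.sum(np.exp(logits))
        notes.append(int(np.random.choice(vocab, p=probs)))
    return notes[len(seed):]
```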