
It's More Fun to Compute

Julian Cheal • November 10, 2016 • Earth

In the presentation "It's More Fun to Compute" at RubyConf 2016, Julian Cheal leads an engaging exploration of analog and digital synthesis through programming with Ruby. He introduces himself and his background, touching on his British roots and the recurring theme of making music with code. The central focus is how Ruby can be used to generate and manipulate sounds, making music programming accessible to a broad audience.

Key Points Discussed:
- Introduction to Synthesizers: Cheal discusses analog synthesizers, showcasing equipment like the Arturia MiniBrute and Korg Volca series. He explains essential components, starting with oscillators, which create different waveforms (sine, square, triangle, and sawtooth waves) and their significance in sound synthesis.
- Waveforms and Sound Representation: Different wave types produce distinct sounds, and visual representations of the waveforms are shown using Ruby Processing. The demonstration highlights how the frequency and shape of a wave affect the sound.
- Sound Modulation with Envelopes: Cheal introduces concepts of sound envelopes (attack, decay, sustain, release), showing how altering these parameters can change the sound produced, enhancing interactivity and user control in sound synthesis.
- Sonic Pi Overview: Transitioning to coding, Cheal emphasizes the application Sonic Pi, which allows creating music using Ruby commands. He demonstrates its user-friendly nature, enabling even children to compose music with code.
- Integration with MIDI: Cheal discusses the MIDI protocol, facilitating communication between digital and analog instruments. He showcases his Ruby gems that connect with synthesizers, allowing real-time interaction and manipulation of sound as well as controlling Sonic Pi through MIDI.
- Engagement and Collaboration: Throughout the talk, Cheal encourages audience interaction, inviting questions and collaboration to expand the creative use of music technology, emphasizing that music production can be achieved through open-source tools.

In conclusion, Cheal’s talk illustrates the synergy between programming and music creation, breaking down complex concepts into approachable ideas. He advocates for hands-on exploration and collaboration, aiming to inspire attendees to leverage Ruby and synthesizers to produce innovative sounds and enhance their musical experiences.


RubyConf 2016 - It's More Fun to Compute by Julian Cheal

Come with us now on a journey through time and space. As we explore the world of analog/digital synthesis. From computer generated music to physical synthesisers and everything in between.

So you want to write music with code, but don’t know the difference between an LFO, ADSR, LMFAO, etc. Or a Sine wave, Saw wave, Google wave. We’ll explore what these mean, and how Ruby can be used to make awesome sounds. Ever wondered what Fizz Buzz sounds like, or which sounds better bubble sort or quick sort? So hey Ruby, let’s make music!


00:00:34 Hello, welcome to my talk. I'm Julian Cheal. I'm British, and as you can tell by my three-piece tweed suit, I like tweed! You can get one of these suits too if you just come to the UK; it's great! I live in a small town called Bath. Just so you know, it's not London. I don't live in London; I live in Bath. It's called Bath because it has a giant bath, and yes, we do have baths and showers in our homes. I work for a small clothing company called Red Hat, where we develop some open-source software. You may have heard of us at Red Hat. I work on a team called ManageIQ, where we manage clouds. Being British, we've got lots of clouds, so I'm good at managing them. If you'd like to come and manage clouds while getting a sweet Red Hat, come speak to me afterwards. You do actually get a Red Hat when you work at Red Hat; it's really cool!
00:01:25 This talk is titled "It's More Fun to Compute," which those of you who were born over 30 years ago will recognize as a reference to the German band Kraftwerk. It's a track taken from their album Computer World. Today, we're going to talk about synthesizers, but not those cool synths from Fallout 4; I mean real analog synthesizers. On stage, I've got a few of my synthesizers: an Arturia MiniBrute, which is a monophonic synthesizer, and a Korg Volca Keys, which is a polyphonic synthesizer, meaning it can play more than one note at a time; pretty cool! Additionally, there's a Volca Beats, which is a drum machine. Unfortunately, I didn't bring my other synthesizers: littleBits, the little magnetic connectors that snap together to build all sorts of cool circuits. littleBits teamed up with Korg to create the Korg Synth Kit, which you can give to your kids; they can plug the pieces in and make all sorts of fun noise! If you don't have kids, buy it for your friends for Christmas; they'll love you!
00:02:03 To get started with synthesizers, we need to learn about oscillators. I don't know much about oscillators aside from the fact that they oscillate: they cycle around and around, and when we take the output of an oscillator, that's when the magic happens! You can find detailed information about oscillators on Wikipedia. Simply put, they are simple analog components that turn out to be incredibly powerful. A chap named Moog wired many of these components into a machine nearly as big as this room, creating beautiful music. The outputs of these oscillators are called waves, and they come in a few different types. As software engineers, you may be familiar with square waves, which are simply on and off. We can also create sine waves, which are smooth, continuously varying curves, much closer to the signals of the analog world. Synthesizers can also produce triangle waves, which look pretty interesting. Another type is the sawtooth wave, which ramps steadily in one direction and then snaps straight back, and it can be flipped to ramp the other way.
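The four basic shapes are easy to sketch in plain Ruby. This is not the speaker's code, just a minimal illustration: each lambda evaluates one cycle of a waveform at a phase `t` between 0 and 1, returning a sample in the audio range -1.0 to 1.0.

```ruby
# One cycle of each basic waveform, evaluated at phase t in [0.0, 1.0).
WAVEFORMS = {
  sine:     ->(t) { Math.sin(2 * Math::PI * t) },
  square:   ->(t) { t < 0.5 ? 1.0 : -1.0 },     # simply on and off
  triangle: ->(t) { 4.0 * (t - 0.5).abs - 1.0 },
  sawtooth: ->(t) { 2.0 * t - 1.0 },            # ramps up, then snaps back
}

# Sample a waveform into an array, e.g. to plot it:
def samples(shape, count = 8)
  (0...count).map { |i| WAVEFORMS[shape].call(i / count.to_f).round(3) }
end
```

Feeding these samples to a plotting library reproduces the shapes shown on screen; feeding them to a sound card at a fixed sample rate produces the corresponding tone.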
00:03:09 It’s fascinating that the sound can change based on how the saw wave is oriented. In a moment, we’ll be able to hear some of these variations. Another popular sound is noise. Back in the day, before streaming platforms like Netflix and Hulu, we used to watch over-the-air television, receiving radio waves from the sky. In between channels—and when the channels turned off—this is what you’d hear: noise. It’s just all the waves mixed together. Some believe it originates from the Big Bang, and we can actually use this noise to create some interesting music. So, let’s have a little demo! I've got a small app running in a framework called Ruby Processing, which is meant for graphics but today we’re using it to create sound.
00:04:00 As you can hear, as I speak, we're capturing the waves at the top, and at the bottom is the frequency representation. The higher frequencies appear on one side of the screen, and the lower frequencies on the other. This should be connected to the audio output. Now, let's try playing a triangle wave. There we go, it's working now! Unfortunately, it initially picked up the computer's microphone instead of the proper audio input. You can see the shape on the screen, representing how the triangle wave appears. Now let's switch it to a square wave. Ah, I might need to manage that a little better. Oh, it might be GarageBand that's affecting the audio output!
00:05:11 Okay, let's try this again and focus on the wave outputs. You can see that the synthesized sounds look different depending on the frequency and the shape of the wave generated. If you sweep your way through the frequencies, you'll find some really interesting results. If you attended my earlier talk, you already know that playing with frequencies reveals how they fill the sound space. For example, if I go all the way down to the bottom octave, you can see how few wave cycles fit on the screen, but if I play in the higher octaves, you'll notice many more cycles packed into the same space. And for the curious, this is what noise sounds like! It takes you back to the days of waiting for cartoons to start.
00:06:30 We can also mix all of these waves together. That's pretty interesting! Now we can make sounds by pressing buttons, and we can have different types of waves, but we can also shape how the sound evolves using what are called envelopes. An envelope has an attack, decay, sustain, and release (ADSR). I know what you're thinking; this sounds like some kind of fighting game move! But in reality, it's much simpler than that. When we're playing a sound, such as a sine wave, it would emit sound continuously until we stop pressing the note. If we increase the attack, it lengthens the time it takes to go from silence to the highest volume of the sound. The decay then controls how long it takes to fall from that peak down to a set level. Each of these parameters is a time, except for sustain, which is an amplitude level. Release defines how long the sound carries on after you've released the button. Now let's visualize that using our JRuby app.
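The envelope described above can be sketched as a plain-Ruby function. This is an illustration only, assuming linear segments and that the note is held past the attack and decay phases:

```ruby
# Linear ADSR envelope: returns the amplitude (0.0..1.0) at time t seconds
# after the note starts. attack/decay/release are durations in seconds;
# sustain is a level, not a time. The note is released at note_off.
def adsr(t, attack: 0.1, decay: 0.2, sustain: 0.7, release: 0.5, note_off: 1.0)
  if t >= note_off                          # button released: fade out to silence
    rel = t - note_off
    rel >= release ? 0.0 : sustain * (1.0 - rel / release)
  elsif t < attack                          # rising from silence to full volume
    t / attack
  elsif t < attack + decay                  # falling from the peak to the sustain level
    1.0 - (1.0 - sustain) * ((t - attack) / decay)
  else                                      # held steady at the sustain level
    sustain
  end
end
```

With the defaults above, the sound climbs for 0.1 s, decays to level 0.7 over 0.2 s, holds there, then fades over 0.5 s once the note is released at t = 1.0.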
00:07:50 As we increase the attack and decay parameters, you'll hear how the sound gradually builds to full volume. Now if we add some release, the sound continues even as I lift my finger from the button. You can see how the different sound waves interplay with each other. That's very cool! Using different waveforms and envelopes gives us the tools to synthesize almost any sound you've ever heard, which is why these are called synthesizers: they synthesize sound! You may be thinking that this is fascinating, and you'd probably love to hear me play music all day long! But remember, this is a Ruby conference! Let's talk about Ruby.
00:09:30 Have any of you heard of an application called Sonic Pi? It's a Ruby application designed for making music and it runs on top of something called SuperCollider, which is a digital synthesizer platform. Unlike analog synthesizers, where everything is realized through circuits, digital synthesizers are generated within my laptop. I really don’t know how any of it works, but here’s the gist! When Sonic Pi was released, I was thrilled because I love playing instruments and coding. But when trying to make good music with code, it proved to be quite challenging. Let's take a look at how Sonic Pi works.
00:10:36 In Sonic Pi, you start by writing commands that trigger musical notes. For example, 'play 60' plays MIDI note 60, which is middle C on the keyboard. We can also use literal note names, making it very user-friendly. When we run this code, you should hear middle C being played! We can also switch the synth to a sawtooth, like the wave we saw earlier, and play the same note. We can add a little attack, decay, sustain, and release to recreate the kinds of sounds we made on a traditional synthesizer. What's really cool is that, even as I continue buying expensive instruments, I can still use Sonic Pi for much more than basic notes!
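Sonic Pi's note numbers are standard MIDI note numbers, and the mapping to pitch follows the usual equal-temperament formula. A plain-Ruby sketch (the note-name style mimics Sonic Pi's `:c4` symbols; this is an illustration, not Sonic Pi's own code):

```ruby
# MIDI note number -> frequency in Hz, with A4 (note 69) tuned to 440 Hz.
def midi_to_hz(note)
  (440.0 * 2 ** ((note - 69) / 12.0)).round(2)
end

NOTE_NAMES = %w[c cs d ds e f fs g gs a as b]

# MIDI note number -> a Sonic Pi-style name, e.g. 60 -> :c4 (middle C).
def midi_to_name(note)
  :"#{NOTE_NAMES[note % 12]}#{(note / 12) - 1}"
end

midi_to_hz(60)    # => 261.63 (middle C)
midi_to_name(69)  # => :a4, concert pitch
```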
00:11:56 Now, let’s showcase a piece by one of the contributors of Sonic Pi. What’s remarkable about Sonic Pi is its ability to let anyone from children to seasoned musicians create music. The syntax is simple enough that a ten-year-old can write thread-safe Ruby code to generate music! If they can do that, so can we.
00:12:32 Next, let's think about how we can integrate coding with live instruments. Wouldn't it be exciting to write code that communicates with musical instruments? As most of you know, there's a protocol called MIDI. In the earlier days, analog instruments were connected with electrical cables, and people sent control voltages down them to drive different functions. The music industry came together to create a standard called MIDI, which stands for Musical Instrument Digital Interface. MIDI is a serial protocol designed in the early 1980s, before USB, originally running over a five-pin cable. Today it can run over USB, but it is still the same simple serial protocol it was back then.
00:13:42 By sending MIDI messages, we can control notes, turning them on or off. Each message consists of a status byte (the message type, such as note on or note off, plus the channel), the note number, and the velocity (how hard the note is played). Some keyboards respond differently depending on how hard or soft you press the keys. Let's see what I can do with that! I've written a few Ruby gems to connect with these instruments. I also have the Korg Volca Beats and Korg Volca Keys, which all support MIDI!
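Those three bytes are easy to build by hand. A minimal sketch (not the speaker's gem) of a MIDI channel voice message:

```ruby
NOTE_ON  = 0x90  # status nibble for "note on"
NOTE_OFF = 0x80  # status nibble for "note off"

# Build a three-byte MIDI channel message:
# status byte (message type + channel), note number (0-127), velocity (0-127).
def midi_message(type, note, velocity: 64, channel: 0)
  raise ArgumentError, "channel must be 0-15" unless (0..15).cover?(channel)
  [type | channel, note & 0x7F, velocity & 0x7F]
end

midi_message(NOTE_ON, 60, velocity: 100)   # => [0x90, 60, 100]
midi_message(NOTE_OFF, 60, channel: 1)     # => [0x81, 60, 64]
```

Writing those bytes to a MIDI port is all it takes to start or stop a note on a connected synthesizer.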
00:14:58 Here you see the basic Ruby code I've written that retrieves the available MIDI device. I also wrapped it so that we can adjust the sustain level of the note. Then we can play a note such as C4 and even change the envelope settings on my analog synthesizer using Ruby code! Running this Ruby code should create audible notes played on my synthesizers, allowing me to directly interact with them. The library I built is publicly available on GitHub, and I would love contributions to improve it further!
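The speaker's gem isn't reproduced in the transcript; here is a hedged sketch of the note-playing flow it describes. `StubOutput` is a hypothetical stand-in for a real MIDI output object (real ones, such as those in the unimidi gem, expose a similar byte-writing interface), so the flow can run anywhere:

```ruby
# Hypothetical stand-in for a real MIDI output port; it just records the
# bytes a real instrument would receive.
class StubOutput
  attr_reader :sent

  def initialize
    @sent = []
  end

  def puts(*bytes)
    @sent << bytes
  end
end

# Send note-on, hold the note for `duration` seconds, then send note-off.
def play_note(output, note, velocity: 100, duration: 0.5)
  output.puts(0x90, note, velocity)  # note on, channel 1
  sleep duration
  output.puts(0x80, note, 0)         # note off
end

out = StubOutput.new
play_note(out, 60, duration: 0)      # middle C
out.sent  # => [[0x90, 60, 100], [0x80, 60, 0]]
```

Swapping the stub for a real output port would make a hardware synthesizer sound the note.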
00:16:10 Check it out, it connects to the instruments and sends the right MIDI commands to my Korg synthesizers. The beauty of combining Ruby with MIDI means we can create dynamic music generation right from our own homes. Moreover, wouldn’t it be cool to control Sonic Pi using your own instruments rather than typing code? I developed some code to allow MIDI input to trigger Sonic Pi, creating music without needing to write any code at all!
00:17:28 I've connected a USB MIDI keyboard that can send MIDI notes. With this setup, I can press the keys, and it sends MIDI input to Sonic Pi, enabling me to play notes instantly! There does seem to be a slight delay, but it’s minimal. You see, the benefit of using MIDI is vast; I could programmatically control Sonic Pi entirely through my keyboard now. Imagine conducting programming through musical improv—now that is both fun and educational!
00:18:55 Let's see if I can find a simple dance track I made with the system. It integrates different drum patterns and synthesizers to create layered music. With this little project, we've effectively created a free and open-source software version of expensive music production software!
00:20:08 I also want to leave you with an invitation: feel free to come up and ask any questions or bring your own instruments to play. I'm excited to see how we can all contribute to improving our sounds and possibly collaborating on new ideas. If you're feeling curious, let’s explore further together in the hallway!
00:21:10 In Sonic Pi, there's a line of code that adjusts the BPM. It's crucial to remember that while Sonic Pi is regulating the tempo, the communication back and forth can cause timing issues because it processes the input so quickly. What I'd love to hear from you as an audience is ideas on how we can further extend this technology for audio production.
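The BPM line mentioned above controls timing because Sonic Pi measures sleeps in beats rather than seconds: the tempo set with `use_bpm` determines how long each beat lasts. A quick plain-Ruby illustration of the conversion:

```ruby
# In Sonic Pi, `sleep 1` waits one *beat*; `use_bpm` decides how long a beat
# lasts in real time. Seconds per beat is simply 60 / bpm.
def beat_seconds(bpm)
  60.0 / bpm
end

beat_seconds(120)  # => 0.5  (half a second per beat)
beat_seconds(60)   # => 1.0
```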