Experiments in sharing Java VM technology with CRuby

Summarized using AI

Experiments in sharing Java VM technology with CRuby

Matthew Gaudet • December 11, 2015 • Chuo-ku, Tokyo, Japan

In this presentation titled 'Experiments in sharing Java VM technology with CRuby', Matthew Gaudet, a compiler developer at IBM Canada, discusses innovative experiments to integrate Java's powerful runtime technologies into CRuby, the Ruby interpreter. The session was part of RubyKaigi 2015 and highlights IBM's research on language-agnostic technology as part of its open-source initiative.

Key Points Discussed:
- IBM's Interest in Ruby JIT: Gaudet explains that IBM's motivation for developing a Ruby Just-In-Time (JIT) compiler stems from the desire to create a versatile cloud ecosystem supporting multiple languages. They aim to reduce redundant efforts by developing a shared runtime toolkit, named OMR, which can cater to various programming languages.
- The OMR Toolkit: OMR is designed to house components like garbage collection and monitoring, allowing different languages to leverage shared runtime components while remaining compatible with their existing runtimes rather than replacing them.
- Integration of Java Technology: The core of the discussion revolves around IBM's JIT compiler, Testarossa, which was originally created for Java but has since been adapted to support various languages, including Ruby.
- Technical Processes: Gaudet details the technical processes behind JIT compilation in Ruby, emphasizing the integration with Ruby's MRI interpreter while maintaining compatibility and performance. He describes how Ruby's bytecode is transformed and optimized during execution.
- Performance Benchmarks: He shares benchmark results showing speedups of up to 2x on micro-benchmarks, while acknowledging that typical Rails workloads have not yet shown similar gains.
- Future Enhancements: The presentation alludes to future enhancements such as speculative optimizations and improved profiling methodologies, hinting at greater performance potential moving forward.
- Community Engagement: Gaudet expresses a commitment to the Ruby community by announcing the release of a Ruby technology preview linked to a GitHub project for community testing and feedback.

In conclusion, Gaudet emphasizes IBM's growing openness to contribute to the Ruby ecosystem, aiming for strategic advancements in Ruby’s performance and engaging with community-driven initiatives while navigating the complexities of integrating Java's proven technologies into Ruby's framework.

Experiments in sharing Java VM technology with CRuby
Matthew Gaudet • December 11, 2015 • Chuo-ku, Tokyo, Japan

http://rubykaigi.org/2015/presentations/MattStudies

What happens when you have a virtual machine full of powerful technology and you start pulling out the language-independent parts, with plans to open source these technologies?

You get the ability to experiment! This talk covers a set of experiments where IBM has tested out language-agnostic runtime technologies inside of CRuby, including a GC, a JIT, and more -- all while still running real Ruby applications, including Rails.

We want to share results from these experiments, talk about how we connected to CRuby, and discuss how this may one day become a part of everyone's CRuby.

RubyKaigi 2015

00:00:00.589 Hello everyone. I'm Matthew Gaudet, a compiler developer at IBM Canada. I've been working in compilation technology since 2008, and I have a particular interest in technology that has emerged in the last year. This is my first time off the North American tectonic plate, so thank you very much for having me. It’s been exciting to experience Japan.
00:00:21.210 A couple of weeks ago, Matz announced Ruby 3 by 3 at a talk in San Antonio, setting a goal to improve Ruby’s performance up to 3 times by version 3. Today, he called out IBM's J9, so there's no pressure here.
00:00:39.030 The obvious question is, why does IBM have a Ruby JIT? I was particularly interested in cloud technology. We want to create a vibrant cloud ecosystem, which is inherently polyglot. There are many languages out there, each with its own advantages. At IBM, we want to support these languages and help them grow, while also supporting new ones as they emerge.
00:01:06.060 The key goal is to allow developers to choose the right solution based on the capabilities of the language and the specific job at hand, instead of being constrained by the capabilities of one single language. However, with the multitude of languages having various capabilities, our challenge is to minimize duplicated effort. We don’t want to build a Just-In-Time compiler for every language independently or create separate monitoring solutions.
00:01:37.500 Thus, a plan was conceived. We call it OMR, which doesn't stand for anything specific; it’s just a name. OMR is set to be an open-source toolkit for language runtime technologies, evolving from IBM’s Java technology. We are separating the core technology from language-dependent components.
00:02:12.300 This initiative was briefly discussed in a prior talk at JVM Language Summit, which you can find on YouTube. OMR will comprise several components, including garbage collection and monitoring technologies, a porting library, and others. The core aim of OMR is to maintain compatibility, properly integrating with existing language runtimes rather than replacing them.
00:02:39.030 We hope that the technology we’re developing is flexible enough to fit within your language’s ecosystem rather than forcing you to adapt to OMR. The philosophy here is to enable the assembly of the right solutions for each language. If you're working with a language that doesn’t require JIT compilation but does need garbage collection, you can simply integrate that specific technology.
00:03:01.889 Of course, we want to discuss Ruby specifically today. Some have questioned our silence on this project; we’ve been developing it quietly for a while and only recently started engaging with the Ruby community. A humorous way to explain this is to borrow a quote from William Gibson, who said, 'The future is already here; it's just not evenly distributed.' I modify this for IBM, stating that while open-source is essential to our company, it's not uniformly applied across all departments.
00:03:43.590 IBM has been investing heavily in open source and is improving in that area, but it’s still a work in progress. We come from a traditionally closed-source mentality, and we’re learning how to navigate the open-source landscape. Part of our quietness comes from needing to validate our proof of concept internally. We wanted to ensure that what we were building was viable before sharing it with the community. We didn’t want to present something that wouldn’t work right after initial discussions.
00:04:31.070 However, the technology we're discussing today is real. The early components are already shipping in products, including the Automatic Binary Optimizer for COBOL as well as the IBM JDK 8. We are actively working on this language technology and striving to get it open-sourced as quickly as possible, although we do face resource constraints.
00:05:23.510 Let’s discuss the JIT aspect. OMR’s compiler technology is named Testarossa, which began in 1999 as a dynamic (just-in-time) compiler for Java. Like many compilation frameworks, Testarossa has since grown to support other languages, including C++ and COBOL, and it also powers an emulator for IBM's System z. The technology behind Testarossa is quite versatile.
00:06:10.460 Testarossa is a method JIT. When we integrate it into MRI, we take the bytecode (instruction sequence) for a Ruby method and pass it to the Testarossa JIT compiler embedded within MRI. Compilation then moves through several components, including an intermediate language (IL) generator and an optimizer, which transforms the IL to increase execution speed.
00:06:53.639 The generated intermediate language is then sent to a code generator that produces executable code, which is emitted into a code cache. This code cache holds addresses for generated methods, allowing them to be executed on subsequent calls. Notably, other components in the runtime can invoke JIT-compiled methods, and regular interpreter functionality can still be applied where necessary.
00:07:44.819 Another key feature of Testarossa is its profiler, which collects runtime information from the interpreter, helping the optimizer make better decisions. To date, our integration work has focused on ensuring functional correctness because, while faster execution is desirable, it shouldn’t compromise the integrity of Ruby applications. We've avoided making any major changes to Ruby’s core functionality while developing this.
00:08:41.000 So far, everything we’ve accomplished has been to keep existing Ruby applications, including Rails, functional. To demonstrate integration better, I’ll show a snippet of code that gets compiled with JIT. There are specific stages to initialize the JIT, which require passing in addresses of certain global variables, ensuring the JIT has proper references.
00:09:25.590 Upon initialization, the JIT setup code is inserted into Ruby VM execution, ensuring everything ties back into the interpreter when needed. As for the compilation control strategy, we use a simple counter that is decremented each time we check whether a method has been JIT-compiled; once the count goes below zero, we trigger JIT compilation.
00:10:11.040 At runtime, we continue to invoke the JIT-compiled native code. To achieve this, we check if a method has a JIT body or if specific requirements are met. If not, we fall back to interpreting that method, allowing support for various Ruby code functionalities.
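A minimal Ruby sketch of that control flow is below. It only models the logic; the real mechanism is C code inside MRI, and the names InterpretedMethod, JIT_THRESHOLD, jit_compile, and interpret are illustrative stand-ins rather than anything from the actual patches.

```ruby
# Sketch only: models the counter-based compilation trigger and the
# JIT-body-or-interpreter dispatch described above.
JIT_THRESHOLD = 1_000  # assumed invocation count, not a value from the talk

class InterpretedMethod
  def initialize(iseq)
    @iseq      = iseq
    @countdown = JIT_THRESHOLD
    @jit_body  = nil
  end

  def invoke(*args)
    # Fast path: a compiled native body already exists for this method.
    return @jit_body.call(*args) if @jit_body

    @countdown -= 1
    if @countdown <= 0 && compilable?(@iseq)
      @jit_body = jit_compile(@iseq)   # hand the bytecode to the JIT
    end
    interpret(@iseq, *args)            # otherwise, keep interpreting
  end

  private

  # Stand-ins for the embedded Testarossa compiler and MRI's interpreter loop.
  def compilable?(_iseq)
    true
  end

  def jit_compile(iseq)
    ->(*args) { interpret(iseq, *args) }
  end

  def interpret(_iseq, *_args)
    nil
  end
end
```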
00:11:16.380 Regarding the Ruby bytecode, here’s a simple method example: multiplying a number by itself. The generated instruction sequence involves several operations. The opcodes for Ruby are defined in a specific format that includes instruction names, operands, and associated C code.
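As a concrete illustration (not taken from the slides), CRuby can disassemble such a method itself; for x * x the instruction sequence contains opcodes along the lines of getlocal and opt_mult, each defined in insns.def alongside the C code that implements it (exact opcode names vary between Ruby versions).

```ruby
# Print the instruction sequence for a method that multiplies a number by itself.
code = <<-RUBY
  def square(x)
    x * x
  end
RUBY

puts RubyVM::InstructionSequence.compile(code).disasm
```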
00:12:51.300 Ruby's opcodes present a unique challenge due to their complex semantics. Testarossa's intermediate language is tree-based, with each expression represented as a tree of nodes. When translating Ruby bytecode, you can see a clear transformation from one representation to the other, especially when comparing the resulting trees with the structures originally produced for Java.
00:13:40.860 However, it’s important to note that Ruby's intricate bytecode semantics lead to significant expansion in the IL: even a simple opcode like 'getlocal' produces a fairly deep tree, because reaching a local variable requires several dereferencing steps through the executing frame's context.
00:14:30.210 To implement our JIT, we mimic the Ruby interpreter to maximize compatibility. This means any changes that could affect the operand stack are carefully managed. We prioritize implementing simple opcodes, like 'getlocal', directly in IL. For more complex operations, like 'defined?', we invoke callbacks into the runtime rather than generating complex inline code.
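A hypothetical sketch of that translation policy: simple opcodes get an inline IL shape, while complex ones are lowered to a call into an existing runtime helper so the JIT never has to replicate their semantics. The opcode names are real MRI opcodes, but the IL node types and helper names here are invented for illustration.

```ruby
# Illustrative IL node types, not Testarossa's actual IL.
ILLoadLocal  = Struct.new(:slot)             # inline load from the frame
ILHelperCall = Struct.new(:helper, :args)    # call out to a C runtime helper

def translate(opcode, operands)
  case opcode
  when :getlocal
    # Simple opcode: model it directly as IL the optimizer can see through.
    ILLoadLocal.new(operands.first)
  when :defined, :expandarray
    # Complex opcode: emit a call to a helper that reuses interpreter logic.
    ILHelperCall.new("rb_jit_#{opcode}_helper", operands)
  else
    # Default for anything not yet handled specially.
    ILHelperCall.new("rb_jit_exec_#{opcode}", operands)
  end
end

p translate(:getlocal, [3])
p translate(:defined, [:ivar, :@x])
```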
00:15:14.050 We aim for efficiency with patterns we recognize during execution, allowing the JIT to optimize where possible. That said, we rely on the interpreter for more challenging situations to ensure robust execution throughout.
00:16:15.080 Currently, we base our project on Ruby 2.2.3, having recently upgraded from Ruby 2.1.5. Most opcodes are well supported, and we are testing various Ruby applications, including the Rails framework.
00:17:18.070 Now let’s discuss performance. During our benchmarking with tools like bench9000, we observed that on certain micro-benchmarks performance sometimes more than doubles. We are continuing to work towards that ambitious Ruby 3x performance goal. Larger, more production-like applications show mixed results, including both speedups and some slowdowns.
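For context, the micro-benchmarks in question are small, CPU-bound kernels; the following is a hedged illustration of that kind of workload, not one of the actual bench9000 benchmarks.

```ruby
require 'benchmark'

# A tight numeric loop: the kind of code where a method JIT shows its largest
# gains, in contrast to an I/O- and allocation-heavy Rails request cycle.
def accumulate(n)
  total = 0
  i = 0
  while i < n
    total += (i * i) % 7
    i += 1
  end
  total
end

puts Benchmark.measure { accumulate(5_000_000) }
```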
00:18:35.330 We find that while we compile methods, we have not yet seen noticeable performance increases within a standard Rails application workload. As we consider our future direction, we recognize challenges inherent in working with MRI’s dynamic behavior and the access it allows to internal data structures, which complicates how the JIT operates.
00:19:57.300 That said, there’s also significant opportunity here. The technology that's been honed for Java has great potential for Ruby. There are possibilities for speculative optimizations and specializations based on knowledge about class hierarchies or method receivers.
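As a hypothetical example of what such speculation could exploit: when a call site only ever sees one receiver class in practice, a JIT can guess that class, guard on the guess, and specialize or even inline the call, deoptimizing if a different class ever arrives.

```ruby
# The call to #area below could in principle receive any object, but in this
# program it is monomorphic: the receiver is always a Circle. A speculative
# JIT can exploit that by guarding on the class and specializing the call.
class Circle
  def initialize(r)
    @r = r
  end

  def area
    3.14159 * @r * @r
  end
end

shapes = Array.new(1_000) { Circle.new(rand) }
total  = shapes.map(&:area).inject(0, :+)  # monomorphic call site: Circle#area
puts total
```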
00:20:39.410 Moreover, the future may bring recompilation abilities, which have proved beneficial in Java but are yet to be fully developed in Ruby. Improvements in profiling methodologies would significantly enhance optimization capabilities.
00:22:16.050 For Ruby, IBM is eager to contribute to the Ruby 3 by 3 initiative, employing our extensive expertise in high-performance language technologies that date back to the 1980s. Our goal is to facilitate enhancements in Ruby’s performance by collaborating on improvements and sharing design concepts for optimal integration.
00:23:10.030 While we aren't open-sourced yet, we are releasing a Ruby technology preview today. You can follow the provided URL to start testing this technology through a GitHub project that links to a Docker container, and we welcome any feedback.
00:24:14.770 As a compiler developer, I’ve watched the renaissance that LLVM brought about in compilation technology. I envision a day when OMR fills a similar role for runtime technologies, hosting innovative research or serving as a common foundation underneath many languages' core runtimes.
00:24:56.840 Thank you very much for your attention! Feel free to reach out to us, and I will be around during the conference for any questions.
00:25:48.000 Now, I’m happy to take any questions you might have.
00:30:32.280 Q: Do you have a timeframe on open sourcing this technology, and if not, who can we contact to get updates? A: The timeline is currently undetermined, but you can start by reaching out to John and then work your way down.
00:31:00.000 Q: How can we get this integrated with Travis for easy testing? A: Email Mark or me, and we can communicate further. Q: Any insights into changing Ruby bytecode to enhance performance with Testarossa? A: We’d prefer to reduce the workload of any single bytecode while considering trade-offs for interpreter speed.
00:32:07.666 A: We aim for less code in C and more in Ruby to improve optimization. With more Ruby code, we would have the flexibility to integrate inlining and other advanced optimizations.
00:32:41.000 Q: Are there plans for integrating the GC technology the way you have with the JIT? A: There will be a talk about the GC experiments on Sunday. We are actively working on that.
00:33:17.000 Thank you for your questions! We appreciate your interest and look forward to any discussions over the coming days!