Application Security
Using Ruby In Security Critical Applications
Tom Macklin • November 15, 2015 • San Antonio, TX

Summarized using AI

The video titled "Using Ruby In Security Critical Applications" by Tom Macklin, presented at RubyConf 2015, delves into the various security measures taken to enhance the Ruby programming language for applications that require high security. Macklin draws from his experiences at the US Naval Research Laboratory and emphasizes the importance of simplicity and effective architectural design in programming and security.

Key points discussed include:
- Understanding 'Security Critical': Macklin opens by clarifying the term "security critical" and sharing his insights from past experiences, stressing that security cannot be guaranteed solely through assurances but instead requires meaningful evidence and effective architecture.
- Assurance Principles (NEAT): He introduces the NEAT principles—Non-bypassable, Evaluatable, Always invoked, and Tamper-evident—highlighting their importance in establishing robust security frameworks throughout the system's architecture.
- Security Controls: Emphasizing layers of security, Macklin discusses how integrating the right security controls at appropriate layers is crucial. He advises on utilizing operating system controls that prevent unauthorized access to sensitive data and systems.
- Use Case Examples: Macklin offers a composite use case to demonstrate security principles in action, showing that secure system architecture can significantly reduce vulnerabilities by adopting simpler design principles.
- Community Engagement for Security Enhancements: The speaker also shares initiatives aimed at improving Ruby’s security and invites community involvement to bolster these efforts.
- Integration and Isolation: He discusses the significance of isolating processes within services and emphasizes enterprise integration as a vital aspect to maintain security, particularly with regard to communication protocols and database access.
- Future Aspirations: Towards the conclusion, Macklin highlights future objectives that include automating security rule sets, employing monads for validation, and exploring tools that can enhance security measures in Ruby applications.

Overall, Macklin's presentation emphasizes that thorough architectural thinking, community collaboration, and systematic security measures can strengthen Ruby applications in environments where security is paramount. His insights encourage developers to adopt a methodical approach to security while fostering collaboration within the Ruby community to tackle security challenges collectively.

Using Ruby In Security Critical Applications by Tom Macklin

We’ve worked to improve security in MRI for a variety of security critical applications, and will describe some of our successes and failures in terms of real-world applications and their various runtime environments. We will describe some of the security principles that guide our work, and how they fit in with the ruby culture. We will also introduce some objectives we have moving forward to improve ruby’s security, and ways we’d like to engage the community to help.

Help us caption & translate this video!

http://amara.org/v/H1hn/

RubyConf 2015

00:00:15.200 All right, well, uh, welcome everybody to my talk. Thanks for coming. I hope everyone's having a good conference; I know I am. Is everybody learning a lot?
00:00:22.400 Excellent! I try to leave a few minutes when I do talks because I learn so much in conferences, and I want to talk about the stuff I'm learning in other people's talks more than what I came to talk about.
00:00:34.760 So if I get on a side note about something that I heard, the last talk I was at was phenomenal. But anyway, I hope you all get a lot out of my talk today.
00:00:41.800 Before I say anything else, let me get this disclaimer out of the way: I work for the US Naval Research Laboratory, but my talk today reflects my opinions based on both my professional and personal experiences.
00:00:48.520 My opinions do not represent those of the US Navy or the US government. As a matter of fact, if you do any research, you'll probably find that there are a lot of people in the government who disagree with me on many things.
00:01:00.680 Also, another disclaimer: I tend to say 'we' a lot when I talk because I have a really close-knit team, and it's an awesome team. We argue about stuff and don't always agree, but when I say 'we,' I'm not referring to big brother or all the developers I work with.
00:01:14.040 I'm subconsciously referring to the fact that we try to make as many decisions as we can as a team.
00:01:19.280 So, I apologize in advance when I say 'we.' Now, a little bit about me: I consider myself a good programmer, not a great one, but I like to keep things simple.
00:01:26.600 I study a martial art called Aido, and in Aido, we have a saying: an advanced technique is just a simple technique done better. I like to apply that, not just in martial arts but in all aspects of my life, and programming is no exception.
00:01:40.159 Everything I do and talk about has the underlying theme: keep things as simple as you possibly can.
00:01:46.040 Regarding the Naval Research Laboratory, it was started in 1923 by Congress on the recommendation of Thomas Edison, who believed we needed a naval research lab. The group I work in, the Systems Group, has come up with some cool technology.
00:02:04.799 Most notably, the onion router (Tor) came out of NRL, along with foundational technologies for virtual private networking.
00:02:09.000 There's a great paper from 1985 called "Reasoning About Security Models," written by Dr. John McLean, who is my boss's boss's boss's boss's boss.
00:02:16.680 This paper discusses System Z, and if you're into academia, it's a really cool theory about security.
00:02:21.200 All that said, my talk is not about anything military-related; it's not academic, and it’s definitely not buzzword bingo. I had a really cool buzzword bingo slide, but I took it out because CCS was way better.
00:02:34.319 So, what am I going to talk about? I want to spend some time unpacking what I mean by 'security critical'; as we've just heard in the last talk, people throw phrases around that mean different things to different folks.
00:02:51.920 So, I want to clarify what I mean by 'security critical.' Sorry about that! I also want to work through a use case, which isn’t an actual case but rather a composite of experiences I've had.
00:03:05.599 It borrows from systems I've worked on and developed, but it's not representative of any system we've ever built. The main reason I'm here is my next point: next steps.
00:03:30.480 We have many initiatives we are interested in pursuing to enhance our ability to use Ruby in security-critical applications. Some of these initiatives we know how to do well, others we think we know how to do but probably wouldn't execute well, and others we know we can't do.
00:03:43.640 If anything you see on my next step slides resonates, please come talk to me after the talk, as we’re eager to get help from people who want to do cool stuff with security and Ruby.
00:03:54.720 There’s a great talk I attended that influenced my thinking about Ruby and security back in 2012. It was at a conference called Software Craftsmanship North America, which I highly recommend.
00:04:07.239 Uncle Bob gave a talk titled "Reasonable Expectations of a CTO"—if you haven’t seen it, look it up on Vimeo. I won’t summarize it for you, but as you watch it, just add security to the list of problems systems have.
00:04:19.000 This insight resonates even more today than when he first gave the talk in 2012. When we talk about computer security, one essential thing to discuss is assurance.
00:04:42.240 The verb "assure" usually means telling you that everything will be okay, that there's no problem. But when I talk about assurance, I'm not just telling you everything will be okay.
00:04:54.080 What’s the first thing you think when I tell you everything's going to be okay? You think something's wrong. So I don’t want to assure you of anything.
00:05:01.479 What I want to do is to provide you with assurances that allow you to make your own decisions.
00:05:07.120 Even if you don’t like the assurances received from a security analysis, at least you know where you stand, and that’s genuinely useful.
00:05:12.639 When I talk about assurances, I’m not trying to promise everything will be okay; I’m talking about evidence.
00:05:18.199 We've all seen this chart before, and whether you're trying to make money, make the world a better place, or solve a security problem, this chart is unavoidable.
00:05:25.440 When we go about solving a security problem, we encounter this as well. We have a few choices: we can do something clever to outsmart attackers, we could buy a cool library that promises security, or we could hire an external consultant.
00:05:39.720 However, don’t do any of that because attackers are clever—more clever than me or you. What's more, there are lots of them, and they have plenty of time.
00:05:52.280 You build a feature, then it's on to the next one; they are there hammering at your systems day after day—sometimes in teams, if you're unlucky enough to be a target. Most of you aren't, but we make mistakes in our code; it's simply a fact of life.
00:06:04.639 There will be bugs, including security bugs, so I'm going to discuss how we can defend ourselves. A key point I want to make today is that a security critical system should have the right security controls in the right places and with the right assurances.
00:06:16.319 Let's say that again: our security critical system should have the right security controls in the right places and with the right assurances.
00:06:26.800 I like to achieve this through architecture. We construct architecture, and many times, the principles that make code awesome are the same principles that make it secure.
00:06:34.479 We want to reduce complexity, localize functionality, and improve test coverage, among other goals. But we also must ensure we have the right controls in the right places.
00:06:47.440 A firewall at the front door won't keep bad guys from walking in, just as a guard with a gun in your server room isn't going to stop hackers from reaching your server over the network.
00:06:59.120 You need to consider the architecture of your code, your design, and your test coverage while also thinking about where and how you're using various controls. More specifically, we need to layer those controls in our systems.
00:07:19.440 Some of these acronyms may not be familiar to you; I'll explain them later, but these are the security control layers you should consider the minimum for your application.
00:07:30.560 You have your operating system, your security framework, and then your application framework. These layers serve as the foundation for our assurances.
00:07:38.599 But what are these assurances? Are they something squishy that we can’t measure? Well, kind of, but we can discuss them in a semi-structured way.
00:07:44.319 I like to talk about assurances in terms of this neat principle. NEAT stands for Non-bypassable, Evaluatable, Always invoked, and Tamper-evident.
00:07:56.680 The more you can measure your security controls and affirmatively answer these questions by nodding instead of shaking your head, the more security you will have.
00:08:03.879 Let’s go through these quickly. Non-bypassable: if you've got a circuit breaker, it prevents your electronics from frying due to an excess current.
00:08:09.400 It will trip the breaker to stop the current flow, but if there's a wire going around the circuit breaker, linking the power directly to your device, it won't help.
00:08:14.479 For a good control to work, it has to be the sole pathway from point A to B.
00:08:21.240 Evaluatable is a little harder to articulate. There are tools like symbolic execution engines and static analysis tools to measure and assess code security.
00:08:29.479 For most of you, a complexity score like the one on the slide is a practical gauge of how readable and evaluatable your code is. A score below 20 is ideal.
00:08:40.519 If your code needs to be secure, aim to keep it well below 20. Keeping methods small, minimizing branches, and avoiding functions like eval are critical practices.
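To make that last point concrete, here is a small sketch of my own (not from the talk) contrasting an eval call with an allowlisted dispatch; the report methods are assumed to exist elsewhere.

```ruby
# Hypothetical sketch: the eval version is hard to evaluate for safety,
# while the allowlist version stays tiny and obvious.
def run_report_unsafe(name)
  eval("#{name}_report")   # attacker-controlled text becomes code
end

ALLOWED_REPORTS = %w[daily weekly].freeze

def run_report(name)
  raise ArgumentError, "unknown report: #{name}" unless ALLOWED_REPORTS.include?(name)
  public_send("#{name}_report")   # assumes daily_report / weekly_report exist
end
```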
00:08:46.279 Always invoked: I think the HTML sanitizer in ActionView is a great example.
00:08:51.640 At first, it was something you could call if you wanted, but you could also easily forget.
00:08:58.360 At some point, they integrated it into ActionView so that it's invoked for you by default.
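For context, a rough Ruby illustration of my own: the escaping itself is plain library code, and the "always invoked" property comes from ActionView applying it to every interpolated value by default since Rails 3, so bypassing it requires an explicit raw or html_safe call.

```ruby
require "erb"

# The kind of escaping ActionView now applies for you on every <%= %> output:
ERB::Util.html_escape("<script>alert(1)</script>")
# => "&lt;script&gt;alert(1)&lt;/script&gt;"
```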
00:09:05.200 I've not used Rails lately; I'm one of those weird Ruby people. But the slide shows a C example, and I like enforcing things like this in headers.
00:09:11.720 That way the compiler alerts anyone doing something foolish, and it becomes a learning experience.
00:09:19.200 Lastly, tamper-evidence can be tricky to describe.
00:09:24.240 Coal miners used to bring canaries into mines. These would succumb to toxic gas before it harmed the miners, providing an early warning system.
00:09:32.680 In binaries, we do something similar: we place little cookies on the stack that get clobbered if there's a buffer overrun, so the overrun can be detected.
00:09:40.440 This is a simplification and not foolproof, but there's more to learn about ways we protect binaries.
00:09:46.279 I have my checklist, and I want to discuss some controls and assurances regarding those controls and see how we’re doing.
00:10:02.640 We're going to use this checklist throughout the rest of this talk. The use cases I’m discussing represent one example divided into three parts. It doesn’t pertain to any actual system I've built, but I believe they illustrate good security principles.
00:10:20.440 At the base of your system are your operating system controls. No matter how secure your code is, if your operating system isn't configured correctly, you're in trouble.
00:10:40.160 The main security feature of your OS is access control. Security geeks talk about mandatory access controls; they can sound complex, but the idea is straightforward: the policy is set by the administrator at boot and cannot be modified afterward.
00:10:56.720 Because mandatory access controls don't change at runtime, we can rely on them when reasoning about what our code can do. So use your operating system's access control mechanisms, preferably in a mandatory fashion; it simplifies your system.
00:11:09.000 A use case might be that you have multiple databases, and you want to ensure that users on different networks can only access those databases they are authorized for.
00:11:24.360 You need to be extremely careful about what gets into those databases. Rather than relying on the best practices of our code to prevent SQL injections, we can grant our applications read-only access to these databases.
00:11:40.920 This way, regardless of how poor our network application is, there’s no way it can read from or write to databases it’s not allowed to access. Instead, we implement a little 'glue' router to ensure the right requests go to the right places.
00:11:57.240 This relies on the fact that these databases have read-only permissions, meaning an attacker would need to compromise the entire server to bypass that, thereby giving us reasonable assurance.
00:12:10.080 Evaluatable: the security critical part of this code is just that little router taking write requests and sending them to the database owners. I can keep this code small and evaluatable, potentially using a type-safe language like Rust.
00:12:24.000 Always invoked: every file system call must go through the kernel, assuring it's invoked consistently. I was trying to find an example demonstrating tamper evidence but decided against it to avoid boring you.
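As a minimal sketch of that little glue router (the names and interfaces are hypothetical; the real assurance comes from the operating system mounting the databases read-only for every process except this one):

```ruby
# Tiny, evaluatable write router: the only process holding writable handles.
class WriteRouter
  def initialize(owners)
    @owners = owners   # e.g. { "orders" => orders_db, "users" => users_db }
  end

  def route(request)
    owner = @owners.fetch(request.fetch("database")) do
      raise "no writable database for #{request["database"]}"
    end
    owner.write(request.fetch("payload"))
  end
end
```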
00:12:34.640 Some takeaways: use your operating system's access control mechanisms whenever possible and wrap them into your application, perhaps via a Foreign Function Interface (FFI). But don't stop there, because keeping these controls in place matters even though it can be challenging during development.
00:12:43.360 Day-to-day development with these controls enabled can be tedious and even risky; it can crash your development environment. Use stubs to sidestep those complications during development.
00:13:00.800 We reached a point where a third party was helping us develop our application but didn't have our MAC infrastructure. Instead, we provided them with this stub, which they used to code a cool application.
00:13:15.200 When they handed the code back to us, it was relatively easy to integrate it with our application after removing the stub.
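A rough sketch of that stub arrangement (all names here are illustrative; a real wrapper would bind whatever calls your MAC framework actually exposes through FFI):

```ruby
require "ffi"

module Mac
  class Real
    extend FFI::Library
    ffi_lib FFI::Library::LIBC
    # attach_function :your_policy_call, [...], :int   # real bindings go here

    def enforce!(label, path)
      # call into the MAC framework; raise if the operation is denied
    end
  end

  # Development stub: same interface, no MAC infrastructure required.
  class Stub
    def enforce!(label, path)
      warn "MAC stub: would enforce #{label.inspect} on #{path}"
    end
  end
end

POLICY = ENV["MAC_STUB"] ? Mac::Stub.new : Mac::Real.new
```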
00:13:29.440 Remember, implementing mandatory access control only works if the application can't change its own policy. Therefore, be cautious about giving your application system privileges.
00:13:45.200 For example, if a popular library like Stagefright on Android makes a small mistake, it could have catastrophic consequences if granted system privileges. It's crucial to give such privileges careful thought; get it wrong and you no longer control your system.
00:14:06.720 One of our objectives is to simplify the creation of test doubles for our file system objects, as we heavily rely on the operating system in security implementations.
00:14:24.320 I have discussed this concept with some people from Test Double, but it still requires further thought and conversation. If you think it’s a bad idea or would like to help, please let me know.
00:14:33.600 Next, moving through the layers of the onion of our use case, I’ll refer to them as our Services Security Framework.
00:14:51.080 If we're breaking our application into various processes, we must integrate them. However, integration points are prime locations for attackers to penetrate your system.
00:15:03.760 Things like interprocess communication or database access can lead to vulnerabilities, including cross-site request forgery (CSRF), internationalization attacks, or SQL injection, and it's easy to underestimate what can go wrong at these boundaries without careful management.
00:15:19.199 I’m reminded of an insightful moment in Ender's Shadow, where Bean describes that as your attack surface expands, defense becomes untenable. Fortunately, we're not facing a horde of aliens, but the risks are still significant.
00:15:31.920 Even with our extensive experience, I don't have enough confidence to guarantee that our code would stay secure through a decade of changes without letting threats through.
00:15:51.440 So we follow the principle of separate, isolate, and integrate. When we split components into separate processes along data boundaries, we use a domain-specific language to express the security policy enforced between them.
00:16:05.679 When data leaves a process, we want to protect both the data itself and the process that receives it.
00:16:20.320 Let’s take an oversimplified example: we want to make sure no semicolons reach storage, as many web attacks rely on them.
00:16:30.480 This isn’t an absolute solution, but it’s a useful policy for many web threats. Please note that the following examples do not consider internationalization.
00:16:45.600 When examining application layer preprocessing, this code appears to be doing some form of escaping that transforms semicolons into alternate characters before storage.
00:17:04.480 It then resolves the escaped characters back to the original form when rendering. I trust this code, but it does carry some complexity.
00:17:21.199 Imagine applying policies like this to an application with hundreds of such rules; it could get unwieldy.
00:17:31.920 However, the code for the actual storage process is much simpler; it just looks for semicolons in the incoming data and only permits data without them.
00:17:43.479 If a semicolon appears in stored data, it indicates either a severe flaw in our application or possible tampering. This example illustrates a form of tamper evidence without high-tech solutions like stack canaries.
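Here is roughly how I picture those two pieces fitting together (a simplified sketch of my own, ignoring internationalization as noted above):

```ruby
# Application layer: carries the complexity of escaping and unescaping.
module AppLayer
  ESCAPED_SEMICOLON = "&#59;"

  def self.before_store(text)
    text.gsub(";", ESCAPED_SEMICOLON)
  end

  def self.before_render(text)
    text.gsub(ESCAPED_SEMICOLON, ";")
  end
end

# Storage process: one tiny, evaluatable rule. A semicolon showing up here
# means the application layer was broken or bypassed (tamper evidence).
module StorageGuard
  def self.accept?(text)
    !text.include?(";")
  end
end

StorageGuard.accept?(AppLayer.before_store("a; b"))   # => true
StorageGuard.accept?("a; b")                          # => false
```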
00:17:56.320 Ruby is excellent for monkey-patching behavior into classes, which lets hooks like this trigger automatically, as demonstrated in the last talk.
00:18:06.679 That simple check carries a lot of weight: because the complexity of normalizing the data lives before storage, a single, simple validation at the storage boundary is effective.
00:18:17.760 Moving on to more related technologies: if any of you have worked with parsers like Antlr or Treetop, they enable you to customize behaviors in your code for specific parsing tasks.
00:18:35.440 Another fantastic tool is checksec.sh, a simple bash script that analyzes binaries and tells you which exploit mitigations were compiled in.
00:18:50.639 We use it regularly, and the project's site links out to resources where you can learn a great deal about binary exploitation.
00:19:06.560 PoC||GTFO is another noteworthy publication, often described as 'why the lucky stiff' for security geeks. It's humorously technical, albeit challenging to follow at times.
00:19:15.760 Finally, the Spanner blog discusses web application penetration and remains one of the best sources for this topic.
00:19:26.679 If you've dealt with SELinux or other complex policy frameworks, you'll recognize how powerful and effective they can be. Yet, maintaining the state of such systems can still be challenging.
00:19:40.480 I'm a proponent of simplifying the use of custom-oriented domain-specific languages (DSLs) tailored to the specific issues we face. This is a very Ruby-centric approach.
00:19:55.200 Keep your enforcement checks as simple as possible, as complexity can lead to obscure bugs, such as time-of-check-to-time-of-use issues.
00:20:05.360 These bugs occur when a condition is checked, proceeding to another operation that may inadvertently alter the data during the interval.
00:20:12.919 Complicated conditions can easily lead to these problems; thus, it’s vital to keep checks straightforward to mitigate the risk of these nasty security bugs.
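For anyone who hasn't run into one of these, a small Ruby illustration of the check-then-use gap (the path and data are made up):

```ruby
path = "/tmp/example.cfg"
data = "safe contents\n"

# Time of check ... time of use: the file (or what it points at) can change
# in between, so the earlier check proves nothing about the later write.
if !File.symlink?(path) && File.writable?(path)
  File.open(path, "w") { |f| f.write(data) }
end

# Keeping the check trivial, or folding check and use into a single operation,
# shrinks the window. For a "create only if absent" rule (raises Errno::EEXIST
# if something already created the file):
File.open(path, File::WRONLY | File::CREAT | File::EXCL) { |f| f.write(data) }
```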
00:20:20.640 I have an example linked here showcasing an incident involving Mat Honan of Wired Magazine, where clever attackers chained minor security oversights into a significant breach.
00:20:35.279 So, next steps. There was a talk Tom Stuart gave in Barcelona in 2014 titled 'Refactoring Ruby with Monads.' I find the concept of monads exciting but don’t necessarily practice it well yet.
00:20:47.560 I strongly believe monads could be used to wrap content we've ingested from untrusted sources, ensuring proper validation before storage.
00:21:03.440 Additionally, we’ve considered leveraging immutability for its security properties alongside performance and code quality benefits.
00:21:17.200 There is much we can do to enhance our code’s mechanisms and ensure promised validation.
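My loose sketch of that wrapping idea (not code from the talk): untrusted input stays boxed and frozen until its validation has run, and an unvalidated value simply can't be unwrapped.

```ruby
class Untrusted
  def initialize(value)
    @value = value.dup.freeze   # immutability: nothing mutates it while boxed
  end

  def validate
    yield(@value) ? Validated.new(@value) : Invalid.new
  end
end

class Validated
  def initialize(value)
    @value = value
  end

  def unwrap
    @value
  end
end

class Invalid
  def unwrap
    raise "refusing to hand back unvalidated input"
  end
end

Untrusted.new("name=widget").validate { |v| !v.include?(";") }.unwrap   # => "name=widget"
Untrusted.new("name=x;--").validate { |v| !v.include?(";") }.unwrap     # raises
```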
00:21:28.840 Consequently, we’ve been focusing on improving tools to automate our security rule sets, tackling the occasionally mundane process of manually constructing these checks.
00:21:43.440 Now, back to what affects most of you: writing applications.
00:21:56.400 There are numerous security decisions required in the app layer that can’t be resolved at the service or integration levels.
00:22:09.200 A great case in point is XML—I try to avoid XML when possible, but sometimes it's unavoidable.
00:22:24.240 Building a high-assurance, secure XML processor isn’t just complex; it's highly challenging given the scope of available XML libraries, many of which are sophisticated.
00:22:40.320 Achieving our goals means breaking down the effort into smaller, manageable pieces, then integrating them via well-defined mechanisms that the operating system enforces.
00:22:55.120 A strategy we like to champion is what I define as 'binary diversity'. Utilizing distinct libraries for diverse functionalities complicates an attacker's task.
00:23:10.400 By strategically separating functions and employing varying libraries, we gain a measure of defense—even if it’s just a partial line of protection.
00:23:25.200 As Justin Searls discussed, breaking down functions into smaller components simplifies testing. Validating both simple and intricate mechanisms leads to more robust software.
00:23:43.760 We might have very secure code, but the dangers of vulnerable libraries still lurk. Even reliable libraries like Psych could have obscure security flaws.
00:23:57.440 It's crucial to build fault isolation into your systems. Evaluating where we currently stand highlights challenges we face.
00:24:10.280 We've established a solid, non-bypassable pipeline to our stored data, and we've made sure it's the only path in.
00:24:24.159 Evaluatable is harder here: the codebase involved is extensive, so the scope for evaluation is somewhat limited. However, there are ways to ensure your checks are reliably invoked.
00:24:38.459 For instance, in Rails you can use instrumentation hooks so that validation runs automatically every time the XML is accessed.
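One way I could imagine wiring that up (a sketch only; the event name, validator, and parse helper are made up, and this assumes the activesupport gem):

```ruby
require "active_support/notifications"

# Subscribe once at boot: every instrumented XML load gets checked.
ActiveSupport::Notifications.subscribe("xml.load.myapp") do |_name, _start, _finish, _id, payload|
  raise "XML used without validation" unless payload[:validated]
end

# Wherever XML is loaded, the event (and therefore the check) fires automatically.
ActiveSupport::Notifications.instrument("xml.load.myapp", validated: validator.ok?(raw_xml)) do
  parse(raw_xml)
end
```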
00:24:59.679 The brick walls in the earlier diagram aren't just decoration; they stand for practical tools like seccomp, which we use frequently.
00:25:14.760 Seccomp acts like a firewall between your process and the kernel: every operation, like a file read, ultimately becomes a system call, and seccomp restricts which system calls the process is allowed to make.
00:25:31.919 There are many system calls, ranging from ones you genuinely need to ones you should never use in a production context, and attackers know how to abuse the dangerous ones.
00:25:48.120 Utilizing tools like Seccomp secures your application; if there’s a security failure, it won’t compromise the OS entirely.
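To make that a bit more concrete, here is an illustrative, Linux-only sketch using Ruby's standard Fiddle library to flip on seccomp's strict mode; a real setup would use filter mode with a BPF allowlist instead, since strict mode leaves only read, write, exit, and sigreturn available.

```ruby
require "fiddle"

PR_SET_SECCOMP      = 22   # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1    # from <linux/seccomp.h>

libc  = Fiddle.dlopen(nil)
prctl = Fiddle::Function.new(
  libc["prctl"],
  [Fiddle::TYPE_INT, Fiddle::TYPE_LONG, Fiddle::TYPE_LONG, Fiddle::TYPE_LONG, Fiddle::TYPE_LONG],
  Fiddle::TYPE_INT
)

# After this call, the kernel kills the process on any system call outside the
# strict set, regardless of what a compromised extension tries to do.
prctl.call(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)
```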
00:26:02.640 Furthermore, consider using GRsecurity: it’s a controversial patch requiring recompilation of the OS, but it offers significant protections.
00:26:14.840 The Washington Post had an insightful article on November 5th that shared contrasting perspectives on GRsecurity versus Linus's stance.
00:26:30.320 For those interested in the Internet of Things, various tools, including OpenEmbedded, are valuable; however, in my experience, Buildroot shines for creating customized Linux distributions.
00:26:44.640 Ruby is also part of Buildroot's offerings.
00:27:02.000 Looking towards the future, we hope to simplify usage of Seccomp.
00:27:14.160 I intend to create a gem that helps block system calls that won't be needed in production. You could significantly diminish any application's attack surface using this gem, although it wouldn't work on Windows.
00:27:30.919 Next, we emphasize the importance of separating code into distinct processes and ensuring secure isolation as well as integration.
00:27:45.680 Being relentless in testing and designing your code is paramount.
00:27:59.559 I aim to enhance the availability of Seccomp tools for wider community use.
00:28:12.560 Robusta is another cool recent development. It's essentially a container running within the Java Virtual Machine that seals off native extensions if a security compromise occurs.
00:28:27.720 Most Ruby vulnerabilities stem from native extensions that we rely on heavily, so this could be significant for the Ruby community.
00:28:40.320 We’re also learning about MRuby. Unfortunately, a Birds of a Feather session on MRuby overlapped with my presentation, preventing me from attending.
00:28:56.919 MRuby gives us much more control over what goes into our binaries, which can make things considerably harder for attackers.
00:29:11.040 Incorporating flags available in GCC and Clang can also strengthen binaries, yielding impressive results.
00:29:26.160 I could share a picture of my cat, but I always find Zach and Cory's briefings entertaining.
00:29:38.240 Now let's briefly touch on security penetration testing. I sometimes do this work myself, and I want to demystify it a bit for those unfamiliar with it.
00:30:02.919 You may recognize this picture from The Lord of the Rings. An outer wall isn't an absolute defense; its purpose is to make an attack more cumbersome.
00:30:14.640 Focusing only on the outer barrier is what made Helm's Deep vulnerable, so you need to allocate resources to guard the critical areas behind it as well.
00:30:31.679 When contemplating whether to hire penetration testing services, always provide them with ample information; it allows them to deliver maximum value.
00:30:43.080 Building relationships with your penetration testers can yield invaluable insights. During testing, be transparent and cooperate with them for maximum effectiveness.
00:30:57.200 Don't merely test from the outside; Ruby folks know it's crucial to unit test all your classes, no matter how deeply embedded they are.
00:31:10.640 By allowing testers access from different layers, they can effectively assess your application’s vulnerabilities without compromising the production environment.
00:31:24.200 With that, I appreciate your attendance at my talk. I hope you found it engaging and insightful.
00:31:35.070 I have 10 minutes for questions, so feel free to ask!
00:31:43.080 So the question is: what’s my opinion on getting third-party penetration testers versus just executing automated vulnerability scans? It really depends on your goals.
00:31:57.440 That’s a vague answer, but I believe in balancing between employing automated scans with continuous integration while also engaging human testers.
00:32:13.839 Automated tools are practical, but trained penetration testers can discover vulnerabilities that automated tools might miss.
00:32:31.159 While recognizing the necessity of utilizing libraries, it’s wise to consider that using multiple libraries can widen the attack surface.
00:32:48.639 In my example, though, utilizing REXML for well-formedness allows other libraries to focus on deeper content checks, leading to stronger validation.
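A small sketch of that layering (REXML ships with Ruby; the deeper content checks would live in whatever richer library sits behind it):

```ruby
require "rexml/document"

def well_formed?(xml)
  REXML::Document.new(xml)
  true
rescue REXML::ParseException
  false
end

well_formed?("<order><item/></order>")   # => true
well_formed?("<order><item></order>")    # => false
```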
00:33:04.960 The practice of using preconditions and postconditions to create a secure architecture can be beneficial. If done right, it can significantly improve security processes.
00:33:17.239 Any other questions? Great! Thank you all again for attending!