00:00:20.869
Good morning, everyone! Probably a few more people will straggle in here, but I'll go ahead and introduce myself.
00:00:27.360
My name is Lyle Mullican, and I'm a consultant based in the mountains of Asheville, North Carolina.
00:00:32.730
I've been using Rails for a long time, since version 1, and a lot of my work is in the healthcare sector, where we take security very seriously.
00:00:39.660
So we're going to talk a little bit about application security this morning. It's been my observation over the course of my career that development teams have a lot of tools for specifying the features we expect our software to have.
00:00:53.100
We have user stories, wireframes, and mock-ups, but we don't have nearly as many tools for expressing the security characteristics that we expect our software to exhibit. There are companies that do have those tools, do a great job of this, and have a very strong security culture. If you work for one of those companies, that's great, but for many developers, especially in smaller teams and startups trying to get off the ground, we are often very focused on making the software work.
00:01:11.369
We aim to make it do what it's supposed to do, and we have well-defined expectations for user-facing behavior, alongside a vague expectation that along the way, it's going to be written in a secure manner—without a clear definition of what that actually means.
00:01:36.149
Occasionally, you might end up having conversations like this, particularly after there's been a well-publicized security incident in the news. A boss, a client, or a product owner might ask questions about the security posture of an application: something like, 'Is this application secure?' Or, if they're slightly more sophisticated, they might ask, 'How secure is the application?'
00:01:54.929
There's an assumption built into this kind of questioning that security is something we can actually achieve; that it's either a box we can check, or perhaps a spectrum with insecure on one end and secure on the other, where we know we're somewhere in between, trying to move closer to the secure end of the spectrum.
00:02:27.299
However, if you stop and think about the security features that we actually build and the kinds of security controls we try to implement, you quickly realize that this isn't how security works at all. So what are we talking about when we use the word 'security,' particularly in the context of software? We are really discussing risk and how we manage it.
00:02:58.530
The controls that we put in place are designed to limit the risks our software faces. Security becomes a kind of verbal shorthand for saying we believe we’ve accurately assessed the risks that our software faces, and that we’ve applied technical controls to bring those risks to an acceptable level for our business.
00:03:37.150
That phrase 'acceptable level' is crucial, and we will come back to that later. Any security control typically addresses a specific scenario, often at a single point in time.
00:03:57.099
People often use metaphors with security; for instance, consider living in a castle with towers and big stone walls. You might ask yourself if you are secure.
00:04:08.620
Well, it all depends on your threat. If your threat is a medieval land-based army, you're reasonably secure against it. However, if your threat is a modern army with modern technology, you're probably not secure at all.
00:04:19.690
A more technical example could be HTTPS; it has the 'S' right there in the acronym, standing for secure. To the general public, when they access a website and see that 'S' in the URL bar along with a little padlock, it signals a secure website.
00:04:37.240
As technologists, we know that it’s not the website that is secure. HTTPS tells us nothing about what the website is doing with our information. It's the connection to the site that's secure, and even then, it only secures against very specific scenarios that HTTP was designed to address.
00:04:56.040
Thus, any definition of security can only be based on a clear definition of a threat. Instead of asking whether something is secure, we should be asking: in what ways is it secure or not secure? I’d like to break that down into a few basic questions.
00:05:14.340
These are questions we should ask ourselves routinely as we build software. We can ask them at different levels and contexts, such as at the level of code when we introduce a new method or when we change the behavior of a method, or we could ask them about the system as a whole—our software application in general.
00:05:43.320
The first question, or really the first two questions, which go together, make up what we call a threat model. This means defining what we're worried about: what could go wrong in a security-relevant way? What could cause harm to the software, to the business, and to its users, and how likely are those things to actually occur? Once we understand that, we can ask ourselves what actions we could take to mitigate those threats.
00:06:10.020
We recognize that we will always need to make trade-offs. Often, the hardest question for developers is not what the threats are but how we prioritize them. We need to determine what deserves the most attention and what the acceptable level of risk is for a business.
00:06:40.860
The most secure design we can come up with may ultimately be impractical. For instance, if I'm deeply concerned about SQL injection (and I am), I could design a system where my application isn't allowed to communicate with my database at all. The application could print its queries on paper, and a DBA could interpret them and manually execute them on an air-gapped machine.
00:07:02.520
While this provides robust protection against SQL injection, it’s entirely impractical. Hence, we have to recognize we will land somewhere between that impractical design and a database that is publicly accessible on the Internet.
00:07:31.380
There are trade-offs we need to make regarding the security features we implement. So, how do I determine which ones are really necessary and which need immediate attention? I'll discuss one way to think about this; it’s certainly not the only way, as there are numerous approaches to developing a threat model.
00:08:06.170
When we talk about prioritization, we should consider this as a conversation. You can use a whiteboard with your team and start addressing these issues along two axes: one axis should measure the likelihood of a threat occurring, while the other axis should measure the potential damage if it occurs.
00:08:41.700
This is admittedly subjective, but it gives us a starting point. We can't quantify the statistical probability of a specific attack occurring, but we can compare scenarios based on what we know from industry trends, our application logs, or the nature of our business.
00:09:11.970
The idea is to establish priorities by identifying which threats are worth our attention. Some threats are common due to their ease of automation, while other threats may be less likely because they rely on targeting.
00:09:37.860
However, unlikely scenarios can still pose significant risks, and recognizing this allows us to prioritize effectively. By comparing threats this way, we can focus on those that are both likely to occur and likely to cause meaningful harm, ensuring they receive our primary attention.
00:10:13.529
This prioritization should also be revisited continually, as no threat model is ever complete. Much like when we iterate on our product prototypes, we should iterate on our security threat assessments as well.
00:10:32.519
Once we have a model documented, we can begin to ask further questions: What do we do about these threats? What controls do we already have in place? Where are our gaps? How can we prioritize efforts to close those gaps?
00:10:48.779
The good news is that designing security controls is typically straightforward because many other organizations face the same problems I do. Experts have already thought of effective solutions to address common vulnerabilities.
00:11:20.579
For example, we know how to prevent SQL injection and cross-site request forgery, but the real challenge lies in implementation. One key question is how to assess whether our existing security controls are effective.
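To make that concrete, here is a minimal sketch of what those well-known controls look like in a Rails app. The Order model and customer_name column are made up for illustration; the point is that ActiveRecord parameterizes values for you, and Rails ships CSRF protection that mostly just needs to stay enabled.

```ruby
# Unsafe: interpolating user input straight into SQL opens the door to injection.
# Order.where("customer_name = '#{params[:name]}'")

# Safe: let ActiveRecord parameterize the value.
Order.where(customer_name: params[:name])
# Or, when a SQL fragment is unavoidable, use a bind placeholder:
Order.where("customer_name = ?", params[:name])

# CSRF: Rails provides this control out of the box; the main job is
# making sure it stays enabled for state-changing requests.
class ApplicationController < ActionController::Base
  protect_from_forgery with: :exception
end
```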
00:11:49.630
The answer lies in the same way we determine if our software behaves as intended: we need to test it. My own journey as a developer saw me gradually adopting test-driven development principles and embracing automated testing.
00:12:09.730
Before transitioning to Rails, I primarily worked with PHP and found myself entering code in a text editor, opening my web browser, and hitting refresh to see if my changes worked. Eventually, I realized that method doesn't scale, and I recognized the undeniable power of automated testing.
00:12:43.290
Once I began to write effective tests, I discovered that learning to test compelled me to write better code. It made me think more critically about the code I was producing, and I believe that security testing will have the same impact.
00:13:10.730
When we think ahead about how we'll test the security controls we implement, we typically end up designing better ones. Therefore, security testing should be a significant part of your testing suite, integrated into your CI pipeline.
00:13:38.290
If you're not testing your security controls, someone else probably is, and you definitely don’t want to leave security testing to chance. There are several types of security tests we can utilize, and I’ll highlight a few categories to approach security testing.
00:14:12.050
The first type of security testing involves explicit tests for controls built into our applications. This resembles testing any other application's behavior. For example, in Cucumber, we might validate access control to ensure that a user can only view their own orders.
00:14:37.320
In the test, not only do I confirm that access is denied when I attempt to reach someone else's data, I also check exactly what the response looks like when I try.
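My example here is written in Cucumber; an equivalent sketch as an RSpec request spec might look like the following, assuming a Devise-style sign_in test helper, an Order model that belongs to a user, and conventional routes.

```ruby
# spec/requests/orders_access_spec.rb
require "rails_helper"

RSpec.describe "Order access control", type: :request do
  it "prevents a user from viewing someone else's order" do
    owner    = User.create!(email: "owner@example.com", password: "secret123")
    intruder = User.create!(email: "intruder@example.com", password: "secret123")
    order    = Order.create!(user: owner)

    sign_in intruder  # Devise test helper; adjust to your authentication setup

    get order_path(order)

    # Assert both that access is denied and what the response actually is:
    # here we expect a 404 rather than a redirect, or a 403 that would
    # confirm to the attacker that the record exists.
    expect(response).to have_http_status(:not_found)
  end
end
```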
00:14:58.100
These explicit tests cover exactly what I instruct them to since they will alert me if any security controls fail to operate as designed. However, they won’t catch vulnerabilities that I'm not already considering, which is where other testing types come into play.
00:15:19.740
The next category is static analysis testing, achieved using various tools. For instance, I'm sharing output from a tool called Brakeman, which many of you may already be using. It identifies potential SQL injection vulnerabilities by analyzing whether user-controlled input is passed into ActiveRecord class methods without sanitization.
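As a hypothetical illustration, this is the kind of pattern such a tool is designed to flag: a request parameter that travels through a level of indirection before being interpolated into raw SQL.

```ruby
class OrdersController < ApplicationController
  def index
    # params[:sort] is user-controlled input...
    @orders = current_user.orders.sorted_by(params[:sort])
  end
end

class Order < ApplicationRecord
  # ...and it ends up interpolated into a SQL fragment here, one level of
  # indirection away from the controller. This is the kind of data flow a
  # static analyzer reports as a possible SQL injection.
  def self.sorted_by(column)
    order("#{column} ASC")
  end
end
```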
00:15:45.690
Static analysis excels at tracking input handling across multiple levels of indirection, something that’s difficult for humans to do. These tools don’t execute application code; they analyze its structure.
00:16:09.970
However, false positives can occur during this analysis, which can be frustrating for developers. I personally view a false positive from Brakeman as a code smell, suggesting I've made my code too complex for Brakeman to interpret.
00:16:31.320
Another vital aspect of static analysis is auditing dependencies. Most applications rely not only on their own code but also on external gems and other dependencies, any of which may carry vulnerabilities.
00:16:56.370
As part of a comprehensive automated security testing strategy, auditing the state of those gems is essential. One tool I use for this purpose is Bundler Audit. It checks the gems in my Gemfile against an open-source Ruby advisory database and tells me which ones have known vulnerabilities that require a patch or a version upgrade.
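As a rough sketch of how both tools might be wired into a build, here is a hypothetical Rake task; the exact flags, and whether a warning should fail the build, are decisions for your own pipeline.

```ruby
# lib/tasks/security.rake
# A hypothetical task that runs static analysis and a dependency audit,
# and fails the build if either tool reports a problem.
namespace :security do
  desc "Run Brakeman and Bundler Audit"
  task :check do
    sh "bundle exec brakeman --exit-on-warn --quiet"
    sh "bundle exec bundle-audit check --update"
  end
end
```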
00:17:23.170
The next category of security testing is dynamic analysis. Unlike static analysis tools, which don’t execute code and only evaluate its structure, dynamic analysis tools run code without understanding its internals, analyzing the outputs instead.
00:17:51.790
In the business world, dynamic analysis might be regarded as traditional security testing. It generates unexpected inputs to see how the application reacts, which is useful for identifying potential string handling issues, even if they don't represent specific security vulnerabilities.
00:18:20.410
It's wise to grant these tools access to an authenticated user account, because applications are often locked down fairly well for unauthenticated visitors yet place too much trust in a user once they've logged in.
00:18:47.710
Allowing a dynamic analysis tool to probe an application can yield significant insights. Furthermore, this approach can effectively exercise the entire application stack by running requests through the server and all relevant gems.
00:19:15.480
The fourth category is manual testing, which can be costly in terms of time, money, and expertise. However, human testers can do things that automated tools can’t.
00:19:43.690
A skilled tester can gain insights into an application's behavior, drawing on their development experience to make educated guesses. They may start with automated tools to identify potential vulnerabilities and then proceed with manual probing.
00:20:05.070
Manual testing is particularly critical if you're dealing with sensitive data. Organizations can deploy dedicated teams, implement bug bounty programs, or hire consultants for penetration testing.
00:20:27.900
Conducting manual security testing offers invaluable insights that automated tools may miss, especially in the context of sensitive data.
00:20:47.670
The concept of defense-in-depth frequently arises in security discussions and applies to testing strategies. None of these methods are mutually exclusive, and it’s advisable to combine them for maximum security.
00:21:23.540
The idea of defense-in-depth refers to layering security controls, providing multiple opportunities to prevent failure. For instance, imagine a web application firewall monitoring traffic for malicious patterns—a strong layer of defense. However, should that layer fail, we still ensure our application sanitizes input effectively.
00:22:05.320
And if an SQL injection vulnerability does exist in the app, we also want the database user the application connects with to follow the principle of least privilege, so that even a successful injection is limited in what it can do.
00:22:43.110
Defense-in-depth strategies should be applied to testing, as none of the aforementioned approaches are flawless. By layering them, we enhance our confidence in identifying as many vulnerabilities as possible.
00:23:00.760
Now I want to discuss what happens when an attack is in progress, particularly when applications potentially have unresolved issues. Incident response is a vast topic, but one key aspect involves recognizing that exploits take time.
00:23:33.940
It takes time for an attacker, whether they’re using an automated script or manual probing, to discover vulnerabilities and find successful exploit methods. Most security discussions focus on prevention strategies to lock down applications and secure them from malicious activities.
00:24:10.060
Just as critical, though, is the ability to detect attacks in a timely way. If we have existing vulnerabilities, early detection and swift action are essential, and there are simple, effective ways to strengthen our incident response, even in smaller applications.
00:24:51.260
For example, we can add a route at a simple, predictable path that the application doesn't actually use and that no legitimate user would ever visit. Ordinarily a request to it would just return a 404, but the fact that someone asked for it at all is a strong signal of probing, and watching for that pattern helps us spot activity worth worrying about.
00:25:27.150
My suggestion is to connect it to a controller I've dubbed 'Tripwire.' If an authenticated user triggers this route, I can send notifications to my team, alerting us that potentially malicious activity is in progress.
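A minimal sketch of that idea might look like this; the /admin path is just an example, and TripwireNotifier stands in for whatever alerting mechanism your team already uses.

```ruby
# config/routes.rb
# A predictable-looking path that the application doesn't actually use.
get "/admin", to: "tripwire#show"

# app/controllers/tripwire_controller.rb
class TripwireController < ApplicationController
  def show
    # Let the team know someone is poking around. TripwireNotifier is a
    # placeholder for your own mailer, Slack webhook, pager, etc.
    TripwireNotifier.alert(ip: request.remote_ip, user_id: current_user&.id)

    # The visitor still just sees a 404, as if the route never existed.
    head :not_found
  end
end
```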
00:26:05.620
Another common probing tactic is trying to figure out what technology stack an application runs on. For instance, suppose a client requests a WordPress path such as wp-login against our Rails app. No legitimate user would do that, so when it happens we can respond by blocking that IP address altogether.
00:26:42.780
Blocking IPs in application code isn't necessarily the most efficient place to manage them, but in smaller applications these measures can yield significant benefits.
00:27:11.640
To handle this efficiently, store the flagged IPs in the cache with a time limit and deny requests from them. The entries expire on their own, so you get the blocking without inundating your resources or maintaining a list by hand.
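A minimal sketch of that approach, assuming the blocking lives in the Rails app itself and using made-up helper names:

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  BLOCK_TTL = 24.hours

  before_action :reject_blocked_ips

  private

  # Flag an IP, e.g. from the tripwire controller or after a request
  # for a WordPress path on our Rails app.
  def block_ip!(ip)
    Rails.cache.write("blocked_ip/#{ip}", true, expires_in: BLOCK_TTL)
  end

  # Deny requests from flagged IPs; the cache entry expires on its own,
  # so there is no blocklist to clean up manually.
  def reject_blocked_ips
    head :forbidden if Rails.cache.exist?("blocked_ip/#{request.remote_ip}")
  end
end
```

In practice, middleware such as Rack::Attack packages this same cache-backed blocklisting and throttling, so you don't have to hand-roll it.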
00:27:45.070
For any incident response plan, the crucial thing is to limit the number of decisions made under pressure. The answers to questions such as how we'll block an attacker should be established before an incident arises.
00:28:18.230
While you may not encounter the exact situation you’ve anticipated, planning equips you with tools to adapt to unforeseen events in the real world. In speaking about preparedness, I want to emphasize that risks cannot be fully controlled.
00:29:06.040
There will always exist a certain level of risk when using software connected to the Internet. While it's important to engage in preventative measures, dealing with incidents effectively is equally crucial.
00:29:43.480
Resilience is a concept we can apply at multiple levels: to software, to individuals, and to organizations. We must ask ourselves how our software is designed to handle failures. In security terms, this may include how we store data, segregate networks, and layer controls.
00:30:12.790
On the human side, resilience refers to our responses when code fails. When we discover vulnerabilities in others' code, how do we process that? Do we play the blame game or do we strive to learn and prevent recurrence?
00:30:55.430
Impostor syndrome is widely discussed in the community; many developers fear that others possess more knowledge, leading to apprehension about being 'found out.' In security-related contexts, this fear may intensify, as security failures tend to attract moral judgments that other bugs do not.
00:31:33.580
We must remember that security is inherently challenging, primarily because defenders must be correct at all times while attackers only need to succeed once. During security discussions, we must consciously avoid framing our language in ways that imply systems can be flawless.
00:32:05.530
Understanding that there is no such thing as a secure application is essential. However, we must also engage in security discussions with those who may harbor naive views of how security functions in business.
00:32:51.920
The tools we've discussed can help reframe those conversations. When someone suggests the application should simply have been made foolproof, or that a particular weakness should obviously have been caught, we can point back to a documented threat model and to priorities that both technical and non-technical stakeholders helped set.
00:33:14.290
Documentation aids conversations about security expectations, and having security testing integrated into automated testing provides transparency. Planning and talking through how we'll respond to security incidents helps cultivate a culture of security.
00:33:28.640
With structure established, security conversations shift from blaming individuals to analyzing how to improve processes. Reflecting on our approaches leads to understanding whether our threat models and tests could be more robust.
00:33:38.189
I encourage you to explore the tools I've mentioned, including Brakeman, Bundler Audit, and Rack::Attack, which can help enhance your security strategy.
00:33:44.149
Additionally, there are helpful resources like the Rails security guide from the Rails documentation, the OWASP guide for secure coding practices, and the annual OWASP Top Ten list detailing recurring security issues.
00:35:07.020
The U.S. Computer Emergency Response Team (CERT) tracks vulnerabilities and publishes relevant feeds. These resources can enhance your understanding of security within the industry.
00:35:27.570
Thank you very much! Feel free to find me at the conference if you have any questions or want to discuss security further.