
Handling File Uploads For A Modern Developer

by Janko Marohnić

The video 'Handling File Uploads For A Modern Developer' by Janko Marohnić at the wroc_love.rb 2019 event focuses on best practices for file uploads in web applications, particularly within the Ruby ecosystem. It reflects on various libraries for file attachment, highlighting Marohnić's experiences with Paperclip, CarrierWave, and Shrine, the latter being created to address specific needs for flexibility and modularity in file handling. Key points discussed include:

  • File Uploading Challenges: The speaker emphasizes the necessity and commonality of handling file uploads, citing issues like maintaining logic in ActiveRecord models and the limitations of specific gems.
  • Shrine Library: Marohnić discusses his motivations for developing Shrine, which aims for broad usability across Ruby frameworks and avoids coupling to a specific ORM to keep the design flexible.
  • Validation Practices: Strong server-side validation is necessary, covering file size, content-type matching, and custom metadata extraction, also highlighting methodologies to prevent deceptive uploads through techniques like analyzing the file's magic bytes.
  • Image Processing: Marohnić introduces a separate gem for image processing that integrates alternative backends like libvips, emphasizing the importance of structured and clear image processing workflows.
  • User Experience Improvements: The transition from synchronous to asynchronous uploads is outlined, with Uppy proposed as a solution for enhancing user interactions during file uploads.
  • Cloud Integration and Direct Uploads: He advises using cloud services for uploads, explaining the process of fetching parameters from the server and handling uploads directly to services like S3, enhancing both performance and user experience.
  • Resumable Uploads: Addressing potential interruptions in uploads, he introduces chunked uploads and the TUS protocol to further improve user experience, particularly with larger files.

In conclusion, Marohnić underscores the importance of robust validation, modular processing, and better user interactions for managing file uploads effectively, positioning direct and resumable uploads and performance optimization as critical components of modern web development.

00:00:14.590 Hello everyone! Before I start, I want to ask, how many of you have ever worked on a web application that needed to handle file uploads? Okay, most of you. I think this is a really common requirement in web applications, but I feel like it's not talked about enough. Today, I want to share with you some of the best practices that I've learned over the past few years in this field.
00:00:48.850 A bit about me: my name is Janko Marohnić, I’m from Croatia, and I’m a Ruby on Rails developer as well as the creator of the Shrine file attachment library. Most of you have probably used one of the libraries depicted on the screen, and I think the Ruby ecosystem is really nice because it has many options with features that I haven't been able to find in other languages.
00:01:04.449 I personally started my journey with Paperclip, but soon I got a bit frustrated with having to keep all of the file-uploading logic in my ActiveRecord models. So, I switched to CarrierWave, which let me move the logic to external classes. However, after using it for some time, I realized that some essential features, like direct uploads and background processing, were not really built into the gem. These features are provided by external gems, but they don’t work well with each other.
00:01:30.540 Around that time, the original author of CarrierWave released a new library, Refile, which drew my attention because of its simplicity. I liked how it solved some of the complexities that CarrierWave had. I soon became a core maintainer of Refile, but eventually I felt it was too opinionated; I wanted something that would work for everyone, not just a specific use case, and I didn't want to sacrifice certain features. So I effectively forked the library and created Shrine.
00:02:22.240 Almost three years after Shrine was released, Rails 5.2 came out, featuring Active Storage. One of the philosophies I had when building Shrine was that I wanted it to work for any Ruby application. I enjoy working with various Ruby web frameworks and want to focus my energy on tools that everyone can use, not just Rails developers. One of the key ways to achieve that is building on Rack instead of Rails, which makes the library usable across all Ruby web frameworks.
00:03:36.110 Another important aspect was that I didn't want to couple the implementation to any specific ORM, since avoiding that coupling leads to better design. File uploads should have a thin integration with the persistence layer and should be usable in various contexts. I also wanted a modular solution, allowing developers to pick and choose the features they want and modify behavior accordingly.
00:04:28.760 I aimed to have multiple levels of abstraction: if something doesn't work for me in the higher-level APIs, I should be able to drop down to the library's lower-level APIs and build a flow that suits my needs. Configuration is also crucial: if users cannot adjust behavior to their needs, for example reducing the number of concurrent file uploads, performance can suffer greatly from unnecessary HTTP requests and the like.
00:05:14.479 Next, I want to talk about metadata validation. File uploads should be validated not only on the client side but definitely on the server side. For example, you could validate that the uploaded file is not larger than 10 megabytes and that it’s a JPEG or PNG image. The delivery method you use should support these validations.
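[For illustration, a minimal sketch of these validations using Shrine's validation_helpers plugin; the 10 MB limit and the allowed types are example values.]

    require "shrine"

    class ImageUploader < Shrine
      plugin :validation_helpers

      Attacher.validate do
        validate_max_size 10 * 1024 * 1024            # reject files over 10 MB
        validate_mime_type %w[image/jpeg image/png]   # allow only JPEG and PNG
      end
    end
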
00:05:41.300 There is a specific caveat when uploading a file through a Ruby app: the Content-Type header received doesn’t necessarily match the MIME type of the file. This is because the browser determines this value based on the file extension. Someone can upload a malicious file by changing the extension to something your application considers valid. To prevent this, you need to validate the MIME type by analyzing the file's content instead.
00:06:09.169 Each file type has something called magic bytes, which is a specific byte sequence at the beginning of the file that uniquely determines its type. The most popular UNIX tool for doing this is the 'file' command, but there are also Ruby gems that can perform a similar function. In Shrine, you can enable determining MIME type from the file content by loading a plug-in and selecting the desired analyzer.
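[A sketch of enabling content-based MIME type detection in Shrine; the :file analyzer shells out to the UNIX file command mentioned above.]

    class ImageUploader < Shrine
      # Determine the MIME type from the file's magic bytes instead of
      # trusting the Content-Type header sent by the browser.
      plugin :determine_mime_type, analyzer: :file
    end
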
00:06:53.659 Another caveat relates to file size validation. Simply checking that a file is under a certain size is not enough: it's possible to craft a file that is small on disk but has enormous pixel dimensions, which can crash your image processing tool. Therefore, we should also validate image dimensions after validating the file size.
00:07:20.350 For example, there's a phenomenon known as 'image bombing' where someone creates a very large image with small file size, which crashes processing tools. We need to make sure that we validate not only file sizes but also dimensions. The validate block allows you to use regular conditionals; for example, if the MIME type is an image, you can proceed to validate its dimensions.
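[Extending the earlier validation sketch with dimension checks; the store_dimensions plugin extracts width and height into metadata, and the 5000×5000 limit is only an example.]

    class ImageUploader < Shrine
      plugin :validation_helpers
      plugin :store_dimensions     # extracts width/height into the file's metadata

      Attacher.validate do
        validate_max_size 10 * 1024 * 1024
        # Only analyze dimensions when the type validation passed, so that
        # image analysis isn't run on files that aren't images at all.
        if validate_mime_type %w[image/jpeg image/png]
          validate_max_dimensions [5000, 5000]
        end
      end
    end
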
00:08:40.340 Additionally, we can extract and validate any custom metadata without needing external extensions that hook into the library’s internals. For example, we can extract the duration of a video, and ideally, we should persist all the extracted metadata in the database for later use, such as displaying it in views.
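[A sketch of custom metadata extraction with Shrine's add_metadata plugin; using the streamio-ffmpeg gem for the duration is a hypothetical choice.]

    require "streamio-ffmpeg"

    class VideoUploader < Shrine
      plugin :add_metadata

      # The extracted duration is stored alongside the built-in metadata
      # (size, filename, MIME type) and persisted in the database column.
      add_metadata :duration do |io|
        Shrine.with_file(io) { |file| FFMPEG::Movie.new(file.path).duration }
      end
    end
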
00:09:02.290 In summary, we should validate the uploaded files for common extensions, common metadata, or any custom metadata specific to the file type. This extracted metadata should be persisted to the database. Now that we've successfully validated the file, we usually want to process it to normalize it into a format that our application understands.
00:09:38.949 Most file attachment libraries come with their own macros for image processing. Calling ImageMagick directly can be inconvenient because you usually want something more structured. To avoid baking yet another homegrown solution into Shrine, I created a separate gem, image_processing, which takes a functional approach: you provide it with a source image, and it returns the processed file as output.
00:10:10.790 Among the functionalities, we ensure that the processing steps include not only resizing but also specific extras such as rotating the image if needed. Also, after resizing, images may lose clarity; thus, the gem applies additional sharpening to enhance that aspect. It's nice to have a tool that takes care of these details for you.
00:11:05.199 There's an alternative to ImageMagick known as libvips, which is often much faster and full-featured. The image processing gem integrates with libvips as an alternative backend to ImageMagick, sharing as much of the same interface as possible. In benchmarking generating a 500x500 thumbnail, the performance difference can be astounding, sometimes being three to five times faster, depending on the image size.
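[A sketch of the image_processing gem's chainable interface; the same pipeline runs on either the ImageMagick or the libvips backend, and "photo.jpg" stands in for the source image.]

    require "image_processing/mini_magick"
    require "image_processing/vips"

    # ImageMagick backend (via the mini_magick gem)
    thumbnail = ImageProcessing::MiniMagick
      .source("photo.jpg")
      .resize_to_limit(500, 500)
      .call                         # returns a Tempfile with the processed image

    # The same pipeline on the libvips backend, typically several times faster
    thumbnail = ImageProcessing::Vips
      .source("photo.jpg")
      .resize_to_limit(500, 500)
      .call
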
00:12:01.860 Hooking up image processing is straightforward since we do most processing on-the-fly. When a file is uploaded, we generate a URL, and when that URL is requested, the file is processed and typically cached into a CDN. Active Storage and other gems encode processing steps into the URL, which is convenient as it eliminates the need for additional configuration.
00:12:44.139 However, I don't prefer this approach because it causes the URL to grow with processing logic, making it less flexible. Instead, Shrine provides a way to define custom processing blocks in Ruby that define how the file is processed. When the URL is hit, Shrine finds the corresponding processing block that you defined.
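[A sketch of on-the-fly processing with Shrine's derivation_endpoint plugin, as documented in current Shrine versions; the endpoint path and derivation name are example values.]

    class ImageUploader < Shrine
      plugin :derivation_endpoint, secret_key: ENV["SHRINE_SECRET_KEY"]

      # Named processing block, looked up by name when its URL is requested.
      derivation :thumbnail do |file, width, height|
        ImageProcessing::MiniMagick
          .source(file)
          .resize_to_limit!(width.to_i, height.to_i)
      end
    end

    # The endpoint is mounted in the app's routes, e.g. in Rails:
    #   mount ImageUploader.derivation_endpoint => "derivations/image"
    # and a signed URL for a specific derivation is generated with:
    #   photo.image.derivation_url(:thumbnail, 600, 400)
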
00:13:29.140 The alternative to on-the-fly processing is processing on upload, which is necessary for larger files such as videos that cannot be processed on-the-fly due to resource constraints. The approach is similar: you define a Ruby block, and the processed files are collected and uploaded to storage. Unlike solutions that re-derive file locations from configuration, Shrine stores the location and metadata of each processed file in the database.
00:14:18.670 This method ensures that if you later decide to change how you generate the upload location, it doesn't invalidate existing URLs because the upload location is stored in the database. This flexibility allows developers to perform processing inside any other type of processing without needing to define external extensions.
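[A sketch of processing on upload; in current Shrine versions this is the derivatives plugin, where the block returns a hash of processed files and each one is uploaded and recorded in the database separately.]

    class ImageUploader < Shrine
      plugin :derivatives

      Attacher.derivatives do |original|
        magick = ImageProcessing::MiniMagick.source(original)

        {
          small: magick.resize_to_limit!(300, 300),
          large: magick.resize_to_limit!(800, 800),
        }
      end
    end

    # attacher.create_derivatives   # typically triggered from a background job
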
00:15:07.480 To recap, processing can occur either on upload or on-the-fly. On-upload processing is triggered when the file is attached, while on-the-fly processing happens when the URL of a processed version is requested. It's advisable to use the image_processing gem for efficient resizing; other libraries such as Active Storage now use it as well.
00:15:23.370 Now let’s focus on improving user experience during file uploads. We want to transition from synchronous uploads—where the user is unaware of how long an operation will take—to a more asynchronous experience. This way, users can edit other fields while waiting for file uploads.
00:15:59.350 Some file attachment libraries like Active Storage provide their own JavaScript that hooks everything up automatically, but I’m not keen on this approach in Shrine due to the constant need for customization and maintaining compatibility with various browsers. Instead, the JavaScript ecosystem has made significant strides in solving the file upload problem.
00:16:51.338 The solution I recommend is called Uppy, a modern JavaScript solution for file uploads that integrates nicely with the existing Shrine components. One of its cool features is built-in UI components that allow users to start with a simple file input and progress bar. It provides substantial ready-to-use functionality and contributes to a smoother user experience.
00:18:02.760 Uppy offers many modular components, from simple file inputs to drag-and-drop fields, enhancing status feedback and even full dashboards that integrate various UI components for an excellent user journey. This adaptability allows you to choose only the components you need, making it a great tool for developers.
00:19:27.049 Next, we need to discuss how to define where to upload the files. The simplest solution is to provide a custom endpoint for uploading files, which then forwards that file to your application's storage and returns JSON data representing the file.
00:20:15.570 When we submit the form, we only send this JSON data which makes the submission instantaneous. By loading the corresponding Uppy plugin, you can point it to the desired URL, while Shrine provides a complete endpoint to do the uploading and return the response.
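[A sketch of such an endpoint using Shrine's upload_endpoint plugin; "/images/upload" is an example path, and Uppy's XHRUpload plugin would be pointed at it.]

    class ImageUploader < Shrine
      plugin :upload_endpoint
    end

    # In config/routes.rb (assuming a Rails app): the endpoint accepts the
    # multipart upload, stores it to temporary (:cache) storage, and responds
    # with JSON data describing the uploaded file.
    Rails.application.routes.draw do
      mount ImageUploader.upload_endpoint(:cache) => "/images/upload"
    end
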
00:20:51.570 This upload process is simpler, but your server still has to receive the actual file, which consumes resources. Ideally, we want users to upload files directly to a cloud service like S3. In that flow, the client fetches upload parameters from the server, which generates them from your AWS credentials, and then uses those parameters to upload the file straight to S3.
00:22:13.770 On submitting the form, only the JSON data is sent, just as with our earlier upload method. Uppy already understands this flow and performs the necessary Ajax calls to facilitate uploading, letting you simply tell it where to look for the parameters.
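[A sketch of the server side of direct-to-S3 uploads with Shrine's S3 storage and presign_endpoint plugin; the bucket, region and "/s3/params" path are example values, and Uppy's AwsS3 plugin would fetch its upload parameters from that path.]

    require "shrine"
    require "shrine/storage/s3"

    s3_options = {
      bucket:            "my-bucket",      # example bucket and region
      region:            "eu-west-1",
      access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
      secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
    }

    Shrine.storages = {
      cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),  # temporary
      store: Shrine::Storage::S3.new(**s3_options),                   # permanent
    }

    Shrine.plugin :presign_endpoint

    # In config/routes.rb:
    #   mount Shrine.presign_endpoint(:cache) => "/s3/params"
    # The client asks this endpoint for presigned parameters and uploads the
    # file straight to S3, so only the resulting JSON data hits your app.
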
00:23:08.650 For directly uploading to a cloud service, you can use either a simple endpoint on your app or directly on a service like S3. Direct uploads generally provide improved UX and performance. I recommend using Uppy, regardless of whether you’re working with Shrine or another tool, as it includes built-in UI components and straightforward direct upload support.
00:23:52.520 If you need to upload large files, you can further enhance user experience by implementing resumable uploads. The issue with standard uploads is that they occur in a single HTTP request, meaning if a connection is interrupted during upload, the upload has to restart entirely. This can frustrate users, particularly those on unreliable connections.
00:24:52.210 The solution is to split the upload into chunks, with each chunk uploaded individually. If one chunk fails, it can be retried, and multiple chunks can even be uploaded in parallel, improving total upload speed depending on connection quality. To achieve this, you can use S3's multipart upload feature, which requires a few endpoints in your application but still uploads the chunks directly to the cloud service.
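[A rough sketch assuming the uppy-s3_multipart gem, which wires Uppy's AwsS3Multipart plugin to S3's multipart upload API through a few endpoints in your application.]

    # Gemfile: gem "uppy-s3_multipart"

    Shrine.plugin :uppy_s3_multipart

    # In config/routes.rb:
    #   mount Shrine.uppy_s3_multipart(:cache) => "/s3/multipart"
    # Uppy's AwsS3Multipart plugin then calls these endpoints to create the
    # multipart upload, sign each chunk, and complete or abort it, while the
    # chunks themselves are uploaded directly to S3.
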
00:25:56.520 The implementation is somewhat tied to the storage in use; for example, if you want to switch to Google Cloud Storage, it may look quite different. There is also a generic, open HTTP protocol for resumable uploads called TUS. The TUS protocol is essentially a set of headers and URLs that let the client and server negotiate resumable uploads.
00:27:21.360 The TUS server accepts the uploads and talks to the appropriate storage service API on your behalf. There's a gem that integrates TUS with Shrine, so attaching files uploaded this way works the same as with any other storage.
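[A rough sketch of the TUS setup with the tus-server and shrine-tus gems; the "/files" mount path and the S3 permanent storage are example choices.]

    # Gemfile: gem "tus-server", gem "shrine-tus"
    require "tus/server"
    require "shrine/storage/tus"
    require "shrine/storage/s3"

    # The tus server speaks the resumable upload protocol; mount it in
    # config/routes.rb with:  mount Tus::Server => "/files"

    # Using the tus storage as Shrine's temporary storage lets files uploaded
    # through the tus server be attached and promoted like any other upload.
    Shrine.storages = {
      cache: Shrine::Storage::Tus.new,
      store: Shrine::Storage::S3.new(
        bucket:            "my-bucket",
        region:            "eu-west-1",
        access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
        secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
      ),
    }
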
00:28:09.190 In action, resumable uploads work similarly to standard uploads, adding a pause button to the UI. If you pause an upload, the server saves some data, allowing for it to be resumed later. This functionality keeps track of file status in local storage, promoting a smooth user experience even if an entire upload is interrupted.
00:29:57.170 In summary, resumable uploads significantly enhance the user experience, particularly with larger files. One approach is to utilize the S3 multipart upload API, which directly connects to cloud storage.
00:30:03.230 Alternatively, the TUS protocol provides flexibility with numerous implementations across different languages.
00:30:35.500 [Here are some useful links for the topics discussed today.] That’s it for me.
00:30:57.330 [Audience Member] Thank you for the presentation. I'd like to ask about the on-the-fly processing part because it seems effective as it doesn’t burden the server upfront, but it seems prone to DDoS attacks since an attacker could request multiple versions of the same uploaded file. How can we defend against this?
00:31:32.920 [Janko] Great question. A common design feature in on-the-fly processing is to sign the URL with a secret known only to the server. This means that only the server can create a valid URL, which is typically stored in your CDN. Since attackers cannot create valid signatures for the URLs, it protects against DDoS attacks.
00:31:58.680 [Audience Member] There's an issue with asynchronous uploads; a user could submit a large file but neglect to fill in any metadata afterward. This makes the upload rather useless. Would there be a way to have the image expire after a few minutes?
00:32:40.480 [Janko] Yes, Shrine actually has a mechanism where you can use temporary storage for uploads. When generating the upload URLs, you can specify a temporary storage directory that expires old files, separating them from those that have been attached to records. This helps prevent orphan files.
00:33:49.030 [Audience Member] For testing purposes, can we use local storage instead of S3?
00:34:08.530 [Janko] Yes, there’s a great tool called Minio that simulates the S3 API locally. Point the SDK to your Minio server, and it will store files locally while still communicating through the S3 API in tests.
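[A sketch of pointing Shrine's S3 storage at a local Minio server in tests; the endpoint, credentials and bucket name are example values for a default local Minio setup.]

    require "shrine/storage/s3"

    minio_options = {
      bucket:            "test-bucket",
      region:            "us-east-1",
      endpoint:          "http://localhost:9000",   # local Minio server
      access_key_id:     "minioadmin",
      secret_access_key: "minioadmin",
      force_path_style:  true,                      # Minio uses path-style URLs
    }

    Shrine.storages = {
      cache: Shrine::Storage::S3.new(prefix: "cache", **minio_options),
      store: Shrine::Storage::S3.new(**minio_options),
    }
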
00:34:54.860 [Audience Member] What about using a private VPS instead of S3? Is that feasible?
00:35:22.640 [Janko] I'm not sure about specific tools for that approach, but I believe some solutions might route the S3 API to another service. It could be worth researching further.
00:35:56.720 [Audience Member] ImageMagick has known security vulnerabilities. Have you considered adding ImageFlow to the image processing gem?
00:36:12.970 [Janko] I would love to add support for ImageFlow! Currently, there are no Ruby bindings, but if someone were to create them, I would definitely add support to the image processing gem.
00:36:43.540 Thank you!