While file upload might seem straightforward for small applications handling minor uploads like images, its complexity increases dramatically in a file-heavy context where files are the primary data source. This post dives deep into how I engineered a resilient and user-friendly file upload feature for an application we're building for Lucidly x Axenso.
Before starting the project, given its heavy reliance on files of varying sizes, we agreed to implement a chunked file upload strategy, splitting large files and combining them on S3. While this initial solution worked, we quickly encountered significant user experience (UX) and workflow challenges. Users were effectively blocked from any other interaction until the file upload was entirely complete, severely hindering their experience.
After implementing a small Proof of Concept (POC) for the feature, a crucial question emerged regarding the user experience: how could we prevent the file upload process from blocking user interaction? If a user navigated away or minimized the upload, the process would terminate. This led us to identify several key issues that needed addressing:
Error Handling: What happens when a file upload encounters an error? How do we gracefully recover or notify the user?
Offloading from Main Thread: Users needed to navigate and use the application freely, even while files were uploading, without performance degradation.
Intuitive UX: How should the user interface behave to provide clear feedback and control during uploads?
File Tracking and Labeling: What is the most reliable way to track multiple files, especially duplicates, and label them effectively?
To address these pain points, we needed a robust way to track files on the user's device (persisting data even if the user closed the window or restarted their device) and offload the upload process from the main thread. After defining these critical requirements, I began redefining the entire process in Excalidraw, meticulously mapping out the user journey.
Since the primary upload UI was a modal, we opted to minimize it into a smaller component once the upload started. To offload the process, we utilized a worker pool (e.g., two workers) to handle concurrent file uploads, ensuring the main application thread remained responsive. For persistent state storage, I chose IndexedDB, with Dexie as the library to facilitate communication and synchronization between the database and the UI.
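To make the pool concrete, here is a minimal sketch of how a fixed-size pool might hand files to upload workers. The class and file names (`UploadWorkerPool`, `upload.worker.ts`), the message shapes, and the bundler-style worker import are illustrative assumptions rather than the production code; the worker script itself is sketched near the end of this post.

```typescript
// Minimal worker-pool sketch: a fixed number of workers pull queued files one
// at a time, so the main thread never runs the chunking/upload loop itself.
type QueuedFile = { id: string; file: File; collectionId: string };

export class UploadWorkerPool {
  private queue: QueuedFile[] = [];
  private idle: Worker[] = [];

  constructor(size = 2) {
    for (let i = 0; i < size; i++) {
      // Bundler-friendly (Vite/webpack) module-worker instantiation.
      this.idle.push(
        new Worker(new URL("./upload.worker.ts", import.meta.url), { type: "module" })
      );
    }
  }

  enqueue(item: QueuedFile) {
    this.queue.push(item);
    this.dispatch();
  }

  private dispatch() {
    while (this.idle.length > 0 && this.queue.length > 0) {
      const worker = this.idle.pop()!;
      const item = this.queue.shift()!;
      // Return the worker to the idle set once it reports success or failure.
      worker.onmessage = (e: MessageEvent) => {
        if (e.data?.type === "done" || e.data?.type === "error") {
          this.idle.push(worker);
          this.dispatch();
        }
      };
      // File objects are structured-cloneable, so they can cross the worker boundary.
      worker.postMessage({ type: "upload", payload: item });
    }
  }
}
```

Capping the pool at a couple of workers keeps memory and network contention bounded while still giving users parallel uploads.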
The core file chunking and upload process unfolds as follows (a backend-side sketch follows this list):
I initiate a file upload request to the backend. This signals to the backend that a new file upload is starting and links it to a specific collection.
The backend then generates a unique key. This key is sent to S3 to generate an upload ID, which effectively groups all subsequent chunks.
Each chunk receives its own pre-signed URL from the backend. Upon successful upload of a chunk, S3 returns an ETag, which is crucial for combining the file later. Thus, the complete file is effectively represented by the list of part indices and their corresponding ETags.
Once all chunks are successfully uploaded, I notify the backend that the file upload is complete. The backend then proceeds to merge the file on S3 using the collected ETags.
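The post focuses on the frontend, so the backend below is only a rough sketch of the three responsibilities described above, assuming a Node backend using the AWS SDK v3; the bucket name, key format, and function names are illustrative.

```typescript
// Sketch of the three backend steps: initiate, pre-sign a part, complete.
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { randomUUID } from "crypto";

const s3 = new S3Client({ region: "eu-west-1" }); // region and bucket are illustrative
const BUCKET = "uploads-bucket";

// 1. Initiate: generate a unique key and ask S3 for an upload ID that groups the parts.
export async function initiateUpload(collectionId: string, fileName: string) {
  const key = `${collectionId}/${randomUUID()}-${fileName}`;
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({ Bucket: BUCKET, Key: key })
  );
  return { key, uploadId: UploadId };
}

// 2. Pre-sign a URL for a single chunk (S3 part numbers are 1-based).
export async function presignPart(key: string, uploadId: string, partNumber: number) {
  return getSignedUrl(
    s3,
    new UploadPartCommand({ Bucket: BUCKET, Key: key, UploadId: uploadId, PartNumber: partNumber }),
    { expiresIn: 3600 }
  );
}

// 3. Complete: merge the parts on S3 using the collected ETags.
export async function completeUpload(
  key: string,
  uploadId: string,
  parts: { ETag: string; PartNumber: number }[]
) {
  await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket: BUCKET,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: { Parts: parts },
    })
  );
}
```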
Each file upload involves a complex set of steps. Handling multiple concurrent uploads on the main thread would inevitably slow down the entire application and lead to crashes, completely negating the goal of allowing users to freely interact with the app.
I began by defining the IndexedDB schema to store critical file upload information. Initially, I considered using incremental numbers as IDs, which seemed straightforward. However, it didn't take long to identify a crucial problem: how do we track duplicate files? Simply using the filename wouldn't work, as a user might upload the same file to different collections, or have multiple versions of the same file name (e.g., "cv.pdf").
This led me to a more robust solution: hashing. By taking the first megabyte of a file and generating a hash, combined with the collection name and file name, we could create a unique identifier for each distinct file upload, regardless of its name or location. This approach gave us practically guaranteed uniqueness and reliable tracking.
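As a sketch of both the schema and the identifier, here is roughly how the Dexie table and the hash-based ID could look. The table name, fields, and the choice of SHA-256 over the first megabyte (via the Web Crypto API) are assumptions for illustration.

```typescript
import Dexie, { type Table } from "dexie";

// Shape of a tracked upload; field names are illustrative.
export interface UploadRecord {
  id: string;          // hash-based identifier (see below)
  collectionId: string;
  fileName: string;
  size: number;
  status: "pending" | "uploading" | "failed" | "done";
  progress: number;    // 0..1, based on uploaded parts
  uploadId?: string;   // S3 multipart upload ID, once initiated
  key?: string;        // S3 object key, once initiated
}

export class UploadDB extends Dexie {
  files!: Table<UploadRecord, string>;
  constructor() {
    super("uploads");
    // Primary key is the hash-based id; status is indexed for quick recovery scans.
    this.version(1).stores({ files: "id, status, collectionId" });
  }
}
export const db = new UploadDB();

const ONE_MB = 1024 * 1024;

// Hash the first megabyte and combine it with the collection and file name,
// so the same file uploaded to different collections gets a distinct ID.
export async function fileUploadId(file: File, collectionId: string): Promise<string> {
  const head = await file.slice(0, ONE_MB).arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-256", head);
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return `${collectionId}:${file.name}:${hex}`;
}
```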
The second challenge was handling page reloads or window closures. This proved to be a simpler task: I implemented a mechanism to quickly check all saved files in IndexedDB upon load and automatically mark any files with an 'uploading' status as 'failed'. This ensured data consistency and prevented orphaned uploads, providing a reliable recovery mechanism.
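With Dexie, that startup check stays small; something along these lines, using the illustrative `db` instance from the sketch above:

```typescript
// On app load, mark any uploads interrupted mid-flight as failed so the UI
// can offer to resume or restart them.
import { db } from "./db"; // the illustrative Dexie instance sketched above

export async function failInterruptedUploads() {
  await db.files.where("status").equals("uploading").modify({ status: "failed", progress: 0 });
}
```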
Synchronizing IndexedDB with the UI was remarkably straightforward, largely thanks to Dexie's intuitive API, which abstracted away much of the complexity of IndexedDB interactions.
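For reference, this synchronization can be done with Dexie's `liveQuery`, which re-runs a query and notifies subscribers whenever the underlying tables change; the query and the UI callback below are illustrative.

```typescript
import { liveQuery } from "dexie";
import { db, type UploadRecord } from "./db"; // the illustrative Dexie instance sketched above

// Hypothetical UI hook that re-renders the minimized upload panel.
declare function renderMinimizedUploadPanel(uploads: UploadRecord[]): void;

// Observe every upload that still needs attention.
const activeUploads$ = liveQuery(() =>
  db.files.where("status").anyOf("pending", "uploading", "failed").toArray()
);

// The subscription fires whenever a worker writes progress or status changes
// to IndexedDB, keeping the UI in sync without manual polling.
// Call subscription.unsubscribe() when the panel unmounts.
const subscription = activeUploads$.subscribe({
  next: (uploads) => renderMinimizedUploadPanel(uploads),
  error: (err) => console.error("upload live query failed", err),
});
```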
Initially, the UI focused on core functionality. The UI/UX team provided the foundational design, which I expanded upon to implement Quality-of-Life (QoL) features that enhanced the user experience, drawing inspiration from Google Drive's familiar file management interface.
The culmination of these solutions resulted in a seamless and robust file upload experience:
Upon site load, the application checks IndexedDB for any errored or pending files. If found, the minimized upload modal is displayed, allowing users to resume or restart their uploads.
When a user initiates a file upload, a worker pool instance is activated, and the process unfolds as follows (a condensed worker-side sketch follows these steps):
The worker pool calls Dexie to retrieve the files the user selected via the file upload dialogue.
Each worker in the pool is assigned a file to process.
When a file begins processing, its status changes to 'uploading', and its progress is reset to 0 (useful for resuming errored uploads).
First, the worker retrieves file metadata and sends it to the 'initiate upload' API.
The backend creates a unique key for the file.
This key is then sent to S3 to generate an upload ID, which groups the chunks.
The backend responds to the worker with the upload_id and key.
Following this, the loop for chunking and uploading begins:
The worker calls the backend API with a payload containing the upload_id, key, and part index to generate a pre-signed URL for that specific chunk.
The chunk is then uploaded via the pre-signed URL. Upon completion, S3 returns an ETag in the headers; this tag, combined with the part's index, is crucial for later grouping and combining all parts into a single file.
Progress is calculated based on the count of successfully uploaded parts, with the UI updating after each successful chunk upload.
This process repeats until all parts are uploaded.
After all parts are uploaded, the client calls the 'complete upload' API with the array of [ETag, index] pairs and the upload_id. The backend then signals S3 to begin the final combination process using this data.
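Putting the worker side together, here is a condensed sketch of that loop. The endpoint paths, chunk size, and message shapes are assumptions, and error handling is reduced to marking the record as failed; the `db` instance is the illustrative Dexie setup sketched earlier.

```typescript
// upload.worker.ts (illustrative module worker): chunk the file, upload each
// part through a pre-signed URL, mirror progress into Dexie, then complete.
import { db } from "./db";

const CHUNK_SIZE = 8 * 1024 * 1024; // assumed chunk size

self.onmessage = async (e: MessageEvent) => {
  const { id, file, collectionId } = e.data.payload as {
    id: string; file: File; collectionId: string;
  };
  try {
    await db.files.update(id, { status: "uploading", progress: 0 });

    // 1. Initiate: backend creates the key and asks S3 for an upload ID.
    const initRes = await fetch("/api/uploads/initiate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ collectionId, fileName: file.name, size: file.size }),
    });
    const { uploadId, key } = await initRes.json();

    // 2. Chunk loop: pre-sign, PUT the chunk, collect the ETag per part.
    const totalParts = Math.ceil(file.size / CHUNK_SIZE);
    const parts: { ETag: string; PartNumber: number }[] = [];
    for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
      const presignRes = await fetch("/api/uploads/presign", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ uploadId, key, partNumber }),
      });
      const { url } = await presignRes.json();

      const chunk = file.slice((partNumber - 1) * CHUNK_SIZE, partNumber * CHUNK_SIZE);
      const putRes = await fetch(url, { method: "PUT", body: chunk });
      // The bucket's CORS config must expose the ETag header for this read to work.
      parts.push({ ETag: putRes.headers.get("ETag") ?? "", PartNumber: partNumber });

      // Progress = uploaded parts / total parts, mirrored into IndexedDB for the UI.
      await db.files.update(id, { progress: partNumber / totalParts });
    }

    // 3. Complete: backend tells S3 to merge the parts using the ETags.
    await fetch("/api/uploads/complete", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ uploadId, key, parts }),
    });

    await db.files.update(id, { status: "done", progress: 1 });
    self.postMessage({ type: "done", id });
  } catch {
    await db.files.update(id, { status: "failed" });
    self.postMessage({ type: "error", id });
  }
};
```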
Building this advanced file upload feature was a significant technical undertaking that presented complex challenges beyond typical file handling. By implementing a multi-threaded, chunked upload system with robust local storage and intelligent duplicate detection, I successfully delivered a solution that not only ensures data integrity and scalability but also provides a fluid and uninterrupted user experience. This project significantly enhanced my understanding of asynchronous processing, data persistence in the browser, and designing resilient systems that perform under high-demand conditions. I'm proud of the stability and responsiveness this feature brings to the application, directly contributing to a superior user workflow.