Whatever your industry, this is sure to be something you’ve come across before. You need to share a file with an external business – it might be a client, a partner or an agency. In the old days, we’d just fire it across as an email attachment. But as businesses work together more closely, large volumes of confidential information are being shared more frequently.
It’s something that comes up for us pretty regularly. The files that we handle for our clients are sensitive. Customer details, employee records: the kind of information that needs to be protected on the way to its new destination. Plus, files like these are much too big to send across on email. We’re always on the lookout for AWS technologies that can make our (and your) lives easier. Here’s our approach to secure file sharing and storage with AWS S3 and KMS.
For one particular project, we were looking at transferring database dump files of around 4TB. We needed to be particularly mindful of security, because of the industry-related regulations that our client had to follow.
AWS S3. This was a simple one for us. Amazon’s object storage service with an easy-to-use interface, S3 lets you store as much data as you like and retrieve it whenever you need it. It’s durable, scalable and low cost, and it’s simple to move large volumes of data in and out again. Keeping an eye on security, we used in-transit encryption (with built-in TLS) and at-rest encryption (with AWS KMS).
How to do it
Our approach was a multipart upload to AWS S3 with server-side encryption to protect data at rest. We used the s3api commands in the AWS CLI for direct access to the S3 APIs, which gave us more granular control over the requests to S3. To help you out with simple and secure file sharing, we’ve put together a straightforward guide to the upload.
STEP 1: CREATE AN S3 BUCKET
a. Create the bucket
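As a sketch of this step from the CLI (the bucket name and region below are placeholders – substitute your own):

```shell
# Create the bucket; outside us-east-1 the LocationConstraint is required
aws s3api create-bucket \
  --bucket my-transfer-bucket \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1
```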
b. Add a bucket policy
The bucket policy ensures that only encrypted uploads are accepted.
Here’s a look at an example bucket policy:
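A policy along these lines does the job: it denies any PutObject request that doesn’t ask for SSE-KMS encryption. The bucket name is a placeholder.

```shell
# Write the example bucket policy to a file
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-transfer-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
EOF

# Attach the policy to the bucket
aws s3api put-bucket-policy \
  --bucket my-transfer-bucket \
  --policy file://bucket-policy.json
```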
c. Confirm the policy
Using the console, check that the bucket policy has been added successfully.
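If you’d rather stay in the CLI, the same check can be done there:

```shell
# Print the policy currently attached to the bucket
aws s3api get-bucket-policy --bucket my-transfer-bucket --output text
```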
STEP 2: CREATE A KEY
a. Create a key (managed by AWS KMS) and configure it
Here’s a look at an example key policy:
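Something like the following works: the policy is written to a file and passed to create-key. The account ID (111122223333) and the uploader user are placeholders – adapt them to whoever needs to encrypt and decrypt.

```shell
# Write an example key policy to a file
cat > key-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Id": "file-transfer-key-policy",
  "Statement": [
    {
      "Sid": "AllowAccountAdministration",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "AllowUseOfTheKeyForS3",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/uploader" },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the KMS key with the policy attached
aws kms create-key \
  --description "File transfer key" \
  --policy file://key-policy.json
```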
b. Enable key rotation
AWS can now manage the rotation of the keys. You can also give the key an alias for quick identification.
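Both can be done from the CLI – the key ID and alias name below are placeholders; use the KeyId returned by create-key:

```shell
# Turn on automatic key rotation, managed by AWS KMS
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab

# Give the key an easy-to-spot alias
aws kms create-alias \
  --alias-name alias/file-transfer-key \
  --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```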
c. Check the key has been created
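A quick way to confirm, assuming the alias from the previous step:

```shell
# Describe the key via its alias; check KeyState is "Enabled"
aws kms describe-key --key-id alias/file-transfer-key
```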
STEP 3: PREPARE THE FILE
a. Check the size
Record the size and md5 checksum of the original file (this one’s a large dummy file of 1.79 GB)
b. Split the file into two parts (here, xaa and xab)
c. Check the size of the two files
d. Check the md5 sum of the two files
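The whole of step 3 can be reproduced with standard tools. We’re using a small dummy file here rather than the 1.79 GB one; note that S3 multipart parts must be at least 5 MB, except the last.

```shell
# Create a dummy file to stand in for the database dump
dd if=/dev/urandom of=dump.bin bs=1M count=10

# a. Record the size and md5 checksum of the original file
ls -l dump.bin
md5sum dump.bin

# b. Split the file into two parts; split names them xaa and xab by default
split -b 5M dump.bin

# c/d. Check the size and md5 sum of the two parts
ls -l xaa xab
md5sum xaa xab
```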
STEP 4: S3 MULTIPART UPLOAD
a. Initiate the upload
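A sketch of the initiation, requesting SSE-KMS with the key from step 2 (bucket, object key and alias are placeholders):

```shell
# Initiate the multipart upload with server-side encryption via our KMS key
aws s3api create-multipart-upload \
  --bucket my-transfer-bucket \
  --key dump.bin \
  --server-side-encryption aws:kms \
  --ssekms-key-id alias/file-transfer-key
# Note the UploadId in the response: every later call needs it
```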
b. Upload each of the file parts separately
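One upload-part call per part, using the two files from step 3. Replace the placeholder upload ID with the one returned by create-multipart-upload, and note the ETag each call returns – you’ll need them to complete the upload.

```shell
# Upload part 1
aws s3api upload-part \
  --bucket my-transfer-bucket \
  --key dump.bin \
  --part-number 1 \
  --body xaa \
  --upload-id "EXAMPLE_UPLOAD_ID"

# Upload part 2
aws s3api upload-part \
  --bucket my-transfer-bucket \
  --key dump.bin \
  --part-number 2 \
  --body xab \
  --upload-id "EXAMPLE_UPLOAD_ID"
```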
c. Confirm both parts are uploaded
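list-parts shows what S3 has received so far, with the size and ETag of each part:

```shell
aws s3api list-parts \
  --bucket my-transfer-bucket \
  --key dump.bin \
  --upload-id "EXAMPLE_UPLOAD_ID"
```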
d. Complete the multi-part upload
Here’s an example of the file parts JSON:
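The parts list maps each part number to its ETag. The ETags below are placeholders – substitute the values returned by upload-part (or list-parts).

```shell
# Write the parts manifest (replace the placeholder ETags with real ones)
cat > parts.json <<'EOF'
{
  "Parts": [
    { "PartNumber": 1, "ETag": "\"<etag-from-part-1>\"" },
    { "PartNumber": 2, "ETag": "\"<etag-from-part-2>\"" }
  ]
}
EOF

# Complete the multipart upload
aws s3api complete-multipart-upload \
  --bucket my-transfer-bucket \
  --key dump.bin \
  --upload-id "EXAMPLE_UPLOAD_ID" \
  --multipart-upload file://parts.json
```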
e. Check the integrity of the uploaded file
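head-object shows the stored object’s size and encryption details. One thing to watch: for objects encrypted with SSE-KMS, the ETag is not an md5 of the data, so the reliable integrity check is comparing checksums after downloading (step 5); here you can at least confirm the size and encryption settings.

```shell
# Check ContentLength matches the original file size and
# ServerSideEncryption is "aws:kms"
aws s3api head-object --bucket my-transfer-bucket --key dump.bin
```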
STEP 5: TEST THE UPLOADED FILE
a. Try to access the file using the S3 console
You shouldn’t be able to: no keys are passed in the request headers, so the file can only be accessed with signed URLs.
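If you do need to hand someone a link, a signed URL can be generated from the CLI. The signing credentials also need kms:Decrypt on the key for the download to succeed.

```shell
# Generate a signed URL valid for one hour
aws s3 presign s3://my-transfer-bucket/dump.bin --expires-in 3600
```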
b. Download the file using s3api
c. Check the md5 checksum of the downloaded file
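Steps b and c together look something like this – S3 decrypts the object transparently as long as the caller has kms:Decrypt on the key:

```shell
# b. Download the file with s3api
aws s3api get-object \
  --bucket my-transfer-bucket \
  --key dump.bin \
  downloaded.bin

# c. The checksum should match the one recorded in step 3
md5sum downloaded.bin
```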
Looking for more information on how AWS technologies could be the solution to your problem? We can help. Contact us here.