stock-management-demo/src-server/utils/s3.js

import { S3Client } from 'bun';
import { getSetting } from './settings';
// Cache the client at module scope so it is only constructed once per process.
let s3Client = null;

export async function getS3Client()
{
    if (s3Client)
    {
        return s3Client;
    }

    // All connection details come from the application settings store.
    const s3AccessKey = await getSetting('S3_ACCESS_KEY_ID');
    const s3SecretKey = await getSetting('S3_SECRET_ACCESS_KEY');
    const s3Endpoint = await getSetting('S3_ENDPOINT');
    const s3Bucket = await getSetting('S3_BUCKET_NAME');

    if (s3AccessKey && s3SecretKey && s3Endpoint && s3Bucket)
    {
        s3Client = new S3Client({
            endpoint: s3Endpoint,
            accessKeyId: s3AccessKey,
            secretAccessKey: s3SecretKey,
            bucket: s3Bucket,
        });
    }
    else
    {
        throw new Error('S3 settings are not configured properly.');
    }

    return s3Client;
}
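
/*
Example usage (an illustrative sketch, not part of this module's exports;
`objectKey` and `payload` are hypothetical names, and the calls follow the
S3Client documentation below):

const client = await getS3Client();
const file = client.file(objectKey);                    // lazy reference, no network request yet
await file.write(JSON.stringify(payload), { type: 'application/json' });
const downloadUrl = file.presign({ expiresIn: 3600 });  // one-hour download link
*/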
/* S3Client documentation
Working with S3 Files
The file method in S3Client returns a lazy reference to a file on S3.
// A lazy reference to a file on S3
const s3file: S3File = client.file("123.json");
Like Bun.file(path), the S3Client's file method is synchronous. It does zero network requests until you call a method that depends on a network request.
Reading files from S3
If you've used the fetch API, you're familiar with the Response and Blob APIs. S3File extends Blob. The same methods that work on Blob also work on S3File.
// Read an S3File as text
const text = await s3file.text();
// Read an S3File as JSON
const json = await s3file.json();
// Read an S3File as an ArrayBuffer
const buffer = await s3file.arrayBuffer();
// Get only the first 1024 bytes
const partial = await s3file.slice(0, 1024).text();
// Stream the file
const stream = s3file.stream();
for await (const chunk of stream) {
    console.log(chunk);
}
Memory optimization
Methods like text(), json(), bytes(), or arrayBuffer() avoid duplicating the string or bytes in memory when possible.
If the text happens to be ASCII, Bun directly transfers the string to JavaScriptCore (the engine) without transcoding and without duplicating the string in memory. When you use .bytes() or .arrayBuffer(), it will also avoid duplicating the bytes in memory.
These helper methods not only simplify the API, they also make it faster.
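For example, .bytes() reads the object as a Uint8Array (a minimal illustration):
// Read an S3File as raw bytes, without an intermediate copy where possible
const bytes = await s3file.bytes();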
Writing & uploading files to S3
Writing to S3 is just as simple.
// Write a string (replacing the file)
await s3file.write("Hello World!");
// Write a Buffer (replacing the file)
await s3file.write(Buffer.from("Hello World!"));
// Write a Response (replacing the file)
await s3file.write(new Response("Hello World!"));
// Write with content type
await s3file.write(JSON.stringify({ name: "John", age: 30 }), {
    type: "application/json",
});
// Write using a writer (streaming)
const writer = s3file.writer({ type: "application/json" });
writer.write("Hello");
writer.write(" World!");
await writer.end();
// Write using Bun.write
await Bun.write(s3file, "Hello World!");
Working with large files (streams)
Bun automatically handles multipart uploads for large files and provides streaming capabilities. The same API that works for local files also works for S3 files.
// Write a large file
const bigFile = Buffer.alloc(10 * 1024 * 1024); // 10MB
const writer = s3file.writer({
    // Automatically retry on network errors up to 3 times
    retry: 3,
    // Queue up to 10 requests at a time
    queueSize: 10,
    // Upload in 5 MB chunks
    partSize: 5 * 1024 * 1024,
});
for (let i = 0; i < 10; i++) {
    await writer.write(bigFile);
}
await writer.end();
Presigning URLs
When your production service needs to let users upload files to your server, it's often more reliable for the user to upload directly to S3 instead of your server acting as an intermediary.
To facilitate this, you can presign URLs for S3 files. This generates a URL with a signature that allows a user to securely upload that specific file to S3, without exposing your credentials or granting them unnecessary access to your bucket.
The default behaviour is to generate a GET URL that expires in 24 hours. Bun attempts to infer the content type from the file extension. If inference is not possible, it will default to application/octet-stream.
import { s3 } from "bun";
// Generate a presigned URL that expires in 24 hours (default)
const download = s3.presign("my-file.txt"); // GET, text/plain, expires in 24 hours
const upload = s3.presign("my-file", {
    expiresIn: 3600, // 1 hour
    method: "PUT",
    type: "application/json", // No file extension to infer from, so specify the content type explicitly
});
// You can also call .presign() on a file reference, but avoid creating a
// reference just to presign it; prefer s3.presign() unless you already have one.
const myFile = s3.file("my-file.txt");
const presignedFile = myFile.presign({
    expiresIn: 3600, // 1 hour
});
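A client that receives the presigned PUT URL can then upload directly to S3. A sketch
using the `upload` URL generated above (the Content-Type matches the type passed to presign):
await fetch(upload, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "John", age: 30 }),
});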
Setting ACLs
To set an ACL (access control list) on a presigned URL, pass the acl option:
const url = s3file.presign({
    acl: "public-read",
    expiresIn: 3600,
});
You can pass any of the following ACLs:
ACL                            Explanation
"public-read"                  The object is readable by the public.
"private"                      The object is readable only by the bucket owner.
"public-read-write"            The object is readable and writable by the public.
"authenticated-read"           The object is readable by the bucket owner and authenticated users.
"aws-exec-read"                The object is readable by the AWS account that made the request.
"bucket-owner-read"            The object is readable by the bucket owner.
"bucket-owner-full-control"    The object is readable and writable by the bucket owner.
"log-delivery-write"           The object is writable by AWS services used for log delivery.
*/