upload() allows you to control how your object is uploaded. For example, you can define the concurrency and the part size.
From their docs:
Uploads an arbitrarily sized buffer, blob, or stream, using intelligent concurrent handling of parts if the payload is large enough.
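As a sketch of what that looks like in practice (assuming the v2 `aws-sdk` package and configured credentials; the bucket and key names are placeholders):

```javascript
// Sketch only: the requires happen inside the function so nothing runs
// until you actually call it.
const uploadOptions = {
  partSize: 10 * 1024 * 1024, // split the body into 10 MB parts (S3's minimum is 5 MB)
  queueSize: 4                // upload up to 4 parts concurrently
};

function uploadWithOptions(body) {
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();
  // upload(params, [options]) — options is where concurrency and part size live
  return s3
    .upload({ Bucket: 'my-bucket', Key: 'big-file.bin', Body: body }, uploadOptions)
    .promise();
}
```

If the body is smaller than one part, upload() falls back to a single PUT, so the same call works for small and large objects.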
One specific benefit I've discovered is that upload() accepts a stream without a defined content length, whereas putObject() does not.
This was useful as I had an API endpoint that allowed users to upload a file. The framework delivered the file to my controller in the form of a readable stream without a content length. Instead of having to measure the file size, all I had to do was pass it straight through to the upload() call.
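The pass-through looks roughly like this (a sketch assuming v2 `aws-sdk` and an Express-style handler; the bucket and key are hypothetical):

```javascript
// Sketch only: `req` is the incoming request, a readable stream whose total
// size is not known up front.
function handleUpload(req, res) {
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();

  // putObject() would require a ContentLength for a stream body;
  // upload() does not, because it reads the stream into parts and
  // multipart-uploads them as they fill.
  s3.upload(
    { Bucket: 'user-uploads', Key: `files/${Date.now()}`, Body: req },
    (err, data) => {
      if (err) return res.status(500).send(err.message);
      res.json({ location: data.Location });
    }
  );
}
```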
This source is a little dated (it references upload_file() and put(), so it may actually be the Ruby SDK), but it suggests that putObject() is intended for smaller objects than upload().
It recommends upload() and specifies why:
This is the recommended method of using the SDK to upload files to a
bucket. Using this approach has the following benefits:
Manages multipart uploads for objects larger than 15MB.
Correctly opens files in binary mode to avoid encoding issues.
Uses multiple threads for uploading parts of large objects in parallel.
It then covers the putObject() operation:
For smaller objects, you may choose to use #put instead.
This question was asked almost six years ago, and I stumbled across it while searching for information on the latest AWS Node.js SDK (V3). While V2 of the SDK supports the "upload" and "putObject" functions, the V3 client package only exposes "Put Object" functionality as "PutObjectCommand". The low-level multipart operations are available as "UploadPartCommand" and "UploadPartCopyCommand", but there is no "UploadCommand" equivalent to the standalone "upload" function from V2; the high-level behavior moved into the separate "@aws-sdk/lib-storage" package, which provides an "Upload" class.
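To make the V3 difference concrete, here is a sketch of both paths (assuming the `@aws-sdk/client-s3` and `@aws-sdk/lib-storage` packages; bucket/key names and the region-from-environment setup are placeholders):

```javascript
// Single-request put, the V3 replacement for putObject():
async function putObjectV3(bucket, key, body) {
  const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
  const client = new S3Client({}); // region/credentials come from the environment
  return client.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body }));
}

// Managed multipart upload, the V3 counterpart of V2's upload(),
// from the separate @aws-sdk/lib-storage package:
async function uploadV3(bucket, key, body) {
  const { S3Client } = require('@aws-sdk/client-s3');
  const { Upload } = require('@aws-sdk/lib-storage');
  const upload = new Upload({
    client: new S3Client({}),
    params: { Bucket: bucket, Key: key, Body: body },
    partSize: 10 * 1024 * 1024, // same knobs as V2's upload() options
    queueSize: 4
  });
  return upload.done();
}
```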
So if you migrate to the V3 SDK, you will need to adjust your upload code accordingly. Get Object is also different in V3: a Buffer is no longer returned; instead, the response Body is a readable stream (in Node.js) or a Blob (in the browser). So if you read the data via "Body.toString()", you now have to implement a stream reader or handle Blobs.
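A minimal Node.js stream reader for that case looks like this. The S3 call is a sketch assuming `@aws-sdk/client-s3`; streamToString itself works on any Node readable stream:

```javascript
// Collect a readable stream into a UTF-8 string.
function streamToString(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) =>
      chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk))
    );
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
  });
}

// Sketch: fetch an object with V3 and read its Body stream.
async function getObjectAsString(bucket, key) {
  const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
  const client = new S3Client({});
  const { Body } = await client.send(
    new GetObjectCommand({ Bucket: bucket, Key: key })
  );
  return streamToString(Body); // Body is a readable stream in Node.js
}
```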