As of 28 July 2015, you can get this information via CloudWatch:
aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time 2015-07-15T10:00:00 \
  --end-time 2015-07-31T01:00:00 --period 86400 --statistics Average --region us-east-1 \
  --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=myBucketNameGoesHere \
  Name=StorageType,Value=StandardStorage
Important: You must specify both StorageType and BucketName in the dimensions argument; otherwise you will get no results.
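The dates above are hard-coded; as a minimal sketch, the same query can be run with a rolling two-day window instead (this assumes GNU date, and my-bucket is a placeholder bucket name):

# same query, but with a rolling two-day window (GNU date; my-bucket is a placeholder)
aws cloudwatch get-metric-statistics --namespace AWS/S3 \
  --start-time "$(date -u -d '-2 days' +%Y-%m-%dT%H:%M:%S)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
  --period 86400 --statistics Average --region us-east-1 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value=my-bucket Name=StorageType,Value=StandardStorage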
I use s3cmd du s3://BUCKET/ --human-readable to view the size of folders in S3. It gives quite detailed information about the total number of objects in the bucket and their size, in a very readable form.
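If you prefer the AWS CLI, a roughly comparable summary is the following (note that it lists every object, so it can be slow on large buckets):

aws s3 ls s3://BUCKET/ --recursive --human-readable --summarize

It prints one line per object, followed by Total Objects and Total Size lines at the end.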
You will see a list of all buckets. Note there are two possible points of confusion here:
a. You will only see buckets that have at least one object in the bucket.
b. You may not see buckets created in a different region; you might need to switch regions using the pull-down at the top right to see the additional buckets.
Search for the word "StandardStorage" in the area stating "Search for any metric, dimension or resource id"
Select the buckets (or all buckets with the checkbox at the left below the word "All") you would like to calculate total size for
Select at least 3d (3 days) or longer from the time bar towards the top right of the screen
You will now see a graph displaying the daily (or other unit) size of all selected buckets over the selected time period.
If you don't need an exact byte count, or if the bucket is really large (terabytes or millions of objects), using CloudWatch metrics is the fastest way, as it doesn't require iterating through all the objects, which can take significant CPU time and can end in a timeout or network error when using a CLI command.
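For comparison, a minimal sketch of the object-iteration approach (my-bucket is a placeholder and the bucket is assumed to be non-empty; it lists every object, so expect it to be slow on large buckets):

# sum the sizes of all objects by listing them (slow on large buckets)
aws s3api list-objects-v2 --bucket my-bucket --query "sum(Contents[].Size)" --output text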
Based on some examples from others on SO for running the aws cloudwatch get-metric-statistics command, I've wrapped it up in a Bash function that lets you optionally specify a profile for the aws command:
# print S3 bucket size and count
# usage: bsize <bucket> [profile]
function bsize() (
  bucket=$1 profile=${2-default}

  if [[ -z "$bucket" ]]; then
    echo >&2 "bsize <bucket> [profile]"
    return 1
  fi

  # ensure aws/jq/numfmt are installed
  for bin in aws jq numfmt; do
    if ! hash "$bin" 2> /dev/null; then
      echo >&2 "Please install \"$bin\" first!"
      return 1
    fi
  done

  # get bucket region
  region=$(aws --profile "$profile" s3api get-bucket-location --bucket "$bucket" 2> /dev/null | jq -r '.LocationConstraint // "us-east-1"')
  if [[ -z "$region" ]]; then
    echo >&2 "Invalid bucket/profile name!"
    return 1
  fi

  # get storage class (assumes all objects in the bucket use the same class)
  sclass=$(aws --profile "$profile" s3api list-objects --bucket "$bucket" --max-items=1 2> /dev/null | jq -r '.Contents[0].StorageClass // "STANDARD"')
  case $sclass in
    REDUCED_REDUNDANCY) sclass="ReducedRedundancyStorage" ;;
    GLACIER)            sclass="GlacierStorage" ;;
    DEEP_ARCHIVE)       sclass="DeepArchiveStorage" ;;
    *)                  sclass="StandardStorage" ;;
  esac

  # _bsize <metric> <stype>
  # average of the given CloudWatch metric over the last week
  _bsize() {
    metric=$1 stype=$2
    utnow=$(date +%s)
    aws --profile "$profile" cloudwatch get-metric-statistics --namespace AWS/S3 \
      --start-time "$((utnow - 604800))" --end-time "$utnow" --period 604800 \
      --statistics Average --region "$region" --metric-name "$metric" \
      --dimensions Name=BucketName,Value="$bucket" Name=StorageType,Value="$stype" \
      2> /dev/null | jq -r '.Datapoints[].Average'
  }

  # _print <number> <units> <format> [suffix]
  # format the raw number in human-readable units via numfmt
  _print() {
    number=$1 units=$2 format=$3 suffix=$4
    if [[ -n "$number" ]]; then
      numfmt --to="$units" --suffix="$suffix" --format="$format" $number | sed -En 's/([^0-9]+)$/ \1/p'
    fi
  }

  _print "$(_bsize BucketSizeBytes "$sclass")" iec-i "%10.2f" B
  _print "$(_bsize NumberOfObjects AllStorageTypes)" si "%8.2f"
)
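Example usage (both arguments below are placeholders); the function prints the bucket size in IEC units on the first line and the object count on the second:

# with the default profile
bsize my-bucket-name
# with a named profile
bsize my-bucket-name my-profile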
A few caveats:
For simplicity, the function assumes that all objects in the bucket are in the same storage class!
On macOS, use gnumfmt instead of numfmt.
If numfmt complains about an invalid --format option, upgrade GNU coreutils to a version with floating-point precision support.
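On macOS, assuming you use Homebrew, the GNU coreutils package (which provides gnumfmt) can be installed with:

brew install coreutils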