This recursively lists regular files, sorts on the 7th field (which is the size in my find output; check yours), and shows just the first file.
find . -type f -ls | sort +7 | head -1
The first option to find is the starting path for the recursive search. The -type f option matches regular files. Note that if you try to parse the output for file names, it may fail when a file name contains spaces, newlines, or other special characters. The options to sort also vary between operating systems; I'm using FreeBSD here.
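If your sort doesn't accept the old +N field syntax (many modern implementations no longer do), a rough equivalent using -k is shown below; it assumes the size is the 7th field of your find -ls output and sorts numerically in reverse so the largest file comes out on top:

find . -type f -ls | sort -nr -k7,7 | head -1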
A "better" but more complex and heavier solution would be to have find traverse the directories, but perhaps use stat to get the details about the file, then perhaps use awk to find the largest size. Note that the output of stat also depends on your operating system.
This will print the largest file's name and size, provided it is larger than about 500 MB. You can remove the if ($1 > ...) size test, and it will print the largest file in the directory regardless of size.
#!/usr/bin/env bash
# scriptname: above
# author: Jonathan D. Lettvin, 201401220235
# This finds files of size >= $1 (format ${count}[K|M|G|T], default 10G)
# using a reliable version-independent bash hash to relax find's -size syntax.
# Specifying size using 'T' for Terabytes is supported.
# Output size has units (K|M|G|T) in the left hand output column.
# Example:
# ubuntu12.04$ above 1T
# 128T /proc/core
# http://stackoverflow.com/questions/1494178/how-to-define-hash-tables-in-bash
# Inspiration for hasch: thanks Adam Katz, Oct 18 2012 00:39
function hasch() { local hasch=`echo "$1" | cksum`; echo "${hasch//[!0-9]}"; }
function usage() { echo "Usage: $0 [{count}{k|K|m|M|g|G|t|T}]"; exit 1; }
function arg1() {
    # Translate single arg (if present) into format usable by find.
    count=10; units=G; # Default find -size argument to 10G.
    size=${count}${units}
    if [ -n "$1" ]; then
        for P in TT tT GG gG MM mM Kk kk; do xlat[`hasch ${P:0:1}`]="${P:1:1}"; done
        units=${xlat[`hasch ${1:(-1)}`]}; count=${1:0:(-1)}
        test -n "$units" || usage
        test -x $(echo "$count" | sed s/[0-9]//g) || usage
        if [ "$units" == "T" ]; then units="G"; let count=$count*1024; fi
        size=${count}${units}
    fi
}
function main() {
    sudo \
        find / -type f -size +$size -exec ls -lh {} \; 2>/dev/null | \
        awk '{ N=$5; fn=$9; for(i=10;i<=NF;i++){fn=fn" "$i};print N " " fn }'
}
arg1 $1
main $size
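If you save this as above and make it executable, usage would look something like the following (the argument format comes from the script's header comment):

chmod +x above
./above 500M   # files of 500 MB or more anywhere under /
./above        # no argument: default threshold of 10G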
ls -1Rs | sed -e "s/^ *//" | grep "^[0-9]" | sort -nr | head -n20
Or (with human-readable sizes):
ls -1Rhs | sed -e "s/^ *//" | grep "^[0-9]" | sort -hr | head -n20
This works under Linux/BSD/OSX, unlike some of the other answers, because find's -printf option doesn't exist on OSX/BSD and stat takes different parameters depending on the OS. However, for the second command to work properly on OSX/BSD (whose sort doesn't support -h), install sort from coreutils, or drop -h from ls and use sort -nr instead.
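For example, assuming GNU coreutils installed via Homebrew on OSX (which prefixes the tools with g by default), the human-readable variant becomes:

ls -1Rhs | sed -e "s/^ *//" | grep "^[0-9]" | gsort -hr | head -n20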
So these aliases are useful to have in your rc files:
alias big='du -ah . | sort -rh | head -20'
alias big-files='ls -1Rhs | sed -e "s/^ *//" | grep "^[0-9]" | sort -hr | head -n20'
You can use ls -shS /pathFolder | head -n 1. The capital S in ls already sorts the listing from the largest to the smallest file, but the first line of the output is the total size of all files in that folder. So if you want to list just the single largest file, you need head -n 2 and to look at the second line of the result, or use the earlier example with ls, sort, and head.
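For example, to skip that total line and print only the single largest file (assuming /pathFolder is the directory in question):

ls -shS /pathFolder | head -n 2 | tail -n 1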