To create a big (empty) file, set $INPUT-FILE=/dev/zero.
The total size of the file will be $BLOCK-SIZE * $NUM-BLOCKS, and the new file created will be $OUTPUT-FILE.
This will create a file with a "hole" in it on most Unixes: the data won't actually be written to disk, or take up any space, until something other than zero is written into it.
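As a concrete sketch, here is how those placeholders map onto a dd invocation; the file names and sizes below are illustrative. Note that writing zeros block by block fills the file with real data, while seeking past the end with count=0 is what actually produces the hole:

dd if=/dev/zero of=big.img bs=1M count=100            # writes 100 MiB of zeros to disk
dd if=/dev/zero of=sparse.img bs=1M count=0 seek=100  # 100 MiB apparent size, nothing written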
Please use the modern tools instead; they are easier and faster. On Linux, pick one of:
truncate -s 10G foo
fallocate -l 5G bar
It needs to be stated that, on a file system supporting sparse files, truncate will create a sparse file and fallocate will not. A sparse file is one where the allocation units that make up the file are not actually allocated until used. The metadata for the file will, however, take up some considerable space, though likely nowhere near the actual size of the file. You should consult resources about sparse files for more information, as there are advantages and disadvantages to this type of file. A non-sparse file has its blocks (allocation units) allocated ahead of time, which means the space is reserved as far as the file system sees it.

Also, neither fallocate nor truncate will set the contents of the file to a specified value the way dd does; instead, the contents of a file allocated with fallocate or truncate may be whatever stale values existed in the allocated units at creation, and this behavior may or may not be desired. dd is the slowest method because it actually writes the value or chunk of data through the entire file stream, as specified by its command-line options.
This behavior could potentially differ depending on the file system used and that file system's conformance to any standard or specification. It is therefore advisable to do proper research to ensure the appropriate method is used.
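A quick way to see the difference on a file system that supports sparse files (the file names are illustrative):

truncate -s 1G sparse_file
fallocate -l 1G allocated_file
ls -lh sparse_file allocated_file   # both report an apparent size of 1G
du -h sparse_file allocated_file    # sparse_file occupies almost no blocks; allocated_file about 1G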
Some of these answers have you using /dev/zero as the source of your data. If you're testing network upload speeds, this may not be the best idea: if your application does any compression, a file full of zeros compresses really well. Using this command to generate the file:
dd if=/dev/zero of=upload_test bs=10000 count=1
I could compress upload_test down to about 200 bytes. So you could put yourself in a situation where you think you're uploading a 10 KB file, but it would actually be much less.
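To check this yourself (gzip here is just an illustrative compressor):

gzip -c upload_test > upload_test.gz
ls -l upload_test upload_test.gz   # the compressed file is a tiny fraction of the original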
What I suggest is using /dev/urandom instead of /dev/zero. I couldn't compress the output of /dev/urandom very much at all.
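The same command with /dev/urandom as the source (note that /dev/urandom is noticeably slower than /dev/zero for large files):

dd if=/dev/urandom of=upload_test bs=10000 count=1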
There are lots of answers, but none explained nicely what else can be done. Looking at the man page for dd, there are better ways to specify the size of a file.
This will create /tmp/zero_big_data_file.bin, filled with zeros, with a size of 20 megabytes.
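A dd invocation consistent with that description (20 MB written as 1024-byte blocks) would be:

dd if=/dev/zero of=/tmp/zero_big_data_file.bin bs=1024 count=20480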
In all examples, bs is the block size and count is the number of blocks.
BLOCKS and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y.
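For example, the same 20 MiB file can be written with a multiplicative suffix instead of a raw block count:

dd if=/dev/zero of=/tmp/zero_big_data_file.bin bs=1M count=20   # M = 1024*1024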
This will generate a 4 MB text file of random characters, named "4mb.txt", in the current directory.
You can change the parameters to generate different sizes and names.
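A sketch that matches that description (random printable text, 4 MB, written to 4mb.txt):

base64 /dev/urandom | head -c 4194304 > 4mb.txt   # 4194304 bytes = 4 MiB of base64 text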
Usage:
fallocate [options] <filename>
Preallocate space to, or deallocate space from a file.
Options:
-c, --collapse-range remove a range from the file
-d, --dig-holes detect zeroes and replace with holes
-i, --insert-range insert a hole at range, shifting existing data
-l, --length <num> length for range operations, in bytes
-n, --keep-size maintain the apparent size of the file
-o, --offset <num> offset for range operations, in bytes
-p, --punch-hole replace a range with a hole (implies -n)
-z, --zero-range zero and ensure allocation of a range
-x, --posix use posix_fallocate(3) instead of fallocate(2)
-v, --verbose verbose mode
-h, --help display this help
-V, --version display version
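Two typical invocations based on the options above (the file name is illustrative):

fallocate -l 5G bar   # reserve 5 GiB up front so later writes cannot fail for lack of space
fallocate -d bar      # detect zeroed ranges in an existing file and punch holes, making it sparse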