C doesn't concern itself much with exact integer sizes. C99 introduces the header stdint.h, which is probably your best bet: include it and you can use, e.g., int32_t. Of course, not every platform supports it.
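A minimal sketch of the stdint.h approach (the cast to long in the printf call is only there to keep the format string portable):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t value = 2147483647;     /* exactly 32 bits, two's complement */
    printf("%ld\n", (long)value);   /* cast for printing; long is at least 32 bits */
    return 0;
}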
Corey's answer is correct for "best", in my opinion, but a simple "int" will also work in practice (given that you're ignoring systems with 16-bit int). At this point, so much code depends on int being 32-bit that system vendors aren't going to change it.
(See also why long is 32-bit on lots of 64-bit systems and why we have "long long".)
One of the benefits of using int32_t, though, is that you're not perpetuating this problem!
If stdint.h is not available for your system, make your own. I always have a file called "types.h" with typedefs for all the signed and unsigned 8-, 16-, and 32-bit values.
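For example, a minimal sketch of such a fallback header (the underlying types chosen below are assumptions and must be adjusted to your particular compiler and platform):

/* types.h -- hypothetical fallback for systems without <stdint.h> */
#ifndef TYPES_H
#define TYPES_H

typedef signed char    int8;     /* assumes char is 8 bits   */
typedef unsigned char  uint8;
typedef short          int16;    /* assumes short is 16 bits */
typedef unsigned short uint16;
typedef int            int32;    /* assumes int is 32 bits   */
typedef unsigned int   uint32;

#endif /* TYPES_H */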
stdint.h is the obvious choice, but it's not necessarily available.
If you're using a portable library, it's possible that it already provides portable fixed-width integers.
For example, SDL has Sint32 (S is for “signed”), and GLib has gint32.
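For instance, with SDL (assuming SDL 1.2 or SDL2, where these typedefs are pulled in via SDL.h) it might look like:

#include <SDL.h>

int main(int argc, char *argv[])
{
    Sint32 score = -42;   /* SDL's signed 32-bit type   */
    Uint32 flags = 0;     /* SDL's unsigned 32-bit type */
    (void)argc; (void)argv; (void)score; (void)flags;
    return 0;
}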
Prefer including inttypes.h to stdint.h: stdint.h is not available on some platforms (such as some Solaris releases), and inttypes.h will include stdint.h for you on systems that have it (such as Linux). Including inttypes.h therefore makes your code more portable between Linux and Solaris.
And this link has a table showing why you shouldn't use long or int when you need a specific number of bits in your data type.
IBM link about portable data types
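A small sketch of this: inttypes.h also gives you the printf/scanf format macros for the fixed-width types, which stdint.h alone does not.

#include <inttypes.h>   /* pulls in <stdint.h> where both exist */
#include <stdio.h>

int main(void)
{
    int32_t value = INT32_MIN;
    printf("value = %" PRId32 "\n", value);   /* PRId32 is the correct conversion for int32_t */
    return 0;
}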
If your implementation supports two's-complement 32-bit integers (with no padding bits), then it must define int32_t.
If not, then the next best thing is int_least32_t, which is the smallest integer type supported by the implementation that is at least 32 bits wide, regardless of representation (two's complement, ones' complement, etc.).
There is also int_fast32_t, which is an integer type at least 32 bits wide, chosen with the intention of allowing the fastest operations for that size requirement.
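A short sketch contrasting the three (the casts to long are only there because the actual widths of the least/fast types are implementation-defined):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t       exact = 123;   /* exactly 32 bits; optional on exotic hardware        */
    int_least32_t least = 123;   /* smallest type with at least 32 bits; always present */
    int_fast32_t  fast  = 123;   /* "fastest" type with at least 32 bits; always present */

    printf("%ld %ld %ld\n", (long)exact, (long)least, (long)fast);
    return 0;
}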
ANSI C
You can use long, which is guaranteed to be at least 32 bits wide as a result of the minimum range requirements specified by the standard.
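For example, this works even under C89/C90, because the standard requires LONG_MAX to be at least 2147483647:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    long value = 2147483647L;   /* always fits: LONG_MAX >= 2147483647 is guaranteed */
    printf("LONG_MAX here is %ld\n", LONG_MAX);
    printf("value is %ld\n", value);
    return 0;
}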
If you would rather use the smallest integer type to fit a 32-bit number, then you can use preprocessor statements like the following with the macros defined in <limits.h>:
#include <limits.h>

#define TARGET_MAX 2147483647L

#if SCHAR_MAX >= TARGET_MAX
typedef signed char int32;
#elif SHRT_MAX >= TARGET_MAX
typedef short int32;
#elif INT_MAX >= TARGET_MAX
typedef int int32;
#else
typedef long int32;   /* long is guaranteed to be at least 32 bits */
#endif

#undef TARGET_MAX