That depends entirely on the system you are running on, but goroutines are very lightweight: an average process should have no problem with 100,000 concurrent goroutines. Whether this holds for your target platform is, of course, something we can't answer without knowing what that platform is.
If a goroutine is blocked, there is no cost involved other than:
memory usage
slower garbage collection
The costs (in terms of memory and average time to actually start executing a goroutine) are:
Go 1.6.2 (April 2016)
32-bit x86 CPU (A10-7850K 4GHz)
| Number of goroutines: 100000
| Per goroutine:
| Memory: 4536.84 bytes
| Time: 1.634248 µs
64-bit x86 CPU (A10-7850K 4GHz)
| Number of goroutines: 100000
| Per goroutine:
| Memory: 4707.92 bytes
| Time: 1.842097 µs
Go release.r60.3 (December 2011)
32-bit x86 CPU (1.6 GHz)
| Number of goroutines: 100000
| Per goroutine:
| Memory: 4243.45 bytes
| Time: 5.815950 µs
On a machine with 4 GB of memory installed, this limits the maximum number of goroutines to slightly less than 1 million.
Source code (no need to read this if you already understand the numbers printed above):
package main

import (
	"flag"
	"fmt"
	"os"
	"runtime"
	"time"
)

var n = flag.Int("n", 1e5, "Number of goroutines to create")

var ch = make(chan byte)
var counter = 0

func f() {
	counter++
	<-ch // Block this goroutine
}

func main() {
	flag.Parse()
	if *n <= 0 {
		fmt.Fprintf(os.Stderr, "invalid number of goroutines\n")
		os.Exit(1)
	}

	// Limit the number of spare OS threads to just 1
	runtime.GOMAXPROCS(1)

	// Make a copy of MemStats
	var m0 runtime.MemStats
	runtime.ReadMemStats(&m0)

	t0 := time.Now().UnixNano()
	for i := 0; i < *n; i++ {
		go f()
	}
	runtime.Gosched()
	t1 := time.Now().UnixNano()

	runtime.GC()

	// Make a copy of MemStats
	var m1 runtime.MemStats
	runtime.ReadMemStats(&m1)

	if counter != *n {
		fmt.Fprintf(os.Stderr, "failed to begin execution of all goroutines\n")
		os.Exit(1)
	}

	fmt.Printf("Number of goroutines: %d\n", *n)
	fmt.Printf("Per goroutine:\n")
	fmt.Printf("  Memory: %.2f bytes\n", float64(m1.Sys-m0.Sys)/float64(*n))
	fmt.Printf("  Time:   %f µs\n", float64(t1-t0)/float64(*n)/1e3)
}
If the number of goroutines ever becomes an issue, you can easily limit it for your program:
See mr51m0n/gorc and this example.
It sets thresholds on the number of running goroutines: it increments and decrements a counter when a goroutine starts or stops, and it can wait for a minimum or maximum number of running goroutines, allowing you to cap the number of gorc-governed goroutines running at the same time.
It is practical to create hundreds of thousands of goroutines in the same address space.
The test test/chan/goroutines.go creates 10,000 and could easily do more, but is designed to run quickly; you can change the number on your system to experiment. You can easily run millions, given enough memory, such as on a server.
To understand the max number of goroutines, note that the per-goroutine cost is primarily the stack. Per FAQ again:
…goroutines, can be very cheap: they have little overhead beyond the memory for the stack, which is just a few kilobytes.
A back-of-the-envelope calculation is to assume that each goroutine has one 4 KiB page allocated for the stack (4 KiB is a pretty uniform page size), plus some small overhead for a control block (like a Thread Control Block) for the runtime; this agrees with what you observed (in 2011, pre-Go 1.0). Thus 100 Ki goroutines would take about 400 MiB of memory, and 1 Mi goroutines would take about 4 GiB of memory, which is still manageable on a desktop, a bit much for a phone, and very manageable on a server. In practice the starting stack has ranged in size from half a page (2 KiB) to two pages (8 KiB), so this is approximately correct.
The starting stack size has changed over time; it started at 4 KiB (one page), then in 1.2 was increased to 8 KiB (2 pages), then in 1.4 was decreased to 2 KiB (half a page). These changes were due to segmented stacks causing performance problems when rapidly switching back and forth between segments ("hot stack split"), so the default was increased to mitigate this (1.2), then decreased when segmented stacks were replaced with contiguous stacks (1.4):
the default starting size for a goroutine's stack in 1.4 has been reduced from 8192 bytes to 2048 bytes.
Per-goroutine memory is largely stack, and it starts low and grows, so you can cheaply have many goroutines. You could use a smaller starting stack, but then it would have to grow sooner (gaining space at the cost of time), and the benefits diminish because the control block does not shrink. It is possible to eliminate the stack entirely, at least while a goroutine is swapped out (e.g., do all allocation on the heap, or save the stack to the heap on context switch), though this hurts performance and adds complexity. This is possible (as in Erlang), and would mean you only need the control block and saved context, allowing another factor of 5×–10× in the number of goroutines, limited now by control block size and the on-heap size of goroutine-local variables. However, this isn't terribly useful unless you need millions of tiny sleeping goroutines.