Comparison Table
| Feature | Goroutines | OS Threads |
|---|---|---|
| Stack Size | 2KB (initial), grows dynamically | Fixed at creation (commonly 1-8MB) |
| Creation Time | ~1-2 microseconds | ~1000 microseconds |
| Memory Overhead | ~2KB per goroutine | Up to ~8MB per thread (default stack reservation + metadata) |
| Context Switch | ~200 nanoseconds | ~1-2 microseconds |
| Scheduling | Go runtime scheduler (cooperative, with async preemption since Go 1.14) | Preemptive (OS kernel) |
| Maximum Count | Millions (hundreds of thousands routinely) | Thousands (practical limit around 10K) |
| Management | Application-level | Kernel-level |
| Communication | Channels (built-in) | Shared memory + mutexes |
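As a quick illustration of the table, the sketch below spawns 100,000 goroutines that report results over a channel, a workload that would be impractical with one OS thread per task. The goroutine count and the doubling work are arbitrary illustration values.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 100_000 // far more concurrent tasks than OS threads could handle cheaply

	results := make(chan int, n) // channel-based communication, no shared-memory locking needed
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) { // each goroutine starts with a small (~2KB) stack
			defer wg.Done()
			results <- id * 2
		}(i)
	}

	wg.Wait()
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	fmt.Println("sum:", sum)
}
```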
Cost Analysis
Memory Usage
Creating 1 million goroutines:
```
Goroutines: 1,000,000 × 2KB ≈ 2GB
OS Threads: 1,000,000 × 2MB ≈ 2TB (impractical)
```
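One rough way to check the per-goroutine figure yourself is to park a large number of goroutines and compare runtime.MemStats before and after, as sketched below. The result is approximate (Sys counts all memory obtained from the OS), exact numbers vary by Go version and platform, and running it with the full 1,000,000 goroutines needs a few GB of RAM.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	const n = 1_000_000

	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	done := make(chan struct{})
	for i := 0; i < n; i++ {
		go func() {
			<-done // park each goroutine so its stack stays allocated
		}()
	}

	runtime.ReadMemStats(&after)
	fmt.Printf("approx bytes per goroutine: %d\n", (after.Sys-before.Sys)/n)

	close(done) // release the parked goroutines
}
```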
Context Switch Cost
```
Per switch:
  Goroutine: ~200ns
  OS Thread: ~1-2μs (5-10x slower)
```

Why goroutine context switches are faster:
- No kernel involvement
- No TLB flush
- Smaller context to save/restore
- Cache-friendly (same address space)
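A channel ping-pong microbenchmark is a common way to get a feel for goroutine switch cost. The sketch below measures it, though the result also includes channel send/receive overhead, so treat it as a rough upper bound rather than a pure context-switch time.

```go
package main

import (
	"fmt"
	"time"
)

// Ping-pong over unbuffered channels: every send/receive pair forces
// the scheduler to switch between the two goroutines.
func main() {
	const rounds = 1_000_000

	ping := make(chan struct{})
	pong := make(chan struct{})

	go func() {
		for i := 0; i < rounds; i++ {
			<-ping
			pong <- struct{}{}
		}
	}()

	start := time.Now()
	for i := 0; i < rounds; i++ {
		ping <- struct{}{}
		<-pong
	}
	elapsed := time.Since(start)

	// Each round involves roughly two goroutine switches.
	fmt.Printf("per switch: ~%v\n", elapsed/(rounds*2))
}
```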
Visual Comparison
```
OS Thread Model (1:1):

┌──────┐ ┌──────┐ ┌──────┐
│ App  │ │ App  │ │ App  │
│Thread│ │Thread│ │Thread│
└──┬───┘ └──┬───┘ └──┬───┘
   │        │        │
   ▼        ▼        ▼
┌──────┐ ┌──────┐ ┌──────┐
│  OS  │ │  OS  │ │  OS  │
│Thread│ │Thread│ │Thread│
└──────┘ └──────┘ └──────┘

Go Goroutine Model (M:N):

┌──┐ ┌──┐ ┌──┐ ┌──┐ ┌──┐ ┌──┐ ┌──┐
│G │ │G │ │G │ │G │ │G │ │G │ │G │   (Millions possible)
└┬─┘ └┬─┘ └┬─┘ └┬─┘ └┬─┘ └┬─┘ └┬─┘
 │    │    │    │    │    │    │
 └────┴────┴────┴────┴────┴────┘
      │         │         │
      ▼         ▼         ▼
  ┌──────┐  ┌──────┐  ┌──────┐
  │  M   │  │  M   │  │  M   │      (Few OS threads)
  └──────┘  └──────┘  └──────┘
```
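The sketch below makes the M:N relationship visible at runtime: runtime.GOMAXPROCS bounds how many OS threads can execute Go code simultaneously, while runtime.NumGoroutine counts the goroutines multiplexed onto them. The 50,000 goroutine count is just an illustration value.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Logical processors: the upper bound on OS threads running Go code at once.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// Park many goroutines; the runtime multiplexes all of them onto a few threads.
	block := make(chan struct{})
	for i := 0; i < 50_000; i++ {
		go func() { <-block }()
	}
	time.Sleep(100 * time.Millisecond) // give the scheduler a moment to register them

	fmt.Println("goroutines:", runtime.NumGoroutine())
	close(block)
}
```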