Explain the Go Scheduler (GMP Model)

The Go scheduler implements an M:N scheduling model, multiplexing many goroutines onto a much smaller number of OS threads. It is built around the GMP architecture:

Components

  • G (Goroutine): the runtime structure that represents a goroutine, including its stack and scheduling state
  • M (Machine): an OS thread managed by the Go runtime
  • P (Processor): a logical processor that provides the execution context (including a local run queue) an M needs to run Go code; there are GOMAXPROCS of them

Architecture Diagram

code
┌─────────────────────────────────────────────────────────┐
│                    Go Runtime                           │
│                                                          │
│  ┌──────┐   ┌──────┐   ┌──────┐   ┌──────┐            │
│  │  P0  │   │  P1  │   │  P2  │   │  P3  │  (GOMAXPROCS)│
│  │      │   │      │   │      │   │      │            │
│  │ LRQ  │   │ LRQ  │   │ LRQ  │   │ LRQ  │  Local Run  │
│  │[G G] │   │[G G] │   │[G]   │   │[G G] │  Queues     │
│  └──┬───┘   └──┬───┘   └──┬───┘   └──┬───┘            │
│     │         │          │          │                  │
│     ▼         ▼          ▼          ▼                  │
│  ┌──────┐  ┌──────┐  ┌──────┐  ┌──────┐               │
│  │  M0  │  │  M1  │  │  M2  │  │  M3  │  OS Threads   │
│  │      │  │      │  │      │  │      │               │
│  │  G   │  │  G   │  │  G   │  │  G   │  Running Gs   │
│  └──────┘  └──────┘  └──────┘  └──────┘               │
│                                                          │
│  ┌────────────────────────────────────────────┐        │
│  │     Global Run Queue (GRQ)                 │        │
│  │     [G] [G] [G] [G] [G] [G] [G]           │        │
│  └────────────────────────────────────────────┘        │
│                                                          │
│  ┌────────────────────────────────────────────┐        │
│  │     Network Poller (netpoller)             │        │
│  │     [G waiting on I/O] [G waiting on I/O]  │        │
│  └────────────────────────────────────────────┘        │
└─────────────────────────────────────────────────────────┘
         │          │          │          │
         ▼          ▼          ▼          ▼
    ┌────────────────────────────────────────┐
    │      Operating System Kernel           │
    │      (Actual CPU Cores)                │
    └────────────────────────────────────────┘
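
The number of Ps normally defaults to the number of logical CPUs and can be queried or changed with runtime.GOMAXPROCS. A minimal sketch (the printed values depend on the machine it runs on):

go
package main

import (
    "fmt"
    "runtime"
)

func main() {
    // GOMAXPROCS(0) queries the current value without changing it;
    // by default it matches runtime.NumCPU().
    fmt.Println("NumCPU:    ", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

    // Setting it to 1 leaves a single P, so at most one goroutine
    // executes Go code at any instant (Ms blocked in syscalls aside).
    prev := runtime.GOMAXPROCS(1)
    fmt.Println("previous:  ", prev)
}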

How Scheduling Works

1. Goroutine Creation

go
go func() {
    // New goroutine created
    fmt.Println("Hello")
}()
What Happens:
  1. A new G structure is allocated (or recycled from the runtime's free list)
  2. The G is placed on the current P's local run queue (LRQ)
  3. If the LRQ is full, half of its Gs are moved to the global run queue (GRQ)
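
Goroutines are cheap to create (a few kilobytes of initial stack), so spawning thousands is routine. A small illustrative sketch; the exact number printed depends on how many goroutines have already finished:

go
package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        go func() {
            // Each go statement allocated a G and queued it on the
            // current P's LRQ, spilling into the GRQ on overflow.
            defer wg.Done()
        }()
    }
    // NumGoroutine reports the Gs that have not exited yet.
    fmt.Println("live goroutines:", runtime.NumGoroutine())
    wg.Wait()
}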

2. Scheduling Decision Points

The scheduler runs at these points:
  • Function calls: the stack-growth check in the function prologue (the pre-Go 1.14 cooperative preemption point)
  • Blocking system calls: I/O operations
  • Channel operations: Send/receive on channels
  • time.Sleep(): Explicit sleep
  • Mutex locks: Lock contention
  • Garbage collection: GC needs to run
  • go statement: Creating new goroutine
  • Preemption: Gs running longer than ~10ms are preempted asynchronously via a signal (since Go 1.14)
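
Besides the implicit points above, a goroutine can yield explicitly with runtime.Gosched(). A minimal sketch pinned to a single P so the hand-off is easy to follow:

go
package main

import (
    "fmt"
    "runtime"
)

func main() {
    runtime.GOMAXPROCS(1) // one P makes the interleaving easy to observe

    done := make(chan struct{})
    go func() {
        fmt.Println("background goroutine ran")
        close(done)
    }()

    // Gosched puts the current G back on the run queue and lets the
    // scheduler pick another runnable G on this P.
    runtime.Gosched()
    <-done
    fmt.Println("main done")
}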

3. Work Stealing

When a P runs out of work:
code
P0 (idle)          P1 (busy)
┌──────┐          ┌──────┐
│ LRQ  │          │ LRQ  │
│ []   │  ◄───────│[G G] │  Steal half
└──────┘   steal  │[G G] │
                  │[G G] │
                  └──────┘

After stealing:
P0                 P1
┌──────┐          ┌──────┐
│ LRQ  │          │ LRQ  │
│[G G] │          │[G G] │
│[G]   │          │[G]   │
└──────┘          └──────┘
Work Stealing Algorithm:
  1. Check local run queue
  2. If empty, check global run queue
  3. If still empty, steal from other P's local queue
  4. If still empty, check network poller
  5. If still empty, M goes to sleep
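
Work stealing happens entirely inside the runtime, but you can watch the queues with the scheduler trace: running any binary with GODEBUG=schedtrace=1000 makes the runtime print a summary every second, including the GRQ length and each P's LRQ length. A sketch that generates enough load for stealing to show up (the binary name is a placeholder):

go
// Run with the scheduler trace enabled, e.g.:
//   GODEBUG=schedtrace=1000 ./yourbinary
package main

import "time"

// spin keeps a goroutine CPU-bound (runnable) for roughly d.
func spin(d time.Duration) {
    deadline := time.Now().Add(d)
    for time.Now().Before(deadline) {
    }
}

func main() {
    // Flood the scheduler with CPU-bound goroutines so local run queues
    // fill up, spill into the GRQ, and idle Ps have work to steal.
    for i := 0; i < 1000; i++ {
        go spin(50 * time.Millisecond)
    }
    time.Sleep(5 * time.Second)
}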

Detailed Scheduler Flow

code
┌─────────────────────────────────────────────┐
│ 1. P has work in local queue                │
│    → Pick next G from LRQ                   │
│    → Bind G to M                            │
│    → Execute G                              │
└─────────────────────────────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────────┐
│ 2. G runs until it:                         │
│    a) Completes (exits)                     │
│    b) Blocks (I/O, channel, sleep)          │
│    c) Gets preempted (~10ms)                │
└─────────────────────────────────────────────┘
                    │
        ┌───────────┴───────────┐
        ▼                       ▼
┌──────────────┐        ┌──────────────┐
│ 3a. Complete │        │ 3b. Blocked  │
│  → G dies    │        │  → Park G    │
│  → Pick next │        │  → Put in    │
│     G        │        │     waiting  │
└──────────────┘        │     state    │
                        └──────────────┘
                               │
                               ▼
                        ┌──────────────┐
                        │ 4. Event     │
                        │    Ready     │
                        │  → Unpark G  │
                        │  → Add to    │
                        │     runnable │
                        │     queue    │
                        └──────────────┘
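
Channel operations are the most common park/unpark path in steps 3b and 4: a G receiving from an empty channel is parked in a waiting state and made runnable again when a sender arrives. A minimal sketch (the sleeps are only there to make the ordering obvious):

go
package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int) // unbuffered

    go func() {
        // This G parks on the receive; while parked it holds no M or P.
        v := <-ch
        fmt.Println("received", v)
    }()

    time.Sleep(100 * time.Millisecond) // let the receiver park
    ch <- 42                           // the send makes the parked G runnable again
    time.Sleep(100 * time.Millisecond) // give it time to print before main exits
}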

Example: Blocking System Call

go
package main

import (
    "fmt"
    "os"
)

func main() {
    done := make(chan struct{})
    go func() {
        data, _ := os.ReadFile("large.txt") // Blocking I/O
        fmt.Println(len(data))
        close(done)
    }()

    // Main continues while the file read blocks
    fmt.Println("Main running")
    <-done // wait so the goroutine actually gets to run before main exits
}
What Happens:
code
Before I/O:
M0 ← P0 ← G1 (running ReadFile)

During I/O (G1 blocks):
M0 detaches from P0 and stays blocked in the syscall
P0 is handed to another M (M1, reused from the idle pool or newly created)
M1 ← P0 ← G2 (picks the next runnable goroutine)

After I/O completes:
M0 wakes up and tries to re-acquire a P (its old one or any idle P)
If it succeeds, it keeps running G1
If not, G1 is placed on the global run queue and M0 goes idle
Key Point: the P never blocks. When its M gets stuck in a syscall, the P is handed to another M and keeps scheduling goroutines.
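
File reads like the one above go through this blocking-syscall hand-off, but network I/O takes a different path: a G blocked on a socket read is parked in the netpoller (built on epoll/kqueue/IOCP), so no M is tied up at all. A small illustrative sketch (addresses and counts are arbitrary):

go
package main

import (
    "fmt"
    "net"
    "runtime"
    "time"
)

func main() {
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        panic(err)
    }
    defer ln.Close()

    // Each accepted connection gets a goroutine that blocks in Read.
    // Those Gs wait in the netpoller; they do not occupy an M while parked.
    go func() {
        for {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            go func(c net.Conn) {
                buf := make([]byte, 1)
                c.Read(buf) // parks until data arrives or the peer closes
                c.Close()
            }(conn)
        }
    }()

    // Open a batch of idle client connections.
    conns := make([]net.Conn, 0, 200)
    for i := 0; i < 200; i++ {
        c, err := net.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        conns = append(conns, c)
    }

    time.Sleep(200 * time.Millisecond)
    // Hundreds of goroutines are blocked on reads, yet only a handful
    // of OS threads are needed to host them.
    fmt.Println("goroutines:", runtime.NumGoroutine())

    for _, c := range conns {
        c.Close()
    }
}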
