Design of the Go Runtime Scheduler

Concurrency is one of Go's standout features and sets it apart from many other languages. Concurrency is about dealing with multiple things at once, while parallelism is about doing multiple things at once.

Go supports creating thousands of goroutines at a time. Ever wondered how that's possible? It's because of the Go runtime. The runtime manages scheduling, garbage collection, and a plethora of other things; here we'll focus primarily on the scheduler.
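To see how cheap goroutines are, here is a minimal sketch that launches ten thousand of them (the `spawn` helper is just for illustration, not part of any real API). Each goroutine starts with a small, runtime-managed stack, which is what makes this scale:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn launches n goroutines that each increment a shared counter,
// then waits for all of them to finish.
func spawn(n int) int64 {
	var wg sync.WaitGroup
	var counter int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1)
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	// 10,000 goroutines would be far too many OS threads,
	// but the runtime multiplexes them onto a handful of Ms.
	fmt.Println(spawn(10000))
}
```

Creating 10,000 OS threads would exhaust memory on many systems; the runtime makes this a non-event.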

Let's dive deeper. An application's user threads, which it creates and manages itself, ultimately need system threads to run on; context switching is faster for user threads than for system threads. The simplest model is an n:1 mapping, where n user threads share one system thread. The problem: if a user thread blocks, the system thread blocks with it, so the remaining user threads cannot run and must wait. Go resolves this with an m:n mapping, where goroutines are multiplexed onto a set of system (OS) threads. There are three major entities in the Go scheduler: machine threads (M), goroutines (G), and processors (P). There are also minor entities such as the global and local run queues and the thread cache.

M is a system thread that runs goroutines, and P is the logical entity that determines how many Gs can run at once. The number of Ps is decided up front (GOMAXPROCS) and normally stays fixed for the run. Suppose, for example, we have two Ps (p1 and p2) and three Ms. Each P has a local run queue; newly created goroutines are added to it, and if the local run queue is already full they are added to the global run queue instead. What happens when a G makes a system call? The scheduler knows that G is blocked, and hence its M is blocked too, so the P it held is released and can be used by some other M. The scheduler makes sure all Ps keep running, which is why there can be more Ms than Ps, even with P = 1: a worker might be stuck in a system call.

There is one more situation: when a P has no more Gs in its local run queue and the global run queue is empty as well, it steals Gs from other Ps. This work stealing gives better resource utilization and lower migration of Gs; as long as a P is busy, there is no G movement.

To know more, refer to the link below :)

http://www.cs.columbia.edu/~aho/cs6998/reports/12-12-11_DeshpandeSponslerWeiss_GO.pdf
