Concurrency in Go - Goroutines, Channels, and Patterns That Actually Matter
Posted on Tue 24 March 2026 by Sanyam Khurana in Programming
"Don't communicate by sharing memory; share memory by communicating." That's the Go proverb that took me a while to truly understand. When I came from a Python/Django background, concurrency meant threads, locks, and race conditions giving me nightmares. Go changed that for me. Not because concurrency is easy (it's not), but because Go gives you primitives that make it far more manageable.
Let's dig into how Go handles concurrency - from the basics of goroutines and channels to practical patterns you'll actually use in production code.
Goroutines - Lightweight Threads
A goroutine is Go's unit of concurrency. It's like a thread but much lighter - a goroutine starts with just a few kilobytes of stack (which grows as needed), while an OS thread typically takes 1-2 MB. You can easily run hundreds of thousands of goroutines on a single machine.
Starting a goroutine is as simple as putting go before a function call:
func main() {
    go sayHello()
    time.Sleep(time.Second) // Wait for goroutine to finish
}

func sayHello() {
    fmt.Println("Hello from a goroutine!")
}
But wait - we're using time.Sleep to wait? That's terrible. We have no idea how long the goroutine will take. This is where synchronization comes in.
WaitGroups - Waiting for Goroutines to Finish
The simplest way to wait for goroutines is sync.WaitGroup:
func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            fmt.Printf("Worker %d done\n", id)
        }(i)
    }
    wg.Wait() // Blocks until all goroutines call Done()
    fmt.Println("All workers finished")
}
Three methods: Add(n) to increase the counter by n, Done() to decrement it, and Wait() to block until it hits zero. Simple and effective.
One thing that trips people up: always pass loop variables as function parameters to the goroutine closure. If you capture i directly, all goroutines share the same variable and you'll get unexpected results. Go 1.22 changed loop semantics so each iteration gets a fresh variable, but being explicit is still a good habit.
Channels - Go's Communication Primitive
Channels are how goroutines talk to each other. Think of a channel as a typed pipe - you push data in on one end and pull it out on the other.
// Create a channel of strings
ch := make(chan string)

// Send data into the channel (in a goroutine)
go func() {
    ch <- "hello from the other side"
}()

// Receive data from the channel
msg := <-ch
fmt.Println(msg)
The send (ch <- value) blocks until someone is ready to receive, and the receive (<-ch) blocks until someone sends. This is called an unbuffered channel - it forces synchronization.
Buffered Channels
You can create channels with a buffer:
ch := make(chan string, 3) // Buffer size of 3
Sends don't block until the buffer is full. Receives don't block until the buffer is empty. Buffered channels are useful when the sender and receiver run at different speeds and you want to decouple them.
A practical example - buffered channels work great as a semaphore to limit concurrency:
func processURLs(urls []string) {
    sem := make(chan struct{}, 10) // Max 10 concurrent requests
    var wg sync.WaitGroup
    for _, url := range urls {
        wg.Add(1)
        sem <- struct{}{} // Acquire slot (blocks if 10 are already running)
        go func(u string) {
            defer wg.Done()
            defer func() { <-sem }() // Release slot
            fetchURL(u)
        }(url)
    }
    wg.Wait()
}
Directional Channels
You can restrict a channel to be send-only or receive-only in function signatures:
// This function can only send to the channel
func producer(ch chan<- int) {
    for i := 0; i < 10; i++ {
        ch <- i
    }
    close(ch)
}

// This function can only receive from the channel
func consumer(ch <-chan int) {
    for val := range ch {
        fmt.Println(val)
    }
}
This is a compile-time safety net. If consumer accidentally tries to send on ch, the compiler will catch it.
Select - Multiplexing Channels
select lets you wait on multiple channel operations simultaneously. It's like a switch for channels:
select {
case msg := <-ch1:
    fmt.Println("Received from ch1:", msg)
case msg := <-ch2:
    fmt.Println("Received from ch2:", msg)
case ch3 <- "hello":
    fmt.Println("Sent to ch3")
case <-time.After(5 * time.Second):
    fmt.Println("Timed out")
}
If multiple cases are ready, Go picks one at random. If none are ready, it blocks (unless there's a default case).
The time.After pattern is incredibly useful for timeouts. But in production, you should use context-based cancellation instead (see our previous post on context).
Patterns That You'll Actually Use
Fan-Out, Fan-In
This is the most common concurrency pattern. You have work that can be parallelized, so you spread it across multiple goroutines (fan-out) and collect the results (fan-in):
func fanOutFanIn(urls []string) []Result {
    results := make(chan Result, len(urls))

    // Fan-out: spawn a goroutine per URL
    for _, url := range urls {
        go func(u string) {
            results <- fetch(u)
        }(url)
    }

    // Fan-in: collect all results
    var collected []Result
    for range urls {
        collected = append(collected, <-results)
    }
    return collected
}
Worker Pool
When you have many tasks but want to limit parallelism (to avoid overwhelming a database or API), use a worker pool:
func workerPool(jobs []Job, numWorkers int) []Result {
    jobsCh := make(chan Job, len(jobs))
    resultsCh := make(chan Result, len(jobs))

    // Start workers
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range jobsCh {
                resultsCh <- process(job)
            }
        }()
    }

    // Send jobs
    for _, job := range jobs {
        jobsCh <- job
    }
    close(jobsCh) // Signal workers that no more jobs are coming

    // Wait for all workers to finish, then close results
    go func() {
        wg.Wait()
        close(resultsCh)
    }()

    // Collect results
    var results []Result
    for result := range resultsCh {
        results = append(results, result)
    }
    return results
}
The key insight: closing jobsCh causes all workers to exit their range loop. Then the WaitGroup ensures all workers finish before we close resultsCh.
Pipeline
Pipelines chain stages together, where each stage is a goroutine that reads from one channel and writes to another:
func generateNumbers(ctx context.Context) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for i := 0; ; i++ {
            select {
            case <-ctx.Done():
                return
            case out <- i:
            }
        }
    }()
    return out
}

func doubleNumbers(ctx context.Context, in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            select {
            case <-ctx.Done():
                return
            case out <- n * 2:
            }
        }
    }()
    return out
}

func filterEven(ctx context.Context, in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            if n%4 == 0 { // After doubling, divisible by 4 means originally even
                select {
                case <-ctx.Done():
                    return
                case out <- n:
                }
            }
        }
    }()
    return out
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // Chain the stages
    numbers := generateNumbers(ctx)
    doubled := doubleNumbers(ctx, numbers)
    filtered := filterEven(ctx, doubled)

    // Take first 10 results
    for i := 0; i < 10; i++ {
        fmt.Println(<-filtered)
    }
    cancel() // Stop the pipeline
}
Each stage runs independently as a goroutine. Data flows through channels. Context handles cleanup. This is composable and elegant.
errgroup - Fan-Out with Error Handling
The standard sync.WaitGroup doesn't handle errors. The golang.org/x/sync/errgroup package fixes this:
import "golang.org/x/sync/errgroup"

func fetchAll(ctx context.Context, urls []string) ([]Response, error) {
    g, ctx := errgroup.WithContext(ctx)
    responses := make([]Response, len(urls))

    for i, url := range urls {
        i, url := i, url // Capture loop variables (not needed in Go 1.22+)
        g.Go(func() error {
            resp, err := fetch(ctx, url)
            if err != nil {
                return err // Cancels ctx, signalling the other goroutines
            }
            responses[i] = resp
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        return nil, err
    }
    return responses, nil
}
When any goroutine returns an error, the context is cancelled, which signals the other goroutines to stop. g.Wait() returns the first error. This is the go-to pattern for parallel operations where any failure should abort everything.
Mutexes - When Channels Don't Fit
Channels are great for communication, but sometimes you just need to protect shared state. That's where sync.Mutex comes in:
type SafeCounter struct {
    mu    sync.Mutex
    count map[string]int
}

// NewSafeCounter initializes the map - writing to a nil map panics
func NewSafeCounter() *SafeCounter {
    return &SafeCounter{count: make(map[string]int)}
}

func (c *SafeCounter) Increment(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count[key]++
}

func (c *SafeCounter) Get(key string) int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count[key]
}
If you have multiple readers and few writers, sync.RWMutex is more efficient:
type SafeCache struct {
    mu   sync.RWMutex
    data map[string]string
}

func NewSafeCache() *SafeCache {
    return &SafeCache{data: make(map[string]string)}
}

func (c *SafeCache) Get(key string) (string, bool) {
    c.mu.RLock() // Multiple goroutines can read simultaneously
    defer c.mu.RUnlock()
    val, ok := c.data[key]
    return val, ok
}

func (c *SafeCache) Set(key, value string) {
    c.mu.Lock() // Only one goroutine can write
    defer c.mu.Unlock()
    c.data[key] = value
}
When to Use Channels vs. Mutexes?
A simple rule:
- Channels: when you're passing data or ownership between goroutines
- Mutexes: when you're protecting access to shared state
If multiple goroutines need to read and write a map, use a mutex. If one goroutine produces data that another consumes, use a channel. Don't force channels where a simple mutex would do - it'll just make the code harder to follow.
Race Detector - Your Best Friend
Go has a built-in race detector. Use it. Always.
go test -race ./...
go run -race main.go
It instruments your code to detect concurrent access to shared variables without proper synchronization. It has some runtime overhead, so you wouldn't run it in production, but it should be part of every CI pipeline.
If the race detector finds something, fix it immediately. Race conditions are the kind of bugs that work fine 99% of the time and then corrupt your data at 3 AM on a Saturday.
Common Mistakes
Goroutine Leaks
If a goroutine is blocked on a channel that nobody will ever send to (or receive from), it leaks. It sits there forever, consuming memory:
// Bug: if timeout fires, the goroutine sending to ch is leaked
func doWork() string {
    ch := make(chan string)
    go func() {
        result := expensiveWork()
        ch <- result // Blocked forever if nobody reads ch
    }()
    select {
    case result := <-ch:
        return result
    case <-time.After(5 * time.Second):
        return "timeout"
    }
}

// Fix: use a buffered channel
func doWork() string {
    ch := make(chan string, 1) // Buffer of 1
    go func() {
        result := expensiveWork()
        ch <- result // Won't block even if nobody reads
    }()
    select {
    case result := <-ch:
        return result
    case <-time.After(5 * time.Second):
        return "timeout"
    }
}
Closing a Channel Multiple Times
Closing an already-closed channel panics. Only the sender should close a channel, and only when there's a single sender:
// Bad: multiple goroutines closing the same channel - the second close panics
// Good: have a single owner close it, or guard the close with sync.Once
var once sync.Once
once.Do(func() { close(ch) }) // safe even if called from multiple goroutines
Not Using Context for Cancellation
If your goroutines don't check for context cancellation, they'll keep running even when the result is no longer needed. Always wire your goroutines to the context - we covered this in detail in the context post.
Summary
Go's concurrency model is built on a few simple primitives:
- Goroutines for lightweight concurrent execution
- Channels for communication between goroutines
- Select for multiplexing channel operations
- WaitGroups and errgroup for synchronization
- Mutexes for protecting shared state
The patterns that matter in production: worker pools for bounded concurrency, fan-out/fan-in for parallel work, pipelines for data processing, and errgroup for parallel operations with error handling.
And the tools to keep you sane: the race detector and context-based cancellation.
Concurrency is one of Go's biggest strengths, but it's also where the most subtle bugs hide. Take time to understand these patterns, use the race detector religiously, and always think about goroutine lifecycle. Who starts it? Who stops it? What happens if it fails? Answer these questions for every goroutine you spawn, and you'll avoid most concurrency headaches.
If you have any questions about concurrency in Go, let us know in the comments section below.