
Concurrency

Concurrency is one of Rust's greatest strengths. The ownership system that prevents memory safety bugs also prevents data races, making concurrent programming significantly safer than in most other languages. This section will teach you to write fast, safe concurrent code.

What You'll Learn

This section covers Rust's comprehensive concurrency features:

Parallel Programming

  • Threads - Create and manage OS threads for parallel execution
  • Message Passing - Communicate between threads using channels
  • Shared State - Safely share data between threads with mutexes and atomic types
  • Async Programming - Write efficient asynchronous code with futures and async/await

Why Rust Excels at Concurrency

Rust's ownership system provides unique advantages for concurrent programming:

  • Compile-Time Safety - Data races are impossible in safe Rust code
  • Zero-Cost Abstractions - High-level concurrency primitives with minimal overhead
  • Fearless Concurrency - Write parallel code without fear of common bugs
  • Ecosystem Support - Rich ecosystem of async runtimes and concurrent data structures
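The "data races are impossible" claim above is enforced by the `Send` and `Sync` traits. A minimal sketch (the function name is illustrative): `Rc` is not `Send`, so the compiler rejects moving it into a thread, while `Arc` compiles and runs:

```rust
use std::sync::Arc;
use std::thread;

// `Rc` is not `Send`, so the compiler rejects moving one into a thread:
//
//     let local = std::rc::Rc::new(42);
//     thread::spawn(move || println!("{}", local));
//     // error: `Rc<i32>` cannot be sent between threads safely
//
// `Arc` is `Send + Sync`, so the same pattern compiles and runs.
fn double_across_thread(n: i32) -> i32 {
    let shared = Arc::new(n);
    thread::spawn(move || *shared * 2).join().unwrap()
}

fn main() {
    assert_eq!(double_across_thread(21), 42);
    println!("compile-time thread safety ok");
}
```

The key design point: thread safety is a property of types, checked at the call to `thread::spawn`, not something you verify by testing.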

Learning Path

Build your concurrency skills progressively:

  1. Threads - Basic thread creation and management
  2. Message Passing - Communication via channels
  3. Shared State - Safe data sharing with mutexes and atomics
  4. Async Programming - High-performance asynchronous code

Key Concepts

By the end of this section, you'll understand:

  • How to create and join threads safely
  • Message passing patterns with channels (MPSC)
  • Shared state management with Mutex, RwLock, and atomic types
  • Async/await syntax and futures
  • Different async runtimes (Tokio, async-std)
  • Common concurrency patterns and anti-patterns
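As a preview of the atomic types listed above, a minimal counter sketch (names are illustrative): an `AtomicUsize` avoids a `Mutex` entirely for simple shared counters.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Each thread increments a shared atomic counter; no locks needed.
fn atomic_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(atomic_count(4, 1000), 4000);
    println!("atomic counter ok");
}
```

`Ordering::Relaxed` is sufficient here because only the counter's final value matters; stronger orderings are needed when the counter synchronizes access to other data.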

Concurrency Models

Rust supports multiple concurrency approaches:

use std::sync::{Arc, Mutex, mpsc};
use std::thread;
// The async example (4) requires the external `tokio` crate with the
// "rt-multi-thread", "macros", and "time" features enabled.

// 1. Thread-based parallelism
fn parallel_computation() {
    let handles: Vec<_> = (0..4)
        .map(|i| {
            thread::spawn(move || {
                // Compute something in parallel
                (0..1000).map(|x| x * i).sum::<i32>()
            })
        })
        .collect();
    
    let results: Vec<i32> = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect();
    println!("Partial sums: {:?}", results);
}

// 2. Message passing
fn message_passing_example() {
    let (tx, rx) = mpsc::channel();
    
    thread::spawn(move || {
        for i in 0..10 {
            tx.send(i).unwrap();
        }
    });
    
    for received in rx {
        println!("Received: {}", received);
    }
}

// 3. Shared state
fn shared_state_example() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
}

// 4. Async programming
#[tokio::main]
async fn main() {
    let future1 = async_computation(1);
    let future2 = async_computation(2);
    
    let (result1, result2) = tokio::join!(future1, future2);
    println!("Results: {} and {}", result1, result2);
}

async fn async_computation(n: u32) -> u32 {
    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
    n * n
}

Prerequisites

Before diving into concurrency, you should be comfortable with:

  • Ownership and borrowing - Essential for understanding thread safety
  • Smart pointers - Arc, Rc, and Box are crucial for concurrent code
  • Error handling - Concurrent code often needs robust error management

Review those sections first if you need a refresher.

Concurrency Patterns

You'll learn proven patterns for:

  • Producer-Consumer - Using channels to coordinate work
  • Worker Pools - Distributing tasks across multiple threads
  • Fan-out/Fan-in - Parallel processing with result aggregation
  • Pipeline Processing - Streaming data through multiple stages
  • Rate Limiting - Controlling resource usage and preventing overload
  • Circuit Breakers - Handling failures in distributed systems
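The first two patterns combine naturally. A minimal worker-pool sketch (names are illustrative): jobs flow down an mpsc channel, and the single `Receiver` is shared among workers behind an `Arc<Mutex<..>>` so each job goes to exactly one worker.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Sum of squares computed by a small pool of worker threads.
fn pool_sum(jobs: Vec<u64>, workers: usize) -> u64 {
    let (tx, rx) = mpsc::channel::<u64>();
    let rx = Arc::new(Mutex::new(rx));
    let total = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let total = Arc::clone(&total);
            thread::spawn(move || loop {
                // The receiver lock is released as soon as one job is
                // taken, before the job is processed.
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(n) => *total.lock().unwrap() += n * n,
                    Err(_) => break, // channel closed: no more work
                }
            })
        })
        .collect();

    for j in jobs {
        tx.send(j).unwrap();
    }
    drop(tx); // close the channel so workers exit their loops

    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    // 1² + 2² + 3² + 4² = 30
    assert_eq!(pool_sum(vec![1, 2, 3, 4], 3), 30);
    println!("worker pool ok");
}
```

Dropping the sender is what terminates the pool: once the channel closes, each worker's `recv` returns `Err` and the loop exits.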

Performance Considerations

Concurrency isn't automatically faster; match the model to the workload:

  • CPU-bound tasks - Use threads equal to CPU cores
  • I/O-bound tasks - Async programming shines here
  • Overhead costs - Thread creation and context switching aren't free
  • Cache efficiency - False sharing can hurt performance
  • Lock contention - Too much shared state can serialize execution
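For the CPU-bound case above, a small sketch of sizing a pool at runtime rather than hard-coding a core count (the function name is illustrative). `std::thread::available_parallelism` returns an `Err` on platforms where the count is unknown, so a fallback is needed:

```rust
use std::thread;

// Query the available hardware parallelism, falling back to 1.
fn suggested_workers() -> usize {
    thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1)
}

fn main() {
    let n = suggested_workers();
    assert!(n >= 1);
    println!("spawning {} workers for CPU-bound work", n);
}
```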

Common Pitfalls

Learn to avoid common concurrency mistakes:

  • Deadlocks - Circular waiting for locks
  • Race conditions - Non-deterministic behavior (Rust prevents data races but not all race conditions)
  • Starvation - Some threads never get to run
  • Thundering herd - All threads waking up at once
  • Over-parallelization - Creating too many threads
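The classic deadlock recipe is two threads acquiring two locks in opposite orders, each waiting forever on the other. A minimal sketch of the standard defense (names are illustrative): every thread acquires the locks in one global order.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Safe because every caller locks `a` before `b`; if another code path
// locked `b` first, two threads could deadlock waiting on each other.
fn bump_both(a: &Mutex<i32>, b: &Mutex<i32>) {
    let mut x = a.lock().unwrap(); // always first
    let mut y = b.lock().unwrap(); // always second
    *x += 1;
    *y += 1;
}

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let (a, b) = (Arc::clone(&a), Arc::clone(&b));
            thread::spawn(move || bump_both(&a, &b))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*a.lock().unwrap(), 8);
    assert_eq!(*b.lock().unwrap(), 8);
    println!("lock ordering ok");
}
```

Note that this is exactly the kind of bug safe Rust does not prevent: the deadlocking version compiles cleanly, so lock-ordering discipline remains your responsibility.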

Real-World Applications

You'll learn to build:

  • Web servers - Handle many concurrent requests
  • Data processing pipelines - Transform large datasets in parallel
  • Background job processors - Execute tasks asynchronously
  • Real-time systems - Process events as they arrive
  • Distributed systems - Coordinate across multiple machines

What Comes Next

After mastering concurrency, you'll be ready for:

  • Systems Programming - Low-level system interaction
  • Web Development - Building high-performance web applications
  • Advanced Topics - Lock-free programming and custom async executors

The Rust Advantage

Rust's approach to concurrency is unique:

  • "Fearless Concurrency" - The compiler prevents most concurrent bugs
  • Send and Sync traits - Compiler-checked thread safety
  • No data races - Impossible in safe Rust code
  • Performance - Zero-cost abstractions for high-level primitives
  • Ecosystem - Rich libraries for every concurrency need

Ready to write safe, fast concurrent code? Start with Threads!