Memory Management
Rust's memory management is one of its most distinctive features. Instead of relying on a garbage collector or on manual allocation and deallocation, Rust uses ownership and borrowing rules, enforced at compile time, to determine exactly when memory is freed, providing both safety and performance.
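A minimal sketch of what this looks like in practice: each value has a single owner, the value is freed when its owner goes out of scope, and the compiler rejects any use of a value after ownership has moved.
fn main() {
    let s = String::from("hello"); // s owns the heap allocation
    let t = s;                     // ownership moves from s to t
    // println!("{}", s);          // would not compile: s was moved
    println!("{}", t);
}                                  // t goes out of scope here and the memory is freed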
Stack vs Heap
Understanding where data is stored is crucial for effective Rust programming.
The Stack
The stack stores values whose size is known and fixed at compile time:
fn main() {
let x = 5; // i32 stored on stack
let y = true; // bool stored on stack
let z = 'a'; // char stored on stack
let array = [1, 2, 3, 4, 5]; // Array stored on stack
println!("Stack values: {}, {}, {}, {:?}", x, y, z, array);
} // All stack values automatically cleaned up when function ends
Stack characteristics:
- Very fast allocation and deallocation
- Memory is automatically managed
- Limited in size (typically 1-8 MB per thread)
- LIFO (Last In, First Out) structure
- Values have known size at compile time (see the sketch below)
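The last point is enforced directly by the compiler: an array length must be a compile-time constant, so a size only known at runtime has to go through a heap-allocated type such as Vec. A small sketch:
fn stack_sizes_are_fixed(runtime_len: usize) {
    const LEN: usize = 5;
    let on_stack = [0u8; LEN];            // fine: length is a compile-time constant
    // let on_stack = [0u8; runtime_len]; // does not compile: length unknown at compile time
    let on_heap = vec![0u8; runtime_len]; // runtime-sized data goes on the heap
    println!("{} stack bytes, {} heap bytes", on_stack.len(), on_heap.len());
}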
The Heap
The heap stores data whose size is not known at compile time, or that may need to grow at runtime:
fn main() {
let s = String::from("hello"); // String data stored on heap
let v = vec![1, 2, 3, 4, 5]; // Vector data stored on heap
let boxed = Box::new(42); // Boxed value stored on heap
println!("Heap values: {}, {:?}, {}", s, v, boxed);
} // Heap memory automatically cleaned up when owners go out of scope
Heap characteristics:
- Slower allocation and deallocation than stack
- Can grow at runtime (see the sketch after this list)
- Larger capacity (limited by system memory)
- Values can have unknown size at compile time
- Requires pointer indirection to access
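Growth at runtime is easy to observe: pushing into a Vec past its current capacity allocates a larger heap buffer and moves the elements over, which shows up as jumps in capacity():
fn heap_growth() {
    let mut v: Vec<i32> = Vec::new();
    let mut last_capacity = v.capacity();
    for i in 0..20 {
        v.push(i);
        if v.capacity() != last_capacity {
            println!("len {} -> new capacity {}", v.len(), v.capacity());
            last_capacity = v.capacity();
        }
    }
}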
Memory Layout Example
fn main() {
// Stack allocated
let number = 42; // 4 bytes on stack
let array = [1, 2, 3]; // 12 bytes on stack
// Heap allocated with stack pointers
let string = String::from("hello"); // Pointer, length, capacity on stack
// Actual string data on heap
let vector = vec![1, 2, 3, 4, 5]; // Pointer, length, capacity on stack
// Actual element data on heap
println!("String: {} (len: {}, capacity: {})",
string, string.len(), string.capacity());
println!("Vector: {:?} (len: {}, capacity: {})",
vector, vector.len(), vector.capacity());
}
RAII (Resource Acquisition Is Initialization)
Rust follows the RAII principle: resources are tied to object lifetimes.
Automatic Cleanup
fn demonstrate_raii() {
println!("Function start");
{
let s = String::from("hello"); // Memory allocated
let v = vec![1, 2, 3]; // Memory allocated
println!("Inside inner scope: {}, {:?}", s, v);
// Both s and v are automatically dropped here
// Memory is freed without explicit calls
}
println!("After inner scope - memory cleaned up");
}
fn main() {
demonstrate_raii();
}
Custom Drop Implementation
You can implement custom cleanup logic:
struct FileHandler {
filename: String,
}
impl FileHandler {
fn new(filename: String) -> Self {
println!("Opening file: {}", filename);
FileHandler { filename }
}
}
impl Drop for FileHandler {
fn drop(&mut self) {
println!("Closing file: {}", self.filename);
// Custom cleanup logic here
}
}
fn main() {
{
let _file = FileHandler::new("data.txt".to_string());
println!("File is open");
} // Drop is called automatically here
println!("File has been closed");
}
Memory Allocation Patterns
Stack Allocation Patterns
fn stack_examples() {
// Simple values
let x = 42;
let flag = true;
// Fixed-size arrays
let numbers = [1, 2, 3, 4, 5];
// Tuples
let point = (10, 20);
// Structs with Copy types
#[derive(Copy, Clone)]
struct Point { x: i32, y: i32 }
let p1 = Point { x: 1, y: 2 };
let p2 = p1; // Copied, not moved
println!("Both points available: {:?}, {:?}",
(p1.x, p1.y), (p2.x, p2.y));
}
Heap Allocation Patterns
fn heap_examples() {
// Dynamic strings
let mut s = String::new();
s.push_str("Hello");
s.push_str(", world!");
// Dynamic arrays
let mut v = Vec::new();
v.push(1);
v.push(2);
v.push(3);
// Boxed values
let boxed_num = Box::new(42);
let boxed_array = Box::new([1, 2, 3, 4, 5]);
// Heap-allocated structs
struct LargeStruct {
data: [i32; 1000],
}
let large = Box::new(LargeStruct { data: [0; 1000] });
println!("String: {}, Vector: {:?}", s, v);
println!("Boxed: {}, Large struct size: {}",
boxed_num, std::mem::size_of::<LargeStruct>());
}
Smart Pointers
Smart pointers are types that own the data they point to and add capabilities beyond plain references, such as heap allocation (Box), shared ownership (Rc), and thread-safe shared ownership (Arc).
Box<T> - Heap Allocation
fn box_examples() {
// Simple heap allocation
let boxed_int = Box::new(5);
println!("Boxed integer: {}", boxed_int);
// Recursive data structures
#[derive(Debug)]
enum List {
Cons(i32, Box<List>),
Nil,
}
// Note: a plain `use List::{Cons, Nil};` import generally fails to resolve for an
// enum declared inside a function body, so the variants are written with full paths
let list = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Cons(3, Box::new(List::Nil))))));
println!("Linked list: {:?}", list);
// Large data on heap
// (note: Box::new builds its argument on the stack before moving it to the heap,
// so very large arrays like this can overflow the stack in debug builds;
// vec![0; 1_000_000] avoids that)
let large_array = Box::new([0; 1_000_000]);
println!("Large array allocated on heap, size: {}",
large_array.len());
}
Rc<T> - Reference Counted
use std::rc::Rc;
fn rc_examples() {
let data = Rc::new(String::from("shared data"));
let _reference1 = Rc::clone(&data);
let _reference2 = Rc::clone(&data);
println!("Reference count: {}", Rc::strong_count(&data));
println!("Data: {}", data);
{
let _reference3 = Rc::clone(&data);
println!("Reference count in inner scope: {}", Rc::strong_count(&data));
} // _reference3 dropped here
println!("Reference count after scope: {}", Rc::strong_count(&data));
}
Arc<T> - Atomic Reference Counted
use std::sync::Arc;
use std::thread;
fn arc_examples() {
let data = Arc::new(vec![1, 2, 3, 4, 5]);
let mut handles = vec![];
for i in 0..3 {
let data_clone = Arc::clone(&data);
let handle = thread::spawn(move || {
println!("Thread {}: {:?}", i, data_clone);
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Original data: {:?}", data);
}
Memory Layout and Optimization
Understanding Memory Layout
use std::mem;
fn memory_layout_examples() {
// Size of different types
println!("Size of i32: {} bytes", mem::size_of::<i32>());
println!("Size of i64: {} bytes", mem::size_of::<i64>());
println!("Size of bool: {} bytes", mem::size_of::<bool>());
println!("Size of char: {} bytes", mem::size_of::<char>());
// Pointer sizes
println!("Size of &i32: {} bytes", mem::size_of::<&i32>());
println!("Size of Box<i32>: {} bytes", mem::size_of::<Box<i32>>());
// String and Vec sizes
println!("Size of String: {} bytes", mem::size_of::<String>());
println!("Size of Vec<i32>: {} bytes", mem::size_of::<Vec<i32>>());
// Struct layout
#[repr(C)]
struct Point {
x: f64,
y: f64,
}
println!("Size of Point: {} bytes", mem::size_of::<Point>());
println!("Alignment of Point: {} bytes", mem::align_of::<Point>());
}
Struct Padding and Alignment
use std::mem;
fn alignment_examples() {
// Struct with padding: #[repr(C)] keeps the fields in declaration order
// (the default Rust representation may reorder fields, which would hide
// the padding this example is meant to demonstrate)
#[repr(C)]
struct Unoptimized {
a: u8, // 1 byte, followed by 7 bytes of padding so b is 8-byte aligned
b: u64, // 8 bytes
c: u8, // 1 byte, followed by 7 bytes of tail padding
}
// Ordering fields from largest to smallest minimizes padding
#[repr(C)]
struct Optimized {
b: u64, // 8 bytes
a: u8, // 1 byte
c: u8, // 1 byte, followed by 6 bytes of tail padding
}
println!("Unoptimized struct size: {} bytes", mem::size_of::<Unoptimized>()); // 24
println!("Optimized struct size: {} bytes", mem::size_of::<Optimized>()); // 16
// repr(packed) removes all padding, but taking a reference to a misaligned
// field of a packed struct is not allowed, so use it sparingly
#[repr(packed)]
struct Packed {
a: u8,
b: u64,
c: u8,
}
println!("Packed struct size: {} bytes", mem::size_of::<Packed>());
}
Performance Considerations
Avoiding Allocations
fn performance_examples() {
// Prefer stack allocation when possible
let numbers = [1, 2, 3, 4, 5]; // Stack allocated
// let numbers = vec![1, 2, 3, 4, 5]; // Heap allocated
// Reuse allocations: write! appends into the existing buffer, while
// format! would allocate a brand-new String on every iteration
use std::fmt::Write;
let mut buffer = String::with_capacity(100);
for i in 0..10 {
buffer.clear(); // Reuse existing capacity
write!(buffer, "Number: {}", i).unwrap();
println!("{}", buffer);
}
// Use string slices instead of owned strings when possible
fn process_text(text: &str) { // Accepts &str; a String argument coerces via deref
println!("Processing: {}", text);
}
let owned = String::from("hello");
let literal = "world";
process_text(&owned); // Works with String
process_text(literal); // Works with &str
}
Memory Pool Patterns
struct ObjectPool<T> {
objects: Vec<T>,
}
impl<T: Default> ObjectPool<T> {
fn new() -> Self {
ObjectPool {
objects: Vec::new(),
}
}
fn get(&mut self) -> T {
self.objects.pop().unwrap_or_default()
}
fn release(&mut self, object: T) {
self.objects.push(object);
}
}
fn pool_example() {
let mut pool = ObjectPool::<Vec<i32>>::new();
// Get a vector from the pool
let mut vec = pool.get();
vec.extend([1, 2, 3, 4, 5]);
println!("Using vector: {:?}", vec);
// Return it to the pool (after clearing)
vec.clear();
pool.release(vec);
}
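Because Vec::clear keeps the vector's capacity, releasing and re-acquiring the object hands back the same heap buffer instead of allocating a new one. A small sketch building on pool_example:
fn pool_reuse_example() {
    let mut pool = ObjectPool::<Vec<i32>>::new();
    let mut v = pool.get();           // pool is empty, so this is a fresh Vec
    v.extend([1, 2, 3, 4, 5]);
    let old_capacity = v.capacity();
    v.clear();                        // length goes to 0, capacity is preserved
    pool.release(v);
    let reused = pool.get();          // the same buffer comes back out
    assert_eq!(reused.capacity(), old_capacity);
    println!("Reused buffer with capacity {}", reused.capacity());
}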
Memory Safety Guarantees
No Null Pointer Dereferences
fn null_safety_examples() {
// Rust doesn't have null pointers
// Use Option<T> for nullable values
let maybe_value: Option<i32> = Some(42);
let no_value: Option<i32> = None;
// Safe handling of potentially null values
match maybe_value {
Some(value) => println!("Got value: {}", value),
None => println!("No value"),
}
// Safe unwrapping with default
let result = no_value.unwrap_or(0);
println!("Result with default: {}", result);
}
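For pointer-like types, Option<T> costs nothing extra: the compiler uses the forbidden null bit pattern to represent None, so Option<Box<T>> and Option<&T> are the same size as the pointer itself:
use std::mem;
fn option_size_examples() {
    assert_eq!(mem::size_of::<Option<Box<i32>>>(), mem::size_of::<Box<i32>>());
    assert_eq!(mem::size_of::<Option<&i32>>(), mem::size_of::<&i32>());
    println!("Option<Box<i32>>: {} bytes", mem::size_of::<Option<Box<i32>>>());
}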
No Use After Free
fn use_after_free_prevention() {
let data = vec![1, 2, 3, 4, 5];
{
let reference = &data;
println!("Data: {:?}", reference);
} // reference goes out of scope
// data is still valid here
println!("Data still accessible: {:?}", data);
// The following would not compile:
// let reference;
// {
// let temp_data = vec![1, 2, 3];
// reference = &temp_data;
// } // temp_data dropped here
// println!("This won't compile: {:?}", reference);
}
No Double Free
fn double_free_prevention() {
let data = Box::new(42);
// Move the box
let moved_data = data;
// The following would not compile:
// drop(data); // Error: use of moved value
println!("Moved data: {}", moved_data);
// moved_data is automatically dropped when it goes out of scope
}
Memory Debugging and Profiling
Using Tools
// For debugging memory issues, you can use:
// 1. Valgrind (Linux/macOS)
// 2. AddressSanitizer (with RUSTFLAGS)
// 3. Miri (Rust's experimental interpreter)
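// For example, Miri runs on a nightly toolchain (commands may change over time;
// check the Miri README for the current setup):
//   rustup +nightly component add miri
//   cargo +nightly miri test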
fn debugging_example() {
// This code is memory-safe by construction
let mut data = Vec::with_capacity(1000);
for i in 0..1000 {
data.push(i);
}
println!("Data length: {}, capacity: {}", data.len(), data.capacity());
// All memory automatically cleaned up
}
Memory Usage Patterns
use std::collections::HashMap;
fn memory_usage_patterns() {
// Pre-allocate when size is known
let mut map = HashMap::with_capacity(100);
for i in 0..100 {
map.insert(i, format!("value_{}", i));
}
println!("Map size: {}", map.len());
// Use iterators to avoid intermediate allocations
let sum: i32 = (0..1000)
.filter(|&x| x % 2 == 0)
.map(|x| x * 2)
.sum();
println!("Sum: {}", sum);
}
Best Practices
1. Prefer Stack Allocation
// Good: Stack allocated
fn process_small_data() {
let data = [1, 2, 3, 4, 5];
for &item in &data {
println!("{}", item);
}
}
// Consider if heap allocation is necessary
fn process_large_or_dynamic_data() {
let data = vec![1, 2, 3, 4, 5]; // Heap allocated when necessary
for &item in &data {
println!("{}", item);
}
}
2. Use Appropriate Smart Pointers
// Use Box<T> for single ownership
let unique_data = Box::new(ExpensiveStruct::new());
// Use Rc<T> for shared ownership (single-threaded)
let shared_data = std::rc::Rc::new(SharedStruct::new());
// Use Arc<T> for shared ownership (multi-threaded)
let thread_safe_data = std::sync::Arc::new(ThreadSafeStruct::new());
3. Minimize Allocations in Hot Paths
fn hot_path_optimization() {
use std::fmt::Write;
// Pre-allocate outside the loop
let mut buffer = String::with_capacity(100);
for i in 0..1000 {
buffer.clear(); // Reuse allocation
write!(buffer, "Item {}", i).unwrap(); // appends in place instead of allocating with format!
// Process buffer...
}
}
4. Use Zero-Copy Techniques
fn zero_copy_example(data: &[u8]) -> &[u8] {
// Return a slice of the input instead of copying
// (slicing panics if data is shorter than 20 bytes)
&data[10..20]
}
fn string_processing(input: &str) -> &str {
// Use string slices instead of allocating new strings
input.trim()
}
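A short usage sketch of the two helpers above:
fn main() {
    let bytes: Vec<u8> = (0..30).collect();
    println!("Window: {:?}", zero_copy_example(&bytes));       // borrows bytes; nothing is copied
    println!("Trimmed: {:?}", string_processing("  hello  ")); // borrows the string literal
}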
Rust's memory management system provides automatic memory safety without garbage-collection overhead. By understanding stack vs heap allocation, RAII, and smart pointer usage, you can write code that is both efficient and free of common memory errors.