x-i18n: generated_at: “2026-03-01T14:41:01Z” model: gemini-3-flash-preview provider: google-gemini-cli source_hash: 9aa8adf23ccf6ceaf7586b9aac35c30769bd13ebfbb8950a185d9b136104f10b source_path: ch16-03-shared-state.md workflow: 16
## Shared-State Concurrency
Message passing is a fine way to handle concurrency, but it’s not the only way. Another method would be for multiple threads to access the same shared data. Consider this part of the slogan from the Go language documentation again: “Do not communicate by sharing memory.”
What would communicating by sharing memory look like? In addition, why would message-passing enthusiasts caution not to use memory sharing?
In a way, channels in any programming language are similar to single ownership, because once you transfer a value down a channel, you should no longer use that value. Shared-memory concurrency is like multiple ownership: multiple threads can access the same memory location at the same time. As you saw in Chapter 15, where smart pointers made multiple ownership possible, multiple ownership can add complexity because these different owners need managing. Rust’s type system and ownership rules greatly assist in getting this management correct. For an example, let’s look at mutexes, one of the more common concurrency primitives for shared memory.
### Controlling Access with Mutexes
*Mutex* is an abbreviation for *mutual exclusion*, as in a mutex allows only one thread to access some data at any given time. To access the data in a mutex, a thread must first signal that it wants access by asking to acquire the mutex’s *lock*. The lock is a data structure that is part of the mutex that keeps track of who currently has exclusive access to the data. Therefore, the mutex is described as *guarding* the data it holds via the locking system.
Mutexes have a reputation for being difficult to use because you have to remember two rules:

- You must attempt to acquire the lock before using the data.
- When you’re done with the data that the mutex guards, you must unlock the data so that other threads can acquire the lock.
For a real-world metaphor for a mutex, imagine a panel discussion at a conference with only one microphone. Before a panelist can speak, they have to ask or signal that they want to use the microphone. When they get the microphone, they can talk for as long as they want to and then hand the microphone to the next panelist who requests to speak. If a panelist forgets to hand the microphone off when they’re finished with it, no one else is able to speak. If management of the shared microphone goes wrong, the panel won’t work as planned!
Management of mutexes can be incredibly tricky to get right, which is why so many people are enthusiastic about channels. However, thanks to Rust’s type system and ownership rules, you can’t get locking and unlocking wrong.
### The API of `Mutex<T>`
As an example of how to use a mutex, let’s start by using a mutex in a single-threaded context, as shown in Listing 16-12.
```rust
{{#rustdoc_include ../listings/ch16-fearless-concurrency/listing-16-12/src/main.rs}}
```
As with many types, we create a `Mutex<T>` using the associated function `new`. To access the data inside the mutex, we use the `lock` method to acquire the lock. This call will block the current thread so that it can’t do any work until it’s our turn to have the lock.
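If the included listing isn’t rendered in your build, the single-threaded usage described above can be sketched as follows. This is a minimal example consistent with the text, not the exact listing file; the helper name `set_to_six` is an illustration:

```rust
use std::sync::Mutex;

// Demonstrates the basic Mutex<T> API in a single thread:
// acquire the lock, mutate the guarded data, release on scope exit.
fn set_to_six(m: &Mutex<i32>) {
    // lock() blocks until the lock is available; unwrap panics if
    // the mutex was poisoned by another thread panicking.
    let mut num = m.lock().unwrap();
    *num = 6;
} // the MutexGuard is dropped here, releasing the lock

fn main() {
    let m = Mutex::new(5);
    set_to_six(&m);
    println!("m = {}", *m.lock().unwrap());
}
```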
The call to `lock` would fail if another thread holding the lock panicked. In that case, no one would ever be able to get the lock, so we’ve chosen to `unwrap` and have this thread panic if we’re in that situation.
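To see this failure mode concretely, here is a sketch of lock poisoning. Note that it uses `Arc`, which this chapter introduces a little later, to share the mutex with a spawned thread; the function name `poison_and_check` is an illustration:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Returns true if the mutex is poisoned after a spawned thread
// panicked while holding its lock.
fn poison_and_check(data: Arc<Mutex<i32>>) -> bool {
    let d = Arc::clone(&data);
    // Ignore the Err from join: the spawned thread panics on purpose.
    let _ = thread::spawn(move || {
        let _guard = d.lock().unwrap();
        panic!("panicking while holding the lock");
    })
    .join();

    // lock() now returns Err(PoisonError). The data is still
    // recoverable via the error's into_inner() if we choose to.
    data.lock().is_err()
}

fn main() {
    let data = Arc::new(Mutex::new(0));
    println!("poisoned: {}", poison_and_check(data));
}
```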
After we’ve acquired the lock, we can treat the return value, named `num` in this case, as a mutable reference to the data inside. The type system ensures that we acquire a lock before using the value in `m`. The type of `m` is `Mutex<i32>`, not `i32`, so we *must* call `lock` to be able to use the `i32` value. We can’t forget; the type system won’t let us access the inner `i32` otherwise.
The call to `lock` returns a type called `MutexGuard`, wrapped in a `LockResult` that we handled with the call to `unwrap`. The `MutexGuard` type implements `Deref` to point at our inner data; the type also has a `Drop` implementation that releases the lock automatically when a `MutexGuard` goes out of scope, which happens at the end of the inner scope. As a result, we don’t risk forgetting to release the lock and blocking the mutex from being used by other threads, because the lock release happens automatically.
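The same release can also be triggered early with an explicit `drop` of the guard, rather than waiting for the end of a scope. A small sketch (the helper name `greet` is an illustration):

```rust
use std::sync::Mutex;

// Mutates the guarded String, releases the lock early with drop,
// then locks again to read the result back.
fn greet(m: &Mutex<String>) -> String {
    let mut s = m.lock().unwrap();
    // Deref lets us call String methods straight through the MutexGuard.
    s.push_str(" world");
    // Explicitly dropping the guard releases the lock before scope end.
    drop(s);

    // Locking again is fine now that the previous guard is gone.
    m.lock().unwrap().clone()
}

fn main() {
    let m = Mutex::new(String::from("hello"));
    println!("{}", greet(&m));
}
```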
After dropping the lock, we can print the mutex value and see that we were able to change the inner `i32` to 6.
### Shared Access to `Mutex<T>`
Now let’s try to share a value between multiple threads using `Mutex<T>`. We’ll spin up 10 threads and have them each increment a counter value by 1, so the counter goes from 0 to 10. The example in Listing 16-13 will have a compiler error, and we’ll use that error to learn more about using `Mutex<T>` and how Rust helps us use it correctly.
```rust
{{#rustdoc_include ../listings/ch16-fearless-concurrency/listing-16-13/src/main.rs}}
```
We create a `counter` variable to hold an `i32` inside a `Mutex<T>`, as we did in Listing 16-12. Next, we create 10 threads by iterating over a range of numbers. We use `thread::spawn` and give all the threads the same closure: one that moves the counter into the thread, acquires a lock on the `Mutex<T>` by calling the `lock` method, and then adds 1 to the value in the mutex. When a thread finishes running its closure, `num` will go out of scope and release the lock so that another thread can acquire it.
In the main thread, we collect all the join handles. Then, as we did in Listing 16-2, we call `join` on each handle to make sure all the threads finish. At that point, the main thread will acquire the lock and print the result of this program.
We hinted that this example wouldn’t compile. Now let’s find out why!
```console
{{#include ../listings/ch16-fearless-concurrency/listing-16-13/output.txt}}
```
The error message states that the `counter` value was moved in the previous iteration of the loop. Rust is telling us that we can’t move ownership of the `counter` lock into multiple threads. Let’s fix the compiler error with the multiple-ownership method we discussed in Chapter 15.
### Multiple Ownership with Multiple Threads
In Chapter 15, we gave a value to multiple owners by using the smart pointer `Rc<T>` to create a reference-counted value. Let’s do the same here and see what happens. We’ll wrap the `Mutex<T>` in `Rc<T>` in Listing 16-14 and clone the `Rc<T>` before moving ownership to the thread.
```rust
{{#rustdoc_include ../listings/ch16-fearless-concurrency/listing-16-14/src/main.rs}}
```
Once again, we compile and get… different errors! The compiler is teaching us a lot:
```console
{{#include ../listings/ch16-fearless-concurrency/listing-16-14/output.txt}}
```
Wow, that error message is very wordy! Here’s the important part to focus on: `` `Rc<Mutex<i32>>` cannot be sent between threads safely ``. The compiler is also telling us the reason why: ``the trait `Send` is not implemented for `Rc<Mutex<i32>>` ``. We’ll talk about `Send` in the next section: it’s one of the traits that ensures the types we use with threads are meant for use in concurrent situations.
Unfortunately, `Rc<T>` is not safe to share across threads. When `Rc<T>` manages the reference count, it adds to the count for each call to `clone` and subtracts from the count when each clone is dropped. But it doesn’t use any concurrency primitives to make sure that changes to the count can’t be interrupted by another thread. This could lead to wrong counts—subtle bugs that could in turn lead to memory leaks or a value being dropped before we’re done with it. What we need is a type that is exactly like `Rc<T>`, but one that makes changes to the reference count in a thread-safe way.
### Atomic Reference Counting with `Arc<T>`
Fortunately, `Arc<T>` *is* a type like `Rc<T>` that is safe to use in concurrent situations. The *a* stands for *atomic*, meaning it’s an *atomically reference-counted* type. Atomics are an additional kind of concurrency primitive that we won’t cover in detail here: see the standard library documentation for `std::sync::atomic` for more details. At this point, you just need to know that atomics work like primitive types but are safe to share across threads.
You might then wonder why all primitive types aren’t atomic and why standard library types aren’t implemented to use `Arc<T>` by default. The reason is that thread safety comes with a performance penalty that you only want to pay when you really need to. If you’re just performing operations on values within a single thread, your code can run faster if it doesn’t have to enforce the guarantees atomics provide.
Let’s return to our example: `Arc<T>` and `Rc<T>` have the same API, so we fix our program by changing the `use` line, the call to `new`, and the call to `clone`. The code in Listing 16-15 will finally compile and run.
```rust
{{#rustdoc_include ../listings/ch16-fearless-concurrency/listing-16-15/src/main.rs}}
```
This code will print the following:

```console
Result: 10
```
We did it! We counted from 0 to 10, which may not seem very impressive, but it did teach us a lot about `Mutex<T>` and thread safety. You could also use this program’s structure to do more complicated operations than just incrementing a counter. Using this strategy, you can divide a calculation into independent parts, split those parts across threads, and then use a `Mutex<T>` to have each thread update the final result with its part.
Note that if you are doing simple numerical operations, there are types simpler than `Mutex<T>` provided by the `std::sync::atomic` module of the standard library. These types provide safe, concurrent, atomic access to primitive types. We chose to use `Mutex<T>` with a primitive type for this example so that we could concentrate on how `Mutex<T>` works.
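As a sketch of that alternative, here is the same 10-thread counter rebuilt on `AtomicUsize` from `std::sync::atomic` instead of `Mutex<i32>`; `fetch_add` increments atomically, so no lock is needed at all (the helper name `atomic_count` is an illustration):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// The 10-thread counter, lock-free: each thread does one atomic
// fetch_add instead of acquiring a mutex.
fn atomic_count() -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // SeqCst is the strongest ordering; a relaxed ordering
            // would also work for a simple counter like this.
            counter.fetch_add(1, Ordering::SeqCst);
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    counter.load(Ordering::SeqCst)
}

fn main() {
    println!("Result: {}", atomic_count());
}
```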
### Comparing `RefCell<T>`/`Rc<T>` and `Mutex<T>`/`Arc<T>`
You might have noticed that `counter` is immutable but that we could get a mutable reference to the value inside it; this means `Mutex<T>` provides interior mutability, as the `Cell` family does. In the same way we used `RefCell<T>` in Chapter 15 to allow us to mutate contents inside an `Rc<T>`, we use `Mutex<T>` to mutate contents inside an `Arc<T>`.
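A tiny sketch of that interior mutability in isolation: the binding is immutable, yet the guarded value can still be mutated through the lock (the helper name `push_four` is an illustration):

```rust
use std::sync::Mutex;

// Mutates the vector inside a Mutex through a shared reference:
// Mutex<T> provides interior mutability, like RefCell<T> does.
fn push_four(m: &Mutex<Vec<i32>>) {
    m.lock().unwrap().push(4);
}

fn main() {
    // `m` is an immutable binding, but the contents can change.
    let m = Mutex::new(vec![1, 2, 3]);
    push_four(&m);
    println!("{:?}", *m.lock().unwrap());
}
```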
Another detail to note is that Rust can’t protect you from all kinds of logic errors when you use `Mutex<T>`. Recall from Chapter 15 that using `Rc<T>` came with the risk of creating reference cycles, where two `Rc<T>` values refer to each other, causing memory leaks. Similarly, `Mutex<T>` comes with the risk of creating *deadlocks*. These occur when an operation needs to lock two resources and two threads have each acquired one of the locks, causing them to wait for each other forever. If you’re interested in deadlocks, try creating a Rust program that has a deadlock; then, research deadlock mitigation strategies for mutexes in any language and have a go at implementing them in Rust. The standard library API documentation for `Mutex<T>` and `MutexGuard` offers useful information.
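As a head start on that exercise, here is a sketch of one common mitigation strategy: consistent lock ordering. Every thread acquires the two locks in the same fixed order, so no thread can hold one lock while waiting on a thread that holds the other (the function name `ordered_sum` is an illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Both callers lock `a` before `b`; with a fixed global order,
// the circular wait required for a deadlock cannot form.
fn ordered_sum(a: Arc<Mutex<i32>>, b: Arc<Mutex<i32>>) -> i32 {
    let x = a.lock().unwrap(); // always lock a first ...
    let y = b.lock().unwrap(); // ... and b second
    *x + *y
}

fn main() {
    let a = Arc::new(Mutex::new(1));
    let b = Arc::new(Mutex::new(2));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let handle = thread::spawn(move || ordered_sum(a2, b2));

    let from_main = ordered_sum(Arc::clone(&a), Arc::clone(&b));
    let from_thread = handle.join().unwrap();
    println!("{from_main} {from_thread}");
}
```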
We’ll round out this chapter by talking about the `Send` and `Sync` traits and how we can use them with custom types.