Putting It All Together: Futures, Tasks, and Threads
As we saw in Chapter 16, threads provide one approach to concurrency. We’ve seen another approach in this chapter: using async with futures and streams. If you’re wondering when to choose one method over the other, the answer is: it depends! And in many cases, the choice isn’t threads or async but rather threads and async.
Many operating systems have supplied threading-based concurrency models for decades now, and many programming languages support them as a result. However, these models are not without their tradeoffs. On many operating systems, they use a fair bit of memory for each thread. Threads are also only an option when your operating system and hardware support them. Unlike mainstream desktop and mobile computers, some embedded systems don’t have an OS at all, so they also don’t have threads.
The async model provides a different—and ultimately complementary—set of
tradeoffs. In the async model, concurrent operations don’t require their own
threads. Instead, they can run on tasks, as when we used trpl::spawn_task to
kick off work from a synchronous function in the streams section. A task is
similar to a thread, but instead of being managed by the operating system, it’s
managed by library-level code: the runtime.
There’s a reason the APIs for spawning threads and spawning tasks are so similar. Threads act as a boundary for sets of synchronous operations; concurrency is possible between threads. Tasks act as a boundary for sets of asynchronous operations; concurrency is possible both between and within tasks, because a task can switch between futures in its body. Finally, futures are Rust’s most granular unit of concurrency, and each future may represent a tree of other futures. The runtime—specifically, its executor—manages tasks, and tasks manage futures. In that regard, tasks are similar to lightweight, runtime-managed threads with added capabilities that come from being managed by a runtime instead of by the operating system.
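To make "the executor manages futures" a little more concrete, here is a deliberately tiny executor built only from the standard library. This is an illustrative sketch, not how `trpl` or a production runtime works: real executors juggle many tasks and integrate timers and I/O drivers, rather than parking a single thread between polls.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Wake the executor's thread when a future signals it can make progress.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// A bare-bones executor: poll the future on the current thread, parking
// between polls until the waker unparks us.
fn block_on<F: Future>(future: F) -> F::Output {
    let mut future = pin!(future);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // An `async` block compiles to a future; this one is ready immediately.
    let answer = block_on(async { 40 + 2 });
    println!("{answer}"); // prints 42
}
```

Even this toy version shows the division of labor the paragraph describes: the future only knows how to be polled, while the executor decides when to poll it and what to do while waiting.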
This doesn’t mean that async tasks are always better than threads (or vice
versa). Concurrency with threads is in some ways a simpler programming model
than concurrency with async. That can be a strength or a weakness. Threads are
somewhat “fire and forget”; they have no native equivalent to a future, so they
simply run to completion without being interrupted except by the operating
system itself.
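The "fire and forget" shape is visible in the standard library's thread API. In this sketch, the only handle we get back is a `JoinHandle`; there is no way to poll a thread partway or suspend it at an `await` point:

```rust
use std::thread;

// Run a synchronous computation on its own thread and wait for the result.
fn sum_on_worker_thread() -> i32 {
    // Spawning is "fire and forget": once started, the thread runs to
    // completion, and the OS (not our code) decides when it is scheduled.
    let handle = thread::spawn(|| (1..=5).sum::<i32>());

    // The closest thing to awaiting a thread is joining it, which blocks
    // the calling thread until the spawned one finishes.
    handle.join().expect("worker thread panicked")
}

fn main() {
    println!("{}", sum_on_worker_thread()); // prints 15
}
```

Contrast this with a future, which hands control back to the runtime at every `await` and can therefore be interleaved with other work on the same thread.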
And it turns out that threads and tasks often work
very well together, because tasks can (at least in some runtimes) be moved
around between threads. In fact, under the hood, the runtime we’ve been
using—including the spawn_blocking and spawn_task functions—is multithreaded
by default! Many runtimes use an approach called work stealing to
transparently move tasks around between threads, based on how the threads are
currently being utilized, to improve the system’s overall performance. That
approach actually requires threads and tasks, and therefore futures.
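To illustrate the idea of tasks moving between threads, here is a simplified, standard-library-only sketch. Note the hedge: this is work *sharing* from a single queue, not true work stealing (real schedulers such as Tokio's give each worker its own deque and let idle workers steal from the others), but it shows the key property that a task is not pinned to any particular thread:

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

// Run `num_tasks` closures on `num_workers` threads that all pull from one
// shared queue; return how many tasks completed.
fn run_on_shared_queue(num_tasks: usize, num_workers: usize) -> usize {
    let completed = Arc::new(AtomicUsize::new(0));
    let queue: Arc<Mutex<VecDeque<Box<dyn FnOnce() + Send>>>> =
        Arc::new(Mutex::new(VecDeque::new()));

    // Enqueue the "tasks". Whichever worker is free next takes the next one,
    // so which thread runs a given task depends on current utilization.
    for i in 0..num_tasks {
        let completed = Arc::clone(&completed);
        queue.lock().unwrap().push_back(Box::new(move || {
            println!("task {i} ran on {:?}", thread::current().id());
            completed.fetch_add(1, Ordering::SeqCst);
        }));
    }

    let workers: Vec<_> = (0..num_workers)
        .map(|_| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || loop {
                // Take the task out first so the lock is released before
                // the task actually runs.
                let task = queue.lock().unwrap().pop_front();
                match task {
                    Some(task) => task(),
                    None => break, // queue drained: this worker retires
                }
            })
        })
        .collect();

    for worker in workers {
        worker.join().unwrap();
    }
    completed.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(run_on_shared_queue(8, 2), 8);
}
```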
When thinking about which method to use when, consider these rules of thumb:
- If the work is very parallelizable (that is, CPU-bound), such as processing a bunch of data where each part can be processed separately, threads are a better choice.
- If the work is very concurrent (that is, I/O-bound), such as handling messages from a bunch of different sources that may come in at different intervals or different rates, async is a better choice.
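The first rule of thumb can be sketched with scoped threads from the standard library. The workload here (summing squares) is a stand-in for any CPU-bound job whose parts are independent:

```rust
use std::thread;

// Sum the squares of `data`, splitting it into one chunk per worker thread.
fn parallel_sum_of_squares(data: &[u64], num_workers: usize) -> u64 {
    let chunk_size = data.len().div_ceil(num_workers).max(1);
    thread::scope(|s| {
        // Each chunk is independent, so each worker can run flat out on its
        // own core with no coordination until the final join.
        let handles: Vec<_> = data
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().map(|&n| n * n).sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    println!("{}", parallel_sum_of_squares(&data, 4)); // prints 333833500
}
```

Async would add nothing here: the workers never wait on anything, so there is no idle time for an executor to fill with other futures.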
And if you need both parallelism and concurrency, you don’t have to choose between threads and async. You can use them together freely, letting each play the part it’s best at. For example, Listing 17-25 shows a fairly common example of this kind of mix in real-world Rust code.
```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-25/src/main.rs:all}}
```
We begin by creating an async channel, then spawning a thread that takes
ownership of the sender side of the channel using the move keyword. Within
the thread, we send the numbers 1 through 10, sleeping for a second between
each. Finally, we run a future created with an async block passed to
trpl::block_on just as we have throughout the chapter. In that future, we
await those messages, just as in the other message-passing examples we have
seen.
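Because `trpl` wraps an async runtime, here is a rough standard-library analogue of the same shape, using a synchronous channel instead of an async one. The receiving side blocks on the channel rather than awaiting it, and the sleeps are shortened from one second to keep the sketch fast:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Send 1 through 10 from a dedicated thread and collect them on this one.
fn collect_from_thread() -> Vec<i32> {
    let (tx, rx) = mpsc::channel();

    // The sending half moves into the thread, mirroring the `move` keyword
    // in Listing 17-25.
    thread::spawn(move || {
        for i in 1..=10 {
            tx.send(i).unwrap();
            thread::sleep(Duration::from_millis(10)); // shortened from 1 second
        }
        // `tx` is dropped here, which ends the receiving iterator below.
    });

    // Where the async version awaits each message, this blocks on the
    // channel until a message arrives or the sender disconnects.
    rx.iter().collect()
}

fn main() {
    println!("{:?}", collect_from_thread()); // prints [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
}
```

The key difference from Listing 17-25 is what the receiving side costs: blocking on `recv` ties up a whole thread, whereas awaiting an async receiver lets the runtime run other futures on that thread in the meantime.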
To return to the scenario we opened the chapter with, imagine running a set of video encoding tasks using a dedicated thread (because video encoding is compute-bound) but notifying the UI that those operations are done with an async channel. There are countless examples of these kinds of combinations in real-world use cases.
Summary
This isn’t the last you’ll see of concurrency in this book. The project in Chapter 21 will apply these concepts in a more realistic situation than the simpler examples discussed here and compare problem-solving with threading versus tasks and futures more directly.
No matter which of these approaches you choose, Rust gives you the tools you need to write safe, fast, concurrent code—whether for a high-throughput web server or an embedded operating system.
Next, we’ll talk about idiomatic ways to model problems and structure solutions as your Rust programs get bigger. In addition, we’ll discuss how Rust’s idioms relate to those you might be familiar with from object-oriented programming.