Applying Concurrency with Async

In this section, we’ll apply async to some of the same concurrency challenges we tackled with threads in Chapter 16. Because we already talked about a lot of the key ideas there, in this section we’ll focus on what’s different between threads and futures.

In many cases, the APIs for working with concurrency using async are very similar to those for using threads. In other cases, they end up being quite different. Even when the APIs look similar between threads and async, they often have different behavior—and they nearly always have different performance characteristics.

Creating a New Task with spawn_task

The first operation we tackled in the “Creating a New Thread with spawn” section in Chapter 16 was counting up on two separate threads. Let’s do the same using async. The trpl crate supplies a spawn_task function that looks very similar to the thread::spawn API, and a sleep function that is an async version of the thread::sleep API. We can use these together to implement the counting example, as shown in Listing 17-6.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-06/src/main.rs:all}}
```

As our starting point, we set up our main function with trpl::block_on so that our top-level function can be async.

Note: From this point forward in the chapter, every example will include this exact same wrapping code with trpl::block_on in main, so we’ll often skip it just as we do with main. Remember to include it in your code!

Then we write two loops within that block, each containing a trpl::sleep call, which waits for half a second (500 milliseconds) before sending the next message. We put one loop in the body of a trpl::spawn_task and the other in a top-level for loop. We also add an await after the sleep calls.

This code behaves similarly to the thread-based implementation—including the fact that you may see the messages appear in a different order in your own terminal when you run it:

```text
hi number 1 from the second task!
hi number 1 from the first task!
hi number 2 from the first task!
hi number 2 from the second task!
hi number 3 from the first task!
hi number 3 from the second task!
hi number 4 from the first task!
hi number 4 from the second task!
hi number 5 from the first task!
```

This version stops as soon as the for loop in the body of the main async block finishes, because the task spawned by spawn_task is shut down when the main function ends. If you want it to run all the way to the task’s completion, you will need to use a join handle to wait for the first task to complete. With threads, we used the join method to “block” until the thread was done running. In Listing 17-7, we can use await to do the same thing, because the task handle itself is a future. Its Output type is a Result, so we also unwrap it after awaiting it.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-07/src/main.rs:handle}}
```

This updated version runs until both loops finish:

```text
hi number 1 from the second task!
hi number 1 from the first task!
hi number 2 from the first task!
hi number 2 from the second task!
hi number 3 from the first task!
hi number 3 from the second task!
hi number 4 from the first task!
hi number 4 from the second task!
hi number 5 from the first task!
hi number 6 from the first task!
hi number 7 from the first task!
hi number 8 from the first task!
hi number 9 from the first task!
```

So far, it looks like async and threads give us similar outcomes, just with different syntax: using await instead of calling join on the join handle, and awaiting the sleep calls.

The bigger difference is that we didn’t need to spawn another operating system thread to do this. In fact, we don’t even need to spawn a task here. Because async blocks compile to anonymous futures, we can put each loop in an async block and have the runtime run them both to completion using the trpl::join function.

In the “Waiting for All Threads to Finish” section in Chapter 16, we showed how to use the join method on the JoinHandle type returned when you call std::thread::spawn. The trpl::join function is similar, but for futures. When you give it two futures, it produces a single new future whose output is a tuple containing the output of each future you passed in once they both complete. Thus, in Listing 17-8, we use trpl::join to wait for both fut1 and fut2 to finish. We do not await fut1 and fut2 but instead the new future produced by trpl::join. We ignore the output, because it’s just a tuple containing two unit values.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-08/src/main.rs:join}}
```

When we run this, we see both futures run to completion:

```text
hi number 1 from the first task!
hi number 1 from the second task!
hi number 2 from the first task!
hi number 2 from the second task!
hi number 3 from the first task!
hi number 3 from the second task!
hi number 4 from the first task!
hi number 4 from the second task!
hi number 5 from the first task!
hi number 6 from the first task!
hi number 7 from the first task!
hi number 8 from the first task!
hi number 9 from the first task!
```

Now, you’ll see the exact same order every time, which is very different from what we saw with threads and with trpl::spawn_task in Listing 17-7. That is because the trpl::join function is fair, meaning it checks each future equally often, alternating between them, and never lets one race ahead if the other is ready. With threads, the operating system decides which thread to check and how long to let it run. With async Rust, the runtime decides which task to check. (In practice, the details get complicated because an async runtime might use operating system threads under the hood as part of how it manages concurrency, so guaranteeing fairness can be more work for a runtime—but it’s still possible!) Runtimes don’t have to guarantee fairness for any given operation, and they often offer different APIs to let you choose whether or not you want fairness.

Try some of these variations on awaiting the futures and see what they do:

  • Remove the async block from around either or both of the loops.
  • Await each async block immediately after defining it.
  • Wrap only the first loop in an async block, and await the resulting future after the body of the second loop.

For an extra challenge, see if you can figure out what the output will be in each case before running the code!

Sending Data Between Two Tasks Using Message Passing

Sharing data between futures will also be familiar: we’ll use message passing again, but this time with async versions of the types and functions. We’ll take a slightly different path than we did in the “Transfer Data Between Threads with Message Passing” section in Chapter 16 to illustrate some of the differences between thread-based and futures-based concurrency. In Listing 17-9, we’ll begin with just a single async block—not spawning a separate task as we spawned a separate thread.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-09/src/main.rs:channel}}
```

Here, we use trpl::channel, an async version of the multiple-producer, single-consumer channel API we used with threads back in Chapter 16. The async version of the API is only a little different from the thread-based version: it uses a mutable rather than an immutable receiver rx, and its recv method produces a future we need to await rather than producing the value directly. Now we can send messages from the sender to the receiver. Notice that we don’t have to spawn a separate thread or even a task; we merely need to await the rx.recv call.

The synchronous Receiver::recv method in std::mpsc::channel blocks until it receives a message. The trpl::Receiver::recv method does not, because it is async. Instead of blocking, it hands control back to the runtime until either a message is received or the send side of the channel closes. By contrast, we don’t await the send call, because it doesn’t block. It doesn’t need to, because the channel we’re sending it into is unbounded.

Note: Because all of this async code runs in an async block in a trpl::block_on call, everything within it can avoid blocking. However, the code outside it will block on the block_on function returning. That’s the whole point of the trpl::block_on function: it lets you choose where to block on some set of async code, and thus where to transition between sync and async code.

Notice two things about this example. First, the message will arrive right away. Second, although we use a future here, there’s no concurrency yet. Everything in the listing happens in sequence, just as it would if there were no futures involved.

Let’s address the first part by sending a series of messages and sleeping in between them, as shown in Listing 17-10.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-10/src/main.rs:many-messages}}
```

In addition to sending the messages, we need to receive them. In this case, because we know how many messages are coming in, we could do that manually by calling rx.recv().await four times. In the real world, though, we’ll generally be waiting on some unknown number of messages, so we need to keep waiting until we determine that there are no more messages.

In Listing 16-10, we used a for loop to process all the items received from a synchronous channel. Rust doesn’t yet have a way to use a for loop with an asynchronously produced series of items, however, so we need to use a loop we haven’t seen before: the while let conditional loop. This is the loop version of the if let construct we saw back in the “Concise Control Flow with if let and let...else” section in Chapter 6. The loop will continue executing as long as the pattern it specifies continues to match the value.

The rx.recv call produces a future, which we await. The runtime will pause the future until it is ready. Once a message arrives, the future will resolve to Some(message) as many times as a message arrives. When the channel closes, regardless of whether any messages have arrived, the future will instead resolve to None to indicate that there are no more values and thus we should stop polling—that is, stop awaiting.

The while let loop pulls all of this together. If the result of calling rx.recv().await is Some(message), we get access to the message and we can use it in the loop body, just as we could with if let. If the result is None, the loop ends. Every time the loop completes, it hits the await point again, so the runtime pauses it again until another message arrives.

The code now successfully sends and receives all of the messages. Unfortunately, there are still a couple of problems. For one thing, the messages do not arrive at half-second intervals. They arrive all at once, 2 seconds (2,000 milliseconds) after we start the program. For another, this program also never exits! Instead, it waits forever for new messages. You will need to shut it down using ctrl-C.

Code Within One Async Block Executes Linearly

Let’s start by examining why the messages come in all at once after the full delay, rather than coming in with delays between each one. Within a given async block, the order in which await keywords appear in the code is also the order in which they’re executed when the program runs.

There’s only one async block in Listing 17-10, so everything in it runs linearly. There’s still no concurrency. All the tx.send calls happen, interspersed with all of the trpl::sleep calls and their associated await points. Only then does the while let loop get to go through any of the await points on the recv calls.

To get the behavior we want, where the sleep delay happens between each message, we need to put the tx and rx operations in their own async blocks, as shown in Listing 17-11. Then the runtime can execute each of them separately using trpl::join, just as in Listing 17-8. Once again, we await the result of calling trpl::join, not the individual futures. If we awaited the individual futures in sequence, we would just end up back in a sequential flow—exactly what we’re trying not to do.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-11/src/main.rs:futures}}
```

With the updated code in Listing 17-11, the messages get printed at 500-millisecond intervals, rather than all in a rush after 2 seconds.

Moving Ownership Into an Async Block

The program still never exits, though, because of the way the while let loop interacts with trpl::join:

  • The future returned from trpl::join completes only once both futures passed to it have completed.
  • The tx_fut future completes once it finishes sleeping after sending the last message in vals.
  • The rx_fut future won’t complete until the while let loop ends.
  • The while let loop won’t end until awaiting rx.recv produces None.
  • Awaiting rx.recv will return None only once the other end of the channel is closed.
  • The channel will close only if we call rx.close or when the sender side, tx, is dropped.
  • We don’t call rx.close anywhere, and tx won’t be dropped until the outermost async block passed to trpl::block_on ends.
  • The block can’t end because it is blocked on trpl::join completing, which takes us back to the top of this list.

Right now, the async block where we send the messages only borrows tx because sending a message doesn’t require ownership, but if we could move tx into that async block, it would be dropped once that block ends. In the “Capturing References or Moving Ownership” section in Chapter 13, you learned how to use the move keyword with closures, and, as discussed in the “Using move Closures with Threads” section in Chapter 16, we often need to move data into closures when working with threads. The same basic dynamics apply to async blocks, so the move keyword works with async blocks just as it does with closures.

In Listing 17-12, we change the block used to send messages from async to async move.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-12/src/main.rs:with-move}}
```

When we run this version of the code, it shuts down gracefully after the last message is sent and received. Next, let’s see what would need to change to send data from more than one future.

Joining a Number of Futures with the join! Macro

This async channel is also a multiple-producer channel, so we can call clone on tx if we want to send messages from multiple futures, as shown in Listing 17-13.

```rust
{{#rustdoc_include ../listings/ch17-async-await/listing-17-13/src/main.rs:here}}
```

First, we clone tx, creating tx1 outside the first async block. We move tx1 into that block just as we did before with tx. Then, later, we move the original tx into a new async block, where we send more messages on a slightly slower delay. We happen to put this new async block after the async block for receiving messages, but it could go before it just as well. The key is the order in which the futures are awaited, not in which they’re created.

Both of the async blocks for sending messages need to be async move blocks so that both tx and tx1 get dropped when those blocks finish. Otherwise, we’ll end up back in the same infinite loop we started out in.

Finally, we switch from trpl::join to trpl::join! to handle the additional future: the join! macro can await an arbitrary number of futures, as long as we know that number at compile time. We’ll discuss awaiting a collection of an unknown number of futures later in this chapter.

Now we see all the messages from both sending futures, and because the sending futures use slightly different delays after sending, the messages are also received at those different intervals:

```text
received 'hi'
received 'more'
received 'from'
received 'the'
received 'messages'
received 'future'
received 'for'
received 'you'
```

We’ve explored how to use message passing to send data between futures, how code within an async block runs sequentially, how to move ownership into an async block, and how to join multiple futures. Next, let’s discuss how and why to tell the runtime it can switch to another task.