| Function(Location)                                      | Introduction                                                 | Uses                     |
| -------------------------------------------------------- | ------------------------------------------------------------ | ------------------------ |
| rustc_typeck::check::typeck_item_bodies                 | Type check                                                   | Map::par_body_owners     |
| rustc_interface::passes::hir_id_validator::check_crate  | Check the validity of the HIR                                | Map::par_for_each_module |
| rustc_interface::passes::analysis                       | Check the validity of loop bodies, attributes, naked functions, unstable ABIs, const bodies | Map::par_for_each_module |
| rustc_interface::passes::analysis                       | Liveness and intrinsic checking of MIR                       | Map::par_for_each_module |
| rustc_interface::passes::analysis                       | Deathness checking                                           | Map::par_for_each_module |
| rustc_interface::passes::analysis                       | Privacy checking                                             | Map::par_for_each_module |
| rustc_lint::late::check_crate                           | Run per-module lints                                         | Map::par_for_each_module |
| rustc_typeck::check_crate                               | Well-formedness checking                                         | Map::par_for_each_module |

There are still many loops that have the potential to use parallel iterators.
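
For illustration, here is a minimal sketch of the kind of transformation involved. It uses the general-purpose `rayon` crate directly rather than the compiler's internal wrappers in `rustc_data_structures::sync`, and the `Item` type and `check_item` function are invented for the example:

```rust
use rayon::prelude::*; // the compiler uses its own wrappers, but the idea is the same

// Hypothetical item type and per-item analysis, for illustration only.
struct Item {
    name: String,
}

fn check_item(item: &Item) {
    // Imagine an expensive, side-effect-free check here.
    println!("checking {}", item.name);
}

fn check_all(items: &[Item]) {
    // Serial version:
    //     for item in items {
    //         check_item(item);
    //     }
    //
    // Parallel version: items are checked on rayon's worker threads. This is
    // only sound because `check_item` does not mutate shared state.
    items.par_iter().for_each(check_item);
}

fn main() {
    let items: Vec<Item> = (0..8).map(|i| Item { name: format!("item{i}") }).collect();
    check_all(&items);
}
```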

## Query system

The query model has some properties that make it actually feasible to evaluate
multiple queries in parallel without too much effort:

- All data a query provider can access is accessed via the query context, so
  the query context can take care of synchronizing access.
- Query results are required to be immutable so they can safely be used by
  different threads concurrently (see the sketch after this list).
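
The second property is what allows results to be shared without locking once they exist. Here is a minimal stand-alone sketch of that idea, with a made-up `TypeInfo` result type (the compiler's real query results are managed by the query system, not by hand):

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical, immutable query result.
#[derive(Debug)]
struct TypeInfo {
    size: usize,
    align: usize,
}

fn main() {
    // Computed once by the query provider...
    let result = Arc::new(TypeInfo { size: 8, align: 8 });

    // ...then shared freely: readers never need to lock, because the data
    // can no longer change.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let result = Arc::clone(&result);
            thread::spawn(move || println!("thread {i} sees {:?}", *result))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```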

When a query `foo` is evaluated, the cache table for `foo` is locked.

- If there already is a result, we can clone it, release the lock and
  we are done.
- If there is no cache entry and no other active query invocation computing the
  same result, we mark the key as being "in progress", release the lock and
  start evaluating.
- If there *is* another query invocation for the same key in progress, we
  release the lock, and just block the thread until the other invocation has
  computed the result we are waiting for.

  **Cycle error detection** in the parallel compiler requires more complex
  logic than in single-threaded mode. When worker threads evaluating parallel
  queries stop making progress because they are waiting on one another, the
  compiler uses an extra thread (named the *deadlock handler*) to detect,
  remove, and report the cycle error.
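
To make the protocol above concrete, here is a deliberately simplified sketch of it: a cache guarded by a mutex, plus a condition variable to block callers waiting on an in-progress computation. The names (`QueryCache`, `CacheEntry`, `get_or_compute`) are invented for the example; the compiler's real machinery in `rustc_query_system` is far more involved, and cycle detection is omitted entirely here.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Simplified cache entry: either some thread is computing the value,
// or it is finished and can be cloned out.
enum CacheEntry<V> {
    InProgress,
    Done(V),
}

struct QueryCache<K, V> {
    map: Mutex<HashMap<K, CacheEntry<V>>>,
    // Signalled whenever an in-progress entry becomes `Done`.
    finished: Condvar,
}

impl<K, V> QueryCache<K, V>
where
    K: std::hash::Hash + Eq + Clone,
    V: Clone,
{
    fn new() -> Self {
        QueryCache {
            map: Mutex::new(HashMap::new()),
            finished: Condvar::new(),
        }
    }

    fn get_or_compute(&self, key: K, compute: impl FnOnce() -> V) -> V {
        let mut map = self.map.lock().unwrap();
        loop {
            // Decide what to do while holding the lock.
            let in_progress = match map.get(&key) {
                // Cached result: clone it; the lock is released on return.
                Some(CacheEntry::Done(v)) => return v.clone(),
                // Another thread is already computing this key.
                Some(CacheEntry::InProgress) => true,
                // Nobody has started yet.
                None => false,
            };

            if in_progress {
                // Block until some computation finishes, then re-check.
                // (The real compiler also runs cycle detection around here.)
                map = self.finished.wait(map).unwrap();
            } else {
                // Mark the key as in progress, release the lock, and compute.
                map.insert(key.clone(), CacheEntry::InProgress);
                drop(map);
                let value = compute();

                // Publish the result and wake up any waiting threads.
                let mut map = self.map.lock().unwrap();
                map.insert(key, CacheEntry::Done(value.clone()));
                self.finished.notify_all();
                return value;
            }
        }
    }
}

fn main() {
    let cache = Arc::new(QueryCache::new());
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                // All threads ask for the same key; the closure runs only once.
                let v = cache.get_or_compute("foo", || {
                    println!("thread {i} is doing the work");
                    42
                });
                println!("thread {i} got {v}");
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```

In the real compiler the waiting step also feeds into cycle detection: if every worker ends up blocked waiting on another query, the deadlock handler thread steps in to report the cycle error.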

The parallel query feature still has implementation work left to do, most of
which is related to the previous `Data Structures` and `Parallel Iterators`
sections. See [this open feature tracking issue][tracking].

## Rustdoc

As of <!-- date-check--> November 2022, there are still a number of steps to
complete before `rustdoc` rendering can be made parallel (see an open discussion
of [parallel `rustdoc`][parallel-rustdoc]).

## Resources

Here are some resources that can be used to learn more:

- [This IRLO thread by alexcrichton about performance][irlo1]
- [This IRLO thread by Zoxc, one of the pioneers of the effort][irlo0]
- [This list of interior mutability in the compiler by nikomatsakis][imlist]

