Dynamic Sharding of Arbitrary Neural Networks
While the sequential method provides a structured, heuristic-driven approach to partitioning, it operates under the constraints of a linear, predetermined exploration path. This may not fully capture the dynamism and complexity of modern neural network architectures, where computational and memory demands can vary significantly across different layers and segments of the network. Given an arbitrary neural network, our objective is to partition the network's computation graph for optimal execution across multiple nodes. This computation graph, $G = (V, E)$, consists of computational unit operations $V$ and data flow edges $E$, with each operation $u \in V$ outputting a tensor consumed by downstream operations $v$, forming edges $(u, v) \in E$. The graph represents the entirety of the model's computational workload, ranging from simple arithmetic operations to layer-specific matrix multiplications, each associated with specific computational and memory requirements, i.e., the running time $\text{work}(u)$, the memory footprint of the model parameters $\text{size}_{\text{param}}(u)$, and the size of the operation's output $\text{size}_{\text{out}}(u)$.
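To make the bookkeeping concrete, a minimal Python sketch of such an annotated computation graph might look as follows; the `Op` fields and the toy three-operation network are illustrative inventions, not part of the formal definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    """One operation u in the computation graph G = (V, E)."""
    name: str
    work: float        # running time work(u), e.g. in milliseconds
    size_param: float  # parameter footprint size_param(u), e.g. in MB
    size_out: float    # output tensor size size_out(u), e.g. in MB

# Toy three-op network: matmul -> relu -> matmul.
V = {
    "mm1":  Op("mm1",  work=2.0, size_param=64.0, size_out=4.0),
    "relu": Op("relu", work=0.1, size_param=0.0,  size_out=4.0),
    "mm2":  Op("mm2",  work=2.0, size_param=64.0, size_out=1.0),
}
# Edges (u, v): the tensor produced by u is consumed by v.
E = {("mm1", "relu"), ("relu", "mm2")}
```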
Partitioning this graph involves dividing $V$ into $k$ distinct blocks $\{V_1, \ldots, V_k\}$ such that each block can be processed on a different node in a swarm, under the constraint that the induced quotient graph of $G$ remains acyclic. This division aims to maximize throughput while minimizing inter-node communication, subject to the bandwidth $B$ between nodes, with the I/O cost from node $i$ to node $j$ given by:

$$\text{io}(V_i, V_j) = \frac{1}{B} \sum_{u \in V_i \cap \text{in}(V_j)} \text{size}_{\text{out}}(u)$$

where $\text{in}(V_j)$ represents the set of nodes whose outputs are consumed by block $V_j$.
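A direct transcription of this I/O cost, assuming the toy graph representation sketched above; the function name `io_cost` and the explicit `bandwidth` argument are our own.

```python
def io_cost(V_i: set[str], V_j: set[str],
            E: set[tuple[str, str]],
            size_out: dict[str, float],
            bandwidth: float) -> float:
    """Time to ship tensors from block V_i to block V_j: the total output
    size of every op in V_i whose tensor is consumed by V_j, divided by
    the inter-node bandwidth B. Each producer is counted once, even if
    several ops in V_j consume its output."""
    senders = {u for (u, v) in E if u in V_i and v in V_j}
    return sum(size_out[u] for u in senders) / bandwidth

# With the toy graph above (bandwidth in MB/ms):
# io_cost({"mm1", "relu"}, {"mm2"}, E,
#         {n: op.size_out for n, op in V.items()}, bandwidth=1.0)
# -> 4.0, since only relu's 4 MB output crosses the cut.
```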
The core challenge lies in efficiently distributing the model's parameters and activations across the available fast memory (e.g., SRAM) of each node. Parameters that do not fit in fast memory must be streamed from slower storage, which introduces additional latency. The overflow cost, which represents the time to stream the parameters exceeding the fast memory limit $M$ at the slow-storage bandwidth $B_{\text{mem}}$, is calculated as:

$$\text{overflow}(V_i) = \frac{1}{B_{\text{mem}}} \max\left(0,\; \sum_{u \in V_i} \text{size}_{\text{param}}(u) + \text{peak}(V_i) - M\right)$$

where $\text{peak}(V_i)$ denotes the peak memory requirement for activations within block $V_i$.
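A sketch of the overflow computation under the same assumptions, with hypothetical names `fast_mem` for the capacity $M$ and `stream_bandwidth` for $B_{\text{mem}}$; the activation peak is passed in as a precomputed estimate.

```python
def overflow_cost(block: set[str],
                  size_param: dict[str, float],
                  peak_activations: float,
                  fast_mem: float,
                  stream_bandwidth: float) -> float:
    """Time to stream the parameter bytes that do not fit in fast memory.

    Activations must stay resident, so a block whose parameters plus peak
    activation footprint exceed fast_mem streams the excess from slow
    storage at stream_bandwidth."""
    params = sum(size_param[u] for u in block)
    spilled = max(0.0, params + peak_activations - fast_mem)
    return spilled / stream_bandwidth

# A block holding both matmuls (128 MB of weights) with a 4 MB activation
# peak on an 80 MB fast memory streams 52 MB:
# overflow_cost({"mm1", "mm2"}, {n: op.size_param for n, op in V.items()},
#               peak_activations=4.0, fast_mem=80.0, stream_bandwidth=10.0)
# -> 5.2
```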
The overall block cost, $\text{cost}(V_i)$, combines the costs of receiving input tensors, executing the block's operations (including any overflow cost from streaming parameters), and sending output tensors downstream:

$$\text{cost}(V_i) = \underbrace{\sum_{j \neq i} \text{io}(V_j, V_i)}_{\text{receive}} + \underbrace{\sum_{u \in V_i} \text{work}(u) + \text{overflow}(V_i)}_{\text{execute}} + \underbrace{\sum_{j \neq i} \text{io}(V_i, V_j)}_{\text{send}}$$
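Putting the pieces together, the block cost can be sketched by reusing `io_cost` and `overflow_cost` from above. Treating the activation peak as a supplied scalar is a simplification; in practice it depends on the execution schedule within the block.

```python
def block_cost(V_i: set[str], blocks: list[set[str]],
               E: set[tuple[str, str]],
               work: dict[str, float],
               size_out: dict[str, float],
               size_param: dict[str, float],
               peak_activations: float,   # supplied estimate of the block's activation peak
               fast_mem: float,
               bandwidth: float,
               stream_bandwidth: float) -> float:
    """cost(V_i) = receive + compute (incl. parameter streaming) + send."""
    recv = sum(io_cost(V_j, V_i, E, size_out, bandwidth)
               for V_j in blocks if V_j is not V_i)
    send = sum(io_cost(V_i, V_j, E, size_out, bandwidth)
               for V_j in blocks if V_j is not V_i)
    compute = sum(work[u] for u in V_i)
    spill = overflow_cost(V_i, size_param, peak_activations,
                          fast_mem, stream_bandwidth)
    return recv + compute + spill + send
```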
The goal of partitioning, defined by the Max-Throughput Partitioning Problem (MTPP), is to minimize the maximum cost across all blocks, optimizing the throughput of the entire pipeline. Formally, MTPP seeks a partition that minimizes the bottleneck cost:

$$\text{OPT} = \min_{P \in \mathcal{P}_k} \max_{i \in [k]} \text{cost}(V_i)$$

where $\mathcal{P}_k$ denotes the set of all possible partitions of $V$ into $k$ blocks, and $\text{OPT}$ is the minimum achievable bottleneck cost across these partitions.
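For intuition, the bottleneck objective can be evaluated on tiny graphs by exhaustive search. The sketch below is our illustration, not a production algorithm: it restricts attention to contiguous blocks of a single topological order, which guarantees the quotient graph is acyclic but covers only a subset of $\mathcal{P}_k$, so it yields an upper bound on OPT rather than the true optimum.

```python
from itertools import combinations

def mtpp_brute_force(topo_order: list[str], k: int, cost_fn):
    """Exhaustively search partitions of a topological order into k
    contiguous blocks, returning the best bottleneck cost found and the
    corresponding blocks. cost_fn(V_i, blocks) scores one block."""
    n = len(topo_order)
    best_cost, best_blocks = float("inf"), None
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0, *cuts, n)
        blocks = [set(topo_order[a:b]) for a, b in zip(bounds, bounds[1:])]
        bottleneck = max(cost_fn(V_i, blocks) for V_i in blocks)
        if bottleneck < best_cost:
            best_cost, best_blocks = bottleneck, blocks
    return best_cost, best_blocks

# cost_fn closes over the graph data from the earlier sketches, e.g.:
# cost_fn = lambda V_i, blocks: block_cost(
#     V_i, blocks, E, {n: op.work for n, op in V.items()},
#     {n: op.size_out for n, op in V.items()},
#     {n: op.size_param for n, op in V.items()},
#     peak_activations=4.0, fast_mem=80.0,
#     bandwidth=1.0, stream_bandwidth=10.0)
# mtpp_brute_force(["mm1", "relu", "mm2"], k=2, cost_fn=cost_fn)
```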