The Good Tech Companies - YaFSDP - An LLM Training Tool That Cuts GPU Usage by 20% - Is Out Now
Episode Date: June 22, 2024. This story was originally published on HackerNoon at: https://hackernoon.com/yafsdp-an-llm-training-tool-that-cuts-gpu-usage-by-20percent-is-out-now. YaFSDP is an open-source tool that promises to revolutionize LLM training. Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #llm-fine-tuning, #llm-optimization, #llm-training, #gpu-utilization, #what-is-yafsdp, #open-source-tools, #good-company, #imporve-llm-training, and more. This story was written by: @yandex. Learn more about this writer by checking @yandex's about page, and for more stories, please visit hackernoon.com. In a pre-training scenario involving a model with 70 billion parameters, using YaFSDP can save the resources of approximately 150 GPUs. This translates to potential monthly savings of roughly $0.5 to $1.5 million.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
YaFSDP, an LLM training tool that cuts GPU usage by 20%, is out now.
By Yandex. Developing large language models requires substantial investments in time and GPU resources, translating directly into high costs. The larger the model, the more pronounced these challenges become. Recently, Yandex introduced a new solution: YaFSDP, an open-source tool that promises to revolutionize LLM training by significantly reducing GPU resource consumption and training time.
In a pre-training scenario involving a model with 70 billion parameters, using YaFSDP can save the resources of approximately 150 GPUs. This translates to potential monthly savings of roughly $0.5 to $1.5 million, depending on the virtual GPU provider or platform. Yandex has made YaFSDP publicly available on GitHub.
The Challenge of Distributed LLM Training
Training LLMs across multiple GPUs involves complex operations that lead to inefficiencies and high memory consumption. One of the main issues is the need to send and receive massive amounts of data between GPUs. For instance, a typical all_reduce operation must communicate twice as much gradient data as there are network parameters. In the case of a Llama 70B model, this means transferring 280 GB of data per iteration.
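These headline numbers can be reproduced with back-of-envelope arithmetic. The byte widths here are my assumptions (bf16 weights and gradients, fp32 Adam states), not something the story states:

```python
# Back-of-envelope check of the story's numbers for a 70B-parameter model.
# Assumed precisions: bf16 (2 bytes) for weights and gradients,
# fp32 (4 bytes) for Adam master weights and the two moment tensors.
params = 70e9

grad_bytes = params * 2                    # bf16 gradients
allreduce_gb = 2 * grad_bytes / 1e9        # all_reduce moves ~2x the gradient data
print(allreduce_gb)                        # 280.0 GB per iteration

state_bytes_per_param = 2 + 2 + 4 + 4 + 4  # weights + grads + Adam states
total_state_gb = params * state_bytes_per_param / 1e9
print(total_state_gb)                      # 1120.0 GB, i.e. over 1 TB
```

Under these assumptions the arithmetic lands exactly on the 280 GB per-iteration transfer, and on the "over 1 TB" of duplicated state discussed next.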
Furthermore, weights, gradients, and optimizer states are duplicated across GPUs, leading to an enormous memory load. The Llama 70B model with the Adam optimizer requires over 1 TB of memory, far exceeding the typical 80 GB capacity of most GPUs. This redundancy severely slows down the training process and often makes it impractical to fit even moderately sized models into GPU memory.
Introducing YaFSDP
Yandex's YaFSDP offers a highly effective solution to these challenges. By focusing on optimizing memory consumption and eliminating communication bottlenecks, YaFSDP enhances the efficiency of LLM training. It works by sharding layers instead of individual parameters, maintaining efficient communications, and avoiding redundant operations. Additionally, YaFSDP pre-allocates buffers for all required data, ensuring that the Torch allocator does not introduce inefficiencies.
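The pre-allocation idea can be shown with a framework-free toy analog (the sizes are made up for illustration): one contiguous arena is allocated once up front, and every parameter becomes a view into it, so no allocator round-trips happen during training steps:

```python
# Toy analog of pre-allocating one buffer and carving parameter "views"
# out of it, instead of allocating each tensor separately.
param_sizes = [4096, 4096, 11008]   # element counts, illustrative only
itemsize = 2                        # bf16-sized elements (assumption)

arena = bytearray(sum(param_sizes) * itemsize)  # single up-front allocation
views, offset = [], 0
for n in param_sizes:
    nbytes = n * itemsize
    views.append(memoryview(arena)[offset:offset + nbytes])
    offset += nbytes

# Every parameter now aliases the same contiguous arena: writing through
# a view mutates the arena, and no further allocations are needed per step.
views[0][0:1] = b"\x07"
assert arena[0] == 7
```

This is only an analogy for the allocation pattern; YaFSDP itself manages CUDA memory through Torch rather than Python bytearrays.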
YaFSDP operates by utilizing two buffers for intermediate weights and gradients, with odd layers using one buffer and even layers using the other. The weights from different layers are thus stored in the same memory. If the layers have the same structure, the buffers will always be identical in layout. It is crucial to ensure that when you need layer X, the buffer contains the weights for layer X. All parameters are stored in the corresponding memory chunk within the buffer.
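A minimal sketch of this odd/even buffer discipline (my own simplified model, not YaFSDP's code): each layer's weights are gathered into buffer `layer % 2` immediately before use, overwriting the previous occupant of that buffer:

```python
# Toy model of two alternating weight buffers: odd layers share one
# buffer, even layers share the other.
class BufferPool:
    def __init__(self):
        # tracks which layer's weights currently occupy each buffer
        self.resident = {0: None, 1: None}

    def gather(self, layer):
        # stand-in for all_gather-ing this layer's weight shards
        # into its buffer, evicting the previous occupant
        self.resident[layer % 2] = layer

    def use(self, layer):
        # a layer must find its own weights resident before computing
        assert self.resident[layer % 2] == layer, "buffer holds another layer"

pool = BufferPool()
for layer in range(4):
    pool.gather(layer)  # refill the buffer with this layer's weights
    pool.use(layer)     # safe: the buffer now holds layer's weights

print(pool.resident)    # {0: 2, 1: 3} - even/odd layers reuse the same slots
```

The assertion in `use` captures the invariant the text describes: whenever layer X runs, its buffer must contain layer X's weights.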
Memory Consumption
During training, the primary memory consumers are weights, gradients, optimizer states, buffers, and activations. YaFSDP significantly reduces memory consumption by optimizing how these elements are stored and accessed. Weights, gradients, and optimizer states depend on the number of processes, and their per-GPU memory consumption tends to approach zero as the number of processes increases. By sharding these components across GPUs, YaFSDP minimizes duplication and thus reduces memory usage. Buffers consume a constant amount of memory and store intermediate values during computations. Activations depend on the model size and the number of tokens processed per GPU.
Activation Checkpointing
Activation checkpointing is a technique that stores only the necessary activations during the forward pass and recomputes the rest during the backward pass. This reduces the memory footprint significantly, as only essential data is kept. For example, in training a Llama 2 70B model with a batch size of 8192 tokens, activation storage can be reduced from over 110 GB to just 5 GB. However, this approach introduces additional computational overhead, which YaFSDP makes it possible to avoid by disabling activation checkpointing for some layers, thanks to its memory optimizations.
Communication Optimization
YaFSDP improves GPU communication efficiency by ensuring that data is transferred only when necessary and by overlapping communication with computation. It utilizes CUDA streams to manage concurrent computations and communications effectively.
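The event-based synchronization between the two streams can be modeled as a dependency graph and mechanically checked for cycles. This is a pure-Python illustration of the argument, not actual YaFSDP code: layer `i` uses buffer `i % 2`, so each `all_gather(i)` must wait for `fwd(i-2)` (the buffer's previous user), and each `fwd(i)` must wait for its own `all_gather(i)`:

```python
# Toy model of the two-stream schedule: "gather" ops run on the
# communication stream, "fwd" ops on the computation stream, and events
# impose the two wait conditions from the text.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def schedule(n_layers):
    ts = TopologicalSorter()
    for i in range(n_layers):
        ts.add(("fwd", i), ("gather", i))          # fwd(i) waits on gather(i)
        if i >= 2:                                 # gather(i) waits on the
            ts.add(("gather", i), ("fwd", i - 2))  # buffer's previous user
    # static_order raises CycleError if the wait graph had a cycle,
    # i.e. if the conditions could ever deadlock
    return list(ts.static_order())

order = schedule(4)
assert all(order.index(("gather", i)) < order.index(("fwd", i)) for i in range(4))
assert order.index(("fwd", 0)) < order.index(("gather", 2))
```

Because the wait graph is acyclic for any number of layers, a valid execution order always exists, which is exactly the no-deadlock argument made below.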
The tool uses two streams: a computation stream and a communication stream. Events synchronize these streams, ensuring that operations are executed in the correct order without introducing deadlocks. The forward pass on the third layer doesn't start until the all_gather operation is completed (condition 1). Likewise, the all_gather operation on the third layer won't begin until the forward pass on the first layer, which uses the same buffer, is completed (condition 2). Since there are no cycles in this scheme, deadlock is impossible.
Experimental Results and Performance Gains
The implementation of
YaFSDP has shown remarkable improvements in training efficiency. In a pre-training scenario with a model having 70 billion parameters, YaFSDP was able to save the resources of approximately 150 GPUs. This translates into significant monthly cost savings, ranging from $0.5 to $1.5 million, depending on the virtual GPU provider or platform.
YaFSDP reduces training time by up to 26% compared to existing methods like FSDP and optimizes memory usage, making it possible to train larger models more efficiently.
Yandex has made YaFSDP publicly available on GitHub. ML engineers can leverage this tool to enhance the efficiency of their LLM training processes. By open-sourcing YaFSDP, Yandex aims to foster innovation and collaboration in the AI community, enabling developers to train models faster and more cost-effectively.
YaFSDP represents a significant advancement in LLM training. By addressing the critical challenges of memory consumption and communication inefficiencies, it enables faster and more efficient training of large language models.
Thank you for listening to this HackerNoon story, read by Artificial Intelligence.
Visit hackernoon.com to read, write, learn and publish.