Optimization Strategies for Enhancing Resource Efficiency in Transformers & Large Language Models


Abstract

Advancements in Natural Language Processing rely heavily on Transformer architectures, whose improvements come at substantial resource costs due to ever-growing model sizes. This study explores optimization techniques, including quantization, knowledge distillation (KD), and pruning, focusing on energy and computational efficiency while retaining performance. Among standalone methods, 4-bit quantization significantly reduces energy use with minimal accuracy loss. Hybrid approaches, such as NVIDIA's Minitron method, which combines KD with structured pruning, demonstrate further promising trade-offs between size reduction and accuracy retention. A novel optimization framework is introduced, offering a flexible basis for comparing the various methods. Through this investigation of compression methods, we provide insights for developing more sustainable and efficient LLMs, drawing attention to the often-overlooked concern of energy efficiency.
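For illustration only, the following is a minimal sketch of what 4-bit post-training quantization of an LLM can look like in practice, using the Hugging Face transformers integration with bitsandbytes (NF4 weights, bfloat16 compute). The model identifier is a placeholder, not a model or configuration taken from the study.

```python
# Hedged sketch: 4-bit (NF4) quantized loading of a causal LM via
# transformers + bitsandbytes. Requires a CUDA GPU and the bitsandbytes package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/your-llm"  # placeholder checkpoint, substitute any causal LM

# Common 4-bit configuration: NF4 weight quantization with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Weights are stored in 4 bits while activations are computed in bfloat16,
# reducing memory footprint (and typically energy use) versus full precision.
inputs = tokenizer("Transformers can be compressed by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```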
