Google AI introduced the Pathways Language Model (PaLM), a 540-billion-parameter, dense decoder-only Transformer model trained with the Pathways system, which makes it possible to train a single model across multiple TPU v4 Pods.
The Google researchers evaluated PaLM on hundreds of language understanding and generation tasks and found that it achieved state-of-the-art few-shot performance on most of them, in many cases by significant margins.
PaLM demonstrates the first large-scale use of the Pathways system, scaling training to 6,144 TPU v4 chips, the largest TPU-based configuration used for training to date. It also achieves a training efficiency of 57.8% hardware FLOPs utilisation, the highest yet reported for LLMs at this scale. That efficiency comes from a combination of the parallelism strategy and a reformulation of the Transformer block that allows the attention and feed-forward layers to be computed in parallel, enabling speedups from TPU compiler optimisations.
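The reformulated block is easiest to see next to the standard one. The sketch below is a minimal NumPy illustration, assuming placeholder `attention` and `mlp` callables rather than PaLM's actual implementation: in the standard block the feed-forward layer consumes the attention output, while in the parallel variant both sub-layers read the same normalised input, so the compiler can fuse and overlap their matrix multiplications.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # LayerNorm over the last (feature) dimension; learned scale/bias omitted for brevity.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def serial_block(x, attention, mlp):
    # Standard Transformer block: the MLP reads the attention output,
    # so the two sub-layers must run one after the other.
    x = x + attention(layer_norm(x))
    return x + mlp(layer_norm(x))

def parallel_block(x, attention, mlp):
    # PaLM-style "parallel" block: attention and MLP both read the same
    # layer-normed input, so their branches can be computed concurrently.
    y = layer_norm(x)
    return x + attention(y) + mlp(y)

if __name__ == "__main__":
    # Tiny smoke test with stand-ins for the real sub-layers.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(2, 8, 16))        # (batch, sequence, features)
    attn = lambda h: 0.1 * h               # placeholder attention
    mlp = lambda h: 0.1 * np.tanh(h)       # placeholder feed-forward
    print(serial_block(x, attn, mlp).shape, parallel_block(x, attn, mlp).shape)
```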
PaLM was trained using a combination of English and multilingual datasets that include high-quality web documents, books, Wikipedia, conversations, and GitHub code. The researchers also created a “lossless” vocabulary that preserves all whitespace (especially important for code), splits out-of-vocabulary Unicode characters into bytes, and splits numbers into individual tokens, one for each digit.
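The digit-splitting and byte-fallback behaviour can be sketched in a few lines. The toy function below is purely illustrative (the real vocabulary is a SentencePiece model, and the function name and byte-token format here are made up): it shows numbers breaking into one token per digit and out-of-vocabulary text degrading to UTF-8 bytes instead of an unknown token.

```python
def lossless_tokenise(tokens, vocab):
    # Toy sketch of two properties of PaLM's "lossless" vocabulary;
    # not the real tokeniser.
    out = []
    for tok in tokens:
        if tok.isdigit():
            out.extend(tok)   # numbers: one token per digit, "2048" -> "2","0","4","8"
        elif tok in vocab:
            out.append(tok)   # in-vocabulary pieces pass through unchanged
        else:
            # out-of-vocabulary text falls back to raw UTF-8 bytes, so no
            # input is ever collapsed to an "unknown" token
            out.extend(f"<0x{b:02X}>" for b in tok.encode("utf-8"))
    return out

print(lossless_tokenise(["value", "=", "2048", "€"], vocab={"value", "="}))
# ['value', '=', '2', '0', '4', '8', '<0xE2>', '<0x82>', '<0xAC>']
```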
PaLM showed breakthrough capabilities on numerous difficult tasks. Compared with other large language models, PaLM 540B surpassed their few-shot performance on language understanding and generation across 29 widely used English natural language processing (NLP) tasks. In addition, PaLM demonstrated impressive natural language understanding and generation capabilities on several BIG-bench tasks.
PaLM also exhibited breakthrough capabilities on reasoning tasks that require multi-step arithmetic or common-sense reasoning, tasks on which prior LLMs, such as Gopher, saw less benefit from increased model scale.
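Google's announcement attributes much of this gain to pairing model scale with chain-of-thought prompting, in which the few-shot exemplars in the prompt spell out intermediate reasoning steps before the final answer. A minimal sketch of such a prompt is shown below; the worked exemplar and the question are illustrative, not taken from PaLM's evaluation data.

```python
# Illustrative few-shot chain-of-thought prompt for multi-step arithmetic.
# The worked exemplar is the widely cited "tennis balls" problem, shown here
# only to illustrate the prompting style, not as one of PaLM's actual prompts.
PROMPT = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A:"""

print(PROMPT)
```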
Source: indiaai.gov.in