A large language model (LLM) is a deep learning model: a neural network with a very large number of parameters (billions of weights or more), trained on large quantities of unlabelled text via self-supervised learning. There is no strict definition of what counts as an LLM; the term usually denotes a language model with a high parameter count (some models, such as GPT-3, have over 100 billion parameters).
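The key property of self-supervised learning is that no human labels are needed: the training target at each position is simply the next token of the raw text itself. The toy sketch below illustrates that idea with a bigram count table standing in for the neural network (the corpus and function names are invented for illustration); a real LLM replaces the table with billions of learned weights.

```python
from collections import Counter, defaultdict

# Unlabelled "training corpus" (a toy stand-in for web-scale text).
text = "the cat sat on the mat the cat ran"
tokens = text.split()

# Self-supervision: each token's "label" is the token that follows it,
# so the (input, target) pairs come from the text itself.
bigrams = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    bigrams[cur][nxt] += 1

def predict_next(token):
    """Return the most frequently observed successor of `token`."""
    return bigrams[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The same objective, next-token prediction over unlabelled text, is what LLMs like GPT are trained on; only the model capacity and data scale differ.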