LoRA AI Models: Low-Rank Adaptation for More Efficient Fine-Tuning
Pre-training a model is typically a labor-intensive, time-consuming, and expensive endeavor. However, pre-trained AI models need to be further tuned to perform specific tasks. For instance, an AI will not be able to generate a story in a certain style if it has not been previously trained on texts in that style.
This is where Low-Rank Adaptation models, or LoRA models, come into play. LoRA models offer an efficient solution to the challenge of adapting pre-trained models to new data in machine learning (ML). In this article, we will explore what LoRA models are, how they work, and their applications, with some examples of their use.
What is a LoRA Model?
Low-Rank Adaptation Models, or LoRA models, are a class of ML models designed to adapt and learn from new data efficiently. They are relatively small models that apply minor modifications to standard checkpoint models to achieve better efficiency and adaptability for specific tasks.
Unlike traditional models that require extensive retraining when new data arrives, LoRA models are engineered to adjust dynamically to the evolving information landscape while keeping computational complexity low. They achieve this through efficient techniques in data representation and adaptability.
LoRA, or Low-Rank Adaptation, is an approach to parameter-efficient fine-tuning of Large Language Models. In its early days, the technique was applied only to LLMs, but LoRA training is now used for image-generation models such as Stable Diffusion as well. In essence, such methods produce a fine-tuned model while training far fewer parameters.
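To make this concrete, here is a minimal sketch of what applying LoRA to a language model can look like with the Hugging Face peft library. The model name, target modules, and hyperparameters below are illustrative assumptions, not recommendations:

```python
# A minimal sketch of parameter-efficient fine-tuning with LoRA,
# using the Hugging Face peft library. Model choice and hyperparameters
# are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # attention projection layers in GPT-2
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Reports roughly 0.2% of parameters as trainable; the rest stay frozen.
```

Only a fraction of a percent of the parameters end up trainable, which is the whole point of the technique.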
Why Do We Need a Fine-Tuning Process?
Fine-tuning is an integral component of model training because it allows a model to adapt to a particular type of application or project. Before discussing why it is necessary, we need to establish what fine-tuning is.
The Essence of Fine-Tuning
Fine-tuning in neural networks is a process where you take a pre-trained model, which has already learned useful features or knowledge from a large dataset, and then further train it on a smaller, task-specific dataset. This task could be anything from image classification to natural language understanding. It is, in effect, continued training of the same model architecture, but on new, specialized data.
Fine-tuning involves making small, focused adjustments to the pre-trained model's weights to create fine-tuned weights that are specialized for a particular task. Weights in a neural network are adjustable parameters that control the flow of information and play a crucial role in how the network learns and makes predictions. Their values are learned from data to improve the network's performance on a specific task.
Instead of training a new model from scratch for a specific task, you adapt or fine-tune the pre-trained model by modifying its parameters to better suit the new task. The pre-trained model's existing knowledge, represented in its learned parameters, serves as a valuable starting point. The fine-tuning process makes small changes to this knowledge to make it more relevant and accurate for its new purpose.
For example, you might start with a pre-trained image recognition model that knows about common objects. Then, with fine-tuning, you can adapt it to recognize specific types of flowers. The pre-trained model already understands things like edges, colors, and shapes, so it's easier to teach it to recognize flower types.
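Sketched below, under the assumption of a torchvision ResNet-18 backbone and five hypothetical flower classes, is what such an adaptation can look like in code:

```python
# A minimal fine-tuning sketch with torchvision: reuse a pre-trained
# image model and adapt it to classify 5 flower types. The class count
# and learning rate are illustrative assumptions.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone: it already knows edges, colors, shapes.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with a fresh one for 5 flower classes.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's weights will be updated during fine-tuning.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```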
Fine-tuning is a form of transfer learning, where knowledge gained from one task is transferred and applied to another related task. It's inspired by the idea that learning from one experience can help improve performance on a new, yet similar, experience. The model retains the knowledge learned from the source data, but it becomes specialized for the target task. Let's take a look at how fine-tuning works with LLMs as an example.
How Does Fine-Tuning Work?
Fine-tuning large models can be a resource-demanding process. However, it lets you leverage the impressive language understanding and image generation capabilities of these models for task-specific applications, which is why it remains worthwhile.
For example, huge models like LLaMA, Pythia, and MPT-7B have already learned a great deal about language from vast amounts of text. Because they come with pre-trained weights, data scientists can take these models and teach them to do specific tasks.
These models consist of many layers, and each layer has trainable parameters that can shift slightly to learn new things. When a large model is taught a new task, it adjusts the values of these parameters based on the new data. During fine-tuning, data scientists feed the model examples from a new dataset and it guesses what comes next. After predicting the next token, the model compares its output with the true data, also known as the ground truth.
In this way, the Large Language Model adjusts the values of its weights so that its predictions move closer to the ground truth. This process involves multiple iterations: the model cycles through these stages over and over to get better at its specifically tailored purpose. After many such passes, which can take a very long time, the model is ready for its intended use, for example as a banking or medical chatbot.
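A single fine-tuning step might look like the simplified sketch below; the model, dataloader, and optimizer are assumed to already exist, and real training loops add details like attention masks and learning-rate schedules:

```python
# One simplified fine-tuning step for a causal language model.
# `model`, `dataloader`, and `optimizer` are assumed to be set up already.
import torch.nn.functional as F

for batch in dataloader:
    logits = model(batch["input_ids"]).logits  # the model's guesses
    # Shift so each position predicts the *next* token (the ground truth).
    preds = logits[:, :-1, :]
    truth = batch["input_ids"][:, 1:]
    loss = F.cross_entropy(preds.reshape(-1, preds.size(-1)), truth.reshape(-1))
    loss.backward()        # compute gradients via backpropagation
    optimizer.step()       # nudge the weights toward better predictions
    optimizer.zero_grad()
```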
However, if fine-tuning can be performed on the entire model, you may wonder why LoRA exists.
Why Is LoRA Necessary?
Why did methods like LoRA emerge? The simple answer is that standard, full-parameter fine-tuning is difficult in all sorts of ways. The final fine-tuned model comes out as bulky as its pre-trained version, and if the results of training are not to your liking, the entire model has to be re-trained. With the advent of the LoRA approach, by contrast, fine-tuning can be accomplished even on the consumer GPUs found in ordinary PCs.
Pre-trained models, such as large deep neural networks, can have millions or even billions of parameters. Fine-tuning a model with a large number of parameters can be computationally expensive. It requires significant processing power, often involving powerful GPUs and other specialized hardware. Storing such models also entails a significant amount of hard drive space. The cost of electricity, hardware maintenance, and the equipment itself must be taken into account as well.
Fine-tuning requires loading and working with these massive models, which puts a heavy burden on GPU memory. It also involves reading and processing a lot of data, which can become a bottleneck in the GPU's processing pipeline: loading large datasets from storage into GPU memory can be slow, especially for very large models.
During fine-tuning, the model adjusts its parameters by computing gradients through backpropagation. Backpropagation, short for "backward propagation of errors," is the algorithm used to compute gradients and update the model's parameters. The gradient is a vector of partial derivatives that indicates how each parameter should change to reduce the error between predictions and actual targets. Backpropagation involves many multiplications and memory operations, which can be slow even on GPUs.
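Here is a tiny, self-contained example of that mechanism: backpropagation computes the gradient of the loss with respect to a single weight, and the weight is then nudged a small step against that gradient:

```python
# A tiny illustration of backpropagation and one gradient update.
import torch

w = torch.tensor(2.0, requires_grad=True)  # one trainable parameter
x, target = 3.0, 10.0

prediction = w * x                  # forward pass: predicts 6.0
loss = (prediction - target) ** 2   # squared error: (6 - 10)^2 = 16.0

loss.backward()                     # backprop: dloss/dw = 2*(6-10)*3 = -24
with torch.no_grad():
    w -= 0.01 * w.grad              # gradient step: w moves from 2.0 to 2.24
    w.grad.zero_()                  # clear the gradient for the next step
```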
Moreover, fine-tuning deep learning models can tie up the GPU for extended periods, making it less available for other tasks. Such full-parameter fine-tuning can strain available resources, leading to resource contention if multiple processes or tasks are running on the same hardware simultaneously.
Full-Parameter Fine-Tuning Is More Demanding Than LoRA
It is possible to perform full-parameter fine-tuning, but not everyone can afford it. For the most part, it is done by big corporations with vast resources and capabilities. These organizations often have the financial means to invest in high-end hardware, employ experienced teams of machine learning experts, and access vast amounts of data for training.
Sure, it can lead to excellent results. But as previously mentioned, full-parameter fine-tuning is resource-intensive and expensive, which makes it less accessible to smaller companies, startups, and individuals with more limited budgets, computational resources, and access to large datasets.
With pre-trained models, a complete or full fine-tuning, where all parameters are re-trained, makes less sense. Here's why: large models, like those for language or images, learn many general ideas about their area of expertise. They're like jacks of all trades, knowing a bit of everything, and can handle different jobs quite well without much extra training.
But when we want them to get really good at one specialized task or deal with certain data, we don't need to teach them everything from scratch. They already have most of the tools they need to learn more. We just need to tweak a few things to make them experts. So, instead of making a lot of big changes, we can make small, focused adjustments. These small adjustments can be thought of as a simple set of changes.
This is where parameter-efficient methods like low-rank adaptation come in. Data scientists apply LoRA to reduce the computational and memory requirements during fine-tuning of neural networks. All of these improvements help to facilitate and speed up such additional training processes.
For those with constrained resources, there are alternative approaches, such as LoRA, which allow them to benefit from pre-training and achieve good results with less computational cost and data requirements.
The LoRA Approach to Fine-Tuning
To understand how LoRA works, let's look at how the weights in a model are organized. Think of the model as a massive collection of big spreadsheets, or matrices, filled with numbers. These matrices hold the model's parameters, which represent its knowledge of language.
Each spreadsheet has a "rank," which tells us how many unique columns it has. A unique column is like a special piece of information that can't be produced by combining other columns; such columns are called linearly independent. Dependent columns, by contrast, are not unique: they can be made by mixing other columns.
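A quick numerical illustration of this idea: in the matrix below, the third column is simply the sum of the first two, so it carries no unique information and the rank comes out as 2 rather than 3:

```python
# Rank counts the linearly independent columns of a matrix.
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])  # third column = first column + second column

print(np.linalg.matrix_rank(A))  # prints 2, not 3: one column is redundant
```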
The key idea behind LoRA is that when we adapt our base model to a specific task, we don't need all the information in those spreadsheets (matrices). We can drop some columns and still keep most of the useful data.
In other words, the aim of this approach is to create a LoRA model with a compressed number of columns, i.e., lower-rank matrices. That way the number of parameters decreases and they become easier to manage. These LoRA parameters are far fewer in number than the base model's original weights.
A matrix containing all the information is called a full-rank weight matrix. To put it simply, a full-rank weight matrix has a complete set of unique and independent parameters. Each parameter in the matrix contributes uniquely to the model's ability to learn and represent complex patterns in data. This makes it very expressive but also potentially large and computationally intensive, as it may contain many parameters.
Full-rank weight matrices are common in deep learning models, where their high dimensionality allows the model to capture a wide range of features and relationships in the data. However, they can also be computationally expensive to work with, both in terms of training and inference. So, instead of using one big full-rank weight matrix, LoRA uses two smaller, less complex ones.
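The savings are easy to quantify. Replacing a d × k update matrix with two factors of rank r cuts the trainable parameter count from d·k down to r·(d + k); the layer dimensions below are illustrative:

```python
# Parameter count: one full-rank update vs. LoRA's two low-rank factors.
d, k, r = 4096, 4096, 8           # illustrative layer dimensions and rank

full_rank_params = d * k          # 16,777,216 trainable parameters
lora_params = r * (d + k)         # 65,536 trainable parameters

print(f"reduction: {full_rank_params / lora_params:.0f}x")  # 256x fewer
```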
LoRA freezes the pre-trained model and its original weights, then adds smaller trainable matrices to every layer of the model. When you freeze a layer, you prevent its weights from being updated during the fine-tuning process. Instead, the layer retains the values it had when it was pre-trained on the original task.
These small matrices help the model adjust to a variety of applications while all original parameters stay unchanged. When we fine-tune with LoRA, only the smaller matrices change; the initial weight matrix and its parameters remain untouched. The small matrices are much easier to work with and train, so the process is faster and cheaper.
Because training focuses on updating these two smaller matrices instead of the complete initial weight matrix, the efficiency of the computation can be dramatically improved.
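The sketch below shows one way such a layer can be structured in PyTorch. It is a simplified illustration rather than a reference implementation: the original linear layer is frozen, and only the two small matrices A and B are trained:

```python
# A minimal LoRA linear layer sketch: the pre-trained weights are frozen;
# only the two small low-rank matrices A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False   # freeze the original layer
        # A starts small and random; B starts at zero, so the initial
        # update is zero and the model behaves exactly like the base model.
        self.A = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Original output plus the low-rank update: W x + scale * B (A x)
        return self.linear(x) + self.scale * (x @ self.A.T @ self.B.T)
```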
Creating a LoRA Model
There are many ways to make your own LoRA model, but the basic steps for making one are as follows:
Gather data for training. For example, it could be 5-10 photos, but collecting 50-100 images will give the best results, particularly if you want to teach the model to replicate a certain style;
Apply a trainer for training the LoRA model. There are plenty available, for example, Kohya Trainer or LoRA training on Replicate.com. The instructions will differ depending on the trainer you choose;
Use the trained LoRA model. Once you have created a LoRA model, you can load it into a base model like Stable Diffusion and generate images, as sketched after this list. Sometimes activating a LoRA requires special trigger words in the text prompt; these were incorporated during training and are needed for the LoRA model to function correctly.
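For that last step, loading a trained LoRA into Stable Diffusion with the Hugging Face diffusers library might look like the sketch below; the model ID, file path, and trigger word are placeholders:

```python
# Loading a trained LoRA into a Stable Diffusion pipeline with diffusers.
# Model ID, LoRA path, and trigger word are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU is available

# Attach the LoRA weights on top of the frozen base model.
pipe.load_lora_weights("./my_lora_model")

# Some LoRAs only activate when their trigger word appears in the prompt.
image = pipe("a portrait in watercolor_style, soft colors").images[0]
image.save("portrait.png")
```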
Benefits of LoRA Fine-Tuning Compared to Full-Parameter Fine-Tuning
Compared to the conventional fine-tuning approach, LoRA possesses several distinct advantages:
Computational Efficiency
Fine-tuning the entire model can be computationally expensive, especially when dealing with huge models with millions or billions of parameters. LoRA reduces the computational cost by working with low-rank matrices, making it more feasible for resource-constrained environments.
It makes optimal use of computational resources such as CPU processing power, GPU capabilities, and memory by decreasing the number of parameters that need to be adjusted or trained. LoRA is designed to be resource-efficient, making it a more viable option for organizations with limited computational resources and smaller budgets.
Knowledge Preservation
LoRA retains the general knowledge captured during pre-training, which is essential for applications where the model's broad understanding is beneficial. Knowledge preservation can be a key motivation for using LoRA. Instead of completely retraining or fine-tuning a model from scratch, which might result in a loss of valuable pre-trained knowledge, LoRA allows you to adapt the model while minimizing the loss of this knowledge.
Reduced Catastrophic Forgetting
When fine-tuning a pre-trained model on a new task, there is a risk that the model might overfit the new data and lose some of the knowledge it gained during pre-training. This is known as catastrophic forgetting. The model's weights are updated primarily for the new task, and it might lose its knowledge of previous tasks. LoRA can potentially mitigate catastrophic forgetting by keeping pre-trained weights frozen so they are not changed during fine-tuning.
LoRA Model Portability
Because it trains a reduced number of parameters and keeps the original weights frozen, a LoRA model is compact and portable. The extent of the rank reduction affects the final model size: the more the rank is reduced, the smaller the model. In any case, LoRA models take up far less space than a fully fine-tuned model. This lets a user keep, for example, a variety of LoRA models for generating images in different styles without filling up local storage, while retaining only one original base model with its initial weights.
LoRA Performance
When you fine-tune a pre-trained model using LoRA, you aim to balance task-specific performance with model efficiency. As already mentioned, LoRA constrains the weight updates to low-rank matrices to make fine-tuning more efficient and memory-friendly.
While this reduction in capacity might lead to a slight degradation in task performance compared to fully fine-tuned models, it's often the case that the difference in performance is relatively small, especially for less complex tasks. At the same time, the savings in terms of computational resources can be substantial.
LoRA Models Examples
Stable Diffusion models are a class of generative models employed in tasks related to image synthesis, style transfer, and image-to-image translation. These models are typically pre-trained on extensive datasets and have a remarkable capacity to capture complex data distributions.
The success of Stable Diffusion models comes at the cost of large file sizes. These models often require significant storage space, making it challenging for users to manage and store multiple models, especially when they are dealing with limited disk space and resource constraints.
LoRA offers an approach to tackle this challenge. LoRA models are, in essence, small Stable Diffusion models. They use a training technique that applies small changes to the initially huge models, producing files that are substantially smaller. The file size of LoRA models typically ranges from 2 MB to 800 MB, significantly less than the original model checkpoints.
These models can be added to the base Stable Diffusion model to produce more specific images, for instance with more details or in a particular style. Any Stable Diffusion model supports LoRA models; the important thing is to make sure they are compatible. The LoRA and the base model must be of the same version. For example, if you have SD v2.x installed, the LoRA must be trained on SD v2.x.
It is best to look for LoRA models on HuggingFace.co or on Civitai.com. Here are some interesting models:
Detail Tweaker LoRA. Allows users to add details to realistic photos and artwork, including anime;
Ikea Instructions LoRA. Creates instructions and step-by-step guides similar to those from Ikea from nearly any text query;
Epi_noiseoffset. Creates high-quality, lifelike photos and artwork with an emphasis on increased contrast and color saturation;
Watercolor Style - SDXL LoRA. Generates images as if they were painted with watercolors;
VanGoghSketcher SD XL 1.0. Creates sketches similar to Van Gogh's work;
Product Design (Elegant minimalism-eddiemauro) LoRA. Creates photographs of elegant, austere objects in a minimalist style;
Doctor Diffusion's Tarot Card Crafter. As the name suggests, this LoRA model generates tarot cards.
Conclusion
The existence of LoRA is justified by the fact that it serves as a valuable tool that addresses specific challenges associated with fine-tuning a deep learning network. While traditional fine-tuning allows for updates to the entire model, LoRA offers a different approach by tweaking some parts of it. It enables the reduction of model size by reducing the rank of weight matrices while retaining valuable knowledge from the pre-trained model.
The reduced size lowers the computational power needed during fine-tuning. LoRA offers the flexibility to fine-tune different parts of the model to different degrees, enabling a more focused adaptation process. You can download a ready-made LoRA model, or build your own customized version, which is also relatively faster and easier compared to full fine-tuning. LoRA models simply help deliver great results faster and more cost-effectively.
Most likely, this technology will continue to improve and surprise us with its capabilities. Technology is evolving so fast that even the average owner of a reasonably powerful PC can now build and use AI models. Even though such models are not fully standalone and must operate in conjunction with a base model, it is a start.
The performance of LoRA models may be comparable to, or slightly below, that of fully fine-tuned models. However, the substantial advantages of LoRA models, such as reduced processing memory, less hard disk storage, and preservation of pre-trained knowledge resulting in less catastrophic forgetting, may be decisive for many enterprises.
About Toloka
Toloka is a European company based in Amsterdam, the Netherlands that provides data for Generative AI development. Toloka empowers businesses to build high quality, safe, and responsible AI. We are the trusted data partner for all stages of AI development from training to evaluation. Toloka has over a decade of experience supporting clients with its unique methodology and optimal combination of machine learning technology and human expertise, offering the highest quality and scalability in the market.
Article written by:
Toloka Team
Updated:
Oct 2, 2023