What is MLOps? Learn the Basics

MLOps is a set of practices, guidelines, and tools that unifies machine learning system development and operations. It seeks to automate, streamline, and optimize the end-to-end ML lifecycle by applying best practices from software development, ensuring smoother transitions from experimentation to production and more efficient, robust ML systems.

How does MLOps work?

A machine learning workflow consists of four major stages: data processing, model training, model inference, and model deployment. MLOps automates this process. Let's walk through it step by step, starting from the two main ingredients of any ML system: data and the model.

To store the data we can use MinIO or an Amazon S3 bucket, and we integrate DVC (Data Version Control) with MinIO so the data can be versioned. Once the data part is in place, we need pipeline code for the ML workflow, and for this we can use Kubeflow. In Kubeflow we create a volume and connect it to the MinIO server; DVC generates a (.dvc) file that tracks each data version. The pipeline code, written inside the Kubeflow volume, covers the whole process: it fetches the latest data from MinIO, preprocesses it, trains the model, and tests it. Kubeflow has a very user-friendly UI, so you can visualize and modify your pipeline easily, change hyperparameters directly from the UI for rapid experiments, and compare runs from two or more experiments.

With Kubeflow and MinIO ready, the remaining step is automating the process, and for this we can use GitOps. We keep our models in Git for model versioning, and we can track the data through Git as well. Git has limited storage, and this is exactly where data versioning helps: DVC produces a small (.dvc) file that is essentially a hash of the data, so the .dvc file goes into Git while the actual data stays in MinIO. For GitOps we write GitHub Actions workflows, which are (.yaml) files specifying that whenever a change lands on a particular branch, the Kubeflow pipeline is triggered automatically, and you can then inspect the results in Kubeflow. The basic idea is that any change, whether in code or in data, automatically triggers the pipeline; if you want to push the resulting model back to Git after the pipeline runs, you can adapt the workflow accordingly.
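To make the pipeline part concrete, here is a minimal sketch, written with the KFP v2 SDK, of what such a Kubeflow pipeline could look like: one component fetches the latest data from MinIO over its S3-compatible API, and another preprocesses, trains, and tests a model, with a hyperparameter exposed so it can be changed per run from the UI. The MinIO endpoint, credentials, bucket name, and the "label" column are illustrative assumptions, not values from this setup.

```python
from kfp import dsl


@dsl.component(base_image="python:3.11", packages_to_install=["boto3"])
def fetch_data(dataset: dsl.Output[dsl.Dataset]):
    """Download the latest training data from a MinIO bucket (S3-compatible API)."""
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio-service:9000",  # assumed in-cluster MinIO endpoint
        aws_access_key_id="minioadmin",             # placeholder credentials
        aws_secret_access_key="minioadmin",
    )
    s3.download_file("ml-data", "datasets/train.csv", dataset.path)


@dsl.component(base_image="python:3.11", packages_to_install=["pandas", "scikit-learn"])
def train_and_test(dataset: dsl.Input[dsl.Dataset], model: dsl.Output[dsl.Model],
                   n_estimators: int):
    """Preprocess, train, and evaluate a model, then save it as the pipeline output."""
    import pickle

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv(dataset.path).dropna()
    X, y = df.drop(columns=["label"]), df["label"]  # assumes a "label" column exists
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    clf = RandomForestClassifier(n_estimators=n_estimators).fit(X_tr, y_tr)
    print("holdout accuracy:", clf.score(X_te, y_te))

    with open(model.path, "wb") as f:
        pickle.dump(clf, f)


@dsl.pipeline(name="train-on-latest-data")
def training_pipeline(n_estimators: int = 100):
    data = fetch_data()
    train_and_test(dataset=data.outputs["dataset"], n_estimators=n_estimators)
```

Compiling this with kfp.compiler.Compiler().compile(training_pipeline, package_path="pipeline.yaml") produces a YAML file that can be uploaded through the Kubeflow UI or triggered from CI.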

  • For data storage and versioning, we use MinIO with DVC (Data Version Control).

  • For pipeline code and visualization, we use Kubeflow.

  • For model versioning we use Git.

  • For automation we use GitHub Actions (a sketch of the trigger script such a workflow could run follows this list).
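As a sketch of the automation step, a GitHub Actions workflow can simply check out the repository and run a short Python script that starts a Kubeflow run through the KFP client. The host URL, experiment name, and file names below are hypothetical:

```python
# trigger_pipeline.py -- a hypothetical script a GitHub Actions workflow could run
# so that a push (new code or a new .dvc file) starts a Kubeflow pipeline run.
import kfp

client = kfp.Client(host="http://kubeflow.example.com/pipeline")  # assumed KFP endpoint

run = client.create_run_from_pipeline_package(
    pipeline_file="pipeline.yaml",       # compiled pipeline, e.g. from the sketch above
    arguments={"n_estimators": 200},     # hyperparameters for this run
    run_name="ci-triggered-run",
    experiment_name="github-actions",
)
print("started run:", run.run_id)
```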

Usage of MLOps

  • Versioning of models and data.

  • Automated model training and deployment.

  • Continuous model monitoring.

  • Reduced manual effort and improved productivity.

Main components of MLOps

  • Data and model version control.

  • Continuous Integration/Continuous Deployment (CI/CD).

  • Recording details of model training runs, including hyperparameters, performance metrics, and the associated datasets.

  • Packaging models and their dependencies in containers for consistent deployment.

  • Monitoring and optimizing the cost of model training, deployment, and infrastructure.

  • Scheduled runs: you can set a time in Kubeflow at which the pipeline is triggered automatically (see the sketch after this list).

  • The whole process can be visualized and carried out easily through the UI.

  • In MinIO you can upload data either through the UI or from code.

  • To use DVC with a MinIO (S3-compatible) remote, you only need to install it with S3 support: pip install "dvc[s3]".
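For the scheduled-run bullet above, here is a minimal sketch of creating a recurring Kubeflow run with a cron expression through the KFP client; the host, experiment name, and schedule are assumptions, and attribute names can differ slightly between KFP SDK versions.

```python
# Sketch: schedule a Kubeflow pipeline to run automatically at a set time.
import kfp

client = kfp.Client(host="http://kubeflow.example.com/pipeline")  # assumed endpoint

experiment = client.create_experiment(name="nightly-retraining")
client.create_recurring_run(
    experiment_id=experiment.experiment_id,  # ".id" on older KFP SDKs
    job_name="nightly-retrain",
    cron_expression="0 0 2 * * *",           # every day at 02:00 (KFP cron includes seconds)
    pipeline_package_path="pipeline.yaml",   # compiled pipeline from earlier
    params={"n_estimators": 100},
)
```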

Why do we need MLOps?

The simple answer is that without MLOps there is a lot of manual work: whenever something changes, we have to retrain the model, run inference, and push the model to Git by hand. With MLOps, this happens automatically. Without it we would also have to maintain a spreadsheet to compare runs, whereas with MLOps all experiments and their records live on one platform, Kubeflow. This encourages comprehensive model documentation, making it easier for teams to understand, maintain, and troubleshoot machine learning systems. MLOps also helps manage sensitive data and ensure regulatory compliance, and it optimizes costs by automating resource allocation, scaling, and the efficient use of cloud resources during model training and deployment. For these reasons, MLOps is essential for organizations and teams that rely on machine learning models to make data-driven decisions.

Benefits of MLOps

  • Almost every task is automated.

  • Rapid experimentation from the UI, without touching the code.

  • All the experiments under one platform.

  • It is a user-centric approach which aims to improve user experiences by ensuring that models are always up-to-date and perform optimally in production.

  • It allows for the efficient scaling of machine learning models to handle larger datasets and increased workloads.

  • It includes feedback loops to collect user feedback and data for continuous model improvement and retraining.

Difference between MLOps and DevOps

| MLOps | DevOps |
| --- | --- |
| Used for machine learning projects; covers data preparation, model training, testing, deployment, and monitoring. | Focused mainly on application development, testing, and deployment. |
| Handles versioning of data and models in addition to code. | Typically versions source code only, not data or models. |
| The primary artifacts are machine learning models, data pipelines, and feature engineering processes. | The primary artifacts are source code, application binaries, configuration files, and infrastructure as code. |
| Monitoring emphasizes model performance, data drift, and concept drift, using ML-specific metrics. | Monitoring covers application performance, system metrics, and user experience, using traditional IT metrics. |
| Tools and technologies are ML-specific: TensorFlow, PyTorch, scikit-learn, and model-serving frameworks. | Tools and technologies are CI/CD tools such as Jenkins and GitLab CI/CD, and container orchestration tools like Kubernetes. |
| Teams required: cross-functional teams that may include data scientists, ML engineers, data engineers, and DevOps engineers. | Teams required: developers, IT operations, quality assurance, and other stakeholders. |

Here are some of the most commonly used tools in MLOps (Machine Learning Operations):

Data Preparation

  1. Pandas: A popular Python library for data manipulation and analysis.

  2. NumPy: A library for efficient numerical computation in Python.

  3. Apache Spark: A unified analytics engine for large-scale data processing.

  4. Apache Beam: A unified programming model for both batch and streaming data processing.
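As a quick illustration of the data-preparation step these libraries cover, here is a small, self-contained pandas/NumPy sketch; the columns and values are made up purely for the example.

```python
import numpy as np
import pandas as pd

# Illustrative toy dataset with missing values.
df = pd.DataFrame({
    "age": [25, 32, np.nan, 47],
    "income": [40_000, 52_000, 61_000, np.nan],
    "label": [0, 1, 1, 0],
})

df["age"] = df["age"].fillna(df["age"].median())          # impute missing ages
df["income"] = df["income"].fillna(df["income"].median())  # impute missing incomes
df["income_z"] = (df["income"] - df["income"].mean()) / df["income"].std()  # standardize
print(df)
```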

Model Development

  1. TensorFlow: An open-source machine learning library developed by Google.

  2. PyTorch: An open-source machine learning library developed by Facebook.

  3. Scikit-learn: A popular open-source machine learning library for Python.

  4. Keras: A high-level neural networks API for Python.
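For the model-development step, a minimal scikit-learn sketch of the train-and-test loop the pipeline automates could look like this (the built-in Iris dataset and the model choice are purely illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small example dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train and evaluate the model.
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```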

Model Deployment

  1. TensorFlow Serving: A serving system for machine learning models.

  2. AWS SageMaker: A cloud-based platform for building, training, and deploying machine learning models.

  3. Azure Machine Learning: A cloud-based platform for building, training, and deploying machine learning models.

  4. Google Cloud AI Platform: A cloud-based platform for building, training, and deploying machine learning models.

Model Monitoring and Maintenance

  1. Prometheus: A monitoring system for tracking application performance and metrics.

  2. Grafana: A visualization platform for metrics and monitoring data.

  3. ELK Stack: A collection of tools for log analysis and visualization (Elasticsearch, Logstash, Kibana).

  4. New Relic: A monitoring and analytics platform for application performance and metrics.
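As a sketch of what model monitoring can look like in code, the snippet below uses the Python prometheus_client library to expose prediction-count and latency metrics that Prometheus could scrape and Grafana could chart; the metric names, port, and fake inference are illustrative assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency in seconds")

def predict(features):
    with LATENCY.time():                    # record how long inference takes
        time.sleep(random.random() / 100)   # stand-in for real model inference
        PREDICTIONS.inc()
        return 1

if __name__ == "__main__":
    start_http_server(8000)                 # metrics served at http://localhost:8000/metrics
    while True:
        predict([0.1, 0.2])
        time.sleep(1)
```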

Model Governance and Collaboration

  1. MLflow: An open-source platform for managing the machine learning lifecycle.

  2. GitHub: A web-based platform for version control and collaboration.

  3. GitLab: A web-based platform for version control, collaboration, and project management.

  4. DVC (Data Version Control): A tool for data versioning and collaboration.
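MLflow's tracking API is a simple way to record the run details (hyperparameters, metrics, dataset version) mentioned under the main components above. A minimal sketch, with placeholder values and no remote tracking server configured (runs land in a local ./mlruns directory), could look like this:

```python
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("n_estimators", 100)        # hyperparameters used for this run
    mlflow.log_param("dataset_version", "v1.2")  # which data snapshot was used (assumed tag)
    mlflow.log_metric("accuracy", 0.93)          # placeholder evaluation results
    mlflow.log_metric("f1_score", 0.91)
```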

MLOps Platforms

  1. MLflow: An open-source platform for managing the machine learning lifecycle.

  2. TensorFlow Extended (TFX): A platform for building, deploying, and managing machine learning models.

  3. PyTorch Lightning: A lightweight framework that organizes PyTorch training code and makes it easier to scale and reproduce.

  4. Kubeflow: A platform for building, deploying, and managing machine learning models on Kubernetes.

These tools are widely used in the industry and are often combined to create a robust MLOps pipeline. However, the choice of tools may vary depending on the specific use case, scalability requirements, and the team's expertise.

Conclusion

MLOps is very important in machine learning, and if you need continuous training and development, it is the best approach available. Once the pipeline is created, all the tasks are fully automated: you only need to monitor your model, and with a user-friendly UI you can complete your work easily and efficiently.