Creating the Whole Machine Learning Pipeline with PyCaret

By Medium - 2020-12-03

Description

This tutorial covers the entire ML process, from data ingestion and pre-processing to model training, hyper-parameter tuning, prediction, and storing the model for later use. We will complete all these…
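For orientation, here is a minimal sketch of such a pipeline using PyCaret's classification module and its built-in 'credit' dataset. The target column name 'default', the session_id, the 'rf' estimator id, and the 'final_model' file name are illustrative assumptions, not details taken from the article:

    # End-to-end sketch: ingest -> pre-process -> train -> tune -> predict -> save -> reload
    from pycaret.datasets import get_data
    from pycaret.classification import (
        setup, create_model, tune_model, predict_model, save_model, load_model
    )

    dataset = get_data('credit')                      # data ingestion
    clf = setup(data=dataset, target='default',       # pre-processing + train/test split
                session_id=123)
    model = create_model('rf')                        # train a single model
    tuned = tune_model(model)                         # hyper-parameter tuning
    holdout_preds = predict_model(tuned)              # predictions on the hold-out set
    save_model(tuned, 'final_model')                  # store the fitted pipeline to disk
    restored = load_model('final_model')              # reload it later for new predictions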

Summary

  • Recreating the entire experiment without PyCaret requires more than 100 lines of code in most libraries.
  • The dataset comes directly from the PyCaret datasets, and get_data() is the first method of our pipeline: from pycaret.datasets import get_data; dataset = get_data('credit'); dataset.shape (to check the shape of the data). In order to demonstrate the predict_model() function on unseen data, a sample of 1,200 records from the original dataset has been retained for use in the predictions (see the sketch after this list).
  • There are 6,841 samples in the test set.
  • 4- Create the Model: create_model is the most granular function in PyCaret and is often the basis for most of PyCaret's functionality.
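The steps in the summary above can be stitched together roughly as follows. This is a hedged sketch rather than the article's exact code: the 95/5 split used to hold back about 1,200 unseen records, the random seeds, and the 'rf' estimator are assumptions, and the target column 'default' follows PyCaret's own credit-dataset tutorial.

    from pycaret.datasets import get_data
    from pycaret.classification import setup, create_model, predict_model

    # 1- Get the data directly from PyCaret's repository of datasets
    dataset = get_data('credit')
    print(dataset.shape)                               # check the shape of the data

    # 2- Hold back ~1,200 records as "unseen" data for predict_model() later
    data = dataset.sample(frac=0.95, random_state=786)
    data_unseen = dataset.drop(data.index)
    data.reset_index(drop=True, inplace=True)
    data_unseen.reset_index(drop=True, inplace=True)

    # 3- Set up the experiment: pre-processing plus an internal train/test split
    #    of the remaining rows (this split is where the 6,841-sample test set
    #    mentioned above comes from)
    clf = setup(data=data, target='default', session_id=123)

    # 4- Create the model: create_model trains one estimator with
    #    cross-validation and displays a fold-by-fold score grid
    rf = create_model('rf')

Later on, predict_model(rf, data=data_unseen) would generate predictions for the retained 1,200 unseen records.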


Topics

  1. Backend (0.25)
  2. Database (0.13)
  3. NLP (0.11)

Similar Articles

30 Most Asked Machine Learning Questions Answered

By Medium - 2021-03-18

Machine Learning is the path to a better and more advanced future. A Machine Learning Developer is among the most in-demand jobs in 2021, and demand is expected to grow by 20–30% over the next 3–5 years. Machine…