Description
This tutorial covers the entire ML workflow: data ingestion, pre-processing, model training, hyperparameter tuning, prediction, and storing the model for later use. We will complete all these…
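As a rough map of that workflow, here is a minimal end-to-end sketch, assuming PyCaret's classification module; the 'default' target column follows PyCaret's credit tutorial, while session_id=123, the 'rf' estimator, and the 'credit_pipeline' file name are illustrative choices:

```python
# Minimal sketch of the full workflow; 'default' is the credit dataset's
# target column, and session_id, estimator, and file name are illustrative.
from pycaret.datasets import get_data
from pycaret.classification import (
    setup, create_model, tune_model, predict_model, save_model, load_model
)

dataset = get_data('credit')                 # data ingestion
s = setup(data=dataset, target='default',    # pre-processing + train/test split
          session_id=123)
model = create_model('rf')                   # model training (cross-validated)
tuned = tune_model(model)                    # hyperparameter tuning
holdout_preds = predict_model(tuned)         # predictions on the hold-out set
save_model(tuned, 'credit_pipeline')         # store the fitted pipeline to disk
restored = load_model('credit_pipeline')     # reload it later for scoring
```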
Summary
- Recreating the entire experiment without PyCaret would require more than 100 lines of code in most other libraries.
- The data comes directly from PyCaret's built-in datasets, and loading it is the first step of our pipeline:

  ```python
  from pycaret.datasets import get_data

  dataset = get_data('credit')

  # check the shape of the data
  dataset.shape
  ```

  In order to demonstrate the predict_model() function on unseen data, a sample of 1,200 records from the original dataset has been withheld for use in the predictions.
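A sketch of that hold-out split, continuing from the dataset loaded above; the 95/5 fraction and random_state are assumptions that would leave 1,200 unseen records out of the credit dataset's 24,000 rows:

```python
# Withhold ~5% of rows as truly unseen data for later predict_model() calls.
# frac=0.95 and random_state=786 are illustrative; with 24,000 rows this
# leaves 1,200 records in data_unseen.
data = dataset.sample(frac=0.95, random_state=786)
data_unseen = dataset.drop(data.index)

data.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)

print('Data for modeling:', data.shape)
print('Unseen data for predictions:', data_unseen.shape)
```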
- There are 6,841 samples in the test set.
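That test set is produced by PyCaret's setup() step; a sketch, assuming the classification module, the credit tutorial's 'default' target, and setup's default train_size of 0.7, which splits the 22,800 modeling rows roughly 70/30:

```python
# setup() runs pre-processing and splits the modeling data internally;
# train_size defaults to 0.7, so roughly 30% of the 22,800 rows becomes
# the test set. target='default' follows the credit tutorial.
from pycaret.classification import setup

exp = setup(data=data, target='default', session_id=123)
```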
- 4. Create the Model: create_model is the most granular function in PyCaret and is often the foundation behind most of PyCaret's functionality.
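A sketch of the call, assuming setup() has already been run; 'rf' (random forest) is an illustrative estimator ID:

```python
# Train a single estimator with PyCaret's default 10-fold cross-validation;
# create_model prints a fold-by-fold score grid and returns the fitted model.
from pycaret.classification import create_model

rf = create_model('rf')
```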