Calculating AUROC Workout
What the area under the ROC curve (AUROC) is, and its importance in evaluating the performance of a model.
Transcript
Write a for loop that cycles through the keys in `models.keys()` and completes the following steps. First, it calculates predictions for each model based on the test inputs. Then it calculates the ROC curve and unpacks all three of its outputs. Finally, it prints the key and the AUROC for each model, rounded to four decimal places. Pause the video now and complete the exercise.
First, make sure that you have the `roc_curve` and `auc` functions imported. Then we're creating a for loop that says, for each key in the `models.keys()` iterable, we're creating a temporary variable called `pred` to store our predictions, and that's equal to our trained model's predictions on our test input features. On the next line, we're comparing those predictions to our test target series using our `roc_curve` function, and we're unpacking its results into `fpr`, `tpr`, and `thresholds`, because those are the three objects that `roc_curve` produces. Then we print the key, and on the next line we print the AUROC, computed by passing the false positive rates and true positive rates to the `auc` function and rounding the result to four decimal places. Finally, I'm adding a blank line as a spacer for readability. When you run this code, your output should match what you see here. These are all very strong performers, but the gradient boosting classifier is the highest.
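The loop described above can be sketched as follows. This is a minimal, self-contained version: the `models` dictionary, the model choices, and the `X_test`/`y_test` names are assumptions standing in for the course's own data and trained models, which are not shown in this clip.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

# Hypothetical setup: synthetic data and two trained models standing in
# for whatever the course's `models` dictionary actually contains.
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000).fit(X_train, y_train),
    "Gradient Boosting": GradientBoostingClassifier().fit(X_train, y_train),
}

for key in models.keys():
    # Predictions for this model on the test inputs.
    pred = models[key].predict(X_test)
    # Compare predictions to the test targets and unpack all three outputs.
    fpr, tpr, thresholds = roc_curve(y_test, pred)
    # Print the key, then the AUROC rounded to four decimal places.
    print(key)
    print(f"AUROC = {round(auc(fpr, tpr), 4)}")
    print()  # spacer for readability
```

Note that `roc_curve` also accepts probability scores (e.g. from `predict_proba`), which gives a finer-grained curve than hard class predictions; the sketch mirrors the transcript, which uses the predictions directly.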