Administrative Information
Title | Model Evaluation |
Duration | 60 min |
Module | A |
Lesson Type | Tutorial |
Focus | Technical - Foundations of AI |
Topic | Foundations of AI |
Keywords
model evaluation, cross-validation, hyperparameter optimization
Learning Goals
- Learners understand the need for systematic model evaluation
- Learners understand the difference between training, validation, and test sets
- Learners know the most widely applied performance metrics
- Learners are able to recognize underfitting and overfitting
- Learners are capable of designing cross-validation experiments for hyperparameter optimization
Expected Preparation
Learning Events to be Completed Before
Obligatory for Students
None.
Optional for Students
None.
References and Background for Students
None.
Recommended for Teachers
None.
Lesson materials
Instructions for Teachers
Prepare a Jupyter notebook environment with the pandas, matplotlib, NumPy, and scikit-learn packages. Illustrative code sketches for the main outline segments are included below the time schedule.
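A short cell such as the following can be used to verify the environment before class (a minimal sketch; specific versions and any additional packages are left to the instructor):

```python
# Check that the required packages are importable and report their versions.
import numpy
import pandas
import matplotlib
import sklearn

for pkg in (numpy, pandas, matplotlib, sklearn):
    print(pkg.__name__, pkg.__version__)
```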
Outline/time schedule
Duration (min) | Description | Concepts |
---|---|---|
5 | Introduction to model evaluation | empirical error, predictive and generalization performance |
5 | Training a simple classifier | MLP, hyperparameters |
10 | Evaluating a classifier | confusion matrix, accuracy, TPR, FPR, precision, misclassification rate, F1 score |
10 | ROC/PR curves and their interpretation | decision boundary, ROC curve, PR curve, AUC |
10 | Underfitting and overfitting | training and test error |
10 | Cross-validation and hyperparameter optimization | validation set, validation error, 5-fold cross-validation |
10 | Evaluation of regression models | MSE, RMSE, MAE |
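The sketches below illustrate, one notebook cell per outline segment and meant to be run in order, how the listed concepts can be demonstrated with scikit-learn. The dataset choices, model sizes, and hyperparameter values are illustrative assumptions, not prescribed by the lesson plan.

```python
# Sketch for "Training a simple classifier" and "Evaluating a classifier":
# train a small MLP and compute the confusion-matrix-based metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Scaling + MLP; the hidden layer size and max_iter are placeholder hyperparameters.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
acc = accuracy_score(y_test, y_pred)
print("accuracy          :", acc)
print("misclassification :", 1 - acc)
print("TPR (recall)      :", recall_score(y_test, y_pred))  # tp / (tp + fn)
print("FPR               :", fp / (fp + tn))
print("precision         :", precision_score(y_test, y_pred))
print("F1 score          :", f1_score(y_test, y_pred))
```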
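Continuing from the previous cell, ROC and precision-recall curves can be drawn from the predicted scores of the positive class (a sketch; styling is left to the instructor):

```python
# ROC and PR curves for the classifier fitted in the previous cell.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, precision_recall_curve, roc_auc_score

y_score = clf.predict_proba(X_test)[:, 1]           # score for the positive class

fpr, tpr, _ = roc_curve(y_test, y_score)
prec, rec, _ = precision_recall_curve(y_test, y_score)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(fpr, tpr)
ax1.plot([0, 1], [0, 1], linestyle="--")             # chance-level diagonal
ax1.set(xlabel="FPR", ylabel="TPR",
        title=f"ROC curve (AUC = {roc_auc_score(y_test, y_score):.3f})")
ax2.plot(rec, prec)
ax2.set(xlabel="recall", ylabel="precision", title="PR curve")
plt.show()
```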
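For the underfitting/overfitting segment, one option (again a sketch, reusing the data and imports above) is to vary model capacity and compare training and test error; high error on both curves suggests underfitting, while a widening gap between them signals overfitting:

```python
# Training vs. test error as a function of model capacity (hidden layer size).
sizes = [1, 2, 4, 8, 16, 32, 64]
train_err, test_err = [], []
for h in sizes:
    m = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000,
                                    random_state=0))
    m.fit(X_train, y_train)
    train_err.append(1 - m.score(X_train, y_train))   # training error
    test_err.append(1 - m.score(X_test, y_test))      # test error

plt.plot(sizes, train_err, marker="o", label="training error")
plt.plot(sizes, test_err, marker="o", label="test error")
plt.xscale("log")
plt.xlabel("hidden units")
plt.ylabel("misclassification rate")
plt.legend()
plt.show()
```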
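The cross-validation segment can reuse the same pipeline inside a 5-fold grid search; the grid values below are illustrative only:

```python
# 5-fold cross-validation over a small hyperparameter grid.
from sklearn.model_selection import GridSearchCV

param_grid = {
    "mlpclassifier__hidden_layer_sizes": [(8,), (32,), (64,)],
    "mlpclassifier__alpha": [1e-4, 1e-2],          # L2 regularization strength
}
search = GridSearchCV(clf, param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)                       # validation error estimated by CV

print("best hyperparameters:", search.best_params_)
print("best CV F1 score    :", search.best_score_)
print("test F1 score       :", f1_score(y_test, search.predict(X_test)))
```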
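Finally, for the regression-metrics segment, a sketch along these lines computes MSE, RMSE, and MAE (the diabetes dataset and the linear model are placeholders; train_test_split is imported in the first cell):

```python
# Regression metrics: MSE, RMSE, MAE on a held-out test set.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error

Xr, yr = load_diabetes(return_X_y=True)
Xr_train, Xr_test, yr_train, yr_test = train_test_split(
    Xr, yr, test_size=0.3, random_state=0)

reg = LinearRegression().fit(Xr_train, yr_train)
yr_pred = reg.predict(Xr_test)

mse = mean_squared_error(yr_test, yr_pred)
print("MSE :", mse)
print("RMSE:", mse ** 0.5)
print("MAE :", mean_absolute_error(yr_test, yr_pred))
```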
Acknowledgements
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.