Credit analyst with extensive experience in risk analysis and in making recommendations on lending proposals.
Project description: A data analyst is tasked with performing loan analysis and building machine learning models to present to the bank's Credit Risk Management team, with insights on how to minimise default risk for small and medium-sized enterprises (SMEs) under Commercial Banking.
Dataset from Kaggle: https://www.kaggle.com/mirbektoktogaraev/should-this-loan-be-approved-or-denied
A correlation matrix was created to visualise the relationships among the features and to highlight notable correlations.
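A minimal sketch of how such a matrix can be produced with pandas. The column names below are hypothetical stand-ins, not the actual fields of the Kaggle dataset:

```python
import pandas as pd

# Toy stand-in for the loan features; real column names in the
# Kaggle SBA dataset differ (these are illustrative only).
df = pd.DataFrame({
    "loan_amount":   [50000, 120000, 30000, 75000],
    "term_months":   [84, 120, 60, 84],
    "num_employees": [5, 40, 3, 12],
})

# Pairwise Pearson correlations between the numeric features.
corr = df.corr()
print(corr.round(2))
```

The resulting matrix can then be rendered as an annotated heatmap, for example with `seaborn.heatmap(corr, annot=True)`.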
Visualisation of paid-in-full versus defaulted loans by industry, state, asset-backed status, and disbursed-in-full status.
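One way to prepare the data behind such a breakdown is a cross-tabulation of loan status against each grouping variable. The rows below are made-up examples; the real dataset's industry and status labels differ:

```python
import pandas as pd

# Hypothetical rows standing in for the Kaggle SBA data.
loans = pd.DataFrame({
    "industry": ["Retail", "Retail", "Construction",
                 "Construction", "Services"],
    "status":   ["Paid in full", "Defaulted", "Defaulted",
                 "Defaulted", "Paid in full"],
})

# Count paid-in-full vs defaulted loans per industry.
by_industry = pd.crosstab(loans["industry"], loans["status"])
print(by_industry)
```

The same crosstab can be plotted directly with `by_industry.plot(kind="bar")`, and the pattern repeats for state, asset-backed status, and disbursed-in-full status.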
Modelling with logistic regression, random forest, and multi-layer perceptron (a classification problem).
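The three classifiers can be trained side by side with scikit-learn. This is a sketch on synthetic data (via `make_classification`) rather than the actual loan features, with assumed hyperparameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic imbalanced data standing in for the prepared loan features.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Hyperparameters here are illustrative, not the project's tuned values.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200,
                                            random_state=42),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=42),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)  # plain accuracy only
```

Accuracy alone is kept here only as a first pass; the evaluation caveats below explain why precision, recall, and F1 matter more on imbalanced loan data.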
When evaluating model performance, keep in mind that good accuracy does not necessarily mean the model performed well. We want to consider metrics like precision, recall, and F1-score (the harmonic mean of precision and recall) to ensure we are evaluating performance based on the 'cost' of each outcome.
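A small worked example of why accuracy can mislead on imbalanced outcomes. With toy labels where defaults are the minority class, a model that misses half the defaults still scores 80% accuracy:

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Toy labels: 1 = defaulted, 0 = paid in full (illustrative only).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # misses half the defaults

acc  = accuracy_score(y_true, y_pred)   # 0.8 -- looks decent
prec = precision_score(y_true, y_pred)  # 1.0 -- no false alarms
rec  = recall_score(y_true, y_pred)     # 0.5 -- half the defaults missed
f1   = f1_score(y_true, y_pred)         # harmonic mean of the two
```

The 0.5 recall exposes exactly the failure mode that the 0.8 accuracy hides, which is why recall on the defaulted class carries the most weight here.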
The baseline logistic regression model yields a decent accuracy of 83%; however, the F1-score of 75% for defaulted loans is less promising. The precision of 80% means that when the model predicts a loan will default, it is correct 80% of the time. The recall of 70% means the model correctly identifies 70% of defaulted loans, so 30% of loans that actually defaulted were misclassified as loans that would be fully paid, which is a costly and misleading outcome for the bank.
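As a sanity check, the reported F1-score is consistent with the stated precision and recall, since F1 is their harmonic mean:

```python
# Baseline logistic regression figures for the defaulted class.
precision = 0.80
recall = 0.70

# F1 = harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.75, matching the reported F1-score
```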
The random forest model performs better across the board, with an overall accuracy of 93% and higher precision, recall, and F1-score.
In general, a model that outperforms another on both precision and recall is likely the better model. Overall accuracy and F1-score can be relied on when the outcomes have similar costs; when misclassifying a default is far costlier than a false alarm, recall on the defaulted class deserves the most weight.