Enabling Human-in-the-Loop Interpretability Methods for Machine Learning Models