Setting the record straight on explainable AI: (2nd out of N) Are ML models really black boxes?

In practice, a model tends to be treated as a black box by anyone who:

  1. Sees the model as equal to the actual system it attempts to model, or
  2. Lacks the necessary technical skills to benefit from the model’s glass-box transparency, or
  3. Finds the natural-language explanation complicated or hard to understand (regardless of technical ability) due to the model’s complexity.
ML models are usually glass-box models of black-box systems or phenomena. Given that our ultimate curiosity is about the modelled systems, many tend to view ML models as black boxes, despite their glass-box nature, unless the models answer our questions about the black box itself.
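To make the glass-box point concrete, here is a minimal sketch (assuming scikit-learn and NumPy are available; the data here is synthetic and purely illustrative): every parameter of a fitted logistic regression can be read off directly, yet those numbers describe the fitted model, not necessarily the black-box system that generated the data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for observations of some black-box system (synthetic data here).
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The model is a glass box: every learned parameter is directly inspectable.
print("coefficients:", model.coef_)
print("intercept:", model.intercept_)

# But whether these numbers answer our questions about the underlying system
# is a separate question from the model's own transparency.
```

The same inspection is possible, in principle, for far larger models; the difficulty is not access to the parameters but relating them back to the system being modelled.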
