Mitigating belief projection in explainable artificial intelligence via Bayesian Teaching

By Scott Cheng-Hsin Yang et al.

State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modeling the human explainee via Bayesian Teaching, which evaluates explanations by how much they shift...
April 26, 2021