[NOTE: The paper was retracted, as described below—here is an updated link to a copy of it.]
In the spirit of the Ig Nobel Prize-winning dead salmon study (and subsequent studies that went looking for fishy things) comes this new study about Covid-19, cat images, and some limitations of technology:
“Can Your AI Differentiate Cats from Covid-19? Sample Efficient Uncertainty Estimation for Deep Learning Safety,” Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, and T. Yong-Jin Han, a paper presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning. The authors, at Carnegie Mellon University and Lawrence Livermore National Laboratory, explain:
Deep Neural Networks (DNNs) are known to make highly overconfident predictions on Out-of-Distribution data…. In this work, we show that even state-of-the-art BNNs and Ensemble models tend to make overconfident predictions when the amount of training data is insufficient….
We demonstrate the usefulness of the proposed approach on a real-world application of COVID-19 diagnosis from chest X-Rays by (a) highlighting surprising failures of existing techniques, and (b) achieving superior uncertainty quantification as compared to state-of-the-art.
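For the curious, the failure the abstract describes is easy to reproduce at home. Below is a minimal sketch, in no way the paper's method: it trains a small ensemble of off-the-shelf classifiers on toy two-blob data and then checks the ensemble's predictive entropy on points far outside the training distribution. Every model choice, dataset, and number in it is an illustrative assumption.

```python
# Illustrative sketch only (not the retracted paper's method): show that a
# plain deep-ensemble can still be near-certain about data it has never seen.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy two-class "in-distribution" training data: two Gaussian blobs.
n = 200
X_train = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_train = np.array([0] * n + [1] * n)

# Deep-ensemble-style uncertainty: train several small networks from
# different random initializations and average their probabilities.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    .fit(X_train, y_train)
    for seed in range(5)
]

def mean_probs(X):
    """Average class probabilities across the ensemble members."""
    return np.mean([m.predict_proba(X) for m in ensemble], axis=0)

def predictive_entropy(probs):
    """Entropy of the averaged prediction (higher = less confident)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

X_in = rng.normal(-2, 1, (50, 2))           # looks like the training data
X_ood = rng.normal(0, 1, (50, 2)) + 40.0    # far from anything seen in training

print("mean entropy, in-distribution: ",
      predictive_entropy(mean_probs(X_in)).mean())
print("mean entropy, out-of-distribution:",
      predictive_entropy(mean_probs(X_ood)).mean())
# The OOD entropy typically comes out near zero: far from the data, the
# networks extrapolate and their averaged probabilities saturate at 0 or 1,
# which is exactly the overconfidence the abstract complains about.
```

Averaging over independently trained networks is the standard deep-ensemble recipe for uncertainty; the point of the toy run is that even this recipe can report near-total confidence on inputs it has never encountered, whether those are chest X-rays or, in principle, cats.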
UPDATE (June 17, 2020): One of the study’s authors sent us a note that says:
- that their study is NOT about “Covid-19, cat images, and some limitations of technology”
- that the paper, which says on its first page “Presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning”, was not presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
- that their study “has been retracted”