Sunday, March 3, 2019

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning on ShortScience.org

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning on ShortScience.org: Papernot and McDaniel introduce deep k-nearest neighbors, where nearest neighbors are found at each intermediate layer of the network in order to improve interpretability and robustness. Personally, I really appreciated reading this paper; thus, I will not only discuss the proposed method itself but also highlight some ideas from their thorough survey and experimental results.

First, Papernot and McDaniel provide a quite thorough survey of relevant work in three disciplines: confidence, interpretability and robustness. To the best of my knowledge, this is one of the few papers that explicitly makes the connection between these three disciplines. The work on confidence is especially interesting in light of robustness, as Papernot and McDaniel also frequently distinguish between in-distribution and out-of-distribution samples. Here, it is commonly known that deep neural networks become over-confident when moving away from the data distribution.

The deep k-nearest neighbor approach is described in Algorithm 1 and summarized in the following. For a trained model and a training set of labeled samples, they first find the k nearest neighbors of a test input at each intermediate layer of the network. For a candidate label, the nonconformity is then the number of these neighbors, across all layers, whose label differs from that candidate. Using a held-out calibration set, the nonconformity scores are converted into empirical p-values in the style of conformal prediction; the predicted label is the one with the highest p-value, its credibility is that p-value, and its confidence is one minus the second-highest p-value.
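To make the inference step concrete, here is a minimal sketch in Python. It assumes the per-layer feature vectors of the training data, calibration data, and the test input have already been extracted (a hypothetical step not shown here), and it uses scikit-learn's nearest-neighbor index for simplicity; it is not the authors' reference implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def fit_dknn(train_reprs, k=5):
    """Build one k-NN index per intermediate layer.

    train_reprs: list of arrays, one (n_train, d_layer) array per layer.
    """
    return [NearestNeighbors(n_neighbors=k).fit(r) for r in train_reprs]


def nonconformity(neighbor_ids_per_layer, train_labels, candidate):
    """Count neighbors across all layers whose label differs from `candidate`."""
    return sum((train_labels[ids] != candidate).sum()
               for ids in neighbor_ids_per_layer)


def dknn_predict(x_reprs, knn_per_layer, train_labels, calib_scores, n_classes):
    """DkNN-style prediction with credibility and confidence.

    x_reprs: list of per-layer feature vectors for one test input.
    calib_scores: array of nonconformity scores of the calibration set
                  computed with their true labels.
    """
    # k nearest training points at every layer for the test input.
    neighbor_ids = [knn.kneighbors([r], return_distance=False)[0]
                    for knn, r in zip(knn_per_layer, x_reprs)]

    # Empirical p-value for each candidate label from the calibration scores.
    p_values = np.array([
        (calib_scores >= nonconformity(neighbor_ids, train_labels, j)).mean()
        for j in range(n_classes)
    ])

    prediction = int(p_values.argmax())
    credibility = p_values.max()                  # support for the prediction
    confidence = 1.0 - np.sort(p_values)[-2]      # margin over the runner-up
    return prediction, credibility, confidence
```

The key design point is that the calibration set, disjoint from the training set, is what turns raw neighbor-label counts into calibrated p-values; without it, the nonconformity scores would have no reference distribution.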


