Joulin & al. (2016b)

* Joulin, Armand, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, Tomas Mikolov. 2016. ''Compressing text classification models'', [https://arxiv.org/abs/1612.03651 text].


  '''abstract''':
  "We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy."




[[Category:ouvrages de recherche|Categories]]
[[Category:TAL|Categories]]
