Building a Chatbot: Architecture Models and Text Vectorization Methods

Anna V. Chizhik, Yulia A. Zherebtsova

Abstract


In this paper, we review recent progress in developing intelligent conversational agents (chatbots), survey their current architectures (rule-based, retrieval-based and generative models), and discuss the main advantages and disadvantages of each approach. Additionally, we conduct a comparative analysis of state-of-the-art text vectorization methods, which we apply in an experimental implementation of a retrieval-based chatbot. The results of the experiment are reported as the quality of the chatbot's response selection measured with R10@k metrics for several values of k. We also examine the features of open data sources providing dialogs in Russian. Both the final dataset and the program code are published. Finally, we discuss the issues of assessing the quality of chatbot response selection, emphasizing in particular the importance of choosing a proper evaluation method.
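
To illustrate the R10@k measure mentioned above, the following Python sketch (our own illustrative code, not the authors' published implementation; the function name recall_at_k and its arguments are assumptions made for this example) computes the fraction of dialogue contexts for which the ground-truth response is ranked among the top k of n = 10 scored candidate responses.

# Illustrative sketch of the R_n@k metric used to evaluate retrieval-based
# response selection. For each dialogue context the model scores n candidate
# responses (one ground-truth reply plus n-1 distractors); the metric is the
# share of contexts where the ground-truth reply lands in the top-k candidates.

from typing import Sequence

def recall_at_k(candidate_scores: Sequence[Sequence[float]],
                true_indices: Sequence[int],
                k: int) -> float:
    """candidate_scores[i] holds the model's scores for the n candidates of
    the i-th context; true_indices[i] is the position of the ground-truth
    response among those candidates."""
    hits = 0
    for scores, true_idx in zip(candidate_scores, true_indices):
        # Rank candidate indices by score, highest first.
        ranking = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
        if true_idx in ranking[:k]:
            hits += 1
    return hits / len(true_indices)

# Toy usage: two contexts with 10 candidates each (the R10@k setting).
if __name__ == "__main__":
    import random
    random.seed(0)
    scores = [[random.random() for _ in range(10)] for _ in range(2)]
    print(recall_at_k(scores, true_indices=[0, 3], k=5))  # e.g. R10@5

In practice the candidate scores would come from comparing vector representations of the dialogue context and the candidate responses produced by one of the vectorization methods compared in the paper.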


Full Text:

PDF (Russian)

References


Artetxe M., Schwenk H. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. CoRR. arXiv:1812.10464. 2018.

Bellegarda J.R. Large-Scale Personal Assistant Technology Deployment: the Siri Experience. INTERSPEECH. 2013. P. 2029-2033.

Bojanowski P., Grave E., Joulin A., Mikolov T. Enriching Word Vectors with Subword Information. arXiv:1607.04606. 2017.

Che W., Liu Y., Wang Y., Zheng B., Liu T. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. CoRR. arXiv:1807.03121. 2018.

Cho K. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. EMNLP. 2014. P. 1724-1734.

Deriu J., Rodrigo A., Otegi A., Echegoyen G., Rosset S., Agirre E., Cieliebak M. Survey on Evaluation Methods for Dialogue Systems. arXiv:1905.04071. 2019.

Gao J., Galley M., Li L. Neural Approaches to Conversational AI. arXiv:1809.08267. 2019. 95 pp.

Harris Z.S. Distributional structure. Word. 10. Issue 2-3. 1954. P. 146–162.

Schwenk H., Douze M. Learning Joint Multilingual Sentence Representations with Neural Machine Translation. ACL Workshop on Representation Learning for NLP. arXiv:1704.04154. 2017.

Inaba M., Takahashi K. Neural Utterance Ranking Model for Conversational Dialogue Systems. Proceedings of the SIGDIAL 2016 Conference. Association for Computational Linguistics. 2016. P. 393-403.

Joulin A., Grave E., Bojanowski P., Mikolov T. Bag of Tricks for Efficient Text Classification. arXiv:1607.01759. 2016.

Jurafsky D., Martin J. H. Speech and Language Processing. 2nd edition. Prentice Hall. 2008. 988 pp.

Liu C.-W., Lowe R., Serban I. V., Noseworthy M., Charlin L., Pineau J. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. arXiv:1603.08023. 2016.

Ma W., Cui Y., Shao N., He S., Zhang W.-N., Liu T., Wang S., Hu G. TripleNet: Triple Attention Network for Multi-Turn Response Selection in Retrieval-based Chatbots. arXiv:1909.10666. 2019.

Manning C. D., Raghavan P., Schütze H. An Introduction to Information Retrieval. Stanford NLP Group, Cambridge University Press. URL: https://goo.su/0LzL (accessed: 17.02.2020).

Masche J., Le N.-T. A Review of Technologies for Conversational Systems. Advances in Intelligent Systems and Computing. 2018. P. 212-225.

Mikolov T. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of Workshop at ICLR. URL: https://goo.su/0LZi (accessed: 16.03.2020).

Nio L., Sakti S., Neubig G., Toda T. Developing Non-goal Dialog System Based on Examples of Drama Television. Natural Interaction with Robots, Knowbots and Smartphones. 2014. P. 355-361.

Radford A., Wu J., Child R., Luan D., Amodei D., Sutskever I. Language Models are Unsupervised Multitask Learners. Technical Report, OpenAI. URL: https://goo.su/0LzI (accessed: 16.03.2020).

Ritter A. Data-Driven Response Generation in Social Media. Conference on Empirical Methods in Natural Language Processing. Edinburgh. 2011. P. 583-593.

Pennington J., Socher R., Manning C. D. GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. 2014. P. 1532-1543.

Peters M.E., Neumann M., Iyyer M. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. 2018.

Sountsov P., Sarawagi S. Length bias in Encoder Decoder Models and a Case for Global Conditioning. arXiv:1606.03402. 2016.

Sutskever I., Vinyals O., Le Q.V. Sequence to Sequence Learning with Neural Networks. arXiv:1409.3215. 2014.

Vaswani A. Attention Is All You Need. arXiv:1706.03762. 2017.

Vinyals O., Le Q.V. A neural conversational model. arXiv preprint arXiv:1506.05869. 2015.

Wallace R. The Elements of AIML Style. ALICE A.I. Foundation, 2003. 86 pp.

Weizenbaum J. ELIZA – A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 1966. P. 36-45.

Yang Z., Dai Z., Yang Y., Carbonell J., Salakhutdinov R., Le Q. V. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv:1906.08237. 2019.

Zhang R., Lee H., Polymenakos L., Radev D. Addressee and Response Selection in Multi-Party Conversations with Speaker Interaction RNNs. arXiv:1709.04005. 2017.





ISSN: 2307-8162