Natural Language Generation: Explaining the unexplainable AI?
Technical talk | English
Track 3 - Theatre 19
Wednesday - 16.20 to 17.00 - Technical
Strong and growing interest in natural language generation techniques has been consistently reported by Forrester (one of the “2016 Nine Essential AI Technologies”), Gartner (“Natural-language generation will become standard in analytics, 2017”; “Market Guide for Natural Language Generation Platforms 2019”) and Forbes (“2017 Top 10 Hot AI Techniques”).
Starting from this, the talk will consider three areas of convergence between Natural Language Generation and Big Data:
i) Explainable Big Data models: almost all Big Data models and technologies (notably machine learning) are black-box models, which are intrinsically unexplainable. Nevertheless, there is a growing social (and even legal) demand for explainable AI systems. We will discuss how Natural Language Generation is being used to increase the transparency and understandability of models and algorithms. Starting with classical interpretable (white-box) models in Machine Learning, such as decision trees or classification and regression rules, we will move on to gray-box models that account for the linguistic imprecision inherent to human communication, such as fuzzy decision trees, and discuss how some of these models are used as proxies for explaining black-box models, such as neural networks. We will also present some of the new models and strategies used in AI to explain or justify complex tasks (e.g. classification, regression) performed by black-box and, in principle, unexplainable models (typically deep neural models).
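The proxy idea above can be sketched in a few lines: probe an opaque classifier, fit a simple interpretable rule to its answers, and verbalize that rule. Everything in this sketch (the black-box function, the variable names, the thresholds) is an illustrative assumption, not material from the talk:

```python
# Toy surrogate-model sketch: explain an opaque classifier with an
# interpretable one-variable rule, then verbalize that rule in text.

def black_box(temp, pressure):
    # Opaque model: imagine this is a neural network we cannot inspect.
    return "fail" if 0.7 * temp + 0.3 * pressure > 80 else "ok"

# Probe the black box on a grid of inputs.
samples = [(t, p) for t in range(0, 101, 5) for p in range(0, 101, 5)]
labels = [black_box(t, p) for t, p in samples]

# White-box surrogate: a one-level decision stump on "temp", choosing
# the threshold that best reproduces the black box's own labels.
def stump_accuracy(thr):
    preds = ["fail" if t > thr else "ok" for t, _ in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

best_thr = max(range(0, 101), key=stump_accuracy)
fidelity = stump_accuracy(best_thr)

# Verbalize the surrogate rule (a minimal natural-language step).
explanation = (f"The model tends to predict 'fail' when temperature "
               f"exceeds {best_thr} (matching the black box on "
               f"{fidelity:.0%} of probed cases).")
print(explanation)
```

The surrogate is not a faithful copy of the black box, which is why reporting its fidelity alongside the rule matters.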
ii) Visualization vs. natural language explanations: visualization techniques are the most common and preferred (almost exclusive) way of conveying the information extracted from Big Data to users. Nevertheless, evidence shows that the combination of visualization and natural language explanations is the most effective way of transmitting information (and knowledge) to end users. We will discuss how Natural Language Generation provides insights into data, allowing end users a better comprehension of the information hidden in it. In this regard, we will describe the Data-To-Text (D2T) NLG paradigm for automatically building textual explanations from numerical data (e.g. time series). The usual architecture of D2T systems, their design, and examples of real applications currently in operation in many areas (industrial monitoring, e-health, news, …) will also be described. Finally, some of the open challenges in applying D2T systems in Big Data contexts will be presented and discussed, such as the scalability of data analysis for content determination, or the need for real interaction between the D2T system and the user (as in conversational agents).
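Two stages of the typical D2T architecture, content determination (selecting the facts worth reporting) and linguistic realization (rendering them as text), can be illustrated on a toy time series. The function, thresholds and templates below are illustrative assumptions, not an actual D2T system:

```python
# Minimal Data-To-Text sketch over a numeric time series.

import statistics

def d2t_summary(series):
    # Content determination: select the facts worth reporting.
    mean = statistics.mean(series)
    peak = max(series)
    trend = "rising" if series[-1] > series[0] else "falling or flat"
    # Linguistic realization: render the selected facts with templates.
    return (f"Values averaged {mean:.1f}, peaking at {peak}, "
            f"with a {trend} overall trend.")

readings = [12, 14, 15, 19, 23, 22, 27]  # e.g. hourly sensor readings
print(d2t_summary(readings))
# → Values averaged 18.9, peaking at 27, with a rising overall trend.
```

Real D2T systems add further stages (document planning, aggregation, surface realization), but the select-then-verbalize shape is the same.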
iii) Big Data and Machine Learning technologies and models have recently started to play a relevant role in Natural Language Generation, as is also happening in many other AI areas. On the one hand, this is challenging many classical models and procedures in the field, mostly related to the design methodology of NLG systems. On the other hand, the validity of these innovative approaches is endorsed by many recent practical results reported by Google, Facebook and others, including the automatic generation of descriptions of images (photo captions) or video sequences. As usual, these proposals consist of Machine Learning approaches (e.g. deep CNNs, RNNs) trained on annotated datasets (corpora).
This disruption is, at the same time, raising new challenges in the field, such as how to properly validate the results of these approaches, or how reliable such systems are in critical settings (e.g. e-health, industrial environments).
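As a toy illustration of how such caption generators realize text word by word, the sketch below replaces a trained CNN encoder and RNN decoder with a hand-made next-word probability table and greedy decoding; every word and probability here is invented for illustration, and conditioning on image features is omitted:

```python
# Toy greedy decoder: a stand-in for a trained captioning model.
# Each entry maps the previous word to a next-word distribution.
next_word = {
    "<s>":   {"a": 0.9, "the": 0.1},
    "a":     {"dog": 0.8, "cat": 0.2},
    "dog":   {"on": 0.7, "runs": 0.3},
    "on":    {"grass": 0.95, "sand": 0.05},
    "grass": {"</s>": 1.0},
}

def greedy_caption():
    words, prev = [], "<s>"
    while True:
        dist = next_word[prev]
        word = max(dist, key=dist.get)  # pick the most probable word
        if word == "</s>":              # end-of-sentence token
            break
        words.append(word)
        prev = word
    return " ".join(words)

print(greedy_caption())  # → a dog on grass
```

A real system learns these distributions from an annotated corpus and conditions them on encoded image features; the word-by-word decoding loop, however, has this same shape.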