To motivate this talk, let us first present the problem to be addressed. Current research in Artificial Intelligence is largely focused on areas such as the optimization of Deep Learning algorithms. Since the appearance of architectures such as Transformers, excellent results have been achieved in fields like NLP and Computer Vision. When these algorithms are trained on massive amounts of data, as in the case of GPT-3, the resulting models can be applied to tasks they were never trained for, requiring only fine-tuning with a few examples of the new task (few-shot learning). The problem shared by Machine Learning and Deep Learning algorithms is that they search for patterns based on linear and/or non-linear correlations, without examining whether causal relationships are present. At this point we should recall the famous maxim that correlation does not imply causation. The weaknesses of Machine Learning and Deep Learning models when making predictions about situations far from the distribution of their training data can begin to be explained once this absence of causal analysis is taken into account. Examples of this situation arise when deploying such models in open environments, as in autonomous driving or speech recognition systems. Subtler cases of the vulnerabilities of ML/DL-based AI are adversarial examples, in which small changes to feature values (for example, to the pixels of an image) are enough to make the model misclassify. In addition, these models carry the biases present in the data, as well as biases introduced by the algorithm itself and by the loss functions and regularization terms it uses.
It is clear that alternative approaches to the development of Artificial Intelligence need to be explored, and one of them is to introduce causal reasoning into predictive models. The objective of this talk is to show, in an entirely practical way, how to carry out an end-to-end study that draws conclusions about causality between the features of a problem, in situations where only observational data are available and it is therefore not possible to perform the interventions typical of A/B tests. The talk will walk through a full cycle of Causal Inference techniques. It will start with the process of discovering a graph of causal relationships between the features, for which the concept of Structural Causal Models will be introduced, and it will show how to determine which features must be controlled for in order to estimate the relationship between the treatment feature and the target feature. Among the techniques covered are the Backdoor Adjustment, Inverse Propensity Score weighting, the Doubly Robust Propensity Estimator, Trimming, and Stratification. The talk will end by showing how to handle situations in which unobserved confounders make the previous techniques inapplicable, for which the Frontdoor Adjustment will be used. Most interestingly, all the concepts will be taught hands-on by solving a case with Python libraries, so that by the end of the talk attendees will know how to add these techniques to their own models.
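As a flavor of two of the techniques mentioned above, the following self-contained sketch (not taken from the talk materials; the data-generating process, variable names, and true effect size are invented for illustration) estimates the average treatment effect from synthetic observational data with one observed confounder, comparing the naive difference of means with the Backdoor Adjustment (stratifying on the confounder) and Inverse Propensity Score weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic observational data with one observed binary confounder Z.
# Z influences both the treatment T and the outcome Y, so the naive
# difference of means between treated and untreated units is biased.
Z = rng.binomial(1, 0.5, n)
p_treat = np.where(Z == 1, 0.8, 0.2)   # confounded treatment assignment
T = rng.binomial(1, p_treat)
# True causal effect of T on Y is +1.0; Z contributes +2.0 on its own.
Y = 1.0 * T + 2.0 * Z + rng.normal(0.0, 1.0, n)

# Naive estimate: biased upward because treated units tend to have Z=1.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Backdoor adjustment via stratification on Z:
#   ATE = sum_z [E[Y|T=1,Z=z] - E[Y|T=0,Z=z]] * P(Z=z)
ate_backdoor = sum(
    (Y[(T == 1) & (Z == z)].mean() - Y[(T == 0) & (Z == z)].mean())
    * (Z == z).mean()
    for z in (0, 1)
)

# Inverse propensity weighting. Here the true propensity is known by
# construction; in a real study it would be estimated, e.g. with
# logistic regression.
e = p_treat
ate_ipw = np.mean(T * Y / e) - np.mean((1 - T) * Y / (1 - e))

print(f"naive:    {naive:.2f}")         # inflated by confounding
print(f"backdoor: {ate_backdoor:.2f}")  # close to the true effect 1.0
print(f"ipw:      {ate_ipw:.2f}")       # close to the true effect 1.0
```

In practice one would not hand-code these estimators against a known propensity; libraries such as DoWhy wrap this identify-estimate-refute workflow behind a single API, which is the style of tooling the talk relies on.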