Talk | Technical | Spanish

In Artificial Intelligence, the figure of the “Data Scientist” quickly found its niche in companies: creating, training, and experimenting with data and models. All this work was done manually, with Jupyter Notebooks and Python libraries… and a lot of patience!


A few years ago, the academic world raised a hypothesis: what if we trained a model whose very purpose is to choose the hyperparameters that optimize the performance of the model we are training?


This hypothesis has materialized in the trend known as AutoML: a multitude of libraries and processes that promise to automate model configuration and training, the reproducibility of experiments, the generation of metrics… so that we can focus purely on decision-making. To what extent is that true?
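The core idea behind that hypothesis can be sketched in a few lines: an outer search loop proposes hyperparameters, an inner routine trains and scores a model with them, and the best configuration wins. Below is a minimal, self-contained illustration using random search; `train_and_score` is a hypothetical stand-in for a real training run (real AutoML libraries replace both the objective and the search strategy with far more sophisticated machinery).

```python
import random

def train_and_score(learning_rate, max_depth):
    # Hypothetical stand-in for training a model and returning a
    # validation score; this synthetic function peaks near
    # learning_rate=0.1 and max_depth=6.
    return 1.0 - abs(learning_rate - 0.1) * 2 - abs(max_depth - 6) * 0.05

def random_search(n_trials=50, seed=42):
    # Outer "meta" loop: propose hyperparameters, evaluate, keep the best.
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.3),
            "max_depth": rng.randint(2, 10),
        }
        score = train_and_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search()
print(best_params, round(best_score, 3))
```

AutoML frameworks generalize exactly this loop: smarter search strategies (Bayesian optimization, bandits, evolutionary methods), plus automation of preprocessing, model selection, and experiment tracking.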


Let’s talk about the origins of AutoML, how it has evolved, and the proliferation of low-code tools, products, frameworks, and libraries that are changing the rules of the game. Should Data Scientists be worried?