Since the late 2010s, business use cases for graph technologies have become more widespread. Along with that, a practice of _graph data science_ has emerged, blending graph capabilities into existing data science teams and industry use cases. Research topics have also moved into production: recent innovations such as _graph neural networks_ provide excellent solutions in business use cases for _inference_, a topic which has otherwise perplexed the semantic web community for decades. In practice, we tend to encounter “disconnects” between the expectations of IT staff (who are more familiar with relational databases and big data tools) and business users (who are more familiar with their use cases, such as network analysis in logistics).
This talk explores _graph thinking_ as a cognitive framework for approaching complex problem spaces. It is the missing link between what stakeholders, domain experts, and business use cases require and what comes from more “traditional” enterprise IT, which tends to focus on approaches such as the “data lakehouse” while doing little yet with large graphs.
We’ll explore some of the more common use cases for graph technologies across different business verticals, and look at how to approach a graph problem starting from a blank whiteboard. Where are graph databases needed? Where should one focus instead on graph computation with horizontal scale-out and hardware acceleration? How do graph algorithms complement what graph queries perform? There is plenty of excellent open source software to leverage, and our team at Derwen has been busy with open source integration on behalf of a very large EU manufacturing firm. Current solutions integrate Ray, Dask, FastAPI, RAPIDS, RDFlib, openCypher, and several C++ libraries for HPC.
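To make the queries-versus-algorithms distinction concrete, here is a minimal sketch in plain Python with a hypothetical supply-chain graph (the node names and the toy PageRank implementation are illustrative assumptions, not code from any of the libraries named above). A graph *query* answers a local, pattern-shaped question about specific nodes, while a graph *algorithm* computes a global property over the whole graph:

```python
# Toy supply-chain graph as an adjacency list (hypothetical data).
edges = {
    "supplier_a": ["plant_1"],
    "supplier_b": ["plant_1", "plant_2"],
    "plant_1": ["warehouse"],
    "plant_2": ["warehouse"],
    "warehouse": [],
}

def neighbors(node):
    """Query-style access: which nodes does this node ship to?
    Local and index-friendly -- the kind of question openCypher answers."""
    return edges[node]

def pagerank(graph, damping=0.85, iterations=50):
    """Algorithm-style access: rank every node by global link structure.
    A simplified power-iteration PageRank, for illustration only."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, targets in graph.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling node: redistribute its rank evenly.
                for n in nodes:
                    new_rank[n] += damping * rank[node] / len(nodes)
        rank = new_rank
    return rank

print(neighbors("supplier_b"))    # local query about one node
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))  # global algorithm: most central node
```

The two access patterns complement each other: queries retrieve specific neighborhoods fast, while algorithms such as PageRank surface structure (here, the warehouse as the most central node) that no single local lookup could reveal.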