TALK

Validating Big Data Jobs – Stopping Failures before Production (w/ Spark, BEAM, & friends!)

Wednesday 14th

14:05 | 14:45

Theatre 25

Technology

Description:

As big data jobs move from the proof-of-concept phase into powering real production services, we have to start considering what will happen when everything eventually goes wrong (such as recommending inappropriate products, or other decisions made on bad data). This talk will attempt to convince you that we will all eventually get aboard the failboat (especially with ~40% of respondents automatically deploying their Spark jobs' results to production), and that it's important to automatically recognize when things have gone wrong so we can stop deployment before we have to update our resumes.

Figuring out when things have gone terribly wrong is trickier than it first appears, since we want to catch the errors before our users notice them (or, failing that, before CNN notices them). We will explore general techniques for validation, look at responses from people validating big data jobs in production environments, and survey libraries that can assist us in writing relative validation rules based on historical data. For folks working in streaming, we will talk about the unique challenges of attempting to validate in a real-time system, and what we can do besides keeping an up-to-date resume on file for when things go wrong.
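As a taste of what a relative validation rule can look like in Spark, the sketch below checks a fresh run's record count against the average of previous runs and refuses to let the results go out if the count drifts too far. It is only illustrative: the paths, the record_count column, and the 0.5x–2x threshold are hypothetical and not taken from the talk.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("validate-job-output").getOrCreate()

# Hypothetical locations: the latest job output and a table of record counts
# from previous successful runs.
current = spark.read.parquet("hdfs:///jobs/recs/output/latest")
history = spark.read.parquet("hdfs:///jobs/recs/metrics/run_counts")

current_count = current.count()
avg_count = history.agg(F.avg("record_count")).first()[0]

# Relative rule: block deployment if today's output is less than half of,
# or more than double, the historical average record count.
if avg_count is None or not (0.5 * avg_count <= current_count <= 2.0 * avg_count):
    raise SystemExit(
        f"Validation failed: {current_count} records vs historical average {avg_count}")

spark.stop()

In practice the threshold would itself be derived from the spread of past runs rather than hard-coded.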

The talk will include code examples in Apache Spark and explore similar concepts in Apache BEAM (a cross-platform tool), but the techniques should be applicable across systems.
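To illustrate the same idea in a Beam pipeline, a DoFn can count records that fail a check using Beam's metrics API, and the driver can fail the job after the pipeline finishes if too many bad records were seen. Again, this is only a rough sketch; the record shape, the "validation"/"bad_records" metric names, and the threshold are made up for the example.

import apache_beam as beam
from apache_beam.metrics import Metrics
from apache_beam.metrics.metric import MetricsFilter


class DropBadRecords(beam.DoFn):
    # Drops records with a negative price and counts how many were dropped.
    def __init__(self):
        self.bad_records = Metrics.counter("validation", "bad_records")

    def process(self, record):
        if record.get("price", 0) < 0:
            self.bad_records.inc()
        else:
            yield record


pipeline = beam.Pipeline()
(pipeline
 | "Create" >> beam.Create([{"price": 10}, {"price": -1}, {"price": 3}])
 | "Validate" >> beam.ParDo(DropBadRecords()))

result = pipeline.run()
result.wait_until_finish()

# Query the counter after the run and fail loudly if too much of the data was bad.
counters = result.metrics().query(MetricsFilter().with_name("bad_records"))["counters"]
bad = counters[0].committed if counters else 0
if bad > 1:
    raise SystemExit(f"Too many bad records: {bad}")

For streaming jobs, some runners let you query these metrics while the pipeline is still running, which is one way to notice trouble before the end of a never-ending job.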

To keep the talk interesting, real-world examples (with company names removed) will be presented, along with several Creative Commons licensed cat pictures and an adorable panda GIF.

MEDIA

Keynote