Data processing pipelines for Small Big Data
Small Big Data is a grey area in data science between “it fits in memory” and 100 TB. Some of the tools built for big data are overkill at this scale, and they can require specialized expertise that not every organization has. In contrast, many of the libraries and paradigms used for small data become expensive when deployed to the cloud. How can we process large-ish data fast and efficiently?
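One common approach to this "larger than memory, smaller than a cluster" regime (a hedged sketch, not necessarily the pipeline the talk describes) is streaming: read the data record by record so memory use stays constant regardless of file size. The function and column names below are illustrative only.

```python
import csv
import io

def stream_sum(lines, column):
    """Aggregate one numeric column of a CSV stream without
    loading the whole dataset into memory."""
    reader = csv.DictReader(lines)
    total = 0.0
    for row in reader:          # one record in memory at a time
        total += float(row[column])
    return total

# Tiny in-memory stand-in for a file too large to load at once;
# in practice this would be open("huge.csv") instead.
data = io.StringIO("id,value\n1,2.5\n2,3.5\n3,4.0\n")
print(stream_sum(data, "value"))  # 10.0
```

The same idea scales up via chunked readers (e.g. pandas' `chunksize` option) or out-of-core frameworks, without jumping straight to a full big-data stack.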
Esteban J. G. Gabancho
Anthony Franklin, PhD
Accomplished advanced analytics expert and consultant. Serial entrepreneur and co-founder of Fanalytical Inc. Former Div. I college football player and lifelong academic.