The Data Source #5 | Data Science Notebooks & the Opportunity Ahead
Welcome to The Data Source, your monthly newsletter covering top innovations in data infrastructure, engineering, and developer-first tooling.
Subscribe now and never miss an issue!
In this edition of The Data Source, I'm diving into current data science notebook solutions to make sense of the ongoing innovation in the space and share some areas that I'm particularly interested in investing in.
Data Science Notebooks, simply explained
There are a number of free, open-source notebooks, as well as a mix of proprietary solutions, in the market today that are being adopted by data scientists across the enterprise. Even though computational notebook interfaces have existed for decades, we are starting to see some innovative work being done both at the notebook level and beyond.
Before I jump into some of the ongoing work around next-gen computational notebooks and collaborative data workspaces, let me set some context:
When Project Jupyter spun off from IPython in 2015, the idea was to create an open-source environment, governed by open standards, for interactive computing that supports multiple programming languages. Fundamentally, a Jupyter notebook is a JSON document that stores live code, organized logically across cells, and displays the outputs of executing that code in a way that's relatively easy for humans to read and write.
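To make that concrete, here's a minimal sketch of what that JSON document looks like from Python's point of view, using only the standard library (the notebook file name is a made-up example):

```python
import json

# A Jupyter notebook on disk is just a JSON document:
# a list of cells plus some notebook-level metadata.
with open("analysis.ipynb") as f:  # "analysis.ipynb" is a hypothetical file
    nb = json.load(f)

print(nb["nbformat"])  # notebook format version, e.g. 4

for cell in nb["cells"]:
    # Every cell records its type ("code" or "markdown") and its source text;
    # code cells additionally carry the outputs captured at execution time.
    source = "".join(cell["source"])
    print(cell["cell_type"], source[:60])
```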
Over time, Jupyter notebooks gained popularity within the data science community because they created a new standard for teams to collaborate on code in an interactive way. Much like on a Google document, multiple teammates can work off a Jupyter notebook, contributing, embedding, and tracking text and code in real time. Teams can add descriptive text alongside their code to explain what the code is doing, why certain decisions were made, and how certain analyses are derived.
With ~10 million public Jupyter notebooks on GitHub and a growing number (~2,000) of open job postings on LinkedIn that list "Jupyter Notebook" as a required experience or bonus skill set, it's clear that Jupyter has become a critical tool in the data science stack.
As organizations' data processes grow to the point where collaboration between teams (and tooling) becomes a friction point, exciting opportunities will open up in the Jupyter ecosystem for a better collaborative front-end experience.
But while there are undoubtedly many things to love about Project Jupyter, there are a few key areas for improvement that came up in my research and conversations with data practitioners that I'd like to call out:
Workflow-oriented tools that bridge the gap between performing computational work and communicating the results
While Jupyter was developed around the idea of "shareable" computational notebooks, aimed at making it easier for data scientists, analysts, and scientific practitioners to iterate faster on code and share their experimental results with their teams (typically over GitHub, Jupyter Notebook Viewer, email, etc.), my research shows that Jupyter's sharing functionality doesn't extend well to non-programming, non-technical counterparts, as it was primarily built with the technical user in mind. It's a problem that's becoming more and more apparent in specific industries, such as biotechnology and healthcare, where data scientists and analysts often need to communicate back and forth with non-technical stakeholders, such as scientists and researchers.
From my research, it seems like there is consensus that computational notebook interfaces have created a step change in the way that folks collaborate and iterate on processes. What's missing in the market today is a better UI/UX for collaboration (more on this below) and a better workflow-oriented tool that can effectively close the gap between performing computational work and reporting out on those results in a way that's secure, easily reproducible, and easily accessible to a diverse group of users.
In fact, one of the themes I've seen come up a lot from the data science community is this idea of better data shareability, where you have a set process in place for creating reproducible work. Today, a number of solutions have emerged in the market, such as Hex, Curvenote, and Noteable, that are fundamentally rethinking the way teams beyond the data science organization collaborate on data through a shared interface. Unlike Jupyter, which caters largely to one particular user group, these next-gen solutions are taking a different approach: instead of focusing on the needs of the individual data scientist or analyst, their goal is to open up access to data for anyone within the organization.
Building on top of the Jupyter ecosystem, these workflow-specific solutions are hyper-focused on turning data science computations into meaningful, usable outputs (e.g. reports, dashboards), while optimizing for a better data sharing and collaboration experience for users. These tools integrate deeply into the data layer, and offer strong back-end support and a good query experience.
Also tackling the "computation-to-communication" challenge is another interesting category of solutions: data framework products that focus on the development of interactive apps. Tools such as Streamlit, Plotly Dash, Voila, RStudio Shiny, and Panel enable ML engineers and data scientists to turn their code and ML scripts into interactive web applications.
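For a flavor of what these frameworks look like in practice, here's a minimal, hypothetical Streamlit sketch: a few lines of ordinary pandas/NumPy code, plus one widget, become an interactive web app (the data is synthetic and the app is illustrative, not a reference implementation):

```python
# app.py -- launch with: streamlit run app.py
import numpy as np
import pandas as pd
import streamlit as st

st.title("Random walk explorer")  # page title rendered in the browser

# One interactive widget; Streamlit re-runs the script top to bottom on change.
n = st.slider("Number of steps", min_value=10, max_value=1000, value=100)

# Plain numpy/pandas code, exactly as it might appear in a notebook cell.
df = pd.DataFrame({"value": np.random.randn(n).cumsum()})
st.line_chart(df)  # rendered as an interactive chart, no front-end code needed
```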
Tools that are building a new computational medium altogether
You've probably heard, over and over again, that a good data science practice must follow the rules of good software engineering, and this couldn't be more true. Data science has evolved significantly, to the point where data scientists today are building and deploying their own software.
While traditional notebooks are fairly good at delivering guided experiences for software development, they are still missing some of the core features (autocompletion, unit testing, debugging, version control, code reviews, documentation, etc.) that would encourage best practices around building reliable software. In addition, there are a number of known operational challenges around running and executing notebooks that make it quite difficult to go from prototyping to production.
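As one illustration of the gap, here's a hedged sketch of a common workaround today: pulling core logic out of notebook cells into a plain Python module so that standard tooling like pytest can exercise it (the function and file names are hypothetical):

```python
# features.py: logic extracted from a notebook cell so it can be unit tested,
# diffed cleanly in version control, and reviewed like any other code.
import pandas as pd

def add_rolling_mean(df: pd.DataFrame, column: str, window: int = 7) -> pd.DataFrame:
    """Append a rolling-mean column; a pure function is easy to test."""
    out = df.copy()
    out[f"{column}_rolling_{window}"] = out[column].rolling(window).mean()
    return out

# test_features.py: a pytest-style check that a .ipynb file can't easily host.
def test_add_rolling_mean():
    df = pd.DataFrame({"x": range(10)})
    result = add_rolling_mean(df, "x", window=3)
    assert "x_rolling_3" in result.columns
    assert result["x_rolling_3"].iloc[-1] == 8.0  # mean of 7, 8, 9
```

The notebook can then import add_rolling_mean, keeping the exploratory interface while the logic itself lives where the usual software engineering toolchain can reach it.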
The question is: what does it take to address these challenges?
Among the more recent innovations in the data science notebook space is Deepnote, a Python notebook built around the Jupyter ecosystem. Its approach is to create a completely new computational front-end that brings core software development best practices to the data science workflow. At a quick glance, what really stands out is its focus on real-time collaboration and its growing library of integrations.
Other tools in this category include Google Colaboratory and Datalore.
It's exciting to see innovation happen at the UX/UI layer. In addition to improving collaboration and data science reporting capabilities, I think there's an exciting opportunity around improving the discoverability of notebooks and enabling data governance and auditability (especially for organizations that have a significant number of notebooks under their purview), and I'll be watching this space as it continues to evolve.
Let's Chat!
And that's a wrap, folks! To all the founders and data scientists out there, I'd love to swap notes if this is a space that you're passionate about. My Twitter DM is open, and you can always email me at priyanka@work-bench.com!
/Priyanka
…
Work-Bench is an early-stage, enterprise software-focused VC firm based in NYC. Our sweet spot for investment is the Seed II stage, which correlates with building out a startup's early go-to-market motions. In the data world, we've been fortunate to invest in companies like Cockroach Labs, Arthur, Algorithmia, Datalogue, Alkymi, x.ai and others.