A Simple 3-Step Recipe For Cooking Up Scalable Analytics
January 10, 2017 | 12:13 PM
Big Data Analytics
Originally posted 12/13/16 on TeradataVoice by Bill Franks, Teradata
Most of my use cases around scaling analytics involve transforming processes and even transforming products. We reconfigure operations or the supply chain, industrialize analytics architectures, and take any number of other steps to make the entire organization more data-driven. But stop for a moment to think of a fast-growing cookie manufacturer that needs to make sure some things never change – like the flavor or texture of its best-selling chocolate chip variety. It needs a recipe for a cookie that tastes the same whether you’re baking a dozen or a million at a time.
So how do you innovate in such situations, especially given the risk that change may reach the point where its consumers feel the experience is no longer the same? In analytics, the balancing act lies in ensuring that new processes integrate well with current operational environments. In the spirit of our cookie analogy, here’s my own three-point recipe for analytics that enable scaling and risk management in an era where algorithms take more and more action on their own:
- Pilot to Scale – To scale a business analytics solution, you must pilot how a given approach works in a few cases before you try to make it work for all cases. The goal is to automate. The cookie manufacturer thoroughly tests a new recipe in small batches before turning on the assembly line. To illustrate with an analytics example, I used to build propensity models – commonly used to predict who will buy what – for a retail client, analyzing who’d be buying their top 10 or 20 products. While I was working with many different product models and categories, I found that by the time I had built 5 or 10 models I was seeing a lot of commonalities. That, in turn, allowed me to semi-automate a model creation framework for the hundreds of other categories and products.
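That "model factory" idea can be sketched roughly as follows – a minimal illustration only, where the data, the function names, and the trivially simple scoring rule are all invented for this example, not the actual models described above:

```python
# Hedged sketch of a semi-automated propensity-model framework:
# one templated recipe, reused across every product category.

def build_propensity_model(training_rows):
    """Fit a toy per-feature rate model: P(buy | feature present)."""
    counts, buys = {}, {}
    for features, bought in training_rows:
        for f in features:
            counts[f] = counts.get(f, 0) + 1
            if bought:
                buys[f] = buys.get(f, 0) + 1
    return {f: buys.get(f, 0) / counts[f] for f in counts}

def score(model, features):
    """Average the per-feature buy rates into a crude propensity score."""
    rates = [model.get(f, 0.0) for f in features]
    return sum(rates) / len(rates) if rates else 0.0

# The loop below is the "semi-automated" part: once the recipe is
# validated on a few pilot categories, the same code builds a model
# for every remaining category instead of hand-crafting each one.
training_data = {
    "cookies":  [({"promo", "repeat"}, True), ({"new"}, False)],
    "crackers": [({"repeat"}, True), ({"promo"}, False), ({"repeat"}, True)],
}
models = {cat: build_propensity_model(rows) for cat, rows in training_data.items()}
```

The point is not the (deliberately naive) scoring rule but the structure: the pilot phase validates one recipe, and the dictionary comprehension at the end scales it to any number of categories.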
- Risk is Unavoidable – To paraphrase the Spider-Man mantra, “with great power comes great responsibility”… and risk. As you automate and more and more decisions are made proactively by algorithms, things can and will go wrong. Your strategy should focus on the good enough vs. the perfect: the goal is not to maximize the quality of each individual decision, but to maximize the aggregate impact of the process across all decisions.
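A toy simulation makes the "aggregate over perfect" arithmetic concrete. All of the numbers here – gain, loss, and accuracy – are invented for illustration:

```python
import random

# Hedged illustration: automated decisions that are individually
# imperfect can still win decisively in aggregate.
GAIN_WHEN_RIGHT = 1.00   # invented value of a good automated decision
LOSS_WHEN_WRONG = -3.00  # invented cost of a bad one
ACCURACY = 0.85          # "good enough", deliberately far from perfect

def aggregate_impact(n_decisions):
    """Total value over many automated decisions at the given accuracy."""
    total = 0.0
    for _ in range(n_decisions):
        right = random.random() < ACCURACY
        total += GAIN_WHEN_RIGHT if right else LOSS_WHEN_WRONG
    return total

# Expected value per decision: 0.85 * 1.00 + 0.15 * (-3.00) = 0.40.
# Any single decision can lose 3.00, yet at scale the process as a
# whole reliably comes out ahead.
random.seed(42)
total = aggregate_impact(100_000)
```

Even with 15% of decisions going wrong at triple the cost of a win, the process nets roughly 0.40 per decision – which is exactly why the strategy optimizes the portfolio of decisions, not each one.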
- Build Complexity Over Time – Operational analytics will continue to become more and more complex over time. Just think about the automobile and the system-of-systems innovation curve from automated braking to full autopilot; these innovations build on one another. Without the building blocks of automated braking and lane-change detection, you can’t get to the much more exciting autopilot functionality. The same is true for operational analytics: you don’t replace traditional analytics, you build on them – adding new layers of complexity after the prior layers are fully deployed and stable. This layer-cake approach is what might guide a shipping company to first optimize truck contents; then optimize daily routes; and ultimately do it all in real time.
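The shipping example above can be sketched as three layers, each consuming the stable output of the one before it rather than replacing it. Everything here – function names, the capacity threshold, the route and delay representation – is an invented placeholder, not a real optimizer:

```python
# Hedged sketch of the layer-cake approach: each analytics layer
# builds on the previous layer's output instead of rebuilding it.

def pack_trucks(orders):
    """Layer 1: group order sizes into truckloads (toy capacity rule)."""
    trucks, load = [], []
    for o in sorted(orders, reverse=True):
        load.append(o)
        if sum(load) >= 10:       # invented capacity threshold
            trucks.append(load)
            load = []
    if load:
        trucks.append(load)
    return trucks

def plan_routes(trucks):
    """Layer 2: assign each packed truck a daily route, reusing layer 1."""
    return [{"truck": i, "stops": len(t)} for i, t in enumerate(trucks)]

def reoptimize_realtime(routes, traffic_delay):
    """Layer 3: adjust the existing plan in real time, not from scratch."""
    return [{**r, "eta_penalty": traffic_delay} for r in routes]

routes = plan_routes(pack_trucks([4, 7, 3, 6, 2]))
live = reoptimize_realtime(routes, traffic_delay=5)
```

Each function is only added once the layer below it is deployed and stable, which is the whole discipline of the approach.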
All of this takes work, but remember that even a one or two percent increase in efficiency can translate into millions of dollars if you’re a large organization. And don’t forget that your brand is ultimately at stake, because scaling can threaten core business value if it’s not aligned with quality control and product consistency.