You've probably heard buzzwords like 'predictive analytics' or 'digital value propositions,' but what do they mean, and how do they fit together? This guide offers a solid foundation for data fluency: a shared understanding of how data is disclosed, manipulated, and consumed, and the implications of each stage. It will help you understand how developers think, how business stakeholders can be effective in digital efforts, and which critical questions to ask at each stage of a data 'supply chain.' And of course, this deep dive into data fluency is part of a larger opportunity to increase digital fluency in yourself and your organization.
In our Digital Thinking Guidebooks, we talk a lot about the need to expand beyond our default ways of thinking and integrate new mindsets into our work.
Of course, some parts of our businesses have requirements that mean we must stay in existing analog mindsets for the moment. But other areas allow for newer, more creative thinking, and if we're not prepared to try on these new mindsets, we'll miss out on big opportunities to innovate.
Throughout this guidebook, we'll be highlighting new mindsets which are necessary for understanding and interacting with data masterfully. Let's start by expanding how you think about data's role in your business and industry, moving from a mindset of data as a utility to seeing data fluency as a capability.
Our first instincts for how to use data often focus on improving existing 'analog' processes or products incrementally: making them run faster, better, or cheaper. In this 'utility' role, data is used to optimize the sales, production and support pipelines of an organization. It's considered part of the IT domain, which is also commonly viewed as a utility. We also often imagine data as something that is collected and stored for future use, like a filing cabinet full of records that we'll come back to later.
If we want to become data fluent and take advantage of data's full potential, we have to start thinking of digital and data as something that greatly expands the limits of what is possible. Data is increasingly central to value creation, exponential business models and new technologies. And we need to be able to access, combine and transform the data we have quickly, which is more like waterworks than a filing cabinet. In order for this to work, we need to build data fluency into every part of our organizations.
Two years ago, the share of digital initiatives that failed to reach their goals was a staggering 70%. In 2021, of the 26% of companies reporting success implementing digital change, only half managed to sustain that transformation. Many reasons are given for why this occurs, ranging from inconsistent 'buy-in' and a lack of investment to lagging technologies.
For any digital transformation to be sustained, however, the majority of the company must increase its digital fluency, including an understanding of the building block of digital: data.
Causeit talks a lot about shifting from an existing or default mental model to a new mental model. This doesn't mean the current mental model is bad; it means leaders need to know which mental model they're applying. Some parts of our businesses have requirements that keep us in an existing analog model for the moment, while other areas allow for newer thinking. We'll return to these mindset shifts throughout the article. Changing the way we think of documents from 'attachments' to links in the cloud, from data at rest to data in motion, or from spreadsheets to algorithms are all examples of such shifts.
Decomposition: breaking down a complex problem into several simpler problems
Abstraction: a model of a system which leaves out unnecessary parts
Pattern recognition: using reusable components to minimize error and work
Algorithm: a series of unambiguous instructions to process data, make decisions and/or solve problems
Program: algorithms converted to programming languages; sometimes called applications
Computational thinking is a mindset that allows machines and humans to work together to solve real-world problems. We need more than buzzwords that make us sound cool; we need a deeper understanding that allows us to discuss technology meaningfully.
Programmers tend to think of problems as computations, but business stakeholders usually think in terms of packaged solutions. So, to collaborate effectively, we have to do something called decomposition: breaking down a large, complex problem into several smaller, simpler ones.
For example, to decompose the task of drawing a human face, we could break it into step-by-step instructions. First, draw a circle, then draw the lips and the eyes. If there is a variable, like hair, you might have parallel steps: Is the hair spiky or smooth? Based on the choices you make, the next steps might change.
As you decompose a problem into smaller steps, you also practice abstraction: creating a model of a system that leaves out unnecessary parts while allowing us to see how different pieces fit together. In our example, a face has turned into a collection of individual features, and each feature is composed of lines, shapes, or strokes of a pen.
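The face-drawing example above can be sketched in code. This is a hypothetical illustration of decomposition and abstraction, not part of the original guide; all function names and return values are invented for the sketch.

```python
# Decomposition: 'draw a face' broken into smaller, simpler steps.
# Abstraction: each function names a feature and hides the pen strokes
# that produce it. (Illustrative sketch; names are invented.)

def draw_outline():
    return "circle"

def draw_eyes():
    return "two dots"

def draw_lips():
    return "curved line"

def draw_hair(style):
    # A variable like hair style creates parallel steps: spiky or smooth?
    return "jagged strokes" if style == "spiky" else "smooth strokes"

def draw_face(hair_style):
    # The face is now a collection of individual features.
    return [draw_outline(), draw_eyes(), draw_lips(), draw_hair(hair_style)]

print(draw_face("spiky"))
# Output: ['circle', 'two dots', 'curved line', 'jagged strokes']
```

Notice that changing one choice (the hair style) only changes one step; the rest of the model is untouched, which is exactly what abstraction buys us.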
Another element of computational thinking is looking for patterns that are reusable in multiple contexts, like building blocks. As organizations go digital, they often end up with many parallel systems built 'from scratch,' creating a digital ecosystem that is expensive and complex. The challenge they face now is to standardize parts of that complexity without constraining anyone, so everyone can focus on what they do best.
These reusable building blocks can be used to create algorithms. An algorithm is a series of specific instructions telling a computer how to process data, make decisions, or solve problems. Algorithms can be combined into programs or applications that work together to process data in useful ways.
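To make the idea concrete, here is a minimal sketch of reusable building blocks combined into a simple data-processing algorithm. It is an invented example (the record shape and function names are assumptions, not from the guide): each small function is a standardized, reusable step, and the pipeline that chains them is the algorithm.

```python
# Reusable building blocks: small, standardized functions that many
# pipelines could share. (Hypothetical sketch; names are invented.)

def clean(record):
    # Strip stray whitespace from every field.
    return {key: value.strip() for key, value in record.items()}

def validate(record):
    # Decide whether a record is usable (here: has a non-empty email).
    return bool(record.get("email"))

def enrich(record):
    # Derive a new field from an existing one.
    return {**record, "domain": record["email"].split("@")[-1]}

def process(records):
    # The algorithm: unambiguous steps to clean, filter, and enrich data.
    return [enrich(r) for r in (clean(r) for r in records) if validate(r)]

records = [{"email": " ada@example.com "}, {"email": ""}]
print(process(records))
# Output: [{'email': 'ada@example.com', 'domain': 'example.com'}]
```

Because each block does one job, a team could standardize `clean` and `validate` across the organization while letting each group write its own `enrich` step, which is the balance between standardization and freedom described above.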
Think of a problem or process you're working with that could benefit from computational thinking, i.e., being broken down into individual steps. What key steps can you identify? When you look at the list of steps, notice how using computational thinking may change the way you see the problem.
To understand how data works, we also need to understand the lifecycle of data as it moves through the stages and activities in the data supply chain. There are three distinct stages:
1) Disclosure by a human or a sensor or a system, during which data is acquired and stored;
2) Manipulation, during which data is aggregated and analyzed; and
3) Consumption, where data is used, shared or sold, and disposed of.
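The three stages above can be sketched as a tiny pipeline. This is an illustrative sketch only, using an invented vehicle-speed example; the function names mirror the stage names, and the stage boundaries are simplified for clarity.

```python
# The data supply chain as three stages (illustrative sketch).

def disclose(sensor_readings):
    # Stage 1: data is disclosed by a human, sensor, or system,
    # then acquired and stored.
    return {"raw": sensor_readings}

def manipulate(store):
    # Stage 2: data is aggregated and analyzed.
    readings = store["raw"]
    store["average"] = sum(readings) / len(readings)
    return store

def consume(store):
    # Stage 3: data is used (or shared, or sold), then disposed of.
    result = f"average speed: {store['average']} km/h"
    store.clear()  # disposal at end of life
    return result

print(consume(manipulate(disclose([48, 50, 52]))))
# Output: average speed: 50.0 km/h
```

Real pipelines are far messier, but the sketch shows why each stage raises different questions: disclosure is about acquisition and storage, manipulation about analysis, and consumption about use and disposal.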
As an example, take the idea of a driverless car. The goal of 'driving' needs to be decomposed into the sub-processes that make it up, like changing lanes or parking. An abstraction is a model of the roads, cars, laws, and environments the vehicle drives through. Patterns could be established for interpreting road signs. An algorithm is the collection of steps the 'driverless' car follows to determine what to do in a given situation, such as another vehicle stopping unexpectedly in front of it. The collection of all these elements (abstractions, patterns, and algorithms) equals a program.
For this article, we'll use the example of a driverless car or autonomous vehicle as the context for data’s journey through the supply chain.