Tidy Data: Why It Matters, the data.table Way


Have you ever been to Norway’s capital, Oslo? I have. I’ve also visited Asia. After having seen Oslo, and after traveling throughout Asia a couple of times, the one thing that stands out about Oslo is the level of tidiness. No litter in the streets. The air is fresh because the streets are car-free, or electric vehicles are zooming by. And the consistent spatial planning makes you feel as if you have arrived in More’s Utopia.

But that’s not the kind of “tidy” this article is about. When you’re a data scientist and you hear the word “tidy,” the first thing that should pop into your mind is the “tidyverse”: a mystical yet very useful concept that makes your job a lot easier. In this article I explain the concept of “tidy data” and how the data.table library relates to it.

Tidy data

If I lived in Norway, I would stare out at the sea and count boats all day.
If I were a bad data scientist, my data on those boats would look like the example below. Let’s load the required libraries, set the seed, and generate some example data.
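The original snippet isn’t reproduced here, so the sketch below invents its own column names and distributions; the boats_wide table, the boat colors, and the Poisson counts are assumptions made purely for illustration.

library(data.table)
library(dplyr)
library(ggplot2)

set.seed(42)

# "Sloppy" wide data: the count of boats is spread over one column per
# boat color instead of a single count column plus a color column.
boats_wide <- data.table(
  date  = seq(as.Date("2020-01-01"), by = "day", length.out = 30),
  red   = rpois(30, lambda = 5),
  blue  = rpois(30, lambda = 8),
  green = rpois(30, lambda = 3)
)

head(boats_wide)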

Indeed, this isn’t tidy data. According to Hadley Wickham, data science guru and chief scientist at RStudio, tidy data abides by three rules:

1. Each variable must have its own column.
2. Each observation must have its own row.
3. Each value must have its own cell.

The data above violates rule one and rule two. The variable “count of boats” is spread over three columns, and there is no column for boat color. There are multiple libraries that can transform your sloppy data into tidy data, and most of them live in an ecosystem called the tidyverse. How to transform data using these libraries is described very well in Wickham’s book, R for Data Science. However, I want to show you how the popular data.table package can do the work for you, and faster.

Transforming the above dataset can be done by converting the data table from “wide format” to “long format.”
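A minimal sketch of that conversion with data.table’s melt(), using the hypothetical boats_wide table from above:

boats_long <- melt(
  boats_wide,
  id.vars       = "date",
  measure.vars  = c("red", "blue", "green"),
  variable.name = "color",   # new column holding the boat color
  value.name    = "count"    # new column holding the count of boats
)

head(boats_long)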

Why is this relevant? Because many packages reward tidy data. A popular plotting package is ggplot2, also part of the tidyverse. If we were to plot the data with three line plots using the sloppy data set, we would have to write quite a lot of code.
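Roughly, the wide-data version looks like the sketch below (again based on the assumed boats_wide table): one geom_line() per column, colors set by hand, and no legend.

ggplot(boats_wide, aes(x = date)) +
  geom_line(aes(y = red),   colour = "darkred") +
  geom_line(aes(y = blue),  colour = "darkblue") +
  geom_line(aes(y = green), colour = "darkgreen") +
  labs(y = "count")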

Each boat color has its own line, and we need to specify each column individually. Worse, the lines don’t get distinct colors automatically, so those have to be set by hand as well. Finally, there is no legend, so the reader of our plot has no idea what the colors represent. The red line could stand for the type of boat, the flag under which it sails, and so on.

However, if you consistently use ggplot2 with tidy data, your life will become a lot easier. The same graph, with a legend, can be created with three lines of code.
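With the long boats_long table from the sketch above, a single aesthetic mapping does the work and the legend appears automatically:

# one line per color, legend included for free
ggplot(boats_long, aes(x = date, y = count, colour = color)) +
  geom_line()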

Or even better, split it into three different plots with only one line of code:
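Using the same assumed data, facet_wrap() is the one extra line that splits the plot into a panel per color:

# one panel per boat color
ggplot(boats_long, aes(x = date, y = count)) +
  geom_line() +
  facet_wrap(~ color)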

It’s surely become clear by now why the tidyverse matters: coding speed. Follow the tidyverse rules, and you get rewarded with more efficient coding. Is this always true? Unfortunately, no. As another famous data scientist, Jeff Leek, points out: (1) sometimes you need computations that are more efficient if the data is represented in another format, and (2) some functions and libraries might ask you to use a different format. But generally, working with tidy data will earn you a lot of data science street cred.

data.table

There is another way to get that street cred: computing speed. Coding speed matters when you’re working with small datasets; computing speed matters when you’re working with huge amounts of data. And that’s where data.table comes in.

Let’s benchmark data.table against dplyr for some basic operations. In the first operation, we count the total number of boats per day. In the second, we filter the counts by color. In the third, we select three out of four columns.
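The original benchmark code isn’t shown here, so the sketch below is an assumption: it uses the microbenchmark package and the hypothetical boats_long table from above (which only has three columns, so the selection example picks two of them).

library(microbenchmark)

# dplyr operates on plain data.frames/tibbles, so give it its own copy
boats_df <- as.data.frame(boats_long)

microbenchmark(
  # 1. total number of boats per day
  dt_sum    = boats_long[, .(total = sum(count)), by = date],
  dplyr_sum = boats_df %>% group_by(date) %>% summarise(total = sum(count)),

  # 2. filter the counts by color
  dt_filter    = boats_long[color == "red"],
  dplyr_filter = boats_df %>% filter(color == "red"),

  # 3. select a subset of columns
  dt_select    = boats_long[, .(date, count)],
  dplyr_select = boats_df %>% select(date, count),

  times = 100
)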

The data.table syntax differs strongly from the dplyr syntax, and if you’re unfamiliar with it, it can be difficult to understand at first glance. However, as you can see from the benchmarks, data.table is twice as fast as its standard tidyverse ecosystem counterparts. In the long run, coding in R in the data.table way is rewarding. If you’re having a hard time working with the syntax, there is a very handy cheat sheet by DataCamp that should get you through your first weeks.

Wrapping up

If you want to be a fast data scientist, you should make the tidy data standards your own. If you want to be the fastest data scientist, use the data.table library. It’s like driving a Tesla in Norway.
You can find the images and code used in this example here.


Roel Peters works as an online business consultant and has degrees in economics, international relations, and data science. He started coding as a kid and has experience in half a dozen coding languages. His core specialization is knowledge discovery and insights generation. Roel is a wide-eyed techno-optimist and an avid supporter of a universal basic income.

