The Importance of Columnar Databases
January 5, 2015
A long, long time ago in the tiny Slobovian fishing village of Tikriphit, a merchant carved a transaction into a cuneiform tablet of his ledger. The customer’s name, tribe of origin, maritime permit, and items sold (fish net, spear, rope, etc.) were laboriously inscribed onto one line. This was the twentieth and last record to be carved onto this tablet. When he was done, the merchant smiled, marched it to the back room, and carefully placed it on a stack of similar tablets. With a few more tablets, he would top off this stack to its eventual height of one hundred. There were already thirty-five such stacks around the room. Almost seventy thousand transactions since the beginning of last year. Again, the merchant smiled; business was good.
A knock at the door. The King’s Taxmen were outside. The merchant realized he had forgotten this was Tax Day. Every year he had pulled each tablet off its stack, added up the sales figures for the last year, and had his tax payment ready for the King. But this year he had forgotten.
An involuntary twitch rolled across his lip. His hands trembled when he opened the door. “Gracious sirs,” he blubbered, “you’re early this year. I’m not ready.”
The first Taxman, placing his hand on his sword scabbard, leaned into the diminutive merchant. His foul breath hissed behind his knotted beard. He sneered, “You know the penalty for failing to pay the King’s tribute, don’t you?”
“Please, wait,” the merchant gasped. “I have all my records in the back room. Let me go through them and sum up my income.”
The second Taxman sneered. “Little man, you think we have all day to watch you add up records? Far easier to take you away now, let you work off your debt in the salt mines.”
The merchant fell back, reeling. “Ahhhhh!” he cried. “Why? Why didn’t I carve all my sales figures on their own board? All their numbers would have been in one place, and I could have known my tax right away. And now it’s too late!”
“Aye,” said the first Taxman, dragging the hapless merchant out the door by his hair. “It’s too late. And now, the salt gnomes will get a carving board of their own.”
And that, dear reader, is the importance of a columnar database.
Shoot forward to the present day, and not much has changed. The technology is a little smaller, but it’s still the same idea. You’re a multinational corporation selling your fishing gear over the Internet. You’re recording all your sales with an On-Line Transaction Processing (OLTP) database like Sybase ASE, Oracle 10g, or MySQL. Business is great. Unfortunately, all your data are stored record-by-record, and they’re all on one tiny little hard drive inside an old 386 in Aunt Petunia’s closet, because – well, because no one ever thought to move them. Even if you use your indexes wisely, the I/O required to get even the most basic summary figures off that hard drive – scanning every single record just to pull one or two fields – is going to be off the hoof. We’re talking hours, maybe even days.
A knock at the door.
See your problem? Whoever’s behind that door – regulators with punitive authority, customers with probing concerns, management with new crises – is going to make you wish you had an On-Line Analytical Processing (OLAP) engine instead. Powered by columnar storage, these OLAP engines read their data field-by-field, not record-by-record. And that makes all the difference in the world.
How’s that, you say? Consider that a table has multiple records, and that each record has multiple fields. For example, the contact list in your phone has a record for each person, and each record has a different field for name, address, home number, etc. These records and fields are synonymous with, respectively, rows and columns. In a row-oriented database, each record is stored in sequence, record #1 coming before record #2, and so on. To read any one field from every record, you have to scan past all the other fields of each record along the way. It’s unwieldy.
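The difference is easy to see in miniature. Here is a toy sketch (not any particular engine, and the table contents are invented) of the same three-row table stored both ways, summing a single field:

```python
# Row-oriented: the table is a sequence of whole records.
rows = [
    {"name": "Alice", "city": "Oslo",   "sale": 120},
    {"name": "Bob",   "city": "Bergen", "sale": 75},
    {"name": "Carol", "city": "Oslo",   "sale": 200},
]

# To total the "sale" field we must touch every record,
# skipping past the fields we don't need.
total_row_store = sum(record["sale"] for record in rows)

# Column-oriented: each field lives in its own contiguous array,
# so the sum reads only the values it needs.
columns = {
    "name": ["Alice", "Bob", "Carol"],
    "city": ["Oslo", "Bergen", "Oslo"],
    "sale": [120, 75, 200],
}
total_col_store = sum(columns["sale"])

assert total_row_store == total_col_store == 395
```

Both layouts hold the same data; the columnar one simply puts the sales figures on their own “tablet,” where a scan touches nothing else.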
In a columnar database, however, each column of data is stored contiguously on one logical unit number (LUN), which usually corresponds to a single piece of hardware, like a disk or an SSD. Consider the hapless merchant. If he had put all his sales figures on one tablet, that tablet would have been the LUN that saved him. He could have totaled all his income before the Taxman got through the door.
Having said that, columnar storage is not so important by itself. Rather, it is important as the foundation for everything that follows. This simple reconfiguration of data on media enables a variety of optimizations. With these optimizations, the retrieval performance is off the hoof.
The first optimization is super-charged indexing. Freed of the constraints of record management, the database engine can apply indexing to each column individually. Each column may have a different index type, and each index may be tuned separately.
For instance, the most basic index is the Fast Projection (FP) index, which is the data themselves. If you’re used to row-oriented databases, this can be another strange concept. Rather than the index being generated separately and stored elsewhere, the raw column data serve as their own index. Literally speaking, they are what they are. It’s a heavy concept, but one worth contemplating.
Another optimization is what are called Low Fast (LF) indexes. These are applied to low-cardinality data. By “low cardinality”, I mean data like “City” or “Zip Code”, for which there are only a few distinct values relative to the number of rows. Each distinct value need be saved only once, in one small LUN, with the duplicates reduced to compact references.
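A minimal sketch of the idea, in the style of dictionary encoding (the variable names and sample values are my own, not any engine’s internals): each distinct value is stored once, and the column itself shrinks to a list of tiny integer codes.

```python
# A low-cardinality column: only two distinct values across five rows.
cities = ["Oslo", "Bergen", "Oslo", "Oslo", "Bergen"]

# The small "LUN" of distinct values, stored just once.
distinct = sorted(set(cities))             # ['Bergen', 'Oslo']
code_of = {v: i for i, v in enumerate(distinct)}

# The column body becomes a list of compact codes.
encoded = [code_of[v] for v in cities]     # [1, 0, 1, 1, 0]

# Decoding is a simple lookup, so no information is lost.
decoded = [distinct[c] for c in encoded]
assert decoded == cities
```

The fewer the distinct values, the smaller the codes, and the faster any scan, group-by, or filter over that column becomes.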
Another optimization is the Comparison Index. This compares data between two columns in the same table, provided they are of the same data type and precision. Such a dedicated index super-charges any query that depends on comparing those two columns. That sounds obvious, to be sure, but until the introduction of columnar storage, it wasn’t particularly feasible.
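As a hypothetical sketch of the principle (column names and values invented for illustration): the comparison between two same-typed columns is computed once and stored as one small code per row, so a later query never re-evaluates it.

```python
# Two columns of the same type and precision.
ordered = [10, 5, 8, 3]
shipped = [10, 4, 8, 1]

# The "index": one code per row, -1 / 0 / +1 for less / equal / greater.
cmp_index = [(s > o) - (s < o) for s, o in zip(shipped, ordered)]

# A query like "WHERE shipped < ordered" now just scans the tiny index.
under_shipped = [i for i, c in enumerate(cmp_index) if c == -1]
assert under_shipped == [1, 3]
```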
Besides indexing, columnar storage enables other optimizations. For instance, there is something called “Copy On First Write”, which solves the conundrum of how to maintain a “Quality Assurance” (QA) database without consuming your precious disk space. The idea is this: Rather than create a separate copy of your Primary database, you simply place a timestamp inside your DBMS that says, in effect, “All updates to the Primary after this time will cause a copy of the affected data to be saved to the QA LUNs”. In other words, you’re only using disk space as you need it, and even then, you’re only using the LUNs which you’ve dedicated to this QA database. Now, when you look at the QA database, you are actually looking at the original data layered on top of the Primary. And since most updates only affect a small subset of the table’s columns, only those columnar data are “Copied on First Write.”
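The mechanics can be sketched in a few lines. This is a simplified model under my own assumptions (the function names and data are invented, and a real DBMS works at the level of timestamps and LUNs rather than Python dictionaries): the QA overlay starts empty, and a column’s old version is preserved only on its first write after the snapshot.

```python
# Primary database: two columns of live data.
primary = {"price": [10, 20, 30], "stock": [5, 5, 5]}
qa_overlay = {}   # the QA "copy" starts empty: zero disk cost

def update_primary(column, values):
    # First write to a column after the snapshot: preserve the old
    # version for QA, then apply the change to the primary.
    if column not in qa_overlay:
        qa_overlay[column] = list(primary[column])
    primary[column] = values

def qa_view(column):
    # QA sees the preserved copy where one exists, else the live primary.
    return qa_overlay.get(column, primary[column])

update_primary("price", [12, 22, 32])
assert qa_view("price") == [10, 20, 30]   # frozen as of the snapshot
assert qa_view("stock") == [5, 5, 5]      # untouched column: still shared
```

Because storage is columnar, an update to “price” forces a copy of that column alone; “stock” keeps costing nothing until someone writes to it.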
Another optimization is the maximum usage of multicore CPUs. Reads and writes to different columns of data may be isolated in different threads, which, in turn, may be assigned to different cores. This parallel processing burns up the cycles, but produces answers extremely quickly. In fact, your only limit is in how many cores are made available by the hardware techs, and how many you are allowed by the licensing staff.
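The shape of that parallelism can be sketched like so (an illustrative toy, not an engine’s scheduler; note that CPython threads won’t truly run Python bytecode on separate cores, so treat this as a diagram of the idea): one worker per column, each computing its aggregate independently.

```python
from concurrent.futures import ThreadPoolExecutor

# Three independent columns; their aggregates share no data.
columns = {
    "sale":  [120, 75, 200, 40],
    "tax":   [12, 7, 20, 4],
    "items": [3, 1, 5, 2],
}

with ThreadPoolExecutor() as pool:
    # One task per column; a real engine would pin these to distinct cores.
    futures = {name: pool.submit(sum, data) for name, data in columns.items()}
    totals = {name: f.result() for name, f in futures.items()}

assert totals == {"sale": 435, "tax": 43, "items": 11}
```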
Another optimization is in the storage technology itself. For instance, there is the whole engineering discipline of data striping. That, my friends, is another topic – a big one, covered in most Computer Science curricula. Spend a few hours on the Internet looking into it, and you will see how vast it is. What matters here is that columnar storage pairs naturally with it: each column’s contiguous data can be striped across devices and read in parallel.
Had enough of optimizations? Don’t blame you. On the other hand, if you’re an “Optimo” junkie, then you can’t get enough. We are everywhere. In fact, there is even a professional group dedicated to providing scorecards, the Transaction Processing Performance Council (TPC), whose benchmarks are now routinely used to compare columnar offerings. Wikipedia’s “Column-oriented DBMS” page lists no fewer than 27 different offerings. Sybase IQ is among the oldest of them, currently at version 15.0. There are the trendy newcomers like HBase and Redshift, as well as the specialized products like Vertica and kdb+. Clearly, columnar databases are here to stay, and for all the reasons covered above.
The era of Big Data starts with this concept: divide the data table by column, not record, and you can store more data, more easily, and with more access. Just like our hapless merchant in the days of yore, today’s merchants are finding themselves increasingly dependent on querying their data quickly and efficiently. Columnar databases lay the foundation for such optimizations.
Copyright 2014 U.Q. Magnusson