In context: Databases are in something of a Golden Age right now. There is an immense amount of development taking place in and around the way we store and access data. The world is obsessed with "data," and while we would not call it the "new oil," our ability to manipulate and analyze data continues to advance in important ways. But at their heart, databases are fairly straightforward things - repositories of data.

All this innovation we are seeing centers on new ways to access that data (a.k.a. the "cloud") and the speed with which we can convert massive amounts of data into something useful. Not to diminish the very real innovation taking place here, but like the rest of technology it is driven by trade-offs: speed in one area slows another, and optimizing for reads slows down writes.

Editor's Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.

Many of the advances we are seeing in databases, and around companies like Snowflake and Datadog, come from the application of faster networks and more powerful compute, which make all of this possible. Given our view of the changes taking place around compute, we have recently been exploring areas where custom chips could have an impact. It seems likely that all these advances in cloud data processing lend themselves to some very special-purpose chips.

The purpose of a chip is to run software as efficiently as possible. In the past, all of this could be accomplished with a CPU, especially when Intel was leading the way on Moore's Law. There was always a faster CPU just coming out that could solve any processing problem.

Even before Moore's Law slowed, certain applications stood out as needing a better solution. The prime example was graphics. GPUs could simply run graphical operations more efficiently than a CPU, and so GPUs became commonplace.

Much of this advantage came from the fact that GPUs were laid out differently from CPUs. In the early days of GPUs, the algorithms for handling graphics were fairly standard across most uses (i.e. gaming), and GPUs were originally designed to replicate the math in those algorithms. You could almost look at the architecture of a GPU and map individual blocks to the different terms of those equations. This process is now being reproduced in many other fields.

For databases, there are considerable similarities. Databases are already fairly "streamlined" in their design; they are highly optimized from inception. Someone should be able to design a chip that mirrors the database directly. The problem is that "databases" are not a single thing, and they are not just giant spreadsheets of rows and columns. They come in many different flavors: some store data in rows, others in columns, others as collections of heterogeneous objects (e.g. photos, videos, snarky tweets, etc.). A chip designed for one of those will not work as well for the others.
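To make the row-versus-column point concrete, here is a minimal, purely illustrative sketch. The records, field names, and values are invented for this post rather than taken from any particular database; the point is simply that the two layouts reward very different access patterns, which is why hardware tuned for one does not automatically suit the other.

```python
# Illustrative sketch: the same three records laid out as a row store
# and as a column store. All names and values here are made up.

records = [
    {"id": 1, "name": "alpha", "amount": 10.0},
    {"id": 2, "name": "beta",  "amount": 12.5},
    {"id": 3, "name": "gamma", "amount": 7.25},
]

# Row store: each record sits together. Good for fetching whole rows
# (transactional lookups), less good for scanning a single field.
row_store = records

# Column store: each field sits together. Good for scanning one column
# across millions of rows (analytics), less good for reassembling rows.
column_store = {
    "id":     [r["id"] for r in records],
    "name":   [r["name"] for r in records],
    "amount": [r["amount"] for r in records],
}

# An analytical query like "sum of amount" touches one contiguous array
# in the column store, but hops across every record in the row store.
total_from_rows = sum(r["amount"] for r in row_store)
total_from_columns = sum(column_store["amount"])
assert total_from_rows == total_from_columns
```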

Now, to be clear, companies have been designing chips to optimize data handling for a long time. Storage makers like Western Digital and Fujitsu are prominent entries on our list of homegrown silicon companies. They build chips that optimize how data is stored on their own hardware. But we think things are going to go further, with companies starting to design chips that operate at a layer above the management of physical bits.

A big topic in databases is the trade-off between storing data and analyzing it. Some databases are just large repositories of data that only need to be accessed on occasion, but far more important are the data that need to be analyzed in real time. This ideally involves keeping the data in memory, close to the processor making those real-time decisions. Without getting too deep into the weeds, there are several different approaches one could take to improving database utility in silicon. Each of these is a company waiting to become a unicorn.
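For a sense of what that trade-off looks like in software terms, here is a small, invented sketch (the event shape, file path, and threshold are assumptions made for illustration, not anything from the article): an archival repository that gets re-scanned on occasion, next to a real-time path that keeps its working state in memory, right beside the code making the decision.

```python
# Illustrative sketch only: contrasting occasional archival analysis
# with real-time decisions served from in-memory state.

import json
import os
import tempfile

ARCHIVE = os.path.join(tempfile.gettempdir(), "events.jsonl")  # "cold" storage
running_total = 0.0                                            # "hot" in-memory state

def record_event(event):
    """Persist the event for later analysis and update the in-memory aggregate."""
    global running_total
    with open(ARCHIVE, "a") as f:
        f.write(json.dumps(event) + "\n")
    running_total += event["amount"]

def occasional_report():
    """Full scan of the archive: fine for a nightly report, too slow per request."""
    with open(ARCHIVE) as f:
        return sum(json.loads(line)["amount"] for line in f)

def real_time_decision(limit=100.0):
    """Answered from memory, with no storage round trip."""
    return running_total < limit

record_event({"amount": 42.0})
print(real_time_decision())   # decided from the in-memory total
print(occasional_report())    # recomputed by re-reading the archive
```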

This work is already happening. Companies like Fungible are already far down this path, and many of the homegrown chips the big Internet companies are building attack this problem in some way. We have to imagine that Google has something even more advanced along these lines in the works.

We think this area is important not only because it offers significant commercial opportunity, but also because it highlights the ways in which compute is shifting. All of the database advances we mentioned rest on the assumption of ongoing improvements in compute. With the traditional methods for achieving those improvements now greatly slowed, all that innovation in software is going to spur, and indeed require, innovation in silicon to deliver it.