Four Myths of In-Memory Computing

Let's start at... the beginning. What is in-memory computing? Kirill Sheynkman from RTP Ventures gave the following crisp definition, which I like very much:

"In-Memory Computing is based on a memory-first principle utilizing high-performance, integrated, distributed main memory systems to compute and transact on large-scale data sets in real-time - orders of magnitude faster than traditional disk-based systems."

The most important part of this definition is "memory-first principle". Let me explain...

Memory-First Principle
Memory-first principle (or architecture) refers to a fundamental set of algorithmic optimizations one can take advantage of when data is stored mainly in Random Access Memory (RAM) vs. in block-level devices like HDD or SSD.

RAM has dramatically different characteristics than block-level devices, including disks, SSDs or Flash-on-PCI-E arrays. Not only is RAM ~1,000x faster as a physical medium, it completely eliminates the traditional overhead of block-level devices, including marshaling, paging, buffering, memory-mapping, possible networking, OS I/O, and the I/O controller.

Let's look at an example: say you need to read a single record in your program.

In an in-memory context, your code is compiled to interact with the memory controller and reads the record directly from local RAM in the exact format you need (i.e., your object representation in a particular programming language) - in most cases that boils down to simple pointer arithmetic. If you use a proper vectorized execution technique, you'll often read it straight from the L2 cache of your CPU. All in all, we are talking about nanoseconds, and this performance is guaranteed in all cases.

If you read the same record from a block-level device, you are in for a very different ride... Your code has to deal with OS I/O, buffered reads, the I/O controller, the seek time of the device, and de-marshaling the byte stream it returns back into the object representation you actually need. In the worst-case scenario, we're talking a dozen milliseconds. Note that SSDs and Flash-on-PCI-E only improve the portion of the overhead related to the device's seek time (and only marginally).
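To make the contrast concrete, here is a minimal Java sketch using nothing beyond the standard library: the same record is read once from an in-memory map and once from a serialized file on disk. The StockRecord class and the temp file are illustrative assumptions, not part of any product, and the timings are only meant to show where the overhead comes from, not to be a rigorous benchmark.

import java.io.*;
import java.util.HashMap;
import java.util.Map;

// A minimal sketch (not benchmark-grade) contrasting the two read paths
// described above: a record kept in RAM as a ready-to-use object vs. the
// same record serialized on a block-level device.
public class ReadPathDemo {
    static class StockRecord implements Serializable {
        final String symbol;
        final double price;
        StockRecord(String symbol, double price) { this.symbol = symbol; this.price = price; }
    }

    public static void main(String[] args) throws Exception {
        StockRecord record = new StockRecord("ACME", 42.0);

        // In-memory path: the object already lives in its native representation;
        // a lookup is essentially a hash probe plus pointer arithmetic.
        Map<String, StockRecord> ram = new HashMap<>();
        ram.put("ACME", record);

        long t0 = System.nanoTime();
        StockRecord fromRam = ram.get("ACME");
        long ramNanos = System.nanoTime() - t0;

        // Disk path: OS I/O, buffering and de-marshaling of a byte stream
        // stand between the program and the data.
        File file = File.createTempFile("record", ".bin");
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(record);
        }

        long t1 = System.nanoTime();
        StockRecord fromDisk;
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            fromDisk = (StockRecord) in.readObject();
        }
        long diskNanos = System.nanoTime() - t1;

        System.out.printf("RAM read:  %,d ns (%s)%n", ramNanos, fromRam.symbol);
        System.out.printf("Disk read: %,d ns (%s)%n", diskNanos, fromDisk.symbol);
        file.delete();
    }
}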

Taking advantage of these differences and optimizing your software accordingly is what the memory-first principle is all about.

Now, let's get to the myths.

Myth #1: It's Too Expensive
This is one of the most enduring myths of in-memory computing. Today it's simply not true. Five or ten years ago, however, it was indeed true. Look at the historical chart of USD/MB storage pricing to see why:

The interesting trend is that the price of RAM is dropping about 30% every 12 months and is solidly on the same trajectory as the price of HDDs, which for all practical purposes is almost zero (enterprises today care more about heat, energy and space than about the raw price of the device).

The price of a 1TB RAM cluster today is anywhere between $20K and $40K - and that includes all the CPUs, over a petabyte of disk-based storage, networking, etc. Cisco UCS, for example, offers very competitive white-label blades in the $30K range for a 1TB RAM setup: http://buildprice.cisco.com/catalog/ucs/blade-server Smart shoppers on eBay can easily beat even the $20K price barrier (as we did at GridGain for our own recent testing/CI cluster).

A few years from now the same 1TB RAM cluster setup will be available for $10K-15K - which makes it all but a commodity at that price level.

And don't forget about Memory Channel Storage (MCS), which aims to revolutionize storage by providing a Flash-in-DIMM form factor - I blogged about it a few weeks ago.

Myth #2: It's Not Durable
This myth is based on a deep-rooted misunderstanding about in-memory computing. Blame us, as well as other in-memory computing vendors, as we evidently did a pretty poor job explaining this subject.

The fact of the matter is that almost all in-memory computing middleware (apart from very simplistic offerings) provides one or more strategies for in-memory backups, durable storage backups, disk-based swap space overflow, etc.

More sophisticated vendors provide a comprehensive tiered storage approach where users can decide what portion of the overall data set is stored in RAM, local disk swap space or RDBMS/HDFS - where each tier can store progressively more data but with progressively longer latencies.
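As a rough illustration of such a tiered read path, here is a minimal Java sketch. The tier interfaces (DiskSwap, BackingStore) and the read-through logic are hypothetical placeholders, not the API of GridGain or any other vendor; they just show how each slower tier backs the faster one above it.

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// A minimal sketch of a tiered store: RAM first, local disk swap second,
// RDBMS/HDFS last. The tier interfaces are hypothetical placeholders.
public class TieredStore<K, V> {

    /** Hypothetical slower tiers; each returns the value or empty. */
    public interface DiskSwap<K, V>     { Optional<V> read(K key); }
    public interface BackingStore<K, V> { Optional<V> read(K key); }

    private final Map<K, V> ram = new ConcurrentHashMap<>();   // tier 1: RAM, nanoseconds
    private final DiskSwap<K, V> swap;                         // tier 2: local disk swap, milliseconds
    private final BackingStore<K, V> backing;                  // tier 3: RDBMS/HDFS, slowest but largest

    public TieredStore(DiskSwap<K, V> swap, BackingStore<K, V> backing) {
        this.swap = swap;
        this.backing = backing;
    }

    /** Read-through: check the fastest tier first, fall back, and promote hits to RAM. */
    public Optional<V> get(K key) {
        V hot = ram.get(key);
        if (hot != null) return Optional.of(hot);

        Optional<V> warm = swap.read(key);
        if (warm.isEmpty()) warm = backing.read(key);

        warm.ifPresent(v -> ram.put(key, v));  // promote so the next read is memory-speed
        return warm;
    }

    public void put(K key, V value) {
        ram.put(key, value);  // a real system would also write through to the durable tiers
    }
}

The design choice worth noting is the promotion of a hit back into RAM: that is what keeps the hot, operational portion of the dataset at memory speed while colder data sits in the cheaper tiers.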

Yet another source of confusion is the difference between operational datasets and historical datasets. In-memory computing is not aimed at replacing enterprise data warehouses (EDW), backup or offline storage services - like Hadoop, for example. In-memory computing is aimed at improving operational datasets that require mixed OLTP and OLAP processing and in most cases are less than 10TB in size. In other words, in-memory computing doesn't suffer from an all-or-nothing syndrome and never requires you to keep all data in memory.

If you consider the totality of the data stored by any one enterprise, the disk still has a clear place as a medium for offline, backup or traditional EDW use cases - and thus the durability is where it has always been.

Myth #3: Flash Is Fast Enough
The variations of this myth include the following:

  • Our business doesn't need this super-fast processing (likely shortsighted)
  • We can mount a RAM disk and effectively get in-memory processing (wrong)
  • We can replace HDDs with SSDs to get the performance (depends)

Mounting a RAM disk is a very poor way of utilizing memory from every technical angle (see above).

As far as SSDs go - for some use cases the marginal performance gain you can extract from flash storage over spinning disk could be enough. In fact, if you are absolutely certain that this marginal improvement is all you will ever need for a particular application, flash storage is the best bet today.

However, for a rapidly growing number of use cases, speed matters. And it matters more, and for more businesses, every day. In-memory computing is not about a marginal 2-3x improvement - it is about giving you 10-100x improvements, enabling new businesses and services that simply weren't feasible before.

There's one story that I've been telling for quite some time now, and it is a very telling example of how in-memory computing relates to speed...

Around six years ago GridGain had a financial customer with a small application (~1,500 LOC in Java) that took 30 seconds to prepare a chart and a table with some historical statistical results for a given basket of stocks (all stored in an Oracle RDBMS). They wanted to put it online on their website. Naturally, users wouldn't wait half a minute after pressing the button - so the task was to bring it down to around 5-6 seconds. Now - how do you make something five times faster?

We initially looked at every possible angle: faster disks (even SSDs, which were very expensive then), RAID systems, faster CPUs, rewriting everything in C/C++, running on a different OS, Oracle RAC - or any combination thereof. But nothing would make the application run 5x faster - not even close... Only when we brought the dataset into memory and parallelized the processing over five machines using in-memory MapReduce were we able to get results in less than 4 seconds!
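For the curious, the shape of that solution looks roughly like the Java sketch below: the historical quotes already sit in memory, partitioned by symbol, and the per-symbol statistics are computed in parallel and then merged. The names (quotesBySymbol, the summary statistic itself) are illustrative assumptions, and plain Java parallel streams stand in here for the in-memory MapReduce API we actually used.

import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A rough sketch of the map/reduce pattern from the story: quotes live in
// memory, partitioned by stock symbol; the "map" step computes per-symbol
// statistics in parallel and the "reduce" step merges them into one result.
public class BasketStats {

    public static Map<String, DoubleSummaryStatistics> compute(Map<String, List<Double>> quotesBySymbol) {
        return quotesBySymbol.entrySet()
                .parallelStream()                                  // fan out across cores/partitions
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        e -> e.getValue().stream()
                              .mapToDouble(Double::doubleValue)
                              .summaryStatistics()));              // min/max/avg per symbol
    }

    public static void main(String[] args) {
        Map<String, List<Double>> basket = Map.of(
                "ACME", List.of(10.0, 11.5, 9.8),
                "GLOBEX", List.of(42.0, 40.1, 43.7));
        compute(basket).forEach((symbol, stats) ->
                System.out.printf("%s: avg=%.2f min=%.2f max=%.2f%n",
                        symbol, stats.getAverage(), stats.getMin(), stats.getMax()));
    }
}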

The moral of the story is that you don't have to have a NASA-size problem to utilize in-memory computing. In fact, every day thousands of businesses are solving performance problems that look trivial at first but in the end can only be solved with in-memory computing speed.

Speed also matters in the raw sense. Look at this diagram from Stanford on the relative performance of disk, flash and RAM:

As DRAM closes its pricing gap with flash, such a dramatic difference in raw performance will become more and more pronounced and tangible for businesses of all sizes.

Myth #4: It's About In-Memory Databases
This is one of those misconceptions you hear mostly from analysts. Most analysts look at SAP HANA, Oracle Exalytics or something like QlikView - and they conclude that this is all in-memory computing is about, i.e., a database or in-memory caching for faster analytics.

There's a logic behind it, of course, but I think this is a rather shortsighted view.

First of all, in-memory computing is not a product - it is a technology. The technology is used to build products. In fact, nobody sells just "in-memory computing" but rather products that are built with in-memory computing.

I also think that in-memory databases are an important use case... for today. They solve a specific problem that everyone readily understands, i.e., a faster system of record. It is the low-hanging fruit of in-memory computing, and it has helped popularize the technology.

I do, however, think that the long term growth for in-memory computing will come from streaming use cases. Let me explain.

Stream processing is typically characterized by a massive rate at which events come into a system. A number of potential customers we've talked to have indicated that they need to process a sustained stream of up to 100,000 events per second without a single event loss. For a typical 30-second sliding processing window, we are dealing with 3,000,000 events, shifting by 100,000 every second, which have to be individually indexed, continuously processed in real-time and eventually stored.
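Just to put those numbers together: 100,000 events/sec times a 30-second window is ~3,000,000 events resident at any given moment. A minimal Java sketch of such an in-memory sliding window is below; the Event type and the simple eviction policy are illustrative assumptions, and a real stream processor would also index and checkpoint the window.

import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch of a time-based sliding window kept entirely in memory:
// events are appended at the tail and expired from the head once they fall
// outside the 30-second window, so the hot path never touches disk I/O.
public class SlidingWindow {
    static final long WINDOW_MILLIS = 30_000;  // 30-second processing window

    record Event(long timestampMillis, String payload) {}

    private final Deque<Event> window = new ArrayDeque<>(3_000_000);

    /** Add a new event and drop everything older than the window. */
    public synchronized void onEvent(Event e) {
        window.addLast(e);
        long cutoff = e.timestampMillis() - WINDOW_MILLIS;
        while (!window.isEmpty() && window.peekFirst().timestampMillis() < cutoff) {
            window.pollFirst();  // O(1) eviction from the head of the deque
        }
    }

    public synchronized int size() {
        return window.size();
    }
}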

This downpour will choke any disk I/O (spinning or flash). The only feasible way to sustain this load and the corresponding business processing is to use in-memory computing technology. There's simply no other storage technology today that supports that level of requirements.

So we strongly believe that in-memory computing will reign supreme in stream processing.

More Stories By Nikita Ivanov

Nikita Ivanov is founder and CEO of GridGain Systems, started in 2007 and funded by RTP Ventures and Almaz Capital. Nikita has led GridGain to develop advanced and distributed in-memory data processing technologies - the top Java in-memory computing platform, which today starts every 10 seconds around the world.

Nikita has over 20 years of experience in software application development, building HPC and middleware platforms, and contributing to the efforts of other startups and notable companies including Adaptec, Visa and BEA Systems. Nikita was one of the pioneers in using Java technology for server-side middleware development while working for one of Europe's largest system integrators in 1996.

He is an active member of the Java middleware community, a contributor to the Java specification, and holds a Master's degree in Electro Mechanics from Baltic State Technical University, Saint Petersburg, Russia.
