Presentation: "A Bit of Algebra: Massive Amounts of In-memory Key/Value Storage + In-Memory Search + Java == NoSQL Killer?"

Time: Friday 15:35 - 16:35

Location: Rutherford Room, Fourth Floor

Abstract:

Have you ever wanted to put tens or hundreds of gigabytes in heap for high-speed in-memory operations, but couldn't because the garbage collector ate your app? It's a problem many have faced and that Terracotta's BigMemory for Enterprise Ehcache solves.

Unless exquisitely tuned, garbage collectors cause unpredictable and unacceptable application pauses that destroy application performance, frustrate end users, and jeopardize critical business transactions. Server-class machines now ship with 16GB or more of RAM that Java applications cannot effectively use without being exhaustively tuned or partitioned into multiple JVMs with small heaps. Many architects hope that advanced garbage collectors such as G1 and the concurrent mark-sweep (CMS) collector can keep applications running optimally. The ugly truth, however, is that no garbage collector is immune to unpredictable pauses that kill performance and blow SLAs.
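For context, the "exhaustive tuning" mentioned above usually means hand-picking collector flags. A representative set of HotSpot options for a large heap might look like the following; the specific values are illustrative, not a recommendation:

```
# Pin a large heap and select the CMS collector (HotSpot, circa Java 6)
-Xms16g -Xmx16g
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=75

# Or opt into the (then-experimental) G1 collector instead
-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
```

Even with flags like these, full-heap pauses can still occur, which is the talk's central point.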

Ehcache, the de facto caching standard for enterprise Java, can help. Its simple cache interface and snap-in storage options make it far better suited than the garbage collector to manage cache objects and decide what to keep and what to evict. And with the new search feature, the cache moves beyond a simple key/value store.
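As a concrete illustration, enabling the off-heap (BigMemory) store in Enterprise Ehcache 2.x is a configuration change rather than a code change. This is a sketch; the attribute names follow the Ehcache 2.x documentation as best recalled, so verify them against your version:

```xml
<!-- Hypothetical ehcache.xml fragment: keep a small on-heap tier and
     overflow up to 4 GB of cache data into off-heap (direct) memory -->
<cache name="bigCache"
       maxElementsInMemory="10000"
       overflowToOffHeap="true"
       maxMemoryOffHeap="4g"/>
```

The JVM must also be started with a matching -XX:MaxDirectMemorySize setting so the direct-memory allocation can succeed.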

In this talk, we will cover the details of garbage collection in large-scale JVMs. We will build a shared understanding of the challenges a generic collector faces and of how Ehcache hides the contents of the cache from the collector's view. We will demo a huge JVM holding its cache as heap objects versus letting Ehcache manage that same memory while still keeping the data inside the JVM. We will also discuss the implications of this new Java memory management capability for scaling:

• Should apps be scaled up or continue to scale out?
• What can we do with all this data once we get it in memory?
• Can we do more than just key/value storage and retrieval?
• Does that mean Terracotta is now offering NoSQL solutions?
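The core trick the talk describes, keeping data inside the JVM process but outside the collector's view, can be illustrated with a toy store built on direct ByteBuffers. This is a minimal sketch of the general technique, not Terracotta's implementation; the class and method names are invented for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy illustration: values live in a direct (off-heap) ByteBuffer, so the
// garbage collector sees only a small on-heap index, not the data itself.
public class OffHeapStore {
    private final ByteBuffer store;                            // off-heap memory
    private final Map<String, int[]> index = new HashMap<>();  // key -> {offset, length}

    public OffHeapStore(int capacityBytes) {
        this.store = ByteBuffer.allocateDirect(capacityBytes);
    }

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = store.position();
        store.put(bytes);                                      // copy the data off-heap
        index.put(key, new int[] { offset, bytes.length });
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] bytes = new byte[loc[1]];
        ByteBuffer view = store.duplicate();                   // independent cursor, same memory
        view.position(loc[0]);
        view.get(bytes);                                       // copy back on-heap only on access
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

Because the values live in memory allocated outside the heap, the collector traverses only the small index map, so cache size no longer drives pause times. The toy omits eviction, resizing, and concurrency, all of which a real off-heap store must handle.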

This is an advanced talk, but not limited to huge clustered applications. You should attend this talk if you want your application to run faster and be easier to develop and maintain.


Kunal Bhasin, Deputy CTO at Terracotta


Kunal Bhasin is the Deputy CTO at Terracotta, where he oversees the ongoing evolution of the Terracotta Platform, mentors the Solution Architects in the field, and helps with other initiatives of the CTO's office. For close to five years at Terracotta, Mr. Bhasin has been instrumental in some of Terracotta's biggest deployments in terms of scale, performance, and availability, while playing a key role in the evolution of the Terracotta technology and product suite. He currently specializes in architecting and building highly scalable, highly available, fault-tolerant Java applications that deliver peak performance, ease of operations, and lower cost.

Mr. Bhasin has many years of experience with a wide array of systems, ranging from hard real-time systems at NASA-Ames to web and e-commerce applications. He came to Terracotta from Infospace, where he architected and built components of server-side infrastructure, including search, mobile, and social gaming. Prior to Infospace, Mr. Bhasin held various roles that included building a robotics test bed for NASA-Ames and conducting computer science research as a Research Scholar at Carnegie Mellon University.

Mr. Bhasin holds an M.S. in Software Engineering from Carnegie Mellon University and a bachelor's degree in Computer Science from Pune University, India. He grew up in India and currently lives in Sausalito, California.