Scale Infrastructure with Data Growth: Instant Access Delivers Accurate Financial Modeling

Across the financial services industry, the ability to access and analyze massive data sets as quickly as possible can determine a company’s competitive advantage. Even a few milliseconds of latency can mean unsecured transactions or unacceptable trading results. Yet managing these ever-growing volumes of data can overwhelm existing technology infrastructure. The solution is rearchitecting your memory and storage hierarchy with Intel® Optane™ technology.

Highlights

  • Scalable data architecture helps quantitative analysts find insights faster without compromising speed (processing power) or accuracy (data set size).

  • Intel® Optane™ PMem addresses these speed and performance challenges with a new compute architecture and a tiered approach to memory.

  • Paired with solutions such as MemVerge Memory Machine* or VAST Universal Storage*, Intel® Optane™ PMem has helped customers in the financial services industry reduce their time to insight, scale memory out to petabytes safely, and implement affordable flash storage infrastructure.

The Need for More Data, in Real-Time

Increasingly, financial services firms are turning to artificial intelligence (AI) to wrangle floods of data. Today, more than 90% of the top 50 banks around the world are using advanced analytics to improve the speed and quality of customer and business insights.1

The ability to combine historical market and customer data with real-time applications and analytics greatly improves the quality of insights. This leads to improved business decisions as well as employee and customer experiences.

Financial firms face a significant hurdle: the exponentially increasing amount of data required to improve the quality of business decisions. Real-time financial services require extreme processing performance to access, read, and analyze petabytes of current and historic data while computing with high availability and sub-millisecond processing time. A single financial transaction can initiate 2,000 concurrent requests for data access, and each payment transaction may require hundreds of database reads/writes, all while the institution’s compute infrastructure is processing millions of similar transactions per second.2
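To see why per-lookup latency matters so much at this fan-out, consider a back-of-the-envelope sketch. The request count comes from the figure above; the parallelism and per-lookup latencies are illustrative assumptions, not measured values.

```python
# Illustrative latency budget for one transaction that fans out into many
# concurrent data lookups. All figures are assumptions for demonstration.

CONCURRENT_REQUESTS = 2_000        # data-access requests per transaction (from the text)
PARALLELISM = 200                  # lookups the backend can serve at once (assumed)

def transaction_time_ms(per_lookup_ms: float) -> float:
    """Approximate wall-clock time when lookups run in waves of PARALLELISM."""
    waves = -(-CONCURRENT_REQUESTS // PARALLELISM)   # ceiling division
    return waves * per_lookup_ms

for latency in (0.001, 0.1, 2.0):  # DRAM-like, PMem/SSD-like, disk- or network-like (ms)
    print(f"{latency:>6.3f} ms per lookup -> {transaction_time_ms(latency):8.2f} ms per transaction")
```

Even with this simplistic model, a few milliseconds per lookup pushes a single transaction well past a sub-millisecond budget once the fan-out is accounted for.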

Quantitative analysts need the ability to make fast and informed decisions by analyzing historical customer and market data as well as real-time customer and market data. In the world of financial services, customer service is being increasingly defined by the ability to personalize customer experiences. In a recent report, Forrester highlights “the importance of retaining customers via engaging and empathetic customer service.”3 And Forbes adds that “today, 89% of companies compete primarily on the basis of customer experience.”4

For financial services firms, data needs to be instantaneously disseminated across all trading and analytics platforms. This allows AI and machine learning (ML) to extract insights while keeping the infrastructure secure.

For financial and payment applications, speed is critical. A latency delay of even a few milliseconds can result in fraudulent or unsecured transactions and unacceptable risk exposures. That’s why digital payment providers increasingly rely on access to more data—all delivered and analyzed in real time—to provide secure digital payment and related services.

Delivering and maintaining this level of transaction performance requires solutions that are, above all, scalable. Scalability is essential to future-proof your infrastructure for these changing workloads. However, legacy infrastructure and aging memory technologies keep transactional data and analytical data in separate storage silos, hindering the ability to access transactional and historical data together for instantaneous analysis.

Scalability means not having to compromise speed (processing power) or accuracy (data set size). This enables faster time to insight: for the quantitative analysts making investment decisions, for the retail banking product managers creating custom products and improving the banking customer experience, and for the security and regulatory professionals protecting the financial institution and its customers from fraud and risk.

Scalability means not having to compromise speed (processing power) or accuracy (data set size) while enabling faster time to insights.

The Limitation of Legacy Technologies

Traditionally, hot data lives in DRAM, while older data lives in storage. But today, quick access to all data is a must: DRAM is too expensive to scale to keep pace, and reading and writing to storage introduces latency.

As data increases exponentially, traditional DRAM cannot scale to meet demand. DRAM density growth rates slow over time because scaling the technology becomes costly and complex. Yet the critical, data-intensive workloads that run on these servers need to marshal ever more hot data in memory.

In-memory databases—which allow enormous data sets to be stored in active DRAM—are a recent advance in data processing architecture and can greatly enhance data processing speed, performance, and flexibility for financial services applications. However, the large amount of DRAM capacity required to run large-scale in-memory databases is cost-prohibitive. DRAM is also volatile: if a processing node or application shuts down for any reason, the data on that node is lost. Rebuilding a data set after an unanticipated hard shutdown may take hours, if not days, a luxury that time-sensitive financial services applications cannot afford. What financial institutions need is a new scalable data architecture that offers affordable large capacity, near-DRAM processing speed, and non-volatile memory.

These realities have converged into a need for a new memory tier to provide the large capacity and scale of storage with the speed and latency characteristics of DRAM, coupled with hardware encryption. The solution is Intel® Optane™ technology.

The first new memory and storage technology in 25 years comes in two formats: Intel® Optane™ Persistent Memory (Intel® Optane™ PMem) and Intel® Optane™ Solid State Drives (Intel® Optane™ SSDs). Together they add two layers to the memory and storage hierarchy that close the capacity, cost, and performance gaps between DRAM (nanoseconds) and NAND SSDs (microseconds). Viewed as a pyramid, Intel Optane Persistent Memory sits just below DRAM in the memory layer, providing memory-like performance at a lower cost with NAND-like persistence. Meanwhile, Intel Optane SSDs sit above cold storage, delivering faster data writes and information retrieval (with simultaneous reads and writes) while easing bottlenecks even under heavy loads.
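To make the pyramid concrete, the minimal sketch below models the tiers as data; the latency and relative-cost figures are rough illustrative assumptions, not Intel specifications.

```python
# A toy model of the memory/storage pyramid described above. The latency
# and relative-cost figures are rough, illustrative assumptions, not
# Intel specifications.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_us: float     # typical access latency in microseconds (assumed)
    persistent: bool
    cost_per_gb: float    # relative to NAND SSD = 1.0 (assumed)

PYRAMID = [
    Tier("DRAM",                   0.1, persistent=False, cost_per_gb=8.0),
    Tier("Intel Optane PMem",      0.3, persistent=True,  cost_per_gb=4.0),
    Tier("Intel Optane SSD",      10.0, persistent=True,  cost_per_gb=2.0),
    Tier("NAND SSD / cold tier", 100.0, persistent=True,  cost_per_gb=1.0),
]

# PMem widens the hot layer beyond what DRAM alone can affordably hold,
# while Optane SSDs give warm data near-memory responsiveness.
for tier in PYRAMID:
    print(f"{tier.name:<22} ~{tier.latency_us:>6.1f} µs  "
          f"persistent={tier.persistent!s:<5}  ~{tier.cost_per_gb}x NAND cost/GB")
```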

Intel Optane PMem is a revolutionary technology that addresses speed and performance challenges by offering a new compute architecture with the capacity to unlock the potential of vast stores of data. Intel Optane PMem delivers a tiered memory approach that provides a unique combination of affordable high memory capacity and support for data persistence. Not only does this help eliminate data center performance bottlenecks—it also accelerates applications and increases scale per server, enabling banks and other financial services firms to gain significant boosts in performance as well as lowering infrastructure TCO.

Intel Optane SSDs enable a new storage tier between Intel Optane PMem and traditional flash storage or NAND SSDs that offers fast caching or fast storage of hot and warm data. In contrast to traditional NAND-based SSDs, Intel Optane SSDs provide high random read/write performance, low latency, higher drive writes per day, higher endurance, and consistent responsiveness, even under heavy loads. And, unlike NAND, Intel Optane SSDs can read and write simultaneously without performance degradation.

Let’s look at how Intel Optane PMem and Intel Optane SSDs pair with new innovative software technologies to create even more powerful solutions for financial services institutions.

Intel® Optane™ Persistent Memory and MemVerge

To help financial firms maintain a market advantage, Intel has partnered with MemVerge Inc. to create Memory Machine*, the industry’s first “Big Memory” software environment. Memory Machine is at the cutting edge of software-defined memory, with the ability to virtualize DRAM and Intel® Optane™ Persistent Memory to eliminate storage I/O bottlenecks cost-effectively. MemVerge virtualizes DRAM and Intel Optane PMem into large persistent-memory lakes, allowing instant scaling by making 100% use of available memory capacity while providing new operational capabilities to memory-centric workloads.

MemVerge Memory Machine abstracts the memory pool to enable all applications to take advantage of persistent memory without code modifications. The result is that Intel Optane PMem operates at DRAM-like performance using DRAM and persistent memory in a two-tier memory hierarchy. In a virtual machine (VM) environment, Memory Machine can help scale both the number and size of VMs on a single server. Memory Machine also offers flexibility in allocating DRAM and persistent memory to individual VMs.
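As a rough illustration of the two-tier idea only (this is not MemVerge’s implementation, which intercepts memory allocation transparently so applications need no code changes), the sketch below keeps hot objects in an ordinary in-DRAM dictionary and spills colder objects to a file on a hypothetical DAX-mounted persistent-memory path.

```python
# Conceptual sketch of a two-tier memory hierarchy: hot objects stay in
# DRAM, colder objects spill to a file standing in for App Direct
# persistent memory. Illustrative only; not how Memory Machine works.

import os
import pickle

PMEM_PATH = "/mnt/pmem0/cold_tier.bin"   # hypothetical DAX-mounted PMem path

class TieredStore:
    """Hot keys live in a DRAM dict; evicted keys are pickled to the PMem-backed file."""

    def __init__(self, dram_budget: int = 1000):
        self.dram_budget = dram_budget           # max objects kept in the hot tier
        self.hot = {}                            # DRAM tier
        self.cold_index = {}                     # key -> (offset, length) in the PMem file
        self.cold_file = open(PMEM_PATH, "a+b")

    def put(self, key, value):
        self.hot[key] = value
        if len(self.hot) > self.dram_budget:     # evict the oldest hot key
            old_key, old_val = next(iter(self.hot.items()))
            del self.hot[old_key]
            blob = pickle.dumps(old_val)
            self.cold_file.seek(0, os.SEEK_END)
            offset = self.cold_file.tell()
            self.cold_file.write(blob)
            self.cold_file.flush()
            self.cold_index[old_key] = (offset, len(blob))

    def get(self, key):
        if key in self.hot:                      # DRAM hit: fastest path
            return self.hot[key]
        offset, length = self.cold_index[key]    # otherwise read back from the cold tier
        self.cold_file.seek(offset)
        return pickle.loads(self.cold_file.read(length))
```

The value of software like Memory Machine is that this kind of tier management happens below the application, so existing code sees one large, fast memory pool.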

This matters because, as TechTarget notes, “most applications are not designed to run efficiently in volatile DRAM. The application code first needs to be rewritten for memory, which does not natively include data services or enable data sharing by multiple servers. The combination of Intel Optane [Persistent Memory] with MemVerge Memory Machine vastly increases the byte-addressable storage capacity of main memory, said Eric Burgener, a vice president of storage at IT analysis firm IDC.”5

Beyond enabling immediate access to huge data sets, this also makes possible an important new big memory capability called MemVerge ZeroIO*. A Memory Machine data service that takes in-memory snapshots, MemVerge ZeroIO can quickly recover terabytes of data from persistent memory for system restart—orders of magnitude faster than from storage.
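The idea behind in-memory snapshots can be sketched as follows; the path is a hypothetical DAX-mounted persistent-memory mount point, and this is not the ZeroIO API, just the restart-by-remapping concept.

```python
# Minimal sketch of persistent-memory snapshot and restore. A dataset is
# persisted once to a file on (hypothetically) PMem; after a crash, restart
# simply re-maps the file instead of rebuilding the data from slower storage.

import mmap
import os

SNAPSHOT_PATH = "/mnt/pmem0/snapshot.bin"   # hypothetical DAX-mounted path

def take_snapshot(data: bytes) -> None:
    """Persist the in-memory state. On real PMem this is close to a memory copy plus flush."""
    with open(SNAPSHOT_PATH, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

def recover() -> mmap.mmap:
    """Re-map the snapshot after a restart; data is usable immediately, with no rebuild."""
    f = open(SNAPSHOT_PATH, "rb")
    return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

take_snapshot(b"in-memory working set ...")
view = recover()
print(view[:22])        # bytes are addressable as soon as the mapping exists
```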

Now trading and financial market data analytics are not only easier to deploy; you can also train AI and ML models and run inference faster, work with larger data sets in memory, complete more queries in less time, and consistently replicate memory between servers.

The results are staggering. A recent briefing from MemVerge shows recovery times for a Redis database accelerated by up to 33X. MemVerge summarized the results, saying, “With Intel Optane Persistent Memory and Memory Machine software, memory can safely scale out to petabytes because it is now possible to recover from crashes in seconds.”6

MemVerge also reported that, for one financial services industry customer, recovering 500 GB of data previously took three hours. Intel Optane Persistent Memory paired with Memory Machine software reduced the process to only 2 seconds (a 5400x performance increase).6

Recovery from crash improved from 3 hours to 2 seconds7 when Intel® Optane™ Persistent Memory is paired with MemVerge Memory Machine* software.

Intel® Optane™ SSDs and VAST Data

VAST is breaking the decades-old tradeoff between storage performance and capacity to enable at-scale processing of market data and training on AI data sets. With VAST Universal Storage*, your market data archive can be stored on one tier of scalable, affordable flash—making it possible to backtest and train trading models in real time.

Banks and hedge funds have struggled to keep pace with the needs of their quant researchers because of the inefficiencies of mechanical media and classic tiered-storage data management. Data is growing at an unprecedented rate, and trading desks need to be able to access vast amounts of data—even historical data—very quickly.

Over 30 years ago, Gartner introduced the storage tiering model to optimize data center costs, advising customers to demote older, less-valuable data to lower-cost (and slower) tiers of storage. Fast forward 30 years, and the sprawl of storage technologies within organizations has grown to unmanageable proportions—many of the world’s largest companies manage dozens of different types of storage devices.

These companies need the ability to backtest across three months, six months, or even years of data. They require that data to recognize patterns, perform deep trend analysis, and power the entire AI data lifecycle to build more accurate models. Even small increases in financial model accuracy can make an enormous difference.

Unfortunately, many are still stuck in a tiered data system, with only 15 to 30 percent of data on high-performance flash. For cost reasons, the rest is stored on low-cost, slow tiers. With the tiered approach, organizations must constantly devalue significant data sets by relegating them to tiers that can be up to a hundred times slower than the network. And while customers may see some savings in the short term, having multiple tiers of storage creates more issues: storage users must create workflows to move and/or copy data back and forth between primary and secondary storage, they can spend extraordinary amounts of time balancing applications across a myriad of storage silos, and application wait time can be significant as data travels between archival and production storage.

With the advantage of new, enabling technologies that weren’t available before 2018, VAST’s Universal Storage concept can achieve a previously impossible architecture design point. The system combines low-cost QLC flash drives and Intel Optane SSDs with stateless, containerized storage services. This is all connected over new low-latency NVMe over Fabric networks to create a disaggregated shared everything (DASE) scale-out architecture.

VAST’s design includes a mix of Intel Optane SSDs and QLC-based SSDs, and the way Intel Optane technology is employed resolves the write performance and endurance challenges of QLC.

As a recent IDC Report notes, “In this cacheless architecture, all writes are reliably written to the Intel Optane SSD, providing extremely low write latencies. Because of the relatively large size of the Intel Optane SSD layer (it is not a tier because the Intel Optane SSD and QLC are managed together as a single tier by the VAST storage OS), writes can be retained for a very long time relative to legacy cache-based architectures before they need to be written out to the lower-cost QLC media. Reads and writes occur from the Intel Optane SSD layer while data access patterns are noticed by the storage OS and, using that data, writes are coalesced into sequential streams of “like data” for eventual destaging to QLC in large block sizes that reduce the need for device- or system-level garbage collection and increase the endurance of the QLC media. This approach enables the use of QLC media in write-intensive enterprise environments, and VAST Data provides a 10-year flash media endurance guarantee to underline the viability of this design choice.”8
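A minimal sketch of the buffer-and-destage write path the quote describes might look like the following. The stripe size and write sizes are assumptions, and this illustrates only the pattern, not VAST Data’s storage OS.

```python
# Conceptual sketch of the write path described above: small random writes
# land in a fast, persistent buffer (standing in for the Intel Optane SSD
# layer) and are later coalesced into large sequential stripes before being
# destaged to QLC flash. Illustrative only; not VAST Data's implementation.

STRIPE_SIZE = 1 * 1024 * 1024          # destage in 1 MiB sequential stripes (assumed)

class WriteBuffer:
    def __init__(self):
        self.buffer = []                # writes acknowledged from the fast layer
        self.buffered_bytes = 0
        self.qlc_stripes = []           # what eventually lands on QLC

    def write(self, payload: bytes) -> None:
        """Acknowledge the write as soon as it is durable in the fast layer."""
        self.buffer.append(payload)
        self.buffered_bytes += len(payload)
        if self.buffered_bytes >= STRIPE_SIZE:
            self._destage()

    def _destage(self) -> None:
        """Coalesce buffered writes into one large sequential stripe for QLC."""
        stripe = b"".join(self.buffer)
        self.qlc_stripes.append(stripe)
        self.buffer.clear()
        self.buffered_bytes = 0

buf = WriteBuffer()
for _ in range(5000):
    buf.write(b"x" * 512)               # many small random writes
print(f"{len(buf.qlc_stripes)} large stripes destaged, "
      f"{buf.buffered_bytes} bytes still held in the fast layer")
```

Because QLC only ever sees a few large sequential writes instead of thousands of small random ones, garbage collection shrinks and media endurance improves, which is the effect the IDC report highlights.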

Together, Intel Optane technologies and VAST Data are disrupting the financial services industry. The innovative use of Intel Optane SSDs in VAST Data’s Universal Storage helps drive down the cost of flash infrastructure, bringing an end to the complexity of storage tiering. Now, financial organizations can take advantage of abundant market data on a single tier of affordable flash storage, helping them find new correlations and improve the accuracy of trading models.

Intel® Optane™ technology enables greater data scalability overall, allowing for revolutionary new business models and customer experiences. This new memory and storage portfolio can help financial services institutions effectively and efficiently extract value from massive data sets now and into the future. On the memory side, firms can take advantage of massive memory scalability with MemVerge and Intel® Optane™ Persistent Memory. On the storage side, they can leverage the unique capabilities of Intel® Optane™ SSDs to scale storage, all while balancing cost and performance considerations. Intel Optane technology is a truly game-changing ingredient for the financial services industry.