Like last year, SNIA held its Persistent Memory and Computational Storage Summit virtually this year. The summit examined some of the latest developments in these areas. Let's explore some of the takeaways from day one of this virtual conference.
Dr. Yang Seok, Vice President of the Memory Solutions Lab at Samsung, spoke about the company's SmartSSD. He argued that computational storage devices that offload processing from CPUs can reduce energy consumption and thus provide an environmentally friendly computing alternative. He pointed out that thanks to technological innovations, data center energy consumption has remained roughly constant at around 1% of global electricity use since 2010 (200-250 TWh per year in 2020). Still, a challenging milestone remains: reducing data center greenhouse gas emissions by 53% from 2020 to 2030.
Computational storage drives (CSDs) can offload work from a CPU, freeing the CPU for other tasks, or accelerate processing locally, closer to the stored data. This local processing consumes less power than a CPU, can perform data-reduction operations inside the storage device (allowing higher effective storage capacities and avoiding data movement), and can be part of a virtualization environment. CSDs appear competitive, using less power for I/O-intensive tasks, and their aggregate computing power scales with the number of CSDs deployed.
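The data-movement savings behind this argument can be sketched in a few lines of Python. This is purely an illustrative model, not a real CSD API: a conventional scan ships every block across the interconnect to the host, while a near-data scan runs the filter inside the drive and returns only the matching blocks.

```python
BLOCK_SIZE = 4096

def host_side_scan(blocks, predicate):
    """Conventional path: move every block to the host, then filter there."""
    bytes_moved = sum(len(b) for b in blocks)
    matches = [b for b in blocks if predicate(b)]
    return matches, bytes_moved

def csd_side_scan(blocks, predicate):
    """Near-data path: filter inside the drive, move only matching blocks."""
    matches = [b for b in blocks if predicate(b)]  # would run on the CSD
    bytes_moved = sum(len(b) for b in matches)     # only results cross the bus
    return matches, bytes_moved

# Toy data set: 100 blocks tagged 0-6 in their first byte.
blocks = [bytes([i % 7]) * BLOCK_SIZE for i in range(100)]
wanted = lambda b: b[0] == 3  # toy predicate: "records tagged 3"

host_hits, host_bytes = host_side_scan(blocks, wanted)
csd_hits, csd_bytes = csd_side_scan(blocks, wanted)
assert host_hits == csd_hits  # same answer either way, far fewer bytes moved
```

The answer is identical either way; the difference is how many bytes cross the interconnect, which is where the power savings for I/O-intensive workloads come from.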
Samsung announced its first SmartSSD in 2020 and says the next generation will be available soon; see the image below. The next generation will allow more customization of what the processor in the CSD can do, enabling its use in more applications and potentially saving power for many processing tasks.
Eideticom's Stephen Bates and Intel's Kim Malone discussed emerging standards for NVMe computational storage. One such addition, part of the Computational Programs Command Set, is the computational namespace: an entity in an NVMe subsystem that can execute one or more programs, may have asymmetric access to subsystem memory, and may support only a subset of all possible program types. The conceptual image below gives an idea of how this works.
Both device-defined and downloadable programs are supported. Device-defined programs are fixed programs provided by the manufacturer, or functionality already implemented by the device (such as compression or decryption) that can be invoked as a program. Downloadable programs are loaded into the computational namespace by the host.
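The distinction between the two program types can be made concrete with a toy model. The class and method names below are illustrative only, not the NVMe command interface: a namespace ships with a vendor-provided (device-defined) program and accepts additional programs downloaded by the host.

```python
import zlib

class ComputeNamespace:
    """Toy model of a computational namespace (illustrative, not the spec)."""

    def __init__(self):
        # Device-defined programs: fixed functions the device already implements.
        self.programs = {"compress": zlib.compress}

    def download_program(self, name, fn):
        """Host loads a downloadable program into the namespace."""
        self.programs[name] = fn

    def execute(self, name, data):
        """Invoke a program by name against the supplied data."""
        return self.programs[name](data)

ns = ComputeNamespace()
# Invoke a device-defined program (vendor-supplied compression).
small = ns.execute("compress", b"abc" * 1000)
# Download a host-supplied program, then invoke it the same way.
ns.download_program("count_zeros", lambda d: d.count(0))
zeros = ns.execute("count_zeros", bytes(16))
```

The point of the model is that the host invokes both kinds of program through the same mechanism; only their origin differs.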
Intel's Andy Rudoff gave an update on persistent memory, walking through developments along a timeline. Intel Optane PMem became generally available in 2019. The image below shows Intel's approach to connecting Optane PMem to the memory bus.
Note that direct access (DAX) is key to this use of Optane PMem. The image below shows a timeline of PMem-related developments since 2012.
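DAX lets an application map persistent memory into its address space and issue loads and stores directly, bypassing the page cache. The sketch below uses an ordinary temporary file as a stand-in for a file on a DAX-mounted PMem filesystem (a hedged simplification; real PMem code would also use flush instructions or a library such as PMDK), but the map/store/flush pattern is the same in spirit.

```python
import mmap
import os
import struct
import tempfile

# An ordinary file stands in for a file on a DAX-mounted PMem filesystem.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.truncate(4096)  # size the "persistent" region

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)   # map the region into our address space
    struct.pack_into("<Q", m, 0, 42)  # store a 64-bit value with a plain write
    m.flush()                         # analogous to flushing stores to media
    m.close()

# The stored value survives the mapping, as it would a power cycle on PMem.
with open(path, "rb") as f:
    value = struct.unpack("<Q", f.read(8))[0]
```

The appeal of this model is that persistence looks like memory access, not like an I/O call.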
Andy went through several customer use cases for Intel's Optane PMem, including Oracle Exadata with PMem access via RDMA, Tencent Cloud, and Baidu. He also discussed future PMem directions, especially in combination with CXL, including accelerating AI/ML and data-centric applications with temporary caching and persistent metadata storage.
VMware's Jinpyo Kim and Intel Labs' Michael Mesnier partnered with MinIO to discuss computational storage in a virtualized environment. Applications presented included data scrubbing (reading data back and detecting accumulated errors) of a MinIO storage stack and of a Linux file system. They found that performing this computation close to the stored data was 50% to 18 times more scalable, depending on connection speed (the process is read-intensive). VMware also ran a near-storage log-analysis project with UC Irvine on a research prototype and found query performance an order of magnitude better than a software-only approach; this feature will be ported to a Samsung SmartSSD.
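Data scrubbing of the kind described here boils down to re-reading stored blocks and checking each against a stored checksum. The sketch below is a minimal host-side model with illustrative names; the scalability win comes when a CSD runs the same verify loop inside the drive and returns only the list of bad blocks.

```python
import zlib

def write_blocks(store, blocks):
    """Store each block alongside its CRC32 checksum."""
    for i, b in enumerate(blocks):
        store[i] = (b, zlib.crc32(b))

def scrub(store):
    """Re-read every block; return indices whose checksum no longer matches."""
    return [i for i, (b, crc) in store.items() if zlib.crc32(b) != crc]

store = {}
write_blocks(store, [bytes([i]) * 512 for i in range(8)])
assert scrub(store) == []  # clean immediately after writing

# Simulate silent corruption: flip one bit in block 5, keep the old checksum.
data, crc = store[5]
store[5] = (data[:-1] + bytes([data[-1] ^ 1]), crc)
bad = scrub(store)  # the scrub pass now flags block 5
```

Since the scrub is read-intensive and its output is tiny, it is a natural fit for near-data execution.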
VMware also used NGD computational storage devices to run a Greenplum MPP database. The company has put considerable work into virtualizing CSDs (which it calls vCSDs) on vSphere/vSAN, allowing more effective sharing of hardware accelerators and migration of a vCSD between compatible hosts. CSDs can be used to disaggregate cloud-native apps and offload memory-intensive functions. The figure below shows the joint efforts between MinIO, VMware and Intel to use CSDs.
Meta's Chris Petersen spoke about storage for AI at Meta. Meta uses AI for many applications and at scale, from the data center to the edge. Because AI workloads scale so quickly, more vertical integration is required, from software requirements down to hardware design. A significant portion of the capacity requires high-bandwidth accelerator memory, but inference keeps more of its capacity at low bandwidth than training does, and inference also has strict latency requirements. They found that a memory tier beyond HBM and DRAM can be useful, especially for inference.
They found that for software-defined storage backed by SSDs, they had to use SCM SSDs (Optane SSDs); the faster (SCM) SSDs reduced the need to scale out and thus reduced power consumption. The image below shows Meta's view of the storage tiers required for AI applications, indicating that tiers at CXL latencies, spanning a range of performance and capacity points, will be needed to optimize performance, cost and efficiency.
Arthur Sainio and Pekon Gupta from SMART Modular Technologies spoke about using NVDIMM-Ns in DDR5 and CXL-enabled applications. These NVDIMM-Ns include a battery backup that allows the DRAM contents on the module to be written back to flash in the event of a power failure. The technology is also being developed for CXL applications, with an NV-XMM specification for devices that have an integrated backup power source and use the standard programming model for CXL Type 3 devices. The figure below shows the form factors for these devices.
In addition to these presentations, David Eggleston from Intuitive Cognitive Consulting moderated a panel discussion featuring the day's speakers, and the day ended with a Birds of a Feather session on computational storage moderated by Scott Shadley from NGD and Jason Molgaard from AMD.
Samsung, Intel, Eideticom, VMware, Meta and SMART Modular delivered insightful presentations on persistent memory and computational storage on the first day of the SNIA Persistent Memory and Computational Storage Summit 2022.