Low latency is a critical part of high-performing SMB for SDS customers. With RDMA (SMB Direct), an SMB implementation can avoid sluggish performance by sparing the CPU.
For software-defined storage (SDS) vendors, SMB performance is crucial.
One aspect of performance that should not be forgotten is latency.
Latency in SMB shouldn’t be overlooked
In enterprise networking, latency refers specifically to the time it takes for a data packet to travel from its source to its destination.
With the large amounts of data moving between servers and clients in an enterprise, latency can climb, harming performance and productivity. This can be critical in industries that depend on rapid data sharing across the enterprise network.
The Media & Entertainment (M&E) industry is one of those cases. With content creators (such as studios involved in the post-production of films and animation) increasingly moving into 4K and 8K video production, larger file sizes and bandwidth needs lead to challenges in video editing and processing. Increases in latency can significantly hinder performance, delaying content production across teams – which translates to increased costs and a frustrating customer experience.
What is RDMA (SMB Direct)?
RDMA (remote direct memory access) is a technology that enables high-throughput, low-latency data exchange over the network. It does this by moving data directly between the memory of the machines involved, without going through their CPUs.
SMB file servers can leverage RDMA over Converged Ethernet (known as RoCE) – in the SMB protocol, this capability is called SMB Direct. It allows data transfers to bypass the CPUs on both ends of the connection (server and client). Because CPU usage is drastically reduced, latency is kept extremely low. The result is high throughput, low latency, and low CPU usage for customers, even on fairly ordinary network infrastructure.
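To make the kernel-bypass idea concrete, here is a minimal sketch using the Linux libibverbs API to register a buffer that an RDMA-capable NIC can serve directly. It is illustrative only, not Fusion File Share's actual implementation: a real SMB Direct transfer also involves queue-pair setup and an out-of-band exchange of the buffer address and rkey, which are omitted here, and the 4 MiB buffer size is an arbitrary choice.

```c
/* Minimal libibverbs sketch: register a buffer an RDMA NIC can access directly.
 * Illustrative only (not Fusion File Share's implementation).
 * Build: gcc rdma_reg.c -o rdma_reg -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* first RDMA device */
    if (!ctx) {
        fprintf(stderr, "ibv_open_device failed\n");
        return 1;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */

    /* Buffer a remote peer may read from or write to directly. Once registered,
     * the NIC moves data in and out of it via DMA; the host CPU never copies
     * the payload. */
    size_t len = 4 * 1024 * 1024;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }

    /* The peer needs this address and rkey (normally exchanged out of band)
     * to target the buffer with RDMA READ/WRITE operations. */
    printf("registered %zu bytes at %p, rkey=0x%x\n", len, buf, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```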
Shortening the SMB server path
Data that’s exchanged between an SMB file server and its client has to travel through multiple layers of the network stack (seven layers, in the classic OSI model). To reach the client, an application on the SMB file server typically has to perform context switches and push the data through the CPU.
Figure 1 below illustrates the path the data must travel as it moves back and forth between the SMB server and its target. The top half of the graphic shows the data path without RDMA; the lower half shows how significantly that path is shortened by a RoCE NIC. SMB Direct bypasses the TCP stack and allows for remote direct memory access (RDMA) between client and server.
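For comparison, here is a minimal sketch of that conventional, non-RDMA path: the sender reads file data into a user-space buffer and pushes it through a TCP socket. Every read() and send() is a system call, which means a context switch, and each copies the payload between user space and kernel buffers on the CPU – exactly the work SMB Direct avoids. This is a generic illustration, not code from any SMB implementation; the address, port, and file name are placeholders.

```c
/* Conventional (non-RDMA) path: each syscall below is a context switch, and
 * the payload is copied between user space and kernel buffers by the CPU.
 * Generic illustration; host, port, and file name are placeholders.
 */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(4455) };
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);     /* placeholder peer */
    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        return 1;
    }

    int fd = open("frame_0001.dpx", O_RDONLY);            /* placeholder frame */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[64 * 1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0) {        /* copy: kernel to user */
        ssize_t off = 0;
        while (off < n) {
            ssize_t sent = send(sock, buf + off, n - off, 0);  /* copy: user to kernel */
            if (sent < 0) {
                perror("send");
                return 1;
            }
            off += sent;
        }
    }

    close(fd);
    close(sock);
    return 0;
}
```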
With the latency reductions provided by RDMA, customers in the Media & Entertainment industry see a huge benefit. RDMA allows them to maintain the high data-exchange rates and frames-per-second playback they need.
How much of a difference does RDMA really make?
Not all SMB implementations feature RDMA support. Fusion File Share by Tuxera has a suite of technical features – such as RDMA – that enable SDS vendors to provide high-performing, low-latency enterprise storage to customers in the M&E industry.
To examine how RDMA impacts network latency, we used Frametest to compare Fusion File Share by Tuxera with the open-source alternative Samba, which lacks RDMA support. Frametest is a command-line application for testing file I/O on local and network-mounted volumes. It lets developers run simulations with different parameters (such as the number of threads) to measure performance. The tool first writes a specified number of frames into a directory (for our testing, the parameters were set to 2,000 4K frames); another flag in the same command then reads the frames back with the specified number of threads and prints a histogram of the I/O along with a short report on the run metrics.
The difference in latency between Fusion File Share (which benefits from RDMA) and the open-source alternative (with no RDMA support) was huge. Fusion maintained consistently low latency – with 8 threads, Fusion achieved a latency one-fourth that of the open-source alternative. Over the network, Fusion delivers to clients approximately 80 to 85% of the throughput of accessing the local volume directly.
Final thoughts
In industries like M&E that rely on the rapid transfer of large quantities of data, a consistently low latency in network file sharing can be one of the keys to enabling – and maintaining – high performance. For such SDS customers, that means greatly improved productivity, and a lot less frustration and wasted time.
Ensure rock-bottom latency with our enterprise-grade SMB server implementation – Fusion File Share by Tuxera.
Tuxera
Tuxera is the leading provider of quality-assured data storage management software and networking technologies. We help people and businesses store and move data reliably, while making file transfers fast and content easily accessible. Our software is at the core of billions of phones, tablets, cars, TV sets, cameras, drones, external storage, routers, spacecraft, IoT devices, and public cloud storage platforms.
Tuxera’s customers include car makers, device manufacturers, industrial equipment manufacturers, data-driven enterprises, and many more. We help them solve complex challenges involving data in all its states – at rest, in use, and in motion. They rely on our software to protect data integrity, ensure data accessibility, improve storage performance, transfer data rapidly and securely, and extend flash memory lifetime in their products and for their projects.