How does RDM work?

Raw device mapping (RDM) is a VMware vSphere mechanism that allows a virtual machine (VM) to directly access a physical LUN on a storage array. The VM performs I/O against the storage device itself, rather than going through the standard Virtual Machine File System (VMFS).

Instead of storing the VM’s data in a virtual disk file (VMDK) on the VMFS, the RDM provides a mapping file (stored on the VMFS) that points to the raw LUN. This allows the VM to perform block-level I/O directly to the LUN, as if it were a physical disk.

To the VM, the RDM appears as a standard small computer system interface (SCSI) device that supports the usual file operations.
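On an ESXi host, the mapping file is created with the `vmkfstools` utility. A minimal sketch, in which the NAA device identifier, datastore name, and VM folder are placeholders for illustration:

```shell
# List attached SCSI devices to find the LUN's NAA identifier
esxcli storage core device list | grep -i "naa."

# Create an RDM mapping file on a VMFS datastore that points at the raw LUN
# (-r = virtual compatibility mode; the device path below is a placeholder)
vmkfstools -r /vmfs/devices/disks/naa.600508b4000971fa0000a00000770000 \
  /vmfs/volumes/datastore1/example-vm/example-rdm.vmdk

# Query the mapping file to confirm it is a raw device mapping
vmkfstools -q /vmfs/volumes/datastore1/example-vm/example-rdm.vmdk
```

The small .vmdk mapping file lives on VMFS; block I/O from the VM goes to the raw LUN it points to.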


Why would an organization implement RDM?

Guest OS clustering

Guest clusters are collections of two or more VMs that work together to improve availability and handle failover if a host fails.

RDM allows organizations to set up a few different types of these clusters:

  • Cluster-in-a-box (CIB) systems, where two virtual machines are running on the same ESX/ESXi host. This type of system is useful for testing and development.
  • Cluster-across-boxes (CAB) systems, where virtual machines on separate ESXi hosts form a cluster using shared RDM disks. CAB systems are commonly used in production for high availability.
  • Physical-to-virtual cluster systems, a mixed cluster configuration where a physical machine and a virtual machine form a cluster using shared storage. This setup helps during migrations or to bridge legacy systems with virtual environments.

Distributed file locking

RDM uses VMFS’s distributed locking for the RDM pointer file, which manages VM-level access and operations. This mechanism allows a virtual machine to access a raw LUN directly, making it possible to implement shared-disk clustering scenarios while helping prevent data corruption at the virtualization layer.

While the RDM pointer file resides on VMFS and benefits from its file-level locking, access to the raw LUN itself is not managed by VMFS. Instead, coordination is handled by clustering software (such as Microsoft Failover Clustering) running within the guest operating systems.

This setup ensures safe and coordinated access to shared storage in supported clustering configurations.

SAN snapshots

Virtual RDM setups can use mapping files that behave like a standard virtual disk file or VMDK, which makes it simpler to include RDMs in storage area network (SAN) and vSphere snapshots and backups.

User-friendly persistent names

RDM allows administrators to assign user-friendly names to the RDM mapping files and virtual disk labels used in a VM configuration.

While the underlying SCSI device retains its original identity (e.g., WWN or LUN ID), the VM can reference the RDM through a labeled disk file, simplifying management and identification within vSphere.

File system operations

RDM allows a virtual machine to access a raw LUN as if it were a locally attached SCSI disk.

Once the guest operating system mounts and formats this LUN with a supported file system (like NTFS or ext4), it can use standard file system utilities and perform typical file operations—just as it would with any other disk.
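For example, on a Linux guest the RDM-backed LUN can be prepared with ordinary disk tools. A sketch assuming the new disk appears as /dev/sdb (check the lsblk output on your system first):

```shell
# Identify the newly presented disk
lsblk

# Format it with a supported file system and mount it (the device name is an
# assumption; this erases the disk's contents)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/rdm-data
sudo mount /dev/sdb /mnt/rdm-data

# Standard file system utilities work just as with any other disk
df -h /mnt/rdm-data
```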

Direct access for SCSI commands

RDM in physical compatibility mode allows a virtual machine to send SCSI commands directly to a storage device, bypassing some layers of virtualization.

This enables support for advanced storage features like SAN-based snapshots and guest clustering. However, the performance benefits over standard virtual disks are typically minimal, and RDM is used primarily for specific functionality rather than speed.

What are the limitations of RDM?

As useful as RDM can be, it has limitations that may not fit every organization's goals.

System and management complexity

RDM adds complexity to VM deployment, backup, and migration. RDM disks require more effort to map and maintain than standard virtual disk files, and they demand deeper expertise from an organization's IT team.

Direct-attached block or RAID devices

RDM relies on unique SCSI identifiers (such as serial numbers or WWNs) to map and persistently track individual LUNs.

As a result, RDM may not work with direct-attached storage (DAS) or certain RAID configurations if they don't expose unique, consistent SCSI IDs or fail to support necessary features like SCSI-3 persistent reservations.

These limitations make such devices unsuitable for RDM use in shared-disk clustering or high-availability setups.
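A quick way to check whether a given device can be mapped is to inspect its entry on the ESXi host; the "Is RDM Capable" field in the output indicates support (the NAA identifier below is a placeholder):

```shell
# Show a device's properties, including its unique identifier and whether
# the host considers it RDM-capable (often false for local DAS disks)
esxcli storage core device list -d naa.600508b4000971fa0000a00000770000
```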

Storage for physical compatibility mode

When using physical compatibility mode, vSphere-level features such as VM snapshots are not available for the RDM disk.

However, because this mode passes SCSI commands directly through to the device, it enables SAN-level snapshots, backups, and mirroring, features that depend on the storage array's capabilities rather than on vSphere.

Partitioning limitations

RDM systems require mapping an entire LUN to a virtual machine—partial mappings, such as individual partitions or segments of a disk, are not supported.

Any partitioning or formatting of the disk must be done from inside the guest OS after the full LUN has been presented to the VM.

LUN consistency

When migrating VMs, RDM systems need all participating hosts to have consistent LUN IDs for proper mapping.

Physical RDM vs. virtual RDM: What's the difference?

There are two primary types of RDM: physical and virtual.

Physical RDM

Physical RDM (also known as pass-through RDM or pRDM) allows a virtual machine's guest OS to directly communicate with a SCSI device, bypassing most of VMware’s virtualization layer.

While the REPORT LUNS command is virtualized to preserve isolation between VMs, all other SCSI commands are passed through to the storage device.

This makes physical RDM ideal for scenarios that require low-level storage access or shared-disk clustering.

Common use cases for physical RDM include:

  • Running SAN management agents or software that needs direct SCSI access
  • Configuring cost-effective virtual-to-physical or physical-to-virtual clusters
  • Supporting Microsoft Failover Clustering with shared storage
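Creating a physical compatibility RDM uses the -z (pass-through) option of `vmkfstools`. A sketch with placeholder device and datastore paths:

```shell
# Create a pass-through (physical compatibility) RDM mapping file; SCSI
# commands from the VM are sent to the storage array largely unmodified
vmkfstools -z /vmfs/devices/disks/naa.600508b4000971fa0000a00000770000 \
  /vmfs/volumes/datastore1/cluster-node1/shared-prdm.vmdk
```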

Virtual RDM

Virtual RDM (vRDM) provides a virtualized interface to a raw LUN.

From the virtual machine's perspective, the disk appears similar to a VMDK file on VMFS, though the data is stored directly on the physical LUN. All SCSI commands—including reads, writes, and control commands—are handled by the VMware virtualization layer.

Because the RDM pointer file resides on a VMFS volume, virtual RDM supports key vSphere features like:

  • VM snapshots
  • VM cloning
  • File locking for access coordination
  • Cluster-in-a-box and cluster-across-boxes configurations
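Because a vRDM participates in VM-level snapshots, a vSphere snapshot of a VM with a vRDM disk can be taken like any other, for example from the ESXi shell (the VM ID is looked up first; the name and description are illustrative):

```shell
# List registered VMs and note the target VM's ID
vim-cmd vmsvc/getallvms

# Create a snapshot of VM ID 1: name, description, includeMemory, quiesced
vim-cmd vmsvc/snapshot.create 1 "pre-upgrade" "snapshot including vRDM disk" 0 0
```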

RDM vs. VDI: What's the difference?

While RDM and virtual desktop infrastructure (VDI) are both part of virtualized environments, they serve very different purposes.

  • RDM is a VMware feature that allows a virtual machine to directly access a physical storage LUN, often used in clustering or SAN-integrated scenarios.
  • VDI is a desktop virtualization strategy that delivers user desktops via virtual machines, usually through platforms like Parallels.

RDM is about how a VM accesses storage, while VDI encompasses the various aspects of virtual applications and desktop delivery.

RDM is a virtualization feature built into VMware solutions that allows virtual machines to directly access physical storage, such as a SAN LUN.

While the RDM pointer file resides on a VMFS volume, the underlying storage can be formatted with any file system (e.g., NTFS, ext4), giving the guest OS direct access to the raw disk.

RDM is especially useful for scenarios like clustering, SAN tools, and workloads needing low-level disk access. It's useful for some cluster configurations and snapshots but doesn't map partitions and may not work on direct-attached block devices.

VDI, on the other hand, is an end-user-facing technology that delivers desktop environments through virtual machines hosted on centralized servers and run through a centrally managed hypervisor.

Because user desktops and data reside on centralized servers, users can access the same desktop and files from any device, improving flexibility and security.

  • Storage access: RDM accesses physical storage directly (a raw LUN); VDI stores data on shared virtual disks (VMDKs).
  • Performance: RDM can use the physical device to improve performance for I/O-intensive applications; VDI performance depends on the network and resource allocation.
  • Snapshots and cloning: RDM has limited support (only virtual RDM supports VM snapshots and cloning); VDI fully supports functionality like snapshots and cloning.
  • Centralization: RDM is typically less centralized and allows more direct device control; VDI is primarily centralized, giving IT greater control over devices and updates.
  • Use cases: RDM suits I/O-intensive workloads and clustering; VDI delivers desktops and apps to users remotely and secures remote access.
  • Dependency: RDM depends on a physical SAN or storage device(s); VDI depends on the hypervisor, network, and other infrastructure.

See how Parallels RAS works alongside RDM for more efficient resource usage at your organization.


Parallels and RDM

While Parallels RAS does not directly integrate with VMware’s raw device mapping (RDM), it can be deployed on top of vSphere infrastructure that uses RDM for specific virtual machines.

This approach allows organizations to deliver legacy applications—especially those that require raw LUN access—via virtual desktops or published apps, while still benefiting from a VDI-like user experience.

It also offers infrastructure flexibility and avoids locking the organization into a single vendor stack.

Take the next step

Parallels Workspace Solutions are designed to make things simpler for your organization.

Ready to explore your options and find and create the combination of tools that fit your organization's unique needs? Start with Parallels RAS and see how it enhances your organization's virtual application and desktop delivery and management—get your free trial today.
