Configuring iSCSI Target using vSAN 6.5

Introduction to the iSCSI Target Service in vSAN 6.5

With the VMware vSphere 6.5 release, vSAN 6.5 extended workload support to physical servers through the iSCSI Target Service. iSCSI targets on vSAN are managed the same way as other vSAN objects, using Storage Policy Based Management. vSAN functionality such as deduplication, compression, mirroring (RAID-1), and erasure coding (RAID-5/RAID-6) can be utilized with the iSCSI target service, and both CHAP and Mutual CHAP authentication are supported. By leveraging vSAN, physical servers and clustered applications can benefit from its simplicity, centralized management and monitoring, and high availability. Each LUN is represented by an individual VMDK file backed by a vSAN object.

There are no floating VIPs; iSCSI uses VMkernel ports, and all hosts should have the same VMkernel NICs configured for iSCSI. iSCSI works in an active/passive architecture, with a target being active on a single host at a time. An initial connection can be made to any host, and iSCSI redirects are used to send client traffic to the host and VMkernel port that owns the target.

Once we have vSAN configured correctly, the next step is to enable the iSCSI Target Service.

Enable the iSCSI Target Service

  1. Log in to the VMware vSphere Web Client and enable the iSCSI Target Service.


Now that the iSCSI Target Service is successfully enabled, the next step is to create an iSCSI target.
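
Before moving on, the service state can also be confirmed from the command line. Here is a minimal Python sketch, assuming SSH access to an ESXi host in the cluster and the esxcli vsan iscsi namespace that ships with vSAN 6.5 (the host name is a hypothetical placeholder):

    import subprocess

    ESXI_HOST = "esxi01.lab.local"  # hypothetical ESXi host in the vSAN cluster

    def esxcli(*args):
        """Run an esxcli command on the host over SSH and return its output."""
        cmd = ["ssh", f"root@{ESXI_HOST}", "esxcli", *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Confirm the vSAN iSCSI target service is enabled on this host.
    print(esxcli("vsan", "iscsi", "status", "get"))

    # List the iSCSI targets currently defined in the cluster.
    print(esxcli("vsan", "iscsi", "target", "list"))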


With the iSCSI target ready, the iSCSI initiators of physical workloads can be configured to access the vSAN iSCSI target.
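
On the Windows side this can be done with the built-in iscsicli utility (or the iSCSI Initiator control panel). A minimal sketch; the portal IP and IQN below are hypothetical placeholders for the values shown in your vSAN iSCSI target configuration:

    import subprocess

    TARGET_PORTAL = "192.168.10.50"  # hypothetical vSAN iSCSI VMkernel IP
    TARGET_IQN = "iqn.1998-01.com.vmware:vsan-target"  # hypothetical target IQN

    # Register the vSAN host as a target portal (iscsicli ships with Windows).
    subprocess.run(["iscsicli", "QAddTargetPortal", TARGET_PORTAL], check=True)

    # Log in to the target; the LUN then appears as a disk in Disk Management.
    subprocess.run(["iscsicli", "QLoginTarget", TARGET_IQN], check=True)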


vSAN has always been known for its simplicity. That is how simply you can configure an iSCSI target in vSAN 6.5 and present it as an iSCSI LUN to a Windows Server 2012 machine.

VMware vSAN Network Design Considerations

VMware Virtual SAN is a distributed shared storage solution that enables rapid provisioning of storage from within VMware vCenter. Because Virtual SAN is distributed shared storage, it depends heavily on a correctly configured network, both for virtual machine I/O and for communication between Virtual SAN cluster nodes. Since the majority of virtual machine I/O travels the network due to the distributed storage architecture, a high-performance, highly available network configuration is critical to a successful Virtual SAN deployment.

In this post we will cover a few important points that need to be considered from a network perspective before a VMware vSAN deployment.

Supported Network Interface Cards

In a VMware Virtual SAN hybrid configuration, Virtual SAN supports both 1 GbE and 10 GbE network interface cards. If a 1 GbE NIC is installed on the ESXi host, VMware requires that it be dedicated solely to Virtual SAN traffic. If a 10 GbE NIC is used, it can be shared with other network traffic types; it is advisable to implement QoS using Network I/O Control to prevent one traffic type from claiming all the bandwidth. Because of the potential for an increased volume of network traffic between hosts to achieve higher throughput, VMware supports only 10 GbE NICs for Virtual SAN all-flash configurations, and these too can be shared with other traffic types.
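
As a quick pre-deployment check, the link speed of each host's physical NICs can be read remotely. A minimal sketch, assuming SSH access to the ESXi hosts (the host names are hypothetical):

    import subprocess

    HOSTS = ["esxi01.lab.local", "esxi02.lab.local"]  # hypothetical host names

    for host in HOSTS:
        # 'esxcli network nic list' reports link status and speed (in Mbps)
        # for every physical NIC, so 10 GbE uplinks show a speed of 10000.
        out = subprocess.run(
            ["ssh", f"root@{host}", "esxcli", "network", "nic", "list"],
            capture_output=True, text=True, check=True,
        ).stdout
        print(f"--- {host} ---\n{out}")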

Teaming Network Interface Cards

Virtual SAN supports the Route based on IP hash load-balancing policy, but cannot guarantee a performance improvement for all configurations. IP hash performs load balancing when Virtual SAN traffic is one of many traffic types sharing the teamed uplinks; by design, Virtual SAN traffic itself is not load balanced across teamed network interface cards. NIC teaming for Virtual SAN traffic is therefore primarily a way of making the Virtual SAN network highly available, where a standby adapter takes over communication if the primary adapter fails.

Jumbo Frame Support

VMware Virtual SAN supports jumbo frames. Even though the use of jumbo frames can reduce CPU utilization and improve throughput, VMware recommends configuring them only if the network infrastructure already supports them. As vSphere already uses TCP segmentation offload (TSO) and large receive offload (LRO), jumbo frames provide limited additional CPU and performance benefits for Virtual SAN. The biggest gains from jumbo frames are found in all-flash configurations.
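
When jumbo frames are configured, the end-to-end path should be validated before Virtual SAN traffic relies on it. The standard test is vmkping with fragmentation disabled; a minimal sketch, assuming SSH access to a host (the host name and peer vSAN VMkernel IP are hypothetical):

    import subprocess

    ESXI_HOST = "esxi01.lab.local"   # hypothetical source host
    PEER_VSAN_IP = "192.168.20.12"   # hypothetical vSAN VMkernel IP of a peer host

    # -d forbids fragmentation; 8972 bytes of ICMP payload plus 28 bytes of
    # IP/ICMP headers fills a 9000-byte frame, so the ping succeeds only if
    # every hop on the path passes jumbo frames.
    result = subprocess.run(
        ["ssh", f"root@{ESXI_HOST}", "vmkping", "-d", "-s", "8972", PEER_VSAN_IP],
        capture_output=True, text=True,
    )
    print("jumbo frames OK" if result.returncode == 0 else "path drops 9000-byte frames")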

Multicast Requirement

Multicast forwarding is a one-to-many or many-to-many distribution of network traffic. Rather than using the network address of the intended recipient for its destination address, multicast uses a special destination address to logically identify a group of receivers.

One of the requirements for vSAN is to allow multicast traffic on the vSAN network between the ESXi hosts participating in the vSAN cluster. Multicast is used to discover ESXi hosts and to keep track of changes within the Virtual SAN cluster. Before deploying VMware Virtual SAN, it is also very important to test the multicast performance of the switch being used; ensure that a high-quality enterprise switch carries the Virtual SAN multicast traffic. The Virtual SAN health service can also be leveraged to test multicast performance.
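
A basic sanity check of multicast forwarding on the vSAN subnet can be done with a short Python script run on two machines in that segment: start it in receive mode on one, then run it with the send argument on the other. The group and port below are hypothetical test values, not vSAN's own groups:

    import socket
    import struct
    import sys

    GROUP, PORT = "239.1.1.1", 5000  # hypothetical test group and port

    if sys.argv[1:] == ["send"]:
        # Sender: emit one datagram to the multicast group.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 5)
        s.sendto(b"vsan-multicast-test", (GROUP, PORT))
    else:
        # Receiver: join the group and block until a datagram arrives.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, sender = s.recvfrom(1024)
        print(f"received {data!r} from {sender}")

If the receiver never prints anything, multicast is being filtered somewhere between the two machines, often by IGMP snooping on a switch with no IGMP querier.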

Summary of network design considerations

  • Virtual SAN hybrid configurations support 1 GbE and 10 GbE networks.
  • Virtual SAN all-flash configurations support only 10 GbE networks.
  • Consider implementing QoS for Virtual SAN traffic using NIOC.
  • Consider jumbo frames for Virtual SAN traffic if they are already configured in the network infrastructure.
  • Consider NIC teaming for availability/redundancy of Virtual SAN traffic.
  • Multicast must be configured and functional between all hosts.

I hope this was informative for you. Thanks for reading; be social and share it on social media if you feel it is worth sharing. Happy learning … 🙂

Configuring Windows Server 2016 as iSCSI Server

In this post I'm going to show the steps to install and configure an iSCSI server in Windows Server 2016. iSCSI (Internet Small Computer System Interface) allows SCSI commands to be sent over a LAN or WAN. iSCSI devices are disks, tapes, CDs, and other storage devices on another networked computer that you can connect to. When accessing storage devices over iSCSI, the client is referred to as the iSCSI initiator and the storage device as the iSCSI target.

  Step 1: Configuring Windows Server 2016 as an iSCSI Server

The first thing required to configure Windows Server 2016 as an iSCSI server is to install the iSCSI Target Server role. Open the Add Roles and Features Wizard and choose iSCSI Target Server from the list of roles under File and Storage Services. Click Install to proceed.

Do not choose any option from the feature list. Click Next on the remaining screens to finish the iSCSI Target Server role installation.

After successful installation of the iSCSI Target Server role, open Server Manager and click File and Storage Services.


Click iSCSI. To share storage, the first step is to create an iSCSI LUN; an iSCSI virtual disk is backed by a VHD. Click “To create an iSCSI virtual disk, start the New iSCSI Virtual Disk Wizard”.

Select the server and the volume, then click Next.

Specify the iSCSI virtual disk name and click Next.


Provide the size of the virtual disk, choosing Fixed size, Dynamically expanding, or Differencing depending on your organization's requirements. Choose New iSCSI target and specify the target name. Next we need to choose the access servers that will be accessing this iSCSI server. Click Add.

Before adding a connecting initiator to the list, configure the iSCSI initiators to connect to this iSCSI server.

Click Add iSCSI initiator. You can see all the configured iSCSI initiators connecting to this iSCSI server. Click OK to proceed.

Click Add to add more iSCSI initiators to the list of access servers. As we don't have CHAP authentication configured, click Next to proceed. Review the settings and click Create to finish the setup.
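
For repeatable builds, the whole wizard sequence can also be scripted. The sketch below shells out to the iSCSI Target PowerShell cmdlets from Python; the paths, target name, and initiator IQN are hypothetical placeholders:

    import subprocess

    commands = [
        # Install the iSCSI Target Server role (the wizard equivalent).
        "Install-WindowsFeature FS-iSCSITarget-Server",
        # Create the VHDX-backed iSCSI virtual disk (the LUN).
        r"New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN1.vhdx -SizeBytes 10GB",
        # Create the target and allow one initiator by its IQN.
        "New-IscsiServerTarget -TargetName Target1 "
        "-InitiatorIds 'IQN:iqn.1991-05.com.microsoft:client1.lab.local'",
        # Map the virtual disk to the target.
        r"Add-IscsiVirtualDiskTargetMapping -TargetName Target1 -Path C:\iSCSIVirtualDisks\LUN1.vhdx",
    ]

    for cmd in commands:
        subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)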


In this post, we covered the steps to configure Windows Server 2016 as an iSCSI server. I hope this was informative for you. Thanks for reading! Be social and share it on social media if you feel it is worth sharing.

Deploying EMC vVNX Community Edition

Virtual VNX (vVNX) is a software stack that provides many VNX features. vVNX Community Edition is a freely downloadable virtual storage appliance (VSA) that can be deployed onto ESXi 5.x or 6.x servers to run a software-defined unified VNX array. Once it is installed, you can leverage the vVNX vApp to provide storage services and apply VMware-based availability and protection tools to maintain it. It delivers unified block and file data services on general-purpose server hardware, converting the server's internal storage into a rich, shared storage environment with advanced data services.

Environmental requirements:

  • VMware infrastructure: VMware vCenter and ESXi Server, release 5.5 or later
  • Network infrastructure: 2x 1 GbE OR 2x 10 GbE
  • Battery-backed Hardware RAID controller required (512MB NV Cache recommended)

Virtual appliance configuration options:

  • 2 vCPUs at 2GHz+ and 12 GB RAM
  • Up to 4 TB Storage

During deployment of vVNX, the deployment wizard will create three disks. Do not modify these existing disks; for capacity, you should add additional disks. Do not add any additional disks until the appliance has booted up completely for the first time. The first boot of the vVNX appliance takes a long time; it took around 35 minutes in my lab. Subsequent boots will not take as long.

Deployment process

Log in to the vSphere Web Client and choose the host or cluster on which you want to deploy the OVA.
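
If you prefer scripting over the Web Client, the same deployment can be driven with VMware's ovftool. A minimal sketch; the OVA path, inventory path, and names are hypothetical:

    import subprocess

    # Deploy the vVNX OVA to a cluster via vCenter using ovftool.
    subprocess.run([
        "ovftool",
        "--acceptAllEulas",
        "--name=vVNX01",                 # hypothetical VM name
        "--datastore=datastore1",        # hypothetical target datastore
        "--network=VM Network",          # hypothetical port group
        "--diskMode=thin",
        "vVNX.ova",                      # hypothetical local path to the OVA
        "vi://administrator@vsphere.local@vcenter.lab.local/DC/host/Cluster/",
    ], check=True)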


Accept the extra configuration options and click Next.


Accept the license agreement and click Next.


Choose the disk format. Thick provisioning is recommended; as I am running the appliance in a lab, I configured the disk format as Thin.


Choose the appropriate port group.


Provide the management interface IP Address.


Select Power on the VM and click Finish. Do not add any disks until the appliance has booted up completely once.


Log in to EMC Unisphere by browsing to the management IP address. Use the default username and password, i.e. admin / Password123#.
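
Once the management IP responds, the appliance can also be queried programmatically. This sketch assumes vVNX exposes the same Unisphere Management REST API as the EMC Unity family, which may differ between releases; the management IP is hypothetical:

    import requests
    import urllib3

    MGMT_IP = "192.168.1.100"  # hypothetical management IP of the appliance

    # A fresh appliance presents a self-signed certificate, so skip
    # verification (lab use only) and silence the resulting warning.
    urllib3.disable_warnings()

    resp = requests.get(
        f"https://{MGMT_IP}/api/types/basicSystemInfo/instances",
        params={"fields": "name,softwareVersion,model"},
        verify=False,
    )
    print(resp.json())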


Once logged in, you will see the configuration wizard for post-deployment configuration.


Accept the license agreement and click Next.


Change the admin password.


The wizard will give you the system UUID, which is required for registering the product.


Log in to the EMC portal to download the license file; provide the system UUID to generate it.


Download the license file.


Import the license file into the appliance.


Choose the license file and click Finish.


Click Next after importing the license to the appliance.


Provide the DNS Server IP address.


Provide the NTP Server IP Address and click Next.


Now a storage pool needs to be created. I have added an additional disk to the appliance. Click Create Storage Pools.


Give a name and description for the storage pool.

Click the highlighted icon to choose whether you want to use the storage tier for capacity or performance.


Click Next.


Define the capability profile. This is required if you want to use the storage tier for VMware vVols-based storage provisioning.


Add additional tags if needed.


Click Finish to create the storage pool.


Next you can configure the iSCSI network interface to make the device accessible to iSCSI clients. Provide the networking details for the iSCSI interface.


Next we can configure the appliance as a NAS server. Click the highlighted icon to do so.


Type in the server name and choose the storage pool to be made available to NAS clients.


Choose the interface and provide its network details.

Choose appropriate sharing protocols.


Configure a directory service if needed.

Enable DNS for the NAS server.

Click Finish to configure the NAS server.


Click Next to finish the configuration wizard.


In this post we covered the process to deploy the vVNX appliance and configure it as an iSCSI and NAS server. I hope this was informative for you. Thanks for reading! Be social and share it on social media if you feel it is worth sharing.