Because the marketing slides from the VMware One Cloud presentation did not give me enough detail on what's new in vSphere 6, I copied the information below from the hands-on labs module HOL-SDC-1410 - What's New with vSphere 6.
I hope more readers can benefit from this information and get up to speed quickly with the latest and greatest from VMware.
WHAT'S NEW IN VSPHERE & VCENTER 6.0
SCALABILITY - CONFIGURATION MAXIMUMS
The Configuration Maximums have increased across the board for vSphere Hosts in 6.0. Each vSphere Host can now support:
- 480 Physical CPUs per Host
- Up to 12TB of Physical Memory
- 1000 VMs per Host
- 64 Hosts per Cluster
SCALABILITY - VIRTUAL HARDWARE V11
This release of vSphere gives us Virtual Hardware v11. Some of the highlights include:
- 128 vCPUs
- 4 TB RAM
- Hot-add RAM now vNUMA aware
- WDDM 1.1 GDI acceleration features
- xHCI 1.0 controller compatible with OS X 10.8+ xHCI driver
- A virtual machine can now have a maximum of 32 serial ports
- Serial and parallel ports can now be removed
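The lab content does not include any code, but to take advantage of these per-VM limits a virtual machine has to run at virtual hardware version 11. Below is a minimal pyVmomi sketch of that upgrade; the vCenter address, credentials, and VM name are placeholders, and the VM must be powered off before the upgrade.
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details - replace with your own environment.
ctx = ssl._create_unverified_context()          # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name using a container view (the name "test-vm-01" is an example).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm-01")
view.DestroyView()

# Upgrade the virtual hardware to version 11; the VM must be powered off.
WaitForTask(vm.UpgradeVM_Task(version="vmx-11"))
Disconnect(si)
```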
LOCAL ESXI ACCOUNT AND PASSWORD MANAGEMENT ENHANCEMENTS
In vSphere 6.0, we have expanded support for account management on ESXi hosts.
New ESXCLI Commands:
- CLI interface for managing ESXi local user accounts and permissions
- Coarse-grained permission management
- ESXCLI can be invoked against vCenter instead of directly accessing the ESXi host.
- Previously, the account and permission management functionality for ESXi hosts was available only with direct host connections.
Password Complexity:
- Previously, customers had to edit the file /etc/pam.d/passwd by hand; now password complexity can be set through the VIM API (OptionManager.updateValues()).
- Advanced options can also be accessed through vCenter, so there is no need to make a direct host connection (a sketch of this follows below).
- A PowerCLI cmdlet allows setting host advanced configuration options
Account Lockout:
- Security.AccountLockFailures - "Maximum allowed failed login attempts before locking out a user's account. Zero disables account locking."
- Default: 10 tries
- Security.AccountUnlockTime - "Duration in seconds to lock out a user's account after exceeding the maximum allowed failed login attempts."
- Default: 2 minutes
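For illustration, here is a minimal pyVmomi sketch that sets the lockout options above on a single host through vCenter; host names and credentials are placeholders. The lab text refers to the VIM API method as OptionManager.updateValues(), while pyVmomi exposes the update call as UpdateOptions(), so verify the exact name against the API reference for your version. The same mechanism is used for the password complexity setting mentioned above.
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own environment.
ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Locate an ESXi host by name (esx01.example.com is an example name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
view.DestroyView()

# Host advanced options are exposed through the host's OptionManager.
# Note: the lab text calls this updateValues(); pyVmomi exposes it as UpdateOptions().
# If the server rejects a value type, query the option first and reuse its value type.
opt_mgr = host.configManager.advancedOption
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Security.AccountLockFailures", value=5),   # default: 10 tries
    vim.option.OptionValue(key="Security.AccountUnlockTime", value=300),   # seconds; default: 120
])
Disconnect(si)
```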
VCENTER SERVER 6.0 – PLATFORM SERVICES CONTROLLER
The Platform Services Controller (PSC) includes common services that are used across the suite.
- These include SSO, Licensing and the VMware Certificate Authority (VMCA)
- The PSC is the first piece that is either installed or upgraded. When upgrading, an existing SSO instance becomes a PSC.
- There are two models of deployment, embedded and centralized.
- Embedded means the PSC and vCenter Server are installed on a single virtual machine. Embedded is recommended for sites with a single SSO solution, such as a single vCenter Server.
- Centralized means the PSC and vCenter Server are installed on different virtual machines. Centralized is recommended for sites with two or more SSO solutions, such as multiple vCenter Servers, vRealize Automation, etc. When deploying in the centralized model it is recommended to make the PSC highly available so as not to have a single point of failure; in addition to using vSphere HA, a load balancer can be placed in front of two or more PSCs to create a highly available PSC architecture.
The PSC and vCenter Server can be mixed and matched, meaning you can deploy appliance-based PSCs alongside Windows-based PSCs, with both Windows and appliance-based vCenter Servers. Any combination uses the PSC's built-in replication.
WHAT'S NEW IN VSPHERE 6.0 - NETWORKING AND SECURITY
Networking in vSphere 6.0 has received some significant improvements, which have led to the following new vMotion capabilities:
- Cross vSwitch vMotion
- Cross vCenter vMotion
- Long Distance vMotion
- vMotion across Layer 3 boundaries
More detail on each of these follows as well as details on the improved Network I/O Control (NIOC) version 3.
CROSS VSWITCH VMOTION
Cross vSwitch vMotion allows you to seamlessly migrate a VM across different virtual switches while performing a vMotion.
- You are no longer restricted by the networks you created on the vSwitches when you vMotion a virtual machine.
- Requires the source and destination portgroups to share the same L2. The IP address within the VM will not change.
- vMotion will work across a mix of switches (standard and distributed). Previously, you could only vMotion from vSS to vSS or within a single vDS. This limitation has been removed.
The following Cross vSwitch vMotion migrations are possible:
- vSS to vSS
- vSS to vDS
- vDS to vDS
- vDS to vSS is not allowed
Another added feature is that a vDS to vDS migration transfers the vDS metadata (network statistics) to the destination vDS.
CROSS VCENTER VMOTION
Expanding on the Cross vSwitch vMotion enhancement, we are also excited to announce support for Cross vCenter vMotion.
vMotion can now perform the following changes simultaneously.
- Change compute (vMotion) - Performs the migration of virtual machines across compute hosts
- Change storage (Storage vMotion) - Performs the migration of the virtual machine disks across datastores
- Change network (Cross vSwitch vMotion) - Performs the migration of a VM across different virtual switches
and finally…
- Change vCenter (Cross vCenter vMotion) - Performs the migration of the VM to a different vCenter Server, changing which vCenter manages the VM.
All of these types of vMotion are seamless to the guest OS. As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity since the IP of the VM will not change. This functionality builds upon Enhanced vMotion, and shared storage is not required. Targeted support includes local (single site), metro (multiple well-connected sites), and cross-continental sites. A sketch of such a migration through the vSphere API follows below.
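As a hedged illustration (not part of the lab content), the pyVmomi sketch below shows what a Cross vCenter vMotion can look like through the API: a RelocateSpec that carries the destination host, resource pool, datastore, folder, a retargeted vNIC backing (the Cross vSwitch part), and a ServiceLocator describing the destination vCenter. All host names, credentials, thumbprints, and object names are placeholders, and the property names used here (service, deviceChange, ServiceLocatorNamePassword) should be checked against the vSphere 6.0 API reference.
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()                 # lab use only

# Source vCenter (currently manages the VM) and destination vCenter - placeholder names.
src_si = SmartConnect(host="vc01.example.com", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
dst_si = SmartConnect(host="vc02.example.com", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)

def find(si, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

vm = find(src_si, vim.VirtualMachine, "web-01")
dst_host = find(dst_si, vim.HostSystem, "esx10.example.com")
dst_ds = find(dst_si, vim.Datastore, "datastore-b")
dst_pg = find(dst_si, vim.Network, "VM Network")       # must be the same L2 as the source portgroup

# The ServiceLocator tells the source vCenter how to reach the destination vCenter.
service = vim.ServiceLocator(
    instanceUuid=dst_si.content.about.instanceUuid,
    url="https://vc02.example.com",
    sslThumbprint="AA:BB:CC:...",                      # SHA1 thumbprint of vc02's SSL certificate
    credential=vim.ServiceLocatorNamePassword(username="administrator@vsphere.local",
                                              password="VMware1!"))

# Retarget the VM's first vNIC to a standard portgroup at the destination (Cross vSwitch vMotion).
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(network=dst_pg,
                                                                   deviceName=dst_pg.name)
nic_change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)

spec = vim.vm.RelocateSpec(
    host=dst_host,
    pool=dst_host.parent.resourcePool,                 # root resource pool of the target cluster/host
    datastore=dst_ds,
    folder=dst_si.content.rootFolder.childEntity[0].vmFolder,   # VM folder of the first datacenter
    deviceChange=[nic_change],
    service=service)

WaitForTask(vm.RelocateVM_Task(spec=spec,
                               priority=vim.VirtualMachine.MovePriority.defaultPriority))
Disconnect(src_si)
Disconnect(dst_si)
```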
LONG DISTANCE VMOTION
Long Distance vMotion is an extension of Cross vCenter vMotion, targeted at environments where vCenter Servers are spread across large geographic distances and where the latency across sites is 100 ms or less. Although spread across a long distance, all the standard vMotion guarantees are honored.
This does not require VVOLs to work; a VMFS or NFS datastore will also work.
Use Cases:
- Migrate VMs across physical servers that are spread across a large geographic distance without interruption to applications
- Perform a permanent migration of VMs to another datacenter.
- Migrate VMs to another site to avoid imminent disaster.
- Distribute VMs across sites to balance system load.
- Follow-the-sun support.
Requirements:
- The requirements for Long Distance vMotion are the same as for Cross vCenter vMotion, with two additions: the maximum latency between the source and destination sites must be 100 ms or less, and there must be at least 250 Mbps of available bandwidth.
- To stress the point: The VM network will need to be a stretched L2 because the IP of the guest OS will not change. If the destination portgroup is not in the same L2 domain as the source, you will lose network connectivity to the guest OS. This means in some topologies, such as metro or cross-continental, you will need a stretched L2 technology in place. The stretched L2 technologies are not specified. Any technology that can present the L2 network to the vSphere hosts will work, because it’s unknown to ESX how the physical network is configured. Some examples of technologies that would work are VXLAN, NSX L2 Gateway Services, or GIF/GRE tunnels.
- There is no defined maximum distance that will be supported as long as the network meets these requirements. Your mileage may vary, but you are eventually constrained by the laws of physics.
- The vMotion network can now also be configured to operate over an L3 connection.
NETWORK I/O CONTROL V3
Network I/O Control Version 3 allows administrators or service providers to reserve or guarantee bandwidth to a vNIC in a virtual machine or at a higher level the Distributed Port Group.
This ensures that virtual machines or tenants in a multi-tenant environment do not impact the SLA of other virtual machines or tenants sharing the same upstream links.
Use Cases:
- Allows private or public cloud administrators to guarantee bandwidth to business units or tenants. --> This is done at the VDS port group level.
- Allows vSphere administrators to guarantee bandwidth to mission critical virtual machines. --> This is done at the VM vNIC level; a short sketch follows below.
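To make the vNIC-level guarantee concrete, below is a small pyVmomi sketch that reserves bandwidth on a VM's first vNIC. The vCenter address, VM name, and the 1 Gbit/s value are placeholder assumptions; as I understand the 6.0 API, the reservation is configured through the adapter's resourceAllocation property (values in Mbit/s), but verify the field names against the API reference for your version.
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details and VM name.
ctx = ssl._create_unverified_context()                 # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db-01")
view.DestroyView()

# Reserve 1 Gbit/s for the VM's first vNIC (reservation and limit are in Mbit/s).
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.resourceAllocation = vim.vm.device.VirtualEthernetCard.ResourceAllocation(
    reservation=1000,                                  # guaranteed bandwidth in Mbit/s
    limit=-1,                                          # -1 means unlimited
    share=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=50))  # shares only apply if level=custom

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                                    device=nic)])
WaitForTask(vm.ReconfigVM_Task(spec=spec))
Disconnect(si)
```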
WHAT'S NEW IN VSPHERE 6.0 STORAGE & AVAILABILITY
At a high level, vSphere 6.0 introduces a broad set of new Storage & Availability features.
You will find more details on some of the features below.
VMWARE VIRTUAL VOLUMES
VVOLS changes the way storage is architected and consumed. Using external arrays without VVOLS, typically the LUN is the unit of both capacity and policy. In other words, you create LUNs with fixed capacity and fixed data services. Then, VMs are assigned to LUNs based on their data service needs. This can result in problems when a LUN with a certain data service runs out of capacity, while other LUNs still have plenty of room to spare. The effect of this is that typically admins overprovision their storage arrays, just to be on the safe side.
With VVOLS, it is totally different. Each VM is assigned its own storage policy, and all VMs use storage from the same common pool. Storage architects need only provision for the total capacity of all VMs, without worrying about different buckets with different policies. Moreover, the policy of a VM can be changed, and this doesn’t require that it be moved to a different LUN.
VVOLS - VASA PROVIDER
The VASA Provider is the component that exposes the storage services which a VVOLS array can provide. It also understands VASA APIs for operations such as the creation of virtual volume files. It can be thought of as the "control plane" element of VVOLS. A VASA provider can be implemented in the firmware of an array, or it can be in a separate VM that runs on the cluster which is accessing the VVOLS storage (e.g., as part of the array's management server virtual appliance).
VVOLS - STORAGE CONTAINER (SC)
A storage container is a logical construct for grouping Virtual Volumes. It is set up by the storage admin, and the capacity of the container can be defined. As mentioned before, VVOLS allows you to separate capacity management from policy management. Containers provide the ability to isolate or partition storage according to whatever need or requirement you may have. If you don’t want to have any partitioning, you could simply have one storage container for the entire array. The maximum number of containers depends upon the particular array model.
VVOLS - STORAGE POLICY-BASED MANAGEMENT
Instead of being based on static, per-LUN assignment, storage policies with VVOLS are managed through the Storage Policy-Based Management framework of vSphere. This framework uses the VASA APIs to query the storage array about what data services it offers, and then exposes them to vSphere as capabilities. These capabilities can then be grouped together into rules and rulesets, which are then assigned to VMs when they get deployed. When configuring the array, the storage admin can choose which capabilities to expose or not expose to vSphere.
To get more detailed information on VVOLS consider taking HOL-SDC-1429 - Virtual Volumes (VVOLS) Setup and Enablement.
VSPHERE 6.0 FAULT TOLERANCE
The benefits of Fault Tolerance are:
- Protect mission critical, high performance applications regardless of OS
- Continuous availability - Zero downtime, zero data loss for infrastructure failures
- Fully automated response
The new version of Fault Tolerance greatly expands the use cases for FT to approximately 90% of workloads with these new features:
- Enhanced virtual disk support - Now supports any disk format (thin, thick or EZT)
- Now supports hot configure of FT - No longer required to turn off VM to enable FT
- Greatly increased FT host compatibility - If you can vMotion a VM between hosts you can use FT
The new technology used by FT is called Fast Checkpointing and is basically a heavily modified version of an xvMotion (cross-vCenter vMotion) that never ends and executes many more checkpoints (multiple/sec).
FT logging (traffic between the hosts where the primary and secondary are running) is very bandwidth intensive and will use a dedicated 10 Gb NIC on each host. This isn't required, but it is highly recommended, because at a minimum an FT-protected VM will use more bandwidth. If FT doesn't get the bandwidth it needs, the impact is that the protected VM will run slower.
VSPHERE FT 6.0 NEW CAPABILITIES
DRS is supported for initial placement of VMs only.
BACKING UP FT VMS
FT VMs can now be backed up using standard backup software, the same as all other VMs (FT VMs could always be backed up using agents). They are backed up using snapshots through VADP.
Snapshots of FT VMs are not user-configurable – users can't take snapshots manually. They are only supported as part of VADP.
AVAILABILITY - VSPHERE REPLICATION
The following features are new in vSphere Replication (VR) 6.0:
- Compression can be enabled when configuring replication for a VM. It is disabled by default.
- Updates are compressed at source (vSphere host) and stay compressed until written to storage. This does cost some CPU cycles on source host (compress) and target storage host (decompress).
- Uses the FastLZ compression library. FastLZ provides a nice balance between performance, compression ratio, and limited (CPU) overhead.
- Typical compression ratio is 1.7 to 1
Best results are achieved when using vSphere 6.0 at both source and target, along with vSphere Replication (VR) 6.0 appliance(s). Other configurations are supported - for example, if the source is vSphere 6.0 and the target is vSphere 5.5, the vSphere Replication Server (VRS) must decompress packets internally (costing VR appliance CPU cycles) before writing to storage.
- With VR 6.0, VR traffic can be isolated from other vSphere host traffic.
- At source, a NIC can be specified for VR traffic. NIOC can be used to control replication bandwidth utilization.
- At target, VR appliances can have multiple vmnics with separate IP addresses to separate incoming replication traffic, management traffic, and NFC traffic to target host(s).
- At target, NIC can be specified for incoming NFC traffic that will be written to storage.
- The user must, of course, set up the appropriate network configuration (vSwitches, VLANs, etc.) to separate traffic into isolated, controllable flows.
VMware Tools in vSphere 6.0 includes a “freeze/thaw” mechanism for quiescing certain Linux distributions at the file system level for improved recovery reliability. See the vSphere documentation for specifics on supported Linux distributions.
Consider taking HOL-SDC-1405 Module 2 to explore VR 6.0 in more detail.