In the blog post it indicates that you can install the Windows Feature RSAT-Clustering-CmdInterface which “Includes the deprecated cluster.exe command-line tool for Failover Clustering. This tool has been replaced by the Failover Clustering module for Windows PowerShell.”
Well, I already had the Failover Clustering module for Windows PowerShell installed, and I couldn’t figure out how to make this second node a possible owner of the CSV, so I installed RSAT-Clustering-CmdInterface so I could use cluster.exe.
Once this was installed I was able to use cluster.exe to add my second node as a possible owner of “csv_a1”. Then I could drain the Node, and finally reboot it which was what I was trying to do before running into this issue.
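For reference, the commands I ran looked roughly like this (the resource name “csv_a1” is from my environment, and the node name is a placeholder for your second node):

```
cluster.exe resource "csv_a1" /listowners
cluster.exe resource "csv_a1" /addowner:NODE2
```

The first command lists the current possible owners so you can confirm which node is missing before adding it.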
Hopefully this helps someone else running Windows Server 2012 R2 Hyper-V Failover Clustering who hits this issue and is also getting errors when trying to use cluster.exe.
I was having trouble renaming my single Ethernet adapter because I had added and removed the NIC a few times during VM testing. The name of the NIC in Network and Sharing Center was “Ethernet 2” and I wanted it to be named “Ethernet”. I like things neat.
When I tried to rename the adapter I got an error stating that there was already an adapter with that name. I knew I only had one virtual NIC attached to this VM so I knew it had to be a leftover somewhere.
I tried to use PowerShell to rename the adapter but had no luck – it also indicated that “Ethernet” was already in use.
I did a search on the registry for “Ethernet” and after some digging found what I was looking for:
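Adapter connection names live in the registry under the network class key, one GUID subkey per adapter, each with a Connection\Name value. A sketch of how to list them with PowerShell (back up the registry before deleting anything; the leftover “Ethernet” entry is the one to remove):

```powershell
# Friendly adapter names are stored under this well-known network class GUID
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Network\{4D36E972-E325-11CE-BFC1-08002BE10318}'
Get-ChildItem $key | ForEach-Object {
    # Each adapter GUID subkey has a Connection subkey holding the Name value
    Get-ItemProperty -Path "$($_.PSPath)\Connection" -Name Name -ErrorAction SilentlyContinue
} | Select-Object Name, PSPath
```

Once the stale entry is gone, the live adapter can be renamed with `Rename-NetAdapter -Name "Ethernet 2" -NewName "Ethernet"`.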
I was trying to do some maintenance on my Hyper-V 2012 R2 Failover Cluster and I was unable to drain one of the nodes in order to install Windows Updates.
An error occurred pausing node 'RDU-HV01'
Error Code: 0x80071748
"The requested operation can not be completed because a resource has locked status"
In Hyper-V Manager the VM was stuck in a “Backing Up” status, and this was after I manually Live Migrated all other VMs to my second node.
When trying to manually Live Migrate this VM I was prompted to override the locked resources and try again… and just like any good System Administrator I saw an opportunity to try and force something to work, while potentially producing an extremely horrible outcome, so I naturally clicked “Yes”. YOLO.
But it still failed with an error!
Failed to Live migrate virtual machine 'VM_name'
The action 'Move' did not complete.
Error Code: 0x80070057
The parameter is incorrect
I restarted the VM but that had no effect so I shut it down.
Now that the VM was shut down I could restart the node, which I did from a command prompt using the shutdown command. However, the node would not restart – it was stuck somewhere in the shutdown process. I could still see it in Server Manager, and when I ran systeminfo from the command prompt the System Boot Time told me that it had not restarted yet.

Since I was doing this remotely, I could not go into the server room to shut down the server, and I had no OoB management configured, so I had to do a little digging. I found that others with this issue were able to fix it by restarting the Hyper-V Virtual Machine Management service. I tried stopping this service (vmms is the Service Name) from Server Manager on my Windows 8.1 laptop, but it did not seem to work.
I then opened a command prompt to try using SC.exe to stop or restart the vmms service. By the time I figured out the correct syntax, I noticed that the node had just gone down for a restart. Maybe it timed out, or maybe my command from Server Manager just took a minute to go through.
The correct syntax would have been:
sc \\rdu-hv01 query vmms
sc \\rdu-hv01 stop vmms
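The same thing can be done from PowerShell. A sketch, assuming admin rights on the node and PowerShell 4.0 era cmdlets (node name is from my environment):

```powershell
# Get-Service -ComputerName returns a ServiceController we can act on remotely
$svc = Get-Service -Name vmms -ComputerName rdu-hv01
$svc.Status      # check current state before touching it
$svc.Stop()      # then $svc.Start() once it reports Stopped
```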
The VM which was stuck in the “Backing up…” state was automatically moved to my second node and the first node restarted itself. The VM which was stuck started properly on the second node and the status for “Backing up…” was no longer showing.
Once the first node came back up from its restart I was able to Pause and Drain Roles to go on with my maintenance.
If this happened again I would suggest shutting down the VM which is stuck in the “Backing up…” status. Then Live Migrate everything else (don’t forget your storage!) so that the only thing on this node is the VM that is off. I would then attempt to restart the vmms service. If that does not work, restart the node.
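As a rough sketch of that suggested order of operations in PowerShell (VM and node names here are hypothetical; the vmms restart runs on the stuck node itself):

```powershell
# 1. Shut down the VM stuck in "Backing up..."
Stop-VM -Name "StuckVM" -ComputerName RDU-HV01

# 2. Live migrate everything else off the node
Get-ClusterNode -Name RDU-HV01 | Get-ClusterGroup |
    Move-ClusterVirtualMachineRole -Node RDU-HV02

# 3. On the node: restart the Hyper-V Virtual Machine Management service
Restart-Service -Name vmms
```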
In going through the motions of upgrading our Hyper-V cluster from 2008 R2 to 2012 R2, I had originally started to deploy a Hyper-V 2012 cluster. While learning more about 2012 R2, I realized that there is no real way to upgrade a Hyper-V cluster, so I would need to burn down our 2012 cluster completely in order to use that hardware to create a 2012 R2 cluster. I wanted the new functionality of 2012 R2, and had not migrated more than a couple of VMs to the 2012 cluster, so I evicted one node from the 2012 cluster and installed 2012 R2. The few VMs on the 2012 cluster were now living on its single remaining node.
Once I had the new node (a Dell PowerEdge R620 with 128 GB of RAM) running Server 2012 R2 Core Edition, I performed the initial setup of configuring the server properties with sconfig, configuring network settings using PowerShell, joining the server to the domain, running Windows Updates, installing Corefig, installing EMC software such as PowerPath and the Navisphere Agent, and a few other things to prepare the server for deployment.
I even created my new 2012 R2 cluster at this point, even though it was not needed quite yet since there was only one node running with Server 2012 R2.
After everything was ready for deployment, I created a test VM running Server 2012 R2. Since we run Server Core edition I used a 2012 R2 VM in my 2008 R2 Failover Cluster to manage the new node, using Hyper-V Manager to create the VM. Once the test VM was ready to be sent into “test production” I closed the Console connection and used Remote Desktop Connection to log on to my new VM.
I noticed that the performance of the VM via RDP was very slow. Even my RDP sessions to a remote site were better than my RDP session to this test VM which was in the server room at the main office (which was where I was). Doing a simple test by pinging the server came back with poor results. Pinging the node on which this VM was running via the management interface was fine – all response times were between <1ms and 1ms.
The storage network (connectivity to my SAN via two 1 Gb NICs using EMC PowerPath over iSCSI) was performing fine. Ping was normal and data transfer speeds between the test VM and the SAN matched those between the node and the SAN, as well as those from my 2012 cluster VMs to the same SAN.
Something was obviously wrong, but what?
The first thing I tried was to make another VM from scratch and see if it had the same results when in a RDP session. The outcome was the same – poor performance.
I thought it might be a settings issue, so I compared all of the settings related to networking with my Server 2012 node which was the exact same hardware. The only difference was that it was running Server 2012 and my new node was running Server 2012 R2. I compared settings of VMs themselves, settings of the Virtual Switch attached to these VMs, and the NIC Teaming settings on the nodes. Only one setting was different and it was the “Load balancing mode” of the NIC team dedicated to Cluster traffic (all VM traffic). I changed this to match, but it had no effect.
I figured something might be wrong that I can’t see via the GUI, so I recreated all of the virtual networking components that were tied to this machine. Since this node was so new, there was no production system running on it and I was able to do this outside of an official maintenance window. I deleted the Virtual Switch and destroyed the NIC Team. I then rebuilt the networking and attempted a test – the same problem was occurring.
Every experienced IT Pro has been in this situation before. You have something going wrong and you’ve almost run out of ideas. But on the bright side, you’re probably going to learn something new…
Like I said, I was almost out of ideas.
My next troubleshooting steps included thinking about the physical components. I thought maybe a LAN cable was bad. I was going to test this by trying new cables, but I wanted to try something else first before getting physical.
After doing more research on NIC Teaming with Windows Server 2012 R2 and learning more about Teaming mode and Load balancing mode, I destroyed the NIC Team and recreated it once more for good measure. I noticed that when I recreated the NIC Team it took some time for the second NIC in the team to become Active. Whether or not this observation had any merit, it got me thinking on the right track:
Before I went down the road of troubleshooting drivers I wanted to try the test I had in mind, which was segregating the NICs and testing them individually. If it was a bad cable, I would be able to tell which one (if only one and not both) was having problems.
So I destroyed the NIC team again and assigned the NICs static IP addresses. I didn’t need to assign static IPs to run my test because DHCP was working, but I wanted to reinforce some PowerShell learning. I opted to give out static IP addresses and also disable the interfaces from registering with DNS.
I don’t want these interfaces registering in DNS because they will be used for Cluster traffic only; I will not be allowing the Host OS to use the network adapter (a Hyper-V Virtual Switch setting which I will disable). If the host registers these interfaces in DNS I could run into issues, so I opt to remove the DNS registration.
My saved PowerShell code for setting a static IP address:
#call network adapter by name
$netadapter = Get-NetAdapter -Name "name of NIC"
#disable DHCP on this network adapter
$netadapter | Set-NetIPInterface -Dhcp Disabled
#set IPv4 address, subnet mask, type
$netadapter | New-NetIPAddress -AddressFamily IPv4 -IPAddress 192.168.1.100 -PrefixLength 24 -Type Unicast
Then with help from this thread on TechNet I was able to prepare a script to disable DNS registration. I know how to do it with netsh:
netsh interface ipv4 set dnsservers name="name of NIC" source=static address=172.20.1.5 register=none
but I wanted to do it with PowerShell.
#get the adapter by name (NetConnectionID), then its related configuration object,
#since the DNS registration properties live on Win32_NetworkAdapterConfiguration
$na = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'name of NIC'"
$config = $na.GetRelated('Win32_NetworkAdapterConfiguration')
#display current settings for DNS registration
$config | Select-Object DomainDNSRegistrationEnabled, FullDNSRegistrationEnabled
#disable DNS registration
$config.SetDynamicDNSRegistration($false, $false)
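Worth noting: on 2012 R2 the DnsClient module offers a simpler, WMI-free way to get the same result, which might look like:

```powershell
# Turn off DNS registration for this connection directly
Set-DnsClient -InterfaceAlias "name of NIC" -RegisterThisConnection:$false
```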
Now that I had my static IP addresses set, I did the ping test to each static IP.
They both came back perfect. The results were <1ms to 1ms for both endpoints. This cemented my belief that it was something to do with the NIC Team and/or the driver.
This led me to believe that the NIC Team might have functionality issues, native NIC Teaming being a fairly new feature in Windows Server. I decided to update the drivers of my network adapters to see if this would resolve the issue. Since Server 2012 R2 was still fairly new, I figured my Broadcom NICs probably needed the latest OEM driver rather than the one that Windows Server 2012 R2 installed on its own.
After installing the Device Management PowerShell cmdlets and trying to figure out how to get the information I wanted, I resorted to using Corefig. I was already about two hours in just trying to figure out what NICs I had in the server. I had contemplated changing over to GUI mode in order to run Device Manager, but I really did not want to have to go that far.
By using Corefig, I was able to view the “System Information” and look at hardware components to find information about the network adapters and what driver they were currently using.
Notes from Beyond The Post: Little did I know I could have opened System Information by typing “msinfo32” in the command prompt.
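Another GUI-free option I later came across: the Win32_PnPSignedDriver WMI class exposes driver versions directly, so something like this might have saved me those two hours:

```powershell
# List network-class devices with their installed driver version and date
Get-WmiObject Win32_PnPSignedDriver |
    Where-Object { $_.DeviceClass -eq 'NET' } |
    Select-Object DeviceName, DriverVersion, DriverDate
```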
I scrolled down to “Windows 2012-R2 (x64)” and saw that the latest driver version is 18.104.22.168
The driver listed in “System Information” was old – version 22.214.171.124
Copy/Paste output from “System Information”:
Name  Broadcom NetXtreme Gigabit Ethernet
Adapter Type Ethernet 802.3
Product Type Broadcom NetXtreme Gigabit Ethernet
PNP Device ID PCI\VEN_14E4&DEV_165F&SUBSYS_1F5B1028&REV_00\000090B11C1DBA1D00
Last Reset 2/18/2014 10:16 AM
Service Name b57nd60a
IP Address Not Available
IP Subnet Not Available
Default IP Gateway Not Available
DHCP Enabled No
DHCP Server Not Available
DHCP Lease Expires Not Available
DHCP Lease Obtained Not Available
MAC Address 90:B1:1C:1D:BA:1D
Memory Address 0xD91A0000-0xD91AFFFF
Memory Address 0xD91B0000-0xD91BFFFF
Memory Address 0xD91C0000-0xD91CFFFF
IRQ Channel IRQ 4294967266
IRQ Channel IRQ 4294967242
Driver c:\windows\system32\drivers\b57nd60a.sys (126.96.36.199, 444.20 KB (454,864 bytes), 8/1/2013 8:34 PM)
So I downloaded the new driver and put it on the node in the C:\Drivers folder
I knew that I would get disconnected since I was doing all of this remotely, but if I wasn’t able to reconnect to the node I would just walk into the server room and hop on the server with our KVM in the rack.
pnputil -i -a c:\drivers\broadcom_win_b57_x64\b57nd60a.inf
Yep, I got disconnected. But I knew my session would reconnect after the driver update completed (as long as things went well).
And it did!
Now the exciting part – did this work to fix the network performance??
Seriously. I was excited. This is the fun part of my job that I really enjoy. I quickly went to my Server 2012 R2 VM to manage the node remotely in order to build the NIC Team as quickly as possible. I used Server Manager on this management VM to launch “Configure NIC Teaming” and build my NIC Team. This time, I made my Load Balancing setting “Dynamic” after learning more about that setting.
According to section 3.4.3 of this guide (emphasis my own):
3.4.3 Switch Independent configuration / Dynamic distribution
This configuration will distribute the load based on the TCP Ports address hash as modified by the Dynamic load balancing algorithm. The Dynamic load balancing algorithm will redistribute flows to optimize team member bandwidth utilization so individual flow transmissions may move from one active team member to another. The algorithm takes into account the small possibility that redistributing traffic could cause out-of-order delivery of packets so it takes steps to minimize that possibility.
The receive side, however, will look identical to Hyper-V Port distribution. Each Hyper-V switch port’s traffic, whether bound for a virtual NIC in a VM (vmNIC) or a virtual NIC in the host (vNIC), will see all its inbound traffic arriving on a single NIC. This mode is best used for teaming in both native and Hyper-V environments except when:
a) Teaming is being performed in a VM,
b) Switch dependent teaming (e.g., LACP) is required by policy, or
c) Operation of a two-member Active/Standby team is required by policy.
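The team itself can also be rebuilt from PowerShell rather than the Server Manager UI; a sketch with placeholder team and member names:

```powershell
# Switch Independent teaming with Dynamic load balancing, per section 3.4.3
New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```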
Once the NIC Team was built I went to Hyper-V Manager and opened the Virtual Switch Manager for the node in question. I then created my Virtual Switch that would carry VM traffic.
Now that the Virtual Switch was created I added it to the VM itself and clicked Start.
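Those last two steps can likewise be scripted; a sketch with hypothetical switch, team, and VM names (AllowManagementOS is set to $false per the earlier decision to keep the Host OS off this adapter):

```powershell
# Create the virtual switch on top of the NIC team, host OS excluded
New-VMSwitch -Name "VM Traffic" -NetAdapterName "ClusterTeam" -AllowManagementOS $false

# Attach the VM's network adapter to the new switch and start it
Connect-VMNetworkAdapter -VMName "TestVM" -SwitchName "VM Traffic"
Start-VM -Name "TestVM"
```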
Once the machine was online and accessible, I did a ping test just as I had done a long time ago (at this point it had been more time than I care to admit!) and SUCCESS! The pings were all between <1ms and 1ms!
Updating the Broadcom drivers to the latest version for Server 2012 R2 was the solution to my issue. I could not be happier to resolve this, as now I could go full steam into migrating our VM infrastructure from Hyper-V 2008 R2 to Hyper-V 2012 R2.
Just to verify everything, I used “System Information” again to look at the drivers post-update:
The Driver path here technically applies to the NIC above that is not seen, but the NIC is the same as the one that is shown (it is listed 8 times in “System Information” because I have two 4-port Broadcom NICs).
As always if you see any way I could have improved this process or have anything to add, please leave a comment below!
I found a thread on TechNet about the issue and the OP (original poster) replied saying that it was a bug all along and it had been patched.
Well as I wrote on the TechNet thread, I am still having this issue on a fully patched 2012 R2 Standard server. I was able to work around it by using the local Administrator account to assign permissions, rather than using an account in “Domain Admins”.
This bug does not seem to affect permissions when using the folders, as I am able to create/modify/etc.; it is only an issue when setting the permissions on the folder.
Here was my folder when logged on as user1, a member of Domain Admins:
As you can see, Domain Admins have “Full control” of this folder and should be able to set any permissions needed. But I kept getting the error in the screenshot at the beginning of this post.
After reading the thread I found on TechNet, I logged on as the local Administrator, went to the folder in question, and added the group I wanted to have access. I then made those permissions propagate to all subfolders and it went quickly and without error. So it works as local Administrator but not as a Domain Admin.
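If you prefer the command line for that step, the same grant-and-propagate can be done with icacls while logged on as the local Administrator (path and group name below are placeholders):

```
icacls "D:\Data\SomeFolder" /grant "DOMAIN\SomeGroup:(OI)(CI)F" /T
```

(OI)(CI)F grants Full control with object and container inheritance, and /T applies it to all subfolders and files.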
From what I can tell the issue is that Windows Server 2012 R2 cannot recognize that user1 has “Full control” of the folder because user1 is not listed explicitly in the ACL. Even though user1 is a member of “Domain Admins” who are in the ACL, it does not matter.
This seems like a bug to me, but at least there is a fairly easy workaround.