I received a setuproot failed error when trying to boot a custom 3.7 kernel on CentOS 5.9. It turns out CentOS 5 is getting long in the tooth and needs a hand with its kernel settings.
Anyway, this is the error I saw in VMware:
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults..
error mounting... etc etc etc
To fix, configure your kernel with:
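The config snippet itself is missing from the post as archived. The fix usually cited for this setuproot error on CentOS 5 with a 3.x kernel is enabling the deprecated sysfs layout that CentOS 5's mkinitrd/nash depends on; treat this as an assumption, not the author's recovered snippet:

```
# Assumed fix (original snippet lost): CentOS 5's mkinitrd/nash expects
# the old sysfs layout, which newer kernels gate behind these options.
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
```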
I upgraded an ESXi 5 host to 5.1 and received the following error after the upgrade was finished and I had tried to VMotion to the upgraded server:
The VM failed to resume on the destination during early power on. (This happened at 65%)
I read articles on setting a new heap size... FAIL. Making sure the NFS share names were correct and not a mix of upper/lower case... FAIL.
It turns out I had been using the server I upgraded to try to run ESXi 5.1 in nested mode, which does not work without an EPT-capable Intel processor. During my attempts to trick ESXi 5 into running a 64-bit VM (nested mode) I had added the following to /etc/vmware/config:
vhv.allow = "true"
vhv.enabled = "true"
Once I removed these lines and rebooted, VMotions started working again.
Change the Default Retention Policy in Exchange 2010 Hosted mode
List your available retention policies.
# Get-RetentionPolicy -Organization test.the.net.au
List your current Default Policy for the organisation.
# Get-MailboxPlan -Organization test.the.net.au | fl Ret*
There will be a Policy for each mailbox plan.
List your Policy for the mailbox.
# get-mailbox -organization test.the.net.au -Identity wes | fl Ret*
Change the Policy for a mailbox
# get-mailbox -organization test.the.net.au -Identity wes | Set-Mailbox -RetentionPolicy "test.the.net.au\Default Archive and Retention Policy"
Change the Policy for the organisation.
# Get-MailboxPlan -Organization test.the.net.au | Set-MailboxPlan -RetentionPolicy "test.the.net.au\DefaultRetentionPolicy"
# Start-OrganizationUpgrade test.the.net.au
# Complete-OrganizationUpgrade test.the.net.au
I just added a CX-2PDAE0-FD DAE to our old Celerra NS20. I used USM to add the disk shelf and as we were running the Celerra at 2 Gbps (instead of 4 Gbps), everything went fine.
Once the DAE was added, I used Unisphere to connect to the Clariion (CX3-10) side of the Celerra and configured a RAID 5 (4+1) RG on 5 x 300 GB FC drives. I then created two LUNs of the same size on different SPs and added them to the NS20 Storage Group. All normal so far.
When I rescan the Celerra, I get the following:
17716810659: server_2 c16t1l6 skipping unmarked disk with health check error,
CK200074600886 stor_dev=0x0016, RAID5(-1+1), doesn’t match any storage profile
After a bit of fruitless searching, I did the following to fix it:
I rebooted both storage processors on the Clariion side first (one at a time, obviously).
I then rebooted the standby Data Mover, and suddenly the Celerra could see the DAE and the disks properly. I did not reboot the master Data Mover (although I will be doing that just for good measure).
If there is an outage between the SANs, the mirrors may break and go into either a System Fractured or Administratively Fractured state.
This can be viewed in Unisphere by logging into CX4-DC1 and browsing to Replicas -> Mirrors.
Click on the mirror of interest in the top window to see the primary and secondary LUNs in the bottom window. The condition should be "Normal" or "Updating" (updating means it's synchronising). If it says "System Fractured" or "Administratively Fractured", then you have problems, and what you want to do is manually force a synchronisation. Do this by highlighting the broken mirror LUN and clicking Synchronise. If it works, hurrah. If it does not, you will get a vague error. In my case, the error hinted that there was a connectivity issue between the SPs on the two SANs. Remember, MirrorView works on a SAN1 SPA <-> SAN2 SPA path (and the same again for SPB), so a cut in connectivity between the SPs is the most likely reason for a break in the mirror. So, let's check connectivity.
You can use the Unisphere GUI to test, and we will in later steps, but it is helpful to have a Unix/Linux shell at your disposal on the same VLAN as your iSCSI traffic network. You need to use ping to test connectivity. To add to the mix, if you are using jumbo frames (which we are), you need to test those as well.
Test ping with jumbo packets and the don't-fragment flag set: ping -s 8972 -M do 10.3.9.3 -I eth1. This pings 10.3.9.3 from eth1 (on GTO) with a packet size of 8972 (+28 bytes of headers to equal the 9000 MTU) and with fragmentation disabled (-M do). This needs to work. If not, please follow up with your friendly Netops staff; they will need to check connectivity and the MTU setting for EACH interface along the way. This is working now, so it would only break if Netops made a change.
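As a quick sanity check on the size arithmetic above (nothing new, just the numbers restated):

```shell
# 8972-byte ICMP payload + 8-byte ICMP header + 20-byte IP header = 9000,
# which must not exceed the path MTU for the no-fragment ping to succeed.
payload=8972
icmp_header=8
ip_header=20
mtu=$((payload + icmp_header + ip_header))
echo "$mtu"   # 9000
```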
To test with Unisphere, open the SAN you want in the Unisphere drop-down window and click on System. Under SPA Tasks and SPB Tasks there are ping commands. Pick the interface, choose a source and destination IP, and test. These should be successful.
Once you are confident the connectivity is there, confirm the SAN-to-SAN relationship from the System window. Click on "Storage System Connectivity Status", then view "MirrorView Initiators". There should be two initiators, one for SPA and one for SPB, and both should be registered and logged in. Once confirmed, check "Connections Between Storage Systems" under iSCSI Management (still in the System view). This allows you to view and test the connectivity between the SANs. Do so before moving on.
The last thing to test is under Replicas -> Mirrors. Click on "Manage Mirror Connections" in the left-hand menu under "Configuration and Settings". The left-hand side is for iSCSI and the right is for Fibre Channel; we only need to worry about iSCSI. Make sure it says Enabled. If not, disable and re-enable the connection.
Even though it said Enabled, I had to disable and re-enable the connection here to regain connectivity.
Go back to the fractured mirror LUN. Go into its properties and, on the Secondary Image tab, change the recovery method to manual. Try to synchronise; it should now work. If it does not, check whether it says "System Fractured" or "Administratively Fractured". If "System Fractured", turn off the connection as described above, then manually fracture the mirror. This should change "System Fractured" to "Administratively Fractured" (you may need to refresh to see the change). Then restart the mirror connection and manually synchronise.
Change the recovery method back to automatic.
*Update. You can do the following or you can make life easy and use www.teamviewer.com
This is really for me to remember…..
We all know how to do local tunnels with SSH, but this is how to use a remote tunnel to get around two firewalls.
Example: a user on their Mac, PC, whatever, sitting behind their DSL router, has SSH and Remote Desktop/VNC available.
You are at work or at your home behind a firewall.
You have access to an intermediary SSH server available on the Internet for both parties to connect to.
So, for example, a Mac user would enable Remote Desktop under Sharing and then open Terminal and type:
# ssh -R 57000:localhost:5900 email@example.com
This breaks down thus:
-R means create a remote tunnel with a port listening on the remote SSH server
57000 is the random port I chose (over 1024) to use on the remote server. For this to work you have to add "GatewayPorts yes" to your sshd_config.
localhost:5900 = the local port you want the remote user to connect on (VNC)
Once that connection is made, on the other end of the connection, let's say it's a Windows box (with Cygwin for ssh, or PuTTY), you would run:
ssh -L 57000:localhost:57000 firstname.lastname@example.org
This breaks down thus:
-L 57000:localhost:57000 means listen on local port 57000 and forward it to port 57000 on the SSH server (the other end of the remote tunnel).
Once this is connected, on the Windows box, open vncviewer and connect to localhost:57000 and you will connect to the Mac.
I had an issue where one ESX4 server had some slightly tweaked routing. This resulted in the console not working via vCenter; it would error with an MKS error on port 903. MKS is the Mouse, Keyboard, Screen port used by the console. I could also connect to it via 'telnet IPADDR 903', so something weird was happening. After discounting the firewalls, I googled and found a fix.
Here it is:
Here’s one thing you could try:
– ssh to your ESX box
– add the following to /etc/vmware/config:
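The lines themselves were lost from the quote as archived. The workaround commonly cited for MKS/port-903 console errors is the one below; treat it as my assumption, not the recovered original:

```
# Assumed content of the lost snippet - forces console traffic
# through vmauthd's proxy on the host:
vmauthd.server.alwaysProxy = "TRUE"
```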
– try reconnecting
Hopefully that should allow you to work around the problem.
this was taken from:
Isn't it great to be locked into a vendor so that you are reliant on them for parts and support? Isn't it great when their "support" ends up in two days of downtime? Deep breaths.
I just saw this URL for all you Debian users out there... you both know who you are!!