Irrani Dam
Forum Replies Created
-
Hi,
Try the following solution; it may work. You may have received an
email notification from the appliance showing error counters on a
given volume/disk. Or maybe you opened the appliance’s inbox (NMC
‘show inbox’) and noticed the same. Or maybe you noticed non-zero
error counters on the NMV (web GUI) volumes page, or simply executed
the ‘show volume [volume-name] status’ command. In all of those cases,
please note the following:
* If the volume is not in a FAULTED or UNAVAILABLE state, the errors
are recoverable. In other words, ZFS has enough information to (a)
notice the read/write/checksum error, and (b) circumvent it by
presenting the user with the correct data.
* Non-zero read, write, or checksum error counters may indicate that
the corresponding device needs to be replaced, sooner or later. If a
device produces two faults in a row within, say, a period of two
weeks, then statistically there is a much higher probability that it
will produce another fault during the next week of operation than a
device that has never failed so far. This accumulating risk needs to
be addressed.
To clear the recoverable faults on the device, use the NMC
‘clear-errors’ command.
This brings up another question: how do you find NMC commands if you
do not remember them exactly? The answer is easy. Simply run:
nmc$ help keyword clear
or
nmc$ help keyword error
In both of those cases the result will include:
* setup volume [volume-name] clear-errors
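For example, assuming a volume named ‘vol1’ (a hypothetical name;
substitute your own), you would clear the counters and then confirm
they are back to zero:
nmc$ setup volume vol1 clear-errors
nmc$ show volume vol1 status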
regards,
Irrani
-
Irrani Dam
November 24, 2009 at 2:01 am in reply to: Check disk message on IP-SAN volumes which are presented to windows
A CHKDSK command may be used to restore the file system inside the
IP-SAN volume. The only concern might be a volume that is enabled for
snapshots: if the CHKDSK command causes a large number of writes to
the file system tables, this might artificially fill the snapspace for
the volume. You can restart the snapshots, free up space by deleting
most of the current snapshots, or simply run the CHKDSK command and
let the Storage Concentrator delete the oldest snapshots to create
space in the snapspace volume. Please try this; it may work. A sketch
of the repair command is below.
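Assuming the IP-SAN volume is mounted in Windows as drive E: (a
hypothetical letter), a full repair pass from an elevated command
prompt would look something like:
chkdsk E: /F /R
The /F switch fixes file system errors; /R additionally scans for bad
sectors, which generates many more writes and is therefore the variant
most likely to consume snapspace.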
-
Hi,
It could be due to a frame size (MTU) mismatch between the root switch
and the host NIC ports. By the way, have there been any recent changes
to the network setup?
Thanks
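P.S. Assuming a Linux host configured for 9000-byte jumbo frames and a
storage address of 192.168.10.50 (both hypothetical values), a
non-fragmenting ping exposes a smaller MTU anywhere along the path:
ping -M do -s 8972 192.168.10.50
The 8972-byte payload plus 28 bytes of IP/ICMP headers adds up to
exactly 9000 bytes, so the ping fails instead of silently fragmenting
if any device in the path uses a smaller frame size. (The Windows
equivalent is: ping -f -l 8972 192.168.10.50)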
-
1. Normally the fastest SCSI device should be the first one on the cable, and it should have the first SCSI ID. You can usually get into the SCSI configuration during boot-up by pressing the keys indicated on the screen. Go into the BIOS of the LSI SCSI card, make sure the drive shows up there, and make sure all the parameters in the SCSI BIOS are correct. You can use as many floppies as needed: if it takes one to start DOS and load the SCSI controller driver and a second to run the Ghost program, then do that. The HD will not show up in the motherboard BIOS; it will appear simply as a “SCSI device”. Another note: check that the jumpers on the HD are set correctly. In the past I remember having to set the spin-delay jumper for the OS to recognize the HD; it would show in the SCSI BIOS but not in the OS.
2. Check the terminator, and try to insert one drive at a time.
-
I guess it is caused by ESX multipathing. If ESX has an issue connecting to one target portal and it happens to be using the round-robin load-balancing policy, you will see this error. Virtual machines using the affected datastore may appear unresponsive; however, since there are still working paths to the LUN, the VMs are not actually affected. You can check the load-balancing policy on ESX and switch to the Fixed policy, which makes the paths active/passive. If you have multiple LUNs, you can force ESX to use a different active path for each LUN, as sketched below.
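For example, on ESX 4.x the path selection policy can be changed from the service console (a sketch only; the syntax differs between ESX versions, and the naa.* device identifier below is hypothetical):
esxcli nmp device list (shows each device and its current path selection policy)
esxcli nmp device setpolicy --device naa.60060160abcd1234 --psp VMW_PSP_FIXED
After switching a LUN to Fixed, you can pick a different preferred path for each LUN in the vSphere Client to spread I/O across the available paths.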
-
Hello,
The storage device path (vmhba35:C1:T0:L7) mentioned in the earlier
example contains several potential failure points:
• vmhba35 – HBA (Host Bus Adapter)
• C1 – Channel
• T0 – Target (Storage processor port)
• L7 – LUN (Logical Unit Number or Disk Unit)
To determine the actual failure or to eliminate possible issues:
1. Identify the available storage paths to the reported storage device
by running the esxcfg-mpath -l command (example commands for steps 1-3
are sketched after this list).
2. Perform a rescan and check whether it restores visibility to the
targets.
3. Determine whether the connectivity issue is with the iSCSI storage
or the fiber storage. Perform one of the following depending on what
your connectivity issue is:
o To troubleshoot the connectivity to the iSCSI storage using the
software initiator:
a. Check whether a ping to the storage array fails from ESX.
b. Check whether a vmkping to each network portal of the storage
array fails.
c. Check that the initiator is registered on the array. Contact your
storage vendor for instructions on this procedure.
d. Check that the following physical hardware is correctly
functioning:
▪ Ethernet switch
▪ Ethernet cables between the switch and the ESX host
▪ Ethernet cables between the switch and the storage array
o To troubleshoot the connectivity to the fiber attached storage,
check the following:
a. The fiber switch zoning configuration permits the ESX host to see
the storage array. Consult your switch vendor if you require
assistance.
b. The fiber switch propagates RSCN messages to the ESX hosts
4. Check the physical hardware for the following:
o The storage processors on the array.
o The fiber switch and the Gigabit Interface Converter (GBIC) units
in the switch.
o The fiber cables between the fiber switch and the array.
o The array itself.
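For steps 1-3, the commands look like this (a sketch only; the adapter
name is taken from the example path above, and the array address
10.10.10.20 is hypothetical):
esxcfg-mpath -l (step 1: list all paths to the storage device)
esxcfg-rescan vmhba35 (step 2: rescan the adapter)
ping 10.10.10.20 (step 3a: service console ping to the array)
vmkping 10.10.10.20 (step 3b: ping through the VMkernel interface)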
Important: A rescan is required after any change is made to see if the
targets are detected.
kind regards,
Irrani
-
Hi,
1. The problem is that DFS-R is only supported in Windows Server 2003 R2 and later. For DFS-R, you can create a replication health report via the MMC snap-in for DFS-R; it is an action you can run on the replication set (a command-line alternative is sketched at the end of this reply). SBS 2003 does not support DFS-R, not even SBS 2003 R2, which was, confusingly, not based on Win2k3 R2. The best way to synchronize the files is to use Offline Files.
2. Have a look at the network connections and make sure everything is up. Check the router configuration to confirm it has no disabled ports; that might cause the problems. Do check it and post back.
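On the DFS-R side (assuming Windows Server 2003 R2 or later, per point 1; the group, folder, and server names below are hypothetical), the dfsrdiag tool that ships with DFS Replication can also report the replication backlog from the command line:
dfsrdiag backlog /rgname:MyReplGroup /rfname:MyFolder /smem:SRV01 /rmem:SRV02
A backlog that keeps growing between the two members usually points to connectivity or schedule problems rather than to DFS-R itself.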
-
Irrani Dam
October 27, 2009 at 11:43 pm in reply to: SCVM SAN migration/iSCSI NPIV on Windows Storage Server 2008
Ah, right, I see where the confusion comes from then – I was referring to copying entire (shut-down) VMs, from a template on the SAN for instance, rather than live migrations of running VMs where only RAM is involved. Essentially, I am looking for a way to not copy the VHD over the LAN but to have this handled within the SAN.
I’d hope the next step would be for Intel to come up with a CPU feature where RAM gets copied straight from one server to another in hardware to live migrate, perhaps over a dedicated interconnect – that ought to beat any other method 😉
So yes, I was hoping someone from MS would come back and tell me how to get NPIV to work on Storage Server, so I can have a disk-to-disk copy when deploying a new VM in SCVMM R2, rather than have it copy via LAN each time.
-
Irrani Dam
October 27, 2009 at 11:42 pm in reply to: Inconsistent Windows virtual machine performance when disks are located on SAN datastores
Hi Duf,
Inconsistent Windows virtual machine performance when disks are located on SAN datastores
Details
Windows virtual machines may experience intermittent issues when stored on datastores presented from non-local storage. This issue may be encountered on virtual machines that use SAN, NFS, or iSCSI storage. These issues may include:
• Bluescreen errors
• Event ID: 9 messages in the Event Viewer
• This error reported in the guest operating system: The device, \Device\ScsiPort0, did not respond within the timeout period
• Virtual machine becomes unresponsive, halts, or is inaccessible from the console
Solution
Windows guest operating systems that are using virtual disks on non-local datastores might experience unexpected blue screens.
This issue occurs when the responses from the storage array take longer than the guest operating system expects to wait. The default disk timeout period in Windows is too short to handle the longer delays that can occur in a SAN, NFS, or iSCSI environment, and a blue screen error is the result of exceeding this timeout.
To resolve this issue, increase the disk timeout to 60 seconds in the Windows virtual machines by editing the Windows registry. To increase the disk timeout value (a reg.exe equivalent is sketched after these steps):
1. In the registry, go to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk.
2. Click Edit > Add Value.
3. Set the value name to TimeOutValue.
4. Set the data type to REG_DWORD.
5. Set the data to 0x3C hexadecimal (60 decimal).
6. Reboot the virtual machine.
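Equivalently (a sketch, if you prefer the command line), the same value can be set with reg.exe from an elevated command prompt inside the guest, followed by a reboot:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
Here /d 60 is the decimal equivalent of the 0x3C value in step 5.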
Note:
• Contact your Storage vendor to confirm whether a specific TimeOutValue setting has been identified for your particular environment.
• Increasing this disk timeout setting does not affect the performance of the guest operating system or virtual machine under normal operating conditions, but you must verify how the applications you are running in the guest operating system handle disk access delays.
Cheers,
Irrani