Hi Friends,
Today I tested the ISILON InsightIQ appliance to get LIVE performance data from my ISILON cluster. The good thing about this tool is its ease of use and quick deployment!
You can also run some stress tests on your ISILON and check the load with InsightIQ. Have a look at the picture below, which shows the main dashboard of your ISILON cluster with LIVE performance details.
Thursday, June 5, 2014
My article on calculating IOPS... the much-awaited one!
Larger disks would give you capacity you don't need, and faster disks would provide performance above and beyond what was requested. That may be fine, depending on how confident you are in the performance requirements.
For random I/O:
RAID10: write penalty = 2, read = 1; available space = number of disks divided by 2
RAID5: write penalty = 4, read = 1; available space = number of disks minus 1 disk
RAID6: write penalty = 6, read = 1; available space = number of disks minus 2 disks
Always count all the drives involved; the write penalty already accounts for the extra mirror/parity writes.
An app does 1000 IOPS with a read/write ratio of 3/1, i.e. three times as many reads as writes. Those 1000 IOPS break down into 750 reads and 250 writes.
Backend IOPS (see the sketch right after this list):
RAID10: 750 + 2 x 250 = 1250; 1250/180 ≈ 7 for 15k drives, so at least 8 drives (RAID10 needs an even number), or 1250/130 ≈ 10, so at least 10 10k drives
RAID5: 750 + 4 x 250 = 1750; 1750/180 ≈ 10, so at least 10 15k drives, or 1750/130 ≈ 14, so at least 14 10k drives
RAID6: 750 + 6 x 250 = 2250; 2250/180 ≈ 13, so at least 13 15k drives, or 2250/130 ≈ 18, so at least 18 10k drives
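To make the arithmetic explicit, here is a minimal Python sketch of the same sizing calculation. It only uses the numbers from the example above (180 IOPS per 15k drive, 130 per 10k drive, 1000 front-end IOPS at a 3/1 read/write ratio); treat it as a back-of-the-envelope helper, not a vendor sizing tool.

import math

def backend_iops(reads, writes, write_penalty):
    # Reads hit the backend 1:1; each front-end write costs write_penalty backend I/Os.
    return reads + write_penalty * writes

def drives_needed(backend, iops_per_drive):
    # Round up - you cannot buy a fraction of a drive.
    # (For RAID10, round up further to an even drive count.)
    return math.ceil(backend / iops_per_drive)

reads, writes = 750, 250  # 1000 front-end IOPS, read/write ratio 3/1
for raid, penalty in (("RAID10", 2), ("RAID5", 4), ("RAID6", 6)):
    be = backend_iops(reads, writes, penalty)
    print(f"{raid}: {be} backend IOPS -> "
          f"{drives_needed(be, 180)} x 15k drives or {drives_needed(be, 130)} x 10k drives")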
Not digested yet? Let me put it this way...
As for the IOPS per drive, here is what is commonly used as an industry rule of thumb:
FC 10k: 150 IOPS
FC 15k: 200 IOPS
SSD: 400 IOPS
SATA: 80 IOPS
Flash: 3500 IOPS
SAS 15k: 180 IOPS
NL-SAS: 90 IOPS
These are just rules of thumb used to size environments.
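As a quick illustration of how those rules of thumb feed into sizing, here is a short Python sketch that applies them to the same 1000 IOPS, 3/1 read/write workload from the example above. The figures in the dictionary are just the rule-of-thumb values listed here, not guaranteed drive specs.

import math

RULE_OF_THUMB_IOPS = {
    "FC 10k": 150, "FC 15k": 200, "SSD": 400, "SATA": 80,
    "Flash": 3500, "SAS 15k": 180, "NL-SAS": 90,
}
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

reads, writes = 750, 250  # the 1000 IOPS, 3/1 read/write example

for raid, penalty in WRITE_PENALTY.items():
    backend = reads + penalty * writes
    sizes = ", ".join(f"{math.ceil(backend / iops)} x {drive}"
                      for drive, iops in RULE_OF_THUMB_IOPS.items())
    print(f"{raid} ({backend} backend IOPS): {sizes}")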
Wednesday, June 4, 2014
IBM XIV: Phasing Out and Phasing In a component
When a part fails in an XIV system, it is marked as phased out.
Phasing Out:
The component_phaseout command instructs the system to stop using a component, which can be a disk, a module, a switch, or a UPS.
For disks, the system starts copying the disk's data so that the system remains redundant even without this disk. The disk's state after the command is Phasing Out.
The same process applies to data modules: the system starts copying all the data in the module so that it remains redundant even without the module. A data module phase-out causes a phase-out of all the disks in that module.
For UPSs and switches, the system configures itself to work without the component. There is no phase-out for power supplies, SFPs or batteries.
Phasing out a module or a disk is not permitted if it would leave the system non-redundant. The component must be in either the OK or the Phasing In state.
Once the phase-out process completes, the component's state is either Fail or Ready, depending on the markasfailed argument: if yes, the phased-out component is marked as failed (so it can be replaced); if no, it is left in the Ready state.
component_phaseout component=ComponentId [ markasfailed=<yes|no> ]
Phasing In:
This command instructs the system to phase in a component. Components are used by the system immediately. For disk and data modules, a process for copying data to the components (redistribution) begins. Components must be in Ready or Phasing Out states. There is no phase-in for power supplies, SFPs or batteries.
component_phasein component=ComponentId
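As a quick usage example, phasing a suspect disk out and (after replacement, or once it checks out) back in would look like the lines below. The component ID here is a made-up disk ID for illustration only; use the actual ID your system reports (for example via component_list).

component_phaseout component=1:Disk:3:7 markasfailed=yes
component_phasein component=1:Disk:3:7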
VMAX: Newly created datadevs not immediately available for allocations?
There is a background process (PHCO/IVTOC) that runs on the new devices, and until that process finishes they are not available for allocation.
There is a fix (#68779) available for 5876.268.174 that gives PHCO high priority over normal IVTOC.
PHCO is mainly a security feature introduced in 5876.229: it makes the ucode run a scan on all newly added TDATs to check whether they are degraded, for example due to a disk failure in the RAID group. The DAs scan the devices, and once the scan is complete the PHCO flag is cleared and the devices become eligible for new extent allocations.
To be honest, the scan takes some time. There is a way to disable it (a Senior PSE is needed for this), but the PSE lab usually discourages doing so. However, after applying the fix mentioned above, PHCO is given very high priority, so the scan should take less time.
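If you want to keep an eye on the new devices while the scan runs, Solutions Enabler can list the data devices and their pool state with something like the line below (1234 is a placeholder SID, and exactly what is shown depends on your Enabler and ucode level):

symdev -sid 1234 list -datadev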
Hope this helps!