All of this is hypothetical.
General Math - I have a server that houses a SQL database. The database resides on a RAID 10 array made up of 4 15k RPM SAS drives. I want to figure out the total available IOPS for this array. For the sake of this example I'm going to assume that each drive can deliver 150 IOPS. Would my total IOPS for this array be 450 (factoring in the RAID 10 write penalty)?
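For what it's worth, the commonly cited sizing formula makes effective IOPS depend on the read/write mix rather than being a single number: effective = raw / (read% + write% × penalty), with a write penalty of 2 for RAID 10. A minimal sketch using the numbers above (4 drives, 150 IOPS each; the function name and the 50/50 mix are my own illustration, not from the question):

```python
# Sketch of the commonly cited effective-IOPS formula, not a definitive
# sizing tool. Assumes 150 IOPS per drive and a RAID 10 write penalty of 2.

def effective_iops(drives, iops_per_drive, write_penalty, read_pct):
    """Front-end IOPS the array can sustain for a given read/write mix."""
    raw = drives * iops_per_drive          # total back-end IOPS
    write_pct = 1.0 - read_pct
    # Each front-end write costs `write_penalty` back-end I/Os.
    return raw / (read_pct + write_pct * write_penalty)

print(4 * 150)                             # 600 raw back-end IOPS
print(effective_iops(4, 150, 2, 1.0))      # 100% read: 600.0
print(effective_iops(4, 150, 2, 0.5))      # 50/50 mix: 400.0
```

By this formula there is no single "total IOPS with penalty" figure; a pure-read workload sees the full 600, and heavier write mixes see progressively less.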
Planning - I plan on running an application that uses SQL for data storage. There are apparently 4 different criteria I need to know to plan appropriately: RAID level, total IOPS needed, read workload, and write workload. Without having the application set up or a system to benchmark on, how can I come up with the workload requirements for this application?
Performance Measuring - I walk into a business that would like an overall audit performed on their existing equipment. They have a server set up as a print and file server whose IOPS I want to measure, since their needs have grown since that server was put in. Array stats would be 4 10k RPM drives (assume 100 IOPS per drive) in a RAID 5 configuration. That should give me 400 IOPS available for the array. I monitor the disk I/O and come up with 60 reads/sec and 95 writes/sec. This gives me an average of 77.5 IOPS, rounded to 78, at roughly 40% reads/60% writes. Do the math: ((78 x .4) + (78 x .6)) * 4 = 312 IOPS on this array, which is within the available IOPS I calculated for it.
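Another common way to sanity-check measured numbers like these is to convert the front-end workload into back-end IOPS directly: each read costs one disk I/O and each write costs the write penalty (4 is the figure usually quoted for RAID 5). A sketch using the 60 reads/sec and 95 writes/sec measured above (the function name is my own; note this accounting differs from averaging the read and write rates):

```python
# Back-end load implied by a measured front-end workload, assuming the
# commonly cited RAID 5 write penalty of 4. Rates are from the example above.

def backend_iops(reads_per_sec, writes_per_sec, write_penalty):
    """Back-end IOPS the disks must absorb for a measured front-end load."""
    return reads_per_sec + writes_per_sec * write_penalty

raw_capacity = 4 * 100              # four 10k drives at ~100 IOPS each
load = backend_iops(60, 95, 4)      # 60 + 95 * 4
print(load)                         # 440 back-end IOPS
print(load <= raw_capacity)         # False
```

Under this accounting the write-heavy mix pushes the back-end load past the raw 400 IOPS, which is exactly why the choice of formula matters here.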
Does all of this sound correct or am I way off base?
Edit: Sorry for the delay. Had to sit down and come up with a way to write out my confusion.