Everything posted by sonic_agamemnon

  1. Another requirements change is forcing a motherboard replacement and a graphics card switch as well. The server is morphing into a server/workstation hybrid, since a need for occasional modeling, rendering and video editing has been added. Therefore, the small Radeon passive video card will not hack it, so a TBD workstation adapter (probably from the Quadro family) will replace it. Also, the SuperMicro server motherboard does not provide a full 16x PCIE slot, so it will be exchanged for the ASUS Z9PA-D8 mainboard with a Pike RAID adapter.
  2. It is a good thing the HAF XB is basically a test bench, because requirements and hardware keep changing even before the server has been deployed! Due to an increase in expected scope/workload, both Xeon E5-2620 processors have been returned for an even bigger punch: two Xeon E5-2670 2.6GHz processors with 3.3GHz Turbo Boost and 20MB L3 Cache. The number of developers is expected to increase from two to four later this year, a fifth VM needs to be spun up to support a CPU-intensive search engine (Solr), and the server has also been nominated for initial load-testing and baseline benchmark duty. An increase in both thread count and clock rate was deemed appropriate to maintain good performance. Compared to the E5-2620, the E5-2670 provides eight cores instead of six. The 2670 also clocks at 2.6GHz instead of 2GHz, with a temporary boost up to 3.3GHz instead of 2.5GHz. The 2670 also provides 5GB of additional L3 cache, 20MB instead of 15MB. However, the 2670 does consume more power, 115W instead of 95W.
  3. Although clearances between both Arctic Freezers and the memory slots are good, there is not enough wiggle room to replace every stick of DDR3 memory with the CPU heat-pipes mounted, especially the two slots nearest each CPU socket. Unfortunately, both coolers had to be removed in order to install all eight sticks of 8GB Samsung memory. This also required removal of the prior Arctic Silver 5 thermal compound, cleaning all the surfaces and repeating the mounting process after installing all 64GB of server memory.
  4. My primary area of concern with this smaller HAF XB footprint is temperature levels, especially in the bottom chamber. The top chamber is designed very well, especially for air cooling, and I expect good results with high-performance Noctua case fans all around. However, the lower area is very cramped and the only active exhaust comes from two 80mm fans. At least they are positioned directly in the path of the internal drive cage. I wouldn't be as concerned if the drives were SSDs; I've never used these Seagate hybrid drives before, so I'm not sure how hot they will run. I did check the power requirements, and the Seagate drives actually draw about the same power as pure SSDs and definitely less power than regular disk drives.
  5. One of the finest aspects of the HAF XB is its flexibility and total accessibility, a direct byproduct of its test-bench design origin. Need to remove the top or both sides? No problem. Need to remove the motherboard? No problem. On the other hand, once the guts are exposed, the main impression is a nasty reminder of just how cramped the quarters are. Space is tight, making cable management even tougher than normal. In order to cleanly install all four Seagate hybrid drives, the Enermax power supply was nudged halfway out for better clearance and easier access behind the 2.5" drive cage. Thankfully, cable management in the lower compartment is complete at this point; hopefully, both 80mm Noctua fans will perform well and provide sufficient airflow in the bottom chamber.
  6. "I thought those cards generate both 3.0s and 2.0s? But in order to utilise the 3.0s, you'd need to connect a separate USB 3.0 header onto the board (those that come with cases these days)." I selected these cards because all ports are active, both external ports and the 20-pin internal connection. One internal connection supports the HAF XB case USB 3 inputs on the front panel, and the other connection is for the multi-card reader in the upper-front 5.25" bay. One downside to this board is the need for extra power (4-pin) to drive all the external/internal connections, but I'd rather deal with that than some other USB 3 adapters which force you to decide whether to use the external or internal connections, since both cannot be active at the same time.
  7. The slowest boat from China has finally arrived with two USB v3 PCIE adapters that provide a total of 4 external ports and two 20-pin internal jacks to satisfy all on-board USB 3 requirements. Unfortunately, the SuperMicro mainboard only provides USB v2 ports.
  8. 64GB of replacement DRAM arrived today: eight sticks of Samsung 8GB DDR3 1600 MHz ECC Registered Server Memory (M393B1K70DH0-CK0). The two Kingston 16GB DRAM modules were exchanged for this Samsung memory, since the Kingston sticks were not on the SuperMicro compatibility list. Also, further investigation indicated the Xeon 2600 series memory controller operates at maximum bandwidth only when four channels are populated. Moreover, moving from 32GB to 64GB allows each UNIX VM more room for larger JVM heaps, and should allow all four VMs to remain alive and responsive even while video tasks are running as well.
  9. "Why do people do this? Not trying to pick on you, but I see a lot of people do this when displaying new parts for their build. (quoted from Wikipedia)" Why? Because they don't know (like me) that the shipping bag is so very dangerous....
  10. "With the work it sounds like you will be doing, you do not want to lose any data. With six drives, the chances of one failing are high. Not to mention you will be putting them in a RAID 0 configuration, which effectively doubles the chances of data loss from drive failure." Data is backed up to remote storage over the network overnight, meaning the main risk is losing a day's worth of work if a drive fails prior to the nightly back-up. This is an acceptable trade-off for me, since this is a local development sandbox server (not production). I'd rather have the speed and accept the risk. It would be great if there were room and budget for a much larger box with lots of mirrored drives and less need for frequent network back-ups, but such is not the case...
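The "effectively doubles" intuition above can be made concrete with a little probability arithmetic. A minimal sketch -- the per-drive annual failure rate below is a placeholder assumption, not a measured figure for any of these drives:

```python
# Probability that a RAID-0 array loses data within a year:
# the stripe set fails if ANY member drive fails.
# The AFR is an assumed placeholder, not a manufacturer spec.
afr = 0.03  # assumed 3% annual failure rate per drive

def array_failure_probability(n_drives, afr):
    """P(at least one of n independent drives fails) = 1 - (1 - afr)^n."""
    return 1 - (1 - afr) ** n_drives

single = array_failure_probability(1, afr)
raid0_pair = array_failure_probability(2, afr)
print(f"single drive:     {single:.4f}")      # 0.0300
print(f"two-drive RAID-0: {raid0_pair:.4f}")  # 0.0591 -- roughly double
```

For small failure rates, 1 - (1 - p)^n is approximately n*p, which is exactly the "doubles the chances" rule of thumb for a two-drive stripe.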
  11. The HAF XB case has been disassembled, including the removal of both stock fans in front; all five Noctua case fans were installed thereafter. The motherboard tray was removed next and the SuperMicro mainboard was mounted. Two of eight motherboard mount points did not align with available, pre-drilled holes on the Cooler Master tray, so a trusty Dremel was deployed to drill out both missing holes. Installing the Xeon processors was a snap thanks to a very clean LGA 2011 socket design. The "Arctic Squeeze Play" worked according to plan, with both heat sinks in perfect back-to-back alignment. Clearances are good between the cooler and the top of each server DRAM module. Stock Arctic Freezer fans were replaced with Noctua Focused Flow hardware, offering better cooling performance and higher top-end CFM numbers at an equivalent noise level. I am very eager to see how the push-pull configuration performs under load. The Samsung 840 SSDs were installed in the front 3.5" bays, ready to be configured in a RAID-0 boot configuration. The LG Blu-Ray burner and AFT all-in-one media reader were also installed in the front 5.25" bays, completing all front-side setup work.
  12. After a rather extended delivery process, the Seagate hardware has finally arrived. It will be fun stuffing the entire internal HAF XB drive cage with 3TB of relatively quick RAID-0 storage. However, the two USB3 adapters must be on a slow boat from China (literally), since they are both still sitting in San Francisco, obviously mired in a US Postal twilight zone... A number of factors led to a rather "momentous" decision, and the numbers provided by this StorageReview.com analysis swayed me away from my habit of relying on traditional WD RE4 disks for data RAID sets: http://www.storagereview.com/seagate...t_750gb_review
  1. Cost Effective Storage: Pricing has dropped for the Momentus, which is available now for $120 with 750GB of capacity, providing a huge cost-per-gigabyte advantage over SSD hardware.
  2. Higher Capacity: The latest Momentus drive offers 50 percent more capacity than its prior version, at 750GB, and 50 percent more storage than most SSDs today-- unless your budget allows $3K for a 1TB SSD.
  3. Good Performance: Scaling two Seagate Momentus 750GB XTs in RAID-0 just about doubles single-drive performance, delivering nearly 240 MB/s of sequential read throughput. This matches the sequential performance I see using WD RE4 drives in RAID-0, although neither drive is anywhere near the 980 MB/s capability of two Samsung SSDs in RAID-0.
  4. Low Power: Power consumption is lower than most SSDs and traditional disk drives, peaking at 3.7 watts during sequential reads, compared to 9+ watts for the WD RE4 2TB drive and 4.3 watts for the Samsung 840 SSD.
  5. Small Size: The Momentus is a 2.5" drive, an important factor in smaller cases like the Cooler Master HAF XB, which ships with only an internal 2.5" drive cage, thereby ruling out all larger 3.5" drives. Using four Momentus XT disks maximizes limited internal capacity, providing 3TB of cage storage.
  6. Nice Warranty: Seagate is offering a 5-year warranty, something rather rare in the SSD marketplace.
Another advantage of the hybrid design: whenever the high-density 8GB flash chip eventually does fail, the Momentus remains operational as a traditional 7K RPM hard drive, unlike SSD hardware that degrades and ultimately fails altogether.
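The cost-per-gigabyte argument is easy to check with the $120/750GB figure quoted above. A quick sketch -- the SSD price here is an illustrative era figure I've assumed for comparison, not a number from the review:

```python
# Cost-per-gigabyte comparison.
# Momentus price/capacity are from the post; the SSD figures are an
# assumed, illustrative price point for a 250GB SSD of the era.
momentus_price, momentus_gb = 120.0, 750
ssd_price, ssd_gb = 170.0, 250  # assumption, not a quoted spec

momentus_per_gb = momentus_price / momentus_gb
ssd_per_gb = ssd_price / ssd_gb
print(f"Momentus XT: ${momentus_per_gb:.3f}/GB")  # $0.160/GB
print(f"SSD:         ${ssd_per_gb:.3f}/GB")       # $0.680/GB
```

Even with SSD prices falling, a hybrid drive at roughly a quarter of the per-gigabyte cost makes the four-drive, 3TB cage fill-out far more affordable than an all-SSD equivalent.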
  13. Why? It's simple: space is limited, and I need to maximize thread count while minimizing floor space. There are only two SSDs configured in a 500GB boot array, which I don't think amounts to having too many SSDs. The Momentus disks from Seagate are hybrid drives with the bulk of the capacity provided by two traditional disk platters, with 8GB of flash to cache the most frequently accessed files. I really don't consider these disks to be SSDs, and their performance lies somewhere between an SSD and a typical 7K RPM SATA disk drive. Regarding the HAF XB case and storage capacity, there are eight bays in total: two 3.5" bays, two 5.25" bays and four internal 2.5" bays, and that offers plenty of storage capacity in my opinion, at least for my purposes-- 4TB.
  14. You're absolutely right: 16GB wouldn't be enough for sure, but my original spec was 32GB, 2 sticks of Kingston 16GB server memory. However, even 32GB is pushing it if I need to have all four UNIX VMs running and the Adobe capture/encoding tasks running as well. Moreover, after reviewing the Sandy Bridge architecture, I realized the latest Xeon memory controller needs all four channels populated to attain maximum bandwidth, so I returned both Kingston 16GB sticks and decided to fully populate all eight channels with 8GB Samsung server memory sticks for 64GB total available memory. Thread/CPU saturation should be encountered before available memory presents an issue, even when every VM and video task is running simultaneously.
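The memory math above can be sketched as a rough budget. Every allocation below is an assumption I've made for illustration -- the post doesn't specify per-VM memory settings -- but it shows why 32GB was tight and 64GB leaves headroom:

```python
# Rough memory budget for the 64GB build.
# All per-workload allocations are assumed for illustration,
# not the author's actual VirtualBox settings.
total_gb = 64
budget = {
    "Windows 7 host + Adobe CS6": 16,
    "Web Service VM (Tomcat)": 8,
    "Web Application VM (Tomcat/Apache)": 8,
    "Cache/Queue VM (MongoDB)": 8,
    "Database VM (Oracle)": 12,
    "Search VM (Solr, planned)": 8,
}
allocated = sum(budget.values())
print(f"allocated: {allocated}GB of {total_gb}GB")
print(f"headroom:  {total_gb - allocated}GB")
```

Under these assumed numbers the same workload would oversubscribe a 32GB box badly, which matches the reasoning for returning the two 16GB Kingston sticks.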
  15. Unboxing: Most of the parts have been delivered, although four Seagate Momentus 750GB hybrid drives and two PCIE USB 3 adapters are still in the delivery pipeline. The build can proceed, however, since those missing parts do not prevent the system from bootstrapping.
  16. This is a small form factor server designed to support engineering sandbox development and post-production video capture/encoding work. The build features a Sandy Bridge-EP standard ATX mainboard with dual Xeon E5-2600 LGA 2011 sockets. A small Cooler Master HAF XB case will be used in an air-cooled configuration with Noctua fans.
Software: Windows 7 Professional will run Adobe Creative Suite CS6 for background video capture, encoding and batch image processing. Four CentOS UNIX virtual machines will host the following server software:
  1. Web Service VM: Tomcat server running Spring RESTful web services
  2. Web Application VM: Tomcat and Apache servers running web applications
  3. Cache/Queue VM: MongoDB server with data cache and queue collections
  4. Database VM: Oracle database server
Hardware:
  Cooler Master HAF XB ATX Computer Case
  Enermax Galaxy EVO 1250W ATX12V 80 Plus Bronze Power Supply
  SuperMicro Dual Xeon LGA 2011 DDR3 1600 ATX Server Motherboard MBD-X9DRL-IF-O
  2 x Intel Xeon E5-2620 6-Core 2.0GHz Sandy Bridge-EP Processor, 2.5GHz Turbo Boost, 15MB L3 Cache
  64GB: 8 x Samsung 8GB DDR3 1600MHz ECC Registered Server Memory M393B1K70DH0-CK0
  HIS Radeon HD 6450 2GB 64-bit DDR3 PCIE x1 HDCP Low Profile Passive Video Card
  2 x Sedna PCIE 4-Port USB 3.0 Adapter (2 x External, 2 x Internal)
  Creative Audigy 2 ZS High Definition 7.1 Surround PCI Sound Adapter
  Boot Array: 2 x Samsung 840 Series 250GB Solid State Drive (SSD) in RAID-0 (500GB)
  1.5TB VM Array A: 2 x Seagate Momentus XT 750GB SATA 6.0Gb/s Solid State Hybrid in RAID-0
  1.5TB VM Array B: 2 x Seagate Momentus XT 750GB SATA 6.0Gb/s Solid State Hybrid in RAID-0
  LG Blu-Ray Burner SATA 14X BD-R 2X BD-RE 16X DVD+R 5X DVD-RAM 12X BD-ROM 4MB Cache
  AFT PRO-57U All-in-one USB 3.0 5.25" Media Card Reader
  2 x Noctua NF-A14 ULN 140x140x25mm Fan, 800/650 RPM, SSO2 Bearing (front fans)
  Noctua NF-P12 PWM 120mm SSO2 Bearing (upper-rear fan)
  2 x Noctua NF-R8 80mm Case Fan (lower-rear fans)
  BitFenix Spectre 200mm Case Fan (top fan)
  2 x Arctic Freezer i30 CPU Cooler, Four Direct Contact Heat Pipes
  2 x Noctua NF-F12 PWM 120mm 2-Speed Focused Flow Fan, 1500/1200 RPM, SSO2 Bearing (CPU fans)
Configuration: Somehow, the SuperMicro X9DRL-IF squeezes two LGA 2011 sockets, eight RDIMM DDR3 memory slots and six PCIE expansion slots onto a standard ATX form factor motherboard! Initially, the server will provide 24 threads across 12 cores using two Xeon E5-2620 six-core processors with 64GB of ECC DDR3 server memory. Eventually, the processors will be upgraded to a faster eight-core model when pricing drops. The processors will be fitted with two Arctic Freezer i30 CPU heat sinks, replacing the stock fans with two Noctua NF-F12 PWM 120mm Focused Flow fans. A passively cooled Radeon HD 6450 in the first PCIE slot provides basic video support at 2560x1600. Two Sedna 4-Port USB 3.0 PCIE adapters (4 x external, 2 x internal 20-pin) will provide plenty of USB v3 connectivity. In front, for a slight increase in CFM along with a notable decrease in noise levels, the two stock 120mm Cooler Master fans will be replaced with two Noctua NF-A14 ULN 140mm fans. For media input, a fast LG Blu-Ray burner and AFT all-in-one USB 3 reader will be installed in the front 5.25" bays. Both 3.5" hot-swap bays will contain Samsung 840 Series 250GB solid state drives in RAID-0 (500GB). Windows 7 will boot from this Samsung RAID-0 array. VirtualBox, Adobe Creative Suite CS6 and related video processing applications will be installed as well, including large scratch and temporary file areas. Internally, encoding space and all four UNIX virtual machines will reside in two 1.5TB RAID-0 arrays, with the image files periodically backed up to external network storage. Each array consists of two Seagate Momentus XT 750GB SATA 6.0Gb/s Solid State Hybrid drives. In back, one Noctua NF-P12 120mm PWM fan will be installed in the upper position, and two Noctua NF-R8 80mm fans will be placed in the lower rear location.
Network connectivity is provided by SuperMicro via two Intel 82574L Gigabit Ethernet RJ45 ports. The expansion slots expose HDMI and Dual-link DVI video ports on the HIS Radeon HD 6450, along with four external USB 3.0 ports from the two Sedna PCIE adapters. Power is provided by a very quiet Enermax Galaxy EVO 1250W power supply. On top, a BitFenix Spectre 200mm case fan will be fitted to help exhaust the heat.
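Since the four CentOS guests will run under VirtualBox, each one can be provisioned from the command line. A minimal sketch that only builds the commands, assuming VBoxManage is available -- the VM name, memory size and CPU count are illustrative, not the build's actual settings:

```python
# Build (but don't execute) the VBoxManage commands that would provision
# one CentOS guest. Name, memory and CPU count below are illustrative
# assumptions, not the author's actual configuration.
def provision_commands(name, memory_mb, cpus):
    return [
        ["VBoxManage", "createvm", "--name", name,
         "--ostype", "RedHat_64", "--register"],
        ["VBoxManage", "modifyvm", name,
         "--memory", str(memory_mb), "--cpus", str(cpus)],
    ]

cmds = provision_commands("cache-queue-vm", 8192, 4)
for cmd in cmds:
    print(" ".join(cmd))
```

In practice each command list would be handed to `subprocess.run` once per guest; keeping the commands as data makes it easy to loop over all four VM definitions.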
  17. 3960X CPU @ 4GHz, R7970 GPU @ 1050MHz, R7970 VRAM @ 1500MHz
  18. The camera is a Canon EOS 5D Mark II, and the lens is a Canon EF 50mm F1.2 L USM.
  19. I am thankful this build has progressed steadily and rather smoothly. The workstation is now standing up. Every chosen component arrived safely and performed as expected, with no returns, no RMAs and zero regrets. Even the problematic fan controller in the Cosmos II turned out to be a non-issue with the Enermax case fans. Moreover, the workstation just survived an overnight burn-in at stock speeds, although technically the XFX Black Edition 7970 cards do ship with mild GPU and memory over-clocks. So, how old school am I? Well, how about analog temperature gauges? That's right: she wears glasses! My overall impression is astonishment. The performance delta between my prior workstation and this behemoth is massive. Windows 7 Professional boots in thirteen seconds with stock settings. The Samsung 830 striped set simply rips through all application and scratch storage requests. Launching Photoshop CS6-64 takes place in two eye-blinks, with every window and floating panel appearing in less than two seconds. The Adobe OpenCL graphics engine manipulates dozens of massive RAW images in memory with ease, instantly rendering 3D effects, rotations, etc. Running six cores with 32GB of RAM enables instantaneous switching between several Adobe editing and effects packages (Premiere, Encore, After Effects, etc.) that are all configured at maximum performance settings, even while working with very large video and photo projects. Performance within each application is superb. The Western Digital RE4 striped arrays transfer several hundred GBs of HD footage at nearly twice the throughput of my prior rig, providing uninterrupted editing of two HD timelines, each assigned a dedicated RAID0 set. Final encoding times should drop dramatically using this workstation! Given the traditional air-cooled approach with high-performance components, idle state temperatures are amazingly good across the board. The room temperature is 22C.
Inside the Cosmos II, the main upper chamber is one degree warmer at 23C. The lower storage chamber is a half-degree higher at 22.5C. The XFX Double Dissipation and Ghost air cooling technologies idle both video cards at 30C with a standard 20% fan level, rather astonishing considering both cards ship with over-clocked processors (1000 MHz) and VRAM (1425 MHz). Equally impressive, perhaps even more so, the Noctua NH-D14 heat-pipe and BeQuiet! PWM fan configuration do an amazing job quietly cooling the 3960X, with all six cores idling between 28C and 37C, an average core temperature of exactly 31.8C. Although the BeQuiet! fans provide a 1-db decrease in fan noise, the trade-off is they push about 10% less air compared to the Noctua fans. I suspect if I switched back to the stock Noctua fans the average core temperature would drop by about 10 percent to below 30C.
  20. The Cooler Master Cosmos II Achilles heel is the fan controller, which causes many fan models to tick, sometimes with loud clicking noises whenever fan speeds are changed. I'm beginning to think people who simply switched to a third-party controller took the right path. However, I decided to take Cooler Master's free replacement offer, and after a three-week wait an entirely new head/controller assembly arrived. As a back-up measure, I also decided to go with Enermax manual-control case fans, which override and regulate fan speed locally with a small adjustment switch-box. If the Enermax fans misbehave with this Cooler Master controller, the plan is to keep the Cosmos II controller set on high for maximum power, and then manually regulate each fan down based on fan role/location/performance requirements. It is a shame the case's fan controller is so problematic, given its otherwise excellent design. I probably should have just gone with an entirely different fan controller... Before totally removing the original Cosmos II head/fan controller assembly, I decided to wire up all eight Enermax TB Silence (manual speed control model UCTB12A) 120mm case fans to determine how the manually controlled hardware performs with the problematic fan controller. Several fan makes/models emit clicking or humming noise as speed adjustments are made with the controller. To my surprise, the Enermax fans worked perfectly with the original fan controller. No issues whatsoever moving between low, medium and high speed settings. I soon realized the replacement fan controller Cooler Master sent me recently may not be necessary, given the good results encountered with the original hardware. However, I decided to proceed with moving to the latest fan controller since it should provide even better fan support, and the fan/LED and mainboard cabling is now entirely black, which fits my color preference as well.
I was a bit concerned The God of Irony would strike, and I would discover issues with the latest fan controller and the Enermax fans. Thankfully, I can report both the original and new fan controller work extremely well with Enermax TB Silence manual-control fans.
  21. I'm probably making the wrong call, but I've decided to keep the 7970s despite only partial Adobe support for AMD and OpenCL in CS6. I am gambling Adobe will implement OpenCL in a broader selection of AMD cards later this year. However, the sphincter factor on this situation is about 8.5... Two 7970s is probably overkill, especially since I don't game that much, but when I do I prefer high-end rendering quality at 2560 x 1600 whenever possible. I use an old Apple 30" Cinema HD display, and my standard desktop resolution is 2560 x 1600. I selected XFX because I wanted slightly lower noise and better cooling numbers compared to stock hardware. I didn't want to risk installing after-market cooling myself, so I decided to go with XFX and its Double Dissipation and Ghost air cooling approach. When under heavy gaming load, these cards will be the source of significant heat. To help keep them relatively cool, open space provided by the Cosmos II and MSI mainboard are key factors, allowing plenty of clearance for air flow around/between the CrossFireX configuration. Two rear exhaust slots remain open on either side of both video cards. Also, two 120mm fans mounted on the side case door intake cool air directly into the GPU space. This solution is probably the most that can be done without resorting to liquid. Essentially, the entire build is an experiment to determine what kind of performance can be attained with a purely air-cooled configuration.
  22. The Cosmos II is so cavernous, even Noctua's enormous NH-D14 seems normal in size. Any object placed in this case recedes and appears smaller than it really is. I am especially happy with clearances between the heatpipe towers and the rest of the case, especially the front and rear. Therefore, I decided to add a third BeQuiet! 120mm Shadow Wings PWM fan on the back side of the tower closest to the rear exhaust fan. Noctua (located in Austria) quickly processed my request for extra fan mounting hardware, and shipped it Priority Mail, no charge. I am very impressed! Unfortunately, clearances between the towers and the Dominator SDRAM modules were poor, and I was forced to remove the heat sinks on top of each module. The BeQuiet! fans replace the stock Noctua hardware. Essentially, at full RPM, the Noctua fans are slightly better performers but also slightly louder. I decided to give up a little CFM to gain a little reduction in decibel level. If performance turns out to be an issue, I can always bring back one or both Noctua fans. The Enermax power supply is rather small given its modular design and power rating. Installing the Enermax PSU was a far easier process than I first imagined: remove the rear Cosmos II back-plate, attach it to the power supply, slide it in from the outside, and then attach a very handy security clip. This clip attachment above the power cord adapter is a nice touch, safeguarding against inadvertent pulling of the plug. The roomy lower chamber isolates power and storage components from the rest of the case. The hardware should remain cool with three dedicated 120mm intake fans, along with adequate passive exhaust on the other side and rear of the case. Why no liquid cooling?
I didn't venture into Waterworld for a few reasons:
- My lack of knowledge dealing with water cooling, especially the custom setups
- A general fear my attempt at plumbing would result in damaging very expensive components
- Noctua's simpler heatpipe is easier to maintain and offers very good cooling performance
- No significant performance gain going with all-in-one water cooling solutions from Corsair
- No real need to over-clock anything at this point, given the hardware's strong stock performance
  23. How do I describe MSI's outrageously named, outrageous-looking "Big Bang-XPower II" mainboard? The over-the-top heat sink design deserves an intergalactic name, yet despite the overblown styling this board is otherwise very well thought out. Connection points and headers are in the right places. The over-clocking features and numerous SATA ports are wonderful. The board's larger form factor provides good clearance, especially between the seven expansion slots, thus making it an excellent companion for the Cosmos II. Regardless, I freely admit the whole "military class" theme is sort of creepy, campy, nerdy, etc. What can I say? I can only confess being a sucker for Gatling guns! I don't even own a gun, but I have always been fascinated by them in the movies-- Clint Eastwood, Predator, etc.