
Workstation Build Guide and Tips


ir_cow


Workstation Guide:

 

(Disclaimer: this is my personal opinion, so please do your own research also. This guide is only meant to direct you; it is not gospel. While I believe everything I wrote is true, times change, and programs and companies change. This could become out of date tomorrow.)

 

I started writing this a while ago, then someone posted on the forums about building a workstation for his son, and I ended up PMing him about five pages' worth of information. I didn't want it to go to waste, so I revised it and posted it here. I'm sure spelling errors exist, or a stray "son" is left around. I'll remove them when I find them (or when you do first).

 

 

Introduction:

 

I've been wanting to write a guide for quite some time now, and I feel it's finally time to pass on my collective thoughts in one article. Throughout this editorial you will find little to no data or spreadsheets backing up my findings. This is because the information I've gathered comes from my personal experiences, as well as years of research across forum posts, brochures, articles, and hardware reviews. If I had the money, or if someone was willing to fund me, I could and would be willing to compile first-hand data. If that ever happens, I'm sure this introduction will be re-written. Therefore I suggest you take this information as a starting point for your own research. Hopefully your conclusions will, for the most part, be similar.

 

At the time of writing this I am working towards my BA in Art at the University of Montana. I often find myself surrounded by contemporary artists who believe they're going to enter the adult world with a job waiting for them. I took it upon myself to make sure that, on top of my degree, I venture further outside my direct field and learn more than I've been taught. My definition of an amateur is someone who is still learning the process or program, and/or a hobbyist. You become a professional by doing freelance work or working for a company. In either case, by stating you are a pro you have given the assumption that you have mastered the services you offer. This does not have to be mastery of the software, just of the skills you offer. It is true you can still be a horrible professional, and I've seen it. At the place I currently work (avoiding names) I have sometimes been tasked with teaching new hires the expectations and workflow. Needless to say, one of the requirements is to know the basics of Adobe Photoshop, and it still amazes me how many people didn't know what they were doing. An indication that people will lie in interviews and on applications to land a job.

 

This guide is aimed at amateurs, hobbyists, professionals, and onlookers. I hope to continue to update it on occasion, but the basics and fundamentals generally will not change even as technology pushes forward. I consider myself a freelance photographer and Photoshop user, as well as a hobbyist in a large number of professional programs, from Adobe Dreamweaver for web design to Autodesk Maya for 3D work.

 

 

Definitions / Abbreviations for this article:

 

Pro or Professional = A type of market in which quality matters and cost generally does not

 

Consumer = A type of market aimed at typical computer users and priced competitively. Corners can and will be cut to save money and pass the savings on to the buyer. The consumer market is also used as a guinea pig in many cases to test new products. In many cases, products from this market do not offer the stability of a true workstation.

 

Rig = A personal computer

 

Index:

 

1. What is a Workstation and what makes it different from a Gaming computer?
2. What CPU should you go with (AMD or Intel)?
3. Should I go Dual Socket (2x CPU)?
4. If Intel, should I go with Xeon or not?
5. Video cards: why isn't any old card good?
6. What can a pro card give me that a consumer card never can?
7. Can I SLI GeForce cards and get the same performance as a Quadro?
8. Can I SLI Quadro cards?
9. Can I have a Quadro and a GeForce in the same system? And what are the benefits of a second card?
10. What Programs Use Maximus?
11. What card is best?
12. CUDA or OpenCL?
13. Tips for buying used cards
14. Memory (must be based on everything above)
15. What Monitor should I get, or does it not matter?
16. What is the best hard drive option?
17. Motherboards
18. Performance Upgrades
19. Workstation Builds
20. Programs to their GPU counterpart (Best Match?)
21. What are good benchmark programs to compare?

 

 

1. What is a Workstation and what makes it different from a Gaming computer?

 

A workstation is geared towards work and generally sacrifices performance for stability. In the true sense, a workstation has all of its components catered to perform with a low downtime and failure rate. This teeters on the server realm, in which every second of downtime means money lost. With multiple redundancies like dual power supplies and mirrored hard drives, a server has even fewer points of failure. Servers are unlike personal computers because of their limited interface; they are generally used to crunch, sort, and send data with little to no management. When time has a high value, companies will send the workload to a server for completion, but the start and end of a project still happen on a workstation.

 

At its most basic, a gaming computer is different from a workstation because of the components used in it. A consumer / gaming rig is focused on a high-performance, low-cost objective. In many cases it sacrifices long-term uptime in favor of speed and low cost. Component differences can vary in a large range, but generally it's the video card and redundancy that set the two apart the most. The simplest definition of workstation use is completing tasks for pay, and you can use any computer you like to complete those tasks; it is often a cheap alternative to use a modest gaming rig for amateur / student work.

 

2. What CPU should you go with (AMD or Intel)?

 

It's hard to justify any AMD chip for production work. AMD chips have always been geared towards gaming / light data work, and the tactic they use is: the more (less powerful) small cores, the better the overall performance. In gaming this has failed, because programmers generally only program for 2 cores, so the extra ones are wasted. The way AMD and Intel cores work I can only explain through an analogy, and it will carry us through the rest as well. Bear in mind it is flawed and not strictly correct, but in a sense it's true.

 

Let's imagine our central hub or major outlet of information, in this case a CPU, is a pizza place. They make pizza fast, they deliver fast. At the base, this is true of both companies. CPU cores can be considered drivers delivering pizza. What AMD and Intel have both done is try to expand outwards: instead of having, say, one driver going to many houses, they send many cars to many houses. Fast and effective. The problem is the AMD cars run into many red lights because they drive in town. They may be faster going a few blocks, but across town a single Intel car has completed all its deliveries and returned before the AMD cars have made it to the first house. Intel was able to do this because it avoided all the lights by taking the back roads or the highway to cover longer distances. Unfortunately there is no off-ramp, so the Intel car must go all the way before returning home. Great for long-distance pizza delivery, but bad if it has to make a few pit stops along the way.

 

You multiply your cars by how many cores / threads you have. You may have 8 AMD cars going a few blocks and 6 Intel cars going a few miles; both have their own advantage. AMD is good at small tasks with the quick turnaround that games may require, but Intel is able to carry a large amount of pizzas a longer distance, and faster. In short, Intel is good for large tasks like rendering, encoding movies, and crunching numbers. AMD is good for data servers, where it needs to get information out quickly without much delay. A 6-core Intel chip like the 3930K (which I use myself) is still faster than an AMD 8-core CPU even though it has two fewer cores. In some cases an i5 4-core is faster than an 8-core AMD chip; it really depends on the app. For Photoshop, After Effects, or 3D rendering, get an Intel. It costs more, but you get a lot more out of it. A toy model of the trade-off is sketched below.
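To put rough numbers on the analogy, here is a toy model of my own (an Amdahl's-law-style illustration with made-up speeds, not a benchmark): a fixed workload with a portion that can't be split across cores, run on many slower cores versus fewer faster ones.

```python
# Toy model (illustrative numbers only): total time for a workload where
# serial_fraction of the work cannot be split across cores.
def run_time(work, serial_fraction, cores, core_speed):
    serial = work * serial_fraction / core_speed                 # one core at a time
    parallel = work * (1 - serial_fraction) / (cores * core_speed)
    return serial + parallel

WORK = 1000.0  # arbitrary units

# Hypothetical chips: 8 slow cores vs 6 cores that are each ~1.3x faster.
# A render-style job with a big serial chunk favors the faster cores:
print(run_time(WORK, 0.30, cores=8, core_speed=1.0))   # ~387 units
print(run_time(WORK, 0.30, cores=6, core_speed=1.3))   # ~320 units

# A server-style job of fully independent little tasks favors more cores:
print(run_time(WORK, 0.0, cores=8, core_speed=1.0))    # 125 units
print(run_time(WORK, 0.0, cores=6, core_speed=1.3))    # ~128 units
```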

 

If you're doing work that doesn't need to be rendered out, or a program with CUDA / OpenCL support that is not CPU-bound, an AMD chip is a very cheap alternative to Intel.

 

3. Should I go Dual Socket (2x CPU)?

 

The benefits of 2 CPUs can be tremendous in the right situation. I would say rendering out video and/or large 3D scenes would benefit from a second CPU. Bear in mind many professional apps are starting to offload CPU work onto the video card (covered later). If your program of choice does not, if you need more than 64GB of RAM, or if cost isn't an issue, then it's worth the buy. Otherwise it's a waste of money.

 

4. If Intel, should I go with Xeon or not?

 

Xeon is the workstation CPU, and it does have some benefits, but at a cost. The counterpart to the i7 3930K is $1,600. You can only use 2 CPUs if you get Xeons; normal Intel chips will not work. Xeon's only major advantages are ECC RAM support (I'll explain next) and more L3 cache. The L3 cache isn't as important as it used to be; more L3 generally means more time shaved off a render, but what is 20 hours vs 18? On anything quick, like an hour or so, you won't see any difference whatsoever past about 12MB of L3 (the 3930K has that).

ECC RAM is error-correcting memory. If a memory module keeps producing errors, it can take itself offline so the system doesn't crash. This does not prevent bad programming, because software crashes on its own for reasons that have nothing to do with hardware. It's less likely with pro apps, but I'll give you an example: at one time, if you changed a certain part of a 3D mesh, Maya would freeze and crash. They patched it about 3 months later, but it was still a software issue. Adobe Photoshop right now crashes for me if I don't close my plug-in window before exiting the program (it doesn't affect me much because I'm done at that point). Hardware does have its errors, and they are very common, but the difference is that most errors are corrected before they cause issues. It's when you get repeated errors that you see the major crashes that freeze the computer.

Between auto-saves and constantly saving myself, I don't feel the need to spend $1,000 more to avoid an unlikely problem. Now, if I were rendering out Avatar, or if money IS time, $1,000 wouldn't seem like anything; at that point I would most likely be running a server farm and offloading the work to it. This is actually common practice for large-scale jobs. If you are a freelancer or an amateur you have limited funds, and depending on what you do you may or may not need ECC RAM. For me, I don't, because when you render out, say, movies, you render them as frames and compile them in After Effects or another program like Nuke. If I come across a bad frame, I just go back and render that one frame again. This is also common practice. If you are working on video, ECC can prevent errors that would cause an artifact or two, though generally a low bit-rate will cause artifacts before a CPU error does. Still, it is something to think about.

 

Example of rendering out by frames rather than as a whole video.
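To make the frames-first workflow concrete, here's a minimal sketch (the renderer command and paths are hypothetical stand-ins, not a real CLI): each frame lands in its own file, so a bad frame means re-rendering one file instead of the whole shot.

```python
# Sketch of per-frame rendering (hypothetical "my_renderer" CLI; substitute
# your renderer's batch command, e.g. Maya's command-line Render).
import os
import subprocess

FRAMES = range(1, 241)                    # e.g. a 10-second shot at 24 fps
OUT = "renders/shot01/frame_{:04d}.png"   # one file per frame

def render_frame(n):
    subprocess.run(
        ["my_renderer", "--frame", str(n), "--out", OUT.format(n)],
        check=True,
    )

# Skip frames that already exist: after a crash or a bad frame, delete just
# that one file and re-run the script to fill the gap.
for n in FRAMES:
    if not os.path.exists(OUT.format(n)):
        render_frame(n)
```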

 

 

5. Video cards: why isn't any old card good?

 

You can check out my Quadro 2000 personal review and see the different results. For the most part, a pro card is stress-tested to perform without issues. It used to be that the consumer card was nearly identical to the pro card and you could soft-mod it, up until the GeForce 200 series (the last card being the 8800, soft-modded into a Quadro FX 4600). Nowadays, even though the core is the same per series, the BIOS is different and certain GPU functions are disabled in the consumer cards (enough to make them a poor choice). The high-end Quadro models have ECC VRAM, so when you are doing CUDA or OpenCL work, whatever is offloaded to the video RAM stays error-free. This is one step closer to having a truly stable system and really is only needed if you are putting these cards in a server. Another reason would be if you are using the GPGPU function for work rather than having the CPU do it; in most cases you would be doing the programming yourself and would already know the pros and cons of this.

 

Link to benchmark showing a 5-year-old Quadro is still faster than a GTX 680 and ATI 7970 (SPECview benchmark)

 

6. What can a pro card give me that a consumer card never can?

 

This is a little redundant, but: most pro cards support 10-bit color depth, for billions of color combinations. You would need the correct monitor, with DisplayPort, to see more than the standard 16.7 million. Do you need it? No, but good color accuracy is worth having even at 16.7 million. Second up is stability: the drivers are tested longer, and bugs are fixed with pro apps in mind rather than games. You pay more for that stability, and they will write special drivers for you if you have serious issues. Lastly, ECC RAM in the higher models. This isn't a very big thing, but good to have. The arithmetic behind those color counts is below.
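Where "16.7 million" and "billions" come from, for the curious (simple arithmetic, nothing vendor-specific):

```python
# Colors = (levels per channel) ** 3, with levels = 2 ** bits for R, G, B.
for bits in (8, 10):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels/channel -> {levels ** 3:,} colors")

# 8-bit:  256 levels/channel  -> 16,777,216 colors    ("16.7 million")
# 10-bit: 1024 levels/channel -> 1,073,741,824 colors ("over a billion")
```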

 

Edit: consumer cards and pro cards have different lookup tables for colors. The short version is that each chip has a lookup table for colors. The blue you see (on OCC, say) is a number sent to the card, and the card displays what it has in its database. The problem is that consumer cards aren't quality-controlled in this respect; two cards can produce two different results from the same color (say, a hex value of blue). As you might have guessed already, pro cards do not have this issue: every card has the same lookup table. I won't get into the monitor's DDC, but that can also change the results a great deal, since each monitor has its own lookup table, in a sense.

 

7. Can I SLI GeForce cards and get the same performance as a Quadro?

 

Simply, no. Like I said above, consumer cards run fairly poorly in 3D pro apps. Now, if you are using them for CUDA or OpenCL, two cards might be an alright idea if you are willing to give up the pros of a Quadro card (read 9).

 

8. Can I SLI Quadro cards?

 

You cannot SLI Quadros unless it's something like a Dell or HP workstation (purely marketing), and the benefits are nearly non-existent unless you are working with cutting-edge tech like motion capture, real-time green screen, advanced 3D work, or multiple video edits. Those would benefit more from Maximus (read 9).

 

9. Can I have a Quadro and a GeForce in the same system? And what are the benefits of a second card?

 

Nope, they removed that function once Maximus came out. The reason to have a second card is what Nvidia calls Maximus: it allows your main card, say a Quadro 2000, to perform the tasks it's suited for while offloading all the CUDA or OpenCL work to the second card, in the most cost-effective manner. It once again only helps with previews; the second you want to render something out, like a video in After Effects or a frame from Maya (any app, really), the CPU does that. Maya / 3ds Max have a function called iray which lets you see in real time what your scene will look like without rendering it out. I found it very helpful at times, but overall a feature I could live without.

What I originally had was a Quadro 2000, which was a little underpowered for CUDA work. I opted to get 2x GTX 570s purely for CUDA use. It did crash a lot more than, say, a Maximus setup, but I got two cards for $400 instead of spending $1,600 on a single Tesla 2075 for similar performance. I switched back to a Quadro because, after using V-Ray (very much like iray), I found the instability more annoying than the real-time view was useful. The same setup done "properly" would have cost about $4,500 for a Quadro 6000 and a Tesla 2075, money I do not have. So you see, sometimes you can do more with less. Unfortunately, Nvidia no longer allows GeForce cards in the same computer as a Quadro (most likely to sell the Maximus thing). If you noticed, I didn't talk about AMD cards, because they showed me poor results every time I used them. I suggest you read up and find what works for you.

 

10. What Programs Use Maximus?

 

As far as I can tell, anything that uses PhysX, CUDA, or OpenCL. Certain programs like After Effects can only use a second card if it's a Tesla 2075 (marketing?). I have read that the other Teslas do not work, and that Mercury Playback is disabled if a different card is detected.

 

*Supported cards for the Adobe Mercury Playback Engine: the Tesla 2075 is the only one on the list. At the time of writing, ATI cards have no Mercury Playback support (no OpenCL path).

 

Tesla K20 = GK110 Kepler = no Quadro counterpart (very fast) = GeForce Titan
Tesla K10 = Kepler = Quadro K5000 (the K models are a different line) = GeForce GTX 670?
Tesla 2075 / 2070 = Fermi core = Quadro 6000 = GeForce GTX 480
Tesla C1060 = Quadro FX 5800 = GeForce GTX 280
Tesla 800 series = Quadro FX 4600 = GeForce 8800

 

11. What card is best?

 

This one's a little more tricky. It's very hard to say, because the market is always changing and pro cards are costly. Just like in the consumer market, the names mean very little, but usually a mid-range card from the newest generation is faster than a high-end card from the last. If you don't mind used parts: I got my Quadro FX 4800 on eBay for $200 and it works perfectly fine. In the past I bought cards on Newegg, but new costs a lot of money. For example, the Quadro 4000 is faster than the Quadro FX 5800 and way cheaper. Neither company (AMD or Nvidia) drops prices when the new cards come out, which I still don't understand; all it does is confuse people and waste money.

The Quadro 600, 2000, 4000, 5000, and 6000 are the newest Nvidia cards. Every app is different, so it's almost impossible to say what will be best for you. Try to find benchmarks. I can leave you with a hint: for amateurs in 3D apps, a Quadro 2000 is about all you will ever need, and if you are on a budget, a Quadro FX 3700 can be found for $100 on eBay. Better than a Quadro 600, but expect to replace it once you've mastered the programs. For video work you could go three directions: a pro card, Maximus, or a GeForce (if it's certified). If you are only doing personal work and are tight on cash, I would suggest a GeForce card with lots of VRAM. Otherwise, a Quadro 4000.

 

12. CUDA or OpenCL?

 

Now, like I was saying before about pure computing power (not talking about FPS, but CUDA and OpenCL): both AMD and Nvidia use the many small cores in their video cards to produce lots of small things fast, but they lack the distance. CUDA has been around for a while, and Nvidia cards can do both CUDA and OpenCL. I personally don't like AMD video cards because of the buggy drivers in the past. They may have gotten better, but I really do not know, and I don't want to spend $500 to find out. In any case, when programmed right ("right" IS the keyword), OpenCL has proven to be much faster, and ATI cards take the lead. OpenCL is very new and has only very basic support. In After Effects or Premiere, the video card is used to buffer the video for live playback instead of using the RAM. Neither is used for the final video encoding, because the results can be questionable. So the benefit of a beefy card isn't going to be noticed unless you're editing 4K or 8K video along with the built-in 3D effects. If you're going to integrate 3D (CGI) into shots, then you won't be using the built-in functions anyway. A small code example of the GPGPU model is below.
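For a taste of what "many small cores making many small deliveries" looks like in code, here is a minimal OpenCL vector add (a sketch using the PyOpenCL bindings; it runs on both Nvidia and AMD cards, which is OpenCL's selling point):

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()          # picks a GPU (or CPU) device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Each GPU core handles one tiny element -- one pizza per car.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()
prg.add(queue, a.shape, None, a_g, b_g, out_g)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
assert np.allclose(out, a + b)
```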

 

13. Tips for buying used cards

I generally use eBay and only buy from power sellers. Make sure you get an OEM Nvidia card or a PNY card. Dell and HP branded cards only work with the drivers from THEIR websites; they're the same drivers, but a pain to download, and sometimes they just don't get updated. Nvidia OEM cards usually went to reviewers or were pulled from a Tesla server. I am currently using an Nvidia Quadro FX 4800 OEM. The seller said it was PNY, but for $200 I didn't complain.

 

 

14. Memory (must be based on everything above)

 

When it comes to workstations, the more memory the better. My old computer had 24GB of RAM and this one has 64GB. I have only maxed it out once, but I have used 32GB a few times. If you max out your memory, your system will either crash or run extremely slowly, and that limits what you can do. Since 64GB of RAM is around $300 now, I suggest that route, which means going with a socket 2011 board with 8 DIMM slots. For work, memory speed matters very little: you save maybe 5 seconds with 2100-speed memory vs your standard 1600. You will not notice the difference at all; a few seconds less than an hour is still... basically an hour.

 

If you choose to go the ECC route, speed doesn't matter anyway: you are stuck with 1333 or 1600, and it costs a ton.

 

* Link to Benchmark showing Memory speed has no impact on render times (Scroll to bottom for CineBench 11.5)

 

 

Example of 64GB being used: 160 photos compiled into 20 HDR files for the final panorama. File size 3.8GB, 28380 x 5580 (10' x 2')

webpan.jpg

 

 

webpan2.jpg

*Demonstration only,  low quality for web
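As a back-of-the-envelope check on why a file like that eats RAM (my own arithmetic, not a measurement):

```python
# One uncompressed 16-bit RGB layer at the panorama's final resolution:
width, height = 28380, 5580
channels, bytes_per_channel = 3, 2              # 16-bit RGB

layer_gb = width * height * channels * bytes_per_channel / 1024**3
print(f"one flat layer: {layer_gb:.2f} GB")      # ~0.88 GB

# Hold the 20 HDR tiles (plus masks, undo history, and caches) in memory
# at once and tens of gigabytes disappear quickly:
print(f"20 layers: {20 * layer_gb:.1f} GB")      # ~17.7 GB before overhead
```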

 

 

15. What Monitor should I get, or does it not matter?

 

Well, if you are doing any sort of work that involves needing the colors to be right, then you might want a monitor with very good color accuracy out of the box. Soft-proofing is when you make sure what you have on the screen is as close as it will be to a print; hard-proofing is when you print it (on a pro printer, not your home one) to make sure it is what you want before you put it in for a production run.

You could go a few ways with this. The easiest would be to get any monitor, really. I personally find working across 2 screens much easier and more productive than minimizing windows left and right. I made the www.randomactsofclay.org website for my school by having one monitor displaying the website in Dreamweaver while on the other I had Photoshop or Lightroom open so I could edit images. It was seamless and much faster than in the past. Second, you could get a monitor and calibrate it with something like an X-Rite Eye-One; it's around $100-200 and makes any monitor get closer to the correct colors. Most monitors are limited to something like 79% of the RGB range anyway. The last option, which can be combined with the second, is to get a monitor that is good out of the box, like the Dell UltraSharp PremierColor series, and you can calibrate that as well. I use all three, because I need two monitors and I need to have my colors right for the work I do.

 

16. What is the best hard drive option?

 

I'll leave it up to OCC to help you out here, but I will say that I use an SSD for temp folders. In After Effects this is called a scratch disk: basically, if I exceed my RAM, or if a program needs to cache things, it goes into that folder. It's good practice not to keep what you are working on where your cache files go, because then the drive is being asked to access both at the same time.

The way I have my setup: an SSD for Windows and all my apps, a RAID-1 setup to keep my data safe, and another SSD as a temp drive. That way, if Windows gets corrupted, my data is safe; if one of my RAID drives goes bad, I'm safe; and when (most likely) my temp drive wears out (any scratch disk gets used heavily during editing), I can swap it out without much hassle. The biggest rookie mistake is having your data in only one place: if your drive goes bad (and they do, just ask around), you've lost your data forever. The second is using one drive for everything, which leads to poor performance. You can have a supercomputer and still be super slow because everything is waiting on a single drive.

Be aware of what type of SSD you get. Any SSD with a SandForce controller has very poor performance with incompressible data like video, caches, photos, and MP3s; basically exactly what a cache drive is used for. Also be aware of TLC NAND: it is down to around 1,000 write cycles now, which works out to roughly 100TB of writes. For a temp drive that could be 100-300 days of work. And beware of the process size: the smaller the nm figure, the fewer times the NAND can be written (25nm MLC is around 1PB, i.e. 1,000TB). The endurance arithmetic is sketched below.
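The endurance math is simple multiplication (assumed round numbers; check your drive's actual rating):

```python
# Total writes before wear-out ~= capacity x rated program/erase cycles.
capacity_gb = 100          # hypothetical ~100GB temp SSD
tlc_cycles = 1_000         # rough TLC rating quoted above
mlc_cycles = 10_000        # implied by the ~1PB figure for 25nm MLC above

print(capacity_gb * tlc_cycles / 1000, "TB of writes (TLC)")   # 100 TB
print(capacity_gb * mlc_cycles / 1000, "TB of writes (MLC)")   # 1000 TB = 1 PB

# At 0.3-1 TB of scratch writes per working day, 100 TB lasts roughly
# 100-300 working days, in line with the figure above.
```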

 

(Top) Example of a full temp SSD that uses a SandForce controller. As you can see, it is very slow with this kind of data.

(Bottom) Example of a non-SandForce drive, in this case an OCZ Vertex 4.

 

ssdused.png


HDTuneOCZVERTEX4.png

 

 

17. Motherboards:

 

Personally I am an ASUS, MSI, Gigabyte, and EVGA fan, with ASUS being the best I have ever come across; the rest are nearly equal in my mind. Avoid any other company, and ASRock for that matter. If you are going with Xeon or 64GB of RAM, you should check the supported memory lists. Here is an example for a 32GB kit: http://www.gskill.co...s.php?index=494 (they give a listing of every single board the RAM works on). Of course, if your board is not listed it will most likely still work, but I've found that sticking to the lists saves huge problems.

 

18. Performance Upgrades

 

Below is a list of upgrades, with descriptions, that can shave off some time and speed up workflows without making you buy a new computer. If you do decide to build a new computer, these parts can come along for the ride.

 

RAM: Most people will tell you 8GB of RAM is enough for anything you will ever do. This is true for web browsing and possibly playing games (many games run slow because of the video card and CPU, the video card being no. 1). You can find 32GB of RAM on sale for around $100. You don't need lightning-fast RAM, because it won't really have an effect; what often wrecks render times is running out of memory. Once that happens, the computer is working at the speed of the hard drive as it swaps data. I'm not saying you need to spend $500 on fast 64GB memory, but go for 32GB, or a value (quality brand) 64GB kit.

 

 

Temp drive: Besides having enough RAM, this might take the lead when it comes to rendering. One of the best things you can do for your computer is get a temp SSD. It has to be one with a non-SandForce controller (e.g. OCZ Vertex 4, Intel 335; Google it), because SandForce does not handle incompressible data very well and tends to read it slower than a traditional hard drive, which would make part of the point of a temp drive moot. The first of the two main reasons for a temp drive is that it is very good practice not to have what you're working on on the same drive as your OS, to avoid unneeded slowdowns; putting your current project on another drive, or on the temp drive, speeds things up simply because it's not competing with the OS drive's normal tasks. The second reason is to set all your cache folders to the temp drive, including


I could have really used this information a month ago, granted I changed the direction of my build from budget general use to budget workstation really late in the build. <_<

 

Going to stick with what I have now because I fall under the amateur/student category and this will not be put into a professional environment.  Just doing some relatively small assemblies in CAD and low-poly 3D modeling.

 

AMD FX-4170

Gigabyte GA-970A-UD3

2x4GB GSkill RipJaws X 1600

AMD FirePro v4900

128GB Adata SX900

 

I might upgrade to the FX-8320 and 16GB non-ECC RAM in the future if needed (doubtful, since I won't be doing many rendering tasks), but I don't think my use within the next couple of years will warrant a completely new build.  If it does... well, I know what to look for now.


I just read over it and found a lot of spelling mistakes. I thought of more things to write about, which I'll add later today.

 

Kiro, how do you like that FirePro? The last one I used was a FirePro V5700. I noticed in benchmarks that CAD performs really poorly with ATI cards, but I don't use CAD, so I don't have an idea of what's acceptable. You also don't have to render anything out, so the video card is going to be what's holding you back the most, I would think, in a CAD build.


I haven't had a chance to try it out yet; the GPU came in last week and the software arrived last night.  My classmate tried the FirePro (don't know which model) with Solidworks and had a terrible experience; dual GTX 460's in SLI did much better.  Our school uses GeForce GT 520's and they've been horrible.  I'll be using the v4900 with Autodesk Inventor, so I'm hoping for slightly better results.

 

At my price range, I was pretty much comparing the FirePro v4900 and the Quadro 600.  Chose the v4900 because I found it for $105 shipped.  Although benchmarks don't tell the whole story, the v3900 generally gets better SPECview scores than the Quadro 600, and the v4900 does better than the v3900.  If interested, I can try benchmarking my system and posting the results.


At CES 2012, while I was visiting with the folks from Kingston, they showed me this really cool project that made use of a full 64 gigs of RAM. They used a high-definition camera on a rail system to take literally hundreds of pictures as it moved slowly past an object, then used a program to compile all the material into a single ultra-high-definition image. It was really incredible.

 

As for pro cards vs consumer cards, we did an interview with some folks from AMD about this on the January 7th show of this year. The show archive is below; you will find the interview at around the 49-minute mark.

 

http://computered.files.wordpress.com/2012/01/7january2012.mp3

 

It does a pretty good job of letting a layperson understand what a pro graphics card is.


Just listened to the AMD interview as SPECview was doing its thing.  Pretty much summarized in minutes what took me hours to determine.  Very informative.  Thanks for the share. :)

 

Anyway, results of the SPECview with no FSAA at 1920x1080:

 

catia-03: 18.18

ensight-04: 20.29

lightwave-01: 52.89

maya-03: 57.26

proe-05: 5.07

sw-02: 42.64

tcvis-02: 17.60

snx-01: 23.58

 

Did poorly in ProE compared with Tom's Hardware's review of the v3900: http://www.tomshardware.com/reviews/firepro-v3900-review-benchmark,3153.html

Though they make sense in Tom's Hardware's review of the W8000/W9000: http://www.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265.html

 

Keeping in mind that the systems used in the reviews are significantly better than mine and they benchmark at 1920x1200.

 

Again, confirms that the v4900 does better in benchmarks than the Quadro 600: http://www.spec.org/gwpg/gpc.static/vp11results.html

 

Edit:

 

Messed around a bit with Autodesk Inventor. I imported the SolidWorks project that crashed the school computers and ran barely better than a slideshow on my classmate's dual GTX 460 SLI gaming machine. Let me say this: I'm impressed. I don't know whether it's because Inventor is more efficient than SolidWorks or because my v4900 works better than the SLI'd 460's, but compared to the troubles we had faced, it's ridiculously smooth. Rotating the model as quickly as possible (which displays no sluggishness whatsoever) gets GPU usage up to 54% max. I don't think I'll be reaching GPU limitations any time soon with modeling.


Update: created new section 20 and moved the old one to 21. Adobe now supports AMD and Nvidia OpenCL/CUDA in CS6 with patches! The Creative Cloud update is coming next month, which means the end of CS6 support :(
