There have been a lot of questions going around about setting up Folding@home lately, so I thought it would be helpful to put together a guide on getting everything up and running.
I am writing this guide with the assumption that you will be using the console clients, since that is what I'm familiar with. If there is enough demand I'll add a section for the system tray versions. Below are links to download the current version of each console client. These links were obtained from the downloads page and this Folding@home forum post.
CPU Client - Folds on a single core. The number of points returned will be relatively low.
SMP Client - Folds on several cores.
GPU3 Client (Vista / Windows 7) - Folds on a GPU in Vista/7
GPU3 Client (XP) - Folds on a GPU in Windows XP
HFM.net - Monitors the progress of several Folding@home clients, which can be on local or remote machines you have access to. For now I'll leave HFM.net setup up to you.
From the above links, determine what you'll need to fully utilize your system's power.
First consider your processor. On a laptop with a single-core processor you will want to run the CPU client. If you're running an i7-930, the SMP client is a no-brainer, since with Hyper-Threading enabled you can use all 8 "cores". Even on a Core 2 Duo with two cores, the SMP client should out-earn two separate CPU clients thanks to the bonuses.
Now consider your GPU(s). Most systems have one GPU, but since the target audience for this is OCC I'll touch on multiple-GPU systems as well. GPUs generate a tremendous amount of heat, so please keep your system's cooling in mind when running the GPU client. If you choose to run one or more GPU clients, download the GPU3 client for your operating system from the links above.
Passkey, SMP Client, and Bonuses
To take full advantage of the SMP client, you'll want to obtain a passkey from Stanford's website. This will make you eligible for the bonuses when SMP work units are completed before the deadline. In order to get a bonus you must meet the following criteria (quoted from here):
1. Run the latest SMP client (v6.29 or above).
2. Configure that client with a passkey.
3. Complete 10 a2 and/or a3 work units...
....a. within their preferred deadlines
....b. using the same passkey and fah user name combination
....c. on one or more of your systems.
4. Successfully return >=80% of assigned [bonus] WU's.
A passkey can be obtained here. You will use it during the setup of your clients. Note that, per the criteria above, you will not receive the bonus until you've folded 10 units with that passkey -- your first 10 units won't earn a bonus, but after that the points go way up. The bonuses are significant: I'm currently running a WU from project P6020 with a base credit of 467, but with the bonus it will earn me 3350.
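To get a feel for where numbers like that come from, here is a rough sketch of the quick-return bonus formula as the project has published it (final credit scales with the square root of how far ahead of the deadline you finish). The k-factor is assigned per-project by Stanford; the k value and times below are made-up illustrations, not the real P6020 numbers:

```python
import math

def bonus_points(base_credit, k_factor, deadline_days, days_taken):
    """Estimate final credit with the quick-return bonus.

    final = base * max(1, sqrt(k * deadline / time_taken))
    If you take longer than the formula rewards, the multiplier
    clamps to 1 and you just get the base credit.
    """
    multiplier = max(1.0, math.sqrt(k_factor * deadline_days / days_taken))
    return base_credit * multiplier

# Hypothetical numbers: a 467-point unit with k = 2.1 and a 6-day
# deadline, returned in a quarter of a day.
print(round(bonus_points(467, 2.1, 6.0, 0.25)))
```

The takeaway is that returning units faster pays off non-linearly, which is why a fast SMP rig with a passkey earns so much more than its base credit suggests.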
Unzip the client you've downloaded to an easy-to-remember location. If you are running multiple GPU clients, unzip a copy into a separate directory for each one. For my setup I am using an SMP client and two GPU clients, so I extracted them to:
Complete the following instructions for each client you're planning to use. For simplicity, rename the main executable in each directory to fah.exe.
- Open a console window. In Windows 7/Vista, hit Start, type cmd, and hit Enter. For XP, go to Start -> Run, type cmd, and hit Enter.
- Navigate to the FAH directory, e.g. with the command: cd C:\FAH-SMP
- Run the following command: fah.exe -configonly
- Follow the instructions here. These are the main points to make sure you follow, anything not listed can be left as default unless you're doing something fancy:
- Enter your [email protected] Username
- Enter 12772 for your team number
- Enter your pass key from above
- Use "big" work units (required for SMP work units to be fetched)
- Yes to "change advanced options"
- For machine ID, enter 1 for your SMP client, 2 for GPU client, 3 for second GPU client, etc. Each client must have a separate machine ID.
- For additional client parameters, enter "-gpu 0" when configuring first GPU, enter "-gpu 1" when configuring second GPU. If you like, you can enter "-smp 8" to make it default to running the smp client with 8 cores (replace 8 with appropriate number for your processor). I opt to enter this flag manually when I run the client since I use -smp 7 when running GPU clients and -smp 8 when I'm not running GPU clients.
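Put together, a configuration pass looks something like the session below. The directory names are just my examples from above; substitute whatever you extracted each client to, and remember that every client needs its own machine ID:

```
:: Configure the SMP client
cd C:\FAH-SMP
fah.exe -configonly

:: Repeat in each GPU client directory, using a unique machine ID
:: and the matching -gpu index in the additional parameters
cd C:\FAH-GPU0
fah.exe -configonly

cd C:\FAH-GPU1
fah.exe -configonly
```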
Running Folding@home
The hard part is over!
To run the SMP client, open the console and navigate to your SMP directory as shown above, then run "fah -smp #" where # is the number of cores you'd like to fold on.
To run the GPU clients, just run the fah.exe executable in each GPU directory. To make your life easier, put some shortcuts on your desktop.
If you have a powerful machine you're dedicating to Folding@home, running bigadv units will give you huge PPD. These are enormous work units on tight deadlines -- to my knowledge the client won't fetch bigadv units unless you have 8 cores (although Hyper-Threading's virtual cores count). With the "good" bigadv units I see 33,000 PPD on my i7-930 @ 4 GHz. However, if you also use your computer for gaming, you'll need a strong (and very stable!) overclock to meet the deadlines. Because of this, I normally don't run bigadv units unless I know I won't want to play games for several days.
To have a chance to run a bigadv unit, run the SMP client with the -bigadv flag (i.e. "fah.exe -smp 8 -bigadv"). Bigadv units aren't always available, so it's not guaranteed that you'll get one.
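A typical day-to-day session on my setup looks something like this (again, the paths and core count are my examples; the GPU clients pick up their -gpu index from the configuration step, so they need no extra flags):

```
:: SMP client on 8 threads; add -bigadv to try for a bigadv unit
cd C:\FAH-SMP
fah.exe -smp 8 -bigadv

:: In a separate console window for each GPU client:
cd C:\FAH-GPU0
fah.exe

cd C:\FAH-GPU1
fah.exe
```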
- If you're running one or more GPU clients, keep one CPU core free to maximize the GPUs' output.
- For multiple GPU folding, disable SLI.
- Always keep a temperature monitor running while folding to prevent damaging your computer. I use RealTemp since it displays both my CPU and GPU temps in my systray and records max temps.
- Keep your electric bill in mind; folding can get expensive if you're running a CPU and multiple GPUs. If you're interested, I threw some math at my Folding@home setup here.
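As a rough sanity check on the power cost, the math is just wattage times hours times your utility rate. The 450 W draw and 10 cents/kWh below are made-up illustrations; measure your own rig's draw with a wall meter and check your bill for the real rate:

```python
def monthly_cost(watts, cents_per_kwh, hours_per_day=24.0, days=30):
    """Rough electricity cost, in dollars, of running a folding rig."""
    kwh = watts / 1000.0 * hours_per_day * days
    return kwh * cents_per_kwh / 100.0

# Hypothetical 450 W rig at 10 cents/kWh, folding 24/7
print(f"${monthly_cost(450, 10):.2f} per month")
```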
Please post any questions/feedback. I fully intend to keep this guide updated to help the Folding@home project and the OCC community.
Edited by endorphiend, 18 October 2010 - 06:26 PM.