Folding Using The Linux SMP Client Under Windows

Nemo

Stanford currently has an SMP client for folding that is available for Linux and Mac OS X only (no Windows client). For those of you with multi-core or multi-CPU systems, running the SMP client can mean faster processing and increased PPD while making a valuable contribution to the folding project. However, if you're like me, you either don't have Linux installed on a multi-core/multi-CPU machine or don't want to dedicate a box solely to Linux.

 

PlanetAMD64 has posted a guide on how to set up the SMP client by running Linux as a virtual machine under VMware. Sounds like an interesting project to me.

 

Update: here is a step-by-step guide for setting up file sharing in Ubuntu. When the Folding@home folder is shared, you can monitor the machine using a folding monitor (like FahMon) from another computer on the network.
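For reference, a share definition for the client folder in /etc/samba/smb.conf on the Ubuntu guest might look roughly like this. The share name and path here are just illustrative, not taken from the guide; the point is a read-only share exposing the client's log files so a monitor on another machine can read them:

```ini
# Hypothetical read-only share for the Folding@home working directory,
# so a monitor like FahMon on another machine can read the log files.
[fah]
   path = /home/user/folding
   read only = yes
   guest ok = yes
   browseable = yes
```

After editing smb.conf, restart the Samba service for the change to take effect.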

 

Note about monitoring Ubuntu/VMware over the network

 

If you are trying to monitor other machines on your network that are running Ubuntu in VMware, follow the guides above, but set Ethernet = Bridged (in each virtual machine's settings in VMware) rather than Ethernet = NAT, so that the Samba share is accessible from another machine. Ethernet = NAT still gives the guest Internet access, and the Samba share works on the local machine, but you need Ethernet = Bridged to see the shared folders from another computer. --- hardnrg
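For anyone editing the VM's .vmx file directly instead of using the VMware settings dialog, the setting in question is the adapter's connection type (adapter number 0 assumed here). "nat" gives the behavior described above, while "bridged" puts the guest directly on the LAN so the share is visible from other computers:

```
ethernet0.connectionType = "bridged"
```

Change it back to "nat" if you only need Internet access from the guest and local monitoring.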


hmmm, interesting...

nice to know there is an SMP client now, but why no Windows version, I wonder?

my thought would be that you lose a lot with this, though, with all the virtual machine overhead.

 

From the Stanford SMP FAQ page:

Note that we currently only support the console client for 2 platforms: Mac OSX and 64-bit Linux. We are working on a 32-bit Linux client. Due to the nature of our code, porting to Windows is considerably more challenging and we are still looking into the best way to complete this port.

 

That's all I've seen by way of explanation re: Windows SMP client.

 

My feeling is that whatever you lose to the VM is more than made up for by the speed at which you can process a WU - Stanford estimates a 4x speed increase with the SMP client.


  • 4 weeks later...

i'm currently following the guide and have ubuntu64 updating in vmware... am looking forward to some proper smp processing :)

 

might have to ask Nemo about EMIII monitoring... but will hopefully be folding in 64bit SMP shortly!

 

update: up n running!

 

[screenshot: Ubuntu 64-bit FAH SMP client up and running]

 

BIG UPDATE

 

with dual folding, i was getting something like 300ppd from two instances of p212x

 

NOW, i'm getting like 900ppd from one of the new special SMP WUs :)

 

so that's +600ppd for me on one machine!!!
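For what it's worth, the figures above work out as follows (a trivial sanity check using the reported numbers, not official benchmarks):

```python
# PPD comparison using the figures reported above.
dual_classic_ppd = 300  # two classic-client instances combined
smp_ppd = 900           # one of the new SMP work units in the VM

gain = smp_ppd - dual_classic_ppd     # absolute gain in points per day
speedup = smp_ppd / dual_classic_ppd  # relative speedup

print(f"+{gain} PPD, {speedup:.0f}x speedup")  # +600 PPD, 3x speedup
```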

Edited by hardnrg


i used to run dual instances... it's really hard to gauge the performance increase, seeing as the SMP client has its own WUs (much like the GPU client)... but my ppd has skyrocketed... i can see how, with a Core2Duo rig, 2000ppd isn't out of the question for some WUs


Quoting hardnrg: "might have to ask Nemo about EMIII monitoring... but will hopefully be folding in 64bit SMP shortly!"

Look at my thread on setting up Samba and see if that helps. I've got EMIII monitoring folding on a Fedora box.
