[phenixbb] Memory and CPU usage

Nathaniel Echols nechols at lbl.gov
Sat Jul 21 08:15:28 PDT 2012


On Sat, Jul 21, 2012 at 3:05 AM, Jon Agirre <jon.agirre at gmail.com> wrote:
> I'm planning on buying a new MacBook for my first postdoc venture
> abroad, and I'm currently trying to decide how much RAM and which CPU
> to choose. My question is a little intricate: is there any rule of
> thumb for estimating the RAM needed versus the number of input atoms
> for a particular refinement type?

We do not have an exact rule of thumb, but memory use depends mostly
on the resolution and the size of the unit cell rather than on the
number of atoms.  You can blame crystal symmetry for this, and the
fact that our FFTs are done in P1.  Assuming for the moment that your
unit cell angles are all 90 degrees (so the cell volume is simply
a*b*c), you can estimate the memory taken up by an FFT'd map with this
formula, where the grid spacing is one third of the resolution and
each grid point is an 8-byte double:

map_size = 8 * a * b * c / (d_min/3)^3
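
For a concrete number, here is a minimal Python sketch of that
estimate (plain Python, nothing Phenix-specific; the d_min/3 grid
spacing and 8 bytes per point are just the assumptions stated above):

def p1_map_size_bytes(a, b, c, d_min, resolution_factor=1.0/3):
    spacing = d_min * resolution_factor   # grid spacing in Angstroms
    n_points = (a / spacing) * (b / spacing) * (c / spacing)
    return 8 * n_points                   # 8 bytes per double

# 90x90x90 A cell at 2.0 A resolution -> about 20 MB per map
print(p1_map_size_bytes(90, 90, 90, 2.0) / 1e6)

That is per map, of course, and refinement holds more than one map
(plus much else) in memory at once.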

So if you are lucky and it crystallizes in P1 with no NCS, the
overhead will be much lower than if it is (for example) P622 with
3-fold NCS: in P622 the P1-expanded map covers 12 symmetry copies of
the asymmetric unit, i.e. 36 copies of the NCS-unique molecule.  This
still doesn't tell you exactly how much memory the overall program
will use, however.  One caution: you can cut down the actual memory
usage by being careful with the phenix.refine parameters - in
particular, the fill_missing_f_obs feature of the output maps takes up
a lot of extra memory, so disable it if you're worried about running
out of RAM.
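
If you want to try that, the keyword can be set directly on the
command line - something like the example below, with the caveat that
I'm quoting the parameter name from memory, so check the documentation
for your version (model.pdb and data.mtz are stand-ins for your own
files):

phenix.refine model.pdb data.mtz fill_missing_f_obs=False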

> I'm planning on tackling some big (18,000 residues/asu) refinements,
> but I don't have any estimate of how many GB of RAM they might
> require. The maximum amount of RAM I can install is 16 GB, but it is
> a non-standard configuration that might void Apple's warranty.

My instinct is that you probably want to go with the maximum amount of
memory just on general principle, but we certainly don't want to
encourage anyone to void their hardware warranty.  I would check with
Apple on this.

> About the CPU: how scalable are typical phenix processes? Would it be
> sensible to invest in a quad-core machine with HT? In this particular case,
> and since HT would present 8 logical cores, would I get any speedup
> from launching phenix tasks configured for 8 processors instead of 4?

Hyperthreading only really helps genuinely threaded processes, so the
OpenMP FFT (or OpenMP in Phaser) *might* benefit a little, but in our
experience the OpenMP FFT in phenix.refine does little to reduce
overall runtime anyway, certainly much less than the multi-process
parallelization of the weight optimization.  (Also, you can't use the
GUI if you compile with OpenMP.)

Here is a quick summary of the parallelization supported for the
default installation:

AutoBuild: up to 5 cores for building, or unlimited for composite omit
map calculation
LigandFit: up to 7 cores (I think)
phaser.MRage: unlimited cores
MR-Rosetta: unlimited cores (Linux/Mac only)
phenix.refine: up to 18 cores when weight optimization is used
(Linux/Mac only; see the example below)
phenix.den_refine: up to 30 cores (Linux/Mac only)
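
For phenix.refine, the process count is set with the nproc keyword in
combination with the weight optimization flags - roughly as follows,
though I'm writing the keywords from memory, so check the defaults for
your version:

phenix.refine model.pdb data.mtz optimize_xyz_weight=True nproc=4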

I do think getting 4 cores instead of just 2, regardless of
hyperthreading, is a good idea if you can afford it.  A secondary
problem, however, is that each of these worker processes eventually
gets its own copy of the data in memory, so if you're constrained by
physical RAM, the degree to which you can take advantage of multiple
cores will be limited.  (OpenMP threads, in contrast, share memory and
do not have this problem.)
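
To see why, here is a toy illustration in plain Python (again, nothing
to do with Phenix internals): each worker process receives its own
copy of the data, so peak memory grows with the process count, whereas
OpenMP threads all operate on one shared copy.

from array import array
from multiprocessing import Pool

def total(args):
    map_values, shift = args   # each worker unpickles its own copy
    return sum(map_values) + shift

if __name__ == "__main__":
    big_map = array("d", [0.0]) * 5000000   # ~40 MB of doubles
    pool = Pool(4)
    # four workers -> roughly four extra ~40 MB copies in flight
    print(pool.map(total, [(big_map, s) for s in range(4)]))
    pool.close()
    pool.join()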

-Nat

