[phenixbb] Discrepancy between R-factors from phenix.refine vs phenix Generate "Table 1"

Nathaniel Echols nechols at lbl.gov
Wed May 29 17:18:57 PDT 2013


On Wed, May 29, 2013 at 4:13 PM, Sam Stampfer <Samuel.Stampfer at tufts.edu> wrote:
> When I refined my model in phenix, I used the twin law h,-h-k,-l. I read in
> the documentation that twinning can account for some of this discrepancy,
> but that the program is supposed to take twinning into account if it will
> lower the calculated R-work by more than 2%, which it doesn't seem to have
> done (or there is some other problem with my data).

Okay, the problem is that your data don't actually appear to be
twinned.  The automatic method used by phenix.model_vs_data (which is
used internally for Table 1 and the validation GUI) only tries
possible twin laws if the results of the "L test" show a suspicious
distribution of intensities.  Your data look fine, so it doesn't
bother trying the twin laws.  The fact that the R-factors are lower
when you refine with a twin law isn't necessarily evidence that the
data are actually twinned - Garib Murshudov has looked into this in
detail, though I confess to being ignorant of the math (I can
probably dig up his paper on the subject if anyone is interested).
However, I'm pretty sure the data are actually in a higher-symmetry
space group.  I'll send details and new files off-list (probably
tomorrow at this rate).
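
For the curious: the L statistic of Padilla & Yeates (Acta Cryst.
D59, 1124-1130) is just |I1 - I2| / (I1 + I2) taken over pairs of
nearby reflections not related by crystal symmetry, with <|L|>
expected to be 0.5 for untwinned data and 0.375 for a perfect twin.
A rough standalone sketch in Python - not the actual cctbx code, and
assuming the reflection pairing has already been done elsewhere -
looks like this:

    # Rough sketch of the Padilla-Yeates L test.  intensity_pairs
    # holds (I1, I2) tuples for pairs of nearby reflections that are
    # not related by crystal symmetry.
    def mean_abs_L(intensity_pairs):
        l_values = [abs(i1 - i2) / (i1 + i2)
                    for i1, i2 in intensity_pairs if i1 + i2 > 0]
        return sum(l_values) / len(l_values)

    # <|L|> near 0.5 is consistent with untwinned data; values pulled
    # down toward 0.375 are what trigger trying the twin laws.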

I should probably change some of the programs and/or documentation to
make it clearer what is being done internally, since it took me a bit
of digging to realize what was going on.  In general, though, always
be very careful before running twinned refinement!  I have seen
several users do this by mistake when they really had higher
symmetry.  The maps will also be more model-biased when using twinned
refinement, so it's best to avoid it unless absolutely necessary.
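
If you want to see the same diagnostics yourself, the easiest route
is to run xtriage on the merged data before deciding on a twin law -
it reports the L test along with the possible twin laws, and should
also flag cases where an apparent twin law is really a symmetry
operator of a higher-symmetry space group:

    phenix.xtriage your_data.mtz

(The file name is just a placeholder for whatever reflection file you
refined against.)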

-Nat

