CGLS nls [0] nrf [0] nextra [0] maxvec [511]

As L.S., but the Konnert-Hendrickson conjugate-gradient algorithm is employed instead of the full-matrix approach. Although BLOC may be used with CGLS, in practice it is much better to refine all parameters at once. CGLS is much faster than L.S. for a large number of parameters, and so will be the method of choice for most macromolecular refinements. The convergence properties of CGLS are good in the early stages (especially if there are many restraints), but cannot compete with L.S. in the final stages for structures which are small enough for full-matrix refinement. The major disadvantage of CGLS is that it does not provide estimated standard deviations, so when a large structure has been refined to convergence using CGLS it may be worth performing a blocked full-matrix refinement (L.S./BLOC) to obtain the standard deviations in quantities of interest (e.g. torsion angles, in which case only xyz blocks would be required). A further disadvantage of CGLS is its propensity for getting stuck in a local minimum in situations where L.S./BLOC would find the global minimum; for this reason a mixed CGLS/L.S. alternative is provided (CGLS with negative nls) which performs CGLS refinement in the odd-numbered cycles and L.S. in the even-numbered. When this option is used, it will be normal to provide BLOC instructions for the even-numbered cycles only. The other parameters have the same meaning as with L.S.; CGLS is entirely suitable for R(free) tests (negative nrf), and since it requires much less memory than L.S. there will rarely be any reason to change maxvec from its default value.
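The odd/even alternation selected by a negative nls can be sketched as follows; this is an illustrative Python fragment, not SHELXL code, and the two cycle functions are hypothetical stand-ins for a CGLS cycle and a (typically BLOC-controlled) full-matrix L.S. cycle:

```python
def run_mixed_refinement(nls, cgls_cycle, full_matrix_cycle):
    """Dispatch |nls| refinement cycles: CGLS in the odd-numbered
    cycles, full-matrix L.S. in the even-numbered ones (as selected
    by a negative nls on the CGLS instruction)."""
    results = []
    for cycle in range(1, abs(nls) + 1):
        if cycle % 2 == 1:               # odd-numbered: conjugate gradient
            results.append(cgls_cycle(cycle))
        else:                            # even-numbered: full matrix
            results.append(full_matrix_cycle(cycle))
    return results

# Example with stub cycle functions:
print(run_mixed_refinement(-4, lambda c: ("CGLS", c), lambda c: ("LS", c)))
# → [('CGLS', 1), ('LS', 2), ('CGLS', 3), ('LS', 4)]
```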
The CGLS algorithm is based closely on the procedure described by W.A. Hendrickson and J.H. Konnert (Computing in Crystallography, Ed. R. Diamond, S. Ramaseshan and K. Venkatesan, I.U.Cr. and Indian Academy of Sciences, Bangalore 1980, pp. 13.01-13.25). The structure-factor derivatives contribute only to the diagonal elements of the least-squares matrix, but all 'additional observational equations' (restraints) contribute in full to diagonal and off-diagonal terms, although neither the l.s. matrix A nor the Jacobian J are ever generated. The preconditioning recommended by Hendrickson and Konnert is used to speed up the convergence of the internal conjugate-gradient iterations, and has the additional advantage of preventing the excessive damping of poorly determined parameters characteristic of other conjugate-gradient algorithms (D.E. Tronrud, Acta Cryst. A48 (1992) 912-916).
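The effect of diagonal preconditioning can be illustrated with a generic Jacobi-preconditioned conjugate-gradient solver. This is a minimal numerical sketch only: unlike the Konnert-Hendrickson procedure, it forms the matrix A explicitly, and it solves an arbitrary symmetric positive-definite system rather than the crystallographic normal equations:

```python
import numpy as np

def preconditioned_cg(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b by conjugate gradients with diagonal (Jacobi)
    preconditioning: each residual is rescaled by 1/diag(A), which
    speeds convergence and avoids over-damping parameters whose
    diagonal elements are small."""
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)          # the diagonal preconditioner
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Small SPD stand-in for a least-squares normal matrix:
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = preconditioned_cg(A, b)           # A @ x recovers b to within tol
```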
A further refinement in the CGLS approach is to save the parameter shifts from the previous full CGLS cycle, and to use them to estimate a shift multiplication factor independently for each parameter. This factor is larger when a parameter appears to 'creep' in the same direction in successive cycles, and smaller when it oscillates. This technique significantly improves the convergence properties of the CGLS approach, because it indirectly takes into account the correlation terms which were ignored (to save time and space); however it cannot be used with BLOC or 'CGLS -nls'. The maximum and minimum shifts are set by the SLIM instruction; usually it will not be necessary to change them, but if a CGLS refinement appears to be unstable, both parameters should be reduced. In such a case it would be even better to track down and fix the cause of the instability, e.g. an attempt to refine a structure in the wrong space group!
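The creep/oscillation logic can be sketched per parameter as below. This is an illustrative Python fragment, not SHELXL's implementation; the growth and damping rates (up, down) and the clamping bounds (lo, hi) are hypothetical names standing in for limits of the kind set by the SLIM instruction:

```python
def shift_multipliers(prev_shifts, new_shifts, factors,
                      lo=0.5, hi=4.0, up=1.2, down=0.5):
    """For each parameter, grow its shift-multiplication factor when
    successive shifts share a sign ('creep'), shrink it when they
    alternate ('oscillate'), and clamp the result to [lo, hi]."""
    out = []
    for prev, new, f in zip(prev_shifts, new_shifts, factors):
        if prev * new > 0:        # same direction in both cycles: accelerate
            f = min(f * up, hi)
        elif prev * new < 0:      # sign flip between cycles: damp
            f = max(f * down, lo)
        out.append(f)             # zero shift: factor left unchanged
    return out

# First parameter creeps, second oscillates:
print(shift_multipliers([0.1, 0.1], [0.05, -0.05], [1.0, 1.0]))
# → [1.2, 0.5]
```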