
Quadruple precision computations on asymptotics of colossally abundant numbers

FROM 2015:



Added July 17, 2015:

In “Highly Composite Numbers” by Ramanujan, annotated by Jean-Louis Nicolas and Guy Robin, The Ramanujan Journal, 1997, one finds the previously unpublished continuation of Ramanujan’s 1915 paper. This continuation can also be found in the book:

Ramanujan’s Lost Notebook, Part 3 by George E. Andrews, Bruce C. Berndt

Chapter 10 of this book, entitled “Highly Composite Numbers”, essentially contains the 1997 paper from The Ramanujan Journal, as annotated by Nicolas and Robin.

The motivation for the graph below can be found in formula (10.71.382) on page 386 of the book by G.E. Andrews and Bruce C. Berndt, as well as in the liminf and limsup inequalities immediately following formula (10.71.382).

For colossally abundant numbers N, Ramanujan finds explicit constants C_1 and C_2 such that, under the assumption that the Riemann Hypothesis holds,

liminf_{N -> oo, N a C.A. number} -delta(N) sqrt(log(N)) >= C_1 ~= -1.558     and

limsup_{N -> oo, N a C.A. number} -delta(N) sqrt(log(N)) <= C_2 ~= -1.393 .

For 8000 CA numbers N_1, N_2, …, N_j, …, N_8000, we computed:

delta(N_j)  and log(N_j) , 1 <= j <= 8000, where

delta(N_j) = exp(gamma)*log(log(N_j)) – sigma(N_j)/N_j   (as in Briggs, 2006).

The quantities of interest are then the

delta(N_j) sqrt(log(N_j))   for 1<=j <= 8000.
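For concreteness, here is a small Python sketch (my own, not the post's PARI/gp or C code) of delta(N) and delta(N) sqrt(log(N)) on a few small colossally abundant numbers, which I list by hand; the post's N_j of course go far beyond this range.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma_over_n(n):
    """Return sigma(n)/n by trial-division factorization,
    using sigma(p^e)/p^e = (p^(e+1) - 1) / ((p - 1) * p^e)."""
    ratio = 1.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            ratio *= (p ** (e + 1) - 1) / ((p - 1) * p ** e)
        p += 1
    if m > 1:  # leftover prime factor, exponent 1
        ratio *= (m * m - 1) / ((m - 1) * m)
    return ratio

def delta(n):
    """Briggs' delta: exp(gamma)*log(log(n)) - sigma(n)/n."""
    return math.exp(EULER_GAMMA) * math.log(math.log(n)) - sigma_over_n(n)

# The first few colossally abundant numbers beyond 2:
for n in [6, 12, 60, 120, 360, 2520, 5040]:
    print(n, delta(n) * math.sqrt(math.log(n)))
```

For these tiny N the values are still negative (5040 is the largest exception to Robin's inequality); the asymptotics above concern far larger N.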

One finds that the mean value M of these 8000 values of delta(N_j) sqrt(log(N_j)) is M ~= 1.3881198 .

For 1 <= j <= 8000, one defines y_j = delta(N_j) sqrt(log(N_j)) – M and

x_j = log(log(N_j)). The plot of the 8000 points (x_j, y_j), 1 <= j <= 8000, is reproduced below.



Added July 18, 2015:

Formula (10.71.382) on page 386 of the book by G.E. Andrews and Bruce C. Berndt comes from the continuation of Ramanujan’s (suppressed) 1915 paper “Highly Composite Numbers”, i.e. the annotated 1997 paper in The Ramanujan Journal, annotated by Jean-Louis Nicolas and Guy Robin. It shows that the fine behaviour of delta(N) sqrt(log(N)) for colossally abundant numbers N has a contribution owing to (log(N))^{rho-1}, with rho taking successively the values of the non-trivial zeros of the Riemann zeta function, say ordered by increasing modulus. Zeros from the upper and lower half-planes are paired, so as to create a real-valued, absolutely convergent correction.
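To illustrate what such a term looks like (this is my own illustration, not the post's S_1): for a zero rho = 1/2 + i*gamma on the critical line, x^{rho-1} paired with the contribution of the conjugate zero gives the real quantity 2 x^{-1/2} cos(gamma log x), which oscillates in log x and decays like x^{-1/2}.

```python
import math

# gamma_1 ~ 14.1347 is the imaginary part of the first non-trivial zero,
# as in Odlyzko's tables mentioned below.
gamma1 = 14.134725141734694

def pair_term(x, g=gamma1):
    """x^(rho-1) + x^(conj(rho)-1) for rho = 1/2 + i*g, computed directly."""
    rho = complex(0.5, g)
    return ((x ** (rho - 1)) + (x ** (rho.conjugate() - 1))).real

def pair_term_real(x, g=gamma1):
    """Same quantity in closed real form: 2 * x^(-1/2) * cos(g * log(x))."""
    return 2.0 * x ** (-0.5) * math.cos(g * math.log(x))

# Real-valued, oscillating in log(x), with envelope 2/sqrt(x):
for x in (10.0, 1e3, 1e6):
    print(x, pair_term(x), pair_term_real(x))
```

Summing such pairs over zeros of increasing modulus is what produces the real-valued correction described above.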

My method was semi-empirical, in the sense that I chose the sign of the correction term S_1 (log(N)) to best cancel the oscillations in the 8000 terms of

y_j = delta(N_j) sqrt(log(N_j)) – M .

As I recall, this involved adding the following to delta(N_j) sqrt(log(N_j)) – M (in PARI/gp notation):


moreImrho[] is an array of the imaginary parts of the first 11,101 non-trivial zeros in the upper half-plane, from Andrew Odlyzko’s tables of zeta zeros;

Euler is PARI/gp’s built-in Euler-Mascheroni constant, 0.577 … ;

We take X to be log(N_j), for 1 <= j <= 8000. This is because in formula 382, Ramanujan applies the S_1 function to log(N), N being a colossally abundant number;

More PARI/gp commands:

for(X=1,8000, vz[X] = S5(exp(vx[X])) + vy[X])

the j-th element of the array vx[ ] is actually log(log(N_j)), 1 <= j <= 8000;

? \u

S5 =
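Since the gp definition of S5 is not reproduced above, here is a hypothetical Python stand-in to show the data flow of the loop vz[j] = S5(exp(vx[j])) + vy[j]. The weights 1/(1/4 + g^2) = 1/|rho|^2 below are my own illustrative choice, not necessarily the ones formula (10.71.382) prescribes.

```python
import math

# Imaginary parts of the first few non-trivial zeros (the post uses the
# first 11,101, from Odlyzko's tables).
IM_RHO = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def s_correction(x):
    """Truncated sum over conjugate pairs of zeros: each pair 1/2 +- i*g
    contributes 2 * x^(-1/2) * cos(g*log(x)), weighted here by 1/|rho|^2."""
    lx = math.log(x)
    return sum(2.0 * x ** (-0.5) * math.cos(g * lx) / (0.25 + g * g)
               for g in IM_RHO)

# In the post, vx[j] = log(log(N_j)) and vy[j] = delta(N_j)*sqrt(log(N_j)) - M;
# here we wire up the same loop with small dummy values.
vx = [2.0, 2.5, 3.0]
vy = [0.01, -0.02, 0.015]
vz = [s_correction(math.exp(x)) + y for x, y in zip(vx, vy)]
print(vz)
```

The key point is only the plumbing: the correction is evaluated at exp(vx[j]) = log(N_j), then added to the centered values vy[j].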


In the plot below, the curve in red has as Y values:

y_j = delta(N_j) sqrt(log(N_j)) – M  , M the mean, M = 1.3881198 .

The curve in blue is my attempt at subtracting from delta(N_j) sqrt(log(N_j)) – M the “first order contribution from the non-trivial zeta zeros”, so to speak:



Added July 20, 2015:

For the data on 32,000 CA numbers out to about exp(exp(31.09)), I did a plot of

delta(N_j)*sqrt(log(N_j)) – M  for 1 <= j <= 32,000 in red.

Here M is the mean over the first 8000 values, as above: M ~= 1.3881198.

Once again, I made an attempt at subtracting the first order contribution from the zeta zeros.

The resulting rather flat curve is in blue:




Added Sunday June 19, 2016:


In August 2015, I did computations on colossally abundant numbers out to approximately exp(exp(33)). As above, I subtracted the first order contribution of the first 11,101 non-trivial zeta zeros. The resulting blue curve had notable peaks in it for log(log(n)) > 33.

I surmised that this could be due to insufficient precision in the floating-point arithmetic, which was done using C “long doubles”, with a precision of approximately 19 decimal digits.
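A rough back-of-envelope sketch (my own magnitude assumptions, not the author's error analysis) of why ~19 digits can be marginal near log(log(N)) = 33: delta(N) is a difference of two quantities of size about exp(gamma)*33 ~ 59, while delta(N)*sqrt(log(N)) stays near 1.39, and rounding can accumulate over the trillions of prime factors entering sigma(N)/N and log(N).

```python
import math

loglogN = 33.0
logN = math.exp(loglogN)        # ~ 2.15e14
sqrt_logN = math.sqrt(logN)     # ~ 1.47e7

# delta(N)*sqrt(log(N)) stays near ~1.39, so delta itself is tiny:
delta_scale = 1.39 / sqrt_logN  # ~ 9.5e-8

# delta(N) is a difference of two quantities of size ~ exp(gamma)*33 ~ 59:
term = math.exp(0.5772156649015329) * loglogN

# A single last-digit rounding (~1e-19 relative for ~19-digit long doubles)
# in either term moves delta(N)*sqrt(log(N)) by only about:
one_rounding = term * 1e-19 * sqrt_logN                    # ~ 9e-11

# But sigma(N)/N and log(N) are built from on the order of
# logN/loglogN ~ 6.5e12 prime factors; treating each rounding as an
# independent ~1e-19 relative step gives a random-walk noise estimate
# in delta(N)*sqrt(log(N)) of:
n_ops = logN / loglogN
random_walk = math.sqrt(n_ops) * 1e-19 * term * sqrt_logN  # ~ 2e-4
print(delta_scale, one_rounding, random_walk)
```

Under these assumptions the accumulated noise reaches the 1e-4 scale, which could plausibly show up as visible artifacts in a centered plot, consistent with the switch to quadruple precision described next.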

I therefore decided to redo the computations in quadruple precision, using the math functions in the GNU GCC quadmath library. Even after some optimization, such as taking logarithms of products of 256 consecutive primes, the computations of log(N) and sigma(N)/N for 256,000 colossally abundant numbers took approximately 5 to 6 weeks, not counting the restart of a job that terminated prematurely.
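The chunking idea, as I understand it (a sketch of the assumed form, in Python rather than the C/quadmath of the actual run): instead of one logarithm call per prime, multiply 256 primes into one exact integer product and take a single logarithm per chunk.

```python
import math

def log_product_chunked(primes, chunk=256):
    """Sum of log(p) over primes, using one log() call per chunk of 256.
    Python integers are exact, so the product loses nothing before the
    final logarithm of each block."""
    total = 0.0
    for i in range(0, len(primes), chunk):
        block = 1
        for p in primes[i:i + chunk]:
            block *= p
        total += math.log(block)   # one log per chunk
    return total

# Tiny demonstration with the first few primes:
primes = [2, 3, 5, 7, 11, 13]
direct = sum(math.log(p) for p in primes)
chunked = log_product_chunked(primes, chunk=4)
print(direct, chunked)
```

The payoff is fewer calls to the expensive quad-precision logarithm; the exact integer products mean the two methods agree to within rounding of the final sums.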

The upshot is that, in quadruple precision, the blue curve no longer has suspicious peaks for log(log(n)) > 33, where n is a colossally abundant number.

The graphs using quadruple precision arithmetic are shown below.



Written by meditationatae

June 19, 2016 at 7:37 am

Posted in History
