## Quadruple precision computations on asymptotics of colossally abundant numbers

FROM 2015:

====

Added July 17, 2015:

In “Highly Composite Numbers” by Ramanujan, annotated by Jean-Louis Nicolas and Guy Robin (The Ramanujan Journal, 1997), one finds the previously unpublished continuation of Ramanujan’s suppressed 1915 paper. The continuation is also reproduced in the book by G. E. Andrews and Bruce C. Berndt.

Formula 10.71.382 on page 386 of that book shows that the fine behaviour of delta(N) sqrt(log(N)) for colossally abundant numbers N has a contribution owing to terms (log(N))^{rho-1}, with rho running over the non-trivial zeros of the Riemann zeta function, ordered by increasing modulus. Taking the zeros from both the upper and the lower half-plane makes the correction real-valued, and the resulting series is absolutely convergent.

My method was semi-empirical, in the sense that I chose the sign of the correction term S_1(log(N)) to best cancel the oscillations in the 8000 terms of

y_j = delta(N_j) sqrt(log(N_j)) – M.

As I recall, this involved adding to delta(N_j) sqrt(log(N_j)) – M

(in PARI/gp notation):

2*exp(Euler)*sum(Y=1,11101,cos(moreImrho[Y]*log(X))/(1/4+moreImrho[Y]^2))

Here moreImrho[] is an array of the imaginary parts of the first 11,101 non-trivial zeros in the upper half-plane, taken from Andrew Odlyzko’s tables of zeta zeros;

Euler is the Euler-Mascheroni constant 0.577 … ;

We take X to be log(N_j), for 1 <= j <= 8000. This is because in formula 382, Ramanujan applies the S_1 function to log(N), N being a colossally abundant number;

More PARI/gp commands:

for(X=1,8000, vz[X] = S5(exp(vx[X])) + vy[X])

The j’th element of the array vx[] is actually log(log(N_j)), for 1 <= j <= 8000;

? \u

S5 =

(X)->2*exp(Euler)*sum(Y=1,11101,cos(moreImrho[Y]*log(X))/(1/4+moreImrho[Y]^2)).

===

In the plot below, the curve in red has as Y values:

y_j = delta(N_j) sqrt(log(N_j)) – M, where M is the mean, M = 1.3881198.

The curve in blue is my attempt at subtracting from delta(N_j) sqrt(log(N_j)) – M the “first order contribution from the non-trivial zeta zeros”, so to say:

====

Added July 20, 2015:

For the data on 32,000 colossally abundant (CA) numbers out to about exp(exp(31.09)), I did a plot of

delta(N_j)*sqrt(log(N_j)) – M for 1 <= j <= 32,000, in red.

Here M is the mean from the first series, for j = 1 to 8000, which was M ~= 1.3881198.

Once again, I made an attempt at subtracting the first order contribution from the zeta zeros.

The resulting rather flat curve is in blue:

========================================================

Added Sunday June 19, 2016:

In August 2015, I did computations on colossally abundant numbers out to approximately exp(exp(33)). As above, I subtracted the first order contribution of the first 11,101 non-trivial zeta zeros. The resulting blue curve had notable peaks in it for log(log(n)) > 33.

I surmised that this could be due to insufficient precision in the floating point arithmetic, which had been done using C “long doubles”, carrying approximately 19 decimal digits of precision.

I therefore decided to redo the computations in quadruple precision, using the math functions in GNU GCC’s libquadmath library. Even after some optimization, such as taking logarithms of products of 256 consecutive primes, the computation of log(N) and sigma(N)/N for 256,000 colossally abundant numbers took approximately 5–6 weeks, not counting the restart of a job that had terminated prematurely.

The upshot is that, in quadruple precision, the blue curve no longer shows suspicious peaks for log(log(n)) > 33, where ‘n’ is a colossally abundant number.

The graphs using quadruple precision arithmetic are shown below.