## Computations on 8000 colossally abundant numbers out to exp(exp(29.710462))

**Reminder:** For a colossally abundant number n, with n > 5040, one can define

**delta(n) := exp(gamma)*log(log(n)) – sigma(n)/n** *[Briggs, def. 4-2, page 254]*.

This is as in the paper by Keith Briggs,

Briggs, K., 2006, “*Abundant Numbers and the Riemann Hypothesis*”.

sigma(.) is the sum of divisors function, sigma(n)/n is the same as the sum of the inverses of all divisors of n [easy proof], also known as the “abundancy index of n”; **gamma is the Euler-Mascheroni constant 0.57721 …**

The **Theorem of Guy Robin** is that the Riemann Hypothesis is equivalent to:

**exp(gamma)*log(log(n)) – sigma(n)/n > 0 , for all n > 5040.** Also see Grönwall’s theorem on the asymptotic growth rate of the sigma function, at Wikipedia:

https://en.wikipedia.org/wiki/Divisor_function#Approximate_growth_rate .
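Robin’s criterion is easy to test numerically from a factorization. Here is a minimal sketch in Python (rather than the PARI/gp used later in these notes); the helper names `sigma_over_n` and `delta` are my own:

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma_over_n(factorization):
    """Given n = prod p^a as a list of (p, a) pairs, return (n, sigma(n)/n),
    using sigma(p^a)/p^a = (p^(a+1) - 1) / (p^a * (p - 1))."""
    n, ratio = 1, 1.0
    for p, a in factorization:
        ratio *= (p ** (a + 1) - 1) / (p ** a * (p - 1))
        n *= p ** a
    return n, ratio

def delta(factorization):
    """delta(n) = exp(gamma)*log(log(n)) - sigma(n)/n, as in Briggs, def. 4-2."""
    n, s = sigma_over_n(factorization)
    return exp(GAMMA) * log(log(n)) - s

# 55440 = 2^4 * 3^2 * 5 * 7 * 11 is a colossally abundant number > 5040
print(delta([(2, 4), (3, 2), (5, 1), (7, 1), (11, 1)]))  # ~0.071, positive, consistent with Robin's inequality
```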

We can plot how delta(n) changes with n. Here, one continues the study of the variation of log(delta(n)) from a linear (affine) best-fit function of log(log(n)), as in Figure 3 on page 255 of the 2006 paper by Keith Briggs.

The Best-fit line from his paper can be thought of as

y ~= f(x) = 0.323336 – x/2 .

In ‘x’, we would have log(log(n)), and in ‘y’ we would have f( log(log(n)) ) , i.e.:

0.323336 – log(log(n))/2 .

In other words,

log(delta(n)) is very closely approximated by 0.323336 – log(log(n))/2 .

Therefore, one plots the deviation,

log(delta(n)) – [ 0.323336 – log(log(n))/2 ] .

Negative values of : log(delta(n)) – [ 0.323336 – log(log(n))/2 ]

correspond to more negative values of log(delta(n)), i.e. to places where the ‘gap’ delta(n) is “unusually small or tight”. Nevertheless, compared to the values of – log(log(n))/2 , or of

0.323336 – log(log(n))/2 , the oscillations are tiny; see Briggs, Figure 3 (page 255), top panel.

For 8000 CA numbers, we show the deviation from the same Best-fit line, i.e.

Y = log(delta(n)) – [ 0.323336 – log(log(n))/2 ] , with X = log(log(n)),

for CA (colossally abundant) numbers n with log(log(n)) up to about 29.710462 , or in other words, the largest CA numbers ‘n’ considered are “close to” exp(exp(29.710462)) .
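The plotted quantities can be sketched in Python as follows (the function name `deviation` is mine; note that Briggs’ fit was made for 11.5 < log(log(n)) < 27, so a small example n sits far from the line):

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def deviation(n, sigma_over_n):
    """Y = log(delta(n)) - [0.323336 - log(log(n))/2],
    the vertical distance from Briggs' Figure-3 best-fit line."""
    delta = exp(GAMMA) * log(log(n)) - sigma_over_n
    return log(delta) - (0.323336 - log(log(n)) / 2)

# Example with the CA number 55440 = 2^4*3^2*5*7*11, where sigma(n)/n = 232128/55440.
# Since log(log(55440)) ~ 2.4 is far below the fitted range, Y is well below 0 here.
print(deviation(55440, 232128 / 55440))
```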

====

Added July 17, 2015:

In “Highly Composite Numbers” by Ramanujan, annotated by Jean-Louis Nicolas and Guy Robin, The Ramanujan Journal, 1997, one finds the previously unpublished continuation of Ramanujan’s 1915 paper. The continuation can also be found in the book by G.E. Andrews and Bruce C. Berndt.

If we look at formula 10.71.382 on page 386 of the book by G.E. Andrews and Bruce C. Berndt, from the continuation of Ramanujan’s (suppressed) 1915 paper “Highly Composite Numbers” (i.e. the paper annotated by Nicolas and Robin in The Ramanujan Journal, 1997), we see that the fine behaviour of delta(N)*sqrt(log(N)) for colossally abundant numbers ‘N’ has a contribution owing to (log(N))^{rho-1}, with rho running successively over the non-trivial zeros of the Riemann zeta function, say ordered by increasing modulus, so as to create a real-valued, absolutely convergent correction depending on the non-trivial zeros in both the upper and the lower half-plane.

My method was semi-empirical, in the sense that I chose the sign of the correction term S_1 (log(N)) to best cancel the oscillations in the 8000 terms of

y_j = delta(N_j)*sqrt(log(N_j)) – M .

As I recall, this involved adding to delta(N_j)*sqrt(log(N_j)) – M

(in PARI/gp notation):

2*exp(Euler)*sum(Y=1,11101,cos(moreImrho[Y]*log(X))/(1/4+moreImrho[Y]^2))

moreImrho[ ] is an array of the imaginary parts of the first 11101 non-trivial zeros in the upper half-plane, from Andrew Odlyzko’s tables of zeta zeros;

Euler is the Euler-Mascheroni constant 0.577 … ;

We take X to be log(N_j), for 1 <= j <= 8000, because in formula 382 Ramanujan applies the S_1 function to log(N), N being a colossally abundant number.

More PARI/gp commands:

for(X=1,8000, vz[X] = S5(exp(vx[X])) + vy[X])

the j’th element of the array vx[ ] is actually log(log(N_j)), for 1 <= j <= 8000;

? \u

S5 =

(X)->2*exp(Euler)*sum(Y=1,11101,cos(moreImrho[Y]*log(X))/(1/4+moreImrho[Y]^2)).
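For readers without PARI/gp, the S5 correction above can be mirrored in Python. A minimal sketch using only the first three zero ordinates instead of the 11101 from Odlyzko’s tables (the names `S5` and `IM_RHO` here are my translation, not the original code):

```python
from math import cos, exp, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
# Imaginary parts of the first few non-trivial zeta zeros (Odlyzko's tables);
# the note uses 11101 of them, here just three for illustration.
IM_RHO = [14.134725141734693, 21.022039638771555, 25.010857580145688]

def S5(x, zeros=IM_RHO):
    """First-order zeta-zero correction, as in the PARI/gp S5 above:
    2*exp(gamma) * sum over zeros t of cos(t*log(x)) / (1/4 + t^2)."""
    lx = log(x)
    return 2 * exp(GAMMA) * sum(cos(t * lx) / (0.25 + t * t) for t in zeros)

# the note applies S5 to log(N_j); here log(N) = exp(16), i.e. log(log(N)) = 16
print(S5(exp(16.0)))
```

The 1/(1/4 + t^2) weights shrink quickly, which is why the sum over zeros converges absolutely.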

===

In the plot below, the curve in red has as Y values:

y_j = delta(N_j) sqrt(log(N_j)) – M , M the mean, M = 1.3881198 .

The curve in blue is my attempt at subtracting from delta(N_j)*sqrt(log(N_j)) – M the “first order contribution from the non-trivial zeta zeros”, so to speak:

====

Added July 20, 2015:

For the data on 32,000 CA numbers out to about exp(exp(31.09)), I did a plot of

delta(N_j)*sqrt(log(N_j)) – M for 1<= j <= 32,000 in Red.

The M is the mean from the series for j = 1 to 8000, which was M ~= 1.3881198.

Once again, I made an attempt at subtracting the first order contribution from the zeta zeros.

The resulting rather flat curve is in blue:

====

Added July 27, 2015:

Computations done at lower ranges: for n up to about exp(exp(23)), again for 8000 CA numbers. The graph below should extend the graph of July 20 a few units to the left.

====

Added August 1st, 2015:

I did some numerical computations related to “Ramanujan’s limit” as s -> 1 from below, i.e. the step from formula (381) to formula (382) in HCN_annotated of 1997:

? exp(Euler)*(2*sqrt(2)-2)

%85 = 1.475488702200364178518860864621201659347381614115804886\

89386607807376073267075671719251575751211475309019684165155721\

406101382088041587203874211204983974650799847840404676584671979\

27908368025022220823

? -sqrt(log(N))*( abs(zeta(s))*exp(li(log(N)^(1-s)) -(2*s*(2^(1/(2*s))-1)/(2*s-1))*(((log(N)^(1/2-s))/log(log(N))))) -exp(Euler)*log(log(N)))

%86 = 1.47547725759967026339355147306687840290512567347228254983504863850044277248496\

22318145726478415454718439108527534503612832008967125745875995412112611678011

The above is the sum of the two main terms in the RHS of formula (381), i.e. “Case III. s > 1/2.”, minus exp(gamma)*log(log(N)), with the result then multiplied by sqrt(log(N)).

N was a very large number with log(log(N)) ~= 16, and ‘s’ was

s = 1 – 10^(-50) . It agrees with:

exp(gamma)*(2*sqrt(2)-2)

to 6 or 7 significant figures. I’ve also numerically subtracted the term in S_1 (log(N)) as it appears in formula (382), as in the computations on colossally abundant numbers above.

So perhaps the unspecified term O(1)/(sqrt(log(N))*log(log(N))) in formula (382)

accounts for the difference between the values of 1.3888 to 1.39 in blue in the graphs, and the asymptotic

exp(gamma)*(2*sqrt(2)-2) = 1.4754887022003641785188608646212…

? log(log(N))

%87 = 16.201053453223136061521113829407945298236354796856028649\

127110586893405656241062998717525594806682434449720417375839868\

7836959420743458426511200419871091416952842351655430260990324033\

92901076318053418
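As a quick double-precision sanity check, the limiting constant exp(gamma)*(2*sqrt(2)-2) from %85 above can be reproduced in Python:

```python
from math import exp, sqrt

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

limit = exp(GAMMA) * (2 * sqrt(2) - 2)
print(limit)  # agrees with the PARI/gp value %85 above to double precision
```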

====

Added August 3, 2015:

MEMO:

In deriving formulas (381) and (382) on page 143 of HCN_annotated,

published in the Ramanujan Journal in 1997, Ramanujan at an earlier step integrates an

expression with respect to ‘s’, where ‘s’ is a parameter in Ramanujan’s study of extreme values

of the sigma_{-s} (.) function, ‘s’ being a non-negative real number.

More precisely, referring to Paragraph 68. on page 138,

Ramanujan integrates the LHS and RHS of formula (343) with respect to ‘s’.

Formula (343) is from Paragraph 66. on page 136.

His approximate result of integrating (343) w.r.t. s is formula (356). In completing this

endeavour, Ramanujan uses formulas (354) and (355).

Ramanujan writes that formula (354) is “easily shown”. Moreover, (354) presents as an exact formula (an identity), i.e. there is no error term on the RHS of (354) of the general type (something) + Big_O(something_else).

By contrast, (355) on the RHS has an error term:

O{ x^(1/2-s)/(log(x)^2) } .

The LHS of (355) is: int_{indefinite} ( S_s (x) ds ) .

S_s(x) is defined on page 134 as follows:

(Definition): S_s (x) :=

– s sum_{over rho being the non-trivial zeta zeros} ( x^(rho-s) / [rho*(rho -s)] )

Note: In S_s (x), one benefits from combining the term in, say, rho =

1/2 + i*14.134725…

with the term in rho^(bar) = 1/2 – i*14.134725… ;

this is an easy elementary calculation.

IMPORTANT:

At first glance, after cursory examination, it seems as though the effects of the error term:

O{ x^(1/2-s)/(log(x)^2) }

in formula (355) propagate to further formulas, including (perhaps):

formulas (356), (357), (359), (361), (362), (363), (373), (374), (375), (376),

(377), (378), (379), (380), (381) and (382).

Note: The list of formulas above, to which the O{ x^(1/2-s)/(log(x)^2) } error term in (355), in the indefinite (alternatively: definite with unspecified integration bounds) integral essentially of

S_s (x) ds ,

“propagates”, is a tentative one, subject to later review, although it seems well-founded: such propagation is to be expected from approximating an integral like the one in formula (355), and ultimately from integrating the two sides of formula (343) w.r.t. ‘s’, on page 136.

[ Minor point, begin: ]

Correcting an earlier mis-conception:

Technically, Sigma_{-s} (N) is defined in

formula (323), page 132.

This is not the same as formula (317) on page 131;

Formula (323) has the three largest terms of formula (317) , and presumably

this is enough to approximate the generalized superior highly composite numbers in

the way Ramanujan says (or suggests, or seems to suggest).

[ Minor point, end ]

====

Added August 4, 2015:

For S_s (x) as defined on page 134 of HCN_annotated of 1997:

Combining the terms in rho and rho^{bar} in S_s(x), and

setting t:= Im(rho), I get the sum:

-2*s*(cos(log(x)*t)*(1/4-t*t-s/2)+sin(log(x)*t)*(t-s*t))*(x^(1/2-s))/((1/4-t*t-s/2)*(1/4-t*t-s/2)+(t-s*t)*(t-s*t))

How does one integrate that with respect to s ?

Numerics:

? -s*(exp(log(x)*(rho-s))/(rho*(rho-s)) + exp(log(x)*(rhobar-s))/(rhobar*(rhobar-s)))

= 0.000107134564410859877495232267857061994006743111663363248186 + 0.E-217*I

? -2*s*(cos(log(x)*t)*(1/4-t*t-s/2)+sin(log(x)*t)*(t-s*t))*(x^(1/2-s))/((1/4-t*t-s/2)*(1/4-t*t-s/2)+(t-s*t)*(t-s*t))

= 0.000107134564410859877495232267857061994006743111663363248186892733386

? t-imag(rho)

= 0.E-210
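The numerical agreement checked above can also be verified in Python, comparing the complex pair-sum with the combined real-valued form, for rho = 1/2 + i*t (function names are mine):

```python
from math import cos, sin, log

def pair_term_complex(x, s, t):
    """-s*( x^(rho-s)/(rho*(rho-s)) + x^(rhobar-s)/(rhobar*(rhobar-s)) ),
    the rho and rho-bar pair from S_s(x), with rho = 1/2 + i*t."""
    rho = 0.5 + 1j * t
    rb = rho.conjugate()
    return -s * (x ** (rho - s) / (rho * (rho - s))
                 + x ** (rb - s) / (rb * (rb - s)))

def pair_term_real(x, s, t):
    """The combined real-valued form above, with A = 1/4 - t^2 - s/2
    and B = t - s*t (the real and imaginary parts of rho*(rho - s))."""
    lx = log(x)
    A = 0.25 - t * t - s / 2
    B = t - s * t
    return -2 * s * (cos(lx * t) * A + sin(lx * t) * B) * x ** (0.5 - s) / (A * A + B * B)

# the two forms agree, and the imaginary parts cancel in the pair-sum
x, s, t = 1.0e6, 0.9, 14.134725141734693
print(pair_term_complex(x, s, t).real - pair_term_real(x, s, t))  # ~0
```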

====

Added August 24, 2015:

Data on 2560 CA numbers out to ~= exp(exp(33.17)) :

====

Added August 25, 2015:

Data for 2560 colossally abundant numbers out to exp(exp(33.17)). For more details on the calculations for the blue curve, please see the additions of July 18 and July 20.

====

Added October 23, 2015:

The peaks in the blue curve around log(log(N)) = 33 made me think that perhaps the computations were not being done to sufficient precision. The computations for the blue curve in the August 25 addition/figure were done using the first ~11,000 non-trivial zeta zeros; the zeros with imaginary part > 0 suffice, because if 1/2 + i*t is a non-trivial zero (t real, t > 0), then so is 1/2 – i*t. Therefore, I used the full first 100,000 non-trivial zeta zeros with Im(rho) > 0 from Andrew Odlyzko’s “Tables of zeros of the Riemann zeta function” here:

http://www.dtc.umn.edu/~odlyzko/zeta_tables/index.html

That still left peaks in the blue curve around log(log(N)) = 33, although strictly speaking no new “blue curve” was graphed; I only inspected the numbers that would make up the “blue curve”.

I therefore considered that it could be wise to redo the floating point arithmetic in quad precision, one step beyond the ~= 64-bit mantissas of “long double” arithmetic.

Reference on C “long doubles”: x86 Extended Precision Format at Wikipedia,

https://en.wikipedia.org/wiki/Extended_precision#x86_Extended_Precision_Format .

Unfortunately, logarithms are very slow in quad precision, and we need roughly one log per prime.

First, I thought of using Taylor series for log(p + epsilon), where log(p) is known and epsilon << p.

Lately, I’ve thought of multiplying many, many primes cast to quad precision, while staying under the magnitude limit of ~= 10^4900, and then taking the log of the product.

100 numbers of size 10^48 can be multiplied without overflow, thanks to the 15-bit exponent … cf. : https://en.wikipedia.org/wiki/Extended_precision#x86_Extended_Precision_Format

The simplest test of this would be to multiply 1 to 100 in quad precision, take the log and store it, then multiply 101 to 200 in quad precision, take the log and add it to the previous log, and so on, thus obtaining quad-precision approximations of log(100!), log(200!), …, log((100k)!) up to large ‘k’.
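The blocking idea can be sketched in Python (Python floats are only double precision, so this illustrates the scheme, not the quad-precision gain; the function name `log_factorial_blockwise` is mine):

```python
from math import lgamma, log, prod

def log_factorial_blockwise(n, block=100):
    """Accumulate log(n!) by multiplying integers in blocks and taking
    one log per block, mimicking the proposed one-log-per-block scheme.
    Python ints are exact, so each block product is computed exactly."""
    total = 0.0
    for start in range(1, n + 1, block):
        total += log(prod(range(start, min(start + block, n + 1))))
    return total

# sanity check against the standard library: log(1000!) = lgamma(1001)
print(abs(log_factorial_blockwise(1000) - lgamma(1001)))  # tiny
```

In the actual quad-precision setting, the block size would be chosen to keep the running product under the ~10^4932 overflow limit of binary128.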

====

Added April 02, 2016:

For colossally abundant numbers N_j, we define:

delta(N_j) := **exp(gamma)*log(log(N_j)) – sigma(N_j)/N_j** (as in Briggs, 2006).

Briggs2006 being:

“Abundant Numbers and the Riemann Hypothesis”, Experimental Mathematics 15, 251-6 (2006).

Please see, for example, Project Euclid,

https://projecteuclid.org/euclid.em/1175789744 for an archive of the article.

For n a colossally abundant or CA number, Briggs in definition (4-2) defines:

delta(n) := e^gamma log log(n) – sigma(n)/n (4-2)

log(delta(n)) is almost linear (affine) in log(log(n)): see Figure 3.

In Figure 3, Briggs gives the best fit over the range:

log(delta(n)) ~= 0.323336 – log(log(n))/2 ,

for n a CA number, with 11.5 < log(log(n)) < 27.

The discrepancy

Y(n) := log(delta(n)) – [ 0.323336 – log(log(n))/2 ]

is given in Figure 3 (bottom) for n a CA number, and 11.5 < log(log(n)) < 27 .

Starting with my additional note of July 17, 2015 above, I started plotting:

y_j = delta(N_j) sqrt(log(N_j)) – M

N_j being a CA number, and M a constant, with M = 1.3881198.

From reading Ramanujan’s HCN_annotated by Nicolas and Robin, I began subtracting from y_j as above the “first order contribution from the non-trivial zeta zeros”, as in the July 18, 2015 note. The resulting curve in blue is much less oscillatory than the initial curve in red.

In subsequent weeks, I extended the upper range of the CA numbers, up to CA numbers n with log(log(n)) > 33 [ August 24, 2015 ]. The 2560 CA numbers had prime factors up to about 256 000 000 000 000 = 2.56e14, with log(log(n)) ~= 33.

In the August 25, 2015 Note, I added the curve obtained by subtracting the “first order contribution from the non-trivial zeta zeros” ; this proved to be a puzzling result, as there remained sharp peaks in the Blue curve.

In the October 23, 2015 Note, I wrote:

“The peaks in the blue curve around log(log(N))=33 made me think that perhaps the computations were not being done to sufficient precision.”

and

“I therefore considered that it could be wise to redo the floating point arithmetic in quad precision, one step beyond the ~= 64-bit mantissas of “long double” arithmetic.”

Below, I have a big graph done using “long double” C floats (about 80 bits of total space, including sign and exponent) and a blow-up, showing the computation both with the C 80-bit floats and with quad-precision 128-bit floats. The difference in the blow-up is easy to see.

I’m currently working on redoing the CA number computations in 128-bit quad precision using the GNU GCC quadmath library.

====

Added June 18, 2016:

Data on 16,000 CA numbers out to ~= exp(exp(33.17)).

The computations of August 24, 2015 were redone in quad precision, which has a precision of approximately 34 decimal digits.

The August 24, 2015 computations had been done with GNU GCC long doubles, which have a precision of approximately 19 decimal digits.