meditationatae

Just another WordPress.com site

Computations on 8000 colossally abundant numbers out to exp(exp(29.710462))

meditationatae:

Updated today …

Originally posted on meditationatae:

Variation of log(delta(n)) from best-fit linear function of log(log(n))

Reminder:  For a colossally abundant number n, with n > 5040, one can define

delta(n) := exp(gamma)*log(log(n)) - sigma(n)/n               [Briggs, def. 4-2, page 254].

This is as in the paper by Keith Briggs,

Briggs, K., 2006, "Abundant Numbers and the Riemann Hypothesis".

sigma(.) is the sum-of-divisors function; sigma(n)/n is the same as the sum of the inverses of all divisors of n [easy proof], also known as the "abundancy index" of n; gamma is the Euler-Mascheroni constant 0.57721…

The theorem of Guy Robin is that the Riemann Hypothesis is equivalent to:

exp(gamma)*log(log(n)) - sigma(n)/n > 0, for all n > 5040. Also see Grönwall's theorem on the asymptotic growth rate of the sigma function, at Wikipedia:

https://en.wikipedia.org/wiki/Divisor_function#Approximate_growth_rate  .

We can plot how delta(n) changes with…

View original 476 more words

Written by meditationatae

July 17, 2015 at 12:28 pm

Posted in History

How I used the Primesieve program to generate primes, and figure for 32000 CA numbers

It's easier to count the primes up to some large number N, such as 10^12, than to generate all the primes up to N. Sometimes, though, we need to generate all the primes up to N, and then prime-counting functions will not do. Many good algorithms for generating the primes up to N are known as sieves. The most famous is the sieve of Eratosthenes, and there is a Wikipedia article on the subject. There are also variations on the sieve of Eratosthenes; all the subtleties and tricks used to speed up the sieve of Eratosthenes would take us too far afield, and in any case I'm no expert on these variations. Another method for generating the primes up to N, the sieve of Atkin, was invented by A. O. L. Atkin and Daniel J. Bernstein; this method is more advanced, and there is a Wikipedia article on the sieve of Atkin.

For a positive integer k, sigma(k) is the sum of all the divisors of k, including 1 and k. For example, if k = 24, the divisors of 24 are:  1, 2, 3, 4, 6, 8, 12, 24. So sigma(24) = 1+2+3+4+6+8+12+24 = 60.  We can use the prime factorization of 24 to compute sigma(24): 24 = 3*8 = 3*(2^3) .

Because 3 and 8 are co-prime, we can say (this can be proved) that sigma(24) = sigma(3)*sigma(8).

Since 3 is a prime, its divisors are 1 and 3, so sigma(3) = 3+1 = 4.

Since 8 is 2^3 and 2 is a prime, the divisors of 8 are 2^0, 2^1, 2^2 and 2^3. In other words, they are: 1, 2, 4 and 8 so that sigma(8) = 1+2+4+8 = 15. Now, sigma(3)*sigma(8) = 4*15 = 60, and we can now verify that sigma(24) = sigma(3)*sigma(8) is true.

How large can sigma(n) get, compared to n? For this, we can look at the ratio sigma(n)/n and ask how large it can get. It turns out that sigma(n)/n can surpass any fixed pre-assigned value, for example 1000 , for some unusual n with “many divisors”. So:

there exists an n such that:  sigma(n)/n >  1000 .

For more on this, there is Gronwall’s theorem on the asymptotic growth rate of the sigma function, published in 1913. Gronwall’s Theorem implies that large values or record values of sigma(n)/n grow very slowly with n, so much so that:

sigma(n)/(n log(log(n))) is bounded .

To avoid problems with log(log(1)) and other small n, and because we are looking at asymptotics, we can safely insert the proviso "for n > 16", i.e.:

"For n > 16, sigma(n)/(n log(log(n))) is bounded".

Gronwall proved more than that: he proved that

lim sup_{n -> oo}   sigma(n)/(n log(log(n))) = e^gamma = exp(gamma), where gamma is the Euler-Mascheroni constant,

gamma = 0.5772156649………..

This means that, if C > exp(gamma) then only finitely many n satisfy

sigma(n)/(n log(log(n))) > C, while if

C < exp(gamma), then for infinitely many n one has:

sigma(n)/(n log(log(n))) > C.

As far as I know, Gronwall didn’t state anything about the rate at which large or record values of sigma(n)/(n log(log(n))) approach exp(gamma) as n increases.

Srinivasa Ramanujan studied related questions in his 1915 paper "Highly Composite Numbers", or HCN. This was the first part of his B.Sc. (or Ph.D.) dissertation, and it appears in the collected papers of Ramanujan. Ramanujan studied the number-of-divisors function d(n). For example, since the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24, we have d(24) = 8. Ramanujan defined a highly composite number as a number n such that for all m < n, d(m) < d(n). For example, 5040 is a highly composite number and d(5040) = 60.

In the unpublished part of HCN (see: “Highly Composite Numbers” by Srinivasa Ramanujan, annotated by Jean-Louis Nicolas and Guy Robin, The Ramanujan Journal, vol. 1 number 2, 1997),

Ramanujan studied the sum of d^(-s) over the divisors d of a number n, for a fixed real number s >= 0. This sum is the same as:

(sum_{d divides n} d^s) / n^s .  For example, with s = 1, the sum of 1/d over the divisors d of n equals the sum of the divisors of n, divided by n, in other words:

sigma(n)/n .

Whereas it appears that the unpublished part of HCN got little publicity in the first decades after 1915, Alaoglu and Erdos published their paper "On highly composite and similar numbers" in 1944 in the Transactions of the American Mathematical Society; it is available at no charge from the web page:

http://www.ams.org/journals/tran/1944-056-00/home.html

The Erdos-Alaoglu paper has some results on Ramanujan’s highly composite numbers published in 1915, where the object of study is the number of divisors function d(n), and in my opinion more interesting results on highly abundant numbers and sub-classes of the highly abundant numbers, where n is highly abundant if for all m < n, sigma(m) < sigma(n), sigma being the sum of divisors function. When it comes to highly abundant numbers and their sub-classes, the object of study is sigma(n) or alternatively sigma(n)/n. Erdos and Alaoglu define a more restrictive class of numbers which they call superabundant numbers: n is superabundant if, for all m < n,  sigma(m)/m < sigma(n)/n . Finally, they define in the introduction a more restrictive class of the superabundant numbers which they call colossally abundant numbers; these have a precise characterization: see Section 3, and Theorem 10 in particular of the Erdos-Alaoglu paper.

When doing computations on sigma(n)/n and log(n) for several (hundreds or thousands of) very large colossally abundant numbers n, it can be convenient to choose a large constant K, for example K = 10^9, and to tabulate, for values of the integer m >= 1, both

sum_{ p <= mK} log(p)    and

product_{ p <= mK} ( 1 + 1/p ) .

If N is a very large colossally abundant number, and q is the largest prime factor of N, then all prime factors p of N that appear with exponent 2 or greater in the prime factorization of N "are small compared to q", or p << q. So to compute log(N), one can start by summing the logs of the primes <= q, and then make an adjustment. [ I remember seeing something related to the size of the prime factors p that appear with exponent 2 or higher in the annotated continuation of Highly Composite Numbers, which was published in 1997. I will look into this. Further note: in the 1997 continuation of HCN, Ramanujan defines a standard form for generalized superior highly composite numbers in Section 60, essentially as the product of (p_1)#, (p_2)#, …, (p_r)#, …, (p_Q)# with p_1 >= p_2 >= p_3 >= … >= p_r >= … >= p_Q, where e.g. (p_1)# is a primorial, the product of the primes up to p_1. When s = 1, the function under study is sigma(n)/n, and the generalized superior highly composite numbers are the same as Erdos and Alaoglu's colossally abundant numbers. Section 61 begins: "Let us consider the nature of p_r".

In any case, I believe that for colossally abundant numbers, the number of prime factors with exponent 2 or higher is tiny, when compared to the total number of prime factors; this can be checked by doing explicit computations of the exponents for some small epsilon, such as epsilon = 0.000001 . ]

In the same way, to compute sigma(N)/N, one can start by computing the product of sigma(p)/p over the primes p <= q, or in other words the product of (p+1)/p for the primes p <= q. For the large colossally abundant numbers I've done computations with (generically, N), which usually have several billion distinct prime factors, no more than a million primes have exponents of 2 or greater; in other words, every prime p except at most the first million primes either occurs in N to the first power, or does not divide N.

Below, I include the C source code I used for sieving primes and tabulating:

sum_{ p <= mK} log(p)    and

product_{ p <= mK} ( 1 + 1/p ) . (more or less, or morally equivalent, with K = 10^9) :

[david2@localhost primesieve-5.4.2]$ time ./testprogram75a.out > testprogram75a.txt

real 484m56.813s
user 484m43.305s
sys 0m4.036s

This says it took about 8 hours (484 minutes) of real time to run testprogram75a.out, which uses Primesieve by Kim Walisch.

$ cat testprogram75a.c
#include <primesieve.h>
#include <stdint.h>   /* for uint64_t */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    unsigned long long lastp;
    unsigned long long bound;
    unsigned long long Lobound;
    int counter;
    primesieve_iterator pi;
    long double prod_phi_primes;
    long double sum_log_primes;

    primesieve_init(&pi);
    uint64_t prime = 0;

    prod_phi_primes = (long double) 1;
    sum_log_primes = (long double) 0;
    bound = 1000000000ull;
    Lobound = bound - 1000000000ull;

    for (counter = 0; counter < 8000; counter++)
    {
        while ((prime = primesieve_next_prime(&pi)) < bound)
        {
            lastp = prime;
            /* per-block running product of (p+1)/p and sum of log p */
            prod_phi_primes = prod_phi_primes + prod_phi_primes/((long double) prime);
            sum_log_primes = sum_log_primes + logl((long double) prime);
        }

        printf("%llu %llu ", Lobound, bound);
        printf("%.25Lf ", prod_phi_primes);
        printf("%.25Lf ", sum_log_primes);
        printf("%llu\n", lastp);

        bound = bound + 1000000000ull;
        Lobound = Lobound + 1000000000ull;

        /* reset for the next block, and fold in the prime that
           terminated the while loop: it belongs to the next block */
        prod_phi_primes = (long double) 1;
        sum_log_primes = (long double) 0;
        prod_phi_primes = prod_phi_primes + prod_phi_primes/((long double) prime);
        sum_log_primes = sum_log_primes + logl((long double) prime);
    }

    primesieve_free_iterator(&pi);

    return 0;
}

Note: #include <primesieve.h>  tells the C preprocessor (which is not the C compiler proper, but is part of the translation to machine code) to include the header file primesieve.h, which is provided by Kim Walisch's Primesieve package/program.

Cf.:

http://primesieve.org/

The compilation was done as follows:

gcc -lprimesieve -lm -O2 -o testprogram75a.out testprogram75a.c

-lm tells the compiler driver to link against the standard math library.

-lprimesieve tells it to link against the Primesieve library.

I had set:

$ echo $LD_LIBRARY_PATH
/home/david2/Downloads/primesieve-5.4.2/libs:/usr/local/lib64::/home/david2/Downloads/primesieve-5.4.2/include/primesieve:/home/david2/Downloads/primesieve-5.4.2/libs

So, I will next try 4 times as far as 8000 [billion] : 32000 instead of 8000 in testprogram75a.c …

$ cat testprogram80a.c
#include <primesieve.h>
#include <stdint.h>   /* for uint64_t */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    unsigned long long lastp;
    unsigned long long bound;
    unsigned long long Lobound;
    int counter;
    primesieve_iterator pi;
    long double prod_phi_primes;
    long double sum_log_primes;

    primesieve_init(&pi);
    uint64_t prime = 0;

    prod_phi_primes = (long double) 1;
    sum_log_primes = (long double) 0;
    bound = 1000000000ull;
    Lobound = bound - 1000000000ull;

    for (counter = 0; counter < 32000; counter++)
    {
        while ((prime = primesieve_next_prime(&pi)) < bound)
        {
            lastp = prime;
            /* per-block running product of (p+1)/p and sum of log p */
            prod_phi_primes = prod_phi_primes + prod_phi_primes/((long double) prime);
            sum_log_primes = sum_log_primes + logl((long double) prime);
        }

        printf("%llu %llu ", Lobound, bound);
        printf("%.25Lf ", prod_phi_primes);
        printf("%.25Lf ", sum_log_primes);
        printf("%llu\n", lastp);

        bound = bound + 1000000000ull;
        Lobound = Lobound + 1000000000ull;

        /* reset for the next block, and fold in the prime that
           terminated the while loop: it belongs to the next block */
        prod_phi_primes = (long double) 1;
        sum_log_primes = (long double) 0;
        prod_phi_primes = prod_phi_primes + prod_phi_primes/((long double) prime);
        sum_log_primes = sum_log_primes + logl((long double) prime);
    }

    primesieve_free_iterator(&pi);

    return 0;
}

time ./testprogram80a.out > testprogram80a.txt [Enter]

job submitted.

JOB DONE …

FIGURE BELOW…

Post scriptum: I prefer not to enable comments on this Blog. This is because a while ago, I read that WordPress has vulnerabilities, and so I disabled comments after that. You can send comments by email to david250 AT videotron DOT ca

If you arrived here from the sci.math newsgroup, replies in the thread on fast prime-sieving programs are also fine.

Written by meditationatae

July 11, 2015 at 7:32 pm

Posted in History

New computations on 8000 colossally abundant numbers using Kim Walisch’s Primesieve program

[Two figures: xyWalischJuly82015 and xyGRobin8000_Graph]

Written by meditationatae

July 8, 2015 at 5:37 pm

Posted in History

header of new WAV file

I recorded a second WAV file of white noise:

June26TvNoise.wav, 2,073,214,126 bytes in length.

Using decrypt04jb.c (see below), I wrote the beginning bytes as unsigned numbers in base ten, from 0 to 255; using the ASCII table, we can find which character is represented and decipher the header of this WAV file:

[david2@localhost JUNE_26_Tv_Noise]$ more decrypt04jb.txt

byte number: stored unsigned char (base 10) and ASCII decoding (partial)
1: 82    R
2: 73    I
3: 70    F
4: 70    F
5: 166
6: 188
7: 146
8: 123
9: 87
10: 65
11: 86
12: 69
13: 102
14: 109
15: 116
16: 32
17: 18
18: 0
19: 0
20: 0
21: 1
22: 0
23: 2
24: 0
25: 68
26: 172
27: 0
28: 0
29: 16
30: 177
31: 2
32: 0
33: 4
34: 0
35: 16
36: 0
37: 0
38: 0
39: 100     d
40: 97      a
41: 116    t
42: 97      a
43: 128
44: 188
45: 146
46: 123
47: 0
48: 0
49: 0
50: 0

Ref.: Tutorial on WAV files on the Web, in the link, in the previous post.

Source code of decrypt04jb.c:

#include <stdio.h>

int main(void)
{
    int j;
    unsigned char car;
    FILE *in;

    in = fopen("/home/david2/RANDOM/JUNE_26_Tv_Noise/June26TvNoise.wav", "r");

    for (j = 0; j < 100000; j++)
    {
        fscanf(in, "%c", &car);
        printf("%3d: ", j + 1);   /* byte number, starting at 1 */
        printf("%3d\n", car);     /* stored unsigned char, in base 10 */
    }

    fclose(in);

    return 0;
}

To be continued …

David Bernier

Written by meditationatae

July 4, 2015 at 3:41 pm

Posted in History

Reference on *.wav audio files

I found all I needed to know about the header(s) of *.wav files and what they mean from this web page:

http://www.topherlee.com/software/pcm-tut-wavformat.html

It says there:

"The header of a WAV (RIFF) file is 44 bytes long and has the following format:"

In the case of the *.wav file I had, the header was actually 46 bytes long, but very close to what they say at:

http://www.topherlee.com/software/pcm-tut-wavformat.html

David Bernier

Sat Jul 4 11:13:02 EDT 2015

Written by meditationatae

July 4, 2015 at 3:13 pm

Posted in History

The C source code file makebin02a.c

/*************************************************************************************

In the previous post, I explained how one can extract the least significant bit

from the channel 1 16-bit PCM fields of stereo (2-channel) 32-bit per sample

WAVE files, format *.wav, and write them as 1s and 0s at one bit per line

in an output file, which in my case was:

/home/david2/RANDOM/decrypt45.txt

The C source code file below reads 8 bits from

decrypt45.txt , packs an 8 bit byte (the unsigned char ‘byte’ )

with the 8 bits, and writes the unsigned char ‘byte’ to the file:

/home/david2/RANDOM/LSBwave.bin .

When the program has finished, the file

LSBwave.bin contains 1,764,000 bytes from the

14,112,000-line file: /home/david2/RANDOM/decrypt45.txt .

Each line of decrypt45.txt contains a ‘0’ or a ‘1’, at random.

Source code of makebin02a.c :

****************************************************************************/

#include <stdio.h>

int main(void)
{
    unsigned char array[256];
    unsigned char uc0;
    unsigned char uc1;
    unsigned char byte;
    int j;
    int i;
    int bits[8];
    FILE *in;
    FILE *out;

    uc0 = (unsigned char) 0;
    uc1 = (unsigned char) 1;

    in = fopen("/home/david2/RANDOM/decrypt45.txt", "r");
    out = fopen("/home/david2/RANDOM/LSBwave.bin", "w");

    /* array[j] == j; the entries array[1], array[2], array[4], ...,
       array[128] are the powers of two used as bit weights below */
    array[0] = uc0;
    for (j = 1; j < 256; j++)
    {
        array[j] = array[j-1] + uc1;
    }

    for (i = 0; i < 1764000; i++)
    {
        byte = uc0;

        for (j = 0; j < 8; j++)
        {
            fscanf(in, "%d", &bits[j]);
        }

        /* pack the 8 bits into one byte, least significant bit first */
        if (bits[0] > 0) byte = byte + array[1];
        if (bits[1] > 0) byte = byte + array[2];
        if (bits[2] > 0) byte = byte + array[4];
        if (bits[3] > 0) byte = byte + array[8];
        if (bits[4] > 0) byte = byte + array[16];
        if (bits[5] > 0) byte = byte + array[32];
        if (bits[6] > 0) byte = byte + array[64];
        if (bits[7] > 0) byte = byte + array[128];

        fprintf(out, "%c", byte);
    }

    fclose(in);

    fclose(out);

    return 0;
}

Written by meditationatae

July 4, 2015 at 2:58 pm

Posted in History

C source code to select the least significant bit in a *.wav file of white noise, at 2 channels and 16 bits/channel ; name = decrypt45.c

/****************************

the working directory was:  /home/david2/RANDOM

and the input white-noise *.wav file, with 2 channels of audio and

16 bits of depth per pulse-code modulated (PCM) sample per audio channel,

for a total of 32 bits per sample; the idea is to take the least significant bit

from say Channel 1 and throw away channel 2 ; The LSB should be the

“most random” …

The input file , /home/david2/RANDOM/output.wav is a

895,123,758-byte *.wav file produced by Windows Magnetophone

recording in *.wav format from the two microphones (stereo input),

an analog TV set tuned to a non-broadcast (empty) TV channel, so

it’s white noise, both analog audio and analog video.

Windows used stereo at 16 bits depth, and 44100 samples per second:

this is contained in the 44 or 46 first bytes of the *.wav file.

The Windows *.wav was zipped with Winzip, burnt to disc (a CD), then

sent to a Linux Box, and unzipped, resulting in

/home/david2/RANDOM/output.wav . The program

decrypt45.c skipped approximately the first 90,000 bytes, as there were

many zero bytes there.

The output, redirected to: decrypt45.txt

is 28,224,000 bytes long, in 14,112,000 lines of characters ‘0’ or ‘1’ :

$ wc decrypt45.txt
14112000 14112000 28224000 decrypt45.txt

This is done to get an unambiguous, easily processed file of 0’s and 1’s .

This unambiguous file of 0s and 1s was used as input by the next program,

makebin02a.c (next post) to pack the 14,112,000 random bits

at 8 per byte into a binary file of 1,764,000 bytes.

This packed file, /home/david2/RANDOM/LSBwave.bin of 1,764,000 bytes

is appropriate for use with base64 encoding in the manner:

$ base64 LSBwave.bin  [ENTER ]

gives “truly random” base64 characters for assistance in picking

passwords, etc.

The strongest compression, "xz -9", could not compress

LSBwave.bin to less than 1,764,000 bytes. Nevertheless, many

cryptography experts suggest XOR’ing random bits from noise/physical devices

with bits from pseudo-random number generators.

I’m assuming that the audio signal from the TV, its copy to Windows, the

CD engraved with the zipped *.wav file, and the files in the Linux Box

are safe from any intruders, except for the very very persistent.

The precautions to take go with the threat model…

LSBwave.bin can be pre-pended using makebin06a.c with an appropriate 44 or 46-byte

*.wav file Header to produce a “synthetic” 10-second audio file:

the 1,764,046-byte WAVE audio file: lsbwave.wav ;

this in turn can be played in VLC on Linux.

I consider this mini-project a success because, while the *.wav file of

white noise from the TV speakers, the 895,123,758-byte output.wav file,

WAS significantly biased (compressible), the output from taking the

least significant bit from 1 channel of every 32-bit 2-channel sample,

and packing at 8 bits per byte, the  1,764,000-byte file  LSBwave.bin ,

has resisted any compression tried so far .

David Bernier ( @doubledeckerpot ) ,

Quebec City,

Canada,

Sat Jul 4 04:38:11 EDT 2015

follows:  decrypt45.c from:

/home/david2/RANDOM/decrypt45.c

***********************************************************************/

#include <stdio.h>

int main(void)
{
    int j;
    unsigned char car;
    unsigned char uc1;
    unsigned char uc0;
    unsigned char bitwiseand;
    FILE *in;

    in = fopen("/home/david2/RANDOM/output.wav", "r");

    uc1 = (unsigned char) 1;
    uc0 = (unsigned char) 0;

    for (j = 0; j < 56547736; j++)
    {
        fscanf(in, "%c", &car);

        if (j > 99737)
        {
            /* one byte out of every 4-byte (2-channel, 16-bit) frame */
            if ((j % 4) == 2)
            {
                /* printf("%3d: ", j+1); */
                /* printf("%3d ", car);  */
                bitwiseand = uc1 & car;   /* keep the least significant bit */
                if (bitwiseand == uc1)
                {
                    printf("%d\n", uc1);
                }
                else
                {
                    printf("%d\n", uc0);
                }
            }
        }
    }

    fclose(in);

    return 0;
}


Written by meditationatae

July 4, 2015 at 8:41 am

Posted in History
