## Kenneth Johnston’s Unusual Suspects, Parts 4-6, Coda

Part 2 of my comments on Kenneth Johnston’s book *Unusual Suspects*.

Originally posted on Ellen And Jim Have A Blog, Two:

There is no more sombre enemy of good art than the pram in the hall … Whom the gods wish to destroy they first call promising … I was a stage rebel, Orwell a true one — Cyril Connolly, *The Enemies of Promise*

Isaac Cruickshank, Royal Extinguisher or Gulliver Putting out Patriots

Dear Friends and Readers,

This is the second half of my summary of and commentary on Johnston’s *Unusual Suspects* (see Parts 1-4). This part of Johnston’s book will probably be more familiar territory to those who have read the novels of the romantic and regency period and studied their milieu and development (say, in Gary Kelly’s survey). Women who wrote on behalf of radical ideas, 18th-century versions of feminism, or reform were given a much rawer response than men, and there were automatically suspect nations (Chapters 7 & 8 of Part 4), so the novel…

View original 4,624 more words

## Kenneth Johnston’s Unusual Suspects, Prologue, Parts 1-4

multi-post …

Originally posted on Ellen And Jim Have A Blog, Two:

It was a misfortune to any man to have been born in the latter end of the last century … The flame of liberty, the light of intellect, was to be extinguished with the sword — or with slander, whose edge is sharper than the sword — Hazlitt, *The Spirit of the Age* (1825), quoted by Johnston

Dear friends and readers,

Today I finished writing a review I’ve been reading and working towards for several weeks. I didn’t mean to take such time with it, but Kenneth Johnston’s *Unusual Suspects: Pitt’s Reign of Alarm & the Lost Generation of the 1790s* is so good and important that I wanted to be able to place it in its scholarly as well as contemporary context — and so I read other texts and reread some primary materials. This blog is not that review; like other blog-reviews I’ve done, it’s rather a summary…

View original 4,151 more words

## Computations on 8000 colossally abundant numbers out to exp(exp(29.710462))

Updated today …

Originally posted on meditationatae:

(Figure: variation of log(delta(n)) from the best-fit linear function of log(log(n)))

**Reminder:** For a colossally abundant number n, with n > 5040, one can define

**delta(n) := exp(gamma)\*log(log(n)) - sigma(n)/n** [Briggs, def. 4-2, page 254].

This is as in the paper by Keith Briggs,

Briggs, K., 2006, “Abundant Numbers and the Riemann Hypothesis”.

sigma(.) is the sum-of-divisors function; sigma(n)/n equals the sum of the inverses of all the divisors of n [easy proof], and is also known as the “abundancy index” of n. **gamma is the Euler-Mascheroni constant 0.57721…**

The **Theorem of Guy Robin** is that the Riemann Hypothesis is equivalent to :

**exp(gamma)\*log(log(n)) - sigma(n)/n > 0, for all n > 5040.** Also see Grönwall’s theorem on the asymptotic growth rate of the sigma function, at Wikipedia:

https://en.wikipedia.org/wiki/Divisor_function#Approximate_growth_rate .

We can plot how delta(n) changes with…

View original 476 more words

## How I used the Primesieve program to generate primes, and figure for 32000 CA numbers

It’s easier to count the primes up to some large number N, such as 10^12, than to generate all the primes up to N. Sometimes, though, we need to generate all the primes up to N, and then prime-counting functions will not do. Many good algorithms for generating the primes up to N are known as sieves. The most famous is the sieve of Eratosthenes, and there is a Wikipedia article on the subject. There are also variations on the sieve of Eratosthenes, and all the subtleties and tricks used to speed up the sieve of Eratosthenes would take us too far afield; besides, I’m no expert on these variations. Another method for generating the primes up to N is the sieve of Atkin, invented by A. O. L. Atkin and Daniel J. Bernstein; this method is more advanced, and there is a Wikipedia article on the sieve of Atkin.

For a positive integer k, sigma(k) is the sum of all the divisors of k, including 1 and k. For example, if k = 24, the divisors of 24 are: 1, 2, 3, 4, 6, 8, 12, 24. So sigma(24) = 1+2+3+4+6+8+12+24 = 60. We can use the prime factorization of 24 to compute sigma(24): 24 = 3*8 = 3*(2^3) .

Because 3 and 8 are co-prime, we can say (this can be proved) that sigma(24) = sigma(3)*sigma(8).

Since 3 is a prime, its divisors are 1 and 3, so sigma(3) = 3+1 = 4.

Since 8 is 2^3 and 2 is a prime, the divisors of 8 are 2^0, 2^1, 2^2 and 2^3. In other words, they are: 1, 2, 4 and 8 so that sigma(8) = 1+2+4+8 = 15. Now, sigma(3)*sigma(8) = 4*15 = 60, and we can now verify that sigma(24) = sigma(3)*sigma(8) is true.

How large can sigma(n) get, compared to n? For this, we can look at the ratio sigma(n)/n and ask how large it can get. It turns out that sigma(n)/n surpasses any fixed pre-assigned value, for example 1000, for some unusual n with “many divisors”. So:

there exists an n such that: sigma(n)/n > 1000 .

For more on this, there is Gronwall’s theorem on the asymptotic growth rate of the sigma function, published in 1913. Gronwall’s Theorem implies that large values or record values of sigma(n)/n grow very slowly with n, so much so that:

sigma(n)/(n log(log(n))) is bounded .

To avoid problems with log(log(1)) and other small n, and because we are looking at asymptotics, we can safely insert the proviso “for n > 16”, i.e.:

“For n > 16, sigma(n)/(n log(log(n))) is bounded.”

Gronwall proved more than that: he proved that

lim sup_{n -> oo} sigma(n)/(n log(log(n))) = e^gamma = exp(gamma), where gamma is the Euler-Mascheroni constant,

gamma = 0.5772156649………..

This means that, if C > exp(gamma) then only finitely many n satisfy

sigma(n)/(n log(log(n))) > C, while if

C < exp(gamma), then for infinitely many n one has:

sigma(n)/(n log(log(n))) > C.

As far as I know, Gronwall didn’t state anything about the rate at which large or record values of sigma(n)/(n log(log(n))) approach exp(gamma) as n increases.

Srinivasa Ramanujan studied related questions in his 1915 paper “Highly Composite Numbers”, or HCN. This was the first part of his B.Sc. (or Ph.D.) dissertation, and it appears in the collected papers of Ramanujan. Ramanujan studied the number-of-divisors function d(n). For example, since the divisors of 24 are 1, 2, 3, 4, 6, 8, 12 and 24, we have d(24) = 8. Ramanujan defined a highly composite number as a number n such that d(m) < d(n) for all m < n. For example, 5040 is a highly composite number, and d(5040) = 60.

In the unpublished part of HCN (see: “Highly Composite Numbers” by Srinivasa Ramanujan, annotated by Jean-Louis Nicolas and Guy Robin, The Ramanujan Journal, vol. 1 number 2, 1997),

Ramanujan studied the sum of d^(-s) over the divisors d of a number n, for a fixed real number s >= 0. This sum equals:

(sum_{d divides n} d^s) / n^s .

For example, with s = 1, the sum of 1/d over the divisors d of n equals the sum of the divisors of n divided by n, in other words:

sigma(n)/n .

Whereas the unpublished part of HCN appears to have received little attention in the first decades after 1915, Alaoglu and Erdos published their paper “On highly composite and similar numbers” in 1944, in the Transactions of the American Mathematical Society; it is available at no charge from the web-page:

http://www.ams.org/journals/tran/1944-056-00/home.html

The Erdos-Alaoglu paper has some results on Ramanujan’s highly composite numbers of 1915, where the object of study is the number-of-divisors function d(n), and, in my opinion, more interesting results on highly abundant numbers and sub-classes of the highly abundant numbers: n is highly abundant if for all m < n, sigma(m) < sigma(n), sigma being the sum-of-divisors function. For highly abundant numbers and their sub-classes, the object of study is sigma(n), or alternatively sigma(n)/n. Erdos and Alaoglu define a more restrictive class of numbers which they call superabundant numbers: n is superabundant if, for all m < n, sigma(m)/m < sigma(n)/n. Finally, in their introduction they define a still more restrictive sub-class of the superabundant numbers which they call colossally abundant numbers; these have a precise characterization: see Section 3, and Theorem 10 in particular, of the Erdos-Alaoglu paper.

When doing computations on sigma(n)/n and log(n) for several (hundreds or thousands of) very large colossally abundant numbers n, it can be convenient to choose a large constant K, for example K = 10^9, and to tabulate, for values of the integer m >= 1, both

sum_{ p <= mK} log(p) and

product_{ p <= mK} ( 1 + 1/p ) .

If N is a very large colossally abundant number, and q is the largest prime factor of N, then all prime factors ‘p’ of N with exponent 2 or greater in the prime factorization of N “are small compared to q”, or p << q. So to compute log(N), one can start by adding up the logs of the primes <= q, and then make an adjustment for the repeated prime factors. [I remember seeing something related to the size of the prime factors ‘p’ appearing with exponent 2 or higher in the annotated continuation of Highly Composite Numbers, published in 1997; I will look into this. Further note: in the 1997 continuation of HCN, Ramanujan defines a standard form for generalized superior highly composite numbers in Section 60, essentially as the product of primorials (p_1)#, (p_2)#, …, (p_r)#, …, (p_Q)# with p_1 >= p_2 >= p_3 … >= p_r … >= p_Q, where e.g. (p_1)# denotes the primorial of p_1, the product of the primes up to p_1. When s = 1, the function under study is sigma(n)/n, and the generalized superior highly composite numbers are the same as Erdos and Alaoglu’s colossally abundant numbers. Section 61 begins: “Let us consider the nature of p_r”.

In any case, I believe that for colossally abundant numbers, the number of prime factors with exponent 2 or higher is tiny compared to the total number of prime factors; this can be checked by doing explicit computations of the exponents for some small epsilon, such as epsilon = 0.000001.]

In the same way, to compute sigma(N)/N, one can start by computing the products of sigma(p)/p for p <= q, or in other words, compute the product of (p+1)/p for the primes p <= q. For the large colossally abundant numbers I’ve done computations with, generically N, which usually have several billion distinct prime factors, no more than a million primes have exponents of 2 or greater; in other words, all primes p except at most the first million primes either occur in N to the first power, or do not divide N.

Below, I include the C source code I used for sieving primes and tabulating

sum_{ p <= mK} log(p) and

product_{ p <= mK} ( 1 + 1/p )

(more or less, or morally equivalent, with K = 10^9):

```
[david2@localhost primesieve-5.4.2]$ time ./testprogram75a.out > testprogram75a.txt

real    484m56.813s
user    484m43.305s
sys     0m4.036s
```

This says it took about 8 hours to run testprogram75a.out, which uses Primesieve by Kim Walisch.

$ cat testprogram75a.c

```c
#include <primesieve.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    unsigned long long lastp;
    unsigned long long bound;
    unsigned long long Lobound;
    int counter;
    primesieve_iterator pi;
    long double prod_phi_primes;   /* product of (1 + 1/p) over the current block */
    long double sum_log_primes;    /* sum of log(p) over the current block */

    primesieve_init(&pi);
    uint64_t prime = 0;
    prod_phi_primes = (long double) 1;
    sum_log_primes = (long double) 0;
    bound = 1000000000ull;             /* K = 10^9 */
    Lobound = bound - 1000000000ull;

    for (counter = 0; counter < 8000; counter++)
    {
        while ((prime = primesieve_next_prime(&pi)) < bound)
        {
            lastp = prime;
            prod_phi_primes = prod_phi_primes + prod_phi_primes/((long double) prime);
            sum_log_primes = sum_log_primes + logl((long double) prime);
        }
        printf("%llu %llu ", Lobound, bound);
        printf("%.25Lf ", prod_phi_primes);
        printf("%.25Lf ", sum_log_primes);
        printf("%llu\n", lastp);
        bound = bound + 1000000000ull;
        Lobound = Lobound + 1000000000ull;
        /* Reset the per-block accumulators, then fold in the prime that
           ended the while loop: it belongs to the next block. */
        prod_phi_primes = (long double) 1;
        sum_log_primes = (long double) 0;
        prod_phi_primes = prod_phi_primes + prod_phi_primes/((long double) prime);
        sum_log_primes = sum_log_primes + logl((long double) prime);
    }
    primesieve_free_iterator(&pi);
    return 0;
}
```

Note: #include <primesieve.h> tells the C preprocessor (which is not the C compiler proper, but is part of the process of translation to machine code) to include the “header file” primesieve.h, which is provided by Kim Walisch’s Primesieve package/program.

The compilation was done as follows:

```
gcc -lprimesieve -lm -O2 -o testprogram75a.out testprogram75a.c
```

-lm links against the standard math library.

-lprimesieve links against the primesieve library.

I had set:

```
$ echo $LD_LIBRARY_PATH
/home/david2/Downloads/primesieve-5.4.2/libs:/usr/local/lib64::/home/david2/Downloads/primesieve-5.4.2/include/primesieve:/home/david2/Downloads/primesieve-5.4.2/libs
```

So, I will next try 4 times as far as 8000 [billion] : 32000 instead of 8000 in testprogram75a.c …

$ cat testprogram80a.c

testprogram80a.c is identical to testprogram75a.c above, except that the outer loop now covers 32000 blocks of 10^9 instead of 8000:

```c
for (counter = 0; counter < 32000; counter++)
```

```
$ time ./testprogram80a.out > testprogram80a.txt
```

Job submitted … JOB DONE … Figure below …

Post scriptum: I prefer not to enable comments on this blog: a while ago I read that WordPress has had vulnerabilities, and I disabled comments after that. You can send comments by email to david250 AT videotron DOT ca

If you arrived here from the sci.math newsgroup, replies in the thread on Fast prime-sieving programs are also fine.

## header of new WAV file

I recorded a second WAV file of white noise: June26TvNoise.wav, 2,073,214,126 bytes in length.

Using decrypt04jb.c (see below), I wrote out the beginning bytes as unsigned numbers in base ten, from 0 to 255; using the ASCII table, we can find which character each byte represents and decipher the header of this WAV file:

```
[david2@localhost JUNE_26_Tv_Noise]$ more decrypt04jb.txt
byte number: stored unsigned char in base 10 and ASCII decoding (partial)
  1:  82 R
  2:  73 I
  3:  70 F
  4:  70 F
  5: 166
  6: 188
  7: 146
  8: 123
  9:  87 W
 10:  65 A
 11:  86 V
 12:  69 E
 13: 102 f
 14: 109 m
 15: 116 t
 16:  32 (space)
 17:  18
 18:   0
 19:   0
 20:   0
 21:   1
 22:   0
 23:   2
 24:   0
 25:  68
 26: 172
 27:   0
 28:   0
 29:  16
 30: 177
 31:   2
 32:   0
 33:   4
 34:   0
 35:  16
 36:   0
 37:   0
 38:   0
 39: 100 d
 40:  97 a
 41: 116 t
 42:  97 a
 43: 128
 44: 188
 45: 146
 46: 123
 47:   0
 48:   0
 49:   0
 50:   0
```

Ref.: the tutorial on WAV files linked in the previous post.

Source code of decrypt04jb.c:

```c
#include <stdio.h>

int main(void)
{
    int j;
    unsigned char car;
    FILE *in;

    in = fopen("/home/david2/RANDOM/JUNE_26_Tv_Noise/June26TvNoise.wav", "r");
    if (in == NULL)
        return 1;
    for (j = 0; j < 100000; j++)    /* print the first 100000 bytes in base ten */
    {
        fscanf(in, "%c", &car);
        printf("%3d: ", j + 1);
        printf("%3d\n", car);
    }
    fclose(in);
    return 0;
}
```

To be continued …

David Bernier

## Reference on *.wav audio files

I found all I needed to know about the header(s) of *.wav files and what they mean from this web page:

http://www.topherlee.com/software/pcm-tut-wavformat.html

It says there:

“The header of a WAV (RIFF) file is 44 bytes long and has the following format:” .

In the case of the *.wav file I had, the header was actually 46 bytes long, but otherwise very close to what that page describes: the fmt chunk declared a size of 18 rather than 16, so two extra bytes of format extension pushed the header from 44 to 46 bytes.

David Bernier

Sat Jul 4 11:13:02 EDT 2015