Examining Certain Open Questions in Number Theory

and Solving Other Interesting Problems in Mathematics and Science


Albert H. deAprix, Jr. 2011


Table of Contents

Preface

Part I: Mathematics
  Decoding the Prime Patterns on Ulam's Spiral Square
  Validating Hardy's and Littlewood's Conjecture F
  Generating Pythagorean Triples Using Only Side a
  The Pattern to Primitive Pythagorean Triples
  Addendum to Pattern of Primitive Pythagorean Triples
  Goldbach's Conjecture
  The Twin Prime Conjecture: Proven at Last
  Proving the Infinitude of 2k Primes (in progress)

Part II: Science
  Xeno's Paradoxes: Discovering Motion's Signature at a Dimensionless Instant of Time
  Why Venus Is a Failed Home for Life, and its Significance

Author's Biographical Materials
  Author's Resume


Preface
2011

As an analyst, policymaker, and educator in my varied professional life, I have always carried with me a fascination for problems on the cutting (or unsolved) edges of science and mathematics. I have pondered why we cannot solve certain problems, even though we have had decades or centuries to do so. Have they been that daunting, or is it more a case of not recognizing the most fruitful path for constructing their solution?

I think back to the Ptolemaic system for predicting planetary motions across the more stable pattern of the background stars in the evening skies: the ancient Greeks and those who adopted their science tried to explain the retrograde motion of the superior (or outer) planets employing a complex system of epicycles that predicted, with some accuracy, the future locations of the planets in an Earth-centered universe. Claudius Ptolemy's system worked, but it was not ideal. There was no way to eliminate or account for all error in the system. Even the Copernican system could not, at first, explain planetary motions more accurately, as its sun-centered system had the planets revolving around the sun in circular orbits. It required Johannes Kepler's insight that the planets followed elliptical orbits to develop a more accurate plotting of planetary pathways, and his work had to be preceded by Tycho Brahe's meticulous observations for those elliptical orbits to become evident. Unfortunately for scientific progress, Aristarchus had an insight very similar to that of Nicholas Copernicus eighteen centuries earlier, suggesting that the earth revolved around the sun, but his idea garnered little support among the Hellenistic Greeks.

That problem in planetary mechanics suggests that the solution of any challenge may require both careful observation and the discovery of the analytical key to understanding the problem before it can be solved. Kepler needed Brahe's earlier work before he could discern the correct geometric construction for the planetary orbits, and it would take Sir Isaac Newton's and Albert Einstein's later studies of gravitation to more precisely account for observed orbital deviations from Kepler's theoretical construct. Similarly, mathematical problems oftentimes require the development of a special analytical tool before they can be solved, even if they do not rank with the importance of planetary motions. Without the correct tool, a given problem may seem unsolvable, or only partially solvable.

I have, over the period of a few years, been experimenting with a few simple mathematical concepts to see what new insights might be achieved through their use. Those concepts are not actually new, but their potential has been overlooked as other pathways towards problem solution have been pursued. The use of a modified base-6 system, with which I began experimenting in 1993-94, in the later resolution of the apparent prime patterns in Ulam's Spiral Square involved the recognition that all primes except for 2 and 3 can be reduced to two mathematical expressions: 6x + 1 and 6x + 5, where x is any whole number. All primes except for those first two can be calculated using one or the other of those two expressions, though many composites will also arise from their use. Changing the representation of the spiral square's elements from base-10 to a modified base-6 provided the key to recognizing why Ulam's patterns are real. That problem's solution also required rejection of the spiral square concept in favor of an infinite series of embedded squares.

I have decided to assemble the work I have undertaken in an e-book and post it on the internet for anyone who might find it of interest. I leave it to my readers to determine what truly has value. The new field of electronic publishing, in this manner, becomes a revival of Renaissance times, when scientific and mathematical investigators shared their work by letter with those in whom they had confidence. I offer mine to all who share my curiosity in mathematics and science. I will be adding sections - an expected total of 15 or 16 articles - to my e-book as I complete them, so this will be an on-going, evolving work. As any internet posting can be changed with minimal effort, in contrast with traditional publication routes, I see no reason to refrain from making changes if I uncover something new of value to share or discern a need for correction or enhancement in anything that I have already published. Therein lies the pleasure, the challenge, and the value of working in an open, electronic world.



Part I: Mathematics


Decoding the Prime Patterns on Ulam’s Spiral Square c. 2009

Seemingly insoluble problems, or at least those that initially appear to defy solution, are not always as difficult to resolve as they might seem. At times, finding the correct pathway to their solution is merely a matter of visualization, as is the case with the decoding of the prime number pattern in spiral squares. The late Stanislaw Ulam (1909-84) sketched a spiral square of consecutive integers, like that in Figure I, during a boring presentation at a 1963 scientific conference. Mysteriously, the primes in his square seemed to prefer certain odd-number diagonals over others.

Figure I
Ulam's Spiral Square

100  99  98  97  96  95  94  93  92  91
 65  64  63  62  61  60  59  58  57  90
 66  37  36  35  34  33  32  31  56  89
 67  38  17  16  15  14  13  30  55  88
 68  39  18   5   4   3  12  29  54  87
 69  40  19   6   1   2  11  28  53  86
 70  41  20   7   8   9  10  27  52  85
 71  42  21  22  23  24  25  26  51  84
 72  43  44  45  46  47  48  49  50  83
 73  74  75  76  77  78  79  80  81  82

Primes displayed in red. Examples of seemingly favored diagonals.
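Readers who wish to experiment with larger squares can generate them mechanically. The following short Python sketch (an illustration added here, not part of the original paper; the function names are this edit's own) winds the integers counterclockwise from a central 1, in the same orientation as Figure I, and flags the primes that the figure shows in red:

def is_prime(n):
    """Trial division; adequate for the small values plotted here."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def ulam_spiral(side):
    """Return a side x side grid wound counterclockwise from 1, as in Figure I."""
    grid = [[0] * side for _ in range(side)]
    row, col = side // 2, (side - 1) // 2       # the cell that holds 1
    n, placed, run, leg = 1, 0, 1, 0
    while placed < side * side:
        dr, dc = [(0, 1), (-1, 0), (0, -1), (1, 0)][leg % 4]  # right, up, left, down
        for _ in range(run):
            if 0 <= row < side and 0 <= col < side:
                grid[row][col] = n
                placed += 1
            n += 1
            row, col = row + dr, col + dc
        leg += 1
        if leg % 2 == 0:                        # run length grows after every two legs
            run += 1
    return grid

for r in ulam_spiral(10):
    print(" ".join(f"{v:3d}" + ("*" if is_prime(v) else " ") for v in r))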

Ulam was an exceptionally talented mathematician who had emigrated from his native Poland to the United States in 1938. Eventually joining the Manhattan Project, he was jointly credited with the Teller-Ulam design for thermonuclear weapons. He also originated the concept of nuclear pulse propulsion and devised the Monte Carlo method for solving mathematical problems using statistical sampling. Ulam’s mathematical interests spanned number, set, and ergodic theories as well as algebraic topology.

Prime-Loading Defied Explanation

Upon discerning that the primes in his spiral square appeared to favor certain diagonals over others, Ulam went back to Los Alamos, where that government laboratory's computer plotted out a much greater spiral array that still evidenced an inexplicable difference in prime-loading across the odd-number diagonals. That prime-loading on the spiral square's diagonals may be defined as differences in prime densities. Density is calculated as the number of elements per unit of measurement: the greater the average number of primes found per given unit of measurement, the higher will be the density and therefore the prime-loading. As regards Ulam's Spiral, if there is a real and not just an imaginary prime-loading on certain diagonals, it should be possible to prove why that difference in density exists.

Ulam's discovery was intriguing enough to grace the cover of Scientific American in March 1964. Until now, that ostensible prime-loading difference has eluded explanation; no proof has been advanced confirming that variations in the loading are real, not illusory. Primes, of course, do not follow a pattern that can be unfailingly reproduced through the use of any known algebraic expression, but that does not preclude the existence of certain broad patterns to their progression.

A Solution Emerges

The solution to the mystery underlying Ulam's Spiral Square lay in discovering the best approach to visualizing the problem. Ulam plotted his square in base-10. Subsequent attempts to prove differential prime-loading on the spiral square's diagonals have also been couched in base-10. However, base-10 does not lend itself to spotting the correct approach to this and other pattern problems involving primes; base-6 offers a visually more powerful tool. Table I illustrates how all primes except for the integers two and three fall in just two of that table's six modular columns, specifically A and E. The base-10 integers in those columns can be converted into modified base-6 numbers that arbitrarily employ the letters A through F as their final digits for convenience and ease of recognition. That modified system does not include zero as a placeholder because Ulam's Spiral started with a one and not a zero. Excluding the integer two, all members of columns B, D, and F in Table I are composites, being whole-number multiples of two. All members of the set defined by membership in column C are odd multiples of three, therefore making all of them, except for three, composites. That leaves columns A and E as the repositories for all of the remaining primes and, in this modified base-6 arrangement, all of those integers, once converted into base-6 integers, would end in either an A or an E (such as 53 = ABE, which would be recorded as an E in this analysis' simplified representation). There will, of course, also be composites in columns A and E of Table I.

Table I
Converting Base-10 to Base-6 Modularly

  A    B    C    D    E    F
  1    2    3    4    5    6
  7    8    9   10   11   12
 13   14   15   16   17   18
 19   20   21   22   23   24
 25   26   27   28   29   30
 31   32   33   34   35   36
 37   38   39   40   41   42
 43   44   45   46   47   48
 49   50   51   52   53   54
 55   56   57   58   59   60
 61   62   63   64   65   66
 67   68   69   70   71   72
 73   74   75   76   77   78
 79   80   81   82   83   84
 85   86   87   88   89   90
 91   92   93   94   95   96
 97   98   99  100  101  102

Primes displayed in red
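The column test in Table I is easy to mechanize. The short Python check below (an illustration added here, not the author's own code) labels each integer with the modified base-6 letter of its final digit and confirms that every prime beyond 2 and 3 falls in column A or E:

def column(n):
    """Modified base-6 'last digit' of n: residues 1..5,0 map to letters A..F."""
    return "FABCDE"[n % 6]

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# Every prime greater than 3 sits in column A (6x + 1) or column E (6x + 5).
assert all(column(p) in "AE" for p in range(5, 10_000) if is_prime(p))
print(column(53))  # -> 'E', matching the paper's example 53 = ABE in base-6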

Figure II utilizes those converted integers to construct a spiral square in base-6, but it displays only the last digit in the base-6 progression; that is the key to the successful visualization. Mathematicians have known that all primes other than two and three can be represented by the expressions 6n + 1 and 6n + 5, where n is an integer; the members of the set comprising column A are represented by the former of those two expressions, while those comprising column E are represented by the latter. They have not, though, applied that knowledge to a visual representation of Ulam's Spiral Square.

Figure II
Ulam's Spiral, Converted Modularly

A F E D C B A F E D C B A
B E D C B A F E D C B A F
C F E D C B A F E D C F E
D A F A F E D C B A B E D
E B A B E D C B A F A D C
F C B C F E D C F E F C B
A D C D A F A B E D E B A
B E D E B A B C D C D A F
C F E F C D E F A B C F E
D A F A B C D E F A B E D
E B A B C D E F A B C D C
F C D E F A B C D E F A B

Elements of the principal diagonal are highlighted in red.
Sample diagonal patterns are highlighted in blue.

The diagonals of the spiral square depicted in Figure I alternate between odd and even integers. What is here termed, for reference purposes, the principal diagonal, running from 65 through 1 to 81, divides Ulam's spiral into two halves, the upper right (UR) and the lower left (LL). It actually does not matter which way Ulam's Spiral is oriented; the direction that the diagonals are deemed to run will not matter, even though the patterns that will be discussed here would appear a bit different if the spiral square were rotated 90 degrees. Ulam's orientation will be retained, but it will be apparent to the careful observer that orientation would not change the explanation for the patterns observed in prime-loading on certain diagonals.

The construction of Ulam's Spiral holds the key to understanding why primes not only appear to favor certain diagonals but do, indeed, appear on some diagonals with greater frequency than on others. Ulam's Spiral Square can be conceived as a series of embedded squares formed about the initial starting point (in this case 1 in base-10, or A in base-6). Moving outward, each layer has eight more elements than the layer underneath it, beginning with eight elements, or integers, in the first square that surrounds the starting point. Moving outward from that initial square, the number of elements in the layers thus follows the progression 8, 16, 24, 32, 40… Figure III illustrates how those eight elements are added to each layer; numbers from an inner square can be paired with those in the next square heading outward, and doing so will leave eight unpaired elements in the outer one of the two adjacent squares. Embedding a smaller square immediately inside a larger square requires each side of the outer square to be two elements longer than each side of the inner square.

Figure III
Ulam's Spiral Embedded Squares

101 100  99  98  97  96  95  94  93  92  91
102  65  64  63  62  61  60  59  58  57  90
103  66  37  36  35  34  33  32  31  56  89
104  67  38  17  16  15  14  13  30  55  88
105  68  39  18   5   4   3  12  29  54  87
106  69  40  19   6   1   2  11  28  53  86
107  70  41  20   7   8   9  10  27  52  85
108  71  42  21  22  23  24  25  26  51  84
109  72  43  44  45  46  47  48  49  50  83
110  73  74  75  76  77  78  79  80  81  82
111 112 113 114 115 116 117 118 119 120 121

Embedded squares represented in alternating red and gray.
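The arithmetic of the embedded squares can be stated compactly: each square sits between consecutive odd squares. A few lines of Python (a sketch using this edit's own helper name, layer_range, not the author's notation) reproduce the 8, 16, 24... progression and the 2-9, 10-25, 26-49 numbering used in the discussion that follows:

def layer_range(k):
    """First and last integer on embedded square k (k >= 1); square k holds 8k elements."""
    return (2 * k - 1) ** 2 + 1, (2 * k + 1) ** 2

for k in range(1, 6):
    lo, hi = layer_range(k)
    print(f"square {k}: {lo}..{hi} ({hi - lo + 1} elements)")
# square 1: 2..9 (8 elements), square 2: 10..25 (16 elements),
# square 3: 26..49 (24 elements), and so on; the corners 9, 25, 49, ...
# are the squares of consecutive odd integers noted in the text.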

Because six (the base system in Figure II) and eight (the number of elements consecutively added to each embedded square of the spiral moving outward) have a least common multiple of 24, that LCM becomes critical to understanding how the prime density on the diagonals is real and not imaginary. To understand how that happens, it becomes necessary to determine how the gaps between every three elements on a given diagonal come to always be multiples of six.

While the spiral square could be oriented in any one of four positions, Ulam's original alignment will be used. The gaps between every third element will be explained moving upward to the right from the principal diagonal, but that choice is strictly arbitrary; the same explanation would result, albeit with different numbers, employing any one of the three other possible orientations. Starting with 1 at the center of the spiral square, moving outward along its diagonal finds the sequence 1, 3, 13, 31, 57, 91… Following the spiral pathway outward from the first element on the diagonal to the next three, the first jump, from 1 to 3, is two spaces, which is accomplished by moving one element to the right from 1 to 2 and then up one element to 3. The next move would follow that first embedded square around to 9 and then jump one to the right to 10, which is the first element on the second embedded square, before moving upward to 13. The same net effect, however, could be accomplished by making a complete circuit of the first embedded square from 3 back around to 3, then jumping one element to the right to 12 on the second square, then moving up one more to 13. That switches the jump from square two to square three from 10-27 to 13-30; the move upward from 30 to 31 remains the same, but the switch has the valuable result of creating two steps following the circuit of the inner square that match the two steps 1-2 and 2-3. The total number of steps does not change, but doing it that second way has the advantage of aiding visualization of how multiples of eight come into play.

Each embedded square has a multiple of eight elements, the number of which can be calculated by multiplying the number of the square (numbering square 2-9 as one, 10-25 as two…) by eight. Moving outward three elements along an upper right diagonal will mean moving outward three embedded squares. Moving from 1 to 31 adds eight for square one and 16 for square two, or 24 total. The three jumps from 1 to square one, from square one to square two, and from square two to square three add six more to the total. Moving outward along a diagonal will thereby add multiples of six every three elements along that diagonal, as Table II illustrates for three examples. Three consecutive embedded squares will contain a multiple of 3 x 8 elements, since any three consecutive integers added together constitute a multiple of three; multiplied by eight, that yields a multiple of 24, the LCM of six and eight. Six will always be added to that figure to account for the three successive jumps to the next outermost squares.

Table II
Calculating the Number of Elements Spiraling Outward Three Consecutive Squares

                 From 1 to 31    From 3 to 57    From 13 to 91
First Square       0* + 2           8 + 2          16 + 2
Second Square      8 + 2           16 + 2          24 + 2
Third Square      16 + 2           24 + 2          32 + 2
Total             24 + 6 = 30     48 + 6 = 54     72 + 6 = 78

* 1 is not part of a square, hence the 0 value.

The same numbers result from moving from any element on the upper right portion of any embedded square outward. It must be remembered, though, that elements on the principal diagonal running from 5 to the upper left (5, 17, 37…) are part of the upper half of an embedded square and are part of the diagonals that move towards the upper right. The lower right-hand corners of the embedded squares (9, 25, 49…), which are squares of consecutive odd integers, constitute parts of the lower left portions of their embedded squares and therefore count in calculating diagonals that run towards the lower left portion of Ulam's Spiral Square. The center of the principal diagonal (1) can be used in calculating diagonals in either direction.

Table III lists those patterns that are created on the odd diagonals. They repeat on every third diagonal in either the UR or LL halves of the square, due to the same principles that cause the patterns to repeat on the odd-number diagonals.

Table III
Base-6 Patterns on Ulam's Odd Diagonals

Lower Left      Upper Right
ACA or AAC      ACA or CAA
CCE or CEC      EEA
AEE             ECC or CEC
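Both the multiple-of-six gaps and the resulting letter patterns can be checked numerically. The sketch below (illustrative code added here, not the author's; the starting diagonal 1, 3, 13, 31, ... and its constant second difference of 8 are taken from the discussion above) confirms that every third element along the diagonal differs by a multiple of six, so the base-6 letters repeat with period three:

def ur_diagonal(count):
    """First `count` elements of the diagonal 1, 3, 13, 31, ... (gaps grow by 8)."""
    vals, v, step = [], 1, 2
    for _ in range(count):
        vals.append(v)
        v += step
        step += 8
    return vals

d = ur_diagonal(12)
# the gap across any three steps along the diagonal is a multiple of six
assert all((d[i + 3] - d[i]) % 6 == 0 for i in range(len(d) - 3))
print([("FABCDE")[v % 6] for v in d])
# -> ['A', 'C', 'A', 'A', 'C', 'A', ...]: an ACA-type pattern from Table III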


Primes Do Favor Certain Diagonals

The repetition of these patterns along and between the diagonals explains why some diagonals have greater prime-loading than others. If the average density of primes across modular sets A through F is defined as x, then the base-6 numbers defined as sets A and E would each have average prime densities of 3x. For practical purposes, the other four sets would have prime densities of zero (ignoring the existence of two and three). That then enables approximations of prime densities to be established for the three odd-diagonal patterns set forth in Table IV. The odd diagonals thus have different average prime densities, and that would hold true no matter how far out the diagonals one was to check. The densities are relative to each other over the infinite distance across which the diagonals can be extended, and densities would gradually thin out from the central starting point just as the density of primes thins out as one moves through the infinite progression of integers.

Table IV
Odd-Diagonal Prime Densities

Pattern            Density
CCE, CEC, ECC      single (x)
ACA, AAC, CAA      double (2x)
EEA, AEE           triple (3x)

Having different prime-loading densities does not preclude certain diagonal segments from possessing brief spans of greater density. Average density does not control the exact location of primes, but it will result in certain diagonals having greater affinities for primes than others. Primes therefore do favor certain diagonals in Ulam's Spiral.

The matter does not stop there; base-6 provides only a rough outline of how the densities differ. Even clearer pictures of differing densities can be obtained using base-30 and base-210, for starters. Just as base-2 would put all but one prime on the odd diagonals, utilizing successively larger products of the initial primes through the nth prime [2 x 3 x 5 x 7 x 11 x … x (nth prime)] would provide even greater definition for the prime-loading densities. But as there are an infinite number of primes, no absolute values for the diagonals will ever be attainable. For practical purposes, the results using base-30 could be crafted with a little extra effort, but base-210, base-2310, and base-30030 rapidly become too complex to make the effort worthwhile once the principle is understood.

Ulam's Spiral Square does have different prime-loadings on its diagonals. It is real, not just apparent, and it is due to the structure of Ulam's Spiral Square and how it interfaces with where primes appear in the structure of the number system. Further investigation of the spiral square's structure would reveal a wealth of less-distinct patterns, including patterns which can be found on the horizontal rows and the vertical columns of the square. As with the prime-loading on the spiral square's diagonals, those patterns are more easily discerned and explained employing the modified base-6 arrangement used in the foregoing analysis than they are in base-10; knowing how the spiral square's structure creates them may make the search for those density patterns more fruitful.
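The density bookkeeping behind Table IV can also be sampled empirically. The following rough Python count (an added illustration; the bound N = 10**5 is an arbitrary choice) tallies primes by modified base-6 column and shows columns A and E absorbing essentially all of them, which is what gives an EEA or AEE diagonal its triple (3x) average density:

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

N = 10**5
counts = {letter: 0 for letter in "ABCDEF"}
for n in range(1, N):
    if is_prime(n):
        counts["FABCDE"[n % 6]] += 1
print(counts)
# A and E each hold roughly half of the 9592 primes below 10**5; B and C
# hold only 2 and 3 respectively, and D and F hold none. An EEA diagonal
# thus averages (3x + 3x + 3x)/3, ACA averages (3x + 0 + 3x)/3, and
# CCE averages (0 + 0 + 3x)/3 of the overall prime density x.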


Historical References

Gardner, M. "Mathematical Recreations: The Remarkable Lore of the Prime Number." Scientific American (March 1964), v. 210, pp. 120-128.

Peterson, Ivars. "Prime Spirals." Science News, May 3, 2002. http://sciencenews.org/view/authored/id 34/name/Ivars_Peterson

Stein, M. and Ulam, S.M. "An Observation on the Distribution of Primes." American Mathematical Monthly (1967), v. 74, pp. 43-44.

Stein, M.L.; Ulam, S.M.; and Wells, M.B. "A Visual Display of Some Properties of the Distribution of Primes." American Mathematical Monthly (1964), v. 71, pp. 516-520.


Techniques

There are times when analytical techniques developed to aid in the solution of one problem can be applied to the solution of another. The use of a modified base-6 in the resolution of the question about prime-loading in Stanislaw Ulam's spiral square opened up possibilities with other problems in number theory. One of the first that the author considered is Hardy's and Littlewood's Conjecture F. The following article was written based upon research conducted on that topic in early 2011.

The author undertook the solution of this problem even though it was similar to the spiral square because there are differences between the two. Had he solved them in the reverse order, Hardy-Littlewood would have validated the conjecture that primes do favor certain diagonals in the spiral square; the reverse, however, is only partially resolved because Hardy-Littlewood trinomials do not necessarily generate straight rays from their outset. The validation of Conjecture F therefore required a little extra work, but that work was based upon the recognition that the modified base-6 employed by the author in the solution of Ulam's patterns would also provide the explanation for the Hardy-Littlewood prime density conjecture.


Validating Hardy’s and Littlewood’s Conjecture F c. 2011 Hardy’s and Littlewood’s Conjecture F may be considered a prequel to Ulam’s Spiral Square. Although primes cannot be represented by any known mathematical expression that either generates all primes and nothing but primes or, failing that, nothing but primes, quite a lot of information has nevertheless been gathered about them. The author’s previous article concerning apparent patterns to primes, Decoding the Prime Patterns on Ulam’s Spiral Square (deAprix, 2009) demonstrates that primes can be arranged mathematically and visually in patterns without violating that bedrock principle of number theory. Number theorists have, for example, developed formulae for calculating the number of primes and the number of twin primes through given points along the line of natural numbers. In 1923, G.H. Hardy and John Littlewood determined that, among other aspects of their conjectures, certain polynomial expressions possess a greater richness, or density, of primes than other expressions and that the variations in those prime densities can be calculated relative to random probability. Their work on that topic is known as the Hardy-Littlewood Conjecture F. If proven, that conjecture would provide one route for validating the apparent preferences primes seem to have for certain diagonals, horizontal lines, and vertical lines in Stanislaw Ulam’s Spiral Square, which that eminent mathematician first created in 1963, forty years after Hardy and Littlewood expressed formulaically what Ulam would express visually. It is an interesting quirk in mathematical history how a pattern created during a boring lecture provided a tool for analyzing and validating an earlier conjecture. The author of this paper revealed why certain diagonals have greater prime densities than others in the Spiral Square in his 2009 article on decoding those apparent patterns. This paper will now blend the author’s recent work with Hardy’s and Littlewood’s conjecture to demonstrate why those early 20th Century mathematicians were correct in their theory regarding polynomials. Hardy and Littlewood believed that polynomial expressions could represent a wide variety of prime densities. Although the question is now moot, if such polynomials can be shown to have real differences in their prime densities and those differences can be explained, then Ulam’s structure must also have real differences in its prime densities. The polynomials in Hardy-Littlewood take the trinomial form: ax2 + bx + c where a, b, and c are integers and a is positive. Where b is negative, it ultimately represents a diagonal on the spiral square while a positive value creates either a vertical or a horizontal line on the square. The author validated the reality of varying prime densities in the Spiral Square by first converting its base-10 numbers to a modified base-6; doing so provided the visual clues that led to the solution of the density question. Table I illustrates how that conversion took place, but only the last digit for the base-6 numbers is shown, simplifying the visual modularity of the analytical tool. Table I Converting Base-10 to Modified Base-6 Numbers A

B

C

D

E

F

1 7 13 19 25

2 8 14 20 26

3 9 15 21 27

4 10 16 22 28

5 11 17 23 29

6 12 18 24 30


From a quick visual inspection, the columns for base-6 numbers ending in B, D, and F possess only one prime between them: 2. Column C likewise has only one prime: 3. All of the other numbers in those four columns are composites. The remaining primes, in their infinitude, are confined to the columns for A and E. That has an important practical consequence in the validation of Hardy's and Littlewood's Conjecture F.

Validating the Polynomial Prime Density Conjecture

If the density of primes in the set of natural numbers, or any sufficiently large span of consecutive natural numbers, is 1, then the average prime densities for A and E within that span will approach 3. Those densities will never be exactly 3 because 2 and 3 fall within different sets (B and C), but for large enough spans, 3 is a good approximation for A and E.

To demonstrate why Hardy's and Littlewood's conjecture is correct, one needs to show how and why different densities must exist. Something within the structure of those polynomial expressions must require those variations in prime densities. That will be where this inquiry begins. As Hardy-Littlewood was expressed in general terms - that some polynomials have greater relative densities than others - the author can limit his discussion to one class of polynomials: trinomials. If differences are found within trinomials, the conjecture is validated because there will be relative differences between some polynomials. One trinomial with an ostensibly high prime density is 4x^2 - 2x + 41. As Table II illustrates, it has a propensity for generating primes.

Table II
Initial Values for 4x^2 - 2x + 41

 x    4x^2 - 2x + 41    Increase Over Previous Value
 0         41                  -
 1         43                  2
 2         53                 10
 3         71                 18
 4         97                 26
 5        131                 34
 6        173                 42
 7        223                 50
 8        281                 58
 9        347                 66
10        421                 74

Primes in second column highlighted in red (all eleven values shown are prime).
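Table II is easily regenerated. The sketch below (added here for illustration; the helper names are hypothetical, not the authors' notation) prints the trinomial's values, the first differences growing by eight, and the fact that all eleven initial values are prime:

def trinomial(a, b, c):
    return lambda x: a * x * x + b * x + c

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

f = trinomial(4, -2, 41)
vals = [f(x) for x in range(11)]
print(vals)                                          # 41, 43, 53, 71, 97, ...
print([v2 - v1 for v1, v2 in zip(vals, vals[1:])])   # 2, 10, 18, 26, ... (+8 each)
print(all(is_prime(v) for v in vals))                # True for x = 0..10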

That eventually defines a diagonal in Ulam's Spiral Square, but at first it appears to jump around, as depicted in Figure I. That means that while there may be considerable similarities between the Hardy-Littlewood Conjecture F trinomials and Ulam's Spiral Square, there are visible differences. Figure I illustrates that point for the trinomial already noted as prime-rich, 4x^2 - 2x + 41. The progression ultimately follows a diagonal track, heading off to the upper right of the spiral square, but it does not begin that way, which it cannot because the initial value is 41 and the second value is an increase of only two at that point, going to 43. Yet trinomials' prime-loading under Hardy-Littlewood can be explained in the same manner as that for the spiral square's diagonal prime densities.

In Table II, the initial value for the trinomial increases at a predictable rate as the value of x is steadily increased by one at each step. Where x = 0, the expression is simply equal to 41. For x = 1, 4x^2 - 2x will be two, or 4 - 2, representing a + b from the trinomial. The value of the expression now begins to accelerate, adding eight to the previous increase to generate the next increase in the trinomial's value. The modified base-6 employed by the author increases by six for each cycle of the base, while multiples of eight are inherent in the generation of the trinomial's resulting values when the value of x is successively raised by one.

Figure I
Plotting Initial Values of 4x^2 - 2x + 41 on Ulam's Spiral Square

324 323 322 321 320 319 318 317 316 315 314 313 312 311 310 309 308 307
257 256 255 254 253 252 251 250 249 248 247 246 245 244 243 242 241 306
258 197 196 195 194 193 192 191 190 189 188 187 186 185 184 183 240 305
259 198 145 144 143 142 141 140 139 138 137 136 135 134 133 182 239 304
260 199 146 101 100  99  98  97  96  95  94  93  92  91 132 181 238 303
261 200 147 102  65  64  63  62  61  60  59  58  57  90 131 180 237 302
262 201 148 103  66  37  36  35  34  33  32  31  56  89 130 179 236 301
263 202 149 104  67  38  17  16  15  14  13  30  55  88 129 178 235 300
264 203 150 105  68  39  18   5   4   3  12  29  54  87 128 177 234 299
265 204 151 106  69  40  19   6   1   2  11  28  53  86 127 176 233 298
266 205 152 107  70  41  20   7   8   9  10  27  52  85 126 175 232 297
267 206 153 108  71  42  21  22  23  24  25  26  51  84 125 174 231 296
268 207 154 109  72  43  44  45  46  47  48  49  50  83 124 173 230 295
269 208 155 110  73  74  75  76  77  78  79  80  81  82 123 172 229 294
270 209 156 111 112 113 114 115 116 117 118 119 120 121 122 171 228 293
271 210 157 158 159 160 161 162 163 164 165 166 167 168 169 170 227 292
272 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 291
273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290

Initial values for the trinomial displayed in red.

The analysis may now proceed in the same manner employed to analyze the modularity of Ulam's Spiral Square in the author's paper on that subject (deAprix, 2009). Continuing with the trinomial 4x^2 - 2x + 41, the values obtained by substituting consecutive natural numbers for x in the expression yield an interesting result, very reminiscent of that discovered for diagonals in Ulam's Spiral Square. Table III displays how those values for the trinomial follow an E-A-E pattern when converted to the author's analytical tool that springs from a modified base-6. Table IV details how that modularity works for 4x^2 - 2x + 41; if the initial value for x is set at n, consecutive natural-number values for x will be n + 1, n + 2, and n + 3. The columns in Table IV detail the relationships between the initial (n) through fourth consecutive values for x (n + 3), beginning with the first three whole-number values of x for the trinomial. In each of those three progressions, eight (consecutive multiples of which, in this case, are added at each step as the value of x is increased by one) and six mesh together modularly to make every third value of the trinomial a member of the base-6 set defined by one of the six symbols designating the last digit of each number (as with either A or E). In the case of 4x^2 - 2x + 41, two is added to the difference between the trinomial's value at each increase in x of one because that is the initial difference between x = 0 and x = 1; increases accumulate, which means that two is part of each succeeding increase and must be dealt with when exploring how the modularities mesh. Because eight is added to the increase between consecutive values of the trinomial over the preceding increase, a multiple of six is also achieved after three consecutive steps of adding one to the whole-number value of x.

Table III
Converting the Initial Values of 4x^2 - 2x + 41 to Modified Base-6

Value             Base-6
 41  43  53       E A E
 71  97 131       E A E
173 223 281       E A E

Table IV
Calculating Differences between Values of 4x^2 - 2x + 41 Where x Is Increased by Three

For Increasing Values of x    From 41 to 71     From 43 to 97     From 53 to 131
                              (n = 0 for 41)    (n = 1 for 43)    (n = 2 for 53)
from n to n + 1                  0(8) + 2          1(8) + 2          2(8) + 2
from n + 1 to n + 2              1(8) + 2          2(8) + 2          3(8) + 2
from n + 2 to n + 3              2(8) + 2          3(8) + 2          4(8) + 2
Total                            3(8) + 6 = 30     6(8) + 6 = 54     9(8) + 6 = 78

Four other trinomials are detailed in Table V; in each case, the trinomial's values pair up modularly with modified base-6 ending digits (A through F). The first of those trinomials, 4x^2 + 2x + 41, changes b from a negative to a positive value (-2 to 2) in comparison with the trinomial examined above, ultimately changing it from a diagonal ray in Ulam's Spiral Square to a horizontal or vertical one. Although b's value has been changed to a positive one, its progression in base-6 is also an E-A-E [although 4x^2 + 2x + 41 starts off as E-E-A, that is just a different starting point for the same two-Es-and-an-A pattern; a similar phenomenon is found on opposite sides of the main diagonal in the Spiral Square for some progressions (deAprix, 2009)]. With all but two primes included within their progressions, A and E have greater prime-loadings than the progressions of the other modified base-6 ending digits, and that trinomial, like its -2x cousin, will have greater prime-loadings than progressions featuring only B, C, D, or F, and should have greater densities than those progressions that mix the two prime-rich progressions with any of the four prime-poor ones.

Table V's second and third trinomials have negative b values, like 4x^2 - 2x + 41, but a changes from four to two in the second of those two cases, while c is even for both. The change in a from four to two changes the rate of increase in the increases of the trinomial's successive values from eight to four. The change in c from odd to even has the very profound effect of changing all of the trinomial's values from odd to even, which in these two instances means that no primes can or will be generated. The base-6 representations of each of the two trinomials' values will follow patterns like the previous (odd-value) trinomials, but no primes will arise.

Table V
Comparing Prime Loadings for Several Other Selected Polynomials
(for each trinomial: the value of x, the expression's value for that x, and its base-6 representation)

A. 4x^2 + 2x + 41
  x:        0    1    2    3    4    5
  value:   41   47   61   83  113  151
  base-6:   E    E    A    E    E    A

B. 4x^2 - 2x + 40
  x:        0    1    2    3    4    5
  value:   40   42   52   70   96  130
  base-6:   D    F    D    D    F    D

C. 2x^2 - 2x + 36
  x:        0    1    2    3    4    5
  value:   36   36   40   48   60   76
  base-6:   F    F    D    F    F    D

D. 2x^2 - x + 25
  x:        0    1    2    3    4    5    6    7    8    9   10   11
  value:   25   26   31   40   53   70   91  116  145  178  215  256
  base-6:   A    B    A    D    E    D    A    B    A    D    E    D

Primes highlighted in red
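The four base-6 columns of Table V reduce to a one-line computation. The following sketch (an added illustration, not the authors' notation) maps each trinomial value to its modified base-6 letter and prints the repeating patterns just discussed:

LETTERS = "FABCDE"  # residues 0,1,2,3,4,5 mod 6 -> F,A,B,C,D,E

def pattern(a, b, c, upto=6):
    """Base-6 letter for each trinomial value at x = 0..upto-1."""
    return "".join(LETTERS[(a * x * x + b * x + c) % 6] for x in range(upto))

print(pattern(4, 2, 41))         # EEAEEA  (Table V, part A)
print(pattern(4, -2, 40))        # DFDDFD  (part B: all even values, no primes)
print(pattern(2, -2, 36))        # FFDFFD  (part C)
print(pattern(2, -1, 25, 12))    # ABADEDABADED  (part D)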

Those examples, and the changes within them, demonstrate that at least some trinomials create progressions in their values that vary in their prime densities. That is a major step in validating that trinomials have different prime densities. As the Hardy-Littlewood Conjecture regarding prime densities specifies that there will be different densities, it is not necessary to prove it in each and every possible case; it need only be shown that different densities arise, and explained why they arise and how their structure leads them to have those different prime-loadings. The ensuing section takes on those tasks.

The Mechanics of Trinomial Values

The following brief review of polynomial (in this case, trinomial) value mechanics helps with the visualization of the modularity of such expressions. Returning to the introduction, trinomials in the Hardy-Littlewood conjecture take the form:

ax^2 + bx + c

where a, b, c, and x are integers and a is positive. Decimal or fractional values for a, b, c, or x would generate values, but would usually not result in trinomial values useful to the study of primes because those values would likewise be decimal or fractional; complex balancing would be necessary for the generation of primes.

In the Hardy-Littlewood trinomials, c represents the expression's initial value, where x = 0. Where x = 1, a + b will equal the initial increase over (or, in cases where b is negative and greater in absolute value than a, decrease below) c. Referring back to Table II, increases beyond the first step, governed as it is by a + b, increase at each step by 2a. Thus the value for the trinomial in Table II increases by 2 at the first step and subsequently by 10, 18, 26, and 34, with eight being added to the increase over x = 1 at each successive step. In this case, 2a = 8. The same pattern can also be found with the trinomials in Table V, with the rate of increase in each trinomial's value rising by 2a at each step for values of x > 1.

Table VI
How Polynomial Values Increase as x Is Increased by Consecutive Integers

A. 5x^2 - 3x + 36

 x    5x^2 - 3x + 36        Value    Increase Over Previous x
 0    5(0) - 3(0) + 36        36            -
 1    5(1) - 3(1) + 36        38            2
 2    5(4) - 3(2) + 36        50           12
 3    5(9) - 3(3) + 36        72           22
 4    5(16) - 3(4) + 36      104           32
 5    5(25) - 3(5) + 36      146           42

Rate of increase: each increase exceeds the one before it by 10 (= 2a).

B. 5x^2 + 3x + 36

 x    5x^2 + 3x + 36        Value    Increase Over Previous x
 0    5(0) + 3(0) + 36        36            -
 1    5(1) + 3(1) + 36        44            8
 2    5(4) + 3(2) + 36        62           18
 3    5(9) + 3(3) + 36        90           28
 4    5(16) + 3(4) + 36      128           38
 5    5(25) + 3(5) + 36      176           48

Rate of increase: again 10 (= 2a) at every step.
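The 2a rule that Table VI illustrates is simply the statement that a quadratic has a constant second difference. A minimal Python check (added here; not from the paper):

def second_differences(a, b, c, upto=8):
    """Second differences of a*x^2 + b*x + c over x = 0..upto-1."""
    vals = [a * x * x + b * x + c for x in range(upto)]
    firsts = [v2 - v1 for v1, v2 in zip(vals, vals[1:])]
    return [d2 - d1 for d1, d2 in zip(firsts, firsts[1:])]

print(second_differences(5, -3, 36))  # [10, 10, 10, 10, 10, 10] = 2a
print(second_differences(5, 3, 36))   # [10, 10, 10, 10, 10, 10]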

Meanwhile, a + b also plays a role in the increase in value of trinomial expressions as x is increased. Part A in Table VII illustrates how consecutive squares increase in value. The rate of change is 2. That generates the 2a rate of change in trinomials, as x^2 may likewise be conceived of as 1x^2. If a > 1, that value for a is simply substituted for 1. Part B shows that there is no rate of change for increases in the value of x; it increases by 1 at each step. As shown above, both a and b contribute to the increase in value by being multiplied for successive steps beyond x = 1 for each trinomial.

Table VII
Genesis of the 2a Rate of Change

Part A: Increase in Values of x^2

 x    x^2    Increase in x^2 as x Increases by Consecutive Integers
 0     0         -
 1     1         1
 2     4         3
 3     9         5
 4    16         7
 5    25         9

Rate of change of the increase in x^2: 2 at every step.

Part B: Increases in Values of x

 x    Increase in x as x Increases by Consecutive Integers
 0        -
 1        1
 2        1
 3        1
 4        1
 5        1

Rate of change of the increase in x: 0 at every step.

Tables VI and VII seem to show that 2a constitutes the major portion of the increases in value of trinomial expressions. However, seeming to show something and irrefutably demonstrating it are two different propositions. The two tables provide anecdotal evidence, not irrefutable proof; they only represent illustrative cases. The parameters of the role of 2a as x is increased by successive whole numbers become evident through the derivation of an equation based upon the elements of the trinomial expression. That derivation begins with a simple identity statement:

(x - 1)(x)/2 = (x - 1)(x)/2

The expression (x - 1)(x)/2, more commonly written as (x)(x + 1)/2, is used to calculate the cumulative addition of all consecutive natural numbers from 1 through n inclusive. Conventionally, x represents n, but nothing is violated by employing (x - 1) to represent n as long as x then represents the natural number 1 greater than (x - 1); it is just a different way of writing the expression which calculates the value of the cumulative addition, but the importance of writing the expression as (x - 1)(x)/2 will be evident shortly.

One side of the identity statement above will be replaced with an expression that will represent cumulative addition (just as x! represents the factorial of x, or the cumulative multiplication of the consecutive natural numbers from 1 through x inclusive, a # following a number or algebraic expression will represent the cumulative addition of the successive natural numbers from 1 through that given number or expression inclusive). Using that symbol in an equation will help identify the process of cumulative addition in a larger equation. Although the symbol '#' is new - created by the author because he could not find a representation for cumulative addition in use elsewhere - the concept of cumulative addition is not new.

(x - 1)(x)/2 = (x - 1)#

Next, both sides of the equation will be divided by x, and then multiplied by 2a, the role of which is here being investigated:

(x - 1)/2 = [(x - 1)#]/x

a(x - 1) = 2a[(x - 1)#]/x

ax - a = 2a[(x - 1)#]/x

The expression (a + b) is then added to both sides of the equation, followed by multiplying both sides by x:

ax + b = (a + b) + 2a[(x - 1)#]/x

ax^2 + bx = (a + b)x + 2a[(x - 1)#]

This equation now demonstrates that the increase over the initial value of the trinomial, c (where x = 0), is the result of two processes. The first, (a + b)x, generates the second value of the trinomial, where x = 1, when it is combined with c. However, a + b will continue to contribute to the subsequent values of the trinomial, multiplied by the ever-increasing values of x. The second component, 2a, quickly becomes the major contributor to the trinomial's increase in value because it is multiplied by the cumulative addition of x - 1. The equation, as a whole, shows that any increase over the constant c will always include 2a as a growing proportion of the trinomial's increase over its initial value, c.

The set of values for the trinomial, beginning with c and continuing onward through infinity as x in ax^2 + bx increases, thus becomes predictable and can be plotted out as repeating sets of modified base-6 terms. As the Hardy-Littlewood Conjecture F speaks to some polynomial expressions having greater prime densities than the expected average, demonstrating that Conjecture's truth for trinomials validates the Conjecture, because some trinomials are prime-rich, as demonstrated by example above, and others are prime-poor. Had Conjecture F been demonstrated correct first, the apparent prime patterns in Ulam's Spiral Square could have been proven real as a spin-off of the Hardy-Littlewood proof; as it turns out, the reverse happened.

The differing prime densities also occur with binomials (which may be considered trinomials where a = 0, though that is not commonly done). The expression 2x + 3 will include all of the primes except for 2, while 2x + 2 will only include the prime 2. The need to prove anything further is moot, however, because proving how and why trinomials have differing prime densities demonstrates the validity of Hardy and Littlewood's conjecture without going through the infinite number of polynomial orders.
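The derived identity can be verified mechanically. In the sketch below (an added illustration; cum_add stands in for the text's '#' operator), the identity ax^2 + bx = (a + b)x + 2a[(x - 1)#] is checked for the trinomial of Table II:

def cum_add(n):
    """Cumulative addition 1 + 2 + ... + n, the text's n# operator."""
    return n * (n + 1) // 2

a, b = 4, -2
for x in range(1, 50):
    assert a * x * x + b * x == (a + b) * x + 2 * a * cum_add(x - 1)
print("identity holds for x = 1..49")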

Calculating the Number of Primes in an Expression

As part of Conjecture F, Hardy and Littlewood also developed an expression that could be used to calculate the ratio of the number of primes through a given x in a polynomial expression compared to the number that would be expected from random integers of that size. The author's modified base-6 system of analysis (deAprix, 2009) set density values for A through F. Greater specificity can be achieved by using larger modified base systems, such as base-30 and base-210. Hardy's and Littlewood's density calculations are therefore not contradicted by the density values for base-6. A modified base system could provide absolute specificity on prime density only if it were infinitely large. Modified base-6 helps demonstrate the accuracy of conjectures and initial impressions of prime densities, but it does not provide absolute specificity for any span of values for a polynomial. Nonetheless, the foregoing analysis of Conjecture F using base-6 does indicate that more specific density values could be obtained, and it thereby enhances the likelihood that Hardy and Littlewood's formula is accurate.

Historical References

deAprix, Albert H., Jr. "Decoding the Prime Patterns on Ulam's Spiral Square." Examining Certain Open Questions in Number Theory (2009). https://www.pdfcoke.com/Al%20deAprix

Hardy, G.H. and Littlewood, J.E. "Some Problems of 'Partitio Numerorum'; III: On the Expression of a Number as a Sum of Primes." Acta Mathematica (1923), 44: 1-70.


Pythagorean Triples

Pythagorean triples have always held a fascination for this author. They have served as a gateway for understanding geometry, trigonometry, number systems, and even cosmology (the author hopes to include several papers on the latter two topics in this e-book before it is completed). One of the first problems with which the author dealt in high school involved the calculation of triples (or PTs). It was not until about a dozen years ago, however, that he arrived at a complete formula for the calculation of all PTs.

Others have also devised systems for calculating triples using just one side, but what is exceptionally interesting is the diverse ways in which they have proven such systems correct. The author added his own system in part because it offered another unique means of validating calculations using just one side, and because it also led, through the construction of his three tables (I, II, and III in the succeeding article), to a system of organizing Pythagorean Triples and Primitive Pythagorean Triples (PPTs) into a format that leads, in "The Pattern to Primitive Pythagorean Triples," to an algorithm that organizes all PTs and PPTs in a logical grouping by magnitude. The key to all of that came out of realizing that the Pythagorean Theorem could be written as a^2 = c^2 - b^2. Organizing triples on the basis of a enabled the author to more easily visualize new patterns in the data.


Generating Pythagorean Triples Using Only Side a
c. 2009

Mathematicians function like clever detectives, piecing clues together until a complete solution emerges. False starts, errors, and dead-ends may postpone success, but the mathematical investigator will labor onward, consumed by a passion for the quest. Oftentimes, solutions emerge only after many mathematicians have contributed insights over the years - even centuries - of effort that may be required to master a given problem, as witnessed with Fermat's Last Theorem. But time and difficulties do not matter to mathematics' devotees, or those of any other human endeavor. The vision that an elegant solution awaits discovery continually reinvigorates the quest. Along the way, such efforts have built up much of modern mathematics.

The Pythagorean Theorem and Pythagorean triples have led to important contributions in mathematics, including trigonometry, surveying, Fermat's Last Theorem, and everything that Fermat's classical problem has generated serendipitously. A modest increment in knowledge is being added here: algorithms, or mathematical processes, that permit the calculation of all Pythagorean triples using only a from the relationship c^2 = a^2 + b^2. Educators and students will be interested in the equations and the algorithms because their discovery illustrates how the organization of data can facilitate a problem's solution.

Pythagoras' theorem was actually known before that Greek philosopher-mathematician made his debut in the sixth century B.C.E. The Babylonians were aware of the relationship a millennium before Pythagoras, and apparently had ways of calculating triples before Pythagoras became eternally linked to the process. In fact, its modern expression, c^2 = a^2 + b^2, would have made no sense to the ancients of either culture because algebra was unknown in the West until the Arabs borrowed it and decimal numeration from India, with Mohammed ibn Musa al-Khwarizmi's early ninth-century book Hisab al-jabr w'al-muqabala setting forth the basics of algebra (al-jabr). Greek mathematicians did their work through geometry and understood the Pythagorean Theorem as a way of decomposing one square into two smaller ones.

Several means, other than brute experimentation, have existed for calculating Pythagorean triples. Pythagoras and the Babylonians [Hollingdale] possessed a three-part algorithm that generated an infinite series of triples where n is any positive integer greater than 1:

2n,    n^2 - 1,    n^2 + 1

For n = 2, 2n will be greater than n^2 - 1; otherwise n^2 - 1 will be the greater of the two quantities. The early Greeks also realized, even though they did not have algebraic notation, that:

a = m^2 - n^2,    b = 2mn,    c = m^2 + n^2

where m > n, m and n are positive integers that are relatively prime (having no common factor greater than 1), and where m and n are of opposite parity (which means that one of them is even while the other is odd). The former of the two algorithms will not generate all Pythagorean triples, while the latter will generate all of the primitive triples. In the series of triples generated using either method, some results will be primitive triples, which means that they are triples that have no common factor between all three that is greater than 1. Others will be nonprimitive, or multiples of primitive triples.

R.H. Dye and R.W.D. Nickalls from England published an algorithm in 1998 [Dye and Nickalls] that would express a triple as:

x,    (x^2 - b^2)/(2b),    (x^2 + b^2)/(2b)

That allowed them to set up the equation:

x^2 - 2bx - (b^2 + 2b) = 0

Specifying rules for the use of the equation in different mathematical situations, the equation could be solved to produce roots, one of which would lead to the calculation of a triple by substituting that root into the expressions for the three sides of the right triangle. Another interesting approach was developed by B. Richardson, who employed matrices to generate Pythagorean triples [Richardson]. A close approach to the author's system came in 1997, with R. Simms, whose formulae represented special cases of the current author's general equations for generating b and c from a [Simms]. W.J. Spezeski developed a method for deriving Pythagorean triples that produced a formula like the author's, publishing it in 2008 [Spezeski], but he arrived at his equations using a different approach - which stands as testament to the adage that there may be many roads to the truth.

The author's equations generate all Pythagorean triples using just one side of the right triangle, but they require specifying a whole-number c - b difference, and the author explains how that may be determined. That new algorithm for generating all Pythagorean triples uses two equations to calculate b and c from a:

b = [a^2 - (c - b)^2] / [2(c - b)]

and

c = [a^2 - (c - b)^2] / [2(c - b)] + (c - b)

From a cursory examination, it is evident that the validity of the second equation (for c) hinges upon the validity of the equation for calculating b, as c is greater than b by the difference between the two numbers. That requires demonstrating how the equation for b was derived. That derivation begins with a simple identity statement:

b = b

One side of that equation can be multiplied by an equivalent of 1 without altering the statement's equality:

b = 2b/2

By next adding the equivalent of 0 to the right-hand side of the equation and applying the commutative property of addition to rearrange the order of the terms, a more useful expression begins to emerge:

b = [2b + (c - c)]/2

b = [(c + b) - (c - b)]/2

Next, the right-hand side of the equation is multiplied by another, different, value that is also equivalent to 1:

b = [(c + b) - (c - b)]/2 . [(c - b)/(c - b)]

b = [(c + b)(c - b) - (c - b)^2] / [2(c - b)]

Digressing for a moment, the Pythagorean Theorem holds that:

c^2 = a^2 + b^2

The terms of the theorem may be rearranged, and then factored, to yield:

a^2 = c^2 - b^2

a^2 = (c + b)(c - b)

At this point, a^2 may now be substituted into the equation for its equivalent to produce:

b = [a^2 - (c - b)^2] / [2(c - b)]

or the equation for calculating b. The equation does employ c - b, but it does not require knowledge of the values for either c or b. The equations for generating b and c are valid for all possible values for right triangles, including decimal and fractional values, as well as the sought-after triples. Recognizing that the equation for b is valid, the companion equation for calculating c may now be simplified a bit:

c = [a^2 - (c - b)^2] / [2(c - b)] + (c - b)

c = [a^2 - (c - b)^2] / [2(c - b)] + (c - b) . [2(c - b)] / [2(c - b)]

c = [a^2 - (c - b)^2 + 2(c - b)^2] / [2(c - b)]

c = [a^2 + (c - b)^2] / [2(c - b)]

Knowing how to apply the equation to attain the desired end now becomes important, and an arrangement of Pythagorean triples into three types, or subcategories, will make it clear how that must be done by focusing on primitive triples. Organizing Pythagorean triples into three types based upon the values of a and (c - b), which for simplicity will now be expressed as d (for difference), enables the mathematical detective to observe patterns in the primitive triples and develop algorithms for calculating them. With the substitution of d for (c - b), the two equations now become:

b = (a^2 - d^2) / (2d)    and    c = (a^2 + d^2) / (2d)

The equation for c now looks similar to the one employed by the ancients, though theirs used two numbers that were not a and d. The student of number theory should recognize at this point that the equations derived for describing the relationships between a, b, and c, and for calculating the latter two sides employing a, place constraints upon the values for d. d and a must have a common factor(s) because 2d must divide cleanly into a^2 - d^2 and a^2 + d^2 for Pythagorean triples to result. The absence of a common factor would result in the generation of decimal values for b and c. Decimal values will also result when a and d have opposite parity, again because the difference and sum of their squares must be cleanly divisible by 2d.
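The two equations translate directly into a search procedure over d. The Python sketch below (an illustration added here; triples_from_a is this edit's own name, and the d < a bound simply keeps b positive) finds every triple with a given side a by keeping only the values of d for which 2d divides a^2 - d^2 cleanly, exactly the divisibility constraint just described:

def triples_from_a(a):
    """All Pythagorean triples with first leg a, found from d = c - b alone."""
    found = []
    for d in range(1, a):                  # d < a keeps b positive
        num_b, num_c = a * a - d * d, a * a + d * d
        if num_b % (2 * d) == 0:           # 2d must divide cleanly, as in the text
            found.append((a, num_b // (2 * d), num_c // (2 * d)))
    return found

print(triples_from_a(9))    # [(9, 40, 41), (9, 12, 15)]
print(triples_from_a(20))   # includes (20, 99, 101), (20, 21, 29), ...

For a = 9 this returns {9, 40, 41} and {9, 12, 15}, the two triples that the worked example below derives by hand.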

27 The three types of primitive triples are illustrated in Tables I - III. Some of the triples in each of those tables have been highlighted in red; those are multiples of other triples and were included in the tables as placeholders to make the mathematical progression of triples clearer and to provide for the most effective illustration of the algorithms for calculating the primitives. Table I displays the primitive triples where a is odd (thus the Type O designation). The first dozen triples for the initial five values of d are included. At the top, d is set forth both as a whole number and as an expression, e.g. (1 . 52). In that expression, 1 is designated the multiplier and 5 is labeled the root (r); that distinction is made because the r will have an application in understanding the relationship between a and d. Through the operation of the equations for b and c, r will comprise a progression of the odd whole numbers from 1 through infinity. While it may seem unnecessary to list the multiplier in Table O, its inclusion provides for similar treatment for Type O and Type E (even) triples (for which the multiplier is 2). The a for the first triple for each permissible value for d must be equal to d + 2r. The values for a in each column then proceed by intervals that equal 2r. R or any factor of r if r can be factored, becomes the common factor that a and d must share. Values for a that do not share a common factor with r will therefore result in decimal values for b and c due to the equations that generate them from a. For Type O triples, then, a will be an odd multiple of r for a given d, provided that a > d. Sir Arthur Conan Doyle, speaking through his famous fictional detective, Sherlock Holmes, observed that sometimes what is not perceived is more important than what is heard or seen. Loosely borrowing that concept from The Hound of the Baskervilles, certain nonprimitive triples, displayed in red, have been added to Tables I - III. They would not ordinarily be included in a listing of primitive triples, but their presence aids in the visualization of the relationships being discussed. Just as the absence of a barking dog solved the case for Holmes, the missing triples are a key to creating the new algorithms. The patterns to the primitive triples are much more difficult to spot in straight listings of primitive triples, like those published on websites like those of Rutgers, Clark University, and the University of Utah’s mathematics website, all of which the author has used upon occasion. Rutgers’ list of primitives puts primitive triples in order by the magnitude of side c [Rutgers]. The lists that the author has employed are very useful in providing the raw data needed for research, but they must be refined through a sorting process, like that at work in Tables I-III, before certain problems can be effectively investigated. Many researchers undoubtedly missed the relationship explored here because their data was not arranged in a format that facilitated their inquiries. A successful mathematical investigator may need to experiment with different data arrangements before selecting the most fruitful route for inquiry. The patterns to primitive triples are not truly clear until they been arranged in the three tables that the author employed in his research; those tables also assisted in the visualization of the formulae for b and c, making clear the relationship between a and d. Tables II and III set out the initial primitive triples where a is a multiple of 4. 
They are divided into two tables because there are slight differences between them that are better addressed by compiling the initial triples of each type separately. Type E1 primitives have accompanying values for d whose roots comprise the set of all odd whole numbers, just as did the Type O primitive triples. Where, in Table II, n is the root for any given permissible value of d, every nth triple in the progression of Pythagorean triples for that value of d will be a multiple of some smaller-value triple, as {36, 27, 45} where d = 18 is a multiple of the primitive triple {4, 3, 5}. One of the important differences between Type O and Type E1 primitives is that a begins at d + 2r when counting down the table's columns to determine which are the nth Type O triples that are multiples. The same counting for a begins at d for Type E1's, even though a = d would be impermissible for a Pythagorean triple because a² - d² would then equal zero, meaning that b would have no dimension and that a = c, which cannot be a true statement for any right triangle where c is the hypotenuse. While r equals the square root of d for Type O primitives, and could have been accurately labeled the square root, it is clearly not that for either Type E1 or E2 primitives; that is why the term 'root' and not 'square root' was used. For all three types of primitive triples, r is a factor of d, but it is not always a square root. The designation r, for root, was made in preference to f, for factor, so as not to confuse the symbol with that for function. Factoring Type O values for d into 1 · r² provides for identical treatment of all three types of primitive triples, eliminating the need for special-case handling of the Type O primitives. Factoring d into either 1 or 2 times r² provides a tool that aids in the calculation of Pythagorean triples and in an understanding of their relationship as expressed through the two formulae derived by the author for b and c. R is the factor shared by both d and a. Examining the triple {9, 40, 41} from Table I, 9 is divisible by 3. That means that even though the resulting triple may not be a primitive (or a placeholder in the table), there must be another triple in which a = 9 and d = 3. To discover that triple's other two members, 81 is divided by 3, yielding 27, which is the sum of b and c. Because c - b = 3, subtracting 3 from 27 leaves 24, and dividing that by 2 sets b = 12. Adding back 3 (or d) sets c = 15, which generates the triple {9, 12, 15}. That triple does not appear in one of the three tables because it is neither a primitive triple (being a multiple of {3, 4, 5}) nor a nonprimitive

placeholder. In fact, none of the triples where d = 3 will be primitives. The process, though, does illustrate how triples may be calculated knowing just a and d. The primitive triples follow interesting patterns. In Table I, for example, all members of the r = 1 column are primitives, but where r = 3, every third member is a nonprimitive (or a trivial, as it is often called), while where r = 5, every fifth member of that column is a nonprimitive. Tables II and III exhibit other patterns. The columns continue downward infinitely while new columns are created to the right by infinitely adding two to r. All three tables help demonstrate that the number of primitives is infinite and that they follow clear patterns. One more thing may be said about Pythagorean triples: they may be comprised of numbers that are terminating decimals. Terminating decimals are merely whole numbers divided by some power of 10. For example,

a² = c² - b²

Substituting and calculating,

0.5² = 1.3² - 1.2²
0.25 = 1.69 - 1.44
0.25 = 0.25

That is not really anything new, but it does serve as a helpful guide to those first starting to work with Pythagorean triples.

Table I  Type O Primitive Triples

      d = 1 = (1·1²)   d = 9 = (1·3²)   d = 25 = (1·5²)  d = 49 = (1·7²)  d = 81 = (1·9²)
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
       3    4    5     15    8   17     35   12   37     63   16   65     99   20  101
       5   12   13     21   20   29     45   28   53     77   36   85    117   44  125
       7   24   25    *27   36   45     55   48   73     91   60  109   *135   72  153
       9   40   41     33   56   65     65   72   97    105   88  137    153  104  185
      11   60   61     39   80   89    *75  100  125    119  120  169    171  140  221
      13   84   85    *45  108  117     85  132  157    133  156  205   *189  180  261
      15  112  113     51  140  149     95  168  193   *147  196  245    207  224  305
      17  144  145     57  176  185    105  208  233    161  240  289    225  272  353
      19  180  181    *63  216  225    115  252  277    175  288  337   *243  324  405
      21  220  221     69  260  269   *125  300  325    189  340  389    261  380  461
      23  264  265     75  308  317    135  352  377    203  396  445    279  440  521
      25  312  313    *81  360  369    145  408  433    217  456  505   *297  504  585

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c.


Table II  Type E1 Primitive Triples

      d = 2 = (2·1²)   d = 18 = (2·3²)  d = 50 = (2·5²)  d = 98 = (2·7²)  d = 162 = (2·9²)
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
       4    3    5     24    7   25     60   11   61    112   15  113    180   19  181
      *6    8   10    *30   16   34    *70   24   74   *126   32  130   *198   40  202
       8   15   17    *36   27   45     80   39   89    140   51  149   *216   63  225
     *10   24   26    *42   40   58    *90   56  106   *154   72  170   *234   88  250
      12   35   37     48   55   73   *100   75  125    168   95  193    252  115  277
     *14   48   50    *54   72   90   *110   96  146   *182  120  218   *270  144  306
      16   63   65     60   91  109    120  119  169   *196  147  245    288  175  337
     *18   80   82    *66  112  130   *130  144  194   *210  176  274   *306  208  370
      20   99  101    *72  135  153    140  171  221    224  207  305   *324  243  405
     *22  120  122    *78  160  178   *150  200  250   *238  240  338   *342  280  442
      24  143  145     84  187  205    160  231  281    252  275  373    360  319  481
     *26  168  170    *90  216  234   *170  264  314   *266  312  410   *378  360  522

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c.


Table III  Type E2 Primitive Triples

      d = 8 = (2·2²)   d = 32 = (2·4²)  d = 72 = (2·6²)  d = 128 = (2·8²) d = 200 = (2·10²)
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
      12    5   13     40    9   41     84   13   85    144   17  145    220   21  221
     *16   12   20    *48   20   52    *96   28  100   *160   36  164   *240   44  244
      20   21   29     56   33   65   *108   45  117    176   57  185    260   69  269
     *24   32   40    *64   48   80   *120   64  136   *192   80  208   *280   96  296
      28   45   53     72   65   97    132   85  157    208  105  233   *300  125  325
     *32   60   68    *80   84  116   *144  108  180   *224  132  260   *320  156  356
      36   77   85     88  105  137    156  133  205    240  161  289    340  189  389
     *40   96  104    *96  128  160   *168  160  232   *256  192  320   *360  224  424
      44  117  125    104  153  185   *180  189  261    272  225  353    380  261  461
     *48  140  148   *112  180  212   *192  220  292   *288  260  388   *400  300  500
      52  165  173    120  209  241    204  253  325    304  297  425    420  341  541
     *56  192  200   *128  240  272   *216  288  360   *320  336  464   *440  384  584

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c.


Historical References

Dye, R.H., and Nickalls, R.W.D. "A New Algorithm for Generating Pythagorean Triples." Mathematical Gazette, Vol. 82 (March 1998), pp. 86-91.
Hollingdale, Stuart. Makers of Mathematics. Viking Penguin Inc.: New York, 1989, pp. 8-9.
Richardson, Bill. "Pythagorean Triples." www.math.wichita.edu/~richardson/pythagoreantriples.html
Rutgers University. "Primitive Integral Solutions to x² + y² = z²." http://www.math.rutgers.edu/~erowland/tripleslist-long.html
Simms, Robert. "Using Counting Numbers to Generate Pythagorean Triples." http://www.math.clemson.edu/~rsimms
Spezeski, William J. "Rethinking Pythagorean Triples." Applications and Applied Mathematics. http://pvamu.edu/aam


The Pattern to Primitive Pythagorean Triples c. 2010

Introduction

In his efforts to find a mechanism for generating Pythagorean triples using just one side of the triangle, rather than using two sides or employing the two-number algorithm known to mathematicians since ancient times (where the two whole numbers satisfy r > s > 0, r - s is odd, and the greatest common divisor, or GCD, of r and s is 1), the author examined primitive triples. Primitive Pythagorean triples are those triples for which the GCD of the three sides is 1. The triple {3, 4, 5} possesses the smallest values that constitute a primitive triple; the triple {6, 8, 10} is not a primitive triple because its GCD is 2. No tables of primitive triples were readily available to the author when he began his work, so he devised tables of his own. This paper shares those tables and outlines the relationships between the triples in each table that would enable a researcher to expand their coverage. Rutgers (Rutgers), Clark, and Drexel are among the institutions of higher learning that have since hosted tables of primitive Pythagorean triples. Table I provides a sample listing of the first 15 PPTs listed by ascending value of c, which has been the format of preference for tables of triples. Tables in that format provide a good listing of PPTs under a given value for c, but they unfortunately do not present their information in a manner that establishes an easily visible relationship between the elements in the tables, one that simultaneously relates them to the overall pattern of Pythagorean triples and facilitates easy calculation of all successive PPTs in some uniform, ascending order. They often present their triples by ascending order of the hypotenuse's value, which has the advantage of combining multiple values for the legs with a given value for the hypotenuse, but they do not suggest how all triples might be related other than through the algorithm (presumably the r/s method) for their calculation.

Table I  Primitive Pythagorean Triples (listed by ascending value of c)

       c    a    b
       5    3    4
      13    5   12
      17    8   15
      25    7   24
      29   20   21
      37   12   35
      41    9   40
      53   28   45
      61   11   60
      65   33   56
      73   48   55
      85   13   84
      85   36   77
      89   39   80
      97   65   72


In mathematics, simplicity is a critical component of elegance. A mathematical expression can be elegant when it represents a relationship in a clear, concise, and simple manner. The Pythagorean Theorem, c² = a² + b², is one such elegant expression; it conveys the relationship between the three sides of a right triangle and will, in certain instances, consist of three whole numbers, which can be calculated using a three-part algorithm (r/s above). The elements of Pythagorean triples have been calculable using the r/s relationship by stipulating:

a = r² - s²        b = 2rs        c = r² + s²
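As a short illustration of the classic algorithm, the following Python sketch - the function name is illustrative, not drawn from the paper - enumerates triples from ascending r and s and flags which satisfy the primitivity conditions stated in the opening paragraph:

    from math import gcd

    def rs_triples(r_max):
        # a = r^2 - s^2, b = 2rs, c = r^2 + s^2 for r > s > 0;
        # r - s odd and gcd(r, s) = 1 mark the primitive triples.
        for r in range(2, r_max + 1):
            for s in range(1, r):
                a, b, c = r*r - s*s, 2*r*s, r*r + s*s
                primitive = (r - s) % 2 == 1 and gcd(r, s) == 1
                yield r, s, (a, b, c), primitive

    for r, s, t, prim in rs_triples(4):
        print(r, s, t, "primitive" if prim else "nonprimitive")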

Following the requirements for r and s, set forth in this article's opening paragraph, the classic algorithm will generate all of the PTs, but again, they will not be in order even if r and s follow an increasing-magnitude format, as done with Table II. The algorithm will generate both primitive and nonprimitive triples.

Table II  Using the r, s Method for Generating Pythagorean Triples
(Listed by Ascending Order of r then s)

       r    s      a    b    c
       2    1      3    4    5
       3    2      5   12   13
       4    1     15    8   17
       4    3      7   24   25
       5    2     21   20   29
       5    4      9   40   41
       6    1     35   12   37
       6    5     11   60   61
       7    2     45   28   53
       7    3    *40   42   58

Author's note: The triple marked with an asterisk is not primitive. It was included to illustrate the progression of values for a, b, and c.

Pythagorean triples - those right triangles for which the values of a, b, and c are all represented by whole numbers - have held a fascination for mathematicians. The most famous problem involving Pythagorean triples, or PTs, was Fermat's Last Theorem, which postulated that the relationship between the three sides a, b, and c can never be duplicated with whole-number values for any power greater than two. Other work with PTs has seen several means derived for calculating a PT from just one side, including the author's "Generating Pythagorean Triples Using Only Side a." Tables III through V present the author's work in three tables that organize PPTs and certain placeholders; the tables have been expanded modestly from his earlier paper in order to reveal how others may calculate the infinite progressions of PPTs with ease, either by hand or by computer. The PPTs have been divided into separate tables for odd values (Table III) and even values (Tables IV and V) based upon the characteristics of a value the author defines as r, which is used to calculate d, the difference between c and b in each table's columns.

The Tables of Primitive Pythagorean Triples

Table III, in its infinite extension either rightward or downward, will include all of the odd-number values of a that generate PPTs. The first column includes every odd number greater than one as a value for a (the integer one presents a special case - see below); all of the other columns have repetitions of the values for a found in column one, but they will have different values for d, as there are multiple sets of triples for some of the odd-number values for a. As explained in the author's earlier paper on calculating triples, each column's value for r, from which d is calculated in the three tables, increases by two per column moving infinitely rightward, making the values for d an infinite progression of odd perfect squares. In calculating d from r in Table III, it may seem unnecessary to multiply r² by 1, but that is done in order to establish procedures as consistent as possible across the three tables. Nonprimitives can be generated from the PPTs in Table III by multiplying any triple in the table by any natural number greater than one, yielding infinite series of nonprimitives that begin with every PPT and nonprimitive in the table's array of triples or in any of the triples that result from horizontal or vertical extensions of the table. Each column in Table III begins with what the author terms an identity statement, where a = d and c = a. That identity statement is not a triple, but it does aid in the generation of triples. There are also a number of nonprimitives, marked with asterisks, that arise in all three tables. They may be found in the columns of each table by following specific counting algorithms. In Table III, the count proceeds downward in each column beginning with the first triple below the identity statement. For any number in the count that is a multiple of any prime factor of the r for that column, the triple will be a nonprimitive. In column one, there are no prime factors of r, so there will not be any nonprimitive triples in the column. In column two, 3 is the only prime factor of 3, so nonprimitives will be found by counting downward from the identity statement {9, 0, 9}, finding the first nonprimitive at {27, 36, 45}, the elements of which are all cleanly divisible by 3.

Table III  Type O Primitive Triples

      d = 1 = (1·1²)   d = 9 = (1·3²)   d = 25 = (1·5²)  d = 49 = (1·7²)  d = 81 = (1·9²)  d = 121 = (1·11²)
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
       1    0    1      9    0    9     25    0   25     49    0   49     81    0   81    121    0  121
       3    4    5     15    8   17     35   12   37     63   16   65     99   20  101    143   24  145
       5   12   13     21   20   29     45   28   53     77   36   85    117   44  125    165   52  173
       7   24   25    *27   36   45     55   48   73     91   60  109   *135   72  153    187   84  205
       9   40   41     33   56   65     65   72   97    105   88  137    153  104  185    209  120  241
      11   60   61     39   80   89    *75  100  125    119  120  169    171  140  221    231  160  281
      13   84   85    *45  108  117     85  132  157    133  156  205   *189  180  261    253  204  325
      15  112  113     51  140  149     95  168  193   *147  196  245    207  224  305    275  252  373
      17  144  145     57  176  185    105  208  233    161  240  289    225  272  353    297  304  425
      19  180  181    *63  216  225    115  252  277    175  288  337   *243  324  405    319  360  481
      21  220  221     69  260  269   *125  300  325    189  340  389    261  380  461    341  420  541
      23  264  265     75  308  317    135  352  377    203  396  445    279  440  521   *363  484  605
      25  312  313    *81  360  369    145  408  433    217  456  505    297  504  585    385  552  673
      27  364  365     87  416  425    155  468  493    231  520  569    315  572  653    407  624  745
      29  420  421     93  476  485    165  532  557   *245  588  637    333  644  725    429  700  821

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c. The table's first row constitutes what the author terms the identity statement; it is not a triple, but it does serve as the basis for calculating the members of its column's infinite set of triples.


Table IV  Type E1 Primitive Triples

      d = 2 = (2·1²)   d = 18 = (2·3²)  d = 50 = (2·5²)  d = 98 = (2·7²)  d = 162 = (2·9²) d = 242 = (2·11²)  …
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
       2    0    2     18    0   18     50    0   50     98    0   98    162    0  162    242    0  242
       4    3    5     24    7   25     60   11   61    112   15  113    180   19  181    264   23  265
      *6    8   10    *30   16   34    *70   24   74   *126   32  130   *198   40  202   *286   48  290
       8   15   17    *36   27   45     80   39   89    140   51  149   *216   63  225    308   75  317
     *10   24   26    *42   40   58    *90   56  106   *154   72  170   *234   88  250   *330  104  346
      12   35   37     48   55   73   *100   75  125    168   95  193    252  115  277    352  135  377
     *14   48   50    *54   72   90   *110   96  146   *182  120  218   *270  144  306   *374  168  410
      16   63   65     60   91  109    120  119  169   *196  147  245    288  175  337    396  203  445
     *18   80   82    *66  112  130   *130  144  194   *210  176  274   *306  208  370   *418  240  482
      20   99  101    *72  135  153    140  171  221    224  207  305   *324  243  405    440  279  521
     *22  120  122    *78  160  178   *150  200  250   *238  240  338   *342  280  442   *462  320  562
      24  143  145     84  187  205    160  231  281    252  275  373    360  319  481   *484  363  605
     *26  168  170    *90  216  234   *170  264  314   *266  312  410   *378  360  522   *506  408  650
      28  195  197     96  247  265    180  299  349    280  351  449    396  403  565    528  455  697
     *30  224  226   *102  280  298   *190  336  386   *294  392  490   *414  448  610   *550  504  746

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c. The table's first row constitutes what the author terms the identity statement; it is not a triple, but it does serve as the basis for calculating the members of its column's infinite set of triples. The table is expanded from the author's earlier version (in Generating Pythagorean…) by setting the difference between successive values for a in each column at 2d rather than 4d; all of the triples so added are nonprimitive, but they, too, serve as placeholders for better visualization of the progression of the primitives.


Table V  Type E2 Primitive Triples

      d = 8 = (2·2²)   d = 32 = (2·4²)  d = 72 = (2·6²)  d = 128 = (2·8²) d = 200 = (2·10²) d = 288 = (2·12²)  …
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
       8    0    8     32    0   32     72    0   72    128    0  128    200    0  200    288    0  288
      12    5   13     40    9   41     84   13   85    144   17  145    220   21  221    312   25  313
     *16   12   20    *48   20   52    *96   28  100   *160   36  164   *240   44  244   *336   52  340
      20   21   29     56   33   65   *108   45  117    176   57  185    260   69  269   *360   81  369
     *24   32   40    *64   48   80   *120   64  136   *192   80  208   *280   96  296   *384  112  400
      28   45   53     72   65   97    132   85  157    208  105  233   *300  125  325    408  145  433
     *32   60   68    *80   84  116   *144  108  180   *224  132  260   *320  156  356   *432  180  468
      36   77   85     88  105  137    156  133  205    240  161  289    340  189  389    456  217  505
     *40   96  104    *96  128  160   *168  160  232   *256  192  320   *360  224  424   *480  256  544
      44  117  125    104  153  185   *180  189  261    272  225  353    380  261  461   *504  297  585
     *48  140  148   *112  180  212   *192  220  292   *288  260  388   *400  300  500   *528  340  628
      52  165  173    120  209  241    204  253  325    304  297  425    420  341  541    552  385  673
     *56  192  200   *128  240  272   *216  288  360   *320  336  464   *440  384  584   *576  432  720
      60  221  229    136  273  305    228  325  397    336  377  505    460  429  629    600  481  769
     *64  252  260   *144  308  340   *240  364  436   *352  420  548   *480  476  676   *624  532  820

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c. The table's first row constitutes what the author terms the identity statement; it is not a triple, but it does serve as the basis for calculating the members of its column's infinite set of triples.

Comparison of Presentation Formats

To bear out the author's contention that the hypotenuse-based traditional format (t-format) of ascending values of c does not easily lend itself to analysis, Table VI compares the progression of a, b, and c for the t-format with a part of the author's three-table revised format (r-format). In the t-format, c proceeds in jumps of 4x, where x at this early stage may equal 0, 1, 2, or 3 without any apparent intermediate-term pattern to the multiple of four involved. Meanwhile b in the t-format tends to jump around, with a few negative steps thrown in, and a has even more apparent variability. There is, therefore, a hint of pattern to c, but none for a or b in the t-format. When switching over to the r-format, a wealth of patterns suddenly appears. Examining just the triples in Table III's first column, it is evident that organization by a creates reproducible patterns. In that column, the values for a all proceed by increments of +2. The values of b and c then increase by increasing steps of +4, with both b and c increasing by the same amount at each step because the value of c - b (or d) is held constant within each column. While the other two tables follow different increments for increasing the values of their a, b, and c elements, they too follow easily detected patterns, all resulting from the values for a and d, which must have r as a common factor in order to produce PTs, as explained in the author's earlier paper (deAprix). Once a pattern has been discovered, prediction becomes easier and data can be generated employing simple calculations or a computerized algorithm.

Tables III through V provide a good introduction to PPTs, but they are only the briefest portion of an infinite array of primitives. However, they are set up in a manner that easily facilitates their infinite expansion, either to the right for ever-greater values of d or downward for increasing values of a while holding d constant. An infinite expansion of the tables would not, though, generate all PTs; the infinite number of nonprimitives that would not appear in any of the three tables can be generated by multiplying the members of any - or of all - triples in the tables by successively larger whole numbers. The tables also do not require that b be larger than a for any given triple: while it is customary for b to represent the larger of the two legs, an unbroken progression of the triples following the author's format actually requires a to be the larger leg at the top of all but one of the three tables' columns. That leads to duplication of some sets, such as {5, 12, 13} and {12, 5, 13}. Such duplications have been retained in the tables so that the progressions in each column may flow mathematically unimpeded down the columns or across the rows.

Table VI  Comparing T-Format with R-Format Progressions of a, b, and c from Preceding Values

      T-Format change in:      R-Format (Type O Primitives, Column 1) change in:
        c     a     b            a     b     c
        8     2     8            2     8     8
        4     3     3            2    12    12
        8    -1     9            2    16    16
        4    13    -3            2    20    20
        8    -8    14            2    24    24
        4    -3     5            2    28    28
       12    19     5            2    32    32
        8   -17    15            2    36    36
        4    22    -4            2    40    40
        8    15    -1            2    44    44
       12   -35    29            2    48    48
        0    23    -7            2    52    52
        4     3     3            2    56    56
        8    26    -8            2    60    60

Chart I outlines a selection of relationships within and between the rows and columns of triples in Tables III-V. Other relationships exist, and they could also be used to expand the tables, but the relationships set forth in Chart I will be sufficient to expand any of the trio either rightward or downward. Those relationships all follow from the author's derivation of the algorithm for calculating a triple using just side a and choosing a value for d following specified rules. An examination of the rows in Chart I uncovers a strong similarity across the columns; in some instances, all three tables (each represented by its own column in the chart) will share an identical characteristic, while in others just the Type O primitives will have a distinctive feature. Chart I has certain properties that must be understood before employing them to expand Tables III-V. The chart has eight rows, with each examining one property or characteristic across the three tables. Rows 5 and 6, for example, result directly from the author's derivation of the formula for calculating triples using a. Setting a at d (Row 1) for all three tables created what the author termed the identity statement: that then required c to equal a (Row 2) because d = c - b; setting c any lower than d would mean that b would have a negative length, which is not possible in the Euclidean plane geometry from which the Pythagorean Theorem arose. If b = 0, then a = c and the geometric figure is reduced from a triangle to a line. If c is increased at an appropriately greater rate than a from that point onward, as c must be because it is the hypotenuse, then b > 0 and a right triangle results. For {a, b, c} to be a Pythagorean triple, the natural-number value by which c must increase compared to a must be determined.

Through examination of the formula the author derived for calculating PTs using a and specifying d, it becomes clear how a must progress in value in each column of the three tables in order for valid PTs to be created; the formula establishes what c's value must be, and it reveals why c must exceed a by margins linked to perfect squares. In the author's formula,

c = (a² - d²)/(2d) + d

Chart I  Characteristics of Primitive Pythagorean Triples in R-Format Progressions
(Comparing Tables III, IV, and V)

1. Initial 'a' value in each column
   Table III: a = d        Table IV: a = d        Table V: a = d
2. First row
   All three tables: an identity statement, where c = a and b = 0
3. Value of 'd' by columns
   Table III: progresses as the squares of all odd numbers times 1; d = (1 · r²) where r = 1, 3, 5, …
   Table IV:  progresses as the squares of all odd numbers times 2; d = (2 · r²) where r = 1, 3, 5, …
   Table V:   progresses as the squares of all even numbers times 2; d = (2 · r²) where r = 2, 4, 6, …
4. Progression of 'a' down a given column following the identity statement
   All three tables: +2r
5. Value of 'b'
   All three tables: b = (a² - d²)/2d
6. Value of 'c'
   All three tables: c = b + d
7. Progression of 'c', expressed in terms of 'a', down a given column following the identity statement
   Table III: begin counting down each column from the first triple below the identity statement; add twice the square of that count to a to obtain c; c - a thereby becomes two times a perfect square
   Tables IV and V: begin counting down each column from the first triple below the identity statement; add the square of that count to a to obtain c; c - a thereby becomes a perfect square
8. Locating the nonprimitives in each column
   Table III: begin counting down each column starting with the first triple below the identity statement; any triple for which the count is a multiple of any prime factor of r is not primitive (column one has no prime factor for r)
   Tables IV and V: begin counting down each column starting with the first triple below the identity statement; any triple for which the count is a multiple of two or of any prime factor of r is not primitive (Table IV's column one has no prime factor for r)


Beginning with Tables IV and V, that formula for calculating c can be used to determine the numerical value of each successive c, designated cn, in each of Table IV's and V's columns, working with the value of ai, which is defined as the value a assumes in the identity statement at the head of each column. That process begins with the author's formula,

cn = (a² - d²)/(2d) + d

where r may then be substituted for d, as d = 2r²:

cn = (a² - (2r²)²)/(2(2r²)) + 2r²

Now a is divided into two components - namely ai, the initial value that a assumes in the identity statement at the head of each column in the two tables, and xr, for which x is the number of times r must be added to ai to bring it up to an, the value of a for the specific PT that includes cn:

cn = ((ai + xr)² - 4r⁴)/(4r²) + 2r²

Squaring the binomial representing an yields:

cn = (ai² + 2ai·xr + x²r² - 4r⁴)/(4r²) + 2r²

Next, substituting 2r² for ai and dividing the fraction into its components:

cn = (4r⁴ + 4r³x + x²r² - 4r⁴)/(4r²) + 2r²

Simplifying and rearranging the terms:

cn = 2r² + xr + x²/4

Because ai = 2r², the quantity 2r² + xr can be replaced with an in the equation to produce:

cn = an + x²/4

That means that any given cn is equal to the corresponding an plus the quantity x²/4. Because each step down a column in Table IV or Table V adds 2r to a, x increases by 2 at every step and will always be an even number. Dividing the square of any even number by four produces a perfect square; Table VIII illustrates that principle. The square of any even number e may be decomposed by writing its root as 2 times some whole number y. That means

e² = (2y)² = (2y)(2y) = (2)²(y)²

which means that division by 4 leaves y², a perfect square, because y, the half of an even whole number, must itself be a whole number - either a smaller even whole number or an odd whole number.
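The algebra can be spot-checked numerically. The following short Python sketch, assuming the formulas above, verifies cn = an + x²/4 for the first several rows of a few Type E columns:

    # Checking c_n = a_n + x^2/4 for Type E columns (d = 2r^2), where x
    # counts the r-steps from the identity value a_i = 2r^2.
    for r in (1, 2, 3):
        d = 2 * r * r
        a_i = d
        for step in range(1, 8):          # step n down the column
            x = 2 * step                  # a advances by 2r, so x is even
            a_n = a_i + x * r
            c_n = (a_n * a_n - d * d) // (2 * d) + d
            assert c_n == a_n + (x * x) // 4
    print("c_n - a_n is the perfect square of the count, as claimed")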

Table VIII  Results of Dividing Squares of Even Numbers by Four

       e     e²    e²/4
       2      4      1
       4     16      4
       6     36      9
       8     64     16
      10    100     25

The difference between any cn and its corresponding an in Table IV or Table V will therefore be a perfect square, and that perfect square will be the square of the counting order of that PT below the identity statement in its column. Because d in Table III equals 1 · r² rather than 2 · r², as it does in Tables IV and V, any given cn will exceed its corresponding an by twice a perfect square - x²/2 - when following the same calculation process used to derive x²/4 for Tables IV and V. Table IX illustrates.

Table IX  Results of Dividing Squares of Even Numbers by Two

       e     e²    e²/2
       2      4      2   (2·1)
       4     16      8   (2·4)
       6     36     18   (2·9)
       8     64     32   (2·16)
      10    100     50   (2·25)

Matrices for All Pythagorean Triples

As all PPTs will fall within the infinite bounds of one of the three tables, all triples may be calculated from them. Every nonprimitive is a multiple of some primitive. Three-dimensional matrices may be calculated from Tables III-V by multiplying every PT in the tables - primitive and nonprimitive alike - by the succession of all whole numbers beginning with two. Considerable duplication will exist because the three tables include a fairly large proportion of nonprimitives, but those are needed for full appreciation of the progression of, or pattern to, triples; they could be removed using the counting algorithm outlined above for identifying the nonprimitives in the columns of each table, but doing so is a personal preference, not an essential step. Tables IV and V, which between them include all the primitive triples that have even values for a, could be combined, but the author has kept them separate to permit easy, visual synchronization of the triples among the three tables. Others may wish to combine them when creating their own tables or computerized databases. Still others have created different types of relationships, such as uniquely generated triples and co-triples that reverse a and b (Spezeski). The primary advantage of the revised format created by the author lies in the order provided by that newer format. Table VI provides one illustration of the order introduced by the author's work; Table X offers an insight into another: the permissible values for d, which are not even evident in traditional display formats. Their value in calculating triples is not fully apparent even when they are placed in numerical order, as in Table X; the author's three tables of PPTs put the data into a working format that enables further investigation into relationships that contribute to the calculation of primitive and nonprimitive triples. Ultimately, each table can be expanded into a three-dimensional matrix, with the combined matrices including all the Pythagorean triples, whether primitive or nonprimitive (though all of the primitives will be located on the original two-dimensional surfaces that gave rise to the three-dimensional matrices). All of the non-Pythagorean triples that also form right triangles will be found in between the Pythagorean triples, creating thereby the basis for the trigonometric functions that flow in unbroken streams, with ever-more minute detail being achieved through ever-longer decimal sequences that disappear into the dust of infinity, creating solid cubes comprised of the data points that represent every triple imaginable. All of that constitutes a natural progression from the need to create a simple matrix that would include all of the primitive Pythagorean triples in the ascending order of their magnitude.

Table X  Permissible Values of d (for d ≤ 200) for Primitive Triples

1   2   8   9   18   25   32   49   50   72   81   98   121   128   162   169   200
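The multiplication scheme for sweeping out the nonprimitives described above is simple enough to sketch in Python; the helper below is illustrative only:

    # Every nonprimitive is a whole-number multiple of some primitive, so
    # multiplying each table entry by 2, 3, 4, ... generates the triples
    # that the three tables themselves do not contain.
    def multiples(triple, k_max):
        a, b, c = triple
        return [(k * a, k * b, k * c) for k in range(2, k_max + 1)]

    print(multiples((3, 4, 5), 5))   # (6,8,10), (9,12,15), (12,16,20), (15,20,25)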

References

deAprix, Albert H., Jr. "Generating Pythagorean Triples Using Only Side a." http://www.pdfcoke.com/%20deAprix
Rutgers University. "Primitive Integral Solutions to x² + y² = z²." http://www.math.rutgers.edu/~erowland/tripleslist-long.html
Spezeski, William J. "Rethinking Pythagorean Triples." Applications and Applied Mathematics. http://pvamu.edu/aam


Addendum to the Patterns to Primitive Pythagorean Triples
c. 2013

One of the author's advantages in presenting his work as a developing e-book is the ability to add to or correct anything previously published. It has been suggested to the author that Tables IV and V of the preceding paper be combined into one table for the even values of a in the infinite set of triples designated {a, b, c}, where c is the hypotenuse and a, b, and c are whole numbers. That combined table is herewith presented for readers.

Combined Table IV-V  Type E Primitive Triples

      d = 2 = (2·1²)   d = 8 = (2·2²)   d = 18 = (2·3²)  d = 32 = (2·4²)  d = 50 = (2·5²)  d = 72 = (2·6²)  …
       a    b    c      a    b    c      a    b    c      a    b    c      a    b    c      a    b    c
       2    0    2      8    0    8     18    0   18     32    0   32     50    0   50     72    0   72
       4    3    5     12    5   13     24    7   25     40    9   41     60   11   61     84   13   85
      *6    8   10    *16   12   20    *30   16   34    *48   20   52    *70   24   74    *96   28  100
       8   15   17     20   21   29    *36   27   45     56   33   65     80   39   89   *108   45  117
     *10   24   26    *24   32   40    *42   40   58    *64   48   80    *90   56  106   *120   64  136
      12   35   37     28   45   53     48   55   73     72   65   97   *100   75  125    132   85  157
     *14   48   50    *32   60   68    *54   72   90    *80   84  116   *110   96  146   *144  108  180
      16   63   65     36   77   85     60   91  109     88  105  137    120  119  169    156  133  205
     *18   80   82    *40   96  104    *66  112  130    *96  128  160   *130  144  194   *168  160  232
      20   99  101     44  117  125    *72  135  153    104  153  185    140  171  221   *180  189  261
     *22  120  122    *48  140  148    *78  160  178   *112  180  212   *150  200  250   *192  220  292
      24  143  145     52  165  173     84  187  205    120  209  241    160  231  281    204  253  325
     *26  168  170    *56  192  200    *90  216  234   *128  240  272   *170  264  314   *216  288  360
      28  195  197     60  221  229     96  247  265    136  273  305    180  299  349    228  325  397
     *30  224  226    *64  252  260   *102  280  298   *144  308  340   *190  336  386   *240  364  436

Author's note: Triples marked with an asterisk are not primitives. They were included as placeholders to illustrate the progression of values for a, b, and c. The table's first row constitutes what the author terms the identity statement; it is not a triple, but it does serve as the basis for calculating the members of its column's infinite set of triples.


Structural Analysis

Solving certain problems in mathematics requires first attaining an understanding of the underlying structure of the number system before a paradigm may be constructed that leads to such a problem's solution. Such is the case with Goldbach's Conjecture, which was not actually devised initially by Goldbach. In attempting to tackle this challenge, which the author first completed in 2002 but modified slightly in 2009 for independent internet publication, the author first investigated the problem's structure before proposing a solution to it. The author apologizes to the reader for the rather complicated presentation, but the complex structure he discovered yielded a complicated explanation. He would like to attempt a further simplification in the not-too-distant future, but there are higher-ranking priorities at the moment; he would prefer a more elegant solution to the problem. The solution does not tackle the Conjecture directly. Instead, it sets up barriers, or - as the author likes to visualize them - hurdles that potential prime pairs must surmount to be primes that add to consecutive even natural numbers. The author found that raising the hurdles well above those that actually exist still fails to eliminate a certain minimal number of prime pairs for each family, or set, of even natural numbers. The author's modified base-6 analytical tool, used in his work on Ulam's Spiral Square and the Hardy-Littlewood Conjecture F, found application in his study of Goldbach; that hints that other problems may be waiting out in mathematical limbo that might benefit from its application. The author has already employed that modified-base tool in the resolution of another problem. The easiest way to visualize the process of surmounting the barriers to creating new paired primes for successive numbers is to picture a tower of ornamental water pools in a garden: the top pool must first fill before water is free to flow down into the second-highest pool and then into successively lower pools. Once a pool is filled, the water is permitted to flow into the next lower pool in the tower. The water serves as an unending - or infinite - supply of numbers that flow through the barriers.


Goldbach's Conjecture
c. 2009

In 1742 Christian Goldbach conjectured that every odd natural number greater than 6 is the sum of three primes, as 5+2+2=9. Goldbach sent his conjecture to Leonhard Euler, but that eminent mathematician could neither disprove it nor devise a proof for it. Euler, however, recast the conjecture in its more widely known form: that every even natural number greater than 2 is the sum of two primes. Euler's recasting of Goldbach's Conjecture leads back to the original conjecture because 3 can be added to every even natural number starting with 4 to generate every odd natural number beginning with 7. Historically, Goldbach was not even the first to recognize the problem. Rene Descartes discovered the even-natural-number version of the problem before Goldbach penned his letter to Euler, but Paul Erdos suggested that "it is better that the conjecture be named after Goldbach because, mathematically speaking, Descartes was infinitely rich and Goldbach was very poor." [Prime Glossary] Ivan Vinogradov proved in 1937 that every sufficiently large odd integer can be written as the sum of three primes. Number theorists have meanwhile demonstrated the Conjecture's validity up to at least 4×10¹⁴ [Prime Conjectures], but no one has been able to prove Goldbach's Conjecture in the general case. Current work includes the use of supercomputers to partition primes using advanced algorithms, and assaults on the problem employing probability analysis indicate that it is extremely unlikely that the Conjecture is invalid due to some case far down the number line; but even a vanishingly small probability that such a number exists means that it could well exist. An examination of just the first 66 even natural numbers reveals that while the number of prime pairs that sum to the members of a series of even natural numbers generally increases as the magnitude of those numbers increases, the relationship is not at all direct; that variability would seem to make a proof more difficult to construct.

Table I  Numbers of Prime Pairs Adding to Given Even Natural Numbers

       N  Pairs        N  Pairs        N  Pairs
       2    0         46    4         90    9
       4    1         48    5         92    4
       6    1         50    4         94    5
       8    1         52    3         96    7
      10    2         54    5         98    3
      12    1         56    3        100    6
      14    2         58    4        102    8
      16    2         60    6        104    5
      18    2         62    3        106    6
      20    2         64    5        108    8
      22    3         66    6        110    6
      24    3         68    2        112    7
      26    3         70    5        114   10
      28    2         72    6        116    6
      30    3         74    5        118    6
      32    2         76    5        120   12
      34    4         78    7        122    4
      36    4         80    4        124    5
      38    2         82    5        126   10
      40    3         84    8        128    3
      42    4         86    5        130    7
      44    3         88    4        132    9
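The tallies in Table I can be reproduced by direct counting; the following minimal Python sketch, with illustrative function names, does so:

    # Count the prime pairs (p, q), p <= q, with p + q = n.
    def is_prime(n):
        if n < 2:
            return False
        return all(n % k for k in range(2, int(n ** 0.5) + 1))

    def prime_pairs(n):
        return [(p, n - p) for p in range(2, n // 2 + 1)
                if is_prime(p) and is_prime(n - p)]

    for n in (4, 20, 90, 120):
        print(n, len(prime_pairs(n)))     # 1, 2, 9, 12, matching Table I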

The Conjecture's intractability has been a considerable frustration to professional and amateur mathematicians. That frustration served as the basis for a 2000 novel, Uncle Petros & Goldbach's Conjecture by Apostolos Doxiadis. Faber and Faber and Bloomsbury Publishing, the novel's two publishers, offered a $1 million prize for a valid proof of the Conjecture, provided that it was submitted by March 15, 2002. The observable variance in the number of prime pairs that add to just the first few even natural numbers out of the infinite total leaves open the possibility, however minuscule, that the number of paired primes could drop back to zero for some yet-unexamined even natural number lurking somewhere in the dark regions beyond 4×10¹⁴. If an underlying rule or mechanism governing the number of paired primes that sum to a given even natural number could be discovered, then it could be determined whether or not the number of prime pairs could ever drop to zero. The following analysis of the Conjecture will reveal the mechanisms that govern the number of prime pairs that sum to any given even natural number. For simplicity, a few abbreviations will be used throughout this article to represent frequently employed terms: ENN will represent even natural number; ONP, odd-number pair; and PP, prime pair. An ONP will be any pair of odd natural numbers that add to a given ENN; both members of an ONP may be primes, both may be composites, or one member of the pair may be a prime while the other is a composite. A PP will consist only of two primes that sum to a given ENN. Though primes create their own unique progression and cannot be generated in their infinite succession by any known mathematical expression without the inclusion of numerous composites, patterns can be imposed upon them. The simplest of such imposed patterns results when the even numbers - which include only the prime 2, every subsequent even number being composite - are segregated from the odd numbers, thereby concentrating the infinite remainder of the primes in a set that encompasses only half of the natural numbers. A series of operations, or manipulations, will be performed in this analysis that will reveal how and why the number of paired primes summing to any series of ENNs varies. The resulting information will then enable a paradigm, or model, to be constructed demonstrating that every ENN greater than 2 has at least one set of PPs that sums to it.

Assignment of Natural Numbers to Six Sets

A simple operation can distribute the natural numbers into six sets, labeled A through F, with 1, 2, 3, 4, 5, and 6 designated as the first members of those sets in that counting order and the remaining natural numbers through infinity being assigned to those sets in similar counting order on the basis of 6n + x, where x is the initial member of the set. The first five members of each set are set forth in Table II.

Table II  Assignment of the Natural Numbers to Six Sets

       A     B     C     D     E     F
       1     2     3     4     5     6
       7     8     9    10    11    12
      13    14    15    16    17    18
      19    20    21    22    23    24
      25    26    27    28    29    30
       …     …     …     …     …     …

Because 6 is a composite of 2 and 3, the assignment of all natural numbers to the six sets defined by x + 6n segregates 2 and its composites into Sets B, D, and F and 3 and its odd composites into Set C. That isolates all other primes and their odd composites, which are not also composites of 3, into Sets A and E.
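The set assignment itself reduces to the value of a number modulo 6, as the brief sketch below - illustrative, not the author's notation - shows:

    # Assign a natural number to its set A-F under the 6n + x scheme.
    def six_set(n):
        return "FABCDE"[n % 6]   # 1 -> A, 2 -> B, ..., 6 -> F, 7 -> A, ...

    print([(n, six_set(n)) for n in (1, 2, 3, 4, 5, 6, 7, 25, 35)])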

Next, the set of pairs of natural numbers that sum to any given ENN can be arranged in dual vertical columns, with the left-hand column starting at the top with 1 and continuing downward through n/2 (which will be a whole number because every ENN, being a composite of 2, is divisible by 2 without remainder). The right-hand column will begin at the bottom with n/2 and continue upward through n-1. That process is illustrated in Table III for the ENN 20.

Table III  Division of a Representative ENN into its Component Summing Pairs
(primes marked *)

              20
       1   +  19*
       2*  +  18
       3*  +  17*
       4   +  16
       5*  +  15
       6   +  14
       7*  +  13*
       8   +  12
       9   +  11*
      10   +  10

With 1 and n-1 paired, 2 and n-2 paired, and so forth, the entire set of summing pairs through n/2 plus n/2 is aligned visually. Primes paired in this fashion can be relatively easily observed and counted to yield the total number of prime pairs that add to any given ENN. For 20, there are two prime pairs that sum to the number: 3 and 17, plus 7 and 13. If the paired natural numbers that sum to each ENN are replaced with their modular set designations of A through F (from Table II) for six consecutive ENNs (labeled N1 through N6 in Table IV below), three patterns of pairings emerge due to the manner in which the summing pairs were aligned mechanically. The six possible pairing designations for any Nx/2 are AA, BB, CC, DD, EE, or FF. With every possible upper cell having six rows, because all of the natural numbers belong to one of the six sets, and with the first number in the left column for each pair of summing numbers starting with 1, which belongs to Set A, the pairing designation for Nx/2 will control the pattern of pairings across the two columns for each ENN by serving as the starting point for the ascending right-hand column. Because there are only six set memberships possible for Nx/2 (A through F), the patterns will repeat through infinity. Under this arrangement, 20 from Table III would belong to Set D and the DD pairings. Paired numbers in the upper cell are then examined to determine whether the summing pairs for an ENN include AA, EE, or AE/EA match-ups. Whenever Nx/2 is a member of either Set A or Set D, one AA pairing per upper cell will result due to the modular pattern of the pairings. If Nx/2 belongs to either Set B or Set E, one EE pairing per upper cell will result. If, however, Nx/2 belongs to either Set C or Set F, there will be an AE plus an EA pairing in each upper cell. There will be only one bottom cell for a given ENN, but there can be anywhere from zero (where Nx/2 equals from 1 to 6) to an infinite number of upper cells (for an infinitely large ENN), that number being determined by how many times 6 divides into Nx/2 exclusive of any remainder. As Sets A and E (from Table II) contain all of the primes except for 2 and 3, those ENNs that have AE/EA pairings ought to have a greater potential for paired primes than those ENNs that have either AA or EE pairings. That general, though clearly not absolute, trend is evident in Table V, which rearranges the number of paired primes for each ENN from Table I into a new format based upon A and E pairings. Each ENN in the horizontal set for AA, EE, and AE/EA pairings is six greater than the ENN to its immediate left; for AA, for example, the set runs 2, 8, 14 … through infinity. Table V thereby reveals that some of the variability of PPs is due to the format of the A and E pairings.
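The mechanical alignment just described can be reproduced by listing each summing pair with its set labels; the short Python sketch below - names illustrative - prints the pattern for the ENN 20, whose half, 10, lies in Set D and therefore yields one AA pairing per upper cell:

    def six_set(n):
        return "FABCDE"[n % 6]

    def labeled_pairs(enn):
        return [(k, enn - k, six_set(k) + six_set(enn - k))
                for k in range(1, enn // 2 + 1)]

    for row in labeled_pairs(20):
        print(row)   # (1, 19, 'AA'), (2, 18, 'BF'), ..., (10, 10, 'DD')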

Table IV  Possible Pairings and the Resulting Group Memberships for Nx/2

                              I      II     III    IV     V      VI
                              N1     N2     N3     N4     N5     N6

  immediate upper cell        AA     AC     AE     AA     AC     AE
  (duplicated by any          BF     BB     BD     BF     BB     BD
  cells yet higher)           CE     CA     CC     CE     CA     CC
                              DD     DF     DB     DD     DF     DB
                              EC     EE     EA     EC     EE     EA
                              FB     FD     FF     FB     FD     FF

  bottom cell                 AA     AC     AE     AA     AC     AE
                                     BB     BD     BF     BB     BD
                                            CC     CE     CA     CC
                                                   DD     DF     DB
                                                          EE     EA
                                                                 FF

Table V  Number of PPs by A and E Pairings

  Pairings        Number of Prime Pairs
  AA (2-128)      0  1  2  2  3  2  2  3  4  3  3  2  5  4  5  4  3  5  6  6  4  3
  EE (4-130)      1  2  2  3  2  4  3  4  3  4  5  5  5  5  4  5  6  6  7  6  5  7
  AE/EA (6-132)   1  1  2  3  3  4  4  5  5  6  6  6  7  8  9  7  8  8 10 12 10  9

The total number of AA, EE, and AE/EA pairings for each ENN from 2 through 132 can be substituted for the data in Table V to produce a useful pattern. The number of AA and EE pairings equals the number of upper cells for each ENN plus the number of pairings (0 or 1) found in the bottom cell for the ENNs belonging to Groups I, II, IV, or V from Table IV. The number of AE/EA pairings equals two times the number of upper cells plus the number of pairings (1 or 2) found in the bottom cell for the ENNs belonging to Groups III or VI from Table IV.

Table VI  Total Number of A and E Pairings

  Pairings        Number of A and E Pairings, by ENN
  AA (2-128)      1  1  2  2  3  3  4  4  5  5  6  6  7  7  8  8  9  9 10 10 11 11
  EE (4-130)      0  1  1  2  2  3  3  4  4  5  5  6  6  7  7  8  8  9  9 10 10 11
  AE/EA (6-132)   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22

The total number of pairings displayed in each data cell in Table VI consists of match-ups that include two primes, two composites, or a prime plus a composite. The number in each of those data cells sets an upper limit for the total possible number of PPs for a given ENN, except for 4 (for which 2+2=4) and for cases where 3 is paired with another prime (such as 3+5=8 or 3+97=100). This upper limit not only helps make the pattern of PPs clearer; it, too, plays a role in analyzing Goldbach's Conjecture.

How the Number of PPs Is Determined

Table VII sets forth the ONPs for each ENN from 6 through 90 that have AE/EA pairings, the range being reduced from 132 in this table and in Tables IX and XI due to space considerations. Even-number pairs were not included in this table or its companion tables that shortly follow, because such pairings are irrelevant - only 2+2=4 has any bearing upon the Conjecture's validity - and because their exclusion makes each of the three tables a bit more readable. The quantity of odd-number pairings for successive ENNs in Table VII increases by the pattern +1, +2, +1, +2 … because one pair is added whenever Nx/2 is odd but not when Nx/2 is even. Since every third ENN has AE/EA pairings, there will always be at least one odd Nx/2 between AE/EA pairings, and alternating AE/EA pairings will end up with two, all due to the mechanical way each ENN had its pairings set up. Table VIII displays the ONPs for each ENN with AE/EA pairings in column two. Column three then lists the number of ONPs that have at least one composite of 3 in the pair. That third column (except for 6, which is the special case 3+3=6) also follows a progression, in this case +0, +1, +0, +1 …, which, like the number of odd-number pairings for ENNs, results from the mechanical way the natural-number sets and the summing pairs for ENNs were created. Column four subtracts column three's results from the figures in column two to yield remainders.



Table VII  Odd-Number Pairings for ENNs with AE/EA Pairings, 6 through 90
(prime pairs marked *)

    6      12     18     24     30     36     42     48     54     60     66     72     78     84     90
  1+5    1+11   1+17   1+23   1+29   1+35   1+41   1+47   1+53   1+59   1+65   1+71   1+77   1+83   1+89
  3+3*   3+9    3+15   3+21   3+27   3+33   3+39   3+45   3+51   3+57   3+63   3+69   3+75   3+81   3+87
         5+7*   5+13*  5+19*  5+25   5+31*  5+37*  5+43*  5+49   5+55   5+61*  5+67*  5+73*  5+79*  5+85
                7+11*  7+17*  7+23*  7+29*  7+35   7+41*  7+47*  7+53*  7+59*  7+65   7+71*  7+77   7+83*
                9+9    9+15   9+21   9+27   9+33   9+39   9+45   9+51   9+57   9+63   9+69   9+75   9+81
                       11+13* 11+19* 11+25  11+31* 11+37* 11+43* 11+49  11+55  11+61* 11+67* 11+73* 11+79*
                              13+17* 13+23* 13+29* 13+35  13+41* 13+47* 13+53* 13+59* 13+65  13+71* 13+77
                              15+15  15+21  15+27  15+33  15+39  15+45  15+51  15+57  15+63  15+69  15+75
                                     17+19* 17+25  17+31* 17+37* 17+43* 17+49  17+55  17+61* 17+67* 17+73*
                                            19+23* 19+29* 19+35  19+41* 19+47* 19+53* 19+59* 19+65  19+71*
                                            21+21  21+27  21+33  21+39  21+45  21+51  21+57  21+63  21+69
                                                   23+25  23+31* 23+37* 23+43* 23+49  23+55  23+61* 23+67*
                                                          25+29  25+35  25+41  25+47  25+53  25+59  25+65
                                                          27+27  27+33  27+39  27+45  27+51  27+57  27+63
                                                                 29+31* 29+37* 29+43* 29+49  29+55  29+61*
                                                                        31+35  31+41* 31+47* 31+53* 31+59*
                                                                        33+33  33+39  33+45  33+51  33+57
                                                                               35+37  35+43  35+49  35+55
                                                                                      37+41* 37+47* 37+53*
                                                                                      39+39  39+45  39+51
                                                                                             41+43* 41+49
                                                                                                    43+47*
                                                                                                    45+45

The same process is then repeated in Table VIII for the composites of 5, 7, and 11. However, a new factor comes into play due to the manner in which the columnar pairings were created. In column five, every fifth ENN - all of which are composites of 5 - has composites of 5 paired against each other, which reduces the total number of ONPs with composites of 5 in them. This then yields a higher remainder in column six and, ultimately, a greater number of PPs in most cases. That effect becomes noticeable with 90 and 120. The same thing happens with composites of 7 for every seventh ENN and with composites of 11 for every eleventh ENN, though that is not apparent due to Table VIII's brevity. It also holds true for the composites of the larger primes involved with larger ENNs. That is the same basic process that occurred with the CC pairings, but the larger the prime, the less obvious the process, because larger primes have fewer composites for any given ENN. Variation in the numbers of PPs becomes substantially a function of the location of composites in one column versus the location of primes in the other. If a given ENN is a composite of a particular prime (or primes), the composites of that prime (or those primes) will be paired against each other and will only bar that prime (or those primes) from matching up with a larger prime to form a PP. But, as with the match-ups of 3's composites one-third of the time, composites of larger primes do not usually make up all of the pairings lost by any smaller prime that is a factor of a given ENN. That means more PPs result, as can be observed in Table V for the AE/EA ENNs, which also have CC pairings. Another, though more modest, variation occurs because not every prime divides without remainder into every ENN. With 3, for example, only every third ENN is a composite of 3; the others are not cleanly divisible by 3. The existence of those remainders ever so slightly lowers the ratio of ONPs knocked out of contention as PPs by composites of 3. That happens for every prime's composites where the square of that prime is smaller than a given ENN. That also happens with

CC pairings; the existence of remainders for any of the primes that have squares less than an ENN lowers the ratio for those composites slightly. One more factor that plays a role in the variance of the number of PPs must likewise be noted. The number 1 is a "unit," not a prime or a composite. But in every case, because it is not a prime, it behaves like a composite when paired with a prime, as with 41 for 42; PPs can only include primes, so 1 is never part of a PP. That reduces by one the number of PPs for every ENN that is 1 greater than a prime. The interplay of these patterns produces the variability found in column twelve of Table VIII, which is the total number of PPs for each of the table's ENNs. It should also be noted that when composites are paired for an ENN, each composite is counted as a composite of the smallest prime that is a factor of that composite (35, for example, being counted as a composite of 5 and not of 7) when counting up the number of composites to calculate the remainder at any given step in the calculation of the number of PPs for each ENN.

Table VIII  How the Number of PPs Are Derived for ENNs with AE/EA Pairings

    1     2      3       4      5       6      7       8      9       10     11     12
   ENN   ONPs   Pairs   Rem    Pairs   Rem    Pairs   Rem    Pairs   Rem    Adj    PPs
                w/comp         w/comp         w/comp         w/comp         for 1
                of 3           of 5           of 7           of 11
     6     2      0       2      0       2      0       2      0       2      1      1
    12     3      1       2      0       2      0       2      0       2      1      1
    18     5      2       3      0       3      0       3      0       3      1      2
    24     6      2       4      0       4      0       4      0       4      1      3
    30     8      3       5      1       4      0       4      0       4      1      3
    36     9      3       6      2       4      0       4      0       4      0      4
    42    11      4       7      2       5      0       5      0       5      1      4
    48    12      4       8      2       6      0       6      0       6      1      5
    54    14      5       9      2       7      1       6      0       6      1      5
    60    15      5      10      2       8      1       7      0       7      1      6
    66    17      6      11      4       7      1       6      0       6      0      6
    72    18      6      12      4       8      1       7      0       7      1      6
    78    20      7      13      4       9      2       7      0       7      0      7
    84    21      7      14      4      10      1       9      0       9      1      8
    90    23      8      15      3      12      2      10      0      10      1      9
    96    24      8      16      6      10      3       7      0       7      0      7
   102    26      9      17      6      11      2       9      0       9      1      8
   108    27      9      18      6      12      3       9      0       9      1      8
   114    29     10      19      6      13      2      11      0      11      1     10
   120    30     10      20      4      16      4      12      0      12      0     12
   126    32     11      21      8      13      2      11      1      10      0     10
   132    33     11      22      8      14      3      11      1      10      1      9
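Columns three through twelve of Table VIII amount to a sequential elimination, which the following Python sketch reproduces under the stated crediting rule (function name illustrative). For ENNs up to 132 no odd composite escapes the primes 3 through 11, so the final figure equals the PP count in column twelve:

    def table_viii_count(enn):
        # List the odd-number pairs, then strike every pair holding a
        # composite of 3, 5, 7, or 11; each composite is credited to its
        # smallest prime factor because struck pairs are removed before
        # the next prime is tried. Finally adjust for the unit 1.
        pairs = [(a, enn - a) for a in range(1, enn // 2 + 1, 2)]
        for p in (3, 5, 7, 11):
            pairs = [pr for pr in pairs
                     if not any(x % p == 0 and x != p for x in pr)]
        return len(pairs) - sum(1 for pr in pairs if 1 in pr)

    print([(n, table_viii_count(n)) for n in (30, 90, 120)])  # 3, 9, 12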


Pairings with the composites of a given prime on occasion seem to exceed the expected limit for the ability of that prime's composites to be paired with primes as summing pairs. For example, composites of 7 should not account for more than 2/7 of the remainder from column six in Table VIII (rounded, 28.6%). However, composites of 7 pair up with three primes from the remainder of 10 for the ENN 96, which is 30.0%. That seeming contradiction, and others like it, occurs due to the arrangement of the smaller primes and composites in the summing pairs. In the case of 96, 5 is knocked out by 91, but 5 is a member of the prime-composite family for 5; had 5 been a composite instead of a prime, 91 would not have knocked out that ONP as a PP, because doing so would already be credited to the composites of 5. Knocking out 5 as part of a potential PP thereby raises the number of remaining pairs in column six with which the composites of 7 are paired. That only happens when the composite of a larger prime pairs up with a smaller prime, as when 91 is paired with 5 for 96. Tables IX and X for AA pairings and Tables XI and XII for EE pairings repeat the analytical process of Tables VII and VIII. In these cases, however, the members of Set C are not paired against each other, as they were for ENNs with AE/EA pairings, but rather against members of Sets E and A respectively. Only 3 in Set C is prime, so those CE and CA pairings reduce the total number of PPs that result from the pairings in comparison with the ENNs that have AE/EA match-ups. CC pairings reduce the impact of composites of 3, generally resulting in the higher number of PPs for AE/EA match-ups seen in Table V. However, unlike with the AE/EA pairings, 3 does not get paired with a composite for every ENN, so it creates a PP whenever a given ENN = Px + 3. This adds to the variance by increasing the number of PPs by 1 beyond what would be expected for those ENNs that have AA or EE pairings. As 1 is a member of Set A, it will always be paired with a member of Set C for ENNs that have EE match-ups, so 1 has virtually no impact on the number of PPs in Table XII, unlike in Tables VIII and X, because it only knocks out 3 for the ENN 4.


Table IX
Odd-Number Pairings for ENNs with AA Pairings, 8 through 86

 8: (1,7) (3,5)*
14: (1,13) (3,11)* (5,9) (7,7)*
20: (1,19) (3,17)* (5,15) (7,13)* (9,11)
26: (1,25) (3,23)* (5,21) (7,19)* (9,17) (11,15) (13,13)*
32: (1,31) (3,29)* (5,27) (7,25) (9,23) (11,21) (13,19)* (15,17)
38: (1,37) (3,35) (5,33) (7,31)* (9,29) (11,27) (13,25) (15,23) (17,21) (19,19)*
44: (1,43) (3,41)* (5,39) (7,37)* (9,35) (11,33) (13,31)* (15,29) (17,27) (19,25) (21,23)
50: (1,49) (3,47)* (5,45) (7,43)* (9,41) (11,39) (13,37)* (15,35) (17,33) (19,31)* (21,29) (23,27) (25,25)
56: (1,55) (3,53)* (5,51) (7,49) (9,47) (11,45) (13,43)* (15,41) (17,39) (19,37)* (21,35) (23,33) (25,31) (27,29)
62: (1,61) (3,59)* (5,57) (7,55) (9,53) (11,51) (13,49) (15,47) (17,45) (19,43)* (21,41) (23,39) (25,37) (27,35) (29,33) (31,31)*
68: (1,67) (3,65) (5,63) (7,61)* (9,59) (11,57) (13,55) (15,53) (17,51) (19,49) (21,47) (23,45) (25,43) (27,41) (29,39) (31,37)* (33,35)
74: (1,73) (3,71)* (5,69) (7,67)* (9,65) (11,63) (13,61)* (15,59) (17,57) (19,55) (21,53) (23,51) (25,49) (27,47) (29,45) (31,43)* (33,41) (35,39) (37,37)*
80: (1,79) (3,77) (5,75) (7,73)* (9,71) (11,69) (13,67)* (15,65) (17,63) (19,61)* (21,59) (23,57) (25,55) (27,53) (29,51) (31,49) (33,47) (35,45) (37,43)* (39,41)
86: (1,85) (3,83)* (5,81) (7,79)* (9,77) (11,75) (13,73)* (15,71) (17,69) (19,67)* (21,65) (23,63) (25,61) (27,59) (29,57) (31,55) (33,53) (35,51) (37,49) (39,47) (41,45) (43,43)*

Prime pairs, designated in red in the original, are marked here with *.


Table X
How the Number of PPs Are Derived for ENNs with AA Pairings

(Columns 1 through 12, as in Table VIII.)

 ENN  ONPs  w/C3  Rem  w/C5  Rem  w/C7  Rem  w/C11  Rem  Adj1  PPs
   2    1     0    1     0    1     0    1     0     1    1     0
   8    2     0    2     0    2     0    2     0     2    1     1
  14    4     1    3     0    3     0    3     0     3    1     2
  20    5     2    3     0    3     0    3     0     3    1     2
  26    7     3    4     1    3     0    3     0     3    0     3
  32    8     4    4     1    3     0    3     0     3    1     2
  38   10     5    5     2    3     0    3     0     3    1     2
  44   11     6    5     1    4     0    4     0     4    1     3
  50   13     7    6     1    5     1    4     0     4    0     4
  56   14     8    6     2    4     1    3     0     3    0     3
  62   16     9    7     2    5     1    4     0     4    1     3
  68   17    10    7     3    4     1    3     0     3    1     2
  74   19    11    8     2    6     0    6     0     6    1     5
  80   20    12    8     1    7     2    5     0     5    1     4
  86   22    13    9     3    6     1    5     0     5    0     5
  92   23    14    9     3    6     2    4     0     4    0     4
  98   25    15   10     4    6     2    4     0     4    1     3
 104   26    16   10     3    7     1    6     0     6    1     5
 110   28    17   11     2    9     2    7     0     7    1     6
 116   29    18   11     4    7     1    6     0     6    0     6
 122   31    19   12     4    8     3    5     1     4    0     4
 128   32    20   12     5    7     2    5     1     4    1     3


Table XI
Odd-Number Pairings for ENNs with EE Pairings, 10 through 88

10: (1,9) (3,7)* (5,5)*
16: (1,15) (3,13)* (5,11)* (7,9)
22: (1,21) (3,19)* (5,17)* (7,15) (9,13) (11,11)*
28: (1,27) (3,25) (5,23)* (7,21) (9,19) (11,17)* (13,15)
34: (1,33) (3,31)* (5,29)* (7,27) (9,25) (11,23)* (13,21) (15,19) (17,17)*
40: (1,39) (3,37)* (5,35) (7,33) (9,31) (11,29)* (13,27) (15,25) (17,23)* (19,21)
46: (1,45) (3,43)* (5,41)* (7,39) (9,37) (11,35) (13,33) (15,31) (17,29)* (19,27) (21,25) (23,23)*
52: (1,51) (3,49) (5,47)* (7,45) (9,43) (11,41)* (13,39) (15,37) (17,35) (19,33) (21,31) (23,29)* (25,27)
58: (1,57) (3,55) (5,53)* (7,51) (9,49) (11,47)* (13,45) (15,43) (17,41)* (19,39) (21,37) (23,35) (25,33) (27,31) (29,29)*
64: (1,63) (3,61)* (5,59)* (7,57) (9,55) (11,53)* (13,51) (15,49) (17,47)* (19,45) (21,43) (23,41)* (25,39) (27,37) (29,35) (31,33)
70: (1,69) (3,67)* (5,65) (7,63) (9,61) (11,59)* (13,57) (15,55) (17,53)* (19,51) (21,49) (23,47)* (25,45) (27,43) (29,41)* (31,39) (33,37) (35,35)
76: (1,75) (3,73)* (5,71)* (7,69) (9,67) (11,65) (13,63) (15,61) (17,59)* (19,57) (21,55) (23,53)* (25,51) (27,49) (29,47)* (31,45) (33,43) (35,41) (37,39)
82: (1,81) (3,79)* (5,77) (7,75) (9,73) (11,71)* (13,69) (15,67) (17,65) (19,63) (21,61) (23,59)* (25,57) (27,55) (29,53)* (31,51) (33,49) (35,47) (37,45) (39,43) (41,41)*
88: (1,87) (3,85) (5,83)* (7,81) (9,79) (11,77) (13,75) (15,73) (17,71)* (19,69) (21,67) (23,65) (25,63) (27,61) (29,59)* (31,57) (33,55) (35,53) (37,51) (39,49) (41,47)* (43,45)

Prime pairs, designated in red in the original, are marked here with *.


Table XII
How the Number of PPs Are Derived for ENNs with EE Pairings

(Columns 1 through 12, as in Table VIII.)

 ENN  ONPs  w/C3  Rem  w/C5  Rem  w/C7  Rem  w/C11  Rem  Adj1  PPs
   4    1     0    1     0    1     0    1     0     1    1     1*
  10    3     1    2     0    2     0    2     0     2    0     2
  16    4     2    2     0    2     0    2     0     2    0     2
  22    6     3    3     0    3     0    3     0     3    0     3
  28    7     4    3     1    2     0    2     0     2    0     2
  34    9     5    4     0    4     0    4     0     4    0     4
  40   10     6    4     1    3     0    3     0     3    0     3
  46   12     7    5     1    4     0    4     0     4    0     4
  52   13     8    5     1    4     1    3     0     3    0     3
  58   15     9    6     2    4     0    4     0     4    0     4
  64   16    10    6     1    5     0    5     0     5    0     5
  70   18    11    7     2    5     0    5     0     5    0     5
  76   19    12    7     2    5     0    5     0     5    0     5
  82   21    13    8     2    6     1    5     0     5    0     5
  88   22    14    8     3    5     1    4     0     4    0     4
  94   24    15    9     2    7     2    5     0     5    0     5
 100   25    16    9     2    7     1    6     0     6    0     6
 106   27    17   10     3    7     1    6     0     6    0     6
 112   28    18   10     3    7     0    7     0     7    0     7
 118   30    19   11     4    7     1    6     0     6    0     6
 124   31    20   11     3    8     2    6     1     5    0     5
 130   33    21   12     3    9     2    7     0     7    0     7

*Special case of 2 + 2 = 4.

Reviewing How PPs Are Created

The foregoing discussion illustrates how prime pairs are mechanically generated, but it does not resolve the question of whether or not cases exist which invalidate Goldbach's Conjecture. The complicated process, which involves a number of factors, produces the variance in the number of PPs observed in Table I. Critical information, however, has emerged from the examination of the structure of the summing pairs that will enable a final paradigm to be constructed, one that reveals why no invalidating cases exist.

When the natural numbers are assigned to one of the six sets designated A through F, only match-ups of Sets A, C, and E are relevant to the problem, except for the special case of 2 + 2 = 4 involving Set B. Sets B, D, and F comprise the even numbers, or 2 and all of its composites. The only prime that belongs to Set C is 3, with the rest of that set consisting of the odd composites of 3. When members of Set C are matched up with members of either Set A or E to form odd-number pairs that sum to a given ENN, all primes in the set paired with Set C end up paired with composites, knocking them out of contention as potential members of a PP. The only exception to that rule occurs when a given ENN is three greater than a prime, so that 3 gets paired with another prime, as with 3 + 47 = 50.

The types of match-ups that occur for a given ENN are controlled by the set membership of ENN/2. If that number belongs to Set C or Set F, CC pairings result, which consequently create AE/EA match-ups. If it belongs to either Set A or Set D, AA match-ups result, while if it is a member of Set B or Set E, EE pairings are created.

As a consequence of six being chosen as the number of sets for the assignment of all natural numbers, all odd composites of 3 are confined to Set C. The odd composites of all other primes meanwhile rotate through Sets A, C, and E due to the Fundamental Theorem of Arithmetic. Any prime multiplied by 2 yields a result distinct from any other prime multiplied by 2. No prime Px is divisible by another prime without remainder, by the definition of primes, so 2 x 3 = 6 will never divide without remainder into any 2Px where Px > 3. That means that two successive odd composites of any prime greater than 3 cannot be members of the same set. Three successive odd composites of any prime greater than 3 must likewise be members of three different sets, because 6 cannot divide without remainder into 4Px. However, an odd composite of Px that lies three odd composites beyond another composite of Px will be in the same set as that smaller one, because 6Px is divisible by 6 without remainder.

A corollary is that every Pxth member of the natural numbers, of the odd natural numbers, and of each of the sets A through F will be a composite of Px. For example, multiples of 5 can be counted out in the set of all natural numbers at 10, 15, 20, 25, and so on, while such multiples appear at every fifth member of the odd natural numbers, as in the case of 5, 7, 9, 11, 13, and 15. Meanwhile, using Set A as an example, the progression follows the same pattern, as with 25, 31, 37, 43, 49, and 55. This is all made possible because counting order was chosen as the basis for assigning the natural numbers to their sets. Multiples of 6Px cause those patterns to occur because 6 is the number of sets created for the assignment of the natural numbers in counting order and because any N6Px, where N is any natural number, is divisible without remainder by Px.

Employing the same principle, every Pyth composite of Px in a given set will also be a composite of Py, counting from the first appearance of such a composite in whichever progression of numbers is used (all natural numbers, all odd natural numbers, all composites of Px, or all composites of Px in a given set of natural numbers), because all such numbers are products of PyPx, which means that they are divisible without remainder by both Py and Px. That says, for example, that 105, which is a composite of both 5 and 7, is 7 odd multiples of 5 beyond 35 and 5 odd multiples of 7 beyond 35. This process is what limits the impact of the composites of odd primes greater than 3 on the number of PPs for a given ENN. One-third of all odd composites of 5 are already composites of 3, because composites in this analysis are counted in the set of the smallest prime for which they are a composite. It also means that composites of 3, which with 3 itself constitute the members of Set C of the natural numbers, will be paired against composites of 5 (every fifth composite of 3 and every third composite of 5) when Set C members are matched against Set E members for those ENNs whose prime pairs result from AA pairings.
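The six-set bookkeeping lends itself to a short sketch. The assignment below (1 to Set A, 2 to Set B, and so on through 6 to Set F, repeating in counting order) is one reading of the scheme described above, and the function name is illustrative only:

# Assign each natural number to one of the six sets in counting order.

def set_of(n):
    return "ABCDEF"[(n - 1) % 6]

# Odd composites of a prime greater than 3 rotate through Sets E, C, A:
print([(m, set_of(m)) for m in (5, 15, 25, 35, 45, 55)])
# [(5, 'E'), (15, 'C'), (25, 'A'), (35, 'E'), (45, 'C'), (55, 'A')]

# The pairing type for an ENN is controlled by the set of ENN/2:
for enn in (12, 8, 10):
    print(enn, set_of(enn // 2))   # F gives AE/EA pairings, D gives AA, E gives EE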

Validating the Conjecture?

The investigation into the validity of Goldbach's Conjecture next sets up conditions that contradict either some aspect of the Conjecture or some mathematical principle, in order to demonstrate that the Conjecture cannot be invalid. First, Goldbach's Conjecture would be invalid if the supply of primes ran out at some point. That does not happen; Euclid proved that the supply of primes is infinite. Although the frequency of primes thins out gradually, as described by the Prime Number Theorem, the density of the composites is the critical concern: as the density of primes thins, the density of the composites correspondingly increases, but that density is ultimately stopped from attaining a value of 1.00 (as the ratio of composites to all natural numbers beyond any given point) precisely because the supply of primes is infinite. If primes were to run out at some point along the infinite number line, then eventually all primes would be matched up with composites for sufficiently large ENNs (those greater than two times the largest prime, whatever it might hypothetically be).

This investigation next examines what could be done to alter the parameters that govern the generation of PPs for each ENN, to discover what would have to be done to bring about the elimination of PPs. The matching of composites with primes in the summing pairs of each ENN, as has been demonstrated above, is the controlling factor in the number of PPs that sum to any given ENN. What, then, could be done to alter the pattern of composites to ensure that there were no PPs for some unknown ENN? If that alteration either violated mathematical principles or still could not eliminate PPs for one or more ENNs, then Goldbach's Conjecture would be validated. Table XIII provides the illustration for the discussion of that effort.

First, an ENN must exist in every case; changing that violates the problem, so the starting point in column one of Table XIII cannot be changed by substituting an odd natural number or some other number that is not an ENN. The supply of ENNs is infinite because 2 can be added consecutively, through infinity, to any last ENN to generate more ENNs. The ONPs for a given ENN could be reduced or increased in column two, but the number of ONPs is established for a given ENN by the mechanical manner in which ONPs are generated. Starting with the first ONP for a given ENN (which is 1 + N, where N = ENN - 1), an even-number pair and then another odd-number pair are added in sequence below the first pair (as in Table III) until the number of summing pairs equals ENN/2. That means that the number of ONPs for a given sequence of ENNs will increase by the pattern +1, +0, ... because summing pairs are added in an odd-even sequence as the ENNs increase by 2 at each step in their progression.

After considering and discarding the foregoing options for finding an ENN greater than 2 that does not have any PPs, the only remaining possibility is to find an ENN for which all primes are paired either with composites or with the number 1. Composites of a given prime are paired up with primes for a given ENN following a very specific template, as explained through the exploration of the Conjecture's mechanics above. Setting aside 2 and its composites, the composites of 3 eliminate R(2/3) - 1 of the ONPs from contention as PPs, where R is the initial number of ONPs when dealing with the composites of 3 (or, for each larger prime, the remainder of ONPs not eliminated by the composites of the next smaller prime-composite family), except when CC pairings occur, for which the composites of 3 eliminate only 1/3 of all ONPs. Fractions are dropped from the results of the calculation because they represent a partial distance to the next composite in a prime's family of composites; because that value is fractional, the next composite in the series would be greater than the ENN for which the calculation is made.

The smallest odd prime, 3, was chosen to begin that process because one-third of all odd natural numbers are composites of 3; having the greatest density of composites of any odd prime, they have the greatest potential impact upon the generation of PPs. It is also easier to begin with the smaller primes, because additional, larger primes and their composites only become relevant as the ENNs increase; starting with the smallest primes enables the investigator to employ consistent procedures.

Table XIII
Results of Altering the Parameters

(Each numbered column of the original table appears below as a labeled row; the table is split into two panels, for the ENNs 6 through 26 and for 28 through 150 plus 300 and 600, with an alternate calculation for 600 at the end.)

ENNs 6 through 26:
 1. ENN:                     6 8 10 12 14 16 18 20 22 24 26
 2. ONPs:                    2 2 3 3 4 4 5 5 6 6 7
 3. Pairs with comp. of 3:   1 1 2 2 2 2 3 3 4 4 4
 4. Remainder:               1 1 1 1 2 2 2 2 2 2 3
 5. Pairs with comp. of 5:   0 0 0 0 1 1 1 1 1 1 1
 6. Rem:                     1 1 1 1 1 1 1 1 1 1 2
 7. Pairs with comp. of 7:   0 0 0 0 0 0 0 0 0 0 0
 8. Rem:                     1 1 1 1 1 1 1 1 1 1 2
 9. Pairs with comp. of 11:  0 0 0 0 0 0 0 0 0 0 0
10. Rem:                     1 1 1 1 1 1 1 1 1 1 2
11. Pairs with comp. of 13:  0 0 0 0 0 0 0 0 0 0 0
12. Rem:                     1 1 1 1 1 1 1 1 1 1 2
13. Pairs with comp. of 17:  0 0 0 0 0 0 0 0 0 0 0
14. Rem:                     1 1 1 1 1 1 1 1 1 1 2
15. Adj. for 1:              1 1 1 1 1 1 1 1 1 1 1
16. Min. number of PPs:      0 0 0 0 0 0 0 0 0 0 1

ENNs 28 through 150, then 300 and 600:
 1. ENN:                     28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108 110 112 114 116 118 120 122 124 126 128 130 132 134 136 138 140 142 144 146 148 150 ... 300 ... 600
 2. ONPs:                    7 8 8 9 9 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18 19 19 20 20 21 21 22 22 23 23 24 24 25 25 26 26 27 27 28 28 29 29 30 30 31 31 32 32 33 33 34 34 35 35 36 36 37 37 38 ... 75 ... 150
 3. Pairs with comp. of 3:   4 5 5 6 6 6 6 7 7 8 8 8 8 9 9 10 10 10 10 11 11 12 12 12 12 13 13 14 14 14 14 15 15 16 16 16 16 17 17 18 18 18 18 19 19 20 20 20 20 21 21 22 22 22 22 23 23 24 24 24 24 25 ... 50 ... 100
 4. Remainder:               3 3 3 3 3 4 4 4 4 4 4 5 5 5 5 5 5 6 6 6 6 6 6 7 7 7 7 7 7 8 8 8 8 8 8 9 9 9 9 9 9 10 10 10 10 10 10 11 11 11 11 11 11 12 12 12 12 12 12 13 13 13 ... 25 ... 50
 5. Pairs with comp. of 5:   1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 ... 15 ... 30
 6. Rem:                     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 5 5 6 6 6 ... 10 ... 20
 7. Pairs with comp. of 7:   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 ... 4 ... 8
 8. Rem:                     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 4 4 4 ... 6 ... 12
 9. Pairs with comp. of 11:  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 ... 1 ... 3
10. Rem:                     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 ... 5 ... 9
11. Pairs with comp. of 13:  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 1 ... 2
12. Rem:                     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 ... 4 ... 7
13. Pairs with comp. of 17:  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 ... 1
14. Rem:                     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 ... 4 ... 6
15. Adj. for 1:              1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ... 1 ... 1
16. Min. number of PPs:      1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 ... 3 ... 5

Alt. 600 (ratio numerators for primes greater than 3 raised to 4):
ONPs 150; pairs with comp. of 3: 100, rem. 50; comp. of 5: 40, rem. 10; comp. of 7: 5, rem. 5; comp. of 11: 1, rem. 4; comp. of 13: 1, rem. 3; comp. of 17: 0, rem. 3; adj. for 1: 1; min. number of PPs: 2.

The remaining ONPs that have not been eliminated by the composites of 3 then face similar hurdles from the composites of successive primes, up through the prime whose square is the largest prime square smaller than the ENN in question (as, for example, 49 is for the ENNs from 50 through 120). The ratio of the composites follows the same pattern as with the composites of 3: R(2/5) - 1 for the composites of 5, except when an ENN is divisible by 5, for which the composites of 5 in the parallel columns are paired against each other and thereby eliminate only 1/5 of the ONPs remaining after the composites of 3 have eliminated their share. After each prime's composites remove ONPs from contention as PPs, the next larger prime's composites are applied to the ONPs that still remain. That would continue through the infinitude of primes for every ENN, but only the primes whose squares are less than the ENN need be considered, because a prime whose square is larger than the ENN has no composite smaller than the ENN that is not already a simultaneous composite of a prime with a smaller square. Most of the larger primes will not have any composites smaller than a given ENN, as any given ENN is finite while the number of primes is infinite. That explanation may seem complicated, but it is similar to the way the Sieve of Eratosthenes mechanically eliminates whole numbers from the number line as potential primes. Both are sieving processes employing composites.

Making the hurdles higher, and therefore harder for ONPs to pass over, becomes the next step in determining whether there is an ENN for which there are no PPs. First, the ratio for composites of 3 in the set of ONPs is raised to 2/3 for all ENNs, which counts 3 itself as a composite and mathematically ignores the third of all cases in which composites of 3 are paired against each other. The numerator of the ratio for the composites of the other primes is then increased to 3 (3/5, 3/7, 3/11, ...), thereby increasing the share of the successive remainders of ONPs knocked out by the composites of those primes. That procedure was followed in the construction of Table XIII. Although Table XIII looks like Tables VIII, X, and XII, those three tables represented actual results, while Table XIII is an artificial construct. In Table XIII the ratio's numerator remains at 3 for every prime's composites even when an ENN is a composite of that prime (which in actual cases winds up having that prime's composites paired against each other, limiting the ratio's numerator for that prime to 1, as in 1/5). In addition, the unit 1, which is neither a prime nor a composite, is always considered paired with a prime, thereby increasing the impact of 1 upon the process.

Making those alterations raises the barriers that the ONPs must cross to become PPs in Table XIII. Those higher ratios, together with always considering 1 to be paired with a prime, substantially reduce the number of PPs that result from the process, as a comparison of Tables I and XIII reveals. In doing so, the analyst created conditions that violate basic mathematical operations, such as setting the numerators of the ratios used to calculate the number of pairs with composites of 5, 7, 11, and larger primes at 3 for all pairings. Likewise, always considering 1 to be paired with a prime violates basic mathematical operations: primes follow no known pattern, which means that they will not all be two apart, which would be necessary for 1 to pair only with primes.

Even so, PPs are still created under those more stringent conditions, beginning with the ENN 26. What happens is that each time the number of ONPs is increased by 1 (by increasing the ENN by 2), that added value of 1 ONP can wind up artificially paired with a composite of 3, or with a composite of any subsequent prime, through the mathematical functioning of their ratios. Note that the term "value" is used: the location of the pairs possessing the composites of the various primes is not in question, only their number. The number of the PPs thereby becomes the critical factor, not the identity and exact location of the primes that comprise those PPs. It is important to remember that such mathematical pairing in Table XIII is not an actual pairing; as an artificial construct, Table XIII is only concerned with how many ONPs could optimally be eliminated as PPs through mathematical ratios, not with actual pairings. That construct uses higher ratios and other devices to make it harder for PPs to emerge at the end of the process. Once a value makes it through the process to become a PP, however, that value remains in the PP column due to basic mathematics. The functioning of those processes can be seen in simple division: dividing a number by 5 generates a specific quotient (as with 95 divided by 5 yielding 19). Increasing the dividend will eventually increase the quotient (100/5 = 20), but nowhere along the infinite number line can the division of ever-larger dividends by 5 produce smaller quotients.

Therefore, even with these higher barriers, the number of PPs gradually increases. The factors that originally produced the variance in the actual results have been eliminated, and even though the higher barriers generate PPs (in this case beginning at 26) at a higher threshold and a slower rate, the alterations do not permit some unknown large number to be devoid of PPs. The observed variance in the actual results is equal to or greater than the minimum number of PPs established for each ENN by the artificial construct, so use of that construct creates a minimum floor below which the number of PPs cannot fall. Raising the barriers even higher only slows the creation of PPs, as seen for the alternate 600 at the bottom of Table XIII; for that alternate result, the numerators of the ratios involving primes greater than 3 were raised to 4. In that alternate paradigm there were still two PPs at 600, whereas there were five in the construct that used 3 in the numerators for the primes greater than 3. Depending upon how much higher the numerator values are set (so long as the ratios do not equal 1.00), the arrival of the value 1, and of any higher values, in the PP column is merely delayed. If any of the ratios were set at 1.00, no PPs would result, but that could not happen: altering the sieving process in that way would violate the same mathematical principles that would be compromised if Eratosthenes's Sieve had a prime that could eliminate all subsequent primes, in violation of the numerous proofs that the supply of primes is infinite.

Once the value of an ONP reaches that final PP column (which would have to be moved to the right as the ENNs increased and additional primes became consequential in the calculation), it means that from that point forward there must always be at least that minimum number of PPs for all subsequent ENNs. The hurdles erected serve as sieves that individually strain out ONPs for each ENN. If something happens when a 15th ONP first comes up in the process, it will happen the same way each time the sieving process is employed; one has only to picture each ONP passing through the process one at a time. By dropping the -1 from the ratios used to calculate how many ONPs are eliminated from consideration as PPs by the composites of a given prime [as with R(3/5) - 1 for the composites of 5], the variance created by composites of primes with squares less than the ENN pairing with smaller primes in the first column of the summing pairs is eliminated. All of the primes with squares less than the ENN are counted as composites by the artificially increased ratios, making moot those match-ups that can slightly raise the ratios above expected values (as described above for the composites of 7 for the ENN 96, where the composite 91 knocks out 5); both members of such a pair artificially wind up counted, where in reality one would occasionally be a smaller prime than the factor of the larger prime's composite.
Even though the barriers in the artificial constructs are set well above those that exist in reality, PPs are still mathematically generated. Where 3 is used in the numerator for the primes greater than 3, PPs wind up being mathematically generated beginning at the ENN 26. Even when 4 replaces 3 in the ratio's numerator for primes greater than 3, PPs still show up early along the number line, at 122. If 6 then replaced 4 as the numerator of the ratios for primes greater than 5, the first PP would arrive at 842. Each ENN through 26, 122, or 842 can be visually inspected to confirm that it does in reality have PPs. In fact, the barriers would have to be set exceedingly high at this point in our mathematical knowledge, because it has already been demonstrated that there are PPs for every ENN up through at least 4 x 10^14 [Prime Conjectures].

Since setting the barriers in the artificial construct so far above those that actually winnow PPs out of the ONPs cannot bar PPs from making it through the process, the use of that construct to set minimum values for PPs produces results that argue forcefully that Goldbach's Conjecture is valid. All of the variance in the results has been eliminated by the more stringent conditions, which means that even with higher barriers to the production of PPs, they are still created for all ENNs equal to or greater than a certain ENN, determined by how high the ratio numerators are set. Choosing 6 for the ratio numerator for all primes greater than 5, 4 for 5's numerator, and 2 for 3's; always using the higher ratio numerator even when a prime is a factor of a given ENN; counting all primes as composites when their squares are less than the ENN; and always considering 1 to be paired with a prime: all of these place far higher barriers than those that actually exist. Yet even though basic mathematical principles are violated to do so, PPs would still eventually be created and would exist for all larger ENNs once the first PP arrives, as explained above. Therefore, there are no even natural numbers waiting somewhere down the dark, unexplored reaches of the number line to invalidate Goldbach's Conjecture as it was restructured by Euler.
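The artificial construct itself reduces to a few lines of code. The sketch below follows one reading of the procedure laid out above: start from the ONPs, let each odd prime p in turn strip floor(R x k/p) pairs from the running remainder R (with k = 2 for the prime 3 and the stressed numerator for the larger primes, fractions dropped at each step), and deduct one final pair for the unit 1. The names are illustrative, not the author's:

# Compute the minimum-PP floor of Table XIII for an even number.

from math import isqrt

def odd_primes():
    """Yield 3, 5, 7, 11, ... by trial division."""
    p = 3
    while True:
        if all(p % d for d in range(3, isqrt(p) + 1, 2)):
            yield p
        p += 2

def floor_pps(enn, k_large=3):
    r = (enn // 2 + 1) // 2                    # the ONPs for this ENN
    for p in odd_primes():
        cut = (r * (2 if p == 3 else k_large)) // p
        if cut == 0:                           # larger primes strip nothing more
            break
        r -= cut
    return max(r - 1, 0)                       # the adjustment for the unit 1

for k in (3, 4):
    first = next(e for e in range(6, 1000, 2) if floor_pps(e, k) >= 1)
    print(k, first)                            # 26 and 122, as stated above

Under this reading, the floor first reaches 1 at the ENN 26 with numerator 3 and at 122 with numerator 4, matching the figures given in the text.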


Historical References

Doxiadis, Apostolos. Uncle Petros & Goldbach's Conjecture. Bloomsbury Publishing, New York, 2000.

Hoffman, Paul. Archimedes' Revenge. W.W. Norton & Company, New York, 1988.

The Prime Glossary. "Goldbach's conjecture." http://primes.utm.edu/glossary/page.php/GoldbachConjecture.html

Verifying Goldbach's Conjecture up to 4 x 10^14. http://www.informatik.uni-giessen.de/staff/richstein/ca/Goldbach.html


The Twin Prime Conjecture: Proven at Last
c. 2013

Certain problems in mathematics are considered unsolvable, perhaps due to the lack of an appropriate technique for solving them, or perhaps because their parameters create impossible conditions. Sometimes that insolvability results from those seeking solutions having continually traveled down barren roads that offered no chance for success. As put by Hardy and Wright in the 1979 edition of An Introduction to the Theory of Numbers, and quoted in Wolfram MathWorld, proof of conjectures like the Twin Prime Conjecture "is at present beyond the resources of mathematics."1 Apparently, there has been a need for more productive techniques.

The Twin Prime Conjecture has appeared to be one of those inscrutable mathematical challenges that lie just beyond the reach of known techniques. Previous attempts to solve it, particularly by formulaic means, have seemed destined to fail, though one Polymath effort appears to be closing in on a solution, based upon a University of New Hampshire math professor's analysis of the problem. This paper will, however, demonstrate the conjecture's validity, and it will do so through a structural analysis that employs several simple, arbitrary constructs, which are acceptable tools to employ in the quest for the proof. It will first explore the relevant structures employed in the proof, then move on to the proof itself, which will be very simple and very brief.

Introduction

There are two twin-number conjectures, the better known of which contends that there are infinitely many primes that differ by two. While this has hitherto not been proven, mathematicians generally believe that "the evidence [for the correctness of that conjecture] is overwhelming."2 The discovery of ever-larger twins is continually announced, and considerable work has been done to demonstrate that there is a limiting bound by which primes can differ. This is the conjecture that will be proven in this paper. A second twin prime conjecture involves a modification to Brun's constant, with a version of it known as the strong twin prime conjecture, or Hardy and Littlewood's first conjecture; it will not be dealt with here.

In the vein of the author's earlier papers on Ulam's Spiral Square, Hardy and Littlewood's Conjecture F, and Goldbach's Conjecture, the approach will be based upon the structure of the number system as it can permissibly be bent to the specifics of the problem.

A Very Brief History of the Twin Prime Conjecture

Twin primes and their nature have been considered at least as far back as Euclid in the 3rd century B.C.E. Some attribute the conjecture to the Greek mathematician Euclid of Alexandria, which would make it one of the oldest open problems in mathematics.3 That means that the problem is something over 2,300 years old, making it perhaps the oldest remaining unresolved problem in the field of number theory. Alphonse de Polignac expanded the problem into an infinite set of problems, known collectively and justifiably as de Polignac's Conjecture, by asserting that infinitely many primes p and p' exist such that p' - p = 2k, where k is any natural number.4 The Twin Prime Conjecture then becomes the case where k = 1.5 G. H. Hardy and John Littlewood proposed a stronger version of the Twin Prime Conjecture, proposing a distribution law for the twins similar to the Prime Number Theorem's asymptotic distribution of the primes.

Due to the researcher's age, the AARP recently posted, as did many other sources, the story of Yitang Zhang's May 22, 2013 paper, which took a tremendous step toward resolving the Twin Prime Conjecture. Zhang, a previously obscure but popular University of New Hampshire mathematics instructor (reportedly in his 50s), broke Hardy's truism that "I do not know of an instance of a major mathematical advance initiated by a man past fifty."6 Zhang established a bound of 70 million for some integer N that serves as the gap between infinitely many prime pairs. His research took advantage of work published in 2005 by Daniel Goldston, Janos Pintz, and Cem Yildirim, "which had shown there would always be pairs of primes closer than the average distance between two primes."7 Following up on Zhang's paper, Terence Tao proposed a Polymath project (Polymath8) to lower the bound further, reaching an announced value of 4,680 on July 27, 2013.8

Parallel Number Lines

The progression of numbers along their dimming pathway toward infinity is often conceived as a number line, with both distinct – or exactly definable – and imprecise points along its course designating the different numbers as locations along that line. The creation of a number line is an arbitrary intellectual exercise. Numbers do not actually exist on a master line someplace, but the use of a number line helps visualize their relationships.9 If one number line can be so created, one could also create paired number lines. They could be placed alongside each other and offset by two in a similarly arbitrary, but perfectly allowable, manner. One could then choose to highlight only the odd whole numbers on the two parallel lines (in this case because no even whole numbers form twin primes: twin primes can only differ by two, and two is the only even prime). Figure I below has parallel number columns that represent parallel number lines (actually line segments) as described, though the columns in this visual example are broken apart to conserve presentation space, and all even numbers have been arbitrarily removed as unessential.

Figure I
An Example of Arbitrary Parallel Number Lines Offset by Two

 1   3        23  25
 3   5*       25  27
 5   7*       27  29
 7   9        29  31*
 9  11        31  33
11  13*       33  35
13  15        35  37
15  17        37  39
17  19*       39  41
19  21        41  43*
21  23        43  45

Primes in locations where they form a twin prime were denoted in red in the original (marked here with *); primes in locations where they do not form a twin prime were denoted in yellow.

In Figure I, twin (or paired) primes were marked in red (here with *) and solitary primes – those without a prime immediately opposite on the parallel number line – in yellow. It is interesting to note that Figure I, with its limited number of entries, already shows the density of twin primes getting thinner. Twin primes thin out considerably faster than primes do.
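Figure I can be reproduced as a computation: pair each odd number with the odd number two greater, then keep the pairs whose members are both prime. A minimal sketch (names chosen here for illustration):

# List the twin primes among the paired numbers of Figure I's range.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

pairs = [(n, n + 2) for n in range(1, 44, 2)]
twins = [p for p in pairs if is_prime(p[0]) and is_prime(p[1])]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]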

Employing Arbitrary Constructs

Thus, while the supply of twin primes seems inexhaustible, the infinitude of their total number is still only surmised, not proven. The effort at hand to demonstrate that the number of twin primes is infinite will begin with the paired number lines illustrated in Figure I; it will proceed through how composites block paired integers on that line from being twin primes, and it will conclude with the proof that shows why composites can never fully block the creation of new twin primes along the course of the paired number lines.

The analytical constructs employed to winnow out the infinitude of twin primes are arbitrarily selected for the task. Their design – such as the use of the two parallel number lines introduced above – avoids violating any mathematical principles. They are employed to help promote an understanding of the structure of twin primes. The pantheon of twin primes does have a general structure, and some of it will be revealed through the use of certain constructs. The first of those constructs was the paired number lines of Figure I. The next involves the division of those two number lines into (1) blocs defined by the squares of the consecutive odd integers and (2) sets comprised of four consecutive paired integers from those two number lines.

When working with the infinite parallel number lines, as initiated in Figure I's line segments, each set of paired numbers on the two lines may be conceived as one entity (such as 3,5 or 15,17). Nothing prohibits this from being done. In this respect the parallel number lines are treated as if they were a single line, but with doubled entries of numbers along that line. Harking back to the Sieve of Eratosthenes, nonprimes – known as composites – could be winnowed out of that double line by dividing each number by the primes that are equal to or smaller than its square root. If one were to work through the double line, dividing first by 2, then 3, then 5, and onward by each prime in succession, all of the composites would be identified and removed, ultimately leaving only primes. In some cases only one number (a solitary prime) would remain from a set of paired numbers; in other instances two numbers would remain, and those would be twin primes.

So, if 2 and all of its composites (known collectively as the even numbers) were removed, no harm would be done, because no even numbers are part of any twins. Once the even numbers have been removed from the doubled number line, 3 divides cleanly, or without remainder, into two-thirds of the remaining numbers that follow 3 along the downsized, paired line.10 That will be treated as a simple given. From this point forward it becomes useful to understand the structure of the blocs and sets that comprise Table I.


Table I
Dividing the Paired Number Lines into Blocs and Sets

Bloc 1
  Set 1: (1,3)  (3,5)*  (5,7)*

Bloc 2
  Set 1: (7,9)  (9,11)  (11,13)*  (13,15)
  Set 2: (15,17)  (17,19)*  (19,21)  (21,23)

Bloc 3
  Set 1: (23,25)  (25,27)  (27,29)  (29,31)*
  Set 2: (31,33)  (33,35)  (35,37)  (37,39)
  Set 3: (39,41)  (41,43)*  (43,45)  (45,47)

Bloc 4
  Set 1: (47,49)  (49,51)  (51,53)  (53,55)
  Set 2: (55,57)  (57,59)  (59,61)*  (61,63)
  Set 3: (63,65)  (65,67)  (67,69)  (69,71)
  Set 4: (71,73)*  (73,75)  (75,77)  (77,79)

Bloc 5
  Set 1: (79,81)  (81,83)  (83,85)  (85,87)
  Set 2: (87,89)  (89,91)  (91,93)  (93,95)
  Set 3: (95,97)  (97,99)  (99,101)  (101,103)*
  Set 4: (103,105)  (105,107)  (107,109)*  (109,111)
  Set 5: (111,113)  (113,115)  (115,117)  (117,119)

Each bloc opens with the pair containing a perfect square (here 9, 25, 49, 81), denoted by shading in the original; twin primes are denoted by *.

Blocs, Sets, and Twin Primes

The number of odd integers in each bloc can be readily calculated using the formula

    Bn = [(n + 2)^2 - n^2] / 2

where Bn is the number of paired odd integers in the bloc that begins with n^2. For Table I's Bloc 3,

    Bn = (7^2 - 5^2) / 2 = 12

That simple formula leads to a more general expression for the number of paired odd integers, Ba, that are consecutively added to the blocs in their order of progression:

    Ba = [(n + 4)^2 - (n + 2)^2] / 2 - [(n + 2)^2 - n^2] / 2
       = [(n^2 + 8n + 16) - (n^2 + 4n + 4)] / 2 - [(n^2 + 4n + 4) - n^2] / 2
       = (4n + 12) / 2 - (4n + 4) / 2
       = 4

This, with its removal of the even numbers, is only a variation of the difference between consecutive squares.11 So far, nothing is very complicated, but the increase in paired numbers from one bloc to the next is a crucial element in the proof of the infinitude of twin primes: each bloc will always exhibit an increase of four pairs of numbers over its immediate predecessor.

In addition to dividing the paired number lines into blocs and sets, Table I also denotes the locations of the first ten twin primes. There are two twin primes in each of the first five blocs, as shown in the table, but unfortunately that pattern does not hold for very long, rising to three twin primes in Bloc 7 and four in Bloc 8 before falling back to two in the ninth and tenth blocs. Table II illustrates.

Table II
Twin Primes in the First Ten Blocs

 Bloc    Number of Twin Primes
   1              2
   2              2
   3              2
   4              2
   5              2
   6              2
   7              3
   8              4
   9              2
  10              2
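The bloc-size formulas above are easy to check numerically; a minimal sketch, assuming blocs keyed to the odd numbers n = 1, 3, 5, ... whose squares open the blocs:

# Verify that each bloc holds 2n + 2 pairs and grows by four pairs.

def bloc_pairs(n):
    return ((n + 2) ** 2 - n ** 2) // 2    # Bn, which simplifies to 2n + 2

print([bloc_pairs(n) for n in (1, 3, 5, 7, 9)])
# [4, 8, 12, 16, 20] -- each bloc adds Ba = 4 pairs over its predecessor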

Eliminating Paired Numbers as Twin Primes

As part of the effort to prove that the supply of twin primes is infinite, an algorithm has been established for the way in which paired numbers along the combined number lines are removed from consideration as twin primes because at least one member of a given pair is a composite. There are several ways that pairs could be removed from consideration, but the way specified here must be followed due to the mechanics of the proof. As stated above, the paired number lines may be conceived as one number line with paired numbers, some of which will be twin primes (as displayed in Figure I). Proceeding in a manner similar to that of Eratosthenes' Sieve, the composites of each prime are removed, in order, from consideration as twin primes, but working with twin primes sets up a slightly different process. Beginning with 3, each prime is divided into all remaining numbers, beginning with the prime's square, to determine which numbers along the line are its composites (and therefore not members of twins). Figure II illustrates how this is to be done:

Figure II
Removing Composites from the Paired Number Line

by 5   by 3    Pairs     by 3   by 5
                1, 3
                3, 5
                5, 7
                7, 9       x
        x       9, 11
               11, 13
               13, 15      x
        x      15, 17
               17, 19
               19, 21      x
        x      21, 23
               23, 25             x
  x            25, 27      x
        x      27, 29
               29, 31
               31, 33      x
        x      33, 35             x
  x            35, 37
               37, 39      x
        x      39, 41

Pairs with one or more composites were marked in red in the original, and twin primes in blue; an x in a left-hand column flags the left member of the pair as a composite of 3 or 5, and an x in a right-hand column flags the right member.

Both numbers at a given point are removed from consideration as twin primes if either or both members of the pair is a composite. However, if one of the pair is a prime, it could still be part of a twin; to be so, it would have to be paired with a prime at its other position along the modified number line. The process of removal stops once one member of the pair has been identified as a composite by the sieving; whether the other member of the pair is a prime or a composite is irrelevant. Removal from consideration is accomplished by the smallest-magnitude prime that divides cleanly into either member of the pair; that larger-magnitude primes may also divide into either member is likewise irrelevant. The smallest prime that divides into either member of the pair gets all of the credit for the pair's removal.

However, unlike Eratosthenes' Sieve, the composites of 3 remove two-thirds of all pairs down the infinitude of this special, combined number line, not one-third. Five then removes two-fifths of the remaining pairs from consideration, and the process follows in the same manner for each prime in succession (each removing the fraction 2/px from the new total). In each case, a prime can only remove pairs beginning with the pair that first contains that prime's square; 5, for example, does not remove 15's pairs from consideration, because 15 is first considered a composite of 3 and, as such, was already removed.
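The removal rule can be sketched as code: each pair (n, n + 2) is credited to the smallest prime dividing either member, and the pairs never credited to any prime survive as twin primes. This is a sketch under that reading, with illustrative names:

# Credit each pair to the smallest prime dividing either member.

def smallest_odd_prime_factor(m):
    d = 3
    while d * d <= m:
        if m % d == 0:
            return d
        d += 2
    return None                        # m is 1 or an odd prime

def remover(pair):
    """The prime credited with removing the pair, or None if it survives."""
    hits = [p for p in map(smallest_odd_prime_factor, pair) if p]
    return min(hits) if hits else None

pairs = [(n, n + 2) for n in range(3, 40, 2)]   # start at (3, 5); 1 is a unit
print([p for p in pairs if remover(p) is None])
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31)] -- the twins of Figure II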

Summarizing Progress with the Problem

At this point, four critical components of the solution have been covered:

• Two odd-only number lines have essentially been combined into one line with two numbers, differing by 2, at each point along its infinite length;
• A rigid, prioritized process has been adopted for removing paired numbers along that line from consideration as twin primes;
• As a consequence, a prime's composites can only begin to remove paired numbers from consideration as twin primes beginning with the square of that prime (though a composite of a smaller prime may do so at that point, as 51 does with the pair 49, 51, because 51 is a composite of 3 while 49 = 7^2); and
• Blocs progressively and consistently increase by four pairs over the number contained in their immediate predecessors, because successive blocs begin with the squares of successive odd whole numbers.

Yet all of that does not provide enough tools, up to this point, to construct the conjecture's proof. That is because there is an illusion of disorder to the primes: one cannot determine exactly where either primes or twin primes will fall along the paired number lines without resorting to brute arithmetic methods, even though the densities of primes and twins, when graphed, appear to follow generally predictable functions.

Table III offers a snapshot of an interesting relationship. The table's first column sets up the initial ten blocs, and Column B lists, for each bloc, the initiating odd root whose square starts off the right-hand column of the combined number line. The table then lists, in Column C, how many pairs are in each bloc. Column D provides the fractional value of each pair within its bloc (the inverse of the total number of pairs in the bloc). Column E presents what will become an important value in the process of proving the Twin Prime Conjecture: the fraction of pairs along the combined number line that are not removable as possible twin primes by the composites of primes equal to or less than the defining number in Column B for a given bloc. That may seem complicated, but it is not. The prime 3 removes 2/3s of all the pairs beginning with Bloc 2 and continues to do so through infinity; that means that 1/3 of all paired numbers along the combined number line are not removed from consideration as twin primes by the action of the odd composites of 3. Column F then converts Column E's fractions to rounded decimal values.

By simple deduction, as long as Column F is not permanently reduced to 0 from some point, or bloc, onward, the supply of twin primes will not be exhausted. From the studies of twin primes, ever-larger twins are being discovered; no end to them has been found. A given bloc could be reduced to 0, but that would not demonstrate that all successive blocs will also be 0; likewise, a fractional value in Column F for any bloc does not guarantee that successive blocs will also display non-zero values. The values in Column F do not even seem to follow an orderly progression.

Following the action of 3's odd composites, the prime 5 comes into play beginning with Bloc 3. Remembering that the composites of the smaller prime(s) take effect before the composites of the new, larger prime (in this case 5) are considered, the composites of 5 remove 2/5s of the remaining paired numbers along the number line beginning with Bloc 3, which means that 3/5s of those remaining numbers are not removed. Figure III illustrates how that works. Composites of 3 and 5 remove pairs in a pattern that is infinitely repeated, based upon the algorithm

    RN = [(p1 - 2)/p1] x [(p2 - 2)/p2] x ... x [(px - 2)/px]

where RN is the fraction of the pairs not removed from consideration as twins by the actions of the primes from p1 (which is 3) through px, the largest prime serving as a defining number through some arbitrary bloc along the combined number line. For purposes of the proof, all primes and their composites are considered, but only in consecutive order and at the appropriate point.

Moving on, Table III's Column G lists the actual number of twin primes within each bloc, followed, in Column H, by the fractional value of pairs in each bloc that are twin primes. As long as Column E is not permanently reduced to zero from some bloc onward, there are more twin primes and the supply is not exhausted. Ever-larger twin prime pairs are being discovered, so no end has yet been found to their supply for Column G (and therefore also H), and the value in Column E has therefore not been permanently reduced to zero. However, the data so far give no indication in the other direction either; nothing in Table III suggests that the progression of fractions in Column E cannot be permanently reduced to zero at some point. No boundary is evident below which the values in Column E cannot fall, so a zero value remains conceivable for some as-yet-unknown large value in Column B. No clear pattern has emerged. But there is a way to create a base pattern that demonstrates that the data in an infinite continuation of Column E will never be permanently reduced to zero after some arbitrarily large n is reached in Column B.
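Column E's fractions can be checked directly from the product form of RN given above, using exact fraction arithmetic (a sketch; the names are illustrative):

# Multiply (p - 2)/p across the primes in force to get RN exactly.

from fractions import Fraction

def r_n(primes):
    frac = Fraction(1)
    for p in primes:
        frac *= Fraction(p - 2, p)     # each prime leaves (p - 2)/p of the pairs
    return frac

print(r_n([3]), r_n([3, 5]), r_n([3, 5, 7, 11, 13, 17, 19]))
# 1/3 1/5 135/1729 -- Column E for Blocs 2, 3, and 10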

Table III
The Elimination of Paired Numbers along the Combined Number Line as Twin Primes

  A       B          C          D       E             F          G            H
 Bloc   Defining   Pairs in    1/e    Fraction (f)  1.000(f)   Number of    Fraction of
         Number    Bloc (e)           of Pairs                 Twin Primes  Pairs in Bloc
                                      Remaining                in Bloc      that are Twins
  1        1          4        1/4       ...          ...          2           1/2
  2        3          8        1/8       1/3         .3333         2           1/4
  3        5         12        1/12      1/5         .2000         2           1/6
  4        7         16        1/16      1/7         .1429         2           1/8
  5        9         20        1/20      1/7         .1429         2           1/10
  6       11         24        1/24      9/77        .1169         2           1/12
  7       13         28        1/28      9/91        .0989         3           3/28
  8       15         32        1/32      9/91        .0989         4           1/8
  9       17         36        1/36    135/1547      .0873         2           1/18
 10       19         40        1/40    135/1729      .0781         2           1/20


Figure III
An Example of the Pattern of Composites

Paired Number   Position   Composite of 3   Composite of 5
   13, 15          1             x                x
   15, 17          2             x                x
   17, 19          3
   19, 21          4             x
   21, 23          5             x
   23, 25          6                              x
   25, 27          7             x                x
   27, 29          8             x
   29, 31          9
   31, 33         10             x
   33, 35         11             x                x
   35, 37         12                              x
   37, 39         13             x
   39, 41         14             x
   41, 43         15
   ------------------------------------------------------
   43, 45         16             x                x
   45, 47         17             x                x
   47, 49         18
   49, 51         19             x
   51, 53         20             x
   53, 55         21                              x

Red denoted twin primes in the original (here 17,19; 29,31; and 41,43); green indicated pairs with composites of both 3 and 5 (here marked x in both columns).

The Proof: Twin Primes Are Infinite in Number

One could guess, based upon the discovery of ever-larger twin primes, that the pattern in Table III's data probably continues infinitely, but probably and absolutely are radically different concepts. Proof or rejection of the conjecture's validity requires finding a relationship in which it can be demonstrated either that some mathematical linkage requires the elimination of opportunities for twin primes to exist after some point along the number line, or that a boundary at least as mathematically stringent as the existing pantheon of primes is demonstrably incapable of eliminating the opportunities for the existence of twins. While the primes themselves cannot offer a convincing argument in either direction, an algorithm can be constructed employing a bit of Table III as its starting point. Doing so requires the use of an acknowledged false assumption that mathematically creates a situation which would more aggressively eliminate opportunities for twin primes to occur than do the existing composites, but which would nonetheless be demonstrably unable to block the infinite supply of twins, thereby proving that ever-larger twin primes cannot be blocked by the progression of composites down the number line. Table IV sets up that proof's mechanics.

Table IV
Inability to Eliminate Twins through a More Aggressive Elimination of Paired Numbers

  A       B          C          D         E             F            G          H
 Bloc   Defining   Pairs in    1/e    1.0000(1/e)   Fraction (f)  1.0000(f)  Column G
         Number    Bloc (e)                         of Pairs                 Divided by
                                                    Remaining                Column E
  1        1          4        1/4       .2500         ...          ...        ...
  2        3          8        1/8       .1250         1/3         .3333      2.666
  3        5         12        1/12      .0833         1/5         .2000      2.401
  4        7         16        1/16      .0625         1/7         .1429      2.286
  5        9         20        1/20      .0500         1/9         .1111      2.222
  6       11         24        1/24      .0417         1/11        .0909      2.180
  7       13         28        1/28      .0357         1/13        .0769      2.154
  8       15         32        1/32      .0313         1/15        .0667      2.131
  9       17         36        1/36      .0278         1/17        .0588      2.115
 10       19         40        1/40      .0250         1/19        .0526      2.104
  ...     ...        ...        ...       ...           ...         ...        ...
  99     197        396        1/396     .0025         1/197       .0051      2.040
  ...     ...        ...        ...       ...           ...         ...        ...
 999    1997       3996        1/3996    .0002503      1/1997      .0005007   2.0012

Columns A through D in Table IV are taken directly from Table III (for ease of quick comparison), with the exception of the added data for Blocs 99 and 999. Table IV's Column E meanwhile converts the fractional value of 1/e to a decimal, which facilitates dividing the data in Column G by that in Column E. Column E's figure is the decimal value that a given pair of numbers along the combined number line possesses as a share of the total number of pairs within its specific bloc (one pair in Bloc 2 accounts for 1/8 of the eight pairs in that bloc, which can be recorded decimally as .1250). Table III's Column G, with its actual number of twin primes per bloc, and Column H, with its fraction of pairs in a given bloc that are twin primes, are not carried over to Table IV.

The patently false assumption upon which Table IV is mathematically constructed is that every odd number in Column B is, and will be treated as, a prime for the purpose of eliminating number pairs along the infinite number line as twin primes. Table IV's new Column F records the fraction of the paired numbers, beginning with each bloc, that are not removed from consideration as twin primes by the mathematical action of the primes and false-primes acting against the number line's paired numbers. The fraction recorded in Column F is calculated by multiplying the fraction of the pairs not removed by the action of the composites of 3 (1/3) by the fraction (3/5) of pairs not removed as possible twin primes by the action of 5, and onward through the odd numbers effective within each specific bloc. Thus, the fraction displayed for each bloc in Column F is the product of the fractions of paired numbers that each prime or false-prime cannot remove from consideration as twin primes. Because false-primes are included, a greater fraction of pairs is removed from the number line, and a smaller fractional value remains than is found in Table III's Column E. That means that Column F's fractional values, and Column G's accompanying decimal values, are lower than those that could be obtained down each column using only the true primes and their composites.12 Those calculations are done for both primes and false-primes by the author to establish a more restrictive bound than would be possible with just the true primes and their composites. The calculations are quite simple, as illustrated for 3 through 9:

For just 3:                    R = 1/3
For 3 and 5 combined:          R = (1/3)(3/5) = 3/15 = 1/5
For 3, 5, and 7 combined:      R = (1/3)(3/5)(5/7) = 15/105 = 1/7
For 3, 5, 7, and 9 combined:   R = (1/3)(3/5)(5/7)(7/9) = 105/945 = 1/9

New twin primes are added in each bloc of Table IV because the primes and false-primes upon which the composite calculations are based cannot generate enough composites at any given point to eliminate all pairs as twins. Column H in Table IV is the result of dividing Column G by Column E, to show how many twin primes ought at minimum to be found in each bloc.13 It is apparent from the limited entries in the table that the values in H decrease toward 2.000 as one descends the column. The values in H will never actually fall to exactly 2 or lower (let alone to the 0 needed to block the creation of new twins) due to the relationship between the fractions in D and F.

The number of odd pairs in a bloc, as shown earlier, is calculated by the formula

    Bn = [(n + 2)^2 - n^2] / 2

so the number of pairs in a given bloc will always be twice the defining number for that bloc, plus two. Meanwhile, it has already been shown that, for the combination of primes and false-primes composed for Table IV, the fraction of pairs remaining is the inverse of the defining number. Division of the number of pairs in a bloc by the defining number is therefore equivalent to

    X = (2n + 2) / n

Such a relationship will always produce a result that is a decimal value greater than 2, with that value constantly decreasing as n increases. While decimal values from Columns E and G are employed to calculate H, the same relationship results, and the value found for any given bloc in H will always be greater than 2.

Use of the false-primes thus produces values in Table IV that are both predictable and always greater than 2, thereby insuring that even with the extra mathematical impact against the existence of twin primes, twin primes cannot be extinguished by the more stringent constraints found in Table IV. Table IV thereby provides a minimum bound on the number of twin primes, which must clearly be infinite, just as they would have to be under the more stringent constraints envisioned for Table IV. The question is resolved: twin primes have to be infinite in number in reality, because twins would still be infinite in number under a prime-plus-false-prime arrangement that removes more pairs from the combined number line than the real primes and their composites alone ever could. In addition, the real-prime-plus-false-prime combination has the advantage of a predictability through infinity that is not evident with just the real primes, ensuring that a minimum boundary for twin primes exists through an infinite and predictable progression.
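Table IV's false-prime construct can likewise be sketched. Under the rule described above, every odd defining number m from 3 through n, prime or not, removes the fraction 2/m of the remaining pairs; the product telescopes to 1/n, so Column H is exactly (2n + 2)/n. A minimal sketch (names chosen for illustration):

# Compute a row of Table IV under the false-prime assumption.

from fractions import Fraction

def table_iv_row(bloc):
    n = 2 * bloc - 1                    # Column B: the defining number
    e = 2 * n + 2                       # Column C: pairs in the bloc
    f = Fraction(1)
    for m in range(3, n + 1, 2):        # odd numbers treated as primes
        f *= Fraction(m - 2, m)         # telescopes to Fraction(1, n)
    return n, e, f, f * e               # the last value is Column H

for bloc in (2, 3, 10, 99, 999):
    n, e, f, h = table_iv_row(bloc)
    print(bloc, n, e, f, float(h))
# h is 2.667, 2.4, 2.105, 2.010, 2.001 (rounded): above 2 in every bloc

Exact arithmetic gives Column H directly as (2n + 2)/n; the table's printed entries for the large blocs differ slightly in the later decimals because they were produced by dividing rounded decimal values.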


Afterword

It seems that the proof cannot be that simple, but it is. A minimum boundary has been found that is absolutely predictable in its mathematical behavior and that demonstrates that twin primes cannot fall below a certain density per bloc; that ensures the infinitude of twin primes, because the number of blocs is infinite, their infinitude resting in turn upon the infinitude of the odd numbers. Simultaneously, the proof's mechanics match up well with the decreasing density of twin primes. The prime/false-prime algorithm upon which the proof is constructed reveals a minimum of 2+ twins per bloc, and that matches up quite well with the need for a decreasing density of twins, for the size of the blocs steadily and predictably increases due to the increasing distance between the squares of the consecutive odd numbers that define each bloc. The true number of twins per bloc will vary, but it will consistently exceed the minimum set by the boundary, which does not conflict with the infinite number of twin primes required by the conjecture. What is clear is that the composites of real primes alone can never wipe out the opportunities for twin primes along the dual number line.

Now that it is proven that there is an infinite number of twin primes, what, then, about other 2k differences between primes, where k is any whole number? The arrangement for twins was accomplished by combining two parallel number lines whose variance was "arbitrarily" selected to be two; any even-number difference could have been set, and two was selected because it met the conjecture's parameters. Two number lines can be shifted relative to each other to create any difference between them and their then-paired numbers. The author will follow this paper with examples of such other differences, creating new Tables III and IV to prove that such relationships create infinite numbers of prime pairs for any 2k arrangement.14

Notes

1. "Twin Prime Conjecture." http://mathworld.wolfram.com/TwinPrimeConjecture.html

2. Ibid.

3. McKee, Maggie. "First proof that infinitely many prime numbers come in pairs." Nature, 14 May 2013. http://www.nature.com/news/first-proof-that-infinitely-many-prime-numbers-come-in-pairs-1.12989

4. "Twin Prime." http://en.wikipedia.org/wiki/Twin_prime

5. Ibid.

6. Kiger, Kenneth. AARP Blog, May 24, 2013. http://blog.aarp.org/2013/05/24/50-something-mathematician-solves-twin-prime-conjecture-problem/

7. Chang, Kenneth. "Solving a Riddle of Primes." New York Times, May 20, 2013.

8. "Bounded gaps between primes." http://michaelnielsen.org/polymath1/index.php?titla=Bounded-gaps-between-primes

9. There is an aspect of quantum theory that proposes all of existence is a vast hologram, but number lines do not exist in nature – they are human intellectual constructs.

10. Because the number lines are offset by two, composites of three are not found opposite each other, leading to 2/3s of all number pairs being removed from consideration as twin primes, with one member of each of those pairs being a composite of three.

11. The removal of even numbers from consideration, and from the calculations, can be done because they are in all cases irrelevant. If even numbers were included in this instance, Ba would equal 8.

12. Of course, the composites of the false-primes are really composites of smaller primes, but they wind up being counted twice to arithmetically cut down on the number of possible twin primes. They do not actually remove any twin primes, but the calculations make it appear that the number of twins has been reduced.

13. It is important to remember that both primes and twin primes do not appear to follow orderly progressions. With calculations projecting 2+ twin primes per bloc, some blocs may have more primes than that, accumulating some of those decimal values to result in more than two twins in a given bloc. That happens with the twins that are permitted by the actions of the real primes and their composites. However, the actions of the composites of the real primes – say 3 and 5 – follow a cyclical pattern, as explained in the text, so that on the average there are more than two twin pairs per bloc due to the composites of 3 and 5; the addition of larger primes to the mix masks those effects by knocking out pairs not removed by the actions of 3's and 5's composites, but the composites for 3 and 5 follow a cyclical, or modular, pattern.

14. The introductory explanations will not be needed again, as they would remain the same. What will need to be determined is whether or not a similar relationship exists between D and F, producing a similar H, for 2k pairs to be infinite in number for any given whole-number value of k.


Part II: Science


Applied Mathematics and Physics

Xeno of Elea created a series of now-famous paradoxes that sought to defend the philosophical viewpoint that the then-known universe was a single entity in which motion and divisibility were illusions. That philosophical contention, known as the Elean School, has since been generally rejected, though quantum mechanics holds out tantalizing hints that there may be some truth to that viewpoint at the smallest levels of existence within the universe. The following paper disputes several of Xeno's paradoxes and presents a new argument against his Fletcher's Paradox (or Paradox of the Arrow) based upon a calculable truth: even nondimensional instants of time carry an imprint or signature where motion exists.


Xeno's Paradoxes: Discovering Motion's Signature at a Dimensionless Instant of Time

c. 2012

Xeno's Paradoxes

Xeno of Elea (alt. spelling Zeno, ca. 490-430 B.C.) created a series of paradoxes that ostensibly demonstrated the difficulties inherent in working with concepts involving infinity, and they supported the philosophical contention that motion and divisibility were illusions – that the universe is singular, eternal, and unchanging.1 His paradoxes have led to controversies in the mathematical and philosophical worlds which linger – a bit – today.

A member of the Eleatic School founded by Xenophanes and championed by Xeno's mentor Parmenides in southern Italy (an area then known as Magna Graecia), Xeno supported the school's doctrine of "all is one," crafting his paradoxes to counter those devised by philosophers who opposed Parmenides' ideas. Even though the Eleans believed that motion and divisibility were illusions, they applied considerable intellect to their philosophical arguments, as witnessed by the continuing disagreements over the solution of Xeno's paradoxes. Xeno employed the technique of reductio ad absurdum, challenging opponents' arguments and supporting his own through proof by contradiction.

Xeno devised an estimated 40 puzzles, now termed paradoxes, to support the Eleatic philosophy, though most of those puzzles were variations on just several themes. His only known work, Epicheivemata, was a compendium of those puzzles. Most of his original text has unfortunately been lost, like many other ancient treatises; only 200 words of it yet remain.2 What the modern world knows of Xeno's paradoxes has arrived via other Greek philosopher-scientists, including Aristotle and Plato. With seemingly nothing left of Xeno's Epicheivemata, and with the general lapse of learning in Europe during the Dark Ages, Xeno's legacy dwelt in obscurity until his work was popularized by Bertrand Russell and Lewis Carroll. With that popularization, four of Xeno's paradoxes have become generally familiar today:

o The Dichotomy – which argues that absolute motion is impossible;
o The Achilles – relative motion is likewise impossible;
o The Arrow (or The Fletcher's Paradox) – time cannot be divisible; and,
o The Stadium – half the time is equal to twice the time.

This paper will briefly examine The Dichotomy and The Fletcher’s Paradox to summarize Xeno’s reasoning behind them. After challenging the two paradoxes along classical lines of opposition to them, the paper will present a new argument against The Arrow that demonstrates that motion can be perceived at a dimensionless instant of time, which interestingly enough opens the door a tiny crack to uncover a little quantum weirdness in the macro world.

No Paradox Exists with The Dichotomy

In The Dichotomy, Xeno proposed that before a person could traverse a given distance, he would first have to cross half that distance. But before that person could cross half of the initial distance, he would first have to cover half of that first half, and before that could take place, he would have to move across half of the first quarter of the distance. As any distance can be infinitely subdivided into ever-smaller halves, motion would become impossible under The Dichotomy because the individual attempting to move could never initiate his journey: he would first be compelled to take an infinitely small step that equaled zero. If zero distance becomes the first step, then no motion could be possible and our world would become an illusion of divisibility and motion. However, no paradox exists.3

To demonstrate that contention – and, correspondingly, that motion and divisibility are not illusions – Xeno's Dichotomy can be recast into its alternative version, which, instead of barring the initiation of motion, keeps any journey, however short, from ever being completed, again due to an infinite subdivision of the proposed journey into ever-smaller segments. Changing that aspect of the problem will make the absence of an actual paradox clearer, and what can be discerned from that recasting will then be applied to Xeno's primary format. In that recasting, the perplexed traveler would first be forced to cover half of his proposed journey, then cross half of the remaining distance, and then half of what yet remained, ad infinitum. In that manner, an infinite number of steps, with a last step of zero, would be required to complete any journey, regardless of its physical length, thereby erecting, from Xeno's perspective, an unspannable barrier to the completion of any movement from one point to another.

Xeno's alternative paradox ultimately breaks down because the Greek philosopher adroitly confused the process of infinite subdivision with motion. They are not equivalents, or even complementary operations. Someone giving thought to the paradox might not initially catch that distinction because it is easy to assume that the process of infinite subdivision requires time, so he or she might think that anyone performing an infinite series of subdivisions and traversings thereof must consume an infinite amount of time, as each subdivision requires some finite time for its completion, making the total process infinitely long. Infinite subdivision, however, exists outside of time.

What does that mean? While the application of mathematics to a problem requires time for its completion, and mathematics would frequently employ numbers in the solutions to problems, no number has any temporal aspect simply as a number. A number is dimensionless until it is used or applied in some manner that, for example, defines a dimension or is used for counting. Infinite subdivision is likewise an intellectual concept that is not performed as an everyday, real-world process. While any distance could be infinitely subdivided, that is not how motion progresses. Motion involves moving from some starting point to a finishing point. In between those two points lies a measurable distance. A person moving between those two points has speed or velocity (two similar, but not identical, concepts, as velocity involves a vector). If the course between the two points is a straight line, both speed and velocity would take the distance covered by the person in moving from the start to the finish and divide it by the time, yielding a value that can be expressed, for example, in miles per hour or centimeters per second. Energy is expended moving from the initial to the final point, and time passes.4

One can be said to cross any mental subdivisions of a given distance, but such subdivisions would be crossed in a time that is a function of distance divided by speed. Thus, infinitesimally small distances would be crossed in infinitesimally short time intervals. In contrast, Xeno's paradox assumes that all subdivisions of a distance must apparently be delineated and crossed in finite times, so that crossing an infinite number of such subdivisions would require an infinite amount of time; at a minimum there would be some zero-distance segment (a point) that could not be crossed, as no motion could occur in such a segment. The process of infinite subdivision is thereby confused to create a paradox in which no distance

could ever be fully crossed.

In The Dichotomy, Xeno poses the impossibility of ever being able to start, because the process of infinite subdivision requires the individual to first cross an initial subdivision of zero before being able to cross any other subdivisions of the total distance. But as seen in the alternative above, motion progresses with a speed that is determined by dividing the distance by the time expended. An infinitesimally small distance would require an infinitesimally small time to cross, given any normal average speed. Infinite subdivision is not a process undertaken by the traveler, but rather only by the mathematician or philosopher in a thought-experiment. Nothing is ever actually infinitely subdivided in the physical world.

As stated, motion is movement between two definable, essentially adjoining points – the start and the finish. If infinite subdivision could somehow decrease the distance between two points to the extent that they become one dimensionless point, there would then theoretically be no motion at that ultimate, dimensionless subdivision (though, technically speaking, no subdivision would yield just a point, because half of something cannot be nothing; that would make all distances equivalent to nothing, as infinite subdivision should then ultimately yield nothing but strings of dimensionless points along any course of travel, making any motion thereby impossible). Such an outcome is what Xeno cleverly argued would result if the universe were not a singular, indivisible entity. Xeno appeared to be arguing that this is the outcome that would result from any attempt at motion; but in assuming both that an infinite number of steps is required for any distance and that any trip would be barred from being initiated because any first or subsequent step would be limited to covering a zero distance, Xeno confused the issue for his readers. No infinite number of subdivisions would ever yield a distance equal to a dimensionless point and, as pointed out above, infinitesimal subdivisions would require only infinitesimal amounts of time to cross, given a finite speed of motion.

To summarize: motion occurs between dimensionless points, not within them, because they are artificial constructs which possess no internal passage of time. Xeno attempted to defend the belief that all motion was illusory because any distance was a collection of dimensionless points that could not be crossed and because infinite subdivision produced an infinite succession of divisions of a distance that would consume infinite time in their crossing, which he reasoned must be impossible.
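The convergence at the heart of this refutation can also be put numerically. In the sketch below (purely illustrative; the distance and speed are arbitrary assumptions), the crossing times of the ever-halved segments are summed, and the running total approaches the finite value distance/speed rather than growing without bound:

    # Summing Xeno's halved segments: the crossing times converge to a finite
    # total equal to distance/speed. Distance and speed here are assumed values.
    distance = 100.0            # meters (assumed)
    speed = 5.0                 # meters per second (assumed)
    segment = distance / 2.0
    total_time = 0.0
    for _ in range(60):         # 60 halvings; later segments are vanishingly small
        total_time += segment / speed
        segment /= 2.0
    print(total_time)           # approaches distance/speed = 20.0 seconds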

The Fletcher's Paradox

The question of whether or not motion exists was dealt with by Xeno through The Fletcher's Paradox. Xeno perceived that at any given dimensionless instant of time, the arrow could not be in motion. As the arrow must move through a series of dimensionless moments of time, passing from one definable point to another, no motion could take place within any of those dimensionless moments, making motion impossible if time is composed of (merely) a series of moments. Aristotle sums up the paradox, explaining that "If everything when it occupies an equal space is at rest, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless."5 Time must, in Xeno's usage in his paradox, consist of an infinite string of dimensionless moments. With no motion taking place in any of them, motion must, as a consequence, be an illusion.

The initial, fairly obvious solution parallels that for The Dichotomy; dividing time into dimensionless units is like dividing space through infinite subdivision. Such division is an intellectual, mathematical process, but it is divorced from movement in the physical world. Dimensionless instants of time do not comprise time any more than dimensionless points comprise distance or a line. The location of a point in either space or time can be defined, but it has no real existence because it has no dimension. Points (in space or time) are theoretical constructs that enable mathematicians and physicists to make calculations regarding processes in the physical world. Those dimensionless points provide references; just as calculations are made ignoring forces like gravity and friction to calculate ideal cases involving another property, the instant or moment of time has been employed as an intellectual tool to enable calculations to be made without the results being blurred by motion across time and space.

Yet, even though the instant of time is a theoretical construct, motion can, in a fashion, be perceived at that construct. That ability to perceive motion as part of an instant overturns Xeno's paradox through a second argument that, by means of mathematical exposition, illustrates how an observer 'sees' an object's motion that accompanies, or is attached to, any given dimensionless instant. Such motion is not seen in the normal sense as progressing along the path of travel, but it does leave an almost, but not totally, imperceptible trace on that moment,

verifying that motion is in progress. Anything in motion carries an observable characteristic that sets it apart from an identical object or entity that is not in motion. A recognizable difference thereby exists between an object in motion and one at rest relative to an observer.

The Fletcher's Paradox sets up a problem concerning the divisibility of time that parallels that of the divisibility of space. Xeno maintains that for motion to take place, an object – specifically an arrow in this paradox – must move from the place it occupies at one moment to some other place at some other moment. In the case of the arrow, Xeno contends that no motion occurs within any dimensionless moment, so therefore the arrow cannot be in motion. Regardless of which moment is selected for examination, no motion ought to be detected, according to Xeno.

The ancient Greeks, as talented as they were, could not measure the speed of light; they did not even realize that light had a speed of propagation. Modern scientists have a fairly good understanding of its speed and have defined it to be exactly 299,792,458 m/sec. But apparently until now no one has had a means for detecting motion at a dimensionless moment of time. With no duration to an instant, there has theoretically been no way to measure motion, even motion with the swiftness of light, because motion requires time for an expenditure of energy to generate or sustain it. Nonetheless, motion is measurable at (though not across or within) a specific instant of time, even if the technology for doing so must advance for the observation to be actually made at the low speeds common in the everyday physical world. A thought-experiment illustrates how that observation can be accomplished. The experiment will also show how the Fletcher's Paradox is resolved by demonstrating that an arrow's motion can be observed at a frozen moment of time, thereby overturning Xeno's contention that since motion cannot be seen, it must therefore not exist.

Measuring Motion at an Instant

Xeno's arrow is taken as the departure point for a thought-experiment that demonstrates that, contrary to Xeno's understanding, motion can be observed at a dimensionless instant of time, and that experiment will then be extrapolated to note where a little touch of quantum weirdness can be found in the macro world. For the experiment, the arrow will be arbitrarily defined as one meter in length and its midpoint will be marked exactly by a dimensionless point. It will travel at a constant velocity of 100 km/hr. along a straight flight path. That midpoint will be located at point M for purposes of the experiment. The effects of gravity and any air or wind resistance will be ignored for simplicity of calculation, though those factors would have to be worked into any calculations involved in a laboratory performance of the experiment.

Figure I illustrates that thought-experiment. In it, an observer (at O) is located 10 m from the midpoint at the instant that a line drawn from O to M would be perpendicular to the arrow's path of travel. The arrow's point (located at P1) thereby forms the right triangle OMP1. P2 would be a point behind P1 along the arrow's path of travel that creates the line P2P1, which represents the visible length compression of the front half of the arrow due to the arrow's straight-line motion from the observer's left to his right at 100 km/hr. R would be a point at the arrow's rear, situated at the end of the arrow at the point in time when O and M form the described right triangle's longer leg (OM). OM, at 10 m, is the longer of the triangle's two legs (OM and MP1). OP1 meanwhile serves as that right triangle's hypotenuse. The key point here is that OP1 is slightly longer than OM, specifically by 1.25 cm (rounded) as calculated using the Pythagorean Theorem.

At the specified moment chosen for the thought-experiment, M is the closest point along the arrow's path of flight to the experiment's observer at point O. That means that the light-carried image of the arrow's midpoint at M reaches the observer at O before the corresponding image of the arrow's point at P1 that started simultaneously as the midpoint image started from M. The image from P1 arrives later because it had to travel a path that is 1.25 cm longer. Therefore, from the observer's viewpoint, the arrow's point has not yet reached the position on the diagram defined as P1, but is rather seen at point P2, an as-yet unspecified position ever-so-slightly behind P1's actual position, situated so that the light, and the point's image from its earlier arrival at P2, arrives at the same time that the midpoint's image does at O. The difference is exceptionally minuscule, but it is real. That means that the front half of an arrow in motion, under the conditions specified, must appear shorter, or compressed, to an observer than an identical 1 m long arrow at rest at exactly the same distance from the observer. Applying similar logic, it can be shown that the end of the arrow, physically at R1 when the midpoint is at M, has to start from an earlier

point in time than the image's departure time from M (when the rear of the arrow was at R2) to arrive at O at the same time as the midpoint's image at M; the back half of the arrow will therefore appear lengthened or expanded in comparison with a stationary arrow.

That may seem a bit odd, but it really is not. The stationary arrow would also have different arrival times for a given instant's image from P1 than it would for the image coming to the observer from M, but no image compression would take place because the image seen at a given point in time from P1 would have started earlier than the image from M. If the arrow is stationary, the complete image would keep reinforcing itself even though the starting times from different places along the arrow's length would have begun at earlier times than the shortest possible pathway from M to the observer at O.

If that seems at all confusing, consider two different types of energy travelling from the same point to the same observer, but at different speeds. The visible flash and smoke from a track meet starter's gun fired at a track's 200 m mark are seen by timers at the finish line before they can hear the sound of the gun firing. The sound wave arrives later than the light carrying the image, which means that when the sound wave arrives at the timers' positions at the finish line, they are already seeing images that began their path to the timers later than the sound waves started. A timer is therefore hearing something that happened further in the past than the light that arrives at the same moment. One may think of such situations as gateways into slightly different pasts.

The arrow in motion under the thought-experiment's conditions thus presents a midpoint and a front point whose light-carried images began their journeys to the observer at different points in time in order to arrive simultaneously at the observer's position.

Figure I
Simplified Diagram of Apparent Visual Compression

[Diagram: the arrow flies left to right along the line R–M–P2–P1; the observer at O sits on the visual plane 10 m below the midpoint M, with sight lines drawn from O to M and from O to P1. Diagram not drawn to scale.]

The image from P1 that began at the same time as the light-carried image from M will not arrive until a later instant because it has 1.25 cm further to travel. The front half of the arrow in motion will therefore appear ever-so-slightly shorter, or compressed in length, than the arrow that is not in motion. That tiny, but calculable, compression would provide the evidence that a given arrow is in motion; one can "see" motion at a given dimensionless moment in time because, with sensitive enough equipment, one could measure that compression. That compression is the motion that one sees at the dimensionless instant. Xeno was therefore wrong because one could, with advanced-enough equipment, observe the effect of motion at a dimensionless moment, so the paradox dissolves into the nothingness of a failed idea. The ancients of Xeno's time did not understand the speed of light and what individuals actually saw. Light carrying images to an observer serves, in a sense, as a time machine because time must pass for any image to reach an observer; more distant objects are further in the past than closer ones, which is a point well-understood by astronomers.

Calculating Apparent Compression

A mathematical relationship can be established that relates the difference between P1 and P2, which represents the apparent compression of the front half of the arrow, to the velocity of the arrow times the difference in the length of OMP1's hypotenuse minus the OM leg, all divided by the speed of light, beginning with:

arrow velocity / speed of light varies as P1 – P2 (the apparent compression) / [OP1 (hypotenuse) – OM (leg)]

What this says is that the arrow's travel time from P2 to P1 will be controlled by the difference between the length of the hypotenuse and the length of the specified leg and the amount of time required for light to travel their different distances. It would not take long for light to travel the difference in distance between those two sides of OMP1, which means that there would be little time elapsing between the arrow's point moving from P2 to P1, resulting in an almost minuscule distance being travelled between P2 and P1 at the arrow's slower speed of 100 km/hr.

An algebraic expression of that relationship, with the variables rearranged for calculation of the apparent compression of the arrow's front half, with Ca equal to the apparent length of the compression (or reduction in front-half length, as represented by P1 – P2), becomes:

Ca = v (OP1 – OM) / c

Solving:

Ca = (2.778 x 10^3 cm/sec)(1,001.249 cm – 1,000.000 cm) / (3.0 x 10^10 cm/sec)

Ca = (2.778 x 10^3 cm/sec)(1.249 cm) / (3.0 x 10^10 cm/sec)

Ca = 1.157 x 10^-7 cm
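The same arithmetic is easy to check numerically. The sketch below is illustrative only; the variable names follow the thought-experiment's symbols, and the rounded value of c from the text is used:

    import math

    c = 3.0e10                    # speed of light, cm/sec (rounded, as in the text)
    v = 100 * 1.0e5 / 3600.0      # 100 km/hr converted to cm/sec (~2.778e3)
    OM = 1000.0                   # observer-to-midpoint leg, cm (10 m)
    MP1 = 50.0                    # midpoint-to-point leg, cm (half of the 1 m arrow)

    OP1 = math.hypot(OM, MP1)     # hypotenuse of right triangle OMP1
    Ca = v * (OP1 - OM) / c       # apparent compression of the arrow's front half

    print(OP1 - OM)               # ~1.2492 cm: the hypotenuse's excess over OM
    print(Ca)                     # ~1.157e-7 cm: motion's signature at the instant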

This figure of 1.157 x 10^-7 cm represents motion's signature on the arrow in flight under the conditions specified and is represented by the line segment P1P2, which is greatly exaggerated in Figure I. It is the apparent foreshortening or compression of the front half of the arrow; while it is extremely minute,

its magnitude can be calculated, making it a real quantity and, as such, it contradicts Xeno's contention that, since motion cannot be seen at a dimensionless instant of time, motion cannot be real because time consists of a string of such instants. Motion's signature could be seen with technologically advanced enough equipment, so the paradox falls because motion could be determined at a dimensionless instant of time. What would be seen is a bit counterintuitive – a shortening of the front half of the arrow in the direction opposite to its path of flight – but it happens due to the different distances that light, and therefore the arrow's image, must travel from different parts of the arrow in the thought-experiment as it is conceived. What one sees are images of parts (in reality an infinite number of parts) from ever-so-minutely different times blending together to produce the image of the arrow that the observer sees.

Now, an astute observer would note that another calculation must be made before the issue can be fully and finally resolved. Based upon the mathematics of the Pythagorean Theorem, OP1 is minutely longer than OP2 due to P1P2 being a calculable reduction of P1M. At the velocity of the arrow in this experiment, it actually does not amount to a difference that would change the experiment's conclusion. To demonstrate, calculations will be done side by side comparing the time the arrow's point takes to travel from P2 to P1 versus the time that light would take to cover the difference in length of OP1 (earlier calculated to be 1,001.2492 cm) and the length of OP2 (calculated to be 1,001.2491 cm utilizing Pythagoras's exceptionally wonderful theorem). The formula for velocity (v = d/t) will be used to compare the travel times involved:

Arrow point travel time, P2 to P1:

t = d / v
t = (1.1568 x 10^-7 cm) / (2.778 x 10^3 cm/sec)
t = 4.1641 x 10^-11 sec

Light travel time across OP1 – OP2:

t = d / v
t = (1.0 x 10^-4 cm) / (3.0 x 10^10 cm/sec)
t = 3.3333 x 10^-15 sec
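Those two times can be compared in a few lines as well (again a sketch; the distances are the rounded values used in the text):

    # Comparing the arrow point's travel time over P2P1 with the light travel
    # time across OP1 - OP2, using the rounded values from the text.
    v = 2.778e3                  # arrow speed, cm/sec
    c = 3.0e10                   # speed of light, cm/sec
    P2P1 = 1.1568e-7             # cm, the apparent compression Ca found above
    dOP = 1.0e-4                 # cm, OP1 - OP2 from the rounded hypotenuse values

    t_arrow = P2P1 / v           # ~4.16e-11 sec
    t_light = dOP / c            # ~3.33e-15 sec
    print(t_arrow, t_light, t_arrow / t_light)   # ratio ~1.25e4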

The arrow point travel time from P2 to P1 is about 1.2492 x 10^4 times greater than the time that light would require to cross a distance equal to OP1 – OP2. A slight alteration of the P1P2 distance would be needed to adjust the relationships so that the image from the adjusted P2 position would arrive at the observer at the same time that the midpoint's image arrived from M, but it is not critical to the conclusion reached here about the "visibility" of motion at a dimensionless instant of time.

As one might surmise from the particulars of the thought-experiment, the numerical results could be changed by altering the experiment's parameters, though motion would still leave an imprint at a dimensionless instant of time. An object's closest point to its observer could be changed to any point on the object; it could even be replaced by a different object, but an observer would still note either a compression in front of the nearest point, a lengthening behind that point, or both. The size of the experimental object can be varied to any degree desired, as could its distance from the observer. The object could meanwhile be moving towards the observer or on a path away from him. The object's path need not even be straight; it could be in orbit around the observer or an astronomical body on which the observer is located. All cases would demonstrate apparent compression and/or lengthening/expansion.

The following calculation illustrates one of many different possibilities. In it, the arrow and all of the other parameters from the original thought-experiment are retained, but its velocity is increased to 0.75c:

Ca = v (OP1 – OM) / c

Ca = (2.25 x 10^10 cm/sec)(1,001.249 cm – 1,000.000 cm) / (3.0 x 10^10 cm/sec)

Ca = (2.25 x 10^10 cm/sec)(1.249 cm) / (3.0 x 10^10 cm/sec)

Ca = 0.9368 cm

At high velocities, approaching that of light, the apparent compression (and, likewise, the apparent expansion behind the midpoint in this experiment) becomes very visible. There would be a larger adjustment for the different lengths of OP1 and OP2, but as OP2 > OM due to the Pythagorean Theorem, there will be a noticeable compression/expansion effect and a clear refutation of Xeno's reasoning in the Fletcher's Paradox.
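Wrapping the formula as a function makes the speed dependence easy to explore; this sketch (ad hoc naming, same assumed geometry as before) reproduces both figures:

    import math

    C = 3.0e10                                   # speed of light, cm/sec (rounded)
    OP1 = math.hypot(1000.0, 50.0)               # cm, same 10 m / 1 m geometry
    OM = 1000.0                                  # cm

    def apparent_compression(v):
        # Ca = v (OP1 - OM) / c, for an arrow speed v in cm/sec
        return v * (OP1 - OM) / C

    print(apparent_compression(2.778e3))         # 100 km/hr -> ~1.157e-7 cm
    print(apparent_compression(0.75 * C))        # 0.75c     -> ~0.937 cm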

A Bit of Quantum Weirdness

Quantum weirdness occurs at the ultra-small level of atoms and subatomic particles. One of the quandaries of quantum mechanics arises with the location/velocity problem: an observer can determine either the location or the velocity of a particle – but not both – because measuring one of those characteristics disturbs the other. Due to forces and interactions, physicists cannot even predict a particle's location with great accuracy because too many factors need to be controlled.

The same problem arises with Xeno's arrow. "Seeing" the arrow, even at a dimensionless instant of time, does not help an observer predict its future location. Due to the compression/expansion phenomenon resulting from motion, different possible velocities for the arrow, different distances from an observer, and different angular relationships between the arrow and the observer, the exact location of the arrow cannot be pinpointed at any given moment because what is "seen" is in the past, and different parts of any object so observed are in slightly different pasts. The differences in observed and true location may be extremely small, but their existence illustrates how, due to just this particular observational phenomenon, things are just not where they ought to be.

The existence of the compression/expansion observational phenomenon crushes the Fletcher's Paradox because motion's signature can be calculated and, with improving nanotechnology tools, will be measurable. Apparent compression and expansion do not create much of a problem with the sizes and speeds that normally exist in the macro world, being well within the tolerances of most operating systems found at the macro level. But such differences could become important at the nanotech scale when high velocities are involved. It is now just a thought, but the time approaches when such concerns will become critical.

Notes

1. The Elean School proposed that everything was one, that motion was an illusion, and that most things perceived were likewise illusions. Xeno attempted to refute the non-Eleans by demonstrating the impossibilities behind their thinking – that, for example, the divisibility of space leads to the conclusion that motion is impossible.

2. http://www.masthcs.org/analysis/reals/history/zeno.html

3. Aristotle was an early Greek philosopher-scientist who presented a series of arguments against Xeno's paradoxes, pointing out fallacies in his logic (Physics).

4. Physicists still debate the nature of time; it is part of Einstein's space-time continuum, but how it functions is still open to discussion.

5. Aristotle, Physics, 239b30.

6. The rounded figure used herein for the speed of light is the same for light travelling in a vacuum as it is for light travelling through Earth's atmosphere.


A Change of Pace

As an astronomy student and laboratory assistant, the author often wondered why Venus was so inhospitable to life while Earth was the only known home for it in the Solar System. Yes, Venus is closer to the Sun, but it is still beyond the inner edge of what astronomers call the "Goldilocks Zone," where life ought to be possible. It would be hotter than its near-twin, the Earth, but temperatures there should have been within the range possible for life, and its polar regions would probably not be much warmer than the tropics on Earth; it ought to be habitable. However, with temperatures hot enough to melt lead and a poisonous, choking atmosphere of sulphuric acid and carbon dioxide, what might have been apparently never was, or at most did not last for long.

What could have driven Venus down a pathway so different from that of Earth – or, conversely and critically for the search for life elsewhere, what might have enabled Earth to develop so differently from Venus? An insight into the cause or causes of those differences might help astronomers gauge the possibility for success for the SETI project, the Search for Extra-Terrestrial Intelligence. It is just possible that Venus is the norm for the many Earth-sized planets likely to be discovered by astronomers employing ever-advancing technology.

The following brief article sketches out several reasons for the differences between the two planets and proposes the root cause behind those differences. Future research into the questions touched upon by the article will undoubtedly bring new theories, clarifications, and insights as part of the ongoing process of science. It is the first of a projected group of pieces concerning science that will comprise a modest section of this e-book, which is still under construction. Enjoy the article and consider its implications.


Implications of Venus Being a Failed Home for Life

c. 2011

Introduction

Appreciation for the role that lunar gravity has played in making Earth more hospitable to life has been increasing in recent years. It has been generally accepted that having the Moon in orbit around Earth has stabilized its axial tilt, keeping the planet more consistently around a value – 23° 27' – that provides seasonal variations without the extremes that are currently the case with Venus and were apparently the case with Mars in that planet's geological past. Scientists have likewise theoretically connected tidal pools that fluctuate daily with the evolution of life on Earth. Computer simulations that have modeled a violent lunar birth have likewise now achieved widespread acceptance in the astronomical community.

Many theories take time to attain acceptance in the general scientific community. One of those, which plays a critical role in this paper, was Alfred Wegener's 1915 book, The Origin of Continents and Oceans (originally published in German), which presented the ideas on tectonics and continental drift that he first proposed in 1912. His ideas did not gain academic credibility until fifty years later, when marine geology and the discovery that the continents appeared to move relative to the North Pole forced geologists to adopt Wegener's ideas as the best explanation for the phenomena that they were discovering.

That the Moon's existence has been necessary, or at least extremely beneficial, for the development of life on Earth is now gaining general acceptance. The ensuing paper will optimistically add another credential to our satellite's resume of accomplishments in fostering life on Earth. It will do so in the context of the failure of life to take hold on Earth's nearest planetary neighbor, Venus. This paper, in an attempt to provide the broad context for the theories that it expresses, will first explore planetary evolution and the Moon's connection to it, contrasting geological and biological conditions on Earth and Venus, which the author dubs "the failed home for life," leaving open the possibility that Venus may well have once harbored life forms similar to those found on Earth during the early days of biological evolution on our planet. It will then move forward, explaining a broader scope for the Earth-Moon relationship than is currently accepted. How Earth is being affected by the evolution and aging of that relationship will be tied into the generally accepted explanation of what went wrong for life on Venus and how Venus, and not Earth, might provide the basic model for the viability of life on the escalating number of extra-solar planets being discovered.

Why Not Twin Earths?

Comparing what we know about Earth and Venus, it would seem that the latter ought to be a reasonable candidate for life beyond the Earth, but that is extremely far from reality. If a hell exists in the Solar System, it exists on our nearest planetary neighbor. Lead can melt on its surface, and the atmosphere is an inhospitable mix of an extremely thick CO2 shroud topped by a sulphuric acid cloud cover. Table I sets forth important similarities and differences between the two planets, including the two stark differences of temperature and present-day atmospheric content; even a casual review of the table's data would suggest that life ought to have achieved a beachhead on our closest planetary neighbor – if our understanding of solar system history is correct – had it not been for the Cytherean1 surface temperature and atmosphere. Something therefore must have happened differently on the two planets for Earth to have travelled one path, towards an abundance of life forms, and Venus another, completely different trail, towards hostility to terrestrial-type life.

Table I
Comparing Earth and Venus

Characteristic                         | Earth                                                    | Venus
Distance from Sun                      | 149.6 x 10^6 km (1.00 AU)                                | 108.2 x 10^6 km (0.72 AU)
Relative Insolation                    | Earth = 1.0                                              | 1.9 x Earth
Mean Surface Temperature*              | 15° C                                                    | 464° C
Planetary Diameter                     | 12,756 km                                                | 12,104 km
Mass                                   | Earth = 1.00                                             | 0.82 x Earth
Global Structure                       | thin crust, mantle, molten outer core, solid inner core  | thin crust (but thicker than Earth's), mantle, solid core
Contemporary Atmospheric Composition*  | N2, O2, H2O, trace CO2                                   | CO2, H2SO4
Early Atmospheric Composition          | H2, He                                                   | then probably the same as Earth's
Volcanism                              | active plate tectonics, hot spots                        | hot spots
Cratering                              | minor; mostly weathered, subducted                       | minor; volcanic resurfacing

* Significant differences
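The table's relative-insolation entry follows directly from the inverse-square law; the quick check below (illustrative only) recovers the factor of roughly 1.9 from the two orbital distances:

    # Inverse-square check of the relative insolation entry in Table I.
    d_earth = 1.00    # AU
    d_venus = 0.72    # AU
    relative_insolation = (d_earth / d_venus) ** 2
    print(relative_insolation)   # ~1.93, i.e., about 1.9 x Earth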

Examining Table I's data, it is obvious that insolation, or the amount of solar radiation received at our neighbor's distance from the Sun, is almost twice as great as that received by the Earth, but those average figures do not take into consideration either substantial variations on each planet due to equatorial versus polar insolation levels or the long Cytherean night, which ought to contribute to atmospheric and surface cooling. Earth's axial tilt of 23.5° allows for frozen polar zones and torrid equatorial conditions. The Cytherean tilt of 177.3°, combined with its long nights, should produce a variation there also, leaving large regions seasonally temperate, even if just barely by Earth standards. Historically, the Cytherean axial tilt was not likely 177° at all times; lacking a stabilizing satellite, it probably varied over time, even if a collision knocked it into a greater tilt at some point in its past. Unfortunately

for the cause of life, all of Venus is excessively hot due to the high atmospheric CO2 content. While there is no detectable molecular oxygen in Venus's atmosphere, considerable oxygen is bound molecularly to Cytherean carbon to produce the planet's crushing atmospheric pressure and the hot-house effect that results from carbon dioxide's ability, as a greenhouse gas, to trap heat. Something happened differently on the two planets to smother Venus in that suffocating CO2 while Earth was left with just enough CO2 to support plant life while substantial molecular oxygen facilitated the rise of the animals.

There appears to be general agreement in planetary astronomy that active plate tectonics on Earth, contrasted with now inactive tectonics on Venus, is the significant difference between the two planets,2 but the root cause behind Earth's active plate tectonics versus Venus's now apparent lack thereof remains an open question. The following analysis proposes that the existence of Earth's large lunar companion is the basic cause of the Earth's hospitable conditions for life, and it suggests that this finding will have implications in the search for intelligent life beyond the Solar System (SETI) and in the determination of the remaining time of habitability for life on Earth.

Brief Planetary Histories

Much of the geological histories of Earth and Venus, based upon the best and latest scientific evidence, would have the two neighbors experiencing many similar creative processes. Both undoubtedly began through the aggregation of dust, then planetesimals, planetoids, and finally protoplanets, as is currently taking place around younger stars such as Beta Pictoris. Earth wound up somewhat larger than Venus, but the difference is very modest. At some point in its very early history Earth was hit a glancing blow by a Mars-sized protoplanet, with the collision resulting in one or possibly two moons partially comprised of material from the interloper and partially made up of Earth crust and mantle.3

The Accretion Era

The four terrestrial planets accreted into rock and metal bodies in which the metal gradually differentiated down into the core of each planet, aided by the heat generated by the accretion process and by gravity. Earth had an early H2 and He atmosphere and probably also had some water. However, solar radiation would have driven the two lightest elements off into interplanetary space and broken down the stability of the water molecules.4 Gamma rays plus ultraviolet light and other damaging radiation from the Sun would have had an easy time breaking down molecules in the early atmosphere because the Earth's still undifferentiated core would not have been able to generate the magnetic lines of force that protect the planet today, and there would not have been enough oxygen to create the O3 (ozone) that today blocks out much of the harmful ultraviolet. A strong planetary magnetic field requires a spinning, molten metal outer core surrounding a solid inner core for its generation; because their interiors were still in the process of differentiating, none of the early terrestrial planets had inner dynamos capable of generating strong magnetic fields.5

Venus, Mercury, and Mars were undoubtedly molded by the same processes that shaped the Earth during those early times, with differences being modest results of the varying planetary masses and their distances from the Sun, or from Jupiter in the case of Mars. With the Asteroid Belt composed primarily of rock or nickel/iron bodies and inner Mercury having a metallic core, planetary compositions were generally similar; no available evidence suggests that the four terrestrial planets formed in any radically different way from each other or that they were formed from significantly different materials.6

What is accepted as fact in astronomy today is that the accretion process rapidly assembled the planets through a process of collisions, building ever-larger bodies. At one point in the latter part of the process there were up to 100 planetoids orbiting in the inner Solar System. Being inside the solar system's ice zone boundary, water would have existed in the inner system, but it would not likely have lasted long. As the solar nebula condensed, water would have been gravitationally attracted to the growing planetesimals and been included in the early, merging bodies. The heat of the collisions that resulted from that accretion process would have driven water – as steam – out of the condensing bodies and into their atmospheres where, due to the lack of magnetospheres in those very early times, radiation would have disassociated the water molecules, allowing most of the hydrogen to escape.7 The accreting planets would have been left very hot, with surfaces comprised of molten rock and minerals. The entire accretion process took something on the order of 100 million years to complete.8

The "Billiard Hall" Era

More planets, planetoids, asteroids, and Kuiper bodies existed near the end of the Accretion Era than now remain in the Solar System.9 Likewise, planets did not follow the same orbits that they do today. As they accreted, the planetary bodies began a winnowing process that continued until the illusion of seeming stability that comprises our modern Solar System resulted. The author calls that violent period of planetary collisions the "Billiard Hall" Era because there were so many collisions and planets changing their orbits so dramatically that the Sun's retinue must have seemed to smash into each other like the balls in a recklessly active billiard parlor.

During the Accretion Era, Earth orbited the Sun as a solitary planet prior to acquiring its lunar companion. The protoplanet Theia10 revolved around the Sun in an orbit very close to Earth's. Theia was a Mars-sized object, and gravitational interactions between it and Earth resulted in a collision, modeled by astronomers, that landed a glancing blow on Earth, ripping off terran crust and mantle that then merged with rubble left behind by Theia to form either one or two moons that circled Earth in a tight orbit.11 Earth's mantle was reduced in mass and at least some of Theia's core merged into the Earth, modestly increasing our planet's mean density. Because Earth's nascent crust and its mantle were still very hot and plastic from the heat of accretion, terran gravity reestablished the planet's spherical shape within about one day, leaving no scarring apparent today. Whatever remained of Theia headed out into interstellar territory, its damage done and its blessing bestowed.

Earth's collision with Theia was not an exception to business as usual in the solar neighborhood in the late Accretion Era, when the evolving system was less than 100 million years old. Rationale and some evidence exist to argue that each of the remaining inner planets experienced one or more major collisions prior to commencement of the Late Heavy Bombardment (LHB). Even the Moon, as already noted, had a major collision, which in the Moon's case merged two satellites travelling in roughly the same orbit, combining a smaller body that now makes up the highlands on the side of the Moon not visible from Earth with the much larger body that comprises most of the mass of today's Moon, in an impact that had a relatively slow closing rate, facilitating a merger rather than a shattering crash. Mercury also suffered a major collision, one that ripped away a considerable portion of its crust and mantle. Mercury's core had already substantially differentiated, which led to a much higher mean density for the innermost planet, just as the collision that had created the Moon did with Earth. If a moon could have coalesced from the debris of Mercury's collision, it did not remain in orbit around the planet for very long, because Mercury's Hill Sphere is too small for the innermost planet to retain a moon in competition with the nearby massive solar gravity well.12

Mars experienced a series of major collisions, with the most powerful of those carving out the Borealis Basin in the planet's northern hemisphere; the Basin was only discovered by Mars orbiters. Earth-based observers failed to detect evidence of that collision because the Basin's edges were depressed and because the depression covered such an immense area.13 The collisions it suffered reduced the bulk of the Red Planet's atmosphere, severely restricting its greenhouse capability and leaving the planet cold and dry.14

That leaves only Venus in the inner system. The massive tilt of the Cytherean axis of rotation could be the result of an oblique collision with a smaller body that did not have enough mass and relative speed to put sufficient debris into orbit that could coalesce into a moon. Venus has a slightly lower density than Earth, which lost crustal and mantle materials due to its impact with Theia, thereby giving Earth a resulting higher average density than it had previously possessed. It is therefore unlikely that Venus lost any appreciable mass due to such a collision. That proposed collision with Venus is speculative; as of the date of this article it has not yet even been modeled, but it is a likely scenario given the amount of material orbiting the Sun in the inner system at that time.

Collisions were thus the norm in the inner Solar System of the late Accretion Era. With more objects in orbit around the Sun in the inner system, the likelihood for collisions was good. Had those major collisions occurred during or after the LHB, there would have been clues in the amount of cratering on Mercury, the Moon, and even Mars that would have dated those massive impacts to the LHB. A crucial question does arise with the Moon's creation: what is the probability that major moons have been created by similar collisions in the zones that are populated by terrestrial-type planets around other stars? The answer to that question will have a great deal of influence on where to search for extra-terrestrial life once the search technology has been upgraded to the point that the markers for life can be detected on distant earths and on super-earths.

The Giant Planets

Following the initial accretion of Sol's major planetary retinue, three outer gas giants are believed to have migrated outward from the Sun, establishing orbits similar to those that they have today. Neptune, the outermost and third-most massive giant, accreted about 30% closer to the Sun than it is today. According to the Nice Model, Neptune ostensibly accreted inside the orbit of Uranus, and those two planets began a dance that saw them exchange positions.15 After Neptune's and Uranus' exchange, or in any event about a billion years after the formation of the

Solar System, Neptune began its steady drift further outward from the Sun. Beyond it lay the Kuiper Bodies and the Oort Cloud.

A little rocket science makes what happened next a little more comprehensible. Applying standard Newtonian physics, gravity serves as a function of mass, causing bodies to be attracted towards each other in a process that is strengthened, as they close the gap between them, by the inverse square of the distance that separates them. Newton's Second Law of Motion [published in his 1687 Philosophiae Naturalis Principia Mathematica] explains that the mutual forces of action and reaction between two bodies are equal, opposite, and co-linear. Combining Newton's work on mathematics and physics with Kepler's Laws of Planetary Motions, it quickly becomes evident that the Solar System, through a complex clockwork of ever-moving masses, is held together by gravity.

Rocket scientists have employed those Newtonian principles to give space probes speed boosts through gravitational assists that bring the probes close to planetary bodies to pick up acceleration on their journeys of exploration. But it must be remembered that if a probe uses a planet's gravity to pick up a little acceleration, the helpful planet in turn loses a little velocity that is proportional to the masses of the two objects, as is required by the Second Law. In that way a probe can gain a considerable boost while minimally impacting the planet's orbital velocity; the difference between a probe's mass and a planet's mass is so great that the effect upon the planet's velocity is infinitesimal.

Neptune hovered on the edge of a wider Kuiper Belt, which is believed to have extended farther inward during the early years of the Solar System than it does today. With approximately 17 Earth masses, Neptune had considerable gravitational impact upon the rock and ice bodies that revolved about the Sun beyond that ice-giant's then-closer orbital path. Neptune itself was a rock and ice body, as models of its interior strongly suggest. There simply was not enough material left beyond Neptune to create another giant world. But Neptune would not leave the outer Solar System at peace. The planet's powerful gravity began tugging at the rock and ice bodies that lay out beyond it in deeper space. Those that could not lock themselves into a simple mathematical resonance with Neptune [like Pluto's 2:3 orbital synchronicity with Neptune] were drawn inward through Neptune's warping of space-time. Some of those bodies, including Neptune's moon Triton, were captured by one of the outer giants. The giant-planet satellites with retrograde motion compared with their primary's equatorial rotation are believed to be such captured Kuiper bodies.16 Uranus and Saturn then provided additional gravitational boosts to the inward-bound Kuiper and Oort objects. Massive Jupiter finally either (1) pulled those migrating bodies into itself or into orbit around it, (2) expelled them from the Solar System, or (3) sent them further sunward.

As a result of Newton's Second Law, Neptune's orbit ratcheted outward, as did Uranus' and Saturn's to lesser degrees, all due to the acceleration they imparted to the Kuiper and Oort bodies dislodged by Neptune. It is theorized that Jupiter's orbit remained fairly similar to what it had originally been, perhaps actually shrinking by 12%.17 If Neptune's distance from the Sun did increase by 30%, or anything close to that figure, a fair amount of mass had to have been gravitationally dislodged by that blue ice-giant and sent sunward. At this juncture in the discussion it is important to remember that great amounts of ice, which is a major component of Kuiper bodies, thereby streamed sunward after the planets had formed or had almost completely formed.
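The scale of that momentum exchange is easy to illustrate with Newton's Second Law (toy figures; the probe mass and velocity boost are assumptions, the planetary mass is roughly Neptune's):

    # Toy momentum-balance sketch for a gravity assist: what the probe gains,
    # the planet loses, scaled down by the tiny mass ratio. Figures are assumed.
    m_probe = 1.0e3          # kg, assumed probe mass
    m_planet = 1.02e26       # kg, roughly Neptune's mass
    dv_probe = 5.0e3         # m/sec, assumed velocity gained by the probe

    dv_planet = dv_probe * m_probe / m_planet   # velocity the planet gives up
    print(dv_planet)         # ~4.9e-20 m/sec: imperceptible for one encounter,
                             # but cumulative over countless dislodged bodies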

Water Arrives

Many astronomers now believe that the LHB, which included objects from the ice-rich regions of the Solar System, brought water to the terrestrial bodies. There is evidence for ice in deep polar craters on the Moon,18 and Mars seems to have limited amounts of ice on its surface plus evidence for earlier flowing water and shallow seas;19 even Mercury now seems to have small amounts of ice.20 The Moon and Mercury lack protective atmospheres, so any water that they did acquire through the LHB would logically have rapidly evaporated off into space; if any was preserved, it would therefore have to be under the surface or in deep craters where sunlight cannot penetrate, as very recent discoveries suggest. While Mars still holds some water from its ancient seas, Venus has none now evident,21 in contrast with Earth, which has retained massive oceans of that liquid so essential for life.

Evidence was recently announced (October 5, 2011, reported online in Nature) that strengthens the theory that Earth's large supply of water originated in the Kuiper Belt.22 Comet Hartley 2, which likely descended into the inner Solar System from the Kuiper Belt, has a hydrogen-to-deuterium ratio that matches that of the water in Earth's oceans. Six other comets whose HI to HII ratios have been studied had a different ratio; however, their source of origination was believed to have been the theorized Oort Cloud, which lies far beyond the Kuiper Belt. Lead author of the study, Paul Hartogh of the

Max Planck Institute for Solar System Research, said that the team will have to study other comets to determine if their ratios match those of terrestrial water. Should additional studies determine a similar HI to HII ratio for the short-term comets that originate in the Kuiper Belt, the Belt will more assuredly stand out as the source of the water in the inner system that would have arrived during the LHB. As Belt objects are mixtures of rock, ice, and other frozen gasses, their prevalence in the LHB would strengthen the case for the existence of water on Venus following the LHB. Then it would be judged that Venus had possessed water to help lubricate the planet's early plate tectonics and quite possibly to have filled oceans, as on Earth, which could have harbored early, primitive life-forms.

It is unlikely that the Sun would have evaporated away the entire ancient Cytherean hydrosphere just through insolation. The Sun has gradually increased its luminosity as it has aged, with Venus receiving about 20% less insolation at the time of the LHB by current estimates, which in turn suggests that there would have been less chance of the Sun boiling away all of the water Venus received from collisions with incoming ice-laden bodies. Something in addition to solar radiation alone had to be acting upon Venus, in a way that did not occur on Earth, for Venus to lose all of its water. The primary culprit in that loss would, of course, have been the greenhouse effect brought on by the thick, CO2-dominated Cytherean atmosphere.

Earth at the same stage of planetary evolution – at about the time of the bombardment of ice-rich bodies – also possessed an atmosphere rich in CO2, but it did not suffer from the runaway effect that evaporated the Cytherean hydrosphere and put that water vapor within reach of the solar radiation that would break down the H2O molecules and allow the hydrogen to escape into space from the upper reaches of the planet's atmosphere. That means that some process was initiated on Earth that was either absent on Venus or was shut off in an early epoch there. That process was plate tectonics.

On Earth, plate tectonics create new oceanic and continental crust through extensive volcanic activity in places where lava wells up through long volcanic chains, such as the Atlantic Ocean's mid-oceanic ridge and the Pacific Ocean's Ring of Fire. In other areas, subduction involves large pieces of existing crust being forced down below a neighboring plate, taking with them carbon, which is then stored in rock metamorphosed through that subduction. Carbon was thereby cycled out of the atmosphere, assisted by life processes. Those life processes removed CO2 from the atmosphere and in return released O2 back into it. As molecular oxygen gradually built up, metals were oxidized, O3 was generated, protecting the evolving life from the Sun's intense ultraviolet radiation, animal life that required oxygen arose, and carbon from dead, decaying life forms was deposited and eventually subducted. It had to be an ongoing process because volcanoes released fresh CO2 into the atmosphere.

Venus has numerous volcanoes and volcanic remnants, but no active volcanoes have been spotted to date by missions to the planet or through earth-based radar imaging. Most of the surface is comprised of cooled lava flows, which have resurfaced the planet since the cessation of the earlier bombardments. The existence of those volcanic features indicates that Venus has experienced volcanic activity like Earth's and has had massive lava flows not unlike the Deccan Traps that spread across the Indian subcontinent 65 million years ago. Volcanic activity on Venus may have ended, or substantially ended, because the Cytherean mantle has cooled more than has Earth's, and that cooling would produce a crust thicker than that of the Earth.23 As plate tectonics on Earth are driven by heat from the planet's interior as part of the planetary cooling process, it would be reasonable to hypothesize that the cooling of Venus and the thickening of the planet's crust in comparison with Earth's would have strangled plate tectonics and volcanism.

Venus could very easily have had plate tectonics and a substantial hydrosphere following the LHB, but cooling of the Cytherean interior ended the tectonics and their carbon recycling, allowing the dense CO2 component of the Cytherean atmosphere to take off on a runaway greenhouse spiral that boiled off the water and caused it to rise into the upper atmosphere, where solar radiation attacked the water's molecular integrity. The Cytherean gravity was unable to retain the planet's hydrogen any more than Earth's gravity can retain considerable atomic hydrogen, so Venus lost its water and became inhospitable to life. Only hints of plate tectonics remain there to suggest that the early history of Venus may have been very similar to Earth's,24 but the cooling of the Cytherean mantle and core set Venus on its divergent path, away from its twin. The key question then arises as to how Earth, further from the Sun than Venus, retained greater interior heat, particularly in light of Earth's mass and diameter being only modestly greater than Venus's.
The existence of those volcanic features indicates that Venus has experienced volcanic activity like Earth's and has had massive lava flows not unlike the Deccan Traps that spread across the Indian subcontinent 65 million years ago. Volcanic activity on Venus may have ended, or substantially ended, because the Cytherean mantle has cooled more than Earth's has, and that cooling would produce a crust thicker than Earth's.23 As plate tectonics on Earth are driven by heat from the planet's interior as part of the planetary cooling process, it is reasonable to hypothesize that the cooling of Venus and the consequent thickening of its crust strangled both plate tectonics and volcanism. Venus could very easily have had plate tectonics and a substantial hydrosphere following the LHB, but cooling of the Cytherean interior ended the tectonics and their carbon recycling, allowing the dense CO2 component of the Cytherean atmosphere to take off on a runaway greenhouse spiral that boiled off the water and caused it to rise into the upper atmosphere, where solar radiation attacked the water's molecular integrity. Cytherean gravity was no more able to retain the planet's hydrogen than Earth's gravity can retain atomic hydrogen, so Venus lost its water and became inhospitable to life. Only hints of plate tectonics remain there to suggest that the early history of Venus may have been very similar to Earth's,24 but the cooling of the Cytherean mantle and core set Venus on its divergent path, away from its twin. The key question then arises as to how Earth, farther from the Sun than Venus, retained greater interior heat, particularly in light of Earth's mass and diameter being only modestly greater than Venus's.

The Additional, Critical Difference of Earth’s and Venus’s Magnetospheres

Earth possesses a very strong magnetic field; Venus does not. This is a second very significant difference, and it provides an additional clue as to why Venus and Earth followed such strikingly divergent evolutionary pathways. Among the Solar System's eight planets, neither Venus nor Mars has a notable magnetic field, though geological evidence points to an ancient magnetic field for Mars. The author theorizes that Venus also once had a magnetosphere like the other planets but lost it, with any traces of its prior existence buried deep within the planet's unusually thick crust, which was built up in part by the planet's periodic widespread lava flows; such flows would not easily yield underlying evidence of magnetic orientation in the substrata. Iron particles in the crust will align with a planet's magnetic field, but the massive lava flows that covered Venus after the planet lost its field would have left no apparent orientation to any iron particles in the upper portions of the planet's thick crust.

Magnetic fields are believed by scientists to arise from turbulence within a terrestrial planet's liquid metallic outer core. The existence of a liquid metal core within the Earth has long been established through studies of earthquake P- and S-waves. With temperatures equaling those of the Sun's outer layer, a great deal of turbulence and convection takes place in the Earth's core. Computer models, though still primitive, agree in their general outlines with what is theorized to exist within the core and with what is needed to create Earth's magnetic field. Earth's field helps protect the planet's surface from the solar wind, generating the magnetosphere that traps the potentially harmful particles streaming in from the Sun.25 Venus lacks a magnetic field and an accompanying magnetosphere. The Cytherean ionosphere partly mitigates the absence of a magnetosphere, but damaging solar radiation that would destroy terrestrial-type life does reach down through the planet's atmosphere, unlike on Earth, where most harmful radiation is blocked. That lack of a protective magnetic field is attributed to a cooling of the Cytherean metallic core. Venus's density of approximately 5.2 g/cm³, in comparison with Earth's 5.5 g/cm³, argues strongly that the two planets have very similar overall compositions.26 With similar densities and with births from the same section of the early proto-solar disc, the two planets ought to have very similar compositions and structures. Some divergence in geological history would be possible, but Venus and Earth have followed strikingly different pathways with significantly different results, as is summarized in Table II.

Root Cause of the Terran-Cytherean Differences

Earth and Venus condensed from the same proto-solar disc of gas and dust in basically the same neighborhood. That being the case, it is logical to assume that they accreted from basically the same elements and compounds, and it is not unreasonable to assume that they would have wound up with similar masses and volumes. According to the most recent thinking, Mercury had been larger than it is today, but it lost considerable mass from its early crust and mantle through a collision with a planetoid after the period during which Mercury's metallic core had differentiated out from its mantle and crustal materials. Mercury's mass would also have been restricted by its proximity to the Sun; the circumference of the inner planet's orbit, its proximity to the solar gravity well, and the power of the early solar wind would all have restricted its mass. Mars's size was meanwhile affected by its proximity to Jupiter and the disruptive influence of the powerful Jovian gravitational force.27 With Venus and Earth condensing and accreting just 41 million kilometers apart in the solar nebula, it is hard to picture how the two planets would have condensed out of different materials, or in greatly different proportions, when Mercury and Mars are dissimilar from Earth primarily in size but not in composition. Nothing in current scientific thinking provides a reason for the pathways of Venus and Earth to diverge so substantially from the time of their accretion through to the modern epoch.

Table II
Significant Earth-Venus Differences

Characteristic                  Earth                                         Venus
Plate Tectonics                 Active                                        Inactive; some evidence for ancient activity
Volcanic Activity               Worldwide activity                            Dormant now, but possibly periodic
Magnetic Field/Magnetosphere    Strong activity                               Inactive
Atmosphere                      N2, O2, H2O, small CO2                        Heavy CO2 concentration; H2SO4 clouds
Hydrosphere                     Abundant H2O                                  No H2O
Crust                           Moderate thickness                            Thicker than Earth's
Mantle                          Plasticity in upper region (asthenosphere)    Not believed to have significant plasticity
Core                            Molten metal surrounding solid inner core     Solidified metal
_____________________________________________________________________________________________________
Plate tectonics, volcanic activity, and atmosphere recapitulated from Table I for ease of reader review.

Following the Accretion Era, the inner Solar System experienced the LHB, which brought water in the form of ice from the Kuiper Belt due to gravitational disruption of that region by Neptune and the channeling influence of the gravitational attractions of the three other giant planets. Both Venus and Earth should have acquired considerable water, creating substantial hydrospheres. Venus would actually have been an easier target for those icy bodies to hit: the Cytherean orbit, being smaller than Earth's, created a greater density of incoming bodies attracted sunward by Sol's gravity. Venus should therefore have received more water than Earth, for the same geometrical reason that it receives greater insolation than its outer neighbor. Venus ought to have had oceans, which would originally have helped lubricate tectonic activity, but it was shortly after the LHB that the planets' pathways diverged. Venus began cooling internally faster than Earth. The Cytherean mantle began to cool, and its plastic asthenosphere began to harden. The core likewise cooled and solidified, or 'froze.' Tectonics is a cooling process, utilizing heat that rises from below through convective currents to drive the plate motions. As Venus cooled, plate tectonics would have slowed, then halted; only periodic lava flows, like those of the Deccan Traps, remained to resurface the planet. With plate tectonics halted on Venus, no mechanism remained to cycle Cytherean carbon, so the planet's atmosphere and surface heated up through a runaway greenhouse effect, which in turn boiled off all of the planet's water and allowed the water molecules to be broken up in the planet's atmosphere and the hydrogen to escape into space, because Cytherean gravity was not strong enough to retain the lightest and most common isotope of that lightest of all elements.28 At the same time, with its core cooling, Venus gradually lost its magnetic field and magnetosphere, which in turn meant that the planet's surface had steadily less protection from solar radiation that would be damaging to life, if any had arisen there [and it did arise very early on Earth, as Australian stromatolites testify]. Venus became inhospitable to life, while Earth's plate tectonics continued functioning through to today and its outer core remained molten enough to permit the planet to retain its magnetic protection from the solar wind.29
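Two numbers used in this discussion are easy to sanity-check. The sketch below is the editor's illustration: the inverse-square law is standard geometry, and the luminosity formula is Gough's 1981 approximation, which the article does not itself cite. It reproduces both the geometric insolation advantage Venus enjoys over Earth and a dimmer Sun at the epoch of the LHB, in the same ballpark as the roughly 20% figure quoted earlier.

a_venus, a_earth = 108.2e6, 149.6e6   # orbital semimajor axes in km

# Inverse-square geometry: relative insolation at Venus versus Earth.
flux_ratio = (a_earth / a_venus) ** 2
print(f"Venus receives {flux_ratio:.2f} times Earth's insolation")   # ~1.91

# Faint young Sun, Gough (1981): L(t)/L_now = 1 / (1 + 0.4*(1 - t/t_now)).
t_now = 4.57                # present solar age, Gyr
t_lhb = t_now - 3.9         # solar age at the LHB (~3.9 billion years ago)
lum = 1.0 / (1.0 + 0.4 * (1.0 - t_lhb / t_now))
print(f"Solar luminosity at the LHB: {lum:.2f} of today's value")    # ~0.75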

The Ultimate Question

What mechanism enabled Earth to retain its internal heat, thereby allowing life to develop and thrive there? Remembering that plate tectonics are a cooling process, something must have been generating additional heat for the Earth that was not doing the same for Venus. Earth and Venus do have one very significant difference, which the author hypothesizes put humanity's home on a different pathway from its twin: Earth has a major moon, while Venus has no satellite whatsoever. That made the difference.

To understand how, one must first recognize that all astronomical bodies are storage batteries. They are not so much electrical batteries – though electricity is involved in some of their mechanics – as heat storage batteries. Heat can be stored much like electrical charge, but it dissipates through various means, a loss often described in terms of entropy. Like a gas released into a large enclosure, heat will dissipate until it is evenly distributed; in the case of astronomical bodies, that heat is distributed out into space. Stars are energy generators and storage batteries that produce heat through their nuclear reactions, fusing their raw materials into ever-heavier elements until those reactions can no longer sustain them and they are forced onto one of several possible evolutionary pathways, as determined by their masses. Gravitational energy, through compression of the proto-stellar material, builds up the heat in a nascent star until it reaches the

point where the heat and pressure turn on the fusion reaction that generates helium from hydrogen. Planets and moons, as smaller bodies, do not produce heat through fusion reactions. The terrestrial planets receive some heat from the Sun, but their internal heat, found in their mantles and cores, results from the decay of radioactive elements and, also significantly, from compression caused by the gravity of their masses acting upon their internal structures. So far, there is not much to distinguish Earth from Venus: both planets have generated a good portion of their own internal heat through radioactive decay. The three primary elements involved in radioactive decay on Earth are uranium, thorium, and potassium. Table III provides the half-lives of their heat-generating isotopes.

Table III
Half-Lives of Earth's Most Significant Radioactive, Internal Heat-Generating Isotopes

Isotope     Half-Life
238U        4.47 × 10⁹ yrs.
232Th       1.41 × 10¹⁰ yrs.
40K         1.28 × 10⁹ yrs.
_______________________________________________________________________
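A minimal check of what these half-lives imply over Earth's history (assuming the standard 4.54-billion-year age of the Earth, a figure supplied here by the editor rather than quoted in the article) anticipates the fractions discussed in the next paragraph.

# Fraction of each Table III isotope surviving after Earth's history,
# from the half-life law N/N0 = (1/2)**(t / half_life).
t_earth = 4.54e9  # years; standard age of the Earth (editor's figure)
half_lives = {"238U": 4.47e9, "232Th": 1.41e10, "40K": 1.28e9}

for isotope, t_half in half_lives.items():
    remaining = 0.5 ** (t_earth / t_half)
    print(f"{isotope}: {remaining:6.1%} remaining, {1 - remaining:6.1%} decayed")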

These three elements reside in the mantle,30 wherein they radiate their heat upward, driving plate tectonics, and downward, helping keep the outer core molten. Some scientists will say that there was a different distribution of those elements between the two planets, leaving Venus with less, but that leaves unanswered the question of why Earth would have received such a substantially greater concentration of radioactive elements than the other terrestrial bodies. Some means other than the heat from radioactive decay must be found to account for the differences between Earth and Venus. Earth still has active plate tectonics, a molten outer core, and a magnetosphere, whereas Venus has none of those, or at least not at a level presently perceptible. Something significant must be added to the heat budget equation for Earth to have retained those characteristics which Venus no longer displays. Radioactive decay is important, but using the half-lives of the three radioactive isotopes from Table III, almost half of the original 238U, roughly nine-tenths of the original 40K, and only about a fifth of the original 232Th have decayed since Earth formed. There would be intermediate products in the decay process, but the only conclusion possible is that both Venus and Earth have been losing the supply of radioactive materials that has been heating the planets internally. The Cytherean interior no longer produces enough heat from radioactive decay to drive plate tectonics and keep enough of the core molten to preserve the planet's magnetic field. Something else must have contributed significantly to Earth's total heat budget for it to have retained those characteristics so necessary for the continuation of life on the planet. Both planets also retained heat from their accretion processes, but as noted regarding radioactive isotope decay, that residual accretion heat combined with radioactive decay no longer drives tectonics or retains a molten outer core within Venus, arguing that neither retained accretion heat nor radioactive decay, even combined, would be enough to sustain Earth's tectonics and the planet's molten outer core. No reason exists to suggest that Venus and Earth have significantly different compositions, particularly in radioactive materials, that would have provided Earth with greater internal heat than Venus. Venus is closer to the Sun, so it does get more insolation, but no mechanism is currently known that would greatly increase internal heat loss by raising surface temperatures. Venus's closer position to the Sun would create greater gravitational heating of the Cytherean interior due to the planet's closer orientation to the Sun's powerful gravity well, but that would be offset by Venus having the least orbital eccentricity of any of the eight planets, cancelling out much or most of the advantage in internal heating that Venus would experience due to its proximity to that gravity well.

The Sun does have an effect on the Earth's tides, but the Moon exerts a greater influence, which leads to the inference that the combined lunar and solar influence on Earth's internal heat budget is greater than the sole influence of the Sun on Venus. The Moon adds a significant gravitational component that so far seems to have been overlooked in assessing Earth's heat budget. Its absolute contribution to Earth's heat budget is declining, just as are radioactive decay and latent heat from planetary accretion, but it has been historically significant on a geological time scale. The Moon's gravity pulls on Earth's mantle and core just as it does on the hydrosphere and the atmosphere. That constant churning both creates heat and acts upon the convective currents that drive plate tectonics, modifying the convection that helps the Earth's interior cool through tectonics. Plate tectonics would thus be prolonged on Earth in comparison with Venus due to the Moon's gravitational effect. Modeling of the Earth's core in order to determine how the planet's magnetic field is generated is still in its mathematical infancy, so any modeling of the Moon's effect upon Earth's internal heat budget would be at best a very rough approximation.31

Critics of an enhanced lunar role will argue that the Moon's mass is too small to have such a striking impact upon Earth's geological history and the rise of life on the planet. What, then, could account for the different geological pathways followed by the two planets? Similar sizes, masses, distances from the Sun, and densities argue for similar histories. Earth has a slightly higher density, but if astronomers are correct about the Moon's creation, that greater average planetary density would have resulted from the loss of crustal and mantle material in the collision with Theia that created the Earth's satellite. One could then argue that Earth's loss of part of its mantle helped facilitate plate tectonics, but that benefit would have to include an offset for easier heat loss through a thinner mantle, which ought to have helped cool the planet and slow down, or even end, plate tectonics if there were no other source adding to Earth's heat budget. It should also be noted that the Moon once orbited the Earth considerably closer than it does today.32 The Moon is gradually spiraling outward due to tidal (gravitational) forces. The gravitational interaction that slowed the Moon's rotation, locking one face toward the Earth, and that is causing the Moon to ever-so-slowly spiral outward winds up adding energy to the Earth, ultimately contributing to its heat budget. So far, the Earth has, as a storage battery, retained enough heat internally to both keep the outer core molten and drive plate tectonics, providing the planet with its protection from the solar wind and from the runaway greenhouse effect that choked off any possibility for life, or for continuing life, on Venus. Earth, then, becomes the exception to the rule that Venus sets: terrestrial bodies of approximately the Earth's mass and diameter will cool and lose the plate tectonics and magnetic fields that would make them hospitable to life unless they have a large companion satellite that increases their internal heat, helping to keep those two processes active longer than would be expected on a solitary planet.
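The claim above that the Moon out-pulls the Sun tidally can be checked with the standard inverse-cube scaling of tidal forces. The sketch below, using textbook masses and mean distances supplied by the editor (they are not quoted in the article), puts the Moon's tidal influence on Earth at roughly 2.2 times the Sun's.

# Tidal (differential) acceleration scales as M / d**3. Standard masses and
# mean distances (editor-supplied) support the lunar-dominance claim.
M_moon, d_moon = 7.35e22, 3.844e8     # kg, m
M_sun,  d_sun  = 1.989e30, 1.496e11   # kg, m

ratio = (M_moon / d_moon**3) / (M_sun / d_sun**3)
print(f"Lunar tide / solar tide on Earth: {ratio:.1f}")   # ~2.2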

Evidence for Gravitational Heating from Elsewhere in the Solar System

Evidence for gravitational heating elsewhere in the Solar System is very recent in origin; it did not become evident until NASA's Voyager probes caught dramatic photographic evidence for it. The first evidence of such heating came from Voyager's images of Jupiter's innermost Galilean moon, Io. Great plumes of sulphur were being blown out of volcanic cones, with some of those ejecta forming a torus of sulphur surrounding Jupiter. Io is affected by Jupiter's gravity as well as by that of the moons Europa and Ganymede, which are locked into mathematically simple orbital resonances with Io. Europa has a global ice crust whose smoothness indicates a resurfacing that must be driven by internal heat, likely caused by gravitational heating of the moon's interior. Ganymede's surface is meanwhile a mixture of ancient and relatively young rocky and icy regions; some warming force – again likely gravity – must be heating the interior to drive out the ejecta that produces the newer regions. Further out, Callisto has a dark surface speckled with bright, icy ejecta, with a subterranean ocean serving as a reservoir for that ejecta. Gravity and proximity provide the energy that warms these moons' interiors; they are far too small to have retained any internal heat from their accretion or from radioactive elements.

Mercury, which is smaller than Ganymede, has some source for its internal heat. Although there is no current volcanism on the planet, it seems to still possess a molten outer core that enables it to maintain a magnetosphere. Venus does not have a magnetosphere even though it is considerably larger than Mercury, so something must still be acting on Mercury that does not have a similar effect on Venus. Mercury (with an orbital semimajor axis of 57.9 × 10⁶ km) experiences more than six times the solar tidal force that Venus (with a semimajor axis of 108.2 × 10⁶ km) does. In addition, Mercury's orbit has the greatest eccentricity of any of the eight planets (0.2056), while Venus has the least (0.0068).33
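Those orbital figures translate into tidal terms as follows. This is a minimal sketch using the same inverse-cube scaling; the perihelion-versus-aphelion comparison is the editor's addition, not a calculation from the article.

# Solar tidal force falls off as 1/d**3; orbital data from the text.
a_mercury, a_venus = 57.9e6, 108.2e6   # semimajor axes, km
e_mercury = 0.2056

print(f"Mercury/Venus solar tide ratio: {(a_venus / a_mercury) ** 3:.1f}")  # ~6.5

# Mercury's eccentric orbit also swings the tidal force over each revolution:
swing = ((1 + e_mercury) / (1 - e_mercury)) ** 3
print(f"Perihelion vs. aphelion tidal swing: {swing:.1f}x")                 # ~3.5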

96 the planet’s orbital eccentricity appears to heat the interior enough, on combination with the other, standard internal heat sources to keep some of the core molten and thus able to generate an albeit weak planetary magnetic field.34 In 2005, NASA’s Cassini spacecraft found a large plume of salt water erupting from Enceladus, a small, icy moon of Saturn, about 1/6 th the diameter of Earth’s Moon. NASA scientists announced October 6, 2010 that “tidal heating” combines with a wobble in Enceladus’ rotation to generate the heat that powers those geyser eruptions. Radioactive decay heating would not be enough to prevent cooling and the water’s solidification. The diminutive moon’s oval-shaped orbit brings Enceladus closer to Saturn and then takes it farther away, “the gravity squeezing the body’s insides [varying] accordingly, creating friction and thus heat.” 35 Following that determination, it was announced at the joint meeting of the European Planetary Science Congress and the Division for Planetary Sciences October 3, 2011 in Nantes, France that some areas of Enceladus’ surface have snow crystals piled up to a depth of 100 meters, which scientists believe have taken millions of years to accumulate as the result of the spray from the moon’s geysers.36 Those snowy piles demonstrate that the geysers are not a transient phenomenon. One final clue to the sources of Earth’s total heat budget comes from geologists who also wonder from where it all originates. If they still ponder from where the total internal heat budget originates, then room remains in the equation for gravitational or tidal heating. Astronomers and geologists observe it occurring on other worlds and moons, ranging outward from Mercury to the gas giants’ satellites. The Moon does not benefit from its proximity to Earth and the tidal tug of war between its primary and the Sun because its other sources of internal heat have long since failed to produce enough heat to keep the interior molten in combination with gravitational heating; it is a clear indication that worlds’ interiors will eventually go (relatively) cold when radiational heating and heat from the early formation are exhausted. However, the Moon’s maria stand as reasonable arguments for an ancient molten interior that is now cooled, evidenced by its present lack of a magnetosphere.

Implications of Having a Major Moon

The Cytherean hostility to life as it is known on Earth has significant implications for searches for extra-terrestrial life, including SETI – particularly now that new techniques and technology have found well over 1,200 planetary candidates, and technology is being improved to enable astronomers to locate Earth-sized extrasolar planetary bodies. Instead of locating new Earths that could be candidates, they may well be finding new Venuses that will not be abodes for life. Earth needed its satellite to generate the internal, gravitationally-induced heat that powered the plate tectonics that kept the planet hospitable to life. As there is little difference in size or general planetary composition between Venus and Earth, only the existence of the Moon kept plate tectonics active on Earth. Those on Venus would have died out far too early for life to have continued there; radiogenic heating of the Cytherean mantle and core could not be sustained. The entire core of Venus solidified (or froze) for lack of sufficient heat to keep the outer core molten enough to generate a magnetosphere, and that strongly suggests that the mantle was not kept plastic and active enough to power tectonics. Carbon was therefore not recycled, and Venus became an uninhabitable hothouse. Earth had the Moon, and, as with an estimated 8-plus percent of Earth-sized planets as predicted by computer modeling,37 Earth had the good fortune to have a nearby body that could generate the gravitational heat to keep tectonics active. Referring back to Table III, nearly half of the 238U, roughly nine-tenths of the 40K, and approximately one-fifth of the 232Th present at Earth's birth have already decayed, generating shorter-lived radioactive elements that continue producing internal heat. Earth also has a large, solid inner core, which suggests that the cooling process is well under way. The existence of the Moon is the big difference between the two planets, and by process of elimination, its gravitational heating of the Earth's interior seems to be the primary reason why life-sustaining tectonics have continued on Earth while they are not evident on Venus. Though considerable computer modeling would be needed to ascertain the possibility, it is even possible that tectonics never got started on Venus, because the planet may have needed a large, close satellite to provide enough heat to initiate the tectonic process. If tectonics did get started on Venus, the planet lacked enough internal heat to continue them, condemning any life that might have gotten started there to an early extinction as the planet's water boiled off and the Cytherean atmosphere became toxic to oxygen-breathing life.

While the Moon's gravitational attraction affects all parts of the Earth, it creates notable tides within Earth's three great oceans. The crust and mantle are also subject to the Moon's pull, but their measurable movement is very slight. On the other hand, Earth's oceans, where measurable, evidence major bulges or tidal swells. Two of Earth's great oceans are unbounded, so tides can be observed as rises and falls in their surfaces. The water ocean – the one best known to most people – has tides that, due to the interplay between the Moon's gravitational attraction

and Earth's rotation, actually run ahead of the Moon's local zenith. Due to local factors, the magnitude of the water ocean's tidal bulge varies by location. Earth's great ocean of air is also attracted by the Moon's gravity. As the atmosphere involves less mass than the water ocean,38 the atmospheric ocean has the least impact upon the Earth's overall heat budget. Neither the water ocean nor the atmosphere is bounded, as their upper surfaces can expand – the ocean's water against the yielding atmosphere, and the atmosphere ultimately against the vacuum of space.

The third great ocean has by far the greatest mass. Earth's outer core comprises an ocean of molten metal – primarily iron – that completely envelops the solid inner core and is, in turn, bounded by Earth's mantle. It possesses a viscosity lower than that of water and has a thickness of 2,310 km (1,370 mi). Having a low viscosity, the outer core is subject to lunar gravity, but because it is bounded on its top and bottom by essentially unyielding, solid material, it reacts to lunar attraction with tidal bulges that are forced into becoming density waves constrained by the mantle above and the inner core below. Such a density wave would create pressure against both the mantle and the inner core, and pressure creates heat. Pressure would increase the heat in the molten outer core and in both the lower mantle and the upper part of the inner core. The density wave would also generate friction as the current created by the moving wave pushed laterally along the underside of the mantle; similarly, there would be pressure and friction applied against the surface of the solid inner core. That density wave may be thought of as a piston creating pressure in a cylinder, much as a manually operated tire pump creates pressure to force air into a tire. The pump's cylinder reacts to the pressure and friction by producing waste heat; inside the Earth, however, that heat is not wasted. Instead, convection and conduction push heat up into the mantle, eventually helping to drive plate tectonics, and downward into the inner core, helping to slow the outward spread of its solidification. As the density wave passes through the outer core, it leaves turbulence, like eddies, in its wake, making the outer core harder to model, but that turbulence does help mix the heat and may well add to the generation of the Terran magnetic field. All of this suggests that more computer power will be needed to fully model Earth's heat budget and how that heat mechanically drives the plate tectonics that so fortuitously accommodate terrestrial life.

The foregoing means that those searching for Earth-sized worlds would need to modify their searches in one of two ways. First, they could search for Earths with relatively large satellites like the Moon. That could require more sensitive equipment than would be necessary to search for extra-solar Earths alone, but the existence of such moons would give greater hope for life on any worlds that are discovered. There would, of course, be possible spectral signatures for oxygen that would also help distinguish habitable worlds from their inhospitable siblings. Searchers could also seek out "Super Earths," which ought to have enough mass to retain heat from their initial formation and enough radioactive material to have sufficient internal heat to drive plate tectonics without the existence of a major satellite.39
A combination of the two strategies – seeking planets with large satellites plus greater size than Earth – could also work, but there would be limits imposed by excessive plate tectonics and volcanism, which would inhibit the rise of life rather than foster it, as the Moon has fostered life on Earth. The age of the planet would also be an important factor, both because moons gradually spiral away from their primaries and because plate tectonics are a cooling process; after sufficient time, any planet's interior will cool off and the planet will become less hospitable, because the tectonic process will no longer remove carbon that would otherwise contribute to a CO2-driven greenhouse. Seeking Earths with large moons and Super Earths would be the likeliest, best strategies for locating life-bearing planets; planets of Earth-Venus mass without satellites would very probably be barren, barring some other mechanism for limiting the CO2 build-up in their atmospheres. While it certainly will be argued that the Moon's influence on Earth's internal heat budget is not that great, some mechanism must exist to explain the difference between Earth's still-viable plate tectonics and the lack of them on Venus. Following Occam's precept, lunar gravitational heating represents the simplest explanation, and therefore the more likely correct one, given the known and hypothesized factors; any other explanation will be considerably more complicated, requiring greater certainty in its data. Earth has had more internal heat than its near-twin even though, from their inner-system proximity, the two planets ought to have similar compositions. Considerable evidence exists elsewhere in the Solar System that validates gravitational heating enough to make it feasible as an important factor in sustaining plate tectonics. The data concerning Earth suggest that radiogenic heating and residual accretion heat ought to have lessened as they did on Venus, and that gravitational heating has also decreased substantially as the Moon has steadily receded from its parent. Plate tectonics will therefore have a limited remaining lifespan on Earth, geologically speaking.
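For context, rough published magnitudes can be attached to the competing heat-budget terms. The figures below are rounded estimates from the geophysics literature, supplied by the editor; the article itself quotes none of them. Today's tidal dissipation is a modest share of the total, but, as note 32 observes, the inverse-cube scaling means it was many times larger when the Moon orbited far closer.

# Order-of-magnitude terms in Earth's present internal heat budget
# (editor's rounded literature figures, added for context only).
heat_budget_tw = {
    "Total surface heat flow":        46.0,  # roughly 46 +/- 3 TW
    "Radiogenic (U, Th, K) estimate": 20.0,  # model-dependent
    "Present-day tidal dissipation":   3.7,  # mostly in the oceans today
}
for term, tw in heat_budget_tw.items():
    print(f"{term:32s} ~{tw:4.1f} TW")
# The remainder is usually attributed to secular cooling (primordial heat).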

What Is Ahead for Earth?

The evolution and aging of the Moon's gravitational heating relationship with Earth finds both support and concern in emerging discoveries about what is happening within the Terran core. According to a May 29, 2012 New York Times story reporting on a University College London study, lead researcher Dario Alfe and his colleagues have discerned that outer core iron is releasing its heat rapidly through conduction rather than through the convection that drives planetary plate tectonics.40 The research team's findings, initially publicized in Nature, state that, "The theoretical consequences of this discrepancy are far-reaching. The scientists say something else must be going on in Earth's depths to account for the missing thermal energy in their calculations." A series of possible mechanisms have been proposed as a result:
• More radioactive material exists than previously suspected;
• Inner core iron is solidifying faster than expected and releasing latent heat of crystallization;
• Chemical reactions among the core's iron alloys and among the mantle's silicates are fiercer and more energetic than previously believed; and,
• Something novel and bizarre is happening, but is as yet undetermined.
Taken in the broadest view, this new data indicates that the Earth's core is cooling rapidly on a geological time scale and that, over such a time scale, plate tectonics will face cessation, to the detriment of climate and the continuing existence of life on the planet. Earth's rapid dissipation of heat through conduction is likely a good explanation of what happened on Venus: the Cytherean core cooled, tectonics slowed, then stopped, and the generation of the planet's magnetosphere ceased. As no simple explanation has emerged as to why Venus would have considerably less radioactive material than Earth, or why its heat from accretion should be significantly less than Earth's, the Moon in its orbit around Earth must have provided mankind's home with considerable internal heat through gravitational action stirring the Terran interior. The London findings suggest that radiogenic heating is lessening within the Earth, as it had to have done inside Venus for the observable differences between the planets to exist. The same would hold true for any latent heat from accretion. That would leave Earth with the advantage of gravitational heating due to the Moon's constant churning of the Earth's interior, but as the Moon's orbital semi-major axis has been gradually increasing, that too is of ever-lesser efficacy in heating Earth's interior. Earth is simply cooling off, and it is evolving into another Venus. Its plate tectonics will cease before volcanoes totally die out, so sulphur dioxide will still be released into the atmosphere as a source for sulphuric acid clouds. Carbon would no longer be recycled, building up the CO2 in the atmosphere and eventually leading to a runaway greenhouse effect. The oceans would boil off, and water molecules would be split apart by incoming radiation no longer blocked by a magnetosphere that died out when the outer core crystallized, losing its ability to generate a magnetic field in conjunction with a contrasting solid inner core. None of that would happen instantaneously, but as the tectonics gradually lessened, sulphur and carbon would become increasing burdens for Terran life, as may well have happened to nascent life on Earth's twin.
A tipping point would be reached, but it would more likely arrive on a geological timeframe, with effects noticed only in minute increments. That does not mean that humanity could not produce a runaway independently of planetary geology by increasing its emissions beyond Earth's ability to recycle deleterious waste products. Venus and Earth may well therefore be the once and future twins.

Notes

1. The author prefers using the adjective "Cytherean" over "Venusian" when referring to Venus.

2. By removing a sizeable portion of Earth's carbon from the biosphere, plate tectonics help keep carbon from combining with oxygen in a runaway greenhouse effect. A lower availability of carbon means a lesser chance for a runaway to get started. Other carbon has been bound up and buried in the substances used today as fossil fuels.

3. The most recent scientific theory suggests that today's Moon resulted from the merger of two breakaway bodies that were parked in close orbital proximity to each other; that is not to be confused with Earth's recent, temporary second moon (2007-8), a captured tiny asteroid. See "A second moon for Earth."

4. Any water resulting from the accretion process would not have been able to condense on the hot planetary surfaces still cooling from the accretion. That water would have evaporated into the atmosphere, where radiation would have disassociated the molecules; the hydrogen from that process would have escaped off into space.

5. The turbulence and spinning of the outer liquid core generates the strong electric currents that must surround a spinning solid inner core in order for a terrestrial planet to generate a magnetic field. Terrestrial planets cool from the outside in (crustal cooling) and the inside out (core cooling, or freezing). Inner core cooling had to have begun before a magnetic field could have been generated. That strongly suggests that Venus possessed a magnetic field while its interior cooling was still incomplete. Venus is now believed to have a totally solid core because the planet has no magnetic field.

6. Mercury seems to be a modest exception because its core is, for the planet's diameter, proportionally larger than Earth's, but astronomers now hypothesize that Mercury, like Earth, suffered a collision with another large, planet-sized body that blew off much of the planet's crust and mantle. But in Mercury's case, either no moon formed or, if one formed, it did not remain in orbit very long due to the Sun's proximity and the disruptive power of its intense gravitational field. As Earth and Mercury would still have been semi-plastic at the times of those proposed early collisions, the planetary scarring from those impacts would have filled in, leaving nothing other than Earth's satellite and Mercury's relatively large metallic core as testimony to those events.

7. Deuterium (²H), being heavier than regular hydrogen, is retained more easily. It can serve as a marker for early water in the inner Solar System.

8. The giant planets are believed to have accreted faster than the terrestrial ones. The terrestrial bodies accreted within a 100-million-year span that began with the outset of the condensation of the solar nebula.

9. There were approximately 100 times as many asteroids and Kuiper bodies at that time as now remain in the Solar System.

10. Theia, a Titan, was said to have been the mother of the Moon, hence the selection of that name for the protoplanet that collided with Earth, creating the Moon as one of the results of that collision.

11. Planetary scientist Erik Asphaug of the University of California, Santa Cruz, and Martin Jutzi of Switzerland's University of Bern have run computer simulations that indicate a high likelihood that Earth's present satellite formed from the collision of two predecessors in close orbit to each other. More recently, Earth captured a very small asteroid from near-Earth orbit, with the tiny moonlet going into a polar orbit for 13 months in 2006-7 before it escaped back into a heliocentric orbit. Liz Kruesi, "A second moon for Earth?" Astronomy, February, 2011, p.34.

12. The Hill Sphere is that region surrounding a planet where the planet's gravity exceeds that of the Sun. Satellites generally have to orbit within the inner half of the sphere in order to have a chance for a stable orbit. Mercury, being exceptionally close to the Sun, has a minuscule Hill Sphere, which means that it would have little chance of retaining a satellite.

13. The Borealis Basin was recently discerned from data provided by the Mars Reconnaissance Orbiter and the Mars Global Surveyor. It is at least 3.9 billion years old and covers an area 8,500 km across by 10,600 km long, roughly 40% of the planet's surface. Project scientists determined that the impacting object was at least 2,000 km in diameter and struck Mars at a 45° angle. The northern hemisphere basin it left behind presents the smoothest terrain on the planet, leading some geologists to believe that it was once covered by an ocean [see "Borealis basin" and Chandler]. An ocean forming in the basin would have done much to erode away impact scars from the event and any impacts from the LHB, negating the need for plate tectonics and volcanic resurfacing to eliminate impact scars from that collision.

14. Mars is believed to have experienced at least five major collisions, with one with a Pluto-sized planetesimal creating a vast crater or depression known as the Borealis Basin [above], discovered through data from Mars orbiters. There would have been some reduction in the Red Planet's mass, but it would have been smaller than from the major collisions that blew off considerable parts of the crusts and mantles of both Earth and Mercury. As they are not in retrograde orbits, Deimos and Phobos may well have aggregated from materials blown into orbit from one of those collisions, but with less material available and Jupiter exerting its influence, the moons did not become major satellites. According to recent information, Phobos's interior is about one-third porous space.

15. Originally developed at the Cote d'Azur Observatory near Nice, the Nice Model was proposed by R. Gomes, Hal Levison, Alessandro Morbidelli, and Kleomenis Tsiganis in three papers published in Nature in 2005.

16. Liz Kruesi, "Captured moons of the giant planets," Astronomy, February, 2011, p.30.

17. Jupiter had a tremendous gravitational impact upon the Asteroid Belt. Jupiter's orbit would have shrunk if it was gravitationally casting more asteroidal material, mass-wise, out of solar orbit than it was sending Kuiper and Oort objects sunward.

18. In a July, 2012 report in Geophysical Research Letters, NASA's Lunar Reconnaissance Orbiter provided data indicating that Shackleton Crater, near the southern lunar pole, has crater walls composed of 5-10 percent water by weight. The 3-km-deep crater is in permanent shadow and is therefore cold enough for ice to be retained.

19. Ice can be found in the higher Martian latitudes, especially during winter. The new NASA rover Curiosity will search Mars for signs of ancient life, which would have required H2O. The rover will investigate clay-like sediments in Gale Crater that could have sheltered life. Meanwhile, the European Space Agency's Mars Express orbiter discovered that much greater amounts of water than suspected exist in the Martian atmosphere, which at times becomes supersaturated.

20. Arecibo's radio telescope had previously imaged locations at both of Mercury's poles that had a high likelihood of being ice. NASA's Messenger probe team announced November 29, 2012 that ice and organic compounds had been found in deep polar craters, based upon observations for which they are the best-fit answers. www.telegraph.co.uk/science/space/9713116/ice-found-on-Mercury.html

21. Deuterium, which is not as quick to escape into space as the more common lighter isotope of hydrogen, has been detected in the upper levels of the Cytherean atmosphere, providing evidence for the earlier existence of a Cytherean hydrosphere.

22. Reported online in Nature and in "A Delivery From Space That Made a Big Splash," The New York Times, October 11, 2011, p. D3.

23. Earth's oceanic crust, which comprises roughly three-quarters of the planet's total crust, runs 5-10 km in thickness, while continental crust (the continents and the continental shelves) is 30-50 km thick. Magellan mission gravity measurements have indicated that the Cytherean crust is 50 km thick. That result would be expected if the planet's interior was cooling while declining convection was no longer powering plate tectonics.

24. Continental-sized uplands – Ishtar Terra in the Cytherean northern hemisphere and Aphrodite Terra along the equator – may show evidence of much earlier plate tectonics. In any case, no sign exists of active tectonics either today or in recent geologic times.

25. Earth's magnetic field does not have the same axis as the planet's overall axis of rotation. It has also slowly wandered over time relative to the geographical poles, suggesting that the inner, solid core rotates on a different axis than the Earth as a whole. That would not be impossible, given that the outer core is molten. The viscosity of the outer core would moderate drastic changes in the orientation of the inner core's axis of rotation, but it would not prevent variability in the magnetic axis's alignment relative to the planetary axis. The two axes could have been set on different orientations by earlier impacts that came after the differentiation of iron and nickel out of the mantle and into the core. That thinking is not too different from that expressed by a team led by Dr. Christina Dwyer of the University of California at Santa Cruz and by Michael Le Bars of the Research Institute for Out-of-Equilibrium Phenomena in Marseille, France. Both tackled the cause behind the magnetization of some of the rocks brought back from the Moon by Apollo 11; coming to different conclusions about how the magnetization came about, the researchers posited that the Moon's mantle rotated against the outer core, creating a lunar magnetic field [King]. Variations in rotation are thus not judged to be impossible. The inner and outer cores need not be locked onto one axis of rotation with the mantle and crust.

26. Some of that density difference would be accounted for through the Moon's violent birth, with lower-density Terran crust contributing to the Moon's mass.

27. Jupiter has enough gravitational force to affect Mercury's orbit; disrupting the orbits of the planetesimals and planetoids that aggregated into Mars would not be beyond its power.

28. Gravity did not retain hydrogen in the Cytherean atmosphere, leaving the atmospheric oxygen free to bind with other elements, particularly carbon.

29. Geophysics professor Bruce Buffett of UC Berkeley's Department of Earth and Planetary Science notes that cores like Earth's that still generate magnetic fields need to keep generating heat: "Planets do die. They run out of heat and wind down. So the Earth has to keep regenerating." http://phys.org/news/2010-planetary-magnetic-fields.html

30. Those three elements are known to be rock-loving rather than metal-loving. That accounts for their abundance in the mantle versus the core.

31. Professor Buffett (see note 29) meanwhile sees computer modeling of the Earth's interior as being in its infancy: "I would claim that the models are very far from being realistic, and therefore the inferences we draw from them about the Earth are going to be questionable." One serious problem with the modeling is the scale on which the planet's interior can be modeled. Another problem is the viscosity of the outer core used for the models, because "the viscosity of the liquid core is so low – more fluid than even water – the stirring motions there span the whole range all the way down to one-meter scales, which the models cannot handle." http://phys.org/news/2010-planetary-magnetic-fields.html

32. Lunar recession has been measured by laser ranging employing mirrors left on the Moon by Apollo mission astronauts. The current rate is approximately 3.8 cm per year, which would give a much closer past orbital distance regardless of whether or not the rate varied in the past. Tidal forces vary inversely with the cube of the distance, which means that lunar tidal forces would have been considerably greater in the past, which in turn would have meant greater internal heat production from the effects of lunar gravity on the Earth.

33. Interestingly, the Moon has the greatest orbital eccentricity of the Solar System's larger planetary satellites at 0.0549, nearly twice that of number two Titan's 0.0288. Its orbital eccentricity would contribute to the Moon's ability to stir up Earth's interior and thereby generate internal heat. In return, there is some evidence for Earth stirring up the lunar interior; in addition to other proposed explanations for lunar magnetism lasting longer than it should have via a molten outer core, one needs to consider Terran gravity churning up the lunar interior and thereby allowing the Moon, through gravitational heating, to retain a molten outer core like Earth's longer than originally thought.

34. See, for example, "Past and present tidal dissipation on Mercury," A. Rivoldini, M. Beuthe, and T. Van Hoolst, from the 2010 Copernicus meeting at http://meetings.copernicus.org/epsc2010/abstracts/EPSC2010-671.pdf. The authors state: "Tidal forces from the Sun deform Mercury and because of internal friction generate heat. The amount of generated heat depends on the internal structure of the planet, its rheology, and on the tidal potential."

35. Bill Andrews, "Mysterious water on Enceladus results from lunar 'wobble,'" Astronomy, February, 2011, p.21.

36. Liz Kruesi, "Enceladus is a snowy moon," Astronomy, February, 2012, p.18.

37. Jason Palmer, BBC News, "Moons like Earth's could be more common than we thought," www.bbb.co.uk/news/science-environment-13609153, reports on computer simulations run at the University of Zurich's Institute of Theoretical Physics and the University of Colorado's Laboratory for Atmospheric and Space Sciences. The simulations found a 1 in 12 chance of generating a system in which a planet with at least half of Earth's mass had a satellite of at least half the Moon's mass, with the full range of probabilities running from 1 in 45 to 1 in 4.

38. That can be seen by the entire mass above a given location at sea level producing only one standard atmosphere of pressure, while just 33 feet of water generates one atmosphere of additional pressure above a diver.

39. Harvard researchers Dimitar D. Sasselov and Diana Valencia hypothesize that super-Earths would have more radioactive materials and greater retained heat with which to generate more vigorous mantle churning. "Planets We Could Call Home," Scientific American, August, 2010, pp.38-46. Planetary cores of super-Earths could also be completely solidified through great pressure, dampening any magnetosphere and limiting opportunities for life thereon. Prospects for life might therefore require a limited range of planetary sizes without the existence of a large satellite.

40. Natalie Angier, "The Enigma 1,800 Miles Below Us," The New York Times, May 29, 2012, p.D1.

Author's Biographical Information

At the advice of others, the author has included biographical information by way of his professional resume. While he has had a career in government research as well as a lesser one in education, the author is not a mathematician or a scientist per se. He has been one who takes on seemingly impossible, or at least fairly difficult, challenges, mostly because he enjoys difficult challenges. He disassembles problems into their smallest data components and then examines how that data interrelates, in essence discovering the system that drives a particular problem. In his work on Ulam's Spiral Square, for example, he found a repetition (read modularity) of the positions in the number system that permit primes, easy to spot in base-6 but exceedingly difficult to work with in base-10. That was already known to mathematicians, but that pattern had not previously been applied in conjunction with the pattern of embedded (not spiral) squares that establishes the problem. Obviously, the problem was not impossible; it had just not been seen in the manner that would contribute most efficiently to its solution. Problems are systems; one needs to analyze how they work to solve them. Reducing problems (or systems) to their smallest components has thus been the author's strategy. The foregoing solutions have been put forward for your enjoyment. The author will add to them as he is able to do so. If someone has a seemingly impossible problem requiring analytical help, they may contact the author via e-mail.


Albert H. deAprix, Jr.
7 Wallace Street
Scotia, NY 12302-2308
E-mail: [email protected]

Education:

M.A. Political Science. Graduate School of Public Affairs, SUNY Albany. Concentrations in American political/governmental system, political history, behavior analysis, and research methodology. Received research assistantship. Master's essay examined history and impact of segregationist attitudes on Southern voting behavior.

B.A. Ohio Wesleyan University. History and journalism majors; political science, education, English, and astronomy minor equivalents. Copy editing intern at the Schenectady Union Star as part of journalism major. Student teacher at Olentangy H.S. (Ohio). Astronomy Department laboratory (teaching) assistant. Kappa Delta Pi (education) and Phi Alpha Theta (history) honoraries. History senior research seminar examined segregationist voting in Alabama; journalism senior seminar tested for bias in 1948 presidential campaign news coverage.

Possess a New York State Permanent Certificate in secondary social studies (7-12). Scored 283 on teaching methodologies, 286 on LAST, and 287 on social studies content examinations (100-300 scoring curve) taken for recertification.

Government and Management Training Received:

Intergovernmental Personnel Act training in civil service management with N.Y.S. Civil Service Commission. Intensive six-month program; examined legal basis for classification, position classification techniques, designing and conducting examinations, and salary studies.

Collective Bargaining with U.S. Civil Service Commission and with Pace University Institute for Suburban Governance. Both programs dealt with collective bargaining techniques and fiscal research in the public sector.

Governmental Affairs Leadership Seminar with U.S. Jaycees. Washington training program focused on participation in politics and development of model legislatures.

Hugh O'Brian Foundation Leadership training seminar for East Region. Dealt with program administration and leadership training techniques.

Officers' Training School with U.S. Jaycees. Focused on leadership training, finance, and program management.

Legislative Functions with Cornell Cooperative Extension and N.Y.S. School of Industrial and Labor Relations. Examined operations of county legislatures and other local policy-making bodies.

Government Research, Policymaking, and Administrative Experience:

Local Fiscal Impact Note Coordinator: (N.Y.S. Senate). Examined all state legislation to determine if it had a local fiscal impact, required production of fiscal notes, maintained database, and advised legislative staffs on how to calculate fiscal impacts. Undertook special research projects regarding legislative issues. Researched special reports.

Policy Analyst/Research Associate: (N.Y.S. Senate Research Service/Task Force on Critical Problems). Duties included redesigning and managing Senate's model student legislature and supervising the staff developing educational packets for participants. Conducted research on a wide variety of local government, public employee, and economic development issues and wrote papers from five to 200+ pages on issues of legislative concern.

Municipal Personnel Technician: (Schenectady County Civil Service Commission and County Manager's Office). Duties involved conducting personnel studies and deriving cost estimates for County Manager, researching contract negotiation data, and representing county on Governor's Manpower Development Council. Resolved a longstanding scheduling problem for county air traffic personnel.

Schenectady County Legislator: Scotia-Glenville-Niskayuna
Held numerous leadership positions, including: Majority Leader, Deputy Chairman
Chaired Education and Planning (included library and college matters) and Ways and Means Committees
Special accomplishments included:
Chaired selection process for new county manager
Reapportioned county legislature after Board of Elections was unable to undertake it
Led legislative efforts that established Niskayuna and Glenville branch libraries
Led intergovernmental cooperation efforts
Served on interview and selection panels: Personnel Director, Information Technology Manager
Co-chaired Cultural and Educational Strategy Subcommittee, Hunter Plan Implementation Task Force
Oversaw county contract negotiations team for county legislature

SUNY Albany
Research Assistant to Dr. Clifford Brown on his election analysis, The Jaws of Victory; monitored presidential polling, assembled contribution data

Teaching and Education Experience:

Social Studies Teacher: (Canajoharie Central School District) Global Studies I and II, American History and Government, Economics, and Participation in Government. Enriched courses with public opinion polling (conducted in-school), a mock election, a model legislature, and a mock investment portfolio. Served as a cooperating teacher with St. Rose's teacher training program. (Limited to two years by state retiree earnings cap.)

Political Science Instructor: (Schenectady County Community College) United States Government and Politics (POL123). Enriched course through a public opinion survey; worked with a computerized class management platform (Angel).

Summer School Teacher: (Schenectady City School District) American History and Government and substituted in other subjects; graded Regents' exams. Substituted in summer school two more years and graded Regents' exams.

Substitute Teacher: (Capital Region BOCES, six years). Served as a substitute middle school and high school teacher through the BOCES substitute coordination service in the Schenectady, Cohoes, Mohonasen, Voorheesville, Niskayuna and South Colonie school districts and the BOCES special education program. Served as a long-term substitute in special education in Buckeye Valley School District, Delaware, Ohio.

Trustee with Schenectady County Community College
Chairman, Vice Chairman, Treasurer
Negotiated additional funding from sponsor
Helped organize Board training sessions
Developed efforts to provide trustees with greater exposure within SCCC community
Co-originator of community conference to focus on needs SCCC could fulfill
Active with statewide community college trustees association (ABC/NYCCT)
Presenter, NYCCT Annual Conferences: Role of the Legislator-Trustee in Facilitating Sponsor Support; The Legislator-Trustee and Information Flow
Presenter, ABC Trustee Institutes: Developing Successful Relationships with Local Sponsors
Participant and Delegate, ABC/NYCCT Annual Conferences and Institutes
Member, NYCCT Education Committee
Guest Speaker, Faculty Council of Community Colleges: Community Colleges Face Challenges from the State's Strategy of Raising the Bar Academically
Assisted N.Y.S. Civil Service Department's Municipal Services Division with I.P.A. program
Facilitator, county government and public administration training
Oral test panel member
N.Y.S. Senate Coordinator of Senate Student Policy Forum
Redesigned and managed Senate's model legislature
Supervised staff development of educational packets for over 400 annual participants (supervised work of over 20 team members)
Designed and initiated model legislature for N.Y.S. Jaycees
Interviewer for Dr. Leigh Stelzer's (SUNY Albany) surveys on attitudes toward public education

Member, Scotia-Glenville School District Homework Committee; Member, Middle School Subcommittee.

Provided an in-service training program on county government health issues for the capital district nursing directors' association.

Hugh O'Brian Youth Foundation:
• Chaired the initial Youth Leadership Seminar, N.Y.S. East Region
• Assistant Head Counselor and Counselor, N.Y.S. South
• Judge, N.Y.S. South
• Roundtable Facilitator on careers in government

Organized the Schenectady Jaycees' County Government for a Day program; led a seminar on county government and helped students prepare a model county legislative meeting.

Instructor for the N.Y.S. Jaycees at the annual Officers' Training School; developed and conducted training programs for Jaycees on project management, chapter management, public speaking, and press release writing (including the development of training manuals).

Research, Writing, and Related Experience:

State Legislative Publications:
• The NYS Senate Minority Conference Legislative Proposals. N.Y.S. Senate Local Fiscal Impact Note Office
• Bolstering New York State's Human Services…Any Volunteers? N.Y.S. Senate Task Force on Critical Problems; examined the impact of volunteers and incentives for increasing volunteer efforts in NYS
• The Economic Eclipse of New York State…The Shadow is Passing. N.Y.S. Senate Task Force on Critical Problems; reviewed NYS's economic progress and proposals for strengthening the state's economy
• The 1980 Census…Where Have All the People Gone? N.Y.S. Senate Task Force on Critical Problems; publication helped uncover a half-million undercount in New York's 1980 Census total and save $50-100 million in federal local aid per year through the stimulation of timely local efforts
• The Governor's Promises vs. The Governor's Actions (editor). N.Y.S. Senate Research Service
• Deregulation…An Idea Whose Time Has Come. N.Y.S. Subcommittee on Government Regulation of Business
• Government Costs Can Be Cut. N.Y.S. Subcommittee on Government Cost-Saving Systems
• Federal Aid…Where Does New York State Stand? N.Y.S. Senate Standing Committee on Taxation
• Wrote numerous short papers on state issues (Issues in Focus) and annual issue-area summaries (Summary of Legislation) for the N.Y.S. Senate Research Service

Journalistic Work:
• Un-healthy debate: Attacks on Canada's system are unwarranted scare tactics. The Sunday Gazette Op-Ed piece
• Losing freedom to gain security: Homeland efforts go overboard in name of protecting citizens. The Sunday Gazette Op-Ed piece
• Party politics at jail. The Sunday Gazette Op-Ed piece
• Let county take lead on development. The Sunday Gazette Op-Ed piece
• Downtown vital part of county. The Sunday Gazette Op-Ed piece
• MERGER DEBATE: Streamlining services feasible, necessary for revitalization. The Sunday Gazette Op-Ed piece
• SPORTS: Schenectady Rugby. Capital District Insight (co-author)
• Wrote numerous sports stories and drafted over 1,000 press releases for politics, government, and community service organizations
• Ohio Wesleyan University journalism experience: reporter, OWU Transcript (student newspaper); editor-reporter, Fastimes (faculty-staff newsletter)

Mathematics:
• Examining Certain Open Questions in Number Theory, published on http://www.pdfcoke.com/Al%20deAprix; a compilation of mathematics and science articles organized as a developing e-book, 2011-12; articles to date include:
o Ulam's Spiral Square Patterns Explained (reached #1/19,900 on Yahoo in 2/12)
o Goldbach's Conjecture: Analysis and Validation (reached #20/199,000 on Yahoo in 9/12)
o Generating Pythagorean Triples Using Only a (reached #28/141,000 on Yahoo in 9/12)
o The Pattern of Pythagorean Triples (reached #28/141,000 on Yahoo in 9/12)
o Validating Hardy and Littlewood's Conjecture F (reached #1/13,800 on Yahoo in 9/12)
o Xeno's Paradoxes: Discovering Motion's Signature at a Dimensionless Instant of Time
• Internet mathematics articles had been accessed in a minimum of 56 countries as of 11/1/12

Fiction:
• Annually contributed poetry to Rhythms, the Schenectady County Community College literary publication
• Write poetry and short stories

Science:
• Developed a coherent theory explaining why plate tectonics ceased on Venus, leading to its thick CO2 atmosphere, and its importance to SETI investigations; published on http://pdfcoke.com/Al%20deAprix, 2012
• Created a series of short pieces on unusual or interesting observations for publication on http://www.pdfcoke.com/Al%20deAprix
• Disproved Xeno's arrow (motion) paradox by discovering that motion leaves a signature on dimensionless instants of time; published on http://pdfcoke.com/Al%20deAprix, 2012

Art:
• Photography
• Create pencil-sketch art pieces

Community Service Experience:

Jaycees:
• President, vice president (twice), and treasurer, Schenectady Jaycees; led the chapter to major programming awards
• District director (three times), state programming chairman (four times), state treasurer, and state vice president, New York State Jaycees

Schenectady County Christmas Bureau:
• Rescued the program and extended its service efforts to over 4,000 persons annually
• General chairman and/or president for 14 years

Human Services Planning Council:
• Chaired various programming committees, studies, and community conference groups

Freedom Park Foundation:
• Board member
• Responsible for program collections

Center Glenville United Methodist Church:
• Member, operations board
• Member, Board of Trustees

Schenectady Salvation Army Advisory Board:
• Chairman and vice chairman
• Capital Drive fundraising cabinet

Volunteered with the Boy Scouts, Scotia-Glenville track and cross country, the Alcoholism Council, the United Way, and the YMCA.

Personal:
• Recreational activities include running (20 miles per week); camping; snorkeling; creative writing; sketching and photography; number theory; and reading science fiction stories and history, science, and astronomy articles
• Serve as a school track official in the Capital District
• Sports participation has included track, rugby, and softball
