Magnus Numerus


From building a half-adder(1) to dealing with the overflows of life, it has been a ride. Contemplating my mathematical limits has stirred a deep-seated feeling of hopelessness. It's working. I chose to study mathematics because it was fun and made me feel dumb (a.k.a. challenged). I never expected to feel this dumb, though.

To err is human, and to persist is diabolical, according to good Augustine. Also according to him, we are basically irrelevant except in the eyes of the Lord. This made me think of the little things, the more mundane things. In this case, numbers.

When I speak of numbers, I'm not talking about Conway's constructions(2), but of the more natural ones, like integers and rationals. It was only logical, inevitable even, that I would return to old projects, such as those from the 1st semester.

The choice to look at egc was obvious. It started as a small C program that computed the sequences required in a History of Mathematics(3) assignment, which was about the methods of Egyptian mathematics. The little program computed series of unit fractions, known as Egyptian Fractions(4).

So what are these fractions? The Egyptians represented fractions only as sums of unit fractions, with a special symbol for two thirds $\left(\frac{2}{3}\right)$. A simple example, below, is how three fourths would be written.

\[ \frac{3}{4} = \frac{1}{2} + \frac{1}{4} \]

Any other fraction would be written as a sum of unit fractions with distinct denominators. Hence, for any given $\frac{N}{D} \in \mathbb{Q}$, you'd have a sum of fractions with distinct denominators $d_i \in \mathbb{N}$, such that

\[ \frac{N}{D} = \sum_i \frac{1}{d_i} \]

You can spot the overflow issues from a mile away, given enough experience.

What is an overflow? It's a consequence of how our computers represent integers. Computers store numbers in binary, that is, in 0's and 1's rather than our familiar decimal base. Representations also have to be finite, so we can compute values larger than we can store. Imagine we stored an unsigned number in just 2 bits; we'd get the following numbers:

| decimal | binary |
|---------|--------|
| 0       | 00     |
| 1       | 01     |
| 2       | 10     |
| 3       | 11     |

If we added 3 to 1 to get 4, we'd wrap around to 0. Computers normally have a maximum size for binary arithmetic in their processors, which in my case is 64 bits. This gives an unsigned maximum of 18,446,744,073,709,551,615. It looks big, but you can go bigger.

For example, the expansion of $\frac{99}{10000}$ ends in an 18-digit denominator, and the intermediate products needed to compute it push even further towards that limit. My poor egc project can't cope. My other project, egcl, handles it because of how it stores numbers, which is different and beside the scope here.

\[ \frac{99}{10000} = \frac{1}{102} + \frac{1}{10409} + \frac{1}{129477805} + \frac{1}{137468916168990000} \]

This makes me wonder if my next step is to undertake a project I've been avoiding for years. I'm lazy like Garfield, but it's a jolly good challenge. So what is it? To implement a bignum library in C: a way of storing and computing with arbitrarily large numbers, which would give egc the same capabilities as my other project. Who knows?


References

  1. Half-Adder: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Electrical.Digital.Examples.Utilities.HalfAdder.html
  2. Surreal number: https://mathworld.wolfram.com/SurrealNumber.html
  3. History of Mathematics course: https://guiadoscursos.uab.pt/ucs/historia-da-matematica/
  4. Egyptian Fraction: https://mathworld.wolfram.com/EgyptianFraction.html