1.5 is the Midpoint Between 0 and Infinity

Summarized using AI


Peter Zhu • November 13, 2022 • Houston, TX

In this RubyConf 2022 talk, Peter Zhu explores why 1.5 is the midpoint between 0 and infinity in the context of Ruby's binary search. The discussion begins with a question posed by a co-worker: why is 1.5 the first number Ruby's binary search inspects when searching between 0 and infinity? He shows that both binary search and floating-point number representation are key to understanding this behavior.

Key Points Discussed:
- Binary Search Basics: Zhu explains how binary search works, contrasting it with linear search. In binary search, two cursors define a search window, and the midpoint of this range allows for halving the number of elements considered each time, significantly increasing efficiency compared to linear search.
- Floating-Point Representation: The talk delves into the intricacies of IEEE 754 floating-point numbers, focusing on how these numbers are represented internally, particularly in 32-bit and 64-bit formats. This information sets the foundation for understanding searches within infinite ranges.
- Connecting Binary Search and Floating Points: Zhu emphasizes that even though floating-point numbers can represent very small and very large values, binary search can still be applied because of an ordering property of their representation: for non-negative floats, the layout of the exponent and significand bits means that a larger floating-point value always has a larger bit pattern, so the values can be compared as integers.
- Midpoint Calculation: By reinterpreting the bit patterns of floating-point numbers (such as 0 and infinity) as integers, Zhu shows that 1.5 emerges naturally as the midpoint: averaging the integer representations of 0 and infinity and reading the result back as a float yields exactly 1.5.

The talk concludes by reinforcing the idea that 1.5 is indeed the logical midpoint between 0 and infinity in the context of Ruby and floating-point arithmetic, drawing attention to how Ruby's C implementation reinterprets float bits as integers, while encouraging further exploration of floating-point behavior.

Overall, Zhu merges mathematical concepts with practical programming, offering insight into how Ruby performs binary search over floating-point ranges and prompting the audience to explore floating-point behavior more deeply.

1.5 is the Midpoint Between 0 and Infinity
Peter Zhu • November 13, 2022 • Houston, TX

What’s the midpoint between 0 and infinity? Well, the answer differs depending on whether you are asking a mathematician, philosopher, or a Ruby developer. I’m not a mathematician or a philosopher, but I am a Ruby developer, so I can tell you that 1.5 is the midpoint between 0 and infinity. In this talk, we'll discuss the binary search algorithm, IEEE 754 floating-point numbers, and a clever trick Ruby uses to perform binary search on floating-point ranges.

RubyConf 2022

00:00:00.000 Ready for takeoff.
00:00:18.539 Hello, RubyConf! My name is Peter. I'm on the Ruby core team and I'm a senior developer at Shopify based out of Ottawa, Canada. Today, I'll be talking about why 1.5 is the midpoint between 0 and infinity.
00:00:30.599 So, what do I mean when I say that 1.5 is the midpoint between 0 and infinity? It all started when a co-worker asked this question on Slack: Why is 1.5 the first number Ruby's binary search inspects when searching between 0 and infinity?
00:00:42.300 Attached to this question was a code snippet. At first, I found the question and the code snippet a bit odd. How can you search over an infinite range, and how is 1.5 related to this in any way? Well, let's first take a look at this short Ruby code.
00:01:01.559 First, it creates a range between 0 and infinity. It then calls the Range#bsearch method on that range, outputting every number that it inspects. The block returns a Boolean value indicating whether the floating point number passed in is greater than 42 or not. Essentially, we're looking for the first floating point number that is strictly greater than 42.
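The original snippet isn't reproduced in the transcript, but based on this description it would look roughly like the following (a reconstruction, not the exact code from the question):

    # Roughly the snippet described: binary search from 0 to infinity,
    # printing every value inspected, looking for the first float > 42.
    (0..Float::INFINITY).bsearch do |x|
      puts x
      x > 42
    end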
00:01:10.560 Now let’s run this script and see the output. We see that a lot of numbers are inspected, with the first number being 1.5, followed by a lot of very large numbers that seem random, and then we see the numbers converging closer and closer to 42.
00:01:24.659 So, how does Range#bsearch figure out which numbers to inspect here? To understand this output, we first have to understand how binary search works and also how floating point numbers work.
00:01:46.980 Before we talk about how binary search works, let's discuss how linear search works. It's really simple: we inspect every element in the list and compare whether or not it's equal to the element we're looking for. For example, attempting to find the number 15 in an array of integers, we simply scan from the start of the array and check if each element is equal to 15, the element we're searching for, until we find the integer 15.
00:02:11.460 Now let’s see how binary search works. Before we can do that, we need to have a restriction on the list that we're searching over; it must be sorted in either ascending or descending order. Let’s see an example using the same array of integers but sorted in ascending order, again trying to find the integer 15.
00:02:29.040 To use binary search, we need two cursors: one low cursor that points to the first element of the array and one high cursor that points to the last element of the array. The two cursors define the window we are searching over. Using these two cursors, we calculate a third cursor called the mid cursor, which points to the element at the midpoint.
00:02:45.060 We compare the value of the element at the midpoint to 15, the number we are searching for. Since 48 is greater than 15, we move the high cursor to the mid cursor. In this single comparison, we have halved the number of elements we are searching over. We know that 15 must be in the window between the low and the high cursors. We repeat this process. Using the low and the high cursors, we calculate a mid cursor at the index halfway between them, which points to the integer 12. We compare 12 to 15; since 12 is less than 15, we move the low cursor to the mid cursor.
00:03:15.200 Once again, this comparison halves the number of elements we are searching over. We proceed again and find the mid cursor at the integer 15. We compare 15 to 15; since they're equal, we have found the element we are searching for. While I could probably have explained linear search five times over in the time it took to explain binary search, binary search is worth it: if the list is sorted, it is much faster, especially when searching over a large number of elements.
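For reference, here is a minimal Ruby sketch of binary search over a sorted array. It is a standard variant that moves the cursors past the midpoint, whereas the walkthrough above moves them onto it, but the idea is the same:

    # Binary search over a sorted array: each comparison halves the search window.
    def binary_search(sorted, target)
      low  = 0
      high = sorted.length - 1
      while low <= high
        mid = (low + high) / 2
        case sorted[mid] <=> target
        when 0  then return mid      # found the target
        when -1 then low  = mid + 1  # midpoint too small: search the upper half
        else         high = mid - 1  # midpoint too large: search the lower half
        end
      end
      nil # not found
    end

    binary_search([3, 7, 12, 15, 21, 34, 48, 60], 15)  #=> 3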
00:03:39.240 Take a look at this graph. The horizontal x-axis shows the number of elements we are searching over, while the vertical y-axis shows the number of comparisons we have to make in the worst case. For the computer scientists here, you might be familiar with Big O complexity, which we are graphing here. For linear search, we must perform a number of comparisons equal to the number of elements we are searching over in the worst case. This is graphed as the red line. In contrast, for binary search, each comparison halves the number of elements we consider, resulting in a number of comparisons equal to the logarithm base two of the number of elements in the worst case; this is graphed as the blue line.
00:04:14.760 We can see that the blue line is significantly lower than the red line. For example, for an array of 100 elements, a linear search would require at most 100 comparisons, whereas binary search requires at most seven comparisons.
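As a quick sanity check of those numbers in IRB:

    # Worst-case comparisons for 100 elements: linear search needs up to 100,
    # while binary search needs only ceil(log2(100)).
    Math.log2(100).ceil  #=> 7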
00:04:49.560 Now that we've looked at binary search over a sorted array of finite size with definite endpoints, you might wonder how we can perform binary search over floating point numbers. There seems to be an infinite number of floating point numbers. Furthermore, when we're searching between 0 and infinity, it raises the question: how is infinity a finite endpoint?
00:05:07.560 To understand how we can perform binary search in a range between 0 and infinity, we need to first understand how floating point numbers are represented internally, specifically focusing on IEEE 754 floating point numbers. All modern CPU architectures, such as x86, ARM, and RISC-V, use IEEE 754 floating point numbers internally, so what I am about to show you applies to all modern computers.
00:05:23.520 For simplicity, I'll discuss 32-bit floating point numbers, also known as single precision floating point numbers. However, Ruby uses 64-bit floating point numbers internally, known as double precision floating point numbers. Both types function similarly; the difference lies in the representation precision, with 64 bits providing more precision than 32 bits.
00:05:44.880 An IEEE 754 floating point number has three parts. While I could present this with a formula, I find that often to be unintuitive and hard to grasp. So instead, let's break down these components simply. First, we have the sign, which is pretty self-explanatory; it’s a single bit that is zero when the number is positive and one when it’s negative.
00:06:15.660 The sign is followed by the exponent, which is 8 bits in size, allowing a value range from 0 to 255. Intuitively, the exponent indicates the range into which this floating point number falls. Ranges start at powers of two and end at the subsequent powers of two. Using powers of 2 for ranges allows us to represent both very large and very small numbers in magnitude.
00:06:39.900 This table shows some of the exponent values in decimal on the left and the corresponding ranges on the right. Note that the square bracket on the left indicates that the starting point is inclusive, while the round bracket on the right shows that the endpoint is exclusive. Ranges start at 2 to the power of the exponent minus 127 and end at the next power of two. We subtract 127 to capture ranges that represent numbers with magnitudes between 0 and 1.
00:07:05.880 For instance, an exponent value of 125 starts at 2 to the power of -2, which is 0.25, and ends at 2 to the power of -1, which is 0.5. Now that we know the range of the floating point number from the exponent part, the significand part tells us precisely where the number resides within that range.
00:07:30.780 Using 23 bits, the significand can range between 0 and 2 to the power of 23 minus 1. This divides the range into approximately 8.4 million parts. The significand determines how deep into the range the floating point number lies, effectively splitting the range into 8.4 million equal sections.
00:08:03.780 It's important to note that because the significand can only divide the range into a finite number of parts, certain numbers cannot be represented precisely as floating point numbers. You might observe this by trying to add 0.1 and 0.2 in IRB or a Ruby script; the result will not be exactly 0.3.
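You can see this for yourself in IRB:

    0.1 + 0.2         #=> 0.30000000000000004
    0.1 + 0.2 == 0.3  #=> false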
00:08:41.160 I'll leave the reason for this discrepancy as an exercise for you to contemplate after this talk. Now let's demonstrate converting a floating point number represented in binary into a decimal number.
00:09:11.880 Here we see the sign bit is zero, indicating that this is a positive number.
00:09:13.500 Next, converting the exponent into a decimal number gives us 130. This corresponds to the range starting at 2 to the power of 3 (8) and ending at 2 to the power of 4 (16). The significand corresponds to a decimal number of approximately 2.6 million.
00:09:37.980 To determine how far into the range between 8 and 16 this floating point number lies, we divide this number by 2 to the power of 23, which is about 8.4 million. This gives us 0.3125, indicating that we're 31.25% into the range between 8 and 16. This allows us to calculate that the number represented here is 10.5.
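Here is a small Ruby sketch of that decoding, using the bit pattern implied by the numbers above (sign 0, exponent 130, significand 2,621,440):

    # Decode the 32-bit example by hand.
    bits        = 0b0_10000010_01010000000000000000000
    sign        = bits >> 31                  #=> 0 (positive)
    exponent    = (bits >> 23) & 0xFF         #=> 130
    significand = bits & 0x7FFFFF             #=> 2_621_440
    range_start = 2.0**(exponent - 127)       #=> 8.0, so the range is [8, 16)
    fraction    = significand.fdiv(2**23)     #=> 0.3125, i.e. 31.25% into the range
    range_start + fraction * range_start      #=> 10.5
    [10.5].pack("f").unpack1("L") == bits     #=> true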
00:10:03.420 Now that you've seen how binary search works and how IEEE 754 floating point numbers function, let's explore how these two concepts interconnect. Before we link binary search and floating point numbers, we must examine how the special floating point numbers, 0 and infinity, are represented.
00:10:20.820 Zero is simply represented by all bits being zero. Infinity, on the other hand, is represented by the exponent being all ones while the significand is all zeros. It's important to note that this is the largest bit pattern that still represents a number, since IEEE 754 treats an exponent of all ones with a non-zero significand as 'not a number' (NaN).
00:10:45.960 All right, let's examine our initial two numbers and try a rather curious approach: what if we read these floating point numbers as if they were 32-bit integers? For the floating point number representing zero, it would simply be zero as an integer.
00:11:14.160 For the floating point number representing infinity, it would equate to a number nearing 2.1 billion in the integer representation. It’s crucial to clarify that what we are doing here is directly accessing the bits of the floating point number as if it were an integer.
00:11:40.680 This differs from casting a floating point number to an integer; you cannot cast a floating point number representing infinity into an integer, as integers have no concept of infinity. In Ruby, attempting to convert infinity into an integer results in an error, while in C it can lead to undefined behavior.
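In Ruby you can imitate this bit reinterpretation with pack/unpack (shown with 32-bit floats to match the examples above; this is only an illustration, not what the interpreter does internally):

    # Read the bits of a 32-bit float as a 32-bit unsigned integer.
    # This is reinterpretation, not conversion: Integer(Float::INFINITY) raises an error.
    def float32_bits(f)
      [f].pack("f").unpack1("L")
    end

    float32_bits(0.0)              #=> 0
    float32_bits(Float::INFINITY)  #=> 2139095040, roughly 2.1 billion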
00:11:59.940 Now let's calculate the integer that sits at the midpoint between 0 and approximately 2.1 billion. Summing 0 and roughly 2.1 billion and halving the result gives an integer of approximately 1.07 billion.
00:12:20.760 This integer appears as follows when we convert it into binary. Taking a closer look at this binary number, let's convert this integer back into a floating point number.
00:12:44.460 The sign here is zero, indicating this is a positive number. The exponent yields a value of 127 in decimal, which tells us that the range starts at 2 to the power of 0 (1) and ends at 2 to the power of 1 (2). The significand gives approximately 4.19 million in decimal.
00:13:06.360 Dividing it by 2 to the power of 23, the 8.4 million number, gives 0.5. Essentially, this means we're precisely halfway into the range between 1 and 2, which lets us determine that the number represented here is 1.5.
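Using the float32_bits helper sketched earlier, the whole round trip fits in two lines:

    mid_bits = (float32_bits(0.0) + float32_bits(Float::INFINITY)) / 2  #=> 1069547520
    [mid_bits].pack("L").unpack1("f")                                   #=> 1.5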
00:13:29.880 Ladies and gentlemen, there we have it: 1.5 is the midpoint between 0 and infinity.
00:13:46.320 Now, that was a lot to take in! Let’s recap what we did. Initially, we took the floating point endpoints of our range, which were the floating point numbers 0 and infinity, and we interpreted these floating point numbers back as integers, valued at 0 and approximately 2.1 billion.
00:14:06.900 This process differs from casting to an integer because we directly read the bits of the floating point number as if it were an integer. We subsequently found the midpoint: by adding these two numbers and dividing by two, we derived an integer of approximately 1.07 billion.
00:14:29.700 Next, we read this integer midpoint back as a floating point number, leading us to conclude that the resultant floating point number is precisely 1.5.
00:14:57.960 The same process explains why the next number inspected, the midpoint between 1.5 and infinity, is around 1.67 times 10 to the power of 154, an immense number.
00:15:17.280 However, it's noteworthy that Ruby uses 64-bit floating point numbers internally, so to compute this manually, you would also need to use 64-bit floating point numbers rather than the 32-bit floating point numbers featured in previous examples.
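The same check can be done with 64-bit doubles, which is what Ruby actually uses:

    # 64-bit version of the trick: the midpoint between 1.5 and infinity.
    lo = [1.5].pack("d").unpack1("Q")
    hi = [Float::INFINITY].pack("d").unpack1("Q")
    [(lo + hi) / 2].pack("Q").unpack1("d")  #=> roughly 1.676e+154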
00:15:37.560 Now you might wonder, why does this technique work? Why can we read floating point numbers as integers and then perform a binary search over them? Recall that binary search requires the list being searched to be sorted in either ascending or descending order.
00:15:55.560 Does reading floating point numbers as integers preserve that ascending order? In simpler terms, does a larger floating point number always correspond to a larger integer when its bits are interpreted as an integer? Let's revisit the parts of a floating point number.
00:16:21.840 We know that a floating point number with the same sign and exponent, but a larger significand, will always represent a larger floating point number by magnitude. Thus, a larger significand will lead to a larger integer when read back as an integer.
00:16:47.760 What occurs if we increase the exponent? Since the exponent determines the range that the floating point number occupies, and because ranges do not overlap, a larger exponent results in a larger floating point number by magnitude, consequently yielding a larger integer when read as an integer.
00:17:05.880 Therefore, this aligns with the requirements of binary search, validating the technique of binary searching over floating point numbers.
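A quick way to convince yourself of this ordering property in Ruby (an illustrative spot check, not a proof):

    # Larger non-negative float => larger bit pattern when read as an integer.
    floats = [0.0, 1.0e-10, 0.25, 1.5, 42.0, 1.0e30, Float::INFINITY]
    bits   = floats.map { |f| [f].pack("d").unpack1("Q") }
    bits == bits.sort  #=> true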
00:17:27.180 Let's review the Ruby code that implements the Range#bsearch method. This code resides within the Ruby source code and is written in C, making it less readable than Ruby code, which is why I've simplified it and omitted certain parts for clarity.
00:17:47.460 The function range_bsearch, which implements the Ruby method, accepts one argument: the range over which we are binary searching. We then extract the two endpoints, the beginning and the end.
00:18:05.760 When at least one of the two endpoints is a floating point number, we obtain the C floating point value from the Ruby float object. A key aspect of this line is the call to the double_as_int64 function; let's explore its implementation.
00:18:28.680 This function takes a double precision floating point number and returns a 64-bit integer. It first initializes a local variable called convert, composed of a union between a 64-bit integer and a double precision floating point number.
00:18:46.560 If you are not acquainted with unions in C, they let you store any one of their member data types at the same memory location. You can only store one of these data types at a time, not all of them, and the size of the union corresponds to the size of the largest member.
00:19:01.860 Since both types are 64 bits in size, the size of this union is also 64 bits, or 8 bytes. Returning to the double_as_int64 function, we assign the absolute value of the floating point number, computed with the fabs function, to the double member of the union.
00:19:22.260 We use the absolute value due to the differences in how integers and floating point numbers handle negative numbers. Integers represent negative numbers in two's complement format, while floating point numbers only set the sign bit.
00:19:44.280 For example, this means that floating point numbers such as 1.5 and -1.5 will differ by just a single bit, the sign bit, whereas integers like 10 and -10 will differ by more than one bit. While we won't dive into the details of two's complement today, you can explore it if interested.
00:20:06.540 To avoid this complication, we use the absolute value here.
00:20:23.640 We then read the union back as an integer and negate it if the original number was negative. In the prior line, we stored the floating point number into the union, and here we read it back as an integer; this avoids the need for casting.
00:20:40.920 A cast would attempt to convert the floating point number into an integer instead of directly reading its bits as one. Returning to the range_bsearch function, we then perform binary search over the 64-bit integers derived from the floating point endpoints.
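For illustration, here is a Ruby sketch of the behaviour just described for double_as_int64 (the real implementation is C and uses a union; pack/unpack stands in for the union here):

    # Mimic double_as_int64: read the bits of |d| as a 64-bit integer, then reapply the sign.
    def double_as_int64(d)
      i = [d.abs].pack("d").unpack1("Q")  # bits of the absolute value, read as an integer
      d.negative? ? -i : i                # negate on the integer side if d was negative
    end

    double_as_int64(1.5)   #=>  4609434218613702656
    double_as_int64(-1.5)  #=> -4609434218613702656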
00:20:57.720 Hopefully, you’re following along and have a foundational understanding of how this code functions. If you're interested in experimenting with floating point numbers, I highly recommend visiting a site called float.exposed.
00:21:09.540 This site provides an abundance of useful information about floating point numbers, including the floating point value, the bits that represent the floating point number, the values of the sign, exponent, and significand as decimals, and the value of this floating point number when read as a hexadecimal or decimal integer.
00:21:33.000 This talk was adapted from my blog post, which you can access via the provided QR code. Feel free to reach out to me on Twitter; my handle is @peterzhu2118. If you're watching this talk recorded in the future, I hope Twitter is still operational at that time. If it's not, you can always contact me via email at [email protected].
00:21:46.920 Thank you for listening to my talk.
00:22:02.460 We still have a couple of minutes if anybody has any questions. Feel free to ask now or come talk to me after this presentation; I'll be around at the conference for the next three days.
00:22:16.699 The question being asked is why Ruby specifically does this floating point to integer conversion instead of simply binary searching over the floating point numbers directly. Right before this talk, Benoit and I actually took a look at the implementation of this function.
00:22:33.300 The answer comes from what we found in TruffleRuby's implementation of this function. Interestingly, because it's written in Ruby, it's much easier to understand, and it avoids these 'dirty tricks.' Instead, it first checks whether an endpoint is infinity, and if so, it performs the binary search between zero and the largest finite floating point number.
00:22:54.660 Since you can simply divide the largest finite floating point number by two, Ruby could have taken a similar approach. However, given that CRuby's implementation is written in C, they opted for this clever but less readable bit trick, which is what produces the seemingly arbitrary numbers you saw in the output.
00:23:34.500 As we saw earlier, infinity is represented with the sign bit set accordingly, the exponent bits all ones, and the significand all zeros.
00:23:56.280 A representation of 'Not a Number' also requires the exponent to be all ones, but with a non-zero significand, which means there are multiple ways to express it: 2 to the power of 23 minus 1 possible representations of 'Not a Number'.
00:24:18.900 The question was why there are roughly 8 million different ways of representing 'Not a Number'.
00:24:30.780 The answer to that remains a mystery of the IEEE specification.
00:24:41.250 Lastly, the question was whether I think 'Not a Number' should be treated as a falsy value in Ruby. My answer is no; I would keep the falsy values to just false and nil. There is no need for additional falsy values.
00:25:09.000 Thank you very much!