These days, floating-point essentially always means IEEE 754; or, if not IEEE 754, then a slight variant of it. (Note that I'm counting a mere variation in parameters as no variation at all; those aren't interesting.) But IEEE 754 hasn't been around forever; what did old floating point formats look like?
This isn't going to be a truly thorough explanation of that question, since all I've done is poke around the web for a few hours, but I want to share what I've found. Note that some of the documentation I managed to turn up was pretty incomplete, so it's possible I've misunderstood some things. Any corrections are appreciated!
Note I'm not making a list of all the different formats; I just want to note down the ways they vary. I'm going to restrict attention to binary floating point formats, and leave out decimal and hexadecimal ones. These have less interesting variation to them; they necessarily lack a "hidden bit", and decimal has fewer reasonable ways to express negatives.
So, OK. First off, let's review (briefly) how IEEE 754 works. You've got sign, exponent, mantissa, in that order. Sign is 0 for positive, 1 for negative; the overall representation is sign-magnitude. Exponent is not read as signed, but instead can take on negative values due to being biased; if it has length k, it's biased by 2^(k-1) - 1. Mantissa is read with an implicit "1." before it (the "hidden bit"); this causes numbers (other than 0 and NaN) to have a unique representation. (Yes, yes, technically +0 and -0 are different, but I'm just going to gloss over this and say that 0 has two representations.) The maximum and minimum exponents are special. The minimum exponent is for denormals, meaning that the implicit "1." becomes a "0." but the exponent is bumped up by 1. This also includes zero (which could technically be considered a denormal but usually isn't), which is represented by all zeroes. The maximum exponent is for ±∞ (if the mantissa is 0) and NaN (if the mantissa is nonzero). (Again, technically different NaNs are allowed to differ in meaning, and there's a distinction between quiet and signalling NaNs, but I'm just going to ignore all this and instead just say that these all represent the value NaN.)
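To make all that concrete, here's a quick Python sketch of decoding a 32-bit IEEE 754 single-precision bit pattern by hand (the helper name is mine; the field widths are those of single precision: 8 exponent bits, 23 mantissa bits, bias 127):

```python
def decode_binary32(bits: int) -> float:
    """Decode a 32-bit IEEE 754 single-precision pattern by hand."""
    sign = (bits >> 31) & 1
    exponent = (bits >> 23) & 0xFF           # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF               # 23-bit mantissa
    bias = (1 << 7) - 1                      # 2^(k-1) - 1 = 127

    if exponent == 0xFF:                     # maximum exponent: infinity or NaN
        if mantissa == 0:
            return float('-inf') if sign else float('inf')
        return float('nan')
    if exponent == 0:                        # minimum exponent: zero and denormals
        # implicit "0." instead of "1.", and the exponent is bumped up by 1
        value = (mantissa / 2**23) * 2**(1 - bias)
    else:
        value = (1 + mantissa / 2**23) * 2**(exponent - bias)  # hidden "1."
    return -value if sign else value
```

For example, 0x3F800000 decodes to 1.0 and 0xC0000000 to -2.0, and all-zero bits give 0.0, matching what your hardware's float unit would say.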
There are also some modern formats that are not quite IEEE 754 (again, beyond mere variation in the parameters) but are clearly imitating it, and basically stick to the above. Some have a value for the bias other than 2^(k-1) - 1. And some are for unsigned numbers and so don't include a sign bit! (Thus giving 0 a unique representation.)
But all this is a fair amount of bells and whistles, and so older formats typically don't have all this. Typically in such formats, the maximum exponent is not special in any way. In formats without a "hidden bit", the minimum exponent will typically also not be special. In formats with a "hidden bit", the minimum exponent is typically reserved for 0, with the value being interpreted as 0 regardless of the mantissa. This means that 0 has lots of representations! Some formats do specify that you're not supposed to use the weird representations of 0, only the one with sign 0 and mantissa 0...
(Note that it's not the case that in all formats, an all-zeroes bit pattern represents the number zero! Generally this is the case, but we'll discuss an exception later.)
OK. With all that out of the way, how do formats vary? Well, obviously, you could put the sign, exponent, and mantissa in a different order. I've found examples of exponent-sign-mantissa and sign-mantissa-exponent. Notionally any order is possible, but I haven't found any examples with the sign after the mantissa! (For some of these formats, as you'll see, that would be a bit awkward.)
Now I should warn you, I say "sign bit", but, well... not all of these formats are sign-magnitude! We'll get back to that in a bit. Regardless, we can still identify a sign bit, so, I'm not going to bother adding caveats to the above. (None of these old formats I've seen are unsigned... unsigned floating point seems to be a modern thing.)
But the big question with a binary floating point format is, is there a hidden bit or not? This is going to make a fundamental difference to the rest of the format. Formats without a hidden bit prepend "0." to the mantissa rather than "1."; this means that representations are very much not unique, although of course the representation where the mantissa starts with a 1 (for nonzero numbers) is (usually -- see below) the normalized one. (Some formats do specify that all numbers must be normalized, including specifying that 0 should be represented by all zeroes.)
No-hidden-bit formats have quite a bit of freedom to vary. Note that they eliminate any need for the minimum exponent to be treated specially, since an all-zero mantissa is sufficient to represent 0, regardless of the exponent.
So, first question -- how is the exponent represented? It could be unsigned-with-bias, like IEEE. However, older formats that do this typically use a bias of 2^(k-1), rather than IEEE's 2^(k-1) - 1. (Although some use other weird values instead.) But since there's less need here for the minimum exponent to be represented by all zeroes, we don't have to use unsigned-with-bias. Instead, the exponent can be represented using 2's-complement! (Notionally, it could also be represented using 1's-complement or sign-magnitude, but I haven't seen any examples that do that.)
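The two exponent encodings can be sketched as little decoders (function names are mine; k is the width of the exponent field in bits):

```python
def exponent_biased(raw: int, k: int) -> int:
    """Decode a k-bit exponent stored as unsigned-with-bias 2^(k-1)."""
    return raw - (1 << (k - 1))

def exponent_twos_complement(raw: int, k: int) -> int:
    """Decode a k-bit exponent stored as 2's-complement."""
    return raw - (1 << k) if raw >= (1 << (k - 1)) else raw
```

Note how they carve up the raw values differently: with k = 8, a raw value of 0 means -128 under the bias reading but 0 under the 2's-complement reading, and a raw value of 255 means 127 under the bias reading but -1 under 2's-complement.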
The MIL-STD-1750A format is interesting here. It lacks a hidden bit and uses 2's-complement for the exponent. It also specifies that all numbers must be normalized... and this means that the number 0 must be represented by all 0 bits. But this means that the canonical representation of 0 has an exponent that isn't the minimum! Well, that's fine, I guess.
Then there's the question of how negative numbers are represented. Obviously sign-magnitude is one option... but you can also combine sign and mantissa, and read it as 2's-complement or 1's-complement. (So, a sign bit of 1 tells you to transform the mantissa appropriately before prepending "0.", in addition to telling you to interpret it as negative. Note that in formats that use 2's-complement for this, the sign and mantissa should be considered to be combined as a single 2's-complement value, so a sign bit of 1 and a mantissa of all zeroes does not mean -0. All the ones I found that did this put the sign next to the mantissa for obvious reasons.) Using 2's-complement here will obviously get rid of the ±0 issue... but since you're not using a hidden bit, you'll still get nonunique representations (including of 0) in lots of other ways.
Let's look at MIL-STD-1750A again. It has sign and mantissa combined like this, using 2's-complement. It also requires all numbers to be normalized. For positive numbers that means the mantissa must start with a 1... but for negative numbers, it means the mantissa must start with a 0! Or in other words, the high bit of the mantissa must be the opposite of the sign bit. Obviously, this would also be the normalization condition for a format using 1's-complement.
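Here's a decoding sketch for the 32-bit MIL-STD-1750A single-precision layout, assuming I've read the standard right: a 24-bit 2's-complement fractional mantissa (sign bit included) followed by an 8-bit 2's-complement exponent.

```python
def decode_1750a(bits: int) -> float:
    """Decode a 32-bit MIL-STD-1750A float (sketch): top 24 bits are a
    2's-complement fraction in [-1, 1), low 8 bits are a 2's-complement
    exponent. Normalized iff the bit after the sign differs from the
    sign (or the whole word is zero)."""
    mantissa = bits >> 8                     # sign + fraction, read together
    exponent = bits & 0xFF
    if mantissa >= (1 << 23):                # interpret mantissa as 2's-complement
        mantissa -= (1 << 24)
    if exponent >= (1 << 7):                 # interpret exponent as 2's-complement
        exponent -= (1 << 8)
    return (mantissa / 2**23) * 2**exponent
```

So 0x40000000 decodes to 0.5, 0x80000000 to -1.0, and the all-zero word to 0.0 -- even though, as noted above, its exponent field (0) is nowhere near the minimum (-128).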
But there's one more way of representing negative numbers... rather than taking the 1's-complement or the 2's-complement of the mantissa, you can also take the complement of the entire representation! As best I can tell, some old formats really do this, but it's possible I've misunderstood. Obviously, the ones that do this all put the sign bit first, and the 2's-complement one I saw of course used the order sign-exponent-mantissa.
So those are formats without hidden bits. Formats with hidden bits don't seem to vary to the same extent. Because of the hidden bit, using anything other than sign-magnitude to handle negative numbers doesn't make as much sense. And because we want the minimum exponent to be special and used in representing 0, we probably want to stick with a biased representation for the exponent, rather than a signed one.
One thing worth noting about old hidden-bit formats, though, is that they're often explained as prepending ".1" rather than "1."; but this is the same as just adjusting the bias by 1. So, if a format says it uses ".1", with bias 2^(k-1), we can also just think of this as using "1.", but with a bias of 2^(k-1) + 1. So, remember, IEEE uses 2^(k-1) - 1, but older formats typically use 2^(k-1), and if they're hidden-bit, may effectively use 2^(k-1) + 1. (Although, again, some use other seemingly-arbitrary values.)
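You can check this equivalence with a bit of arithmetic: reading the stored fields as ".1m" with bias B gives exactly the same value as reading them as "1.m" with bias B + 1, since 0.1m in binary is just half of 1.m. A quick sketch (names and parameters are mine, not from any particular format; e is the stored exponent, m the stored mantissa of p bits):

```python
def value_point_one(e: int, m: int, p: int, bias: int) -> float:
    """Read the mantissa as ".1m" (implicit leading ".1")."""
    return (0.5 + m / 2**(p + 1)) * 2**(e - bias)

def value_one_point(e: int, m: int, p: int, bias: int) -> float:
    """Read the mantissa as "1.m" (implicit leading "1.")."""
    return (1 + m / 2**p) * 2**(e - bias)
```

Plugging in any e and m, value_point_one(e, m, p, B) equals value_one_point(e, m, p, B + 1): the factor of 1/2 from the ".1" reading is absorbed by bumping the bias.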
As mentioned above, old hidden-bit formats typically don't have denormals; anything with that minimum exponent is interpreted as 0. Some of them, though, might have, like, bad denormals? Where the minimum exponent causes the implicit "1." to be replaced by "0.", but doesn't adjust the exponent by 1 like true denormals? I'm not really clear on whether this is actually a thing that occurs. Such hardware probably doesn't ever produce such denormals, even if perhaps it might be able to accept them.
So that doesn't seem to leave a lot of room for hidden-bit formats to vary, right? Well, it doesn't... if you want to abide by those constraints. You don't have to, you know! And so that brings us to the one format I found that truly does not fit into any of the above molds: the old Texas Instruments floating-point format.
This one is clever. It's also confusing. So, to start, it's a hidden-bit format, with the minimum exponent being special for representing 0; however, it uses 2's-complement for the exponent, rather than a biased representation. That means that the number 0 is not represented by all zero bits in this format, but rather (since the exponent goes first) by a 1 followed by all zeroes! Quite unusual.
However what makes this one so different is its handling of negative numbers. See, it's a hidden-bit format, but it's not sign-magnitude; rather it uses 2's-complement. (So, there'd be no ±0 issue, except of course that once again anything with the minimum exponent represents 0.) The way the documentation explains it is that, if the sign bit is 0, then you prepend an implicit "01."; but if the sign bit is 1, you prepend an implicit "10.", and interpret the whole thing using 2's-complement.
Now if your sign bit is 1 and the mantissa is not all zeroes, you could interpret that as just saying: take the 2's-complement of the mantissa, prepend "1.", and interpret the result as negative. However, when the sign bit is 1 and the mantissa is all zeroes, that interpretation no longer works! Instead, um, you have to prepend "1.", double, and interpret as negative? I have to assume the consistent use of 2's-complement made it easier to implement, and it's certainly clever, but, like I said, confusing.
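Putting the TI scheme together as a decoding sketch (the function and its parameterization are mine; field widths vary by device, so I'm taking an already-decoded exponent and a p-bit mantissa, and leaving out the minimum-exponent-means-zero check). The trick is that under the "10." reading, the leading 1 carries a 2's-complement weight of -2:

```python
def decode_ti(sign: int, exponent: int, mantissa: int, p: int) -> float:
    """Sketch of the TI hidden-bit 2's-complement decoding described
    above. exponent is already decoded from 2's-complement; p is the
    mantissa width in bits. Minimum-exponent check for zero omitted."""
    if sign == 0:
        # implicit "01.": an ordinary hidden bit
        return (1 + mantissa / 2**p) * 2**exponent
    # implicit "10.": in 2's-complement the top bit weighs -2,
    # so the significand is mantissa/2^p - 2, in [-2, -1)
    return (mantissa / 2**p - 2) * 2**exponent
```

Note how the awkward case falls out automatically: sign 1 with an all-zero mantissa gives a significand of exactly -2, i.e. "prepend 1., double, negate" -- and since -2·2^e can't be written as -1·2^(e+1) in this scheme (that would need a significand of -1, which is out of range), the representation stays unique.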
(Notionally, one could also make a 1's-complement format that works this way, but I haven't seen any examples of that.)
So yeah, most of these really just follow a few possible formulas, with this Texas Instruments format standing out as the only truly unusual one I encountered. Anyway there's likely lots more out there that I didn't find, but this is what I could find on a quick skim, and I think I'm going to stop here for now!
-Harry