I’ve recently been doing quite a bit of Lua scripting for a client wanting some PowerDNS customizations. I’ve actually grown to quite like Lua: even though it’s a very simple and quick language, you can do some quite complex programming with it reasonably straightforwardly. I think it could perhaps be compared to a stripped-down version of Perl, which is also a language I very much like because of its incredible flexibility.
Anyway, as part of this work we wanted to look up incoming IP addresses in a table of non-overlapping IP address ranges. For high performance I recommended LMDB, as I’ve used it extensively before and I know that, for all its quirks and its tendency to crash if you mishandle any aspect of its API, it is very high performance, low overhead, scales very well to multiple cores, and can do pretty much anything you ask of it.
So basically the problem was “how do we store an IP range as an indexed key in LMDB” (which is just a key-value database where all keys are b-tree indexed). In the future we may want to support IPv6, and we may also want to support IP ranges which cannot be expressed in subnet-mask representation. The solution I came up with is to store the first IP of the range in raw binary format (i.e. 4 bytes for IPv4, or 16 bytes for IPv6) as the key, and to store the end IP address as part of the value. To see whether a given IP falls within a range, you open a cursor on the table and seek to the position of the IP you are looking up. If you get a direct hit, you have found the first IP of a range and so you know it is valid. If you do not get a direct hit, you seek back to the previous entry (this is a great feature of LMDB and is found in surprisingly few indexed key-value data store APIs, even though it should be very simple to implement). You then take the value of that entry, get the end IP of the range, and check whether the requested IP lies between the start and end of the range.
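As a rough sketch of that lookup in Lua (the cursor methods seek/prev/last and the range_end helper are stand-ins of my own, not the actual API of any particular Lua LMDB binding):

    -- Sketch of the range lookup described above.  cur:seek(), cur:prev()
    -- and cur:last() stand in for whatever MDB_SET_RANGE / MDB_PREV /
    -- MDB_LAST operations your LMDB binding exposes, and range_end() is
    -- however you pull the end IP out of the stored value.

    -- Dotted-quad IPv4 address -> 4-byte binary key.  Big-endian byte
    -- order means byte-wise key comparison matches address ordering.
    local function ip_to_key(ip)
      local a, b, c, d = ip:match("^(%d+)%.(%d+)%.(%d+)%.(%d+)$")
      return string.char(tonumber(a), tonumber(b), tonumber(c), tonumber(d))
    end

    -- Returns the stored value for the range containing `ip`, or nil.
    local function lookup_range(cur, ip, range_end)
      local key = ip_to_key(ip)
      local k, v = cur:seek(key)        -- first entry with key >= ip
      if k == key then
        return v                        -- direct hit: ip is a range start
      end
      if k == nil then
        k, v = cur:last()               -- ip sorts after every start key
      else
        k, v = cur:prev()               -- step back to the previous range
      end
      if k == nil then
        return nil                      -- nothing stored before this ip
      end
      -- In range if start <= ip <= end (plain byte-wise string compares).
      if key >= k and key <= range_end(v) then
        return v
      end
      return nil
    end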
Because we wanted a very flexible and easily extensible data storage format for the values in this table, we decided to encode them all as JSON. Lua has a number of JSON decoders; lua-cjson seemed pretty quick and easy, and was also available as a pre-built Ubuntu package, so we went with that. As we were storing the key’s IP address in raw binary form, we figured it would make the code path simplest if we stored the end IP address in the same manner. So we did this, wrote a test suite with some non-public IPv4 addresses (10.x.x.x and 127.x.x.x), verified that it was all working correctly, and then launched the code.
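To illustrate the original scheme (the field names and the sample value are my own invention; the real record layout isn’t shown here), the Lua side decoded something along these lines:

    local cjson = require "cjson"

    -- Hypothetical stored value for the range 10.0.0.0 - 10.0.2.0.  The
    -- end address is the same 4-byte raw string used for the key, which
    -- the (non-Lua) encoder has escaped into the JSON text.
    local value = '{"name":"example","last":"\\u000a\\u0000\\u0002\\u0000"}'

    local entry = cjson.decode(value)
    -- entry.last is expected to come back as the 4-byte string
    -- "\10\0\2\0", directly comparable against a binary key.
    print(#entry.last)    --> 4 (only because every byte here is below 128)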
A few days later we started getting complaints from customers that some IP addresses in their network ranges were not being identified correctly. But when we added what looked like equivalent entries to the test suite, using our private IP ranges, it all worked fine.
Finally I started using, in the test scripts, the exact IP addresses that the customers were reporting issues with, and discovered that there was indeed a problem. Basically, whenever a component of the address was greater than 127 and the code did not go down the direct-hit code path (i.e. the address was part of a subnet larger than a /32 and not the first entry), the decoded end IP address would be incorrect. Very strange! So our test code, which was using ranges like 127.0.0.1-127.1.2.3, worked fine, but an IP range like 1.0.0.0-1.0.129.0 would fail!
Looking more closely at the cjson Lua documentation I saw the line “cjson.decode will deserialise any UTF-8 JSON string into a Lua value or table”. And reading through the C code I saw that the routines were hard-coded to treat any JSON-escaped \uXXXX value greater than 127 as a Unicode code point and emit it as a multi-byte UTF-8 sequence. This is because Lua uses the platform’s underlying char[] to store strings, which means each element of a string is normally only 8 bits; to store wider characters, each character has to be encoded across multiple bytes, which is exactly what UTF-8 is for. With our encoding we knew that every part of the string would fit into 8 bits, but there was no way to tell the decoder this. Because cjson aims to be a fast module, this is all hard-coded and there was no way that I could see to easily work around the UTF-8 decoding. We tried some other Lua JSON modules, but they either had the same problem or were orders of magnitude slower than cjson.
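To make the failure concrete, this is roughly what the decoder does with an escaped byte value above 127 (a minimal reproduction, not the production code):

    local cjson = require "cjson"

    -- An escaped code point above U+007F comes back as its multi-byte
    -- UTF-8 encoding, not as a single raw byte:
    local s = cjson.decode('{"x":"\\u0081"}').x
    print(#s)                    --> 2
    print(string.byte(s, 1, 2))  --> 194  129  (0xC2 0x81, UTF-8 for U+0081)

    -- So an end address of 1.0.129.0, escaped as "\u0001\u0000\u0081\u0000",
    -- decodes to a 5-byte string that no longer compares correctly against
    -- a 4-byte binary key -- exactly the failure we were seeing.
    local last = cjson.decode('{"x":"\\u0001\\u0000\\u0081\\u0000"}').x
    print(#last)                 --> 5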
Eventually a colleague suggested simply hex-encoding the end IP address before including it in the JSON data, which was the simplest solution we could find. It should also reduce the storage required for an IP address: assuming around 50% of the bytes would otherwise need a \uXXXX escape sequence in the JSON, an average IPv4 address would take about 14 bytes in the database, whereas with hex encoding it is a fixed 8 bytes per IPv4 address.
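A small hex codec along these lines is all it takes in Lua (the helper names here are mine, not from the actual code); it also sidesteps any worries about embedded NUL bytes in the JSON text:

    -- Raw 4- or 16-byte address string -> hex text, safe to embed in JSON.
    local function key_to_hex(key)
      return (key:gsub(".", function(c)
        return string.format("%02x", string.byte(c))
      end))
    end

    -- Hex text from the JSON value -> raw binary address for comparison.
    local function hex_to_key(hex)
      return (hex:gsub("%x%x", function(cc)
        return string.char(tonumber(cc, 16))
      end))
    end

    print(key_to_hex("\1\0\129\0"))   --> 01008100
    print(#hex_to_key("01008100"))    --> 4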
If the encoding program had been written in Perl we could probably have used some of the features of the JSON::XS module (specifically, the utf8 flag) to write characters directly as bytes into the string, which, although perhaps not technically valid JSON, should from my reading of the Lua module have bypassed the UTF-8 decoding of escaped values. However, we weren’t using Perl in our encoding routines, so this wasn’t possible.
I guess you were sending 10.0.129.0 as “\u000a\u0000\u0081\u0000” ?
If you want codepoints \u0080 to \u00ff to be stored as bytes 0x80 to 0xff, your JSON parser would need to be told to store strings using ISO-8859-1 (latin 1) rather than UTF8.
Python PEP383 has an interesting approach called “surrogateescape” when decoding a UTF8 byte stream which contains invalid bytes \x80 to \xff: it replaces the invalid bytes with codepoints \udc80-\udcff. This allows a lossless, reversible conversion from arbitrary byte stream to Unicode characters and back again.
But for your use case, I agree that hex representation is definitely the right way to go!