If you ask software engineers to name some of their “least hated” things, you’ll likely hear both UTF-8 and TCP. TCP, despite being 35 years old, is rock-solid, stable infrastructure that we take for granted today; given how well it’s served us, it’s sometimes hard to remember that TCP was man-made. But within every single TCP packet lies a widely misunderstood, esoteric secret.
Look at any diagram or breakdown of the TCP segment header and you’ll notice a 16-bit field called the “Urgent Pointer”. These 16 bits exist in every TCP packet ever sent, but as far as I’m aware, no piece of software understands them correctly.
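For reference, here is roughly where that field lives, sketched as a C struct. The layout follows RFC 793, but the field names are mine; real headers, such as struct tcphdr in <netinet/tcp.h>, name and pack things differently.

```c
#include <stdint.h>

/* A sketch of the TCP segment header layout from RFC 793.
 * Field names are mine; real implementations differ. */
struct tcp_header {
    uint16_t source_port;
    uint16_t dest_port;
    uint32_t sequence_number;
    uint32_t ack_number;
    uint8_t  data_offset;     /* upper 4 bits: header length in 32-bit words */
    uint8_t  flags;           /* URG is 0x20; then ACK, PSH, RST, SYN, FIN   */
    uint16_t window;
    uint16_t checksum;
    uint16_t urgent_pointer;  /* the 16 bits in question: an offset from the
                                 sequence number, meaningful only when URG is set */
};
```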
This widely misunderstood field has caused security issues in multiple products. As far as I’m aware, there is no fully correct documentation on what this field is actually supposed to do. The original RFC 793 actually contradicts itself on the field’s exact value. RFC 1011 and RFC 1122 try to correct the record, but from my reading of the specifications, they seem to also describe the field incorrectly.
What, exactly, is the TCP URG flag? First, let’s look at what RFC 793, the document describing TCP, actually says.
… TCP also provides a means to communicate to the receiver of data that at some point further along in the data stream than the receiver is currently reading there is urgent data. TCP does not attempt to define what the user specifically does upon being notified of pending urgent data, but the general notion is that the receiving process will take action to process the urgent data quickly.
The objective of the TCP urgent mechanism is to allow the sending user to stimulate the receiving user to accept some urgent data and to permit the receiving TCP to indicate to the receiving user when all the currently known urgent data has been received by the user.
From this description, it seems like the idea behind the urgent flag is to send some message, some set of bytes as “urgent data”, and allow the application to know “hey, someone has sent you urgent data”. Perhaps, you might even imagine, it makes sense for the application to read this “urgent data packet” first, as an out-of-band message.
But! TCP is designed to give you two continuous streams of bytes between computers. At the application layer, TCP has no concept of datagrams or packetized messages within that stream. Since there’s no “end of message”, there’s no way to single out an “urgent data packet” as something different. This is what the 16-bit Urgent Pointer is used for. It specifies a future location in the stream where the urgent data ends:
This mechanism permits a point in the data stream to be designated as the end of urgent information.
Wait. Where the urgent data ends? Then where does it begin? Most early operating systems assumed that this implied there was one byte of urgent data located at the Urgent Pointer, and allowed clients to read it independently of the actual stream of data. This is the history and rationale behind the MSG_OOB flag, part of the Berkeley sockets API. When sending data through a TCP socket, the MSG_OOB flag sets the URG flag and points the Urgent Pointer at the last byte in the buffer. When a packet is received with the URG flag, the kernel buffers and stores the byte at that location. It also signals the receiving process that there is urgent data available with SIGURG. When receiving data with recv(), you can pass MSG_OOB to receive this single byte of otherwise inaccessible out-of-band data. During a normal recv(), this byte is effectively removed from the stream.
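Under that interpretation, the whole exchange looks roughly like this minimal sketch, assuming fd is an already-connected TCP socket (and noting that SIGURG is only delivered if the process claims ownership of the socket first):

```c
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sender: MSG_OOB sets URG and points the Urgent Pointer at the last
 * byte of the buffer, so only the final '!' is treated as out-of-band. */
void send_urgent(int fd)
{
    send(fd, "hello!", 6, MSG_OOB);
}

/* Receiver: the kernel sets the single urgent byte aside and, if asked,
 * raises SIGURG; MSG_OOB reads that byte separately from the stream. */
void recv_urgent(int fd)
{
    char oob;

    fcntl(fd, F_SETOWN, getpid());   /* opt in to SIGURG for this socket */
    recv(fd, &oob, 1, MSG_OOB);      /* the out-of-band byte, here '!'   */
    /* A normal recv() on the same socket returns "hello", with the '!'
     * removed from the in-band data. */
}
```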
This interpretation, despite being used by glibc and even Wikipedia, is wrong based on my reading of the TCP spec. When taking into account the “neverending streams” nature of TCP, a more careful, subtle, and intentional meaning behind these paragraphs is revealed. One made clearer by the next sentence:
Whenever this point is in advance of the receive sequence number (RCV.NXT) at the receiving TCP, that TCP must tell the user to go into “urgent mode”; when the receive sequence number catches up to the urgent pointer, the TCP must tell user to go into “normal mode”…
Confusing vocabulary choices such as “urgent data” imply that there is actual data explicitly tagged as urgent, but this isn’t the case. When a TCP packet is received with the URG flag, all data currently in the socket is now “urgent data”, up until the end pointer. The urgent data waiting for you up ahead isn’t marked explicitly and available out-of-band; it’s just somewhere up ahead, and if you parse all the data in the stream quickly enough you’ll eventually find it. If you want an explicit marker for what the urgent data actually is, you have to put it in the stream yourself; the notification just tells you there’s something waiting up ahead.
Put another way, urgency is an attribute of the TCP socket itself, not of a piece of data within that stream.
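If you were to implement the receiving side under that reading, the bookkeeping would look something like the sketch below. This is my own pseudocode, not taken from any real stack, and sequence-number wraparound is ignored for clarity.

```c
#include <stdint.h>

/* Urgency as a mode of the connection, not a property of any byte. */
struct tcp_conn {
    uint32_t rcv_nxt;     /* RCV.NXT: next byte to hand to the application */
    uint32_t rcv_up;      /* RCV.UP: furthest known "end of urgent data"   */
    int      urgent_mode; /* what the TCP reports to the user              */
};

/* Re-evaluate the mode; called when a segment arrives, and again as
 * rcv_nxt advances while the application reads data. */
static void update_urgent_mode(struct tcp_conn *c)
{
    /* "Urgent mode" while the end point is still ahead of RCV.NXT;
     * back to "normal mode" once the reader catches up to it. */
    c->urgent_mode = (c->rcv_up > c->rcv_nxt);
}

static void on_segment(struct tcp_conn *c, uint32_t seg_seq,
                       int urg_flag, uint16_t urgent_pointer)
{
    if (urg_flag) {
        uint32_t up = seg_seq + urgent_pointer;
        if (up > c->rcv_up)
            c->rcv_up = up;   /* remember the furthest urgent end point */
    }
    update_urgent_mode(c);
}
```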
Unfortunately, several foundational internet protocols, like Telnet, are built on this misunderstanding. In Telnet, the problem is that if a large amount of data is sitting in the buffer waiting for a “runaway process” to read it, it’s hard for your commands to make it through. From the Telnet specification:
To counter this problem, the TELNET “Synch” mechanism is introduced. A Synch signal consists of a TCP Urgent notification, coupled with the TELNET command DATA MARK. The Urgent notification, which is not subject to the flow control pertaining to the TELNET connection, is used to invoke special handling of the data stream by the process which receives it…
… The Synch is sent via the TCP send operation with the Urgent flag set and the [Data Mark] as the last (or only) data octet.
In a TCP world, this idea of course makes no sense. There’s no “last data octet” in a TCP stream, because the stream is continuous and goes on forever.
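In practice, implementations send the Synch roughly as in the sketch below; this is my own illustration, assuming fd is the connected Telnet socket. The IAC (255) and DATA MARK (242) octets go out with MSG_OOB, so the DM becomes the “last data octet” covered by the Urgent Pointer, a notion that only works at all under the Berkeley interpretation.

```c
#include <sys/socket.h>

/* A sketch of sending the Telnet Synch signal. IAC (255) introduces a
 * Telnet command; DM (242) is the DATA MARK. MSG_OOB makes the DM the
 * final octet pointed at by the TCP Urgent Pointer. */
void telnet_send_synch(int fd)
{
    const unsigned char synch[2] = { 255 /* IAC */, 242 /* DM */ };
    send(fd, synch, sizeof(synch), MSG_OOB);
}
```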
How did everyone get confused and start misunderstanding the TCP urgent mechanism? My best guess is that the broken behavior is actually more useful than the one suggested by TCP. Even a single octet of out-of-band data can signal quite a lot, and it can be more helpful than some “turbo mode” suggestion. Additionally, despite the availability of POSIX functionality like SO_OOBINLINE and sockatmark, there remains no way to reliably test whether a TCP socket is in “urgent mode”, as far as I’m aware. The Berkeley sockets API started this misunderstanding and provides no easy way to get the correct behavior.
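For completeness, here is roughly what those POSIX facilities give you, sketched under the assumption that fd is a connected TCP socket: SO_OOBINLINE leaves the urgent byte in the normal stream, and sockatmark() tells you when the next read starts at that mark, but neither reports whether the peer has flagged urgent data still somewhere ahead of you.

```c
#include <sys/socket.h>
#include <unistd.h>

/* Read the stream with the "urgent" byte delivered in-band, checking
 * for the mark before each read. */
void read_inline_oob(int fd)
{
    int on = 1;
    char buf[512];

    setsockopt(fd, SOL_SOCKET, SO_OOBINLINE, &on, sizeof(on));

    for (;;) {
        if (sockatmark(fd) == 1) {
            /* The next byte read is the one the Urgent Pointer covered. */
        }
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0)
            break;
        /* process buf[0..n) as ordinary stream data */
    }
}
```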
It’s incredible to think that 35 years of rock-solid protocol has had such an amazing mistake baked into it. You can probably count the total number of TCP packets ever sent in the trillions, if not more, yet every one of them dedicates 16 bits to a field that only a handful of programs have ever set.