Will 4.2 select() catch OOB socket messages?
Roy Smith
roy at phri.UUCP
Mon Jun 13 23:35:15 AEST 1988
sean at ms.uky.edu (Sean Casey) writes:
>I can't seem to pick up Out Of Band messages with select().
I'm not surprised. I sat down a while ago to do a rewrite of the
BSD rlogin client (which depends on OOB data) to run as a native suntool
(as opposed to having to do shelltool/csh/rlogin). Anyway, what I
discovered is that 1) the BSD OOB documentation is sketchy and incomplete
(and possibly wrong) and 2) the whole concept behind the BSD OOB code is
horrible. I'm not sure how the original "framers of TCP" intended OOB to
work, but I can't believe it's how Berkeley implemented it. Possibly the
problem was that Berkeley was trying to make OOB work on XNS as well as
TCP? Disclaimer: I'm far from a networking guru; I fully expect better
answers to come from the regular network wizards. Also, my answer is based
on my experiences with MtXinu 4.3BSD/NFS on a vax and SunOS-3.2 (a 4.2
derivative, with some 4.3isms thrown in).
> I'm calling select() with a read fdset and an exception fdset [...] if a
> client sends an OOB, the select does not return as it should.
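	In code terms, the setup being described presumably looks something
like this (my own sketch, not Sean's actual code, using the 4.3-style
FD_SET macros):

	#include <sys/types.h>
	#include <sys/time.h>

	fd_set readfds, exceptfds;

	FD_ZERO(&readfds);
	FD_ZERO(&exceptfds);
	FD_SET(fd, &readfds);
	FD_SET(fd, &exceptfds);
	if (select(fd + 1, &readfds, (fd_set *)0, &exceptfds,
	    (struct timeval *)0) < 0)
		perror("select");
	/* in theory, FD_ISSET(fd, &exceptfds) now means OOB is pending */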
One possible problem is that IB (in-band) and OOB data seem to get
queued together: SIGURG is delivered, and select() flags the exception,
as soon as the networking software first learns that OOB data is pending.
Unfortunately, depending on how much IB data is in the input queue, you
may not yet be able to read the OOB data. This seems pretty bogus to me
and may be part of your problem. One way around it is to make sure that
you read any queued IB data (do non-blocking reads until you get
EWOULDBLOCK) before trying to recv() the OOB data.
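	Something along these lines (a sketch, untested, with error
handling mostly omitted) drains the in-band queue first; FNDELAY is the
4.2/4.3BSD non-blocking flag:

	#include <sys/socket.h>
	#include <fcntl.h>
	#include <errno.h>

	char buf[1024];
	int n;

	/* put the socket into non-blocking mode */
	if (fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | FNDELAY) < 0)
		perror("fcntl");

	/* drain queued in-band data until the kernel says there is no more */
	while ((n = read(fd, buf, sizeof buf)) > 0)
		;	/* hand buf off to the normal input path */
	if (n < 0 && errno != EWOULDBLOCK)
		perror("read");

	/* now the urgent byte should be readable */
	if (recv(fd, buf, 1, MSG_OOB) < 0)
		perror("recv MSG_OOB");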
Another problem with OOB data is that, as far as I can tell, the
concept of "at the mark" is very poorly defined. If you do the following
in a server:
	fd = socket(AF_INET, SOCK_STREAM, 0);
	/* ... connect() or accept() ... */
	for (;;) {
		select(fd + 1, 0, 0, &exceptfds, 0);	/* fd in exceptfds */
		recv(fd, buf, sizeof buf, MSG_OOB);
	}
and have the client send() a single OOB datum, the server will loop forever
reading the same OOB message as if you had given MSG_OOB|MSG_PEEK. As
mankin at gateway.mitre.org put it in a recent letter to me on this subject,
"The trick is that the system signals the mark both before and immediately
after the octet has been read. The mark signal only goes off when data
after the mark is read from the receive queue. The mark is, in effect, on
both sides of the urgent byte."
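	The one tool BSD does give you for locating the mark is the
SIOCATMARK ioctl, which says whether the next read starts at the urgent
pointer. Something like the following (a sketch; I haven't verified that
it cures the double-signal behavior mankin describes) reads in-band data
up to the mark and then fetches the urgent byte exactly once:

	#include <sys/ioctl.h>

	char buf[1024];
	int atmark, n;

	/* consume normal data until the read pointer sits at the mark */
	for (;;) {
		if (ioctl(fd, SIOCATMARK, &atmark) < 0) {
			perror("ioctl SIOCATMARK");
			break;
		}
		if (atmark)
			break;	/* read pointer is at the mark */
		if ((n = read(fd, buf, sizeof buf)) <= 0)
			break;	/* EOF or error before reaching the mark */
		/* process n bytes of in-band data */
	}

	/* at the mark; this should pick up the urgent byte once */
	if (recv(fd, buf, 1, MSG_OOB) < 0)
		perror("recv MSG_OOB");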
I cannot find any reason why such behavior should be considered the
correct way of doing things and can only conclude that OOB data, while a
very nice concept, is broken enough in BSD to be virtually unusable. In
my case, I didn't have a choice; I had to interact with an existing server
which uses OOB. If you have the freedom to design the entities on both
sides of the connection, I would say to stay away from OOB if at all
possible.
--
Roy Smith, System Administrator
Public Health Research Institute
455 First Avenue, New York, NY 10016
{allegra,philabs,cmcl2,rutgers}!phri!roy -or- phri!roy at uunet.uu.net