How is OOB data supposed to work?
Roy Smith
roy at phri.UUCP
Mon May 30 01:12:18 AEST 1988
I'm having a bit of trouble figuring out how out-of-band data works
with stream sockets (i.e. TCP connections) in BSD Unix (I'm using a MtXinu
4.3BSD/NFS vax and a SunOS-3.2 Sun-3). Hopefully some of you network wizards
out there can set me straight. My server side looks basically like:
sock = socket (AF_INET, SOCK_STREAM, 0);
bind (sock, &server, sizeof (server));
listen (sock, 5);
msgsock = accept (sock, 0, 0);
while (1)
{
	rfd = efd = 1 << msgsock;		/* old-style select() bit masks */
	select (16, &rfd, 0, &efd, 0);
	if (efd & (1 << msgsock))		/* exceptional condition: OOB data */
		recv (msgsock, buf, 1, MSG_OOB);
	if (rfd & (1 << msgsock))		/* ordinary in-line data */
		read (msgsock, buf, BUFMAX);
}
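(For concreteness, a fuller, compilable sketch of that loop is below, written with
the 4.3 fd_set macros instead of the raw bit masks. The BUFMAX value, the sockaddr
setup, and the missing error checking are just placeholders, not the interesting
part.)

	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	#define BUFMAX	1024			/* size doesn't matter here */

	main ()
	{
		int sock, msgsock;
		fd_set rfd, efd;
		char buf[BUFMAX];
		struct sockaddr_in server;

		server.sin_family = AF_INET;
		server.sin_addr.s_addr = INADDR_ANY;
		server.sin_port = 0;		/* the real program uses a known port */

		sock = socket (AF_INET, SOCK_STREAM, 0);
		bind (sock, (struct sockaddr *) &server, sizeof (server));
		listen (sock, 5);
		msgsock = accept (sock, (struct sockaddr *) 0, (int *) 0);
		while (1)
		{
			FD_ZERO (&rfd);
			FD_ZERO (&efd);
			FD_SET (msgsock, &rfd);
			FD_SET (msgsock, &efd);
			select (msgsock + 1, &rfd, (fd_set *) 0, &efd,
			    (struct timeval *) 0);
			if (FD_ISSET (msgsock, &efd))	/* urgent data waiting */
				recv (msgsock, buf, 1, MSG_OOB);
			if (FD_ISSET (msgsock, &rfd))	/* in-line data waiting */
				read (msgsock, buf, BUFMAX);
		}
	}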
The problem is that once I've recv'd an OOB datum, the select keeps
insisting that there is OOB data pending and the recv keeps re-reading the
same datum. If some in-line data comes in, depending on the timing, I may see
the in-line message, or I may not. For example, when my client is:
sock = socket (AF_INET, SOCK_STREAM, 0);
connect (sock, &server, sizeof (server));
write (sock, "foo", 3);
send (sock, "X", 1, MSG_OOB);
write (sock, "bar", 3);
my server does read ("foo"), recv ("X"), read ("bar"). If I insert 2-second
sleeps in front of each write and in front of the send, my server does read
("foo"), recv ("X"), recv ("X"), recv ("X"), ...
What am I doing wrong?
--
Roy Smith, System Administrator
Public Health Research Institute
455 First Avenue, New York, NY 10016
{allegra,philabs,cmcl2,rutgers}!phri!roy -or- phri!roy at uunet.uu.net