TCP/IP channel behaviour
Jonathan I. Kamens
jik at athena.mit.edu
Mon Mar 11 09:14:03 AEST 1991
In article <26226 at adm.brl.mil>, MANNS%DBNPIB5.BITNET at cunyvm.cuny.edu (Jochen Manns, PI der Uni Bonn, 732738/3611) writes:
|> 1. 'connect' timeout
|> Is there a way to modify the timeout the connect system call uses for
|> TCP/IP connections? For some of our applications, the time the channel
|> spends in netstat's SYN_SENT state could be much shorter.
My first impulse is to say, "You could probably do this by modifying a
constant somewhere in the network code in the kernel, but doing so would
probably violate the TCP/IP protocol in some way."
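If modifying the kernel isn't an option, a common user-space workaround (not
something mentioned in the original exchange, so take it only as a sketch) is
to put the socket into non-blocking mode and enforce your own, shorter timeout
around connect() with select(). The helper name and error handling below are
illustrative, not from any of the quoted code:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /*
     * Sketch: attempt a connect(), but give up after 'seconds' rather
     * than waiting out the kernel's own SYN retransmission timeout.
     * Returns 0 on success, -1 on error or timeout.
     */
    int connect_with_timeout(int s, const struct sockaddr *addr,
                             socklen_t addrlen, int seconds)
    {
        int flags = fcntl(s, F_GETFL, 0);
        int rc;

        fcntl(s, F_SETFL, flags | O_NONBLOCK);

        rc = connect(s, addr, addrlen);
        if (rc < 0 && errno == EINPROGRESS) {
            fd_set wfds;
            struct timeval tv;

            FD_ZERO(&wfds);
            FD_SET(s, &wfds);
            tv.tv_sec = seconds;
            tv.tv_usec = 0;

            /* The socket becomes writable once the connect completes. */
            if (select(s + 1, NULL, &wfds, NULL, &tv) <= 0) {
                rc = -1;                     /* timed out, or select() failed */
            } else {
                int err = 0;
                socklen_t len = sizeof(err);

                /* Did the asynchronous connect succeed or fail? */
                getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
                if (err != 0)
                    errno = err;
                rc = (err == 0) ? 0 : -1;
            }
        }

        fcntl(s, F_SETFL, flags);            /* restore blocking mode */
        return rc;
    }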
|> 2. RPC and TCP/IP
|> Using Sun's RPC (4.0) with TCP/IP, we can see that after a
|> 'clnttcp_destroy' the client's socket stays in TIME_WAIT but the
|> server's socket is closed normally. Since we are using RPC/TCP/IP
|> very intensively, I would like to know whether there is a way to
|> close the channel completely.
Appended to the end of this message is an article I recently posted to
comp.unix.programmer discussing the problem to which I think you are referring
here.
Basically, the TIME_WAIT state is another thing that has to happen according
to the TCP protocol, but you can get around it by setting the SO_REUSEADDR
option on the socket you're using to reconnect to the port on the client.
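For illustration only (this code is not from the original posts, and the fixed
local port number is made up), the client-side workaround looks roughly like
this in C:

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /*
     * Sketch of a client that reuses a fixed local port when it
     * reconnects.  Without SO_REUSEADDR, the bind() fails with
     * EADDRINUSE while the previous connection is still in TIME_WAIT.
     */
    int reconnect(const struct sockaddr_in *server)
    {
        struct sockaddr_in local;
        int on = 1;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0)
            return -1;

        /* Set the option before bind(), so the local port that is
           still in TIME_WAIT can be bound again. */
        if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0) {
            close(s);
            return -1;
        }

        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5000);        /* example local port */

        if (bind(s, (const struct sockaddr *)&local, sizeof(local)) < 0 ||
            connect(s, (const struct sockaddr *)server, sizeof(*server)) < 0) {
            close(s);
            return -1;
        }
        return s;                            /* connected socket */
    }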
--
Jonathan Kamens USnail:
MIT Project Athena 11 Ashford Terrace
jik at Athena.MIT.EDU Allston, MA 02134
Office: 617-253-8085 Home: 617-782-0710
Article: 668 of comp.unix.programmer
Newsgroups: comp.unix.programmer
Path: bloom-picayune.mit.edu!athena.mit.edu!jik
From: jik at athena.mit.edu (Jonathan I. Kamens)
Subject: Re: Problem with binding of socket addresses
Message-ID: <1990Dec10.194130.20414 at athena.mit.edu>
Sender: news at athena.mit.edu (News system)
Reply-To: jik at athena.mit.edu (Jonathan I. Kamens)
Organization: Massachusetts Institute of Technology
References: <epeterso.660257641 at houligan> <sean.660642165 at s.ms.uky.edu>
Date: Mon, 10 Dec 90 19:41:30 GMT
In article <epeterso.660257641 at houligan>, epeterso at houligan.encore.com (Eric Peterson) writes:
|> However, the server occasionally hits a bug and core dumps or dies off
|> in some other way. But it dies off and closes its end of the
|> connection before the client closes the other end. When this occurs
|> and I attempt to restart the server, the bind() call fails with the
|> error "Address already in use".
|>
|> Now, neither the client nor the server is running at the time I try to
|> restart the server, and there isn't a problem with address collisions
|> with another process. As far as I can tell, nothing else is using
|> this address. So why does bind() fail?
In article <sean.660642165 at s.ms.uky.edu>, sean at ms.uky.edu (Sean Casey) writes:
|> Set the "reuse address" socket option, between the socket() and the
|> bind() calls. Then your program can always immediately restart.
Sean's suggestion will solve the problem, but he does not explain why the
problem occurs, so I guess I'll do that :-).
The TCP protocol states that after a TCP stream connection has been closed,
the same local/foreign port combination cannot be used again for twice the
Maximum Segment Lifetime (2 * MSL). The MSL is usually set to a minute, which
means that it probably takes about two minutes before the address is usable
again.
The reason for this is to make sure that all packets which were supposed to
get to the old process connected to the socket don't accidentally get
delivered to the new process instead; the delay is long enough that all of
the waiting packets should be thrown away.
Using the SO_REUSEADDR socket option will make it possible for you to
rebind to the address. It's also a violation of the TCP protocol. But what
the hell, sometimes pragmatism has to win out over theory. This is definitely
one of those times :-).
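To make Sean's suggestion concrete, here is a minimal sketch of the
server-side setup (not code from either post; the port number is made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int on = 1;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0) {
            perror("socket");
            exit(1);
        }

        /* The fix: set the option between socket() and bind(). */
        if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0) {
            perror("setsockopt");
            exit(1);
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7777);         /* example port */

        /* Without SO_REUSEADDR this bind() fails with "Address already
           in use" for roughly 2 * MSL after the old server died with a
           connection still open. */
        if (bind(s, (const struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            exit(1);
        }

        if (listen(s, 5) < 0) {
            perror("listen");
            exit(1);
        }

        /* ... accept() and serve clients as before ... */
        return 0;
    }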
--
Jonathan Kamens USnail:
MIT Project Athena 11 Ashford Terrace
jik at Athena.MIT.EDU Allston, MA 02134
Office: 617-253-8085 Home: 617-782-0710