volatiles
Walter Bright
bright at Data-IO.COM
Thu Apr 14 04:15:10 AEST 1988
In article <1394 at pt.cs.cmu.edu> edw at IUS1.CS.CMU.EDU (Eddie Wyatt) writes:
> 2) When variables are not correctly declared as volatiles,
> a program will exhibit different behavior between
> the optimized and unoptimized versions. I have two
> complaints about this. One being this sort of
> behavior is contradictory to the overall philosophy
> behind optimization. An optimization on a language is
> the set of transformations that do not change the
> behavior of the program but are beneficial by
> some metric. Clearly, the first clause has been
> violated. Conclusion: it's inappropriate to
> try to perform standard data flow analysis techniques
> in a multi-threaded environment.
Programs that run successfully when unoptimized and fail when optimized
suffer from one of the following:
(1) The optimizer has bugs.
(2) The program is incorrect, i.e. it is dependent on coincidental
or undefined behavior.
Let's presume that the optimizer is bug-free. That leaves us with (2).
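To illustrate (2), here is a minimal sketch (an illustration, not code from any
real program) of the kind of thing I mean: code that depends on undefined
behavior and so may behave differently at different optimization levels or
under different compilers.

    /* Illustrative sketch only. */
    #include <stdio.h>

    int main(void)
    {
        int i = 0;
        int a[2] = { 10, 20 };

        /* 'i' is modified and read with no intervening sequence point,
           so the result is undefined; different compilers, or the same
           compiler at different optimization levels, may disagree. */
        a[i] = i++;

        printf("a[0]=%d a[1]=%d i=%d\n", a[0], a[1], i);
        return 0;
    }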
For years I've seen programmers present me with the following reasoning:
1. Program compiles and runs perfectly with compiler A.
2. Program compiles but crashes with compiler B.
3. Therefore compiler B has bugs in it.
Some of the causes for these problems have been:
o Program stores something 1 byte past the end of a malloc'd
  array. Some libraries leave a 'pad' at the end of malloc'd
  data. (A small sketch of this one follows the list.)
o Program depends on char being signed/unsigned.
o Program depends on auto variables being initialized to 0.
o Program stores through uninitialized pointer, which winds
up pointing to different locations when compiled with
different compilers.
o Program depends on layout of storage of variables.
o Program depends on being able to decrement a pointer
below the value returned from malloc, and have it test
as 'less' than the malloc'd pointer. (Some of the examples
in the C++ book depend on this.)
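To make the first cause above concrete, here is a minimal sketch (the
function name is made up for illustration) of the off-by-one malloc bug:
a library that pads allocations hides it, one that doesn't gets its heap
corrupted.

    /* Illustrative sketch only. */
    #include <stdlib.h>
    #include <string.h>

    /* BUG: no room for the terminating '\0', so strcpy writes one
       byte past the end of the allocation.  Should be strlen(s) + 1. */
    char *copy_string(const char *s)
    {
        char *p = malloc(strlen(s));

        if (p != NULL)
            strcpy(p, s);
        return p;
    }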
Arguing that the compiler shouldn't optimize, rather than exorcising the
above problems, is fixing the symptom instead of the problem.
On most machines, big wins are realized by avoiding redundant loads and stores
of variables to and from registers. These wins are on the order of 20-30%,
in both speed and space.
Many applications would have to go back to assembly if these were removed.
Data flow analysis is a powerful method to determine which load/stores
are redundant. Adding a bit to the type of a variable to prevent
redundant load/store elimination is trivial (I implemented volatile in
my optimizer).
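Here's a minimal sketch of what that bit buys you (an illustration, not code
from my optimizer): a flag set from a signal handler, outside the control
flow the optimizer can see.

    /* Illustrative sketch only. */
    #include <signal.h>

    volatile sig_atomic_t done = 0;

    void handler(int sig)
    {
        done = 1;
    }

    int main(void)
    {
        signal(SIGINT, handler);

        /* Without 'volatile', data flow analysis sees no store to
           'done' in the loop, keeps the value in a register after one
           load, and the optimized loop never terminates. */
        while (!done)
            ;

        return 0;
    }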
> how does one go about debugging a program that
> works in the unoptimized version, pukes in the
> optimized version.
As far as debugging optimized code goes, there's no easy answer. What I do
personally is look at a mixed source/assembly listing of the routine
with the problem, after I've tried to narrow the problem down to the
smallest sequence of lines possible.