volatile
Every system needs one
terry at wsccs.UUCP
Sun Apr 17 12:55:53 AEST 1988
In article <4346 at ihlpf.ATT.COM>, nevin1 at ihlpf.ATT.COM (00704a-Liber) writes:
> In article <150 at ghostwheel.UUCP> Ned Nowotny writes:
> |I suppose it also means that those which are written in K&R C are
> |incorrect... Uh... completely unoptimizable. (To -O or not to -O...)
>
> No, what this means is that in the future, when we have highly optimizing
> compilers and highly efficient hardware (such as caches, etc.), this code
> may break.
It will only break if the same assumptions are not made for the new
hardware as were made for the old. It is stupid to require volatile to tell
the compiler not to optimize references to things outside your address space
(such as the hardware a device driver talks to); the compiler should recognize
that what you are getting at is not in your address space and assume that
something else may change it. While it isn't guaranteed that everything
outside your address space is volatile, and it may not be true that everything
volatile is outside your address space, these are pretty safe assumptions, and
they make better sense than the programmer having to tell the compiler "hey,
stupid, don't make that assumption here". The only thing this tends to do is
make it easier to write compilers, not easier to write the programs those
compilers are used on. Face it: you're trading the programmer's interface to
the system for ease of compiler development. For my money, it makes a lot
more sense to pay the development penalty once, in the compiler, than in every
subsequent program compiled with it.
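To make the device-driver case concrete, here is a minimal sketch of the
situation I mean. The register address and ready bit are invented for
illustration, and the qualifier shown is the one ANSI proposes:

#define UART_STATUS	((volatile unsigned char *) 0xFFFF0000)
#define UART_READY	0x01

void
wait_for_uart()
{
	/*
	 * Without the volatile qualifier, an optimizer that assumes
	 * nothing outside the program can change this location is free
	 * to load *UART_STATUS once and spin on the stale value forever.
	 */
	while ((*UART_STATUS & UART_READY) == 0)
		;	/* busy-wait until the hardware sets the bit */
}

My argument is that the compiler could infer the same thing from the fact
that the object being polled sits at a constant address outside the program's
own data, without being told.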
> In order to do certain types of optimizations, certain assumptions
> have to be made. For example, if I know that a certain variable has no
> aliases, then in the implementation I can keep its value in a cache (which
> is faster) without having to update the actual memory location it is
> supposed to be stored at.
Checking for aliases, again, seems to be a problem the compiler's own
analysis could solve. Can you present a case where a compiler could NEVER
know, if it looked? Certainly it's possible to speed up the compiler by
hitting it over the head with the two-by-four 'noalias', but at what cost?
You are buying faster compiles by breaking working programs. Either you skip
the caching optimization and compiles are faster, or you do the analysis and
use it, at the cost of a slower compile. That is simply a penalty you must
pay to use an architectural feature.
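For anyone joining the thread late, here is a sketch of the aliasing case
being argued about; the names are mine, not from either article:

int total;			/* file scope, so a caller may pass &total */

int
sum_and_store(p, n)
int *p, n;
{
	total = 0;
	while (n-- > 0) {
		*p = n;		/* if p == &total, this store changes total */
		total += n;	/* so total cannot live only in a register  */
	}
	return (total);
}

Compiling this file by itself, the compiler cannot tell whether p points at
total, so it has to reload total after every store through p. 'noalias' was
supposed to be the programmer's promise that it needn't bother; my point is
that a compiler willing to look at the whole program could usually settle
the question itself.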
> As things stand now, these assumptions are made on a compiler-by-compiler
> basis.
What is the difference between using the word 'noalias' on a machine that
can't cache (and therefore cannot optimize via cache storage) and compiling
the code on a machine that can? The decision to implement cache-storage
optimization MUST, by virtue of not all machines having caching, be made on
a compiler-by-compiler basis. I see no real advantage in handing a stupid
compiler a hint that costs me whatever backward compatibility I had, just so
it doesn't have to be smart enough to know that caching is available on the
machine it is running on.
Machine dependency is the responsibility of the compiler writer.
Optimization is the responsibility of the compiler writer.
If architectural differences prevent effective use of the compiler, this is
the responsibility of the compiler writer.
> |In fact, why is it assumed that silent optimization of poorly (or,
> |perhaps, correctly) written code is desirable?
To provide source-code compatibility. Optimization should be done
with the understanding that the assumptions the optimizer makes have to
fall within possible practice. If different vendors' compilers make several
different sets of assumptions, which they obviously will, it is each vendor's
responsibility to see that those assumptions are valid for the code being
compiled. Basically, if it works without -O, it should work with -O,
regardless of what the compiler writer's optimization does to achieve its
goal. If this makes writing compilers harder, so what? Not only will new
code be portable, old code will be portable as well.
> |Just because a compiler
> |can be made smart enough to move a loop invariant out of a loop does not
> |mean that it is a good idea.
I disagree. Please give an example where an 'invariant' is actually variant.
If an optimization is applied incorrectly, then the optimizer is wrong.
> |It makes more sense to just provide the programmer with a warning.
This assumes the compiler is smart enough to know it may be wrong. If an
optimization is likely enough to be wrong to warrant a warning, the compiler
shouldn't perform it in the first place.
> |If the code is incorrect, the programmer
> |learns something valuable. If it is correct, the programmer may save
> |himself (or herself) a frustrating bout of debugging.
Why is it incorrect to have a loop invariant in a loop? Agreed, it is less
efficient, but by that argument I could easily make a case for using an
automatic weapon on traffic violators, since shooting a speeder is more likely
to prevent him from speeding in the future than giving him a ticket is, and is
thus more efficient. Issuing warnings about optimization assumptions is simply
overkill, unless the method of arriving at the assumption is invalid.
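For the record, this is the kind of motion being discussed; the code is a
sketch of my own, not from either article:

int x, y;			/* file scope, so in principle changeable */

void
scale(a, n)
int *a, n;
{
	int i;

	for (i = 0; i < n; i++)
		a[i] = a[i] * (x + y);	/* x + y looks loop invariant */

	/*
	 * An optimizer will hoist x + y into a temporary computed once
	 * before the loop.  That is only wrong if something outside the
	 * loop body -- an interrupt handler, another process sharing
	 * memory, a device -- can change x or y while the loop runs,
	 * which is exactly the volatile case again.
	 */
}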
> |Optimizers should not do optimizations which can be expressed in the
> |source language itself.
> Not all optimizations can be 'hand-coded'. For those which can be
> hand-coded, however, I would still rather have the compiler do all the work
> for me. Hand-optimizing takes a lot of time and usually turns well-written
> readable code into something that can be entered in the Obfuscated C Contest
> :-).
Well stated.
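A sketch of the sort of thing he means, with names of my own choosing; both
routines copy n ints, and a decent optimizer should generate much the same
code for either:

void
copy_plain(dst, src, n)
int *dst, *src, n;
{
	int i;

	for (i = 0; i < n; i++)		/* the readable version */
		dst[i] = src[i];
}

void
copy_clever(dst, src, n)
register int *dst, *src;
register int n;
{
	while (n--)			/* the hand-"optimized" version */
		*dst++ = *src++;
}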
I think the primary concern is who is going to have to give:

Will the hardware people build machines that think more like a
programmer, so that compiler writers don't have to bend over backwards to
make something work?
(Side effects: verbose compiler output, or slower compiles, or
machine-dependent programmer interfaces, such as ANSI-C)

Will compiler writers produce slower compilers or more verbose code,
so that programmers don't have to worry about machine dependency in supposedly
machine-independent languages?
(Side effects: incorrect optimization assumptions, applications that
are not backward compatible, old applications that have to be rewritten)

Will programmers rewrite all their code in ANSI-C and hope it takes
all possible future hardware into account?
(Side effects: millions of programmer hours)

Something has to give, and ANSI seems to have decided it will be the
applications programmers.
Can you guarantee in writing that ANSI-C will work on machines 15 years from
now, machines with unguessable architectures?
| Terry Lambert UUCP: ...{ decvax, ihnp4 } |
| @ Century Software : ...utah-cs!uplherc!sp7040!obie!wsccs!terry |
| SLC, Utah |
| These opinions are not my companies, but if you find them |
| useful, send a $20.00 donation to Brisbane Australia... |
| 'There are monkey boys in the facility. Do not be alarmed; you are secure' |