Need 286 "C" benchmark

Chuck McManis cem at intelca.UUCP
Wed May 29 01:41:43 AEST 1985

I really don't want to drag this whole discussion onto the net again and
won't. I will, however, correct Dave's misinterpretations of my original
message and then let it rest.

> In article <583 at intelca.UUCP> cem at intelca.UUCP (Chuck McManis) writes:
> >> [quoting someone else...]
> >> I just love the contact sport of "combative benchmarking".  I note how
> >> the source code for the Hofstader (sp?) benchmark just accidentally
> >> happens to declare its register variables from the least-used to the
> >> most used, the opposite of normal C convention.  And by coincidence,
> >> there are three of those little hummers... and we're comparing a
> >> 68K with >3 regvars against a 286 with only 2!
> >> This means that the single most heavily used register variable will
> >> be in a reg on the 68K and on the frame for a 286.  My my, what a
> >> terrible accident.
> When I posted the benchmark I was not aware of all that.  But what's the
> complaint? Are you saying that its not fair to use registers since one
> chip only has 2 of them?   In the real world programs would use a lot more
> than two registers.  Why are you trying to hide architectural weaknesses?
> Benchmarks should be just the thing to point out such weaknesses.
Dave, I don't know why you think that pointing out differences in
architecture amounts to "hiding" them. I don't believe the person I
quoted was complaining; he was merely pointing out that the source you
posted from the book was poorly written. I think he and I would both be
quite surprised that you "didn't know" the benchmark you posted seemed
aimed squarely at blowing up 16-bit compilers.

> By your analogy no benchmark run between an Intel vs <whatever> machine should
> have any statements such as the following:
> 			   I = J;
> because the 808x et. al. do not have a memory to memory scalar move and would
> thus be artificially handicapped.  That wouldn't be fair to Intel now, would
> it?
As above, I think you misinterpreted his statement as an analogy; you can put
anything you want in your C programs.

> >It is also by "accident" that of those three variables j, k, and max are
> >"assumed" to be 32 bits. ("Oh, did I leave that out?") And that the only
> >purpose of the histogram seems to be to try to allocate an array that has
> >250504 elements.
> I find this highly ironic coming from an Intel person.  Intel's latest
> benchmark booklet comparing the 286 with the 68k just happens to be full of
> C programs which have ints.  Intel doesn't bother telling anyone that the
> 68k versions all run with 32-bit integers while the 286 gets by with 16 bit
> integers.  Deliberate deception - but we all know why.

This is probably the most disturbing comment, and the reason I even bothered
to reply. If you programmed in C back in the good ol' days, you too would
assume ints were 16 bits. Any source code I write that assumes otherwise
points this out in the comments. My original message was trying to point out
why this code seemed targeted at "killing" 16-bit machines. (It would also
not run on the PDP-11 or on any C compiler that defaulted to 16-bit ints.)
Had you pointed that out, I would simply have replaced the required ints
with longs. I believe it was situations like this that #define was created
for. As for "getting by," I assume you consider it a feature that your
compiler drags along an extra 16 bits when you don't need them. When I need
long ints, I use long ints. How do you define 16-bit numbers? short? And if
so, what is a byte in your compiler? Finally, deliberate deception? Come on,
let's be serious. As I mentioned in an earlier message, don't take our word
on our benchmarks; run some tests yourself. That is the only way you
will believe anything.

> This quibbling is all very telling.  If Intel advertizes that the 286 is not
> only far better than the several years old MC68000 but matches the speed of
> the new MC68020 one would think that these itty-bitty benchmarks certainly
> couldn't cause a problem.  After all, every M68000 chip from day one easily
> chews them up. So what's the hangup here?  If you have to go to LONGs then
> do it.  But don't sit and gripe if you chip can't hack it.
I did switch them to longs, and only pointed out your omission of the
requirement for 32 bits. Even a note in the message to the effect of
"by the way, these vars need to be 32 bits" would have been enough.

> As for the large array, I have compiled the program on my Macintosh at home.
> No sweat. It runs easily on a 1 Meg Lisa (Mac XL.)  Why is it such a big deal
> to run it on a 286 (which supposedly rivals the MC68020?)
Here we are discussing compilers again. Microsoft has yet to release a
compiler that can deal with large arrays; the 286 has a 1-Gbyte virtual
address space and hence plenty of room. I personally could write the
"benchmark" in assembly quite easily; again, the COMPILER can't hack it,
but the chip can.

> Ok I will.  Here's another dinky benchmark which I just compiled and ran on
> my Macintosh.  Lets hear some 286 times for it (and no excuses please.)
> int a[50000];
> main()
> {
>   int i;
>   for (i=0; i<50000; i++) a[i+1] = a[i];
> }
> Dave Trissel    {seismo,ihnp4}!ut-sally!oakhill!davet
> Motorola Semiconductor Inc.  Austin, Texas
> "I work with 'em and mine works"

Again you decide to break the compiler, not the chip. Microsoft's C cannot
declare an array larger than 64K, yet. Make it 25,000 elements and I will
post the times. Other than that, I'll have to wait for Microsoft's new C
compiler.

"Why do I even bother."

                                            - - - D I S C L A I M E R - - - 
{ihnp4,fortune}!dual\                     All opinions expressed herein are my
        {qantel,idi}-> !intelca!cem       own and not those of my employer, my
 {ucbvax,hao}!hplabs/                     friends, or my avocado plant. :-}
