matrix mult.

Jim Hester hester at ICSE.UCI.EDU
Thu Nov 21 06:31:29 AEST 1985


Your method is better in terms of space, and can usually be made better
in terms of time.  There are some minor improvements, and some pros and
cons to consider.

The basic change your code made, relative to the original procedure,
was to have the programmer explicitly deal with his own linear array
representation.  This is a bit more efficient space-wise (than the
pointer-vector representation), makes creating a new array much
easier, and can probably be made more efficient time-wise than
compiler-generated code: although a compiler might do static array
lookups a bit faster, your code could be modified to pre-calculate
some terms outside of loops for moderate speedups, which only an
extremely intelligent optimizing compiler (if any) could do.
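As a concrete illustration of that hand optimization, here is a sketch of an n-by-n multiply over linear row-major arrays; the names and layout are my own for illustration, not the posted code.  The row-base terms (i*n and k*n) are computed once per row instead of once per element, which is exactly the kind of hoisting a compiler of this era could not be counted on to do:

```c
#include <assert.h>

/* Multiply two n-by-n matrices stored as linear row-major arrays.
   The row bases a + i*n, b + k*n, c + i*n are hoisted out of the
   inner loop, so the innermost statement does no index arithmetic
   beyond a single subscript. */
void matmul(int n, const double *a, const double *b, double *c)
{
    int i, j, k;
    for (i = 0; i < n; i++) {
        const double *arow = a + i * n;   /* computed once per row  */
        double *crow = c + i * n;
        for (j = 0; j < n; j++)
            crow[j] = 0.0;
        for (k = 0; k < n; k++) {
            double aik = arow[k];          /* loaded once per k      */
            const double *brow = b + k * n;
            for (j = 0; j < n; j++)
                crow[j] += aik * brow[j];
        }
    }
}
```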

The major disadvantage of this scheme is that the programmer cannot use
the facilities already provided by the language for manipulating
multi-dimensional arrays.  This leads to slight conceptual confusion,
requiring a fair amount of documentation explaining the method to
potential modifiers and why they must conform to it uniformly.  The
programmer must also implement and consistently use procedures/macros
for array access, changing all lines in all procedures which make use
of the arrays from (the system's notation):

	i = A[ x ][ y ];                A[ x ][ y ] = i;
to:
	i = lookup( A, n, x, y );       assign( A, n, x, y, i );
or:
	i = A[ loc(n, x, y) ];          A[ loc(n, x, y) ] = i;

(I generally use the second method, since it only requires one function,
is conceptually closer to what the compiler does, is visually closer to
the language's notation, is simpler, and much more easily allows the use
of macros.)
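A minimal sketch of both notations in C; the macro name loc, the row-major layout, and the int element type are assumptions for illustration, not the poster's actual code:

```c
#include <assert.h>

/* Second method: loc() maps 2-D indices onto a linear row-major
   array.  Written as a macro it expands to the same subscript
   arithmetic the compiler generates for a true 2-D array, and
   A[ loc(n, x, y) ] stays visually close to A[ x ][ y ]. */
#define loc(n, x, y)  ((x) * (n) + (y))

/* First method, for comparison: it needs two separate routines,
   one for reading and one for writing. */
int lookup(const int *A, int n, int x, int y)
{
    return A[loc(n, x, y)];
}

void assign(int *A, int n, int x, int y, int v)
{
    A[loc(n, x, y)] = v;
}
```

Because loc is a single expression, it costs no procedure call and works on either side of an assignment, which is much of why the second method is simpler.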

The original code was intended to be as general as possible, so the
program using explicit linear arrays and my examples above should be
modified to support n by m arrays (trivial).  If you want to limit the
problem to square arrays and believe they could get larger than about 50 by
50, there is a reasonably non-complex, asymptotically faster algorithm.
Strassen's multiplication algorithm (the details of which I'll spare
the readers: see any decent algorithms text) takes Order( n^2.81 )
time as opposed to Gaussian algorithms (all those presented recently),
which take Order( n^3 ) time.
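For the curious, the shape of the scheme can be sketched as follows.  This is an illustrative toy version (my own, under the same linear row-major convention as above) for n a power of two, recursing all the way down to 1 by 1; a practical version would switch to the ordinary algorithm below some cutoff, and would check the malloc result:

```c
#include <assert.h>
#include <stdlib.h>

/* Strassen's scheme: split each matrix into quadrants and form
   seven half-size products instead of the Gaussian method's eight,
   giving the Order( n^2.81 ) recurrence T(n) = 7 T(n/2) + O(n^2). */

static void add(int n, const double *x, const double *y, double *z)
{
    int i;
    for (i = 0; i < n * n; i++) z[i] = x[i] + y[i];
}

static void sub(int n, const double *x, const double *y, double *z)
{
    int i;
    for (i = 0; i < n * n; i++) z[i] = x[i] - y[i];
}

/* copy quadrant (r,c) of an n-by-n matrix out into an h-by-h matrix */
static void get_q(int n, const double *s, int r, int c, double *d)
{
    int h = n / 2, i, j;
    for (i = 0; i < h; i++)
        for (j = 0; j < h; j++)
            d[i * h + j] = s[(r * h + i) * n + (c * h + j)];
}

static void put_q(int n, double *d, int r, int c, const double *s)
{
    int h = n / 2, i, j;
    for (i = 0; i < h; i++)
        for (j = 0; j < h; j++)
            d[(r * h + i) * n + (c * h + j)] = s[i * h + j];
}

void strassen(int n, const double *a, const double *b, double *c)
{
    int h = n / 2, i, j, k;
    double *q, *A[2][2], *B[2][2], *m[7], *t1, *t2;

    if (n == 1) { c[0] = a[0] * b[0]; return; }

    /* 17 scratch blocks of h*h: 8 quadrants, 7 products, 2 temps */
    q = malloc((size_t)17 * h * h * sizeof *q);
    for (i = 0; i < 2; i++)
        for (j = 0; j < 2; j++) {
            A[i][j] = q + (2 * i + j) * h * h;
            B[i][j] = q + (4 + 2 * i + j) * h * h;
            get_q(n, a, i, j, A[i][j]);
            get_q(n, b, i, j, B[i][j]);
        }
    for (k = 0; k < 7; k++) m[k] = q + (8 + k) * h * h;
    t1 = q + 15 * h * h;
    t2 = q + 16 * h * h;

    add(h, A[0][0], A[1][1], t1); add(h, B[0][0], B[1][1], t2);
    strassen(h, t1, t2, m[0]);                 /* M1 = (A11+A22)(B11+B22) */
    add(h, A[1][0], A[1][1], t1);
    strassen(h, t1, B[0][0], m[1]);            /* M2 = (A21+A22) B11      */
    sub(h, B[0][1], B[1][1], t2);
    strassen(h, A[0][0], t2, m[2]);            /* M3 = A11 (B12-B22)      */
    sub(h, B[1][0], B[0][0], t2);
    strassen(h, A[1][1], t2, m[3]);            /* M4 = A22 (B21-B11)      */
    add(h, A[0][0], A[0][1], t1);
    strassen(h, t1, B[1][1], m[4]);            /* M5 = (A11+A12) B22      */
    sub(h, A[1][0], A[0][0], t1); add(h, B[0][0], B[0][1], t2);
    strassen(h, t1, t2, m[5]);                 /* M6 = (A21-A11)(B11+B12) */
    sub(h, A[0][1], A[1][1], t1); add(h, B[1][0], B[1][1], t2);
    strassen(h, t1, t2, m[6]);                 /* M7 = (A12-A22)(B21+B22) */

    add(h, m[0], m[3], t1); sub(h, t1, m[4], t1); add(h, t1, m[6], t1);
    put_q(n, c, 0, 0, t1);                     /* C11 = M1+M4-M5+M7 */
    add(h, m[2], m[4], t1);
    put_q(n, c, 0, 1, t1);                     /* C12 = M3+M5       */
    add(h, m[1], m[3], t1);
    put_q(n, c, 1, 0, t1);                     /* C21 = M2+M4       */
    sub(h, m[0], m[1], t1); add(h, t1, m[2], t1); add(h, t1, m[5], t1);
    put_q(n, c, 1, 1, t1);                     /* C22 = M1-M2+M3+M6 */

    free(q);
}
```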



More information about the Comp.sources.unix mailing list