Subject: Re: Optimization Opportunities 1 [long]
From: Ben Hall (bhall@scs.carleton.ca)
Date: Mon Feb 05 2001 - 08:57:05 CST
Hi Sam,
If you're serious about the slow machine, I've got a six year old Alpha
and enough parts to build a 486 with ~16MB RAM.  The Alpha (233 w/ 64MB
RAM) took about 2 hours to compile Abi 0.7.12 and is VERY slow.
Ben
On Mon, 5 Feb 2001, Sam TH wrote:
> On Mon, Feb 05, 2001 at 09:26:44AM -0500, Thomas Fletcher wrote:
> > On Mon, 5 Feb 2001, Sam TH wrote:
> > 
> > > Well, I've been playing with the wonderful gcov program (which has now
> > > tripled the size of my source tree), and I have some interesting
> > > data.  This is just on startup time, one of the key measures that
> > > users will judge the speed of AbiWord by.  More on other aspects
> > > later.
> > 
> > Just out of curiosity ... gcov is a code coverage tool (for example
> > checking test case coverage) and really has nothing to do with 
> > execution time performance profiling.  The output from gprof is 
> > really what you want here.  You might think that you can profile
> > code and then highlight the problem areas and look at the coverage, 
> > but this may provide you with some very misleading results showing
> > that a highly travelled path (ie high coverage numbers) might need
> > to be sped up ... when in fact it is the low coverage path that is
> > only incidentally executed that is consuming CPU resources.
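> > 
> > To make that concrete, here is a toy example (not AbiWord code; the
> > names are invented) where the two tools point at different places:
> > 
> >     /*
> >      * Toy program: build and run along the lines of
> >      *   g++ -pg -fprofile-arcs -ftest-coverage demo.cpp -o demo
> >      *   ./demo ; gprof demo gmon.out ; gcov demo.cpp
> >      * gcov's annotated counts pile up on hot(), while gprof charges
> >      * nearly all of the CPU time to the work done on behalf of slow().
> >      */
> >     #include <stdio.h>
> >     #include <vector>
> > 
> >     static long hot(long i)
> >     {
> >         return i * 2 + 1;    /* executed 100,000 times, costs nothing */
> >     }
> > 
> >     static size_t slow(void)
> >     {
> >         /* executed only 10 times, but each call zero-fills 64MB */
> >         std::vector<char> big(64 * 1024 * 1024, 0);
> >         return big.size();
> >     }
> > 
> >     int main(void)
> >     {
> >         long a = 0;
> >         for (long i = 0; i < 100000; i++)
> >             a += hot(i);     /* high coverage numbers here ...       */
> > 
> >         size_t b = 0;
> >         for (int i = 0; i < 10; i++)
> >             b += slow();     /* ... but this is where the time goes  */
> > 
> >         printf("%ld %lu\n", a, (unsigned long) b);
> >         return 0;
> >     }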
> 
> Yes, all you say is true.  However, I did what I did for a number of
> reasons:  
> 
> 1) gcov and gprof are all I have, and gprof on a startup showed only 4 or 5
> functions taking any time at all.  Therefore, to understand *why* the
> functions were slow, I had to turn to something else.  Thus, gcov.  
> 
> 2) AbiWord starts *really* fast.  With ispell, it takes about 0.05
> seconds, according to gprof.  With aspell, that drops to 0.02
> seconds.  Now, those numbers are *way* too low, but they do mean that
> I get very little useful data out of gprof.  I guess the solution is a
> slower computer :-) (or repeating the startup path enough times for the
> sampler to see it; there's a sketch below).
> 
> 3)  In those functions, no single line of code outside of those loops
> was executed even 100 times.  Those loops were executed 30,000+ times.  I
> thus feel at least somewhat confident that they had something to do
> with the speed.  
> 
> I admit that this isn't the best data.  But it's really all I've got.
> If someone wants to give me an account on a 386, I'll get more
> interesting gprof data.  
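> 
> Another way to attack point 2: gprof only takes a sample every 0.01
> seconds or so, so a scratch harness that just repeats the region under
> test gives it something to sample (divide the reported times by the
> iteration count).  A rough sketch, with a made-up stand-in for the real
> code path:
> 
>     #include <stdio.h>
>     #include <math.h>
> 
>     /* Stand-in for the code path actually being measured. */
>     static double do_startup_work(void)
>     {
>         double x = 0.0;
>         for (int i = 1; i < 50000; i++)
>             x += sqrt((double) i);
>         return x;
>     }
> 
>     int main(void)
>     {
>         const int N = 1000;   /* repeat so the sampler gets enough hits */
>         double sink = 0.0;
>         for (int i = 0; i < N; i++)
>             sink += do_startup_work();
>         printf("%f\n", sink);
>         return 0;
>     }
> 
> (The obvious caveat being that after the first pass everything is warm
> in the cache, so it won't look exactly like a cold start.)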
> 
> > 
> > My first suggestion to anyone that is interested in doing any sort
> > of profiling and performance improvements is to walk through the
> > entire code from startup -> Screen on page through the debugger.
> > This will benefit you for several reasons:
> > 1) You get to become comfortable with the current execution path
> >    and the code that is associated with it.
> > 2) You get an immediate feel for where loops occur and how long
> >    any of those loops might take (ie next, next, next ... god
> >    why are we doing all of this work, next, next, next ... )
> > 3) Often using a combination of function step into and a function 
> >    step over you can find out where the slow spots are in the
> >    code (gee that function took a long time to return ... this
> >    one took no time at all).
> > 
> > Having done this, you are in a pretty good position to understand
> > the results from the profiler and to think about how the code might be
> > refactored to be more efficient, since a lot of the time that is the
> > work that has to be done to get performance improvements.
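> > 
> > One more cheap trick along the same lines: when a particular call feels
> > slow under the debugger, a throwaway scoped timer turns that feeling
> > into a number.  A rough sketch (nothing like this is in the tree; the
> > names are invented):
> > 
> >     #include <stdio.h>
> >     #include <sys/time.h>
> > 
> >     /* Prints the wall-clock time a scope took when it is left. */
> >     class CrudeTimer
> >     {
> >     public:
> >         CrudeTimer(const char * what) : m_what(what)
> >         {
> >             gettimeofday(&m_start, 0);
> >         }
> >         ~CrudeTimer()
> >         {
> >             struct timeval end;
> >             gettimeofday(&end, 0);
> >             double ms = (end.tv_sec  - m_start.tv_sec)  * 1000.0 +
> >                         (end.tv_usec - m_start.tv_usec) / 1000.0;
> >             fprintf(stderr, "%s: %.3f ms\n", m_what, ms);
> >         }
> >     private:
> >         const char *   m_what;
> >         struct timeval m_start;
> >     };
> > 
> >     int main(void)
> >     {
> >         {
> >             CrudeTimer t("suspect call");   /* wrap any suspect call */
> >             volatile long sink = 0;
> >             for (long i = 0; i < 10000000; i++)
> >                 sink += i;
> >         }
> >         return 0;
> >     }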
> > 
> > Just a couple of thoughts from someone who has been there before.
> 
> Sounds fun.  Right after I rewrite the Makefiles.  
>            
> 	sam th		     
> 	sam@uchicago.edu
> 	http://www.abisource.com/~sam/
> 	GnuPG Key:  
> 	http://www.abisource.com/~sam/key
> 