Profiling a Trac request
Every once in a while someone raises the question: "Why does this Trac request take so much time?"
I've been using a simple script that takes the web server out of the equation by configuring a 'real' request and dispatching it directly to Trac. That also means that if the profiled request takes significantly less time than the same request made through a web server, the answer is likely to be found in the server configuration rather than in Trac or plugin code.
Here is the script: attachment:trac-req.py
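The essence of the script is small enough to sketch here. The outline below is not the attachment itself, just an illustration of the idea: build a WSGI environ by hand, point it at the project via trac.env_path, and feed it straight to Trac's dispatch_request() under cProfile. The environ values and the user name are only examples, and the real script handles things (like applying the username) in its own way:

    # Sketch only -- the attached trac-req.py does more (two requests, output switches).
    import cProfile
    import io
    import sys

    from trac.web.main import dispatch_request   # Trac's standard WSGI entry point

    project_path = '/path/to/my_trac_project_directory'

    environ = {
        # Standard WSGI keys describing the request to profile
        'REQUEST_METHOD': 'GET',
        'SCRIPT_NAME': '',
        'PATH_INFO': '/timeline',
        'QUERY_STRING': 'daysback=90&ticket=on',
        'SERVER_NAME': 'localhost',
        'SERVER_PORT': '80',
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'http',
        'wsgi.input': io.BytesIO(b''),
        'wsgi.errors': sys.stderr,
        'wsgi.multithread': False,
        'wsgi.multiprocess': False,
        'wsgi.run_once': True,
        # Trac-specific: which environment to dispatch into, and who we are
        'trac.env_path': project_path,
        'REMOTE_USER': 'osimons',   # the real script may apply the user differently
    }

    def start_response(status, headers, exc_info=None):
        print(status)
        return lambda data: None

    profiler = cProfile.Profile()
    profiler.runcall(dispatch_request, environ, start_response)  # response body discarded here
    profiler.print_stats(sort='time')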
All you need to do is update the script to suit your local installation:
    ...
    project_path = '/path/to/my_trac_project_directory'
    ...
If the suspected performance issue is tied to a particular URL or some particular request arguments, then change the variables as needed:
    ...
    url_project = '/timeline'
    req_args = {'daysback': 90, 'ticket': 'on'}
    username = 'osimons'
    req_method = 'GET'
    ...
Other than that, there are a few switches to reduce the output; by default the script just dumps everything to stdout. The script is also usually configured to perform two requests and to profile the second one, since the first request will usually be skewed by one-time imports, template caching and more. Profiling a second request may not always make sense, though, so it can be disabled if not wanted:
    ...
    do_output_html = True
    do_output_debug_summary = True
    do_profile = True
    do_profile_second_request = True
    ...
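As a rough illustration of how these switches fit together (again just a sketch, with a hypothetical run_request() helper standing in for the environ building and dispatching shown earlier), the warm-up/profile logic could look like this:

    import cProfile

    do_profile = True
    do_profile_second_request = True

    def run_request():
        # Hypothetical helper: build the WSGI environ and call Trac's
        # dispatch_request(), as in the sketch near the top of the page.
        pass

    if do_profile_second_request:
        run_request()                   # warm-up: one-time imports, template caching, ...

    if do_profile:
        profiler = cProfile.Profile()
        profiler.runcall(run_request)   # only the second (warm) request is profiled
        profiler.print_stats(sort='time')
    else:
        run_request()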
The output includes the actual HTML and headers if not disabled, and the profiling output will look something like this (paths shortened for readability):
    140928 function calls (109261 primitive calls) in 0.303 CPU seconds

       Ordered by: internal time, call count

       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
           77    0.066    0.001    0.066    0.001 .../trac/db/sqlite_backend.py:46(_rollback_on_error)
        31360    0.015    0.000    0.015    0.000 .../genshi-0.6.x/genshi/template/base.py:206(get)
          344    0.015    0.000    0.034    0.000 .../genshi/template/base.py:229(items)
         2572    0.012    0.000    0.012    0.000 .../genshi/core.py:484(escape)
        ... lots more ...
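The "Ordered by: internal time, call count" header is what Python's pstats module prints when the stats are sorted by internal time (tottime) and then by call count. If the profile data is dumped to a file instead of printed directly, the same view can be recreated later; the file name below is just an example:

    import pstats

    # Load previously dumped cProfile data (file name is hypothetical).
    stats = pstats.Stats('trac-req.prof')

    # Sort by internal time, then call count -- the same ordering as above --
    # and show the 20 most expensive functions.
    stats.sort_stats('time', 'calls').print_stats(20)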
That's it. It has worked OK for me, but suggestions for improving it are welcome. It would of course be useful to add such profiling to the http://trac-hacks.org/wiki/TracDeveloperPlugin, but I'll leave that patch as an exercise for the reader... :-)
Update 2011-09-14: What if it isn't the Trac code?
As the profiling example above shows, the request itself is actually quite fast. If the profiling was initiated due to perceived slow requests, let's complete the example and include some hints for looking elsewhere when debugging the "My Trac is slow" feeling.
- The first natural place to look is a comparison with the same (or a similar) request made through the web server. Here is how that could look using Apache Bench (ab):
    $> ab -c 1 -n 3 "http://tracdev-012.local/fullblog/timeline?format=rss&blog=on"
    ...
    Time per request:       190.929 [ms] (mean)
    ...
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       0
    Processing:   123  191  96.1    259     259
    Waiting:       86  156  99.4    227     227
    Total:        123  191  96.1    259     259
    ...
- Still looks fast. Let's continue looking at the client side. Perhaps the pages contain code that is slow to execute or render? Or require resources that are slow to load? Or even resources returning 404 errors that halt processing for long periods of time? Use Firebug for Firefox, Web Inspector for Safari and Chrome, or whatever else takes your fancy. Running the request may show something like this:
    GET timeline?ticket=on&daysback=90    200 OK     122 ms
    GET trac.css                          200 OK      69 ms
    ...
    GET chrome/site/myscript.js           200 OK    4588 ms
    ...
    GET chrome/site/mylogo                404       1320 ms
    ...
    ------------------------------------------------------------------
    Total on load                                   8544 ms
    ------------------------------------------------------------------
8.5 seconds for a full page load is not perceived as "fast", and in this example slow script processing and missing files are the likely cause of the delay. Or could it be web server configuration issues? Customization issues? Plugin issues? Proxy / cache handling issues? Trust or authentication issues? Go figure...