Posts in category trac

Bitbucket Mercurial Patch Queues

I use Mercurial. I use it for most of my open source development, and it is feature-rich and powerful, yet simple and intuitive. What I use most is the MQ (Mercurial Queues) feature for making patch queues (stacks, really). With much of my work being Trac-related, I can use the Trac mirror at Bitbucket to share my patch queues - and work with others on their development.

There are many sources online for learning about Mercurial patch queues (1 2 3), but this post is really a plain how-to written for fellow Trac developers who need to work with my patch queues. Hopefully this live example will also explain the basics of why and how it works - use hg help mq and hg help <command> to learn more about each command.

How-To

Pre-history: I've created https://bitbucket.org/osimons/trac-t10425/src as a start to try to unwind a complicated set of features into a stack of simpler incremental changes. This was simply done by going to the main Trac repository at Bitbucket (https://bitbucket.org/edgewall/trac) and hitting the 'patch queue' link (right next to 'fork'). Provide details of your login if needed, and the name you want for the patch repository ('trac-t10425' is the name I used this time, combining 'trac' and the ticket number it corresponds to).

The work started with adding a first default_columns.diff patch and committing it to the patch repository. The next step is to attack the remaining parts of the original patch, adding each feature as its own patch. Here is a short guide for how to pick this up:

  1. Clone the repository: hg qclone https://bitbucket.org/osimons/trac-t10425, which actually clones BOTH the full Trac repository and the patch queue (hidden inside it in .hg/patches).
  2. The basics of the patch queue:
    > cd trac-t10425
    > python -m trac.ticket.tests.query
    ............................................
    ----------------------------------------------------------------------
    Ran 44 tests in 0.577s
    OK
    > hg qseries
    default_columns.diff
    > hg qpush
    applying default_columns.diff
    now at: default_columns.diff
    > python -m trac.ticket.tests.query
    ................................................
    ----------------------------------------------------------------------
    Ran 48 tests in 0.660s
    OK
    > hg qpop
    popping default_columns.diff
    patch queue now empty
    
  3. So, you pop and push patches, and of course they can be stacked - which is the whole idea for this set of changes. Get to the top of the patch queue and make a new patch (named after the feature, though it can be renamed later if needed):
    > hg qpush
    applying default_columns.diff
    now at: default_columns.diff
    > hg qnew the_next_thing.diff
    > hg qseries # prints the patch stack
    default_columns.diff
    the_next_thing.diff
    > cat .hg/patches/the_next_thing.diff
    # HG changeset patch
    # Parent e4075428584eea0ca3b6c63984a1f4445d1f9814
    
  4. Hack away at the code for "the next thing", and continually refresh the patch as needed (useful for comparing and reverting baby steps). Finally, the set of changes can be committed to the patch repository. The patch queue is a full repository, so any number of changes to any patches in the queue may be committed.
    > hg qrefresh # updates the current patch, repeat at any time
    > hg commit --mq -m "The next thing is now OK."
    > hg outgoing --mq
    comparing with https://bitbucket.org/osimons/trac-t10425/.hg/patches
    searching for changes
    changeset:   1:xxxxxxxxx
    ....
    
  5. For anyone wanting write access to this patch queue, just contact me with your Bitbucket username, and I'll add write permission for you. Then you can:
    > hg push --mq
    pushing to https://bitbucket.org/osimons/trac-t10425/.hg/patches
    searching for changes
    remote: adding changesets
    remote: adding manifests
    remote: adding file changes
    remote: added 1 changesets with 3 changes to 3 files
    
  6. Updating with changes from others follows the same principle:
    > hg pull --mq --update
    

That's it for the basics. The patch queue is just a nested repository inside the regular Trac repository, and each can be updated, committed, pulled and pushed as needed. Update Trac with new changes and reapply the patches in the series one by one, adjusting them if needed to make sure they apply cleanly.

Bonus insight

For sporadic work on single patch queues, it becomes a bit cumbersome to keep so many copies of the full Trac repository lying around - it wastes space, and it is a burden to continually keep reinstalling and hooking things up to the Apache development server and more. Instead, it is possible to reuse one checkout of Trac with many patch queues - Mercurial supports multiple patch queues, and just like branches you simply switch:

> hg qqueue
patches
t10425-bb (active)
> hg qq --create my-new-q
> hg qq
patches
t10425-bb
my-new-q (active)

By default patch queues are unversioned, so if you want to share or follow changes over time you can init a repository for the new active patch queue:

> hg init --mq
> hg status --mq
A .hgignore
A my_patch.diff
A series
> hg ci --mq -m "Adding stuff."

After working in a new patch queue, it too can be hooked up to Bitbucket. When creating the new patch queue at Bitbucket, just select NOT to create a series file - seeing you will provide your own full patch repository, a dummy first changeset with an empty series file would just complicate things.

Edit .hg/patches-my-new-q/.hg/hgrc and add the path to the remote patch repository. The trick here is that the public URL shown for the Bitbucket patch queue is not exactly what you want - you want to directly match the two nested patch repositories (local and remote). So, make the patch repo's hgrc look like this instead:

[paths]
default = https://bitbucket.org/osimons/trac-t10425/.hg/patches

With paths matched, it just becomes a matter of pushing patch queue changes to the empty Bitbucket repository:

> hg push --mq
pushing to...

The other way around is very easy too. If you find a patch queue of interest, just clone it into your main project's .hg directory - but be sure to name it patches-<queue-name>. Then update .hg/patches.queues to add the queue name to the list of available queues.

> cd .hg
> hg clone https://bitbucket.org/osimons/trac-t10425/.hg/patches patches-t10425
> echo "t10425" >> patches.queues
> hg qq
patches (active)
t10425
> hg qq t10425
> hg qpush --all
applying default_columns.diff
now at: default_columns.diff

Enjoy!

Trac Template Debug

Here is a simplified template for debugging rendering issues, or for helping out when developing your own Trac / plugin templates. All it does is collect the main information available for rendering, and provide a simple (big) printout of that information at the end of each HTML page.

Drop it into your Trac project as templates/site.html, or, if you already have a site.html, just copy & paste the main py:match section into your own file.

It is currently restricted to the TRAC_ADMIN permission (<div py:if="'TRAC_ADMIN' in req.perm"...), but please use it with caution for anything production-related! There is no telling what information it may print, and there are no further checks. Be warned.

<html xmlns="http://www.w3.org/1999/xhtml" 
       xmlns:py="http://genshi.edgewall.org/" 
       xmlns:xi="http://www.w3.org/2001/XInclude"
       py:strip=""> 

<!--! A new debug information <div> at the bottom of all pages -->
<py:match path="body" once="True">
<body py:attrs="select('@*')">
  ${select('*|text()|comment()')}
  <div py:if="'TRAC_ADMIN' in req.perm"
       id="debug"
       style="width: 98%; margin: 5px; border: 2px solid green; padding: 10px; font-family: courier;"
       py:with="b_dir = globals()['__builtins__'].dir">
    <div style="text-indent: -30px; padding-left: 30px;">
      <!--! Some potentially very long lists... -->
      <p>perm for ${perm.username}: ${repr(perm.permissions())}</p>
      <p>project: ${repr(project)}</p>  
      <p>trac: ${repr(trac or 'not defined')}</p>
      <p>context: ${repr(context)}</p>  
      <p>context members: ${repr(b_dir(context))}</p>
      <p><strong>context __dict__:</strong>
        <div py:for="item in sorted(context.__dict__.keys())">
            ${item}: ${repr(context.__dict__[item])}</div></p>
      <p><strong>req.environ:</strong>
        <div py:for="item in sorted(req.environ.keys())">
            ${item}: ${repr(req.environ[item])}</div></p>
      <p><strong>req members:</strong> ${repr(b_dir(req))}</p>
      <p><strong>req __dict__:</strong>
        <div py:for="item in sorted(req.__dict__.keys())">
            ${to_unicode(item)}: ${to_unicode(repr(req.__dict__[item]))}</div></p>
      <p><strong>all objects from locals()['__data__']:</strong>
        <div py:for="item in sorted(locals()['__data__'].keys())">
            ${to_unicode(item)}: ${to_unicode(repr(locals()['__data__'][item]))}</div></p>
      <p><strong>__builtins__:</strong>
        <div py:for="key in sorted(globals()['__builtins__'].keys())">
            ${key}: ${repr(globals()['__builtins__'][key])}</div></p>
      <p py:with="sys = __import__('sys')">
        <strong>sys.path:</strong><br />
        ${pprint(sys.path)}</p>
    </div>
  </div>
</body>
</py:match>

</html>

The debug template also allows you to play around with the information and try it out interactively. Here are some examples:

<p>Try using req.href(): ${req.href('wiki')}</p>
<p>Test fetching an element: ${select('div[@id="mainnav"]')}</p>

If you want to avoid having to reload the Trac process for each template change, just turn on auto_reload to have it picked up automatically:

[trac]
auto_reload = True

Enjoy!

  • Posted: 2011-12-13 13:06 (Updated: 2012-07-10 00:18)
  • Categories: trac
  • Comments (1)

Profiling a Trac request

Every once in a while someone raises the question: "Why does this Trac request take so much time?"

I've been using a simple script that basically takes the web server out of the equation, by configuring a 'real' request and dispatching it directly to Trac. That also means that if the profiled request takes significantly less time than the same request through a web server, then the answer will likely be found somewhere in the server configuration instead of in Trac or plugin code.

Here is the script: attachment:trac-req.py

All you need to do is update the script to suit your local installation:

...
project_path = '/path/to/my_trac_project_directory'
...

If the suspected performance issue is tied to a particular URL or some particular request arguments, then change the variables as needed:

...
url_project = '/timeline'
req_args = {'daysback': 90, 'ticket': 'on'}
username = 'osimons'
req_method = 'GET'
...

Other than that, there are some switches to reduce the output. By default it just dumps everything to stdout. Also, the script is usually configured to perform two requests, and profile the second one; the first request will usually be skewed by one-time imports, template caching and more. However, profiling a second request may not always make sense, so it can be disabled if not wanted:

...
do_output_html = True
do_output_debug_summary = True
do_profile = True
do_profile_second_request = True
...
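
For reference, the core of the script looks roughly like this - a minimal sketch only, not the attached script itself, with a simplified WSGI environ that reuses the example values shown above:

import cProfile
from StringIO import StringIO

from trac.web.main import dispatch_request

project_path = '/path/to/my_trac_project_directory'

def run_request():
    environ = {
        'trac.env_path': project_path,
        'REQUEST_METHOD': 'GET',
        'PATH_INFO': '/timeline',
        'QUERY_STRING': 'daysback=90&ticket=on',
        'REMOTE_USER': 'osimons',
        'SCRIPT_NAME': '',
        'SERVER_NAME': 'localhost',
        'SERVER_PORT': '80',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'http',
        'wsgi.input': StringIO(''),
        'wsgi.errors': StringIO(),
        'wsgi.multithread': False,
        'wsgi.multiprocess': False,
        'wsgi.run_once': True,
    }
    def start_response(status, headers):
        return lambda data: None            # discard output in this sketch
    for chunk in dispatch_request(environ, start_response):
        pass                                # consume (and discard) the body

run_request()                    # first request warms up imports and caches
cProfile.run('run_request()')    # profile the second request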

The output includes the actual HTML and headers if not disabled, and the profiling will look like this (paths shortened for readability):

         140928 function calls (109261 primitive calls) in 0.303 CPU seconds

   Ordered by: internal time, call count

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       77    0.066    0.001    0.066    0.001 .../trac/db/sqlite_backend.py:46(_rollback_on_error)
    31360    0.015    0.000    0.015    0.000 .../genshi-0.6.x/genshi/template/base.py:206(get)
      344    0.015    0.000    0.034    0.000 .../genshi/template/base.py:229(items)
     2572    0.012    0.000    0.012    0.000 .../genshi/core.py:484(escape)
...
lots more ...
...

That's it. It has worked OK for me, but suggestions for improving it are welcome. It would of course be useful to add such profiling to the http://trac-hacks.org/wiki/TracDeveloperPlugin, but I'll leave that patch as an exercise for the reader... :-)

Update 2011-09-14: What if it isn't the Trac code?

As the profiling example above shows, the request is actually quite fast. If the profiling was initiated due to perceived slow requests, let's complete the example and also include some hints for looking elsewhere to debug the "My Trac is slow" feeling.

  1. The first natural place to look is to compare with the same (or a similar) request through the web server. Here is how that would look using Apache Bench (ab):
    $> ab -c 1 -n 3 "http://tracdev-012.local/fullblog/timeline?format=rss&blog=on"
    ...
    Time per request:       190.929 [ms] (mean)
    ...
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       0
    Processing:   123  191  96.1    259     259
    Waiting:       86  156  99.4    227     227
    Total:        123  191  96.1    259     259
    ...
    
  2. Still looks fast. Let's continue looking at the client side. Perhaps the pages contain code that is slow to execute or render? Or require resources that are slow to load? Or even resources returning 404 errors that halt processing for long periods of time? Use Firebug for Firefox, the Web Inspector for Safari and Chrome, or whatever else takes your fancy. Running the request may show something like this:
    GET timeline?ticket=on&daysback=90   200 OK     122 ms
    GET trac.css                         200 OK        69 ms
    ...
    GET chrome/site/myscript.js          200 OK            4588 ms
    ...
    GET chrome/site/mylogo               404                 1320 ms
    ...
    ------------------------------------------------------------------
    Total on load                                              8544 ms
    ------------------------------------------------------------------
    

8.5 seconds for a full page load is not perceived as "fast", and slow script processing and missing files may be the cause of the delay in this example. Or could it be web server configuration issues? Customization issues? Plugin issues? Proxy / cache handling issues? Trust or authentication issues? Go figure...

  • Posted: 2011-09-06 22:12 (Updated: 2011-09-14 13:17)
  • Categories: trac
  • Comments (0)

Simple Notify script for TracTalk

At CodeResort.com we have recently made the source code available for the TracTalkPlugin. One of the nice things recently added to this plugin is RPC support - already in production here, of course.

If you haven't looked at the RPC interface, take a look at the 'open' project RPC documentation. If you have access to projects here at CodeResort.com, then you will perhaps see 'Talk' in the project menus too (depending on permission and enabled status).

The primary interface for Talk is web-based - at least until someone makes an alternative client... Trying to keep track of a number of browser windows with sporadic Talk activity was just a bit too much for me to manage manually, so I turned to a programmatic solution...

I'm on OS X, and Growl is the natural notification solution for me. Being a Python developer, I naturally found and installed the Growl SDK (Python bindings). The full script is attached to this post - along with the icon I use for the Growl notifications.
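
For reference, the Growl bindings are used along these lines - a small sketch, where the application and notification names are just examples:

from Growl import GrowlNotifier

growl = GrowlNotifier(applicationName='TalkNotifier',
                      notifications=['Talk activity'])
growl.register()                     # register the application with Growl
growl.notify('Talk activity',        # one of the registered notifications
             'epicode / some-room',  # title
             'New message in room')  # description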

Install the dependencies, save the files, chmod u+x, and then do something like:

simon: /tmp> ./coderesort-talk-notifier.py name@example.org epicode open

So, here is the essence of how I use the RPC interface (all other lines are just 'nice-to-have'):

  1. For each of the projects I want to track (as input from command line), I first retrieve the available Talks in the project. Seeing each unique URL (=project) requires an xmlrpclib.ServerProxy, I create the proxy and retrieve the talks:
    server = xmlrpclib.ServerProxy(base_url % project)
    talks = server.talk.getTalks()
    
  2. Then, for each Talk in the project I want to retrieve the very last message number so that I can check that later to see if it has changed (=activity). Seeing a project can have many Talks, the most efficient solution is to use an RPC feature called MultiCall - it makes and retrieves many 'commands' using one POST request:
    multicall_msg = xmlrpclib.MultiCall(server)
    for r in talks:
        multicall_msg.talk.getMessages(r['room'], {'limit': 1, 'reverse': True})
    messages = multicall_msg().results
    
  3. Seeing I consider myself 'online' in the rooms that I poll for activity, I use the opportunity to also make a multicall_seen request following the same pattern as above.
  4. The seen feature is the reason why I poll every 115 seconds - the limit for being marked 'online' in a room is defined as 'seen within the last two minutes'. I want to make sure I stay inside that window so that I don't sporadically appear and disappear to others following the action in the room. A condensed sketch of the whole loop follows below.
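
Putting the pieces together, the essence of the polling loop looks something like this - a condensed sketch, not the attached script; base_url, notify() and the message field name 'id' are hypothetical stand-ins:

import time
import xmlrpclib

# Assumed URL pattern - the real script builds this from command line input.
base_url = 'https://www.coderesort.com/p/%s/login/xmlrpc'
projects = ['epicode', 'open']
last = {}    # (project, room) -> last message seen

def notify(project, room, message):
    print '%s / %s: %r' % (project, room, message)    # Growl call goes here

while True:
    for project in projects:
        server = xmlrpclib.ServerProxy(base_url % project)
        talks = server.talk.getTalks()
        multicall_msg = xmlrpclib.MultiCall(server)
        for r in talks:
            multicall_msg.talk.getMessages(r['room'],
                                           {'limit': 1, 'reverse': True})
        results = multicall_msg()    # one POST executes all queued calls
        for r, messages in zip(talks, results):
            # 'id' is an assumed field name for the message number
            if messages and last.get((project, r['room'])) != messages[0]['id']:
                last[(project, r['room'])] = messages[0]['id']
                notify(project, r['room'], messages[0])
        # ... a multicall_seen request would follow the same pattern here
    time.sleep(115)    # stay within the two-minute 'online' window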

Go ahead, make your own Talk client! I challenge you to make a better one than my quick & dirty script - and it should really NOT be that hard... :-)

TracRPC plugin development setup

This post is just a summary of how to get a separate TracRPC Python development environment configured.

It presumes you have Python, Subversion and Python-Subversion bindings installed already. Other than that, it depends on virtualenv, a great way of making completely separate Python installations for test and development.

1. Make a virtualenv

simon: ~ > cd dev
simon: ~/dev > virtualenv tracrpc
New python executable in tracrpc/bin/python
Installing setuptools............done.
simon: ~/dev > source tracrpc/bin/activate
(tracrpc)simon: ~/dev > 

We are now running the virtual python.

2. Get Source

We want a recent stable Trac, and TracRPC development depends on running Trac from source, so we need to do a checkout. We also want to check out the TracRPC source code itself.

(tracrpc)simon: ~/dev > mkdir tracrpc/src
(tracrpc)simon: ~/dev > cd tracrpc/src
(tracrpc)simon: ~/dev/tracrpc/src > svn co http://svn.edgewall.org/repos/trac/branches/0.12-stable trac
..... [snip] .....
Checked out revision 10122.
(tracrpc)simon: ~/dev/tracrpc/src > svn co http://trac-hacks.org/svn/xmlrpcplugin/trunk tracrpc
..... [snip] .....
Checked out revision 9092.

3. Install

Installing is done using develop mode, which means that we run the modules and scripts directly from the checkouts - without building and installing eggs to a separate location.

Install Trac:

(tracrpc)simon: ~/dev/tracrpc/src > cd trac
(tracrpc)simon: ~/dev/tracrpc/src/trac > python setup.py develop
..... [snip] .....
Finished processing dependencies for Trac==0.12.1dev-r10122

Install TracRPC:

(tracrpc)simon: ~/dev/tracrpc/src/trac > cd ../tracrpc
(tracrpc)simon: ~/dev/tracrpc/src/tracrpc > python setup.py develop
..... [snip] .....
Finished processing dependencies for TracXMLRPC==1.1.0-r8688

4. Run Tests

The functional Trac test suite uses twill, so that also becomes a dependency for the plugin tests. Install that first:

(tracrpc)simon: ~/dev/tracrpc/src/tracrpc > easy_install twill
..... [snip] .....
Finished processing dependencies for twill

All should now be installed and working; to make sure, let's run the TracRPC tests:

(tracrpc)simon: ~/dev/tracrpc/src/tracrpc > python setup.py test
..... [snip] .....
Found Trac source: /Users/simon/dev/tracrpc/src/trac
Enabling RPC plugin and permissions...
Created test environment: /Users/simon/dev/tracrpc/src/tracrpc/rpctestenv
Starting web server: http://127.0.0.1:8765
..... [snip] .....
Ran 32 tests in 42.156s
OK
Stopping web server...

Yeah!

5. Inspect

Following a test run, there is a complete working Trac project with a Subversion repository in src/tracrpc/rpctestenv. This is useful for a number of reasons...

  1. The project has debug logging enabled for the test run, so by inspecting rpctestenv/trac/log/trac.log you can see all that happened server-side while executing the tests. The tests run in sequence, and there should be markers in the log indicating where each new test starts.
  2. You can run the Trac environment, and access it as anonymous or log in using the user:user or admin:admin users created and used in testing:
    (tracrpc)simon: ~/dev/tracrpc/src/tracrpc > tracd --port=8888 --basic-auth="/,rpctestenv/htpasswd,trac" rpctestenv/trac
    
  3. Running the Trac environment also means you can access the project using a client-side library, like the one in the Python standard library - from a new shell:
    >>> import xmlrpclib
    >>> server = xmlrpclib.ServerProxy("http://admin:admin@localhost:8888/trac/login/xmlrpc")
    >>> print server.wiki.getPage('TitleIndex')
    = Welcome to Trac 0.12.1dev =
    ..... [snip] .....
    

URL Encoding and Quoting

I've just committed a change to the Trac [[Image]] macro that allows more flexible input for URL locations (see trac:changeset:6413).

Having to accept and handle a wider range of input made me start thinking about the security implications and how this could be abused. That got me into a vicious circle of pin-pointing possibilities, deciding how and what to encode and quote, and handling some inconsistencies between the various inputs and types. Not good - and way out of scope for something that in theory was a simple change to the macro.

I won't bore you with the details here, but instead skip right to the conclusions:

  • Quoting of URLs is done by browsers automagically, and for HTML it is generally not needed anymore.
  • HTML-escaping all content is quite enough - it ensures that a double quote (") in a URL shows as &#34; and does not actually close the attribute.

Knowing that, the implementation is as simple as taking the input and sending it off to rendering. However, it turned out that the Trac URL builder (trac.web.href.Href) quotes the input, so the simple solution was to unquote it and just let HTML escaping look after it - as done by default by Genshi, which Trac uses for rendering.

In the end, the most 'complicated' line turned out to be as simple as it gets:

        # use href, but unquote to allow args (use default html escaping)
        raw_url = url = desc = unquote(formatter.href(filespec))
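
To illustrate the conclusion - a small sketch with hypothetical values, where genshi.core.escape is the escaping Genshi applies when rendering:

from urllib import unquote
from genshi.core import escape

quoted = '/trac/wiki/Some%20Page%3Fversion%3D1'    # hypothetical Href output
print unquote(quoted)    # /trac/wiki/Some Page?version=1
print escape(u'http://example.org/x="y"')
# http://example.org/x=&#34;y&#34;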

Learned something (again).

  • Posted: 2008-01-24 13:54 (Updated: 2008-01-26 00:31)
  • Categories: trac
  • Comments (0)