Bitbucket Mercurial Patch Queues

I use Mercurial. I use it for most of my open source development; it is feature-rich and powerful, yet simple and intuitive. What I use most is the MQ (Mercurial Queues) feature for making patch queues (stacks, really). With much of my work being Trac related, I can combine the Trac mirror at Bitbucket and share my patch queues - and work with others on their development.

There are many sources online for learning about Mercurial patch queues (1 2 3), but this post is really a plain how-to written for fellow Trac developers that need to work with my patch queues. Hopefully this live example will also explain the basics of why and how it works - use hg help mq and hg help <command> for each command to learn more.

How-To

Pre-history: I've created https://bitbucket.org/osimons/trac-t10425/src as a starting point for unwinding a complicated set of features into a stack of simpler incremental changes. This was simply done by going to the main Trac repository at Bitbucket (https://bitbucket.org/edgewall/trac) and hitting the 'patch queue' link (right next to 'fork'). Provide your login details if needed, and the name you want for the patch repository ('trac-t10425' is the name I used this time, combining 'trac' and the ticket number it corresponds to).

The work was started by adding a first default_columns.diff patch and committing it to the patch repository. The next step is to attack the remaining parts of the patch, adding each feature as its own patch. Here is the short guide for how to pick this up:

  1. Clone the repository; hg qclone https://bitbucket.org/osimons/trac-t10425 actually clones BOTH the full Trac repository and the patch queue (hidden inside .hg/patches).
  2. The basics of the patch queue:
    > cd trac-t10425
    > python -m trac.ticket.tests.query
    ............................................
    ----------------------------------------------------------------------
    Ran 44 tests in 0.577s
    OK
    > hg qseries
    default_columns.diff
    > hg qpush
    applying default_columns.diff
    now at: default_columns.diff
    > python -m trac.ticket.tests.query
    ................................................
    ----------------------------------------------------------------------
    Ran 48 tests in 0.660s
    OK
    > hg qpop
    popping default_columns.diff
    patch queue now empty
    
  3. So, you pop and push patches, and they can of course be stacked - which is the whole idea for this set of changes. Get to the top of the patch queue, and make a new patch (named after the feature, but it can be renamed later if needed):
    > hg qpush
    applying default_columns.diff
    now at: default_columns.diff
    > hg qnew the_next_thing.diff
    > hg qseries # prints the patch stack
    default_columns.diff
    the_next_thing.diff
    > cat .hg/patches/the_next_thing.diff
    # HG changeset patch
    # Parent e4075428584eea0ca3b6c63984a1f4445d1f9814
    
  4. Hack away at the code for "the next thing", and continually refresh the patch as needed (useful for comparing and reverting baby steps). Finally, the set of changes can be committed to the patch repository. The patch queue is a full repository, so any number of changes to any patches in the queue may be committed:
    > hg qrefresh # updates the current patch, repeat at any time
    > hg commit --mq -m "The next thing is now OK."
    > hg outgoing --mq
    comparing with https://bitbucket.org/osimons/trac-t10425/.hg/patches
    searching for changes
    changeset:   1:xxxxxxxxx
    ....
    
  5. For anyone wanting write access to this patch queue, just contact me with your Bitbucket username, and I'll add write permission for you. Then you can:
    > hg push --mq
    pushing to https://bitbucket.org/osimons/trac-t10425/.hg/patches
    searching for changes
    remote: adding changesets
    remote: adding manifests
    remote: adding file changes
    remote: added 1 changesets with 3 changes to 3 files
    
  6. Updating with changes from others follows the same principle:
    hg pull --mq --update
    

That's it for the basics. It is just a nested repository inside the regular Trac repository, and each can be updated, committed, pulled and pushed as needed. Update Trac with new changes and reapply patches in series one-by-one, adjusting them if needed to make sure they apply cleanly.
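
A typical update cycle could then look something like this (a sketch; it assumes the default path of the clone points back to the Trac mirror):

> hg qpop --all            # unapply all patches
> hg pull --update         # update the underlying Trac clone
> hg qpush                 # reapply the first patch; fix rejects and qrefresh
> hg qpush                 # ...and so on, one by one, up the stack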

Bonus insight

For sporadic work on single patch queues, it becomes a bit cumbersome to keep so many copies of the full Trac repository lying around - it wastes space, and it is a burden to continually keep reinstalling and hooking things up to the Apache development server and more. Instead, it is possible to reuse one checkout of Trac with many patch queues - Mercurial supports multiple patch queues, and just like branches you just switch:

> hg qqueue
patches
t10425-bb (active)
> hg qq --create my-new-q
> hg qq
patches
t10425-bb
my-new-q (active)

By default patch queues are unversioned, so if you want to share or follow changes over time you can init a repository for the new active patch queue:

> hg init --mq
> hg status --mq
A .hgignore
A my_patch.diff
A series
> hg ci --mq -m "Adding stuff."

After working in a new patch queue, this patch queue can also be hooked up to Bitbucket. When creating the new patch queue at Bitbucket, just select NOT to create a series file - seeing as you will provide your own full patch repository, a dummy first changeset with an empty series file would just complicate things.

Edit .hg/patches-my-new-q/.hg/hgrc and add the path to the patch repository. The trick here is that the public URL shown for the Bitbucket patch queue is not exactly what you want - you want to directly match the two nested patch repositories (local and remote). So, make the patch repo's hgrc look like this instead:

[paths]
default = https://bitbucket.org/osimons/trac-t10425/.hg/patches

With paths matched, it just becomes a matter of pushing patch queue changes to the empty Bitbucket repository:

> hg push --mq
pushing to...

The other way around is very easy too. If you find a patch queue of interest, just clone it into your main project's .hg directory - but be sure to name it patches-<queue-name>. Then update .hg/patches.queues to add the queue name to the list of available queues.

> cd .hg
> hg clone https://bitbucket.org/osimons/trac-t10425/.hg/patches patches-t10425
> echo "t10425" >> patches.queues
> hg qq
patches (active)
t10425
> hg qq t10425
> hg qpush --all
applying default_columns.diff
now at: default_columns.diff

Enjoy!

Trac Template Debug

Here is a simplified template for debugging rendering issues or helping out when developing your own Trac / plugin templates. All it does is find the main information available for rendering, and provide a simple (big) printout of the information at the end of each HTML page.

Drop it into your Trac project as templates/site.html, or alternatively if you already have a site.html then just copy & paste the main py:match section into your own file.

It is currently restricted to TRAC_ADMIN permission (<div py:if="'TRAC_ADMIN' in req.perm"...), but please use with caution for anything production related! There is no telling what information it may print, and there are no further checks. Be warned.

<html xmlns="http://www.w3.org/1999/xhtml" 
       xmlns:py="http://genshi.edgewall.org/" 
       xmlns:xi="http://www.w3.org/2001/XInclude"
       py:strip=""> 

<!--! A new debug information <div> at the bottom of all pages -->
<py:match path="body" once="True">
<body py:attrs="select('@*')">
  ${select('*|text()|comment()')}
  <div py:if="'TRAC_ADMIN' in req.perm"
       id="debug"
       style="width: 98%; margin: 5px; border: 2px solid green; padding: 10px; font-family: courier;"
       py:with="b_dir = globals()['__builtins__'].dir">
    <div style="text-indent: -30px; padding-left: 30px;">
      <!--! Some potentially very long lists... -->
      <p style="">perm for ${perm.username}: ${repr(perm.permissions())}</p>  
      <p>project: ${repr(project)}</p>  
      <p>trac: ${repr(trac or 'not defined')}</p>
      <p>context: ${repr(context)}</p>  
      <p>context members: ${repr(b_dir(context))}</p>
      <p><strong>context __dict__:</strong>
        <div py:for="item in sorted(context.__dict__.keys())">
            ${item}: ${repr(context.__dict__[item])}</div></p>
      <p><strong>req.environ:</strong>
        <div py:for="item in sorted(req.environ.keys())">
            ${item}: ${repr(req.environ[item])}</div></p>
      <p><strong>req members:</strong> ${repr(b_dir(req))}</p>
      <p><strong>req __dict__:</strong>
        <div py:for="item in sorted(req.__dict__.keys())">
            ${to_unicode(item)}: ${to_unicode(repr(req.__dict__[item]))}</div></p>
      <p><strong>all objects from locals()['__data__']:</strong>
        <div py:for="item in sorted(locals()['__data__'].keys())">
            ${to_unicode(item)}: ${to_unicode(repr(locals()['__data__'][item]))}</div></p>
      <p><strong>__builtins__:</strong>
        <div py:for="key in sorted(globals()['__builtins__'].keys())">
            ${key}: ${repr(globals()['__builtins__'][key])}</div></p>
      <p py:with="sys = __import__('sys')">
        <strong>sys.path:</strong><br />
        ${pprint(sys.path)}</p>
    </div>
  </div>
</body>
</py:match>

</html>

The debug template also allows you to play around with the information and try it out interactively. Here are some examples:

<p>Try using req.href(): ${req.href('wiki')}</p>
<p>Test fetching an element: ${select('div[@id="mainnav"]')}</p>

If you want to avoid having to reload the Trac process for each template change, just turn on auto_reload to have it picked up automatically:

[trac]
auto_reload = True

Enjoy!

  • Posted: 2011-12-13 13:06 (Updated: 2012-07-10 00:18)
  • Categories: trac
  • Comments (1)

Profiling a Trac request

Every once in a while someone raises the question: "Why does this Trac request take so much time?"

I've been using a simple script that basically takes the web server out of the equation, by configuring a 'real' request and dispatching it directly to Trac. That also means that if the profiled request takes significantly less time than the same request through a web server, then the answer will likely be found somewhere in the server configuration instead of in Trac or plugin code.

Here is the script: attachment:trac-req.py

All you need to do is update the script to suit your local installation:

...
project_path = '/path/to/my_trac_project_directory'
...

If the suspected performance issue is tied to a particular URL or some particular request arguments, then change the variables as needed:

...
url_project = '/timeline'
req_args = {'daysback': 90, 'ticket': 'on'}
username = 'osimons'
req_method = 'GET'
...

Other than that, there are some switches to reduce the output. By default it just dumps everything to stdout. Also, the script is usually configured to perform two requests, and profile the second one. The first request will usually be skewed by one-time imports, template caching and more. However, profiling a second request may not always make sense, so it can be disabled if not wanted:

...
do_output_html = True
do_output_debug_summary = True
do_profile = True
do_profile_second_request = True
...

The output includes the actual HTML and headers if not disabled, and the profiling will look like this (paths shortened to be easier to read):

         140928 function calls (109261 primitive calls) in 0.303 CPU seconds

   Ordered by: internal time, call count

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       77    0.066    0.001    0.066    0.001 .../trac/db/sqlite_backend.py:46(_rollback_on_error)
    31360    0.015    0.000    0.015    0.000 .../genshi-0.6.x/genshi/template/base.py:206(get)
      344    0.015    0.000    0.034    0.000 .../genshi/template/base.py:229(items)
     2572    0.012    0.000    0.012    0.000 .../genshi/core.py:484(escape)
...
lots more ...
...

That's it. It has worked OK for me, but suggestions for improving it are welcome. It would of course be useful to add such profiling to the http://trac-hacks.org/wiki/TracDeveloperPlugin, but I'll leave that patch as an exercise for the reader... :-)

Update 2011-09-14: What if it isn't the Trac code?

As the profiling example above shows, the request is actually quite fast. If the profiling was initiated due to perceived slow requests, let's complete the example and also include some hints for looking elsewhere to debug the "My Trac is slow" feeling.

  1. The first natural place to look is to compare with the same (or a similar) request through the web server. Here is how that would look using Apache Bench (ab):
    $> ab -c 1 -n 3 "http://tracdev-012.local/fullblog/timeline?format=rss&blog=on"
    ...
    Time per request:       190.929 [ms] (mean)
    ...
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       0
    Processing:   123  191  96.1    259     259
    Waiting:       86  156  99.4    227     227
    Total:        123  191  96.1    259     259
    ...
    
  2. Still looks fast. Let's continue looking at the client side. Perhaps the pages contain code that is slow to execute or render? Or require resources that are slow to load? Or even resources returning 404 errors that halt processing for long periods of time? Use Firebug for Firefox, Web Inspector for Safari and Chrome, or whatever else takes your fancy. Running the request may show something like this:
    GET timeline?ticket=on&daysback=90   200 OK     122 ms
    GET trac.css                         200 OK        69 ms
    ...
    GET chrome/site/myscript.js          200 OK            4588 ms
    ...
    GET chrome/site/mylogo               404                 1320 ms
    ...
    ------------------------------------------------------------------
    Total on load                                              8544 ms
    ------------------------------------------------------------------
    

8.5 seconds for a full page load is not perceived as "fast", and slow script processing and missing files may be the cause of the delay for this example. Or could it be web server configuration issues? Customization issues? Plugin issues? Proxy / Cache handling issues? Trust or authentication issues? Go figure...

  • Posted: 2011-09-06 22:12 (Updated: 2011-09-14 13:17)
  • Categories: trac
  • Comments (0)

GPL foundation is not future proof

Things are moving fast in software these days. In many ways much is the same as it has been - like how we write and license our code - but much has changed with regard to how code can be distributed. A couple of days ago, VLC for iOS was removed from the AppStore because it seems the store terms conflict with the letter of the GPLv2 under which VLC is licensed. This caused me to tweet:

oddsimons: #VLC iOS gone as store "imposes additional terms and conditions". Can Linux legally ship as new phone firmware? #android #GPL

The tweet says what I was thinking; I just did not have any answers. Since no one has stepped forward to answer either, I've ended up digging into details and reading various opinions of others to see what underlies all this. It seems to me that in the big scheme of things, the infringement claim from one of the VLC contributors (Rémi Denis-Courmont), and even the involvement of the FSF in a similar case last year, may well be the start of something that will bring the whole house of cards down.

Not intended by all those involved, no doubt. But in the name of freedom. And without anyone having a right to complain. Hear me out...

Embedded Linux and DRM

A couple of years ago, the issue of embedding Linux was raised against TiVo. The GPLv2 license covers distribution on any media, so that naturally also includes flash memory and chips and whatever else plays host to the software.

The fact is that even though the source is available, downloading, improving and compiling will not actually get you anywhere, because the various levels of signatures and DRM preclude you from running this software on the box.

According to the Wikipedia article, the outcome of this clash was this:

  • The GPLv3 draft was updated to explicitly disallow this behaviour in the future
  • Linus Torvalds seems to accept this license breach, as the security concerns outweigh the rights of users

Okeeey...

Usage Terms

So, VLC for iOS comes along. And, as far as I can tell, to avoid rehashing the DRM debate mentioned above, the actual target of the complaint is the AppStore Terms of Agreement that the user has to accept. This is a very long agreement covering all kinds of aspects. In a mailing list post I came across, it seems others were not so certain about this incompatibility (the link to the post eludes me now), but gray areas are definitely present. No doubt people can (and will) continue to weigh words and intentions to try to settle it all, but at the end of the day I suppose it boils down to this one line in the GPLv2 agreement:

"You may not impose any further restrictions on the recipients' exercise of the rights granted herein."

If you take that at its word, then even accepting a secondary level of usage terms in order to keep/use the software really becomes a no-no. The terms may conflict, they may not - and if not, they may do so in the future when the terms change.

Ooops...

Android

By far the most popular Linux-based recent-generation OS is of course Android, in use by a large number of device makers for all kinds of purposes. The Android OS layer is released by Google under an Apache license that does not have these issues (smart move), but it runs on top of a modified version of Linux and contains a fair chunk of GPLv2 code.

Source code rightly published (in theory at least). And upgradable - again in theory, as shown by the recent Sony Ericsson Xperia debacle.

Here are some well-known secrets of the underlying business:

  1. Handset/Device manufacturers survive by selling NEW devices.
  2. Google makes its money from KNOWING YOU and selling MORE ADVERTS.

Did you know that devices even ship with their own EULA? Today I found an HTC EULA that contains this pretty nugget:

"Portions of the Software includes software files subject to certain open source license agreements, then such open source software files are subject to the notices and additional terms and conditions that are referenced in this section."

So the open source agreements are in ADDITION to the HTC conditions? What conditions you may ask?! As far as I can tell, certainly not any conditions that I'd consider compatible with the GPLv2. Opinions on my interpretation of course welcome here - as I may get parts of this wrong.

The other side of Android is the apps, and amid the joyful cheering over VLC for iOS being pulled, voices started singing the praises of the upcoming Android version of VLC. For any sort of sensible comparison, this presumably means that VLC will be available on Android Market (terms) - which of course requires a Google Account, so make sure to agree to that too. So, in some not-so-distant future, by running VLC from Market you will already have agreed...

"... that if Google disables access to your account, you may be prevented from accessing the Market, your account details or any files or other Products that are stored with your account."

Darn...

Individuals and Corporations

Are Android's "compounded terms" worse than Apple's? Most likely not. But the jumble of terms and conditions that follows a GPLv2 application running on a GPLv2 foundation OS on a new-generation device is very confusing. The fact is that, according to the purist, it should be clear-cut:

"Here is the software and the GPLv2 license. Have fun!"

It isn't like that at all. At least Apple owns their own stuff and may set whatever terms you can agree with or not - take it or leave it. The GPL is owned by you (in a broad sense) and apparently restricted by others. This will continue to aggravate people.

We are still at quite an early stage with regard to GPL-licensed phones, tablets, set-top boxes, TVs, embedded media devices and so on. New licensing issues will arise at equal speed in coming years. Hardly anything is really settled legally, and all is swept under the carpet in some stable status quo that certainly will not last. Richard Stallman and Linus Torvalds do not always agree, but they are strong people whose voices carry opinions that are to a large extent unchallenged on their own turf.

What the VLC story also shows is that it just takes ONE important contributor to decide that "enough is enough" and request that any infringement (whatever that may be) ceases immediately. A single person - or corporation. And each contributor matters. There are thousands of people and organisations that have contributed to the Linux foundations.

Hmmm...

House of Cards

Does such a person work for large corporations? Like Apple? Microsoft? Or Nokia, as Rémi does? Or perhaps such a person has fallen on hard times and is currently unemployed and in major financial difficulty - perhaps ready to be taken on a SCO-like joyride by capitalist vultures? I don't know. I certainly don't consider it to be unrealistic. What my 20 years in software have taught me is never to be surprised at what may lie around the corner.

If this house of cards crumbles it will certainly have major effects. I think that anyone that builds a business on top of GPL-licensed land should be aware that their house is located on sand in an earthquake zone. High risk, and there is no one that provides insurance. Beware of seismic activity.

Simple Notify script for TracTalk

At CodeResort.com we have recently made the source code available for the TracTalkPlugin. One of the nice things recently added to this plugin is RPC support - already in production here, of course.

If you haven't looked at the RPC interface, take a look at the 'open' project RPC documentation. If you have access to projects here at CodeResort.com, then you will perhaps see 'Talk' in the project menus too (depending on permission and enabled status).

The primary interface for Talk is web-based - at least until someone makes an alternative client... Trying to keep track of a number of browser windows with sporadic Talk activity was just a bit too much for me to manage manually, so I turned to a programmatic solution...

I'm on OSX. And Growl is the natural notification solution for me. Being a Python developer, I naturally found and installed the Growl SDK (python bindings). The full script is attached to this post - along with the icon I use for the Growl notifications.

Install the dependencies and save the files, chmod u+x, and then do something like:

simon: /tmp> ./coderesort-talk-notifier.py name@example.org epicode open

So, here is the essence of how I use the RPC interface (all other lines are just 'nice-to-have'):

  1. For each of the projects I want to track (as input from the command line), I first retrieve the available Talks in the project. Seeing as each unique URL (= project) requires its own xmlrpclib.ServerProxy, I create the proxy and retrieve the talks:
    server = xmlrpclib.ServerProxy(base_url % project)
    talks = server.talk.getTalks()
    
  2. Then, for each Talk in the project, I want to retrieve the very last message number so that I can check it later to see if it has changed (= activity). Seeing as a project can have many Talks, the most efficient solution is to use an RPC feature called MultiCall - it makes and retrieves many 'commands' using one POST request:
    multicall_msg = xmlrpclib.MultiCall(server)
    for r in talks:
        multicall_msg.talk.getMessages(r['room'], {'limit': 1, 'reverse': True})
    messages = multicall_msg().results
    
  3. Seeing as I consider myself 'online' in the rooms that I poll for activity, I use the opportunity to also make a multicall_seen request following the same pattern as above.
  4. The seen feature is the reason why I poll every 115 seconds - the limit for 'online' marking in a room is defined as 'seen within the last two minutes'. I want to make sure I stay inside that window, so that I don't sporadically appear and disappear to others following the action in the room. A sketch of the full polling loop follows after this list.
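
Putting the pieces together, a bare-bones polling loop could look something like this. It is a sketch, not the attached script: the base_url pattern and the change detection are my assumptions, while talk.getTalks() and talk.getMessages(room, ...) are the calls shown above:

#!/usr/bin/env python
# Bare-bones Talk activity poller - prints instead of notifying Growl.
import time
import xmlrpclib

# Assumed URL pattern - adjust to the real project RPC URL.
base_url = 'https://user:password@coderesort.com/%s/login/xmlrpc'
projects = ['epicode', 'open']

last = {}  # (project, room) -> newest message seen so far

while True:
    for project in projects:
        server = xmlrpclib.ServerProxy(base_url % project)
        talks = server.talk.getTalks()
        # One POST for all rooms: fetch only the newest message in each.
        multicall = xmlrpclib.MultiCall(server)
        for t in talks:
            multicall.talk.getMessages(t['room'], {'limit': 1,
                                                   'reverse': True})
        for t, messages in zip(talks, multicall()):
            key = (project, t['room'])
            if messages and last.get(key) != messages[0]:
                if key in last:  # don't notify on the very first poll
                    print '%s/%s: %r' % (project, t['room'], messages[0])
                last[key] = messages[0]
    # Stay inside the two-minute 'online' window (point 4 above).
    time.sleep(115)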

Go ahead, make your own Talk client! I challenge you to make a better one than my quick & dirty script - and it should really NOT be that hard... :-)

TracRPC plugin development setup

This post is just a summary of how to get a separate TracRPC Python development environment configured.

It presumes you have Python, Subversion and Python-Subversion bindings installed already. Other than that, it depends on virtualenv, a great way of making completely separate Python installations for test and development.

1. Make a virtualenv

simon: ~ > cd dev
simon: ~/dev > virtualenv tracrpc
New python executable in tracrpc/bin/python
Installing setuptools............done.
simon: ~/dev > source tracrpc/bin/activate
(tracrpc)simon: ~/dev > 

We are now running the virtual python.

2. Get Source

We want a recent stable Trac, and TracRPC development depends on running Trac from source, so we need to do a checkout. We also want to check out the TracRPC source code itself.

(tracrpc)simon: ~/dev > mkdir tracrpc/src
(tracrpc)simon: ~/dev > cd tracrpc/src
(tracrpc)simon: ~/dev/tracrpc/src > svn co http://svn.edgewall.org/repos/trac/branches/0.12-stable trac
..... [snip] .....
Checked out revision 10122.
(tracrpc)simon: ~/dev/tracrpc/src > svn co http://trac-hacks.org/svn/xmlrpcplugin/trunk tracrpc
..... [snip] .....
Checked out revision 9092.

3. Install

Installing is done using develop mode, which means that we are running the modules and scripts from the checkout - without building and installing eggs to a separate location.

Install Trac:

(tracrpc)simon: ~/dev/tracrpc/src > cd trac
(tracrpc)simon: ~/dev/tracrpc/src/trac > python setup.py develop
..... [snip] .....
Finished processing dependencies for Trac==0.12.1dev-r10122

Install TracRPC:

(tracrpc)simon: ~/dev/tracrpc/src/trac > cd ../tracrpc
(tracrpc)simon: ~/dev/tracrpc/src/tracrpc > python setup.py develop
..... [snip] .....
Finished processing dependencies for TracXMLRPC==1.1.0-r8688

4. Run Tests

The functional Trac test suite uses twill, so that also becomes a dependency for the plugin tests. Install that first:

(tracrpc)simon: ~/dev/tracrpc/src/tracrpc > easy_install twill
..... [snip] .....
Finished processing dependencies for twill

All should now be installed and working; to make sure, let's run the TracRPC tests:

(tracrpc)simon: ~/dev/tracrpc/src/tracrpc > python setup.py test
..... [snip] .....
Found Trac source: /Users/simon/dev/tracrpc/src/trac
Enabling RPC plugin and permissions...
Created test environment: /Users/simon/dev/tracrpc/src/tracrpc/rpctestenv
Starting web server: http://127.0.0.1:8765
..... [snip] .....
Ran 32 tests in 42.156s
OK
Stopping web server...

Yeah!

5. Inspect

Following a test run, there is a complete working Trac project with a Subversion repository in src/tracrpc/rpctestenv. This is useful for a number of reasons...

  1. The project has debug logging enabled for the test run, so by inspecting rpctestenv/trac/log/trac.log you can see all that happened server-side while executing the tests. The tests run in sequence, and there should be markers in the log indicating where each new test starts.
  2. You can run the Trac environment, and access it anonymously or log in using the user:user or admin:admin accounts created and used in testing:
    (tracrpc)simon: ~/dev/tracrpc/src/tracrpc > tracd --port=8888 --basic-auth="/,rpctestenv/htpasswd,trac" rpctestenv/trac
    
  3. Running the Trac environment also means you can access the project using a client-side library, like the one from the Python standard library - from a new shell:
    >>> import xmlrpclib
    >>> server = xmlrpclib.ServerProxy("http://admin:admin@localhost:8888/trac/login/xmlrpc")
    >>> print server.wiki.getPage('TitleIndex')
    = Welcome to Trac 0.12.1dev =
    ..... [snip] .....
    

OpenVZ on OpenSUSE 11.1 on Mac mini

I like OpenVZ. I've been using it on some development setups using CentOS running on VMware Fusion. It makes it easy to run many Linux machines without exhausting resources on my old MacBook Pro (RAM limitations).

Now I needed a new development server, and working from home most of the time I figured I'd settle for a machine that I could put in my backpack and move between work and home if needed. The Mac mini fits the bill.

So, I proceeded to install CentOS 5.3 on the mini as the only OS. It installed, it booted and looked all right. But no Ethernet. I searched for a suitable NVIDIA driver, but had no luck getting it to work. It just turned out to be too much hassle, and I took an early decision to scrap CentOS. A nice, clean, oh-so-compatible (RHEL5) base for my OpenVZ - but I'd been sheltered from its hardware compatibility issues by VMware Fusion.

openSUSE 11.1 is my favorite distribution for "running things" - good packaging with fresh versions of everything, especially through the Novell Build Service. Really nice. And YaST - a really, really useful system administration tool that works just as well through an SSH terminal.

The problem? OpenVZ does not have releases for openSUSE and recent kernels (like 2.6.27 as used in openSUSE 11.1). Some searching led me to OpenVZ builds for the kernel and tools in one of the Build Service repositories, and I decided to give them a try. The following steps gave me all that I needed for my use:

1. Install openSUSE 11.1

It installs on the Mac mini without issues. All the drivers I needed were there, and the thing just plain worked after install. Nice.

2. Add the OpenVZ repositories

I added the two repositories that I found:

  • Virtualization:OpenVZ:kernel-2.6.27
  • Virtualization:OpenVZ:tools

See the attached script - run it to have the repositories set up, or just add them manually (I like making scripts of what I do - it makes things easier to replay later).
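
Adding them manually would look something like this (the exact Build Service URLs are assumptions following the usual download.opensuse.org repository layout):

zypper ar http://download.opensuse.org/repositories/Virtualization:/OpenVZ:/kernel-2.6.27/openSUSE_11.1/ openvz-kernel
zypper ar http://download.opensuse.org/repositories/Virtualization:/OpenVZ:/tools/openSUSE_11.1/ openvz-tools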

3. Install OpenVZ kernel and tools

zypper install kernel-ovz-default kernel-ovz-default-base kernel-ovz-default-extra
zypper install vzctl vzquota

4. Enable OpenVZ

An init.d script is installed; use it to load/start/stop OpenVZ. If it is not listed in chkconfig, install it:

chkconfig --add vz
chkconfig vz on

The service will then be available to start and stop as service vz start|stop|status|restart. DON'T do this yet - we haven't finished configuring the OpenVZ kernel and booting into it...

5. Bootloader

Use YaST to check the bootloader, and see that the new ovz kernel is listed (and, in my case, set as default).

6. Firewall

While in YaST, turn off the firewall service for now, and mark it as not started by default. The firewall just complicates things when trying to get the basic services to work.

As usual, it should be turned on later when properly configured. I haven't gotten to it yet, so if anyone has advice for the rules I need for a) general two-way access and forwarding, and b) each exposed service inside each container - I would be very grateful for that...

7. Edit kernel parameters

The OpenVZ install guide (and various posts on the internet) has lots of advice on the forwarding and proxy parameters needed. This is the list of parameters I appended to /etc/sysctl.conf:

# OpenVZ settings
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.all.proxy_arp = 1
net.ipv4.conf.default.proxy_arp = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
kernel.sysrq = 1

Are they all needed? Hard to tell - I just stopped fiddling when things finally worked.

8. vz.conf networking

I'm not sure about this, but some advice suggested a setting in /etc/vz/vz.conf should be like this:

NEIGHBOUR_DEVS=all

I'm not sure why, as I haven't used it before - but I've got it now anyway.

9. Reboot

The moment of truth... If all goes well, everything should look as before, but uname -a should say something like:

Linux minivz 2.6.27.11-ovz-1.43-default #1 SMP .....

Hopefully the OpenVZ modules are also loaded - looking something like this:

$ lsmod | grep vz
vzethdev               76032  0 
vznetdev               86024  2 
vzrst                 189224  0 
vzcpt                 174776  0 
tun                    78596  2 vzrst,vzcpt
vzmon                  93968  5 vzethdev,vznetdev,vzrst,vzcpt
vzdquota              108676  0 [permanent]
vzdev                  69392  4 vzethdev,vznetdev,vzmon,vzdquota
ipv6                  343712  42 vzrst,vzcpt,vzmon,ip6t_REJECT,nf_conntrack_ipv6,ip6table_mangle

service vz status should report "OpenVZ is running..."

Also check that the kernel parameters set earlier are effective - sysctl -A | grep forward should show the settings we added.

10. Create containers

This was an install how-to, not an openvz-how-to. There are plenty of guides on the net showing you how to use this wonderful server virtualization. Just note that on openSUSE, the default location for OpenVZ data is /srv/vz and not just /vz as typical on RHEL5.

Being content with openSUSE 11.1, I use the same OS for the containers, via the OpenVZ-provided precreated templates:

cd /srv/vz/template/cache
wget http://download.openvz.org/template/precreated/suse-11.1-x86.tar.gz
wget http://download.openvz.org/template/precreated/suse-11.1-x86_64.tar.gz
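
From there, creating and starting a container follows the usual vzctl pattern - the container ID, hostname and address below are just examples, not from my setup:

vzctl create 101 --ostemplate suse-11.1-x86_64
vzctl set 101 --ipadd 192.168.0.101 --hostname vz101 --save
vzctl start 101
vzctl enter 101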

One final note about openSUSE 11.1 containers: There is no yum. And no zypper repositories configured in the template. You'll want to add that. Again, I have a script for the repositories that I use for my web/python development - just remove the ones you don't need, but at least leave the regular openSUSE ones in place to allow you to run updates and install standard software inside your container:

vzctl runscript 101 opensuse-11.1-repos.sh

The end

Or, "the beginning" perhaps? I'd like to think so. Lots more to be discovered. I'd appreciate feedback and insights - leave a comment.

Reverting changes in Subversion

I just made a comment to a blog post asking about how to best revert changes to files in Subversion:

Make old svn revision the current revision

Looking up my wiki notes, I remembered I had a solution to this. As it may be of interest to others, here is my comment in full:

The trick is actually to use svn merge, and reverse the arguments to do a ‘negative’ (reverse) merge - as opposed to the usual forward merge where one compares old:new.

svn merge -r HEAD:247 myfile.php

Remembering that a changeset of a given number takes the revision of the repository to the same number, this will reverse any changesets down to (and including) 248 - but not 247 itself, as that change is a delta against 246.

For any one-off mistakes, this can be simplified for individual changesets like this (this time reverting just the changes from changeset 248):

svn merge -c -248 myfile.php
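
In both cases the reverse merge only modifies the working copy, so review and commit as usual:

svn diff myfile.php
svn commit -m "Reverted changes from r248." myfile.php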

Update: I was sure I got the first comment in, as none were posted when I made mine. It turns out commenting was moderated, and 5 others had already pointed to the same solution - with some additional links to further reading (nice).

  • Posted: 2008-05-29 15:23 (Updated: 2008-05-29 15:28)
  • Categories: subversion
  • Comments (0)

Using Auto-Generated unique IDs

Auto-generated IDs are everywhere - as developers we use them all the time, and as users we see them in all kinds of contexts.

We do of course need them, but sometimes a lazy developer and/or an overly security-conscious project manager just takes this too far - putting mindless numbers in e-mails, on invoices, or elsewhere exposed to the user - numbers that are, frankly, out of this world.

I'm renewing our Microsoft Partnership, and needed to update some references. Microsoft sends mail between us and the customer, and includes a reference ID in the e-mail subject for our convenience - to make sure we do not confuse this reference with another, no doubt. Here is the top of the e-mail header, with the subject containing the ID:

From:     partnote@microsoft.com
Subject:  Microsoft Partner Program Customer Reference Approved {0x006CEA0712C0BB99F4937CC30A0D9BD001000000F3FFA1F91C86A2365D973DE659A2D50F5407A462DAE9D665EF298C08DDDE05727D8F72484BED402D57EE2FAB181CB5FF}
Date:     5. mars 2008 12.02.31 GMT+01:00
To:       simon@....
....

When I was a kid I learned that a 1 (one) with 80 zeros behind it is an approximation of the number of atoms in the known universe. To save you all from counting, the Microsoft ID is 136 hex digits. That is many, many orders of magnitude larger. Beyond comprehension.
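
A quick check in Python puts a number on it - 136 hex digits span roughly 10^164 possible values, about the square of that atom count:

>>> import math
>>> print '%.1f' % (136 * math.log10(16))   # decimal digits in the ID space
163.8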

It is needed in the e-mail subject to avoid confusion - and of course, very handy to have if we for some reason need to call Microsoft partner support about problems getting the reference registered or approved. Just mention the number.

I see similar examples all the time, and as I'm also paying all the company bills, the use of KID on Norwegian invoices is another more or less daily frustration. For readers that don't know the term, it is a handy reference for matching an electronic payment to a given invoice and/or customer account. A good and noble idea, used without concern for the user having to punch it back in when paying.

KIDs can be quite short - perhaps 5-7 digits referencing our customer account or a specific invoice. Superb use, and this makes sense - I would need to punch in some reference anyway, and that should be the minimum required to make a match at the other end.

Most of the time, however, they get generated as a combination of many factors that are totally unneeded. Perhaps the number is composed of:

  • Invoice number
  • Customer number
  • Department ID
  • Internal references
  • Padding with zeros

All in all, perhaps a nice 25-30 digit number for me to punch. Mindless.

Web Bug Track - Keep watch on those pesky browser bugs

I discovered Web Bug Track today while researching a Trac bug. It collects browser bugs AND workarounds.

Got to love this idea - a useful resource, and I've added their feed just to keep in touch with what people discover. Spending my days in OSX, I'm also less aware of the frequently discovered IE bugs, which seem to outnumber those of other browsers by a factor of 10.

How such a common function as document.createElement() can still be broken is beyond me... Not only broken, but so horribly implemented that it actually accepts an element-representation as input - here is the workaround in action:

  • trac/htdocs/js/query.js

     
    // Convenience function for creating an <input type="radio">
    function createRadio(name, value, id) {
      if (!$.browser.msie) {
        var input = document.createElement("input");
      } else {
        var input = document.createElement('<input name="'+name+'"/>');
      }
      input.type = "radio";
      if (name) input.name = name;
      if (value) input.value = value;

We'll see how this bug report ends up being solved, as that is not nice code to smear around the code base.

Anyone got other useful 'browser bug' resources that I should keep watch on? Add a comment.

  • Posted: 2008-02-05 16:37 (Updated: 2008-02-07 02:52)
  • Categories: browsers
  • Comments (0)