Simple Notify script for TracTalk
At CodeResort.com we have recently made the source code available for the TracTalkPlugin. One of the nice features recently added to the plugin is RPC support - already in production here, of course.
If you haven't looked at the RPC interface, take a look at the 'open' project RPC documentation. If you have access to projects here at CodeResort.com, then you will perhaps see 'Talk' in the project menus too (depending on permission and enabled status).
The primary interface for Talk is web-based - at least until someone makes an alternative client... Trying to keep track of a number of browser windows with sporadic Talk activity was just a bit too much for me to manage manually, so I turned to a programmatic solution...
I'm on OS X, and Growl is the natural notification solution for me. Being a Python developer, I naturally found and installed the Growl SDK (Python bindings). The full script is attached to this post - along with the icon I use for the Growl notifications.
Install the dependencies, save the files, chmod u+x the script, and then do something like:
simon: /tmp> ./coderesort-talk-notifier.py name@example.org epicode open
So, here is the essence of how I use the RPC interface (all other lines are just 'nice-to-have'):
- For each of the projects I want to track (as input from the command line), I first retrieve the available Talks in the project. Since each unique URL (= project) requires its own xmlrpclib.ServerProxy, I create the proxy and retrieve the talks:
server = xmlrpclib.ServerProxy(base_url % project)
talks = server.talk.getTalks()
- Then, for each Talk in the project, I retrieve the very last message number so that I can check it later to see if it has changed (= activity). Since a project can have many Talks, the most efficient solution is to use an RPC feature called MultiCall - it sends many 'commands' in a single POST request and retrieves all the results at once:
multicall_msg = xmlrpclib.MultiCall(server)
for r in talks:
    multicall_msg.talk.getMessages(r['room'], {'limit': 1, 'reverse': True})
messages = multicall_msg().results
- Since I consider myself 'online' in the rooms that I poll for activity, I use the opportunity to also make a multicall_seen request following the same pattern as above.
- The seen feature is the reason why I poll every 115 seconds - the limit for 'online' marking in a room is defined as 'seen within the last two minutes'. I wanted to be sure I stay inside that window, so that I don't sporadically appear and disappear to others following the action in the room.
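Put together, the polling steps above look roughly like this sketch. Only the talk.getTalks and talk.getMessages calls come from the post; the poll_project helper, the 'id' message field, and the injected server object are my own illustrative assumptions (the original script used Python 2's xmlrpclib; xmlrpc.client is the Python 3 equivalent):

```python
# Sketch of the polling logic, assuming messages carry an 'id' field.
import xmlrpc.client

def poll_project(server, last_seen):
    """Return rooms with new activity, updating the room -> last-message map.

    server    -- an xmlrpc.client.ServerProxy for one project URL
    last_seen -- dict mapping room id to the last message id we have seen
    """
    talks = server.talk.getTalks()
    # Bundle one getMessages call per Talk into a single POST request.
    multicall = xmlrpc.client.MultiCall(server)
    for talk in talks:
        multicall.talk.getMessages(talk['room'], {'limit': 1, 'reverse': True})
    changed = []
    for talk, messages in zip(talks, multicall()):
        latest = messages[0]['id'] if messages else None
        room = talk['room']
        if room in last_seen and last_seen[room] != latest:
            changed.append(room)  # the real script fires a Growl notification here
        last_seen[room] = latest
    return changed, last_seen
```

In the real script, something like this runs every 115 seconds for each project given on the command line.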
Go ahead, make your own Talk client! I challenge you to make a better one than my quick & dirty script - and it should really NOT be that hard... :-)
OpenVZ on OpenSUSE 11.1 on Mac mini
I like OpenVZ. I've been using it on some development setups using CentOS running on VMware Fusion. It makes it easy to run many Linux machines without exhausting resources on my old MacBook Pro (RAM limitations).
Now I needed a new development server, and working from home most of the time I figured I'd settle for a machine that I could put in my backpack and move between work and home if needed. The Mac mini fits the bill.
So, I went ahead and installed CentOS 5.3 on the mini as its only OS. It installed, it booted, and everything looked all right. But no Ethernet. I searched for a suitable NVIDIA driver, but had no luck getting it to work. It just turned out to be too much hassle, so I made an early decision to scrap CentOS. A nice, clean, oh-so-compatible (RHEL5) base for my OpenVZ - but VMware Fusion had sheltered me from its hardware compatibility issues.
openSUSE 11.1 is my favorite distribution for "running things" - good packaging with fresh versions of everything, especially through the Novell Build Service. Really nice. And yast - a really, really useful system administration tool that works just as well through an SSH terminal.
The problem? OpenVZ does not provide releases for openSUSE and recent kernels (like the 2.6.27 used in openSUSE 11.1). Some searching led me to OpenVZ builds of the kernel and tools in one of the Build Service repositories, and I decided to give them a try. The following steps gave me all that I needed for my use:
1. Install openSUSE 11.1
It installs on the Mac mini without issues. All the drivers I needed were there, and the thing just plain worked after install. Nice.
2. Add the OpenVZ repositories
I added the two repositories that I found:
- Virtualization:OpenVZ:kernel-2.6.27
- Virtualization:OpenVZ:tools
See the attached script - run it to have the repository files set up, or just add the repositories manually (I like making scripts of what I do - it makes things easier to replay later).
3. Install OpenVZ kernel and tools
zypper install kernel-ovz-default kernel-ovz-default-base kernel-ovz-default-extra
zypper install vzctl vzquota
4. Enable OpenVZ
An init.d script is installed; use it to load/start/stop OpenVZ. If it is not listed in chkconfig, add it:
chkconfig --add vz
chkconfig vz on
The service will then be available to start and stop as service vz start|stop|status|restart. DON'T do this yet - we haven't yet finished configuring the OpenVZ kernel and booting into it...
5. Bootloader
Use yast to check the bootloader and verify that the new ovz kernel is listed (and, in my case, set as the default).
6. Firewall
While in yast, turn off the firewall service for now and mark it as not started by default. The firewall just complicates things when trying to get the basic services to work.
As usual, it should be turned back on later, once properly configured. I haven't gotten to it yet, so if anyone has advice on the rules I need for a) general two-way access and forwarding, and b) each exposed service inside each container - I would be very grateful for that...
7. Edit kernel parameters
The OpenVZ install guide (and various posts on the internet) has lots of advice on the forwarding and proxy parameters needed. This is the list of parameters I appended to /etc/sysctl.conf:
# OpenVZ settings
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.all.proxy_arp = 1
net.ipv4.conf.default.proxy_arp = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
kernel.sysrq = 1
Are they all needed? Hard to tell - I just stopped fiddling when things finally worked.
8. vz.conf networking
I'm not sure about this one, but some advice suggested that a setting in /etc/vz/vz.conf should look like this:
NEIGHBOUR_DEVS=all
I'm not sure why, as I haven't used that before - but I have it now anyway.
9. Reboot
The moment of truth... If all goes well, everything should look as before, but uname -a should say something like:
Linux minivz 2.6.27.11-ovz-1.43-default #1 SMP .....
The OpenVZ modules should also be loaded - looking something like this:
$ lsmod | grep vz
vzethdev  76032  0
vznetdev  86024  2
vzrst    189224  0
vzcpt    174776  0
tun       78596  2 vzrst,vzcpt
vzmon     93968  5 vzethdev,vznetdev,vzrst,vzcpt
vzdquota 108676  0 [permanent]
vzdev     69392  4 vzethdev,vznetdev,vzmon,vzdquota
ipv6     343712 42 vzrst,vzcpt,vzmon,ip6t_REJECT,nf_conntrack_ipv6,ip6table_mangle
service vz status should report "OpenVZ is running..."
Also check that the kernel parameters set earlier are effective - sysctl -A | grep forward should show the settings we added.
10. Create containers
This was an install how-to, not an OpenVZ how-to. There are plenty of guides on the net showing you how to use this wonderful server virtualization. Just note that on openSUSE the default location for OpenVZ data is /srv/vz, not /vz as is typical on RHEL5.
Being content with openSUSE 11.1, I can now use the same OS for the containers via the OpenVZ-provided precreated templates:
cd /srv/vz/template/cache
wget http://download.openvz.org/template/precreated/suse-11.1-x86.tar.gz
wget http://download.openvz.org/template/precreated/suse-11.1-x86_64.tar.gz
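For completeness, getting from a downloaded template to a running container looks roughly like this sketch - the vzctl commands are standard OpenVZ, but the container ID, hostname, and addresses here are made-up examples:

```shell
# Create a container (CTID 101 is arbitrary) from the downloaded template.
vzctl create 101 --ostemplate suse-11.1-x86_64
# Assign a hostname and an IP address (example values) and persist them.
vzctl set 101 --hostname dev101 --ipadd 192.168.1.101 --save
vzctl set 101 --nameserver 192.168.1.1 --save
# Start the container and get a root shell inside it.
vzctl start 101
vzctl enter 101
```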
One final note about openSUSE 11.1 containers: there is no yum, and no zypper repositories are configured in the template. You'll want to add those. Again, I have a script for the repositories I use for my web/Python development - just remove the ones you don't need, but at least leave the regular openSUSE ones in place so you can run updates and install standard software inside your container:
vzctl runscript 101 opensuse-11.1-repos.sh
The end
Or, "the beginning" perhaps? I'd like to think so. Lots more to be discovered. I'd appreciate feedback and insights - leave a comment.