Scientific Linux and Point Releases

November 30, 2011 – 12:58 am

Since I left the CentOS Project's QA team in August, I've slowly been moving all my workstations to Scientific Linux 6. This is not for any real need of their “scientific spins”; it's more that I need an EL6 clone with regular updates, and I'm not a fan of the CentOS cr/ repository.

Having some copious free time tonight, I was catching up on bookmarked and tagged blog posts when I came across a post by Major Hayden (aka Rackerhacker) regarding the automatic updating of SL to the newest point release when it drops.

It turns out that SL won't update you to the newest point release automatically. Most distros in the RH world simply force the update by symlinking the value of $releasever (i.e. 4, 5, 6) to the current release (for CentOS, 5/ currently links to 5.7/; when 5.8 drops, it will be relinked to 5.8/), so all machines running EL5 pull down the new point release the next time they run a 'yum update'. (There are ways around this, which I'll explain in another post if there is interest.) SL instead maps $releasever to the full release.point value, which means your machine stays at that point release, even well after the .point+2 release drops.
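For the curious, the relinking trick those distros use on the mirror side can be sketched in a few lines of shell (the /tmp path here is purely illustrative, not a real mirror layout):

```shell
# simulate a mirror's top-level directory for EL5 (illustrative paths only)
mkdir -p /tmp/mirror/5.7 /tmp/mirror/5.8
ln -sfn 5.7 /tmp/mirror/5
readlink /tmp/mirror/5    # prints 5.7
# when 5.8 drops, the mirror admins simply relink:
ln -sfn 5.8 /tmp/mirror/5
readlink /tmp/mirror/5    # prints 5.8
```

Because yum expands $releasever to plain "5" before hitting the mirror, every client follows the new symlink on its next update without any change on their end.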

So, how to get around this? The best way is to install the sl6x repository with

yum install yum-conf-sl6x

and then confirm its operation by looking at the output of

$ yum repolist
repo id        repo name                                          status
sl             Scientific Linux 6.1 - x86_64                       6,251
sl-security    Scientific Linux 6.1 - x86_64 - security updates      556
sl6x           Scientific Linux 6x - x86_64                        6,251
sl6x-security  Scientific Linux 6x - x86_64 - security updates       556

This will mean that a regular 'yum update' will keep your SL machine up to date as new point releases drop.
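To see why that works, compare how the two repo families expand $releasever. The baseurl below is a sketch of the SL mirror layout, not copied from the actual repo files:

```shell
# how yum might expand $releasever in a baseurl (illustrative URL layout)
base="http://ftp.scientificlinux.org/linux/scientific"
releasever="6.1"                 # the stock sl repos pin the point release
url_sl="${base}/${releasever}/x86_64/os/"
releasever="6x"                  # the sl6x repos track the rolling 6x tree instead
url_sl6x="${base}/${releasever}/x86_64/os/"
echo "$url_sl"
echo "$url_sl6x"
```

The 6x tree always contains the latest point release, so machines pointed at it roll forward automatically.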

For servers, I'm intending to use a new server-oriented distro that's currently in cloak mode until the build queue and everything else are bedded down and we can get two full point releases through without issues, but there'll be more to come on that front later.

LCA2013 Bid Process opens – Canberra at the ready !

January 25, 2011 – 4:22 pm

For the last several months, a small group of people in Canberra, including myself, have been preparing a bid for LCA 2013. This is not just to give us more time to make the conference the most awesome, mind-numbingly good LCA you've ever been to. No – 2013 is also the centenary of the founding of Canberra as the nation's capital. It's a very significant year for us, and we'd all be thrilled if we could show the attendees of LCA our great city and the great work Canberra's FOSS community does to improve everyone's lives.

So we’re really stoked that the bidding process is going to be opened early, and I think it’ll lead to a really interesting competition that will result, whoever wins, in the best LCA ever!

If you’re interested in getting involved, head over to our mailing list, sign up, and hold on tight.

Happy 20th Birthday

June 23, 2009 – 11:37 pm

From: Geoff Huston
Sent: Tuesday, 23 June 2009 6:27 PM
Subject: [AusNOG] Happy 20th Birthday Australian Internet
On the night of the 23rd June 1989, Robert Elz of the University of Melbourne and Torben Nielsen of the University of Hawaii completed the connection work that brought the Internet to Australia. It was a 56k satellite circuit, and the Australian end used a Proteon P4100 router.

20 years, eh?

I wonder if you could make money off something like that….

v6 at 09

January 20, 2009 – 1:16 am

As part of the prep for LCA09 in Hobart, the network team has got to do something we've never had the chance to do before: deploy IPv6 natively. The configuration was pretty simple; we're running stock, out-of-the-box CentOS, which supports v6. Mike Groeneweg from AARNet, the conference bandwidth sponsor, provided us with a /48 allocation – 2001:388:A001:: – and configured their router port with 2001:388:A001::FE/64 (the equivalent of .254 on the conference management VLAN).

He then routed 2001:388:A001::/48 to 2001:388:A001::1/64, and I configured our gateway by adding

NETWORKING_IPV6=yes

to /etc/sysconfig/network and

IPV6INIT=yes
IPV6ADDR=2001:388:A001::1/64
to /etc/sysconfig/network-scripts/ifcfg-eth0. A 'service network restart' later, and I could ping the new IPv6 address. Next up, I moved onto the internal interface, which was simply a matter of adding the range to the interface file and enabling IPv6 on that interface too:

IPV6INIT=yes
IPV6ADDR=2001:388:a001:1::1/64

Another 'service network restart', and hey presto! I couldn't ping the address. It took a minute or two before I realised I hadn't enabled IPv6 forwarding. I poked 1 into all sorts of places in /proc/sys/net/ipv6/conf/ without luck, then I came across the shotgun approach: edit /etc/sysconfig/network and add

IPV6FORWARDING=yes

then restart the network again. Hey presto, I could ping the internal interface with ping6.

At this point, I had the choice of either stateless or stateful addressing. Realising that letting everyone's laptops work it out for themselves was the best bet, I decided on stateless. Under stateless IPv6, hosts can configure themselves automatically when connected to a routed IPv6 network using ICMPv6 router discovery messages: when first connected to a network, a host sends a link-local multicast router solicitation request for its configuration parameters, and suitably configured routers respond with a router advertisement packet containing network-layer configuration parameters. Stateful would have required me to run DHCPv6.
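As a rough illustration of what the laptops do with those router advertisements, here's how a host would derive an EUI-64 interface identifier from its MAC address plus the advertised prefix (the MAC below is made up, and modern stacks often use privacy addresses instead):

```shell
mac="00:16:cb:aa:bb:cc"                  # hypothetical laptop MAC
IFS=: read -r a b c d e f <<< "$mac"
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))    # flip the universal/local bit
# insert ff:fe in the middle and prepend the advertised /64 prefix
addr="2001:388:a001:1:${a}${b}:${c}ff:fe${d}:${e}${f}"
echo "$addr"                             # prints 2001:388:a001:1:0216:cbff:feaa:bbcc
```

No server keeps state about who got which address; the host's own MAC makes the identifier unique on the link.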

So, a quick ‘yum install radvd‘ later, I fired up vim again and edited the default config file, removing the comments from the stanza, then updating the prefix option.

$ vim /etc/radvd.conf

interface eth1
{
        AdvSendAdvert on;
        MinRtrAdvInterval 30;
        MaxRtrAdvInterval 100;
        prefix 2001:388:a001:1::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
        };
};

The 'service' tool got another workout with a 'service network restart', and we were in business. Except for the lack of router advertisements. People playing along at home will realise my mistake here, but it took me about an hour of packet sniffing and tcpdumping before I went back and took another look at the config file. Sure enough, changing it to

AdvRouterAddr on;

worked a treat, and I could fire up the IPv6 stack on my laptop and get an address. A 'ping6' returned the beautiful response

$ ping6
PING 56 data bytes
64 bytes from icmp_seq=0 ttl=50 time=100 ms
64 bytes from icmp_seq=1 ttl=50 time=107 ms

But alas, I was thwarted again by missing something when trying to view the dancing turtle at www.kame.net, and it took an hour or two longer, plus various bits of telnet, netcat and other tools, before the little light went on in my head and I remembered the 'ip6tables' service starting when I booted the box. A quick hole for http traffic later, and we had dancing turtle goodness.
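For anyone hitting the same wall, the hole I punched looked roughly like this (rule placement is an assumption; check where it lands relative to your existing chains before trusting it):

```shell
# allow inbound http over v6, then persist it via the stock CentOS ip6tables service
ip6tables -I INPUT -p tcp --dport 80 -j ACCEPT
service ip6tables save
```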

marching south

January 13, 2009 – 4:37 pm

As I type this I'm sitting at the pier at Port Melbourne, waiting to drive onto the Spirit of Tasmania to head to Hobart via Devonport for LCA2009.

I've got the kit for 2 radio shots to cover the Casino and another venue, plus 6 Sony HVR cameras which I'm bringing down for the LCA2009 AV team in return for somewhere to sleep tomorrow night. This time last year I was running around trying to cover the 4 million things that needed to be done for LCA2008. It was a strange sense of déjà vu to spend today running around chasing down cameras, tripods and other bits, and I had to keep correcting the year and venue when talking to the suppliers the 09 team were using.

Only 1 more sleep, then I hit the ground stumbling south for LCA2009!

scraping away

October 15, 2008 – 1:54 pm

My blog's been scraped again, this time by a site called “TryNewSh*t”, who claim to “help readers find new blogs and we help bloggers expand their audience”.


Going away to a dark place..

October 15, 2008 – 11:42 am

A few people have commented on how quiet my blog has been over the last few months, and it’s mainly been related to the direction my contract took for a period of time.

My current contract is oriented towards producing solutions primarily with open source products: wireless redirectors based around Squid, master/master PostgreSQL and master/slave MySQL clusters, Xen VM farms, Spacewalk servers. Nothing overtly cutting edge, but nicely technical work that every now and again takes me out of my comfort zone and into areas where I learn something new.

Unfortunately, when you're good at giving people what they want, you tend to come to the attention of other areas, and I recently found myself seconded, without any notice or input, into working on the deployment of systems I knew nothing about: namely, Microsoft Exchange and Microsoft Office SharePoint Services.

Whilst the work was challenging, both technically and mentally (and faced with a contract that will be a walk in the park and pays well versus one that is technically challenging but pays less, I will always take the technically challenging one), I've completed it with a better understanding of how the two systems operate, how they co-operate, and how they sometimes just plain won't talk to each other. I've also increased my technical knowledge in areas I never expected to, as well as in technologies and subjects I thought I knew pretty well but found I had a lot more to learn about.

Am I happy I had to do the work? No. Both my wife and I noticed an increase in depressive episodes (more on that later) during the time, and I've put on 6kg in the last 2.5 months, surely a sign that I've been in a “destructive phase” of my life, on top of being thrown in the deep end working with production systems I knew nothing about and couldn't easily find references for.

Am I glad I had to do the work? Yes. It's taken me well outside my comfort zone and made me revisit decisions about my choice of career path (IT rather than other areas), but it has also made me rethink the way I'm doing things, and I now have a long list of better ways those things can be done. That has come about from working with contractors whose skills lie down the closed source path, but who look at problems from all angles, are happy to accept an open source solution to a closed source problem, and are happy to share their knowledge of the way a closed source system works, even though their employer's policy may not lie down that path. It means I can now reach the root cause of a problem long before I previously could.

But, at the end of the day, I'm back in my comfort zone, spending the day migrating lists from Exchange DLs to a shiny spanking new Mailman box, and understanding what every error means and how to fix it without spending days googling obscure error codes…

scraped again..

June 7, 2008 – 9:02 am

It looks like my posts are appearing on another syndication site without attribution again, this time at a site called “”.

Might have to add a redirection somewhere in my apache config..

Brain Dump

June 6, 2008 – 9:27 am

With the end of the financial year coming up, I've been doing a lot of machine changeovers for clients. One of the things I've been trying to do where possible is make the installed software play nice with SELinux, so this post is a brain dump of the setsebool statements needed to get the bits working. Huge thanks to Ralph Angenendt for his list of SELinux booleans on the CentOS Wiki (and thanks to Jim Perrin for blogging about it!)

vsFTPd returns “500 OOPS: cannot change directory /home/abc” when a user logs in:
/usr/sbin/setsebool -P ftp_home_dir=1

Bind can't write zone files to the /var/named/chroot/var/named directory when acting as a slave*:
/usr/sbin/setsebool -P named_write_master_zones=1

{Drupal|Wordpress|mediawiki} fail to connect to the database when first installed:
/usr/sbin/setsebool -P httpd_can_network_connect_db=1

Nagios doesn't like working, but doesn't actually complain:
/usr/sbin/setsebool -P allow_httpd_nagios_script_anon_write=1

*Yes, I know they should live in the /var/named/slave path, but I was building a backup master server for the site, which used a bespoke application to read information from the /var/named directory and for whatever reason couldn't be set to read said files from the /var/named/slave directory.
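One habit worth pairing with the list above: check a boolean's current value with getsebool before flipping it, so you know whether the -P change is actually what fixed things. This needs an SELinux-enabled box, so it's just a sketch:

```shell
# query a single boolean by name
getsebool ftp_home_dir
# or grep the full list when you can't remember the exact name
getsebool -a | grep httpd_can_network_connect
```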


May 30, 2008 – 9:45 am

Keeping up with Erik de Castro Lopo, Steve Hanley, Jon Oxer and Mike Beattie isn't that hard:

more power!

The laptop is an HP nx6320 running Fedora 7 (about to go to Fedora 9); the four screens run 3 desktops.

The desktop machine itself is a standard Fedora 8 install, and has two Nvidia cards. The first two screens from the left are a normal GNOME 4-way workspace with TwinView, the third 22″ runs its own separate “screen” with the XP VM I need for the helpdesk software and Lotus, and the little 19″ on top normally has logs or Nagios on it.

I was originally happy with 2 x 22″, but the manager I report to for this contract wanted 3, so I came in one day to a third one on my desk; then the 19″ arrived as a joke, but quickly got used. What's missing is the 19″ I use for video conferencing, which lives above the middle 22″, but that's off being re-pixeled.

Edit: Corrected spelling of Erik de Castro Lopo's name.