Search Results: "Craig Sanders"

30 June 2020

Craig Sanders: Fuck Grey Text

fuck grey text on white backgrounds
fuck grey text on black backgrounds
fuck thin, spindly fonts
fuck 10px text
fuck any size of anything in px
fuck font-weight 300
fuck unreadable web pages
fuck themes that implement this unreadable idiocy
fuck sites that don't work without javascript
fuck reactjs and everything like it

thank fuck for Stylus. and uBlock Origin. and uMatrix. Fuck Grey Text is a post from: Errata

16 February 2017

Craig Sanders: New D&D Cantrip

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician

The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death, theirs or yours. This belief can not be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual. New D&D Cantrip is a post from: Errata

9 October 2016

Craig Sanders: Converting to a ZFS rootfs

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died. I've been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There's a patch almost ready (TRIM/Discard support from Nexenta #3656), so I'm betting on that getting merged before it becomes an issue for me.

Here's the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I'm switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G at around $100 AUD each, they're hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better. When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I'll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partitioning and setup, it's easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

The partitions I need are shown in the gdisk listing below. ZFS On Linux uses partition type bf07 ("Solaris Reserved 1") natively, but doesn't seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 ("Solaris Reserved 2") and bf09 ("Solaris Reserved 3") for easy identification. I'll set these up later, once I've got the system booted: I don't want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition. I used gdisk to interactively set up the partitions:
# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1              40            2047   1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199   1024.0 MiB  EF00  EFI System
   3         2099200         6293503   2.0 GiB     8300  Linux filesystem
   4         6293504        14682111   4.0 GiB     8200  Linux swap
   5        14682112       455084031   210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335   2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734   37.2 GiB    BF09  Solaris Reserved 3
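For reference, the same layout could be created non-interactively with sgdisk. This is just a sketch of the equivalent commands (sizes and type codes taken from the listing above), not what I actually ran:

# sketch only: recreate the layout above with sgdisk instead of interactive gdisk
sgdisk --zap-all /dev/sdp
sgdisk -a 8 \
    -n 1:40:2047 -t 1:EF02 -c 1:"BIOS boot partition" \
    -n 2:0:+1G   -t 2:EF00 -c 2:"EFI System" \
    -n 3:0:+2G   -t 3:8300 -c 3:"Linux filesystem" \
    -n 4:0:+4G   -t 4:8200 -c 4:"Linux swap" \
    -n 5:0:+210G -t 5:BF07 -c 5:"Solaris Reserved 1" \
    -n 6:0:+2G   -t 6:BF08 -c 6:"Solaris Reserved 2" \
    -n 7:0:0     -t 7:BF09 -c 7:"Solaris Reserved 3" \
    /dev/sdp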
I then cloned the partition table to the other three SSDs with this little script: clone-partitions.sh
#! /bin/bash
src='sdp'
targets=( 'sdq' 'sdr' 'sds' )
for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" /dev/"$src"
  sgdisk --randomize-guids "/dev/$tgt"
done
3. Create the mdadm array for /boot, the zpool, and the root filesystem.

Most rootfs-on-ZFS guides that I've seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1. There might be uses for this naming scheme in other environments but not in mine, and to me it looks ugly. So I'll use just $(hostname)/root for the rootfs, i.e. ganesh/root.

I wrote a script to automate it, figuring I'd probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines. create.sh
#! /bin/bash
exec &> ./create.log
hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'
md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )
zfs_part=5
# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )
# create /boot raid array
mdadm --create "$md" \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"
mkfs.ext4 "$md"
# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"
# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
zpool set bootfs="$hn/root" "$hn"
# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"
If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script, or do that later after you've got the system up and running on ZFS. If you run mysql or postgresql, read the various tuning guides for how to get the best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion. I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with the commands below, rsynced the data across, and then started postgres again:
zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
  -o primarycache=metadata ganesh/postgres
zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
  -o primarycache=metadata ganesh/pg-xlog
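The bit-torrent download dataset mentioned above would be created the same way; something like this (dataset name and mountpoint are just examples):

# hypothetical dataset for in-progress torrent downloads; point your BT client's
# incomplete/download directory here and have it move finished files elsewhere
zfs create -o recordsize=16K -o mountpoint=/var/lib/bt-downloads ganesh/bt-downloads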
4. rsync my current system to it.

Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven't booted into recovery/rescue/single-user mode, then you should be as close to it as possible: everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway). Then:
hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"
After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression. Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to not be on the new setup) or something.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s. I was kind of expecting closer to 800M/s, but that's good enough: the Crucial MX300s aren't the fastest drives available (but they're great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have. For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting: chroot.sh
#! /bin/sh
hn="$(hostname -s)"
for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done
chroot "/${hn}/root"
Change /etc/fstab (on the new zfs root) to have the zfs root and the ext4-on-raid-1 /boot:
/ganesh/root    /         zfs     defaults                                         0  0
/dev/md0        /boot     ext4    defaults,relatime,nodiratime,errors=remount-ro   0  2
I haven't bothered with setting up the swap at this point. That's trivial and I can do it after I've got the system rebooted with its new ZFS rootfs (which reminds me, I still haven't done that :). Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that's:
GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"
NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here's where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type EF02 partition for it to install itself into. But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:
/usr/sbin/grub-probe: error: failed to get canonical path of '/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.
IMO, that's a bug in grub-probe: it should look in /dev/disk/by-id/ if it can't find what it's looking for in /dev/. I fixed that problem with this script: fix-ata-links.sh
#! /bin/sh
cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .
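A more permanent fix would be a udev rule that recreates the symlinks on every boot. Something along these lines should work, though it's an untested sketch and the rules file name and the matches are only examples:

cat > /etc/udev/rules.d/99-local-ata-dev-symlinks.rules << 'EOF'
# sketch: create ata-* symlinks directly under /dev so grub-probe can find them
SUBSYSTEM=="block", ENV{ID_BUS}=="ata", ENV{DEVTYPE}=="disk", SYMLINK+="ata-$env{ID_SERIAL}"
SUBSYSTEM=="block", ENV{ID_BUS}=="ata", ENV{DEVTYPE}=="partition", SYMLINK+="ata-$env{ID_SERIAL}-part%n"
EOF
udevadm control --reload
udevadm trigger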
After running fix-ata-links.sh, update-grub works fine. NOTE: you will have to add udev rules to create these symlinks (as in the sketch above), or run this script on every boot, otherwise you'll get that error every time you run update-grub in future.

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /. umount-zfs-root.sh
#! /bin/sh
hn="$(hostname -s)"
md="/dev/md0"
for i in dev/pts dev sys proc ; do
  umount "/${hn}/root/$i"
done
umount "$md"
zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"
8. Reboot

Remember to configure the BIOS to boot from your new disks. The system should boot up with the new rootfs; no rescue disk required as in some other guides, since the rsync and chroot stuff has already been done.

9. Other notes

10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended. Converting to a ZFS rootfs is a post from: Errata

15 September 2016

Craig Sanders: Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

It's Alive!

The day before yesterday (at Infoxchange, a non-profit whose mission is "Technology for Social Justice", where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it. Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn't build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing, but it turned out that many programs would segfault - e.g. it couldn't run bash, but sh (dash) was OK. I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn't yet run in jessie).

After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I'd upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.

Anyway, the point of all this is that if anyone else needs to run wheezy on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages (and any -dev packages, including libc6-dev, you might need in your container that are dependent on the specific version of libc6). In my case, I was using docker, but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven't actually tested installing jessie's libc6 on squeeze - if it works, I expect it'll require a lot of extra stuff to be installed too.

I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie. To build it, I had to use a system which hadn't already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker - crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn't great, but it was OK and it worked. Docker has native support for ZFS, so that's what I'm using on my real hardware.

I started with the base wheezy image we're using and created a Dockerfile etc to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:
APT::Default-Release "wheezy";
Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app's container again. I then installed the following:
apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev
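Putting those pieces together, the Dockerfile ends up looking roughly like this (the base image name and mirror URL here are placeholders, not the real ones):

# sketch of the frankenwheezy Dockerfile; image name and mirror URL are placeholders
FROM local/wheezy-base
RUN echo 'deb http://mirror.example.com/debian jessie main'         >> /etc/apt/sources.list && \
    echo 'deb http://mirror.example.com/debian jessie-updates main' >> /etc/apt/sources.list && \
    echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf && \
    apt-get update && \
    apt-get -y -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev \
        zlib1g-dev libssl-dev libpq-dev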
To minimise the risk of incompatible updates, it's best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn't need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them - I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it's working OK now and we're probably better off with libssl-dev from jessie anyway).

Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (co-incidentally, the same machine with the docker VM installed), imported it into docker there, and tested it to make sure it didn't have the same segfault issues that the original wheezy image did. No problem, it worked perfectly.

That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I'm adding to it, but I expect there'll be a few more yaks to shave before I'm finished.

When I finish what I'm currently working on, I'll take a look at what needs to be done to get this app running on jessie. It's on the TODO list at work, but everyone else is too busy: a perfect job for an unpaid volunteer. Wheezy's getting too old to keep using, and this frankenwheezy needs to float away on an iceberg. Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24 is a post from: Errata

28 August 2016

Craig Sanders: fakecloud

I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by debian, ubuntu, etc) without having to install and manage a complete, full-featured cloud environment like openstack. I hacked up something similar several years ago when I was regularly building VM images at home for openstack at work, with just plain-text files served by apache, but that had pretty much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day's hacking with a new web framework. https://github.com/craig-sanders/fakecloud fakecloud is a post from: Errata

9 August 2016

Reproducible builds folks: Reproducible builds: week 67 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday July 31 and Saturday August 6 2016:

Toolchain development and fixes

Packages fixed and bugs filed

The following 24 packages have become reproducible - in our current test setup - due to changes in their build-dependencies: alglib aspcud boomaga fcl flute haskell-hopenpgp indigo italc kst ktexteditor libgroove libjson-rpc-cpp libqes luminance-hdr openscenegraph palabos petri-foo pgagent sisl srm-ifce vera++ visp x42-plugins zbackup

The following packages have become reproducible after being fixed:

The following newly-uploaded packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews and QA

These are reviews of reproducibility issues of Debian packages. 276 package reviews have been added, 172 have been updated and 44 have been removed in this week. 7 FTBFS bugs have been reported by Chris Lamb.

Reproducibility tools

Test infrastructure

For testing the impact of allowing variations of the build path (which up until now we required to be identical for reproducible rebuilds), Reiner Herrmann contributed a patch which enabled build path variations on testing/i386. This is possible now since dpkg 1.18.10 enables the --fixdebugpath build flag feature by default, which should result in reproducible builds (for C code) even with varying paths. So far we haven't had many results due to disturbances in our build network in the last days, but it seems this would mean roughly between 5-15% additional unreproducible packages compared to what we see now. We'll keep you updated on the numbers (and problems with compilers and common frameworks) as we find them.

lynxis continued work to test LEDE and OpenWrt on two different hosts, to include date variation in the tests.

Mattia and Holger worked on the (mass) deployment scripts, so that the - for space reasons - only jenkins.debian.net GIT clone resides in ~jenkins-adm/ and not anymore in Holger's homedir, so that soon Mattia (and possibly others!) will be able to fully maintain this setup, while Holger is doing siesta.

Miscellaneous

Chris, dkg, h01ger and Ximin attended a Core Infrastructure Initiative summit meeting in New York City, to discuss and promote this Reproducible Builds project. The CII was set up in the wake of the Heartbleed SSL vulnerability to support software projects that are critical to the functioning of the internet.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

19 April 2016

Craig Sanders: Book Review: Trader's World by Charles Sheffield

One line review: Boys Own Dale Carnegie Post-Apocalyptic Adventures with casual racism and misogyny. That tells you everything you need to know about this book. I wasn't expecting much from it, but it was much worse than I anticipated. I'm about half-way through it at the moment, and can't decide whether to just give up in disgust or keep reading in horrified fascination to see if it gets even worse (which is all that's kept me going with it so far). Book Review: Trader's World by Charles Sheffield is a post from: Errata

20 January 2016

Craig Sanders: lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0

I had to replace a motherboard and CPU a few days ago (bought an Asus M5A97 R2.0), and wanted to get lm-sensors working properly on it. Got it working eventually, which was harder than it should have been because the lm-sensors site is MIA, seems to have been rm -rf'ed. For anyone else with this motherboard, the config is included below. This inspired me to fix the config for my Asus Sabertooth 990FX motherboard. Also included below. To install, copy-paste to a file under /etc/sensors.d/ and run sensors -s to make sensors evaluate all of the set statements.
# Asus M5A97 R2.0
# based on Asus M5A97 PRO from http://blog.felipe.lessa.nom.br/?p=93
chip "k10temp-pci-00c3"
     label temp1 "CPU Temp (rel)"
chip "it8721-*"
     label  in0 "+12V"
     label  in1 "+5V"
     label  in2 "Vcore"
     label  in3 "+3.3V"
     ignore in4
     ignore in5
     ignore in6
     ignore in7
     ignore fan3
     compute in0  @ * (515/120), @ / (515/120)
     compute in1  @ * (215/120), @ / (215/120)
     label temp1 "CPU Temp"
     label temp2 "M/B Temp"
     set temp1_min 30
     set temp1_max 70
     set temp2_min 30
     set temp2_max 60
     label fan1 "CPU Fan"
     label fan2 "Chassis Fan"
     label fan3 "Power Fan"
     ignore temp3
     set in0_min  12 * 0.95
     set in0_max  12 * 1.05
     set in1_min  5 * 0.95
     set in1_max  5 * 1.05
     set in3_min  3.3 * 0.95
     set in3_max  3.3 * 1.05
     ignore intrusion0
#Asus Sabertooth 990FX
# modified from the version at http://www.spinics.net/lists/lm-sensors/msg43352.html
chip "it8721-isa-0290"
# Temperatures
    label temp1  "CPU Temp"
    label temp2  "M/B Temp"
    label temp3  "VCORE-1"
    label temp4  "VCORE-2"
    label temp5  "Northbridge"         # I put all these here as a reference since the
    label temp6  "DRAM"                # Asus Thermal Radar tool on my Windows box displays
    label temp7  "USB3.0-1"            # all of them.
    label temp8  "USB3.0-2"            # lm-sensors ignores all but the CPU and M/B temps.
    label temp9  "PCIE-1"              # If that is really what they are.
    label temp10 "PCIE-2"
    set temp1_min 0
    set temp1_max 70
    set temp2_min 0
    set temp2_max 60
    ignore temp3
# Fans
    label fan1 "CPU Fan"
    label fan2 "Chassis Fan 1"
    label fan3 "Chassis Fan 2"
    label fan4 "Chassis Fan 3"
#    label fan5 "Chassis Fan 4"      # lm-sensor complains about this
    ignore fan2
    ignore fan3
    set fan1_min 600
    set fan2_min 600
    set fan3_min 600
# Voltages
    label in0 "+12V"
    label in1 "+5V"
    label in2 "Vcore"
    label in3 "+3.3V"
    label in5 "VDDA"
    compute  in0  @ * (50/12), @ / (50/12)
    compute  in1  @ * (205/120), @ / (205/120)
    set in0_min  12 * 0.95
    set in0_max  12 * 1.05
    set in1_min  5 * 0.95
    set in1_max  5 * 1.05
    set in2_min  0.80
    set in2_max  1.6
    set in3_min  3.20
    set in3_max  3.6
    set in5_min  2.2
    set in5_max  2.8
    ignore in4
    ignore in6
    ignore in7
    ignore intrusion0
chip "k10temp-pci-00c3"
     label temp1 "CPU Temp"
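For reference, installing and activating one of these configs looks like this (the file name is arbitrary):

cp asus-sensors.conf /etc/sensors.d/    # file name is arbitrary
sensors -s                              # evaluate all of the 'set' statements
sensors                                 # check the resulting readings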
lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0 is a post from: Errata

4 November 2015

Craig Sanders: Part-time sysadmin work in Melbourne?

I'm looking for a part-time Systems Administration role in Melbourne, either in a senior capacity or happy to assist an existing sysadmin or development team. I'm mostly recovered from a long illness and want to get back to work, on a part-time basis (up to 3 days per week). Preferably in the City or Inner North near public transport. I can commute further if there is scope for telecommuting once I know your systems and people, and trust has been established. If you have a suitable position available or know of someone who does, please contact me by email. Why hire me? Pros: Cons: Full CV available on request. I'm in the top few percent on ServerFault and Unix & Linux StackExchange sites; if you want to get a preview of my problem-solving and technical communication skills, see my profile at: http://unix.stackexchange.com/users/7696/cas?tab=profile CV Summary: Systems Administrator and Programmer with extensive exposure to a wide variety of hardware and software systems. Excellent fault-diagnosis, problem-solving and system design skills. Strong technical/user support background. Ability to communicate technical concepts clearly to non-IT people. Significant IT Management and supervisory experience. Particular Skills Part-time sysadmin work in Melbourne? is a post from: Errata

22 August 2014

Russell Coker: Men Commenting on Women s Issues

A lecture at LCA 2011 which included some inappropriate slides was followed by long discussions on mailing lists. In February 2011 I wrote a blog post debunking some of the bogus arguments in two lists [1]. One of the noteworthy incidents in the mailing list discussion concerned Ted Ts o (an influential member of the Linux community) debating the definition of rape. My main point on that issue in Feb 2011 was that it s insensitive to needlessly debate the statistics. Recently Valerie Aurora wrote about another aspect of this on The Ada Initiative blog [2] and on her personal blog. Some of her significant points are that conference harassment doesn t end when the conference ends (it can continue on mailing lists etc), that good people shouldn t do nothing when bad things happen, and that free speech doesn t mean freedom from consequences or the freedom to use private resources (such as conference mailing lists) without restriction. Craig Sanders wrote a very misguided post about the Ted Ts o situation [3]. One of the many things wrong with his post is his statement I m particularly disgusted by the men who intervene way too early without an explicit invitation or request for help or a clear need such as an immediate threat of violence in womens issues . I believe that as a general rule when any group of people are involved in causing a problem they should be involved in fixing it. So when we have problems that are broadly based around men treating women badly the prime responsibility should be upon men to fix them. It seems very clear that no matter what scope is chosen for fixing the problems (whether it be lobbying for new legislation, sociological research, blogging, or directly discussing issues with people to change their attitudes) women are doing considerably more than half the work. I believe that this is an indication that overall men are failing. Asking for Help I don t believe that members of minority groups should have to ask for help. Asking isn t easy, having someone spontaneously offer help because it s the right thing to do can be a lot easier to accept psychologically than having to beg for help. There is a book named Women Don t Ask which has a page on the geek feminism Wiki [4]. I think the fact that so many women relate to a book named Women Don t Ask is an indication that we shouldn t expect women to ask directly, particularly in times of stress. The Wiki page notes a criticism of the book that some specific requests are framed as complaining , so I think we should consider a complaint from a woman as a direct request to do something. The geek feminism blog has an article titled How To Exclude Women Without Really Trying which covers many aspects of one incident [5]. Near the end of the article is a direct call for men to be involved in dealing with such problems. The geek feminism Wiki has a page on Allies which includes Even a blog post helps [6]. It seems clear from public web sites run by women that women really want men to be involved. Finally when I get blog comments and private email from women who thank me for my posts I take it as an implied request to do more of the same. One thing that we really don t want is to have men wait and do nothing until there is an immediate threat of violence. There are two massive problems with that plan, one is that being saved from a violent situation isn t a fun experience, the other is that an immediate threat of violence is most likely to happen when there is no-one around to intervene. 
Men Don t Listen to Women Rebecca Solnit wrote an article about being ignored by men titled Men Explain Things to Me [7]. When discussing women s issues the term Mansplaining is often used for that sort of thing, the geek feminism Wiki has some background [8]. It seems obvious that the men who have the greatest need to be taught some things related to women s issues are the ones who are least likely to listen to women. This implies that other men have to teach them. Craig says that women need space to discover and practice their own strength and their own voices . I think that the best way to achieve that goal is to listen when women speak. Of course that doesn t preclude speaking as well, just listen first, listen carefully, and listen more than you speak. Craig claims that when men like me and Matthew Garrett comment on such issues we are making women s spaces more comfortable, more palatable, for men . From all the discussion on this it seems quite obvious that what would make things more comfortable for men would be for the issue to never be discussed at all. It seems to me that two of the ways of making such discussions uncomfortable for most men are to discuss sexual assault and to discuss what should be done when you have a friend who treats women in a way that you don t like. Matthew has covered both of those so it seems that he s doing a good job of making men uncomfortable I think that this is a good thing, a discussion that is comfortable and palatable for the people in power is not going to be any good for the people who aren t in power. The Voting Aspect It seems to me that when certain issues are discussed we have a social process that is some form of vote. If one person complains then they are portrayed as crazy. When other people agree with the complaint then their comments are marginalised to try and preserve the narrative of one crazy person. It seems that in the case of the discussion about Rape Apology and LCA2011 most men who comment regard it as one person (either Valeria Aurora or Matthew Garrett) causing a dispute. There is even some commentary which references my blog post about Rape Apology [9] but somehow manages to ignore me when it comes to counting more than one person agreeing with Valerie. For reference David Zanetti was the first person to use the term apologist for rapists in connection with the LCA 2011 discussion [10]. So we have a count of at least three men already. These same patterns always happen so making a comment in support makes a difference. It doesn t have to be insightful, long, or well written, merely I agree and a link to a web page will help. Note that a blog post is much better than a comment in this regard, comments are much like conversation while a blog post is a stronger commitment to a position. I don t believe that the majority is necessarily correct. But an opinion which is supported by too small a minority isn t going to be considered much by most people. The Cost of Commenting The Internet is a hostile environment, when you comment on a contentious issue there will be people who demonstrate their disagreement in uncivilised and even criminal ways. S. E. Smith wrote an informative post for Tiger Beatdown about the terrorism that feminist bloggers face [11]. I believe that men face fewer threats than women when they write about such things and the threats are less credible. 
I don t believe that any of the men who have threatened me have the ability to carry out their threats but I expect that many women who receive such threats will consider them to be credible. The difference in the frequency and nature of the terrorism (and there is no other word for what S. E. Smith describes) experienced by men and women gives a vastly different cost to commenting. So when men fail to address issues related to the behavior of other men that isn t helping women in any way. It s imposing a significant cost on women for covering issues which could be addressed by men for minimal cost. It s interesting to note that there are men who consider themselves to be brave because they write things which will cause women to criticise them or even accuse them of misogyny. I think that the women who write about such issues even though they will receive threats of significant violence are the brave ones. Not Being Patronising Craig raises the issue of not being patronising, which is of course very important. I think that the first thing to do to avoid being perceived as patronising in a blog post is to cite adequate references. I ve spent a lot of time reading what women have written about such issues and cited the articles that seem most useful in describing the issues. I m sure that some women will disagree with my choice of references and some will disagree with some of my conclusions, but I think that most women will appreciate that I read what women write (it seems that most men don t). It seems to me that a significant part of feminism is about women not having men tell them what to do. So when men offer advice on how to go about feminist advocacy it s likely to be taken badly. It s not just that women don t want advice from men, but that advice from men is usually wrong. There are patterns in communication which mean that the effective strategies for women communicating with men are different from the effective strategies for men communicating with men (see my previous section on men not listening to women). Also there s a common trend of men offering simplistic advice on how to solve problems, one thing to keep in mind is that any problem which affects many people and is easy to solve has probably been solved a long time ago. Often when social issues are discussed there is some background in the life experience of the people involved. For example Rookie Mag has an article about the street harassment women face which includes many disturbing anecdotes (some of which concern primary school students) [12]. Obviously anyone who has lived through that sort of thing (which means most women) will instinctively understand some issues related to threatening sexual behavior that I can t easily understand even when I spend some time considering the matter. So there will be things which don t immediately appear to be serious problems to me but which are interpreted very differently by women. The non-patronising approach to such things is to accept the concerns women express as legitimate, to try to understand them, and not to argue about it. For example the issue that Valerie recently raised wasn t something that seemed significant when I first read the email in question, but I carefully considered it when I saw her posts explaining the issue and what she wrote makes sense to me. I don t think it s possible for a man to make a useful comment on any issue related to the treatment of women without consulting multiple women first. 
I suggest a pre-requisite for any man who wants to write any sort of long article about the treatment of women is to have conversations with multiple women who have relevant knowledge. I ve had some long discussions with more than a few women who are involved with the FOSS community. This has given me a reasonable understanding of some of the issues (I won t claim to be any sort of expert). I think that if you just go and imagine things about a group of people who have a significantly different life-experience then you will be wrong in many ways and often offensively wrong. Just reading isn t enough, you need to have conversations with multiple people so that they can point out the things you don t understand. This isn t any sort of comprehensive list of ways to avoid being patronising, but it s a few things which seem like common mistakes. Anne Onne wrote a detailed post advising men who want to comment on feminist blogs etc [13], most of it applies to any situation where men comment on women s issues.

16 December 2013

Craig Sanders: shopping online whinge of the day

There are numerous benefits to shopping online: you can order stuff and pay for it from the comfort of your own home and it'll arrive on your doorstep in just a few days. wonderful, and so many words have been written about the benefits over the years that there's no need to belabour the point. But there are some things about it that really suck and make me NOT want to order stuff online. most of the time my distaste for the privacy and delivery problems is enough to dissuade me from buying anything, helped by the fact that I live a fairly non-consumerist lifestyle (and for a geek, I have an almost non-existent gadget-fetish) but occasionally I want or need something that isn't available locally or which my ill-health makes impractical to buy in person (i've been effectively housebound or in hospital for most of the year) so here are the things that piss me off most about shopping online.
  1. Loss of anonymity and privacy. You can walk into any shop and pay cash for whatever you want, they don't need to know who you are or where you live, and they don't get to accidentally add you to a mailing list against your express wishes: "oh, we didn't realise that when you said DO NOT SPAM ME, you actually meant DO NOT SPAM ME. we'll just keep spamming you for another six months until we get around to processing your removal request". partial solution: use a different email address with each online shop. boycott the shop and destroy the address if they spam. what sucks is that this is actually necessary. it doesn't matter how loudly I say "DO NOT SPAM ME. DO NOT ADD ME TO ANY MAILING LISTS. DO NOT ASK ME TO COMPLETE A SURVEY. DO NOT CONTACT ME FOR ANY REASON NOT DIRECTLY RELATED TO DELIVERING MY ORDER. MY PERSONAL INFORMATION IS PROVIDED SOLELY SO THAT YOU CAN SHIP MY ORDER TO ME AND MAY NOT BE USED FOR ANY OTHER PURPOSE. FAILURE TO RESPECT MY PRIVACY WILL RESULT IN IMMEDIATE AND PERMANENT BOYCOTT" (yes, that IS the text I use in the comments/instructions field of every order I place), marketing vermin will decide that I really want their spam after all because their spam is super interesting and important and isn't really spam at all.
  2. nosy demands for unnecessary information. OK, they need your address to ship your parcel but they don't need your phone number, and they don't need to tie your order to your facebook or other bullshit social-spam account. most online order forms have phone number as a required field. Fortunately, they usually accept 0000000 or some other bogus non-phone number as a valid number... if not, i give them their own phone number.
  3. the thing that really shits me the most about online shopping is that both couriers and australia post suck. the lazy fuckers usually don't even bother attempting to deliver to residential addresses. at best, they just stick a non-delivery card in your letterbox without even attempting to knock on the door... and sometimes they don't even bother doing that. the delivery driver just files a bogus "failed to deliver" or "recipient refused delivery" with their office.
i ve made four online purchases in the last month and a half. only one of them was delivered correctly to my door, even though I was sitting at my desk in the front room of the house, a few feet from my front door, and regularly monitoring the parcel s tracking web page so I knew it was arriving. parcel 1 (Nov 16): DHL from NZ to AU. delivered perfectly. delivered in about two days for $17. order value: around $35. parcel 2 (Nov 20): Aust Post from WA to Vic, Express Post (costs a few bucks more than standard post). order value: around $80. lazy driver didn t bother to attempt delivery. left card in letter box. i had to pick it up 2 days later from a post office the next suburb over (it wasn t there yet on the first attempt to pick up, the next day). Aust Post offered to maybe put me in touch with their re-delivery people who might possibly be able to re-deliver the parcel next week sometime maybe, if they felt like it. knowing that I had another hospital appt the next week, I declined (if i hadn t, it would have been an absolute certainty that they would try to deliver it while i was out). parcel 3 (nov 29): DHL from NZ to AU. order value: around $85. lazy courier didn t even bother to leave a card, just logged it as Recipient refused delivery . This refused delivery lie infuriated me when i saw it on the tracking page so i phoned and yelled at them a lot. DHL delivered it later the same day. if they hadn t I would have had to get myself over to the other side of town during business hours and pick it up from their South Melbourne depot. parcel 4 (today, Dec 16): Aust Post from NSW to Vic, Express Post . order value: around $160. lazy driver didn t even bother to leave a card, just logged it as Attempted delivery being carded to Australia Post outlet . Complete bullshit. There was no delivery attempt, not even a card in the letterbox, let alone a knock on the door with my parcel. Rang Aust Post on 8847 9045 (a phone number curiously absent from their tracking page, but which I had written down after their failure to deliver on Nov 20) and complained. they claim that they will try to deliver it today. i ll believe it when I see it. UPDATE @4.30pm: no, it s not going to be delivered today. they say they might choose to deliver it tomorrow or sometime in the future, if the delivery center manager feels like it. i told them that this is not an option my parcel WILL be delivered, it is what they were paid to do so it is what they ARE going to do. btw, the tracking page says that they attempted delivery at 12.15 today (they didn t not even a card in the letterbox), but the tracking page wasn t updated with that until about 2.30pm. by a complete non-coincidence their delivery center allegedly closes at 2pm, so it s impossible to complain about non-delivery in time for the complaint to do any good. Score so far: DHL 50% delivery rate. Aust Post: 100% failure rate. Annoyance Rate: extremely high. Desire to repeat the experience: non-existent. The Australia Post delivery problems are particularly annoying. I ve always thought of the network of local post offices to be a huge advantage for parcel delivery if i m not home, my parcel will go to a local post office and I can pick it up from there. And this is a great advantage. I used to take advantage of it when I was well enough to work. In fact, I d insist on delivery by Aust. 
Post rather than by a courier for exactly this reason, a local post office is far more convenient than some courier s depot on the other side of town or out in some far outer-suburban hellhole. But when it s used as an excuse to not even bother attempting delivery, it sucks. I m not well, and I should have been lying in bed reading or sleeping today, not wasting what little energy I have waiting for a parcel that never arrived and I certainly shouldn t have to phone Aust Post or DHL to yell at them for failing to do the job that they have been paid to do. shopping online whinge of the day is a post from: Errata

21 June 2013

Craig Sanders: backup-mysql.sh

A bash script to backup mysql databases, with separate schema and plain-text dump files (INSERT commands) for each database. Makes it easy to restore individual databases or copy them into dev/test servers. Keeps 30 days' worth of backups in separate YYYY-MM-DD directories. Also dumps the mysql grants to a separate file, mysql-grants.txt, and creates a sometimes conveniently useful mysql-create-databases.txt containing CREATE DATABASE commands. Based on my similar backup script for postgresql databases.
#! /bin/bash
# mysql backup script
#
# $Id: backup-mysql.sh,v 1.8 2013/06/21 02:20:39 cas Exp $
#
# by Craig Sanders <cas@taz.net.au>
# this script is public domain.  do whatever you want with it.
# set user/password here, or leave undefined to rely on ~/.my.cnf
#MYSQL_USER="root"
#MYSQL_PWD="SECRET"
[ -n "$MYSQL_USER" ] && ARGS="$ARGS -u$MYSQL_USER"
[ -n "$MYSQL_PWD" ] && ARGS="$ARGS -p$MYSQL_PWD"
DATABASES=$(mysql $ARGS -D mysql --skip-column-names -B -e 'show databases;' | egrep -v 'information_schema' );
BACKUPDIR=/var/backups/mysql
YEAR=$(date +"%Y")
MONTH=$(date +"%m")
DAY=$(date +"%d")
DATE="$YEAR-$MONTH/$YEAR-$MONTH-$DAY"
# make sure that perms on all created files/dirs are rwxr-x---
umask 0027
mkdir -p "$BACKUPDIR/$DATE"
cd "$BACKUPDIR/$DATE"
mysql $ARGS -Bse "SELECT CONCAT('SHOW GRANTS FOR \'', user ,'\'@\'', host, '\';') FROM mysql.user" | \
    mysql $ARGS -Bs | \
    sed 's/$/;/g'  > mysql-grants.txt
# "create database" lines, for easy cut-and-paste
> mysql-create-databases.txt
for i in $DATABASES ; do
  echo "CREATE DATABASE $i;" >> mysql-create-databases.txt
done
ARGS2="--skip-opt --no-create-db --no-create-info --single-transaction --flush-logs"
for db in $DATABASES ; do 
    echo -n "backing up $db: schema..."
    mysqldump $ARGS --no-data $db > $db.schema.sql
    echo -n "extended..."
    mysqldump $ARGS $ARGS2 --extended-insert $db > $db.data.extended.sql
    # uncomment if you want complete-insert dumps too.
    #echo -n "data...complete..."
    #mysqldump $ARGS $ARGS2 --complete-insert $db > $db.data.complete.sql
    echo -n "compressing..."
    gzip -9fq $db.schema.sql $db.data.extended.sql #$db.data.complete.sql
    echo "done."
done
echo deleting backups older than 30 days:
find "$BACKUPDIR" -mindepth 1 -a -type d -mtime +30 -print0 | xargs -0r rm -rfv
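Restoring a single database from one of these backups looks something like this (database name and date are placeholders):

# restore one database: create it, load the schema, then load the data
cd /var/backups/mysql/2013-06/2013-06-21
mysql -e 'CREATE DATABASE mydb;'
gunzip -c mydb.schema.sql.gz        | mysql mydb
gunzip -c mydb.data.extended.sql.gz | mysql mydb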
backup-mysql.sh is a post from: Errata

20 June 2013

Craig Sanders: grub-list-kernels.pl

Due to boredom and needing something to do to keep my brain from atrophying, I've decided to polish some of the rough edges off some of my scripts so that they're suitable for publication. Here's the first of them, a perl script to list kernels and other grub menu entries, with numeric indexing suitable for use with grub-set-default and grub-reboot. Output looks like this:
# grub-list-kernels.pl
   0    Debian GNU/Linux    
   1    Advanced options for Debian GNU/Linux   
 1>0    Debian GNU/Linux, with Linux 3.9-1-amd64    
 1>1    Debian GNU/Linux, with Linux 3.9-1-amd64 (recovery mode)    
 1>2    Debian GNU/Linux, with Linux 3.8-2-amd64    
 1>3    Debian GNU/Linux, with Linux 3.8-2-amd64 (recovery mode)    
 1>4    Debian GNU/Linux, with Linux 3.8-1-amd64    
 1>5    Debian GNU/Linux, with Linux 3.8-1-amd64 (recovery mode)    
 1>6    Debian GNU/Linux, with Linux 3.7-trunk-amd64    
 1>7    Debian GNU/Linux, with Linux 3.7-trunk-amd64 (recovery mode)    
 1>8    Debian GNU/Linux, with Linux 3.2.0-4-amd64  
 1>9    Debian GNU/Linux, with Linux 3.2.0-4-amd64 (recovery mode)  
   2    Network boot (iPXE) 
   3    Bootable floppy: LSI    
   4    Bootable floppy: freedos-bare   
  Default: 0  "Debian GNU/Linux"
(this example tells me i need to uninstall some old kernels) and here's the script:
#! /usr/bin/perl 
# parse and list the boot entries in the grub boot menu (grub.cfg)
#
# $Id: grub-list-kernels.pl,v 1.8 2013/06/20 10:36:47 cas Exp $
# Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012, 2013 Craig Sanders <cas@taz.net.au>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# Partial Revision History:
# 2013-06-20 - better handling of non-numeric default saved entries, code still uglyish but better commented.
# 2013-05-13 - first stab at support for submenus...works but uglified the code
# 2010-11-03 - rewrote for grub2 (grub.cfg rather than menu.lst)
use strict;
my $cfg = shift || '/boot/grub/grub.cfg';
my $grubenv='/boot/grub/grubenv';
# hash of kernels (menu entry index -> menu entry description) 
# e.g. '4' => 'Debian GNU/Linux, with Linux 3.8-1-amd64'
my %K = ();
# hash of menuentry_id_option -> menuindex
# e.g. 'gnulinux-3.8-1-amd64-advanced-6bb6d228-0581-49ae-9d49-dd148c273ecc' => 4
my %M = ();
# read in the menu entries.
open(MENU,"<$cfg") || die "couldn't open $cfg for read: $!\n" ;
my $insubmenu = 0;
my $menuitem=0;
my $submenuitem=0;
my $prefix='';
my $menuindex='';
my $kernel='';
my $menuidoption='';
while (<MENU>) {
  #next unless /^\s*menuentry\s/io;
  chomp;
  if ($insubmenu ne 0) {
    next unless /^\s*(menuentry|submenu)\s|^}$/io;
    if (m/^}$/) { $insubmenu = 0 ; $prefix='' ; next } ;
    $menuindex = $prefix . $submenuitem++;
  } else {
    next unless /^\s*(menuentry|submenu)\s/io;
    if (m/submenu/) {
        $insubmenu=1;
        $prefix = $menuitem . '>';
        $submenuitem = 0;
        $menuindex = $submenuitem;
    } ;
    $menuindex = $menuitem++;
  }
  # sometimes the default entry saved in grubenv is the menu_id_option
  # rather than numeric.  The %M hash links menu_id_option to menuindex
  # numbers.
  # e.g. 'gnulinux-3.8-1-amd64-advanced-6bb6d228-0581-49ae-9d49-dd148c273ecc' => 4
  $menuidoption='';
  my $line = $_;
  if ($line =~ m/menuentry_id_option/) {
      $line =~ s/.*menuentry_id_option '//;
      $line =~ s/'.*//;
      $menuidoption=$line;
      $M{$menuidoption} = $menuindex;
  } ;
  s/[^'"]*["']([^'"]*)['"].*/$1/;
  $kernel = $_;
  $K{$menuindex} = $kernel;
  #printf "%4s\t%s\t%s\n", $menuindex, $_, $menuidoption;
  printf "%4s\t%s\t\n", $menuindex, $_;
};
# now get the default kernel and boot-once kernel if any
my $saved_entry='';
my $prev_saved_entry='';
my $defk = 0;
my $once = 0;
open(GRUBENV, "<$grubenv") || die "couldn't open $grubenv for read: $!\n" ;
while (<GRUBENV>) {
  chomp;
  next unless /saved_entry/;
  if (/^saved_entry/) {
    (undef,$saved_entry) = split /=/;
  } elsif (/^prev_saved_entry/) {
    (undef,$prev_saved_entry) = split /=/;
  }
}
close(GRUBENV);
$saved_entry = 0 if ($saved_entry eq '');
if ($prev_saved_entry ne '') {
  $defk = $prev_saved_entry;
  $once = $saved_entry;
} else {
  $defk = $saved_entry;
}
# if $defk is non-numeric, look up index in %M
if ($defk !~ /^[0-9]+$/) {
  $defk = $M{$defk};
}
# if $once is non-numeric, look up index in %M
if ($once !~ /^[0-9]+$/) {
  $once = $M{$once};
}
print "\n  Default: $defk";
print "  \"", $K{$defk}, "\"\n";
if ($prev_saved_entry ne '') {
  print "Boot Once: $once";
  print "  \"", $K{$once}, "\"\n";
}
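For example, using the indexes in the output above, booting the 3.8-2-amd64 recovery entry just once on the next reboot, or making the first submenu entry the permanent default, would look like this (this relies on GRUB_DEFAULT=saved in /etc/default/grub; entry numbers are from my machine):

grub-reboot '1>3'        # boot this entry on the next reboot only
grub-set-default '1>0'   # make this entry the permanent default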
grub-list-kernels.pl is a post from: Errata

7 February 2013

Craig Sanders: Dead Can Dance

I saw Dead Can Dance at the Palais in St Kilda last night, their first tour back in Australia in over 20 years. I've wanted to see them perform live since I first discovered their music in the dim dark past. bought the tickets as soon as the tour was announced last year and have been waiting impatiently ever since. The first act was percussionist David Kuckhermann, who played solo and then joined DCD for the main act. He mostly played the handpan, which looks a lot like an upside-down wok with dimples on it. His tambourine solo was outstanding. Overall, a great start to the evening. I hadn't heard of him before, but I think I'll be ordering some of his CDs. Lisa Gerrard's voice was, as always, trance-inducingly sublime, and Brendan Perry, with a fine voice of his own, really shines in a live performance. Definitely not a disappointment: around three hours of amazing performance, old favourites and new, and the discovery (for me) of a new performer. If I have one (minor) complaint, it's that the volume was a little too loud, causing occasional distortion and difficulty distinguishing the lyrics on some of Brendan Perry's singing. Lisa Gerrard sings without words (glossolalia) and much higher than Perry, so wasn't affected as much. If you haven't heard DCD or Lisa Gerrard's solo albums before, do so; you're in for a treat. labelled as "world fusion", they're impossible to describe accurately. while they started out as Dark Wave or as-goth-as-it-gets in the early 80s, they play across many genres and have strong influences from Middle Eastern, Mediaeval, world music, ambient, and more; they're in a genre of their own. You can't even say they're "a bit like so-and-so" because they aren't. Dead Can Dance is a post from: Errata

17 November 2012

Craig Sanders: done

OK, I m finished my experiment. It s over now and I can break character . First, I have to offer special thanks to tshirtman for being the first to unambiguously exemplify one of my main points. well done! In case it s not blindingly obvious (as it should be), the reason for my post was that I was outraged by the spectacle of one fairly high-profile member of the linux community trying to rally support to shun and exclude another fairly high-profile member because a nightmare had upset her. WTF? Is that really all it s going to take to destroy someone s reputation and perhaps their career? even with the shunning target s own words available and archived to disprove the ridiculous straw-man mis-characterisations of what he actually said? Not one of the arguments against him actually addressed anything he said, they ALL attacked him for things he didn t say, for things that other people claimed he said. I was further outraged by seeing everyone who even suggested that questioning of stats (or, indeed, ANY claim of fact or evidence) may be, in some small way, a valid and reasonable thing to do get instantly put in their place and dismissed as Yet Another Rape Apologist. Are we supposed to be anti-science, anti-scientific method now? or are rape stats a special case like religion where we are just supposed to switch off our analytical brains and accept what we are told on faith, without question? Surely we are capable of better than that? i know we are capable of better than that. Or, at least, i used to know that. Now i m not so sure. In 2011, all it took was Ted Tso ( TT ) making some fairly reasonable statements about the need for any claimed evidence or statistics to be viewed skeptically and that dissenting research should also be considered and he was instantly vilified as a rape apologist . Sorry, but questioning extremely dodgy stats (that even in feminist circles are viewed more as ideological propaganda than as serious research) is NOWHERE NEAR SUFFICIENT to earn the label of rape-apologist. That is not how debate works you can t just refuse to engage with someone s point and simply accuse them of being the enemy for not agreeing 100% with whatever you say at least, not if you have any intellectual honesty or self-respect. (sure, some people are complete jerks and deserve to be told to FOAD in no uncertain terms but a) jerks like that are self-evident and obvious, and b) TT s participation in that thread was at all times civil and reasonable) But that thread is ancient history it was over and done with nearly two years ago.
In October this year, for reasons which are not at all clear, Valerie Aurora ( VA ) decided to revive the issue (which had been resolved back in 2011 with a resounding fuck no, we don t want misogynist shit or porn in our conferences from pretty much the entire linux community including near-universal support for improved anti-harassment policies both for linux.conf.au and for geek conferences in general) and use it to attack TT. And she did so by twisting his words and claiming in a post on The Ada Initiative blog that he said something which he didn t, that rape was impossible if both people were drunk enough . If he chose, he could quite easily win a libel case against her and TAI on that. It s not what he said, it s not even close to what he said, and VA is clearly too smart to honestly believe that it is what he said. In another post on her personal, blog she talks about how what he said was so terrible that it even now gives her nightmares, and that she can t bear the thought of working with him. Again, WTF? VA can say I had nightmares and was upset and furious and THAT is enough to justify a call to shun TT?? He didn t attack her, or threaten her (explicitly or implicitly), he was polite and civil. What he did was disagree with VA by referring to other research that disputed VA s preferred studies. I agree with and support many (perhaps most) of VA s and The Ada Initiative s aims, I certainly believe that linux and open source etc should be very welcoming and supportive of human diversity (including gender and sexuality, identity, religion, politics and so on), believe that it s a good thing that The Ada Initiative exists as part of that diversity welcome to be particularly supportive of women in geekdom. And i wholeheartedly agree that Linux leaders should not make public statements belittling and condoning rape BUT: a) I haven t seen one instance of that happening, ever and b) I find VA s choice of tactics here to be despicable. as i do when anyone else uses similar tactics, because they ARE despicable tactics. They are exactly the same as accusing someone of being a child pornographer for being against net censorship: You dared to disagree with me so I m going to accuse you of being a monster. (and, i must admit, the enthusiasm level of my support for The Ada Initiative is somewhat .diminished by this tactical blunder by the spokesperson and co-founder) There are far more deserving targets of VA s ire than TT. And there are far better ways for the Ada Initiative to achieve their aims.
Why?

So why did I decide to comment when I knew that I was inevitably going to be accused of hating women and being a rape apologist?

Mostly because I thought it would be gutless of me not to. Hardly anyone else had, and they quickly backed down under the accusations of misogyny, and since I consider myself to be psychologically fairly strong, I felt that I am capable of wearing a little shit (or even a lot) for a while. In my egocentric fashion, I thought "if I can't do it, then it's no wonder that no one else dares". And also because anyone who cared to make even the slightest effort to find out what my actual views on sexual harassment, rape, women's rights and numerous inter-related issues are can fairly easily see a very consistent record of the kinds of things I argue for and against, and my scathing responses to actual misogynists when they appear on lists that I participate in. They are not my kind of people.

And even then, I hesitated. It's scary and intimidating to be putting yourself forward to be accused of being one of the things you hate. This is, of course, an instance of the chilling effect. So, I found the prospect scary, almost terrifying, and I can't think of a single person who has met me online or in real life who would even remotely describe me as being any kind of delicate or sensitive wall-flower. Again, "if I can't do it, it's no wonder no one else dares". So I clicked the Publish button. An ego is useful for some things. Also, I took VA's words "but don't be silent" as inspiration.

In the process, I discovered why it is that some people just simply refuse to engage in rational discourse. I've seen it many times from the other side, but I've never experienced the seductive pleasure of indulging in it myself before. There's a liberating freedom in just ignoring any and every point that someone makes and simply accusing them of being the enemy. You don't have to try to understand what they wrote; hell, you don't even have to really read it. You just need to quickly scan it for overall tone, and if they don't seem like they're 100% supportive, you just accuse them of being the enemy or an apologist for the enemy. It's that fucking simple and easy.

Well, sort of easy. Easy for some, perhaps. I personally found it extremely difficult: a struggle to refrain from engaging, to remain in character (I'm not much of an actor). Especially when I kind of agreed with whoever was arguing against my experimental character, or if I thought they made a good point. And even more so when I thought that some comments skirted a bit too close to being the kind of misogynist crap that I didn't want to tolerate having on MY blog. (I resolved that issue by just approving any reply that didn't squick me, or that I could squint at and think "hmmm, borderline, give benefit of doubt".) But even though I don't like it, I can recognise the attraction it holds for some people.
Other thoughts:

I'm particularly disgusted by the men who intervene way too early (without an explicit invitation or request for help, or a clear need such as an immediate threat of violence) in women's issues. Many, or maybe even most, may not realise it, but they are just taking over and asserting male strength and control by "protecting" women, rather than giving them the support and space to discover and practice their own strength and their own voices. These uninvited interventions do not help women; they weaken and undermine them, they perpetuate dependence, they steal strength from the movement. It is patronising and enfeebling. But mostly, they just re-assert male dominance and are an attempt to make women's spaces more comfortable, more palatable, for men.(*)

(It's also quite often very transparent, self-serving and ingratiating behaviour from blokes who want to lay the groundwork for perhaps getting laid one day.)

IMO this goes far beyond a problem with men over-involving themselves in feminist causes. I feel the same way for any relatively less-privileged group with a need to find their own voice and their own power: they'll never find it if members of the privileged class (i.e. white males like me) just ride roughshod over the movement and speak FOR them rather than just silently lending their strength in support. For the most part, they (we) should just shut up and listen; we already have more than enough opportunities to have our say.

(*) Yes, I'm well aware of the difficulty in writing something like that paragraph as a member of a privileged class, without coming across as either self-hating or patronising or both. If I've failed here, it's not for want of trying.

done is a post from: Errata

15 November 2012

Craig Sanders: I had a dream

I had a dream that many prominent members of the Linux community were dressed in Nazi SS uniforms on the Arctic ice, clubbing baby seals to death. I can't bear the thought that so many Linux people may secretly be Nazi seal murderers, so I demand that they be excluded from any future Linux-related events or conferences that I might wish to attend. My nightmare, and the strength of my personal reaction to it, are conclusive proof that they are guilty. I can't remember all those who were identified in my dream right now, so I need to be able to point the finger at any time in the future and have any individual shunned, excluded, and banned from events, conferences, mailing lists, and open source projects in general, with any who communicate with them (or, worse, support the evil-doers by pretending to a moderate, reasonable, rational, and/or evidence-based stance) being likewise exiled. After all, remember that even in these enlightened times there is still a seething culture of penguin-supremacism in the Linux community, overtly pushing the agenda of the pure penguin race over the pernicious and corrosive evil of seals. This culture of seal-murder must be stamped out by whatever means necessary. Zero tolerance for seal-murder apologists! Thank you for understanding and supporting me in my very real fears. You make the world a safer place for everyone. I had a dream is a post from: Errata

10 September 2012

Craig Sanders: openstack, bridging, netfilter and dnat

I just posted the following question to ServerFault, and then realised there might be people out there in magical internetland who know the answer but never visit any of the SO sites, so I've posted it here too. Feel free to respond either here or on ServerFault.
In a recent upgrade (from Openstack Diablo on Ubuntu Lucid to Openstack Essex on Ubuntu Precise), we found that DNS packets were frequently (almost always) dropped on the bridge interface (br100). For our compute-node hosts, that's a Mellanox MT26428 using the mlx4_en driver module.
We've found two workarounds for this:
1. Use an old lucid kernel (e.g. 2.6.32-41-generic). This causes other problems, in particular the lack of cgroups and the old version of the kvm and kvm_amd modules (we suspect the kvm module version is the source of a bug we're seeing where occasionally a VM will use 100% CPU). We've been running with this for the last few months, but can't stay here forever.
2. With the newer Ubuntu Precise kernels (3.2.x), we've found that if we use sysctl to disable netfilter on the bridge (see the sysctl settings below), DNS starts working perfectly again. We thought this was the solution to our problem until we realised that turning off netfilter on the bridge interface will, of course, mean that the DNAT rule that redirects VM requests for the nova-api-metadata server (i.e. redirects packets destined for 169.254.169.254:80 to the compute node's IP on port 8775) will be completely bypassed.
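For reference, the rule being bypassed is an ordinary iptables DNAT redirect. A rough sketch only: the real rule is created and managed by nova-network in its own chain, and the 10.0.0.1 compute-node address here is just a placeholder:
# illustrative only -- substitute the compute node's real IP for 10.0.0.1
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.1:8775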
Long story short: with 3.x kernels, we can have reliable networking and a broken metadata service, or we can have broken networking and a metadata service that would work fine if there were any VMs to service. We haven't yet found a way to have both.
Anyone seen this problem or anything like it before? got a fix? or a pointer in the right direction?
Our suspicion is that it's specific to the Mellanox driver, but we're not sure of that (we've tried several different versions of the mlx4_en driver, starting with the version built in to the 3.2.x kernels all the way up to the latest 1.5.8.3 driver from the Mellanox web site; the mlx4_en driver in the 3.5.x kernel from Quantal doesn't work at all).
BTW, our compute nodes have Supermicro H8DGT motherboards with a built-in Mellanox NIC:
02:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)
We're not using the other two NICs in the system; only the Mellanox and the IPMI card are connected.
Bridge netfilter sysctl settings:
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
Since discovering this bridge-nf sysctl workaround, we've found a few pages on the net recommending exactly this (including Openstack's latest network troubleshooting page, and a launchpad bug report that linked to a blog post with a great description of the problem and the solution). It's easier to find stuff when you know what to search for :), but we haven't found anything on the DNAT issue that it causes.
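If you want to try the same workaround yourself, something along these lines should do it (a sketch, not a transcript of exactly what we ran; note that these sysctls only exist once the bridge module is loaded):
sysctl -w net.bridge.bridge-nf-call-arptables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
# to make it survive a reboot, put the same three settings in /etc/sysctl.conf
# or a file under /etc/sysctl.d/ (again, they can only apply once the bridge module is loaded)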

Update 2012-09-12: Something I should have mentioned earlier: this happens even on machines that don't have any openstack or even libvirt packages installed. Same hardware, same everything, but with not much more than the Ubuntu 12.04 base system installed. On kernel 2.6.32-41-generic, the bridge works as expected. On kernel 3.2.0-29-generic, using the ethernet interface directly, it works perfectly.
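The bridge configuration involved is nothing exotic. Something like the following /etc/network/interfaces stanza (with the bridge-utils package installed) is enough to show the difference; the interface name and addresses below are placeholders rather than our real ones:
# minimal bridge stanza (placeholders; requires bridge-utils)
# bridge_ports should name the NIC being bridged (the Mellanox port in our case)
auto br100
iface br100 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0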
Using a bridge on that same NIC fails unless net.bridge.bridge-nf-call-iptables=0. So, it seems pretty clear that the problem is either in the Mellanox driver, the updated kernel's bridging code, the netfilter code, or some interaction between them. Interestingly, I have other machines (without a Mellanox card) with a bridge interface that don't exhibit this problem, with NICs ranging from cheap r8169 cards to better quality Broadcom tg3 Gbit cards in some Sun Fire X2200 M2 servers and Intel Gbit cards in Supermicro motherboards. Like our openstack compute nodes, they all use the bridge interface as their primary (or sometimes only) interface with an IP address; they're configured that way so we can run VMs using libvirt & kvm with real IP addresses rather than NAT. So that suggests the problem is specific to the Mellanox driver, although the blog post I mentioned above described a similar problem with some Broadcom NICs using the bnx2 driver. openstack, bridging, netfilter and dnat is a post from: Errata

14 April 2012

Craig Sanders: why is bzr so slow?

from the whinge of the day dept: I started a bzr branch of calibre about 2.5 hours ago because I wanted to see how difficult it would be to understand the code and make a few changes. The calibre Get Involved page warns that it can take about an hour, which is excessive to begin with and, worse, a huge understatement.
$ date ; ps u -Cbzr
Sat Apr 14 15:10:00 EST 2012
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
cas      14279  3.0  1.3 286836 215976 pts/3   S+   12:39   4:36 /usr/bin/python /usr/bin/bzr branch lp:calibre
It's still far from finished.
cas@ganesh:/usr/local/src/calibre$ bzr branch lp:calibre
You have not informed bzr of your Launchpad ID, and you must do this to
write to Launchpad or access private data.  See "bzr help launchpad-login".
1395260kB    65kB/s - Fetching revisions:Inserting stream:Estimate 178815/203940
A git clone of a much larger project would have been finished in mere minutes. WTF is bzr so horrendously slow? It's not my connection: I'm on an otherwise idle ADSL2 connection that syncs at about 14Mbps, i.e. capable of downloads of up to about 1.4 megabytes per second, not the 50-70kB/s that bzr is dawdling along at. It's not my CPU, an AMD 1090T hex-core overclocked to 3.7GHz (and almost completely idle), and since I have 16GB RAM, it's not lack of RAM either. Is there anything good about bzr that makes people actually want to use it? Or is it just the association with Ubuntu and Launchpad? Maybe the purpose is to actively discourage casual involvement: you have to make a massive life commitment and run the gauntlet of tediously long waits before you can even look at the code. No wonder github is so popular. why is bzr so slow? is a post from: Errata

11 April 2012

Craig Sanders: comments working again

How embarrassing: I made a point of asking for comments and critique on my previous post, but comments didn't actually work unless you're logged in. I only noticed because someone emailed me. It was the admin-ssl plugin; it's de-activated now, so comments are working again. A few people tried to add comments, according to the log; if you feel like making them again, go right ahead. Thanks, Craig. comments working again is a post from: Errata

9 April 2012

Craig Sanders: getting the terminal size in python

I'm finally getting around to making myself learn python as more than just a read-only language. Working with Openstack kind of requires it. So far, I like it. Much more than I thought I would as an unrepentant sh-and-perl-using systems geek. And getting over my distaste for the white-space issue is also proving to be much easier than I thought it would (although I still think that block-delimiters make the code easier to read; PyHeresy, I know). While hacking the /usr/bin/glance command to have less ugly output, and adding a csv option to list the image index in a usefully parseable format, I needed to figure out how to get the terminal size (height, width). This, cobbled together from various google search results, seems to work. It's probably not strictly-correct python idiom, and there's probably a better way of doing it. Comments and critique are welcome.
import os

def get_terminal_size(fd=1):
    """
    Returns (height, width) of the current terminal. First tries to get
    the size via the termios.TIOCGWINSZ ioctl, then from the LINES and
    COLUMNS environment variables. Defaults to 25 lines x 80 columns if
    both methods fail.

    :param fd: file descriptor (default: 1=stdout)
    """
    try:
        import fcntl, termios, struct
        # TIOCGWINSZ returns rows then columns as two shorts
        hw = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
    except (ImportError, IOError, OSError):
        try:
            # environment variables are strings, so convert them to ints
            hw = (int(os.environ['LINES']), int(os.environ['COLUMNS']))
        except (KeyError, ValueError):
            hw = (25, 80)

    return hw
 
def get_terminal_height(fd=1):
    """
    Returns height of terminal if it is a tty, 999 otherwise
 
    :param fd: file descriptor (default: 1=stdout)
    """
    if os.isatty(fd):
        height = get_terminal_size(fd)[0]
    else:
        height = 999
 
    return height
 
def get_terminal_width(fd=1):
    """
    Returns width of terminal if it is a tty, 999 otherwise
 
    :param fd: file descriptor (default: 1=stdout)
    """
    if os.isatty(fd):
        width = get_terminal_size(fd)[1]
    else:
        width = 999
 
    return width
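A quick usage sketch, just to show what the functions return (this is hypothetical and not part of the glance hack itself):
if __name__ == '__main__':
    # what the functions report for stdout (fd 1)
    print get_terminal_size()    # e.g. (52, 126) in a big xterm, (25, 80) as a fallback
    print get_terminal_height()  # 999 when stdout is not a tty (e.g. piped to less)
    print get_terminal_width()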
getting the terminal size in python is a post from: Errata
