Search Results: "Joerg Jaspert"

8 December 2020

Russell Coker: Links December 2020

  • Business Insider has an informative article about the way that Google users can get locked out with no apparent reason and no recourse [1]. Something to share with clients when they consider putting everything in the cloud.
  • Vice has an interesting article about people jailbreaking used Teslas after Tesla has stolen software licenses that were bought with the car [2].
  • The Atlantic has an interesting article titled "This Article Won't Change Your Mind" [3]. It's one of many on the topic of echo chambers but has some interesting points that others don't seem to cover, such as regarding the benefits of groups when not everyone agrees.
  • Inequality.org has lots of useful information about global inequality [4].
  • Jeffrey Goldberg has an insightful interview with Barack Obama for the Atlantic about the future course of American politics and a retrospective on his term in office [5].
  • "A Game Designer's Analysis Of QAnon" is an insightful Medium article comparing QAnon to an augmented reality game [6]. This is one of the best analyses of QAnon operations that I've seen.
  • Decrypting Rita is one of the most interesting web comics I've read [7]. It makes good use of side scrolling and different layers to tell multiple stories at once.
  • PC Mag has an article about the new features in Chrome 87 to reduce CPU use [8]. On my laptop 1/3 of all CPU time is used when it is idle, the majority of which is from Chrome. As the CPU has 2 cores this means the equivalent of 1 core running about 66% of the time just for background tabs. I have over 100 tabs open, which I admit is a lot. But it means that the active tabs (as opposed to the plain HTML or PDF ones) are averaging more than 1% CPU time on an i7, which seems obviously unreasonable. So Chrome 87 doesn't seem to live up to Google's claims.
  • The movie Bad President, starring Stormy Daniels as herself, is out [9]. Poe's Law is passé.
  • Interesting summary of Parler; it seems that it was designed by the Russians [10].
  • Wired has an interesting article about Indistinguishability Obfuscation, how to encrypt the operation of a program [11].
  • Joerg Jaspert wrote an interesting blog post about the difficulties of packaging Rust and Go for Debian [12]. I think that the problem is that many modern languages aren't designed well for library updates. This isn't just a problem for Debian; it's a problem for any long term support of software that doesn't involve transferring a complete archive of everything, and it's a problem for any disconnected development (remote sites and sites dealing with serious security). Having an automatic system for downloading libraries is fine. But there should be an easy way of getting the same source via an archive format (zip will do, as any archive can be converted to any other easily enough) and with version numbers.
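As a rough illustration of the kind of workflow being asked for: both ecosystems already ship commands that dump dependency sources into a plain directory tree. Whether those meet the versioned-archive wish above is a separate question; this is just a sketch of tools that exist today:

# Rust: copy the sources of all dependencies into a local vendor/ directory
cargo vendor

# Go: fetch module sources and copy them into a vendor/ directory
go mod download
go mod vendor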

2 November 2020

Joerg Jaspert: Debian NEW Queue, Rust packaging

Debian NEW Queue - So for some reason I got myself motivated again to deal with some packages in Debian's NEW Queue. We had 420 source packages waiting for some kind of processing when I started; now we are down to something around 10. (Silly, people keep uploading stuff...) That's not entirely my own work, others from the team have been active too, but for those few days I went through a lot of the stuff that was waiting. And I must say it still feels mostly like it did when I somehow stopped doing much in NEW. Except - well, I feel that maintainers are much better at preparing their packages; especially that dreaded task of getting the copyright file written seems to be one that is handled much better. Now, that's not supported by any real numbers, just a feeling, but a good one, I think.

Rust - Dealing with NEW meant I got in contact with one part that currently generates some friction between the FTP Team and one group of package maintainers - the Rust team. Note: this is, of course, entirely written from my point of view, though with the intention of presenting it as objectively as possible. Also, I know what Rust is, and have tried a "Hello world" in it, but that's about the extent of my deep knowledge of it...

The problem - Libraries in Rust are bundled/shipped/whatever in something called crates, and you manage what your stuff needs and provides with a tool called cargo. A library (one per crate) can provide multiple features; say, a TLS lib can link against gnutls or openssl or some other random implementation. Such features may even be combinable in various different ways, so one can have a high number of possible feature combinations for one crate. There is a tool called debcargo which helps create a Debian package out of a crate. And that tool generates so-called feature packages, one per feature / combination thereof. Those feature packages are empty packages, only containing a symlink for their /usr/share/doc/ directory, so their size is smaller than the metadata they produce inside the archive and the files generated from it - stuff that every user everywhere has to download and their apt has to process. Additionally, any change of those feature sets means one round through NEW, which is also not ideal. So, naturally, the FTP Team dislikes those empty feature packages. Really, a lot. There appears to be a different way: not having the feature packages, but putting all the combinations into a Provides header. That sometimes works, but has two problems (an illustrative sketch of both variants follows the list):
  • It can generate really long Provides: lines. I mean, REALLY REALLY REALLY long. Somewhere around 250kb is the current record. That's long enough that a tool (not dak itself) broke on it. Sure, that tool needs to be fixed, but still, that's not nice. Currently this is the variant we prefer, though.
  • Some of the features may need different dependencies (say, gnutls vs openssl); should those conflict with each other, you cannot combine them into one package.
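To make the two variants concrete, here is a purely illustrative debian/control-style sketch - the crate and feature names are invented and the stanzas are heavily abbreviated:

# Variant 1: one (empty) feature package per feature
Package: librust-example-dev
Package: librust-example+gnutls-dev
Package: librust-example+openssl-dev

# Variant 2: a single package listing the combinations in Provides
Package: librust-example-dev
Provides: librust-example+gnutls-dev (= ${binary:Version}),
          librust-example+openssl-dev (= ${binary:Version})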

Solutions - Currently we do not have a good one. The Rust maintainers and the FTP Team are talking, exploring various ideas; we will see what comes out of it.

Devel archive / Component - One of the possible solutions for the feature package problem would be something that another set of packages could also make good use of, I think: the introduction of a new archive or component, meant only for packages that are needed to build something, but which users are discouraged from ever using. What? Well, take golang as an example. While we have a load of golang-something packages in Debian, and they are used for building applications written in Go - none of those golang-something packages are meant to be installed by users. If you use the language and develop in it, the "go get" way is the one you are expected to use. So having an archive (or maybe a component like main or contrib) that, by default, won't be activated for users, but only for things like buildds or archive rebuilds, would make one problem (the hated metadata bloat) be evaluated wildly differently. It may also allow a more relaxed processing of binary-NEW (easier additions of new feature packages).
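Purely as an illustration of the idea - the component name below is invented and nothing like it exists in the archive today - a user system and a buildd might then differ only in their apt sources:

# Regular user system: the build-only component is simply not enabled
deb http://deb.debian.org/debian unstable main

# Buildd / archive-rebuild host: additionally enables the hypothetical component
deb http://deb.debian.org/debian unstable main build-only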

But but but - Yes, it is not the most perfect solution. Without taking much energy to think about it, it requires:
  • an adjustment in how main is handled. Right now we have the golden rule that main is self-contained, that is, things in it may not need anything outside it for building or running. That would need to be adjusted for building. (Go, as well as currently Rust, always builds static binaries, so no library dependencies there.)
  • It would need handling for the release, that is, the release team would need to deal with that archive/component too. We haven't talked to them yet (we are still, slowly, discussing inside the FTP Team). So, no idea how many rusty knives they want to sink into our nice bodies for that idea...

Final - Well, it is still very much open. We had an IRC meeting with the Rust people and will have another at the end of November; it will slowly go forward. And maybe someone comes up with an entirely new idea that we all love. Don't know, time will tell.

3 November 2017

Joerg Jaspert: Automated wifi login, update 2

Seems my blog lately just consists of updates to my automated login script for the ICE wifi... But I do hate the entirely useless "Click a button" crap, every day, twice. I've seen it once, now leave me alone, please. Updated script:
#!/bin/bash
# (Some) docs at
# https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/
IFACE=${1:-"none"}
ACTION=${2:-"up"}
TMPDIR=${TMPDIR:-"/tmp"}
WGET="/usr/bin/wget"
TIMEOUT="/usr/bin/timeout -k 20 15"
case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwconfig $IFACE | grep ESSID | cut -d":" -f2 | sed 's/^[^"]*"\|"[^"]*$//g')}
        if [[ ${CONID} == WIFIonICE ]]; then
            REFERER="http://www.wifionice.de/de/"
            LOGIN="http://www.wifionice.de/de/"
            COOKIETMP=$(mktemp -p ${TMPDIR} nmwifionice.XXXXXXXXX)
            trap "rm -f ${COOKIETMP}" EXIT TERM HUP INT QUIT
            csrftoken=$(${TIMEOUT} ${WGET} -q -O - --keep-session-cookies --save-cookies=${COOKIETMP} --referer ${REFERER} ${LOGIN} | grep -oP 'CSRFToken"\ value="\K[0-9a-z]+')
            if [[ -z ${csrftoken} ]]; then
                echo "CSRFToken is empty"
                exit 0
            fi
            sleep 1
            ${TIMEOUT} ${WGET} -q -O - --load-cookies=${COOKIETMP} --post-data="login=true&connect=connect&CSRFToken=${csrftoken}" --referer ${REFERER} ${LOGIN} >/dev/null
        fi
        ;;
    *)
        # We are not interested in this
        :
        ;;
esac

24 July 2017

Joerg Jaspert: Automated wifi login, update

With recent changes the automated login script for WifiOnICE stopped working. Fortunately the fix is easy: it is enough to add a Referer header to the call and append de/ to the URL. Updated script:
#!/bin/bash
# (Some) docs at
# https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/
IFACE=${1:-"none"}
ACTION=${2:-"up"}
case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwgetid "${IFACE}" -r)}
        if [[ ${CONID} == WIFIonICE ]]; then
            /usr/bin/timeout -k 20 15 /usr/bin/wget -q -O - --referer http://www.wifionice.de/de/ http://www.wifionice.de/de/?login > /dev/null
        fi
        ;;
    *)
        # We are not interested in this
        :
        ;;
esac

23 February 2017

Joerg Jaspert: Automated wifi login

If you have the fortune to need to follow some silly "Login" button for some wifi, regularly, the following little script may help you avoid this idiotic (and useless) task. This example uses WIFIonICE, the free wifi on German ICE trains, simply as I have it twice a day and got annoyed by the pointless Login button. A friend pointed me at just wget-ting the login page, so I made NetworkManager do this for me. It should work for anything similar that doesn't need some elaborate webform filled out.
#!/bin/bash
# (Some) docs at
# https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/
IFACE=${1:-"none"}
ACTION=${2:-"up"}
case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwconfig $IFACE | grep ESSID | cut -d":" -f2 | sed 's/^[^"]*"\|"[^"]*$//g')}
        if [[ ${CONID} == WIFIonICE ]]; then
            /usr/bin/timeout -k 20 15 /usr/bin/wget -q -O - http://www.wifionice.de/?login > /dev/null
        fi
        ;;
    *)
        # We are not interested in this
        :
        ;;
esac
This script needs to be put into /etc/NetworkManager/dispatcher.d and made executable, owned by the root user. It will run on every connection change; that's why the ACTION is checked. The case may be a bit much here, but it could easily be extended to do a lot more. Yay, no more silly "Open this webpage and press login" crap.
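A small sketch of that installation plus a manual test - the file name is arbitrary, and wlan0 stands in for whatever your wifi interface is called (NetworkManager hands dispatcher scripts the interface and the action as arguments):

sudo install -m 0755 -o root -g root wifionice-login /etc/NetworkManager/dispatcher.d/90-wifionice
# Simulate what NetworkManager would do on connect:
sudo /etc/NetworkManager/dispatcher.d/90-wifionice wlan0 up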

25 August 2016

Joerg Jaspert: New gnupg-agent in Debian

In case you just upgraded to the latest gnupg-agent and use gnupg-agent as your ssh-agent, you may find that ssh refuses to work, with a simple but not helpful "sign_and_send_pubkey: signing failed: agent refused operation". This seems to come from systemd starting the agent instead of a script at the start of the X session, and so it ends up with either no or an unusable tty. A simple
gpg-connect-agent updatestartuptty /bye
updates that and voilà, ssh agent functionality is back. Note: this assumes you have enable-ssh-support in your ~/.gnupg/gpg-agent.conf.
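For reference, a minimal sketch of the surrounding setup (the gpgconf socket lookup assumes a reasonably recent GnuPG 2.1+):

# ~/.gnupg/gpg-agent.conf needs:
#   enable-ssh-support
# and the shell startup should point ssh at the agent's socket:
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"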

27 April 2016

Niels Thykier: auto-decrufter in top 5 after 10 months

About 10 months ago, we enabled an auto-decrufter in dak. Then after 3 months it had become the top 11th "remover". Today, there are only 3 humans left that have removed more packages than the auto-decrufter - impressively enough, one of them is not even an active FTP-master (anymore). The current scoreboard:
 5371 Luca Falavigna
 5121 Alexander Reichle-Schmehl
 4401 Ansgar Burchardt
 3928 DAK's auto-decrufter
 3257 Scott Kitterman
 2225 Joerg Jaspert
 1983 James Troup
 1793 Torsten Werner
 1025 Jeroen van Wolffelaar
  763 Ryan Murray
For comparison, here is the number of removals by year for the past 6 years:
 5103 2011
 2765 2012
 3342 2013
 3394 2014
 3766 2015  (1842 removed by auto-decrufter)
 2845 2016  (2086 removed by auto-decrufter)
Which tells us that in 2015, the FTP masters and the decrufter performed on average over 10 removals a day. And by the looks of it, 2016 will surpass that. Of course, the auto-decrufter has a tendency to increase the number of removed items since it is an advocate of "remove early, remove often!". :) Data is from https://ftp-master.debian.org/removals-full.txt. Scoreboard computed as:
  grep ftpmaster: removals-full.txt | \
   perl -pe 's/.*ftpmaster:\s+//; s/\]$//;' | \
   sort | uniq -c | sort --numeric --reverse | head -n10
Removals by year computed as:
 grep ftpmaster: removals-full.txt | \
   perl -pe 's/.* (\d{4}) \d{2}:\d{2}:\d{2}.*/$1/' | uniq -c | tail -n6
(yes, both could be done with fewer commands)
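For instance, the scoreboard could be folded into a single awk invocation plus the final sort - an untested sketch against the same file format:

 awk -F'ftpmaster: ' '/ftpmaster:/ { name = $2; sub(/\]$/, "", name); count[name]++ }
                      END { for (n in count) print count[n], n }' removals-full.txt \
   | sort --numeric --reverse | head -n10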
Filed under: Debian

15 March 2016

Joerg Jaspert: Removing checksums

And as just announced on d-d-a, I'm trying to break all the tools dealing with the (Debian) archive. Or something like it. But it's about time to get rid of MD5Sum checksums, and SHA1 can go with it directly. As it is only in experimental for now, we can test and see what still breaks. I hope it won't be too much, so we can roll it out over the whole archive (minus the stable stuff, of course). For some reason, what I like most in this change is the following Python code that ended up in our InRelease file generation tool:
import apt_pkg
# Note: suite.checksums being an array with possible values of md5sum, sha1, sha256
hashfuncs = dict(zip([x.upper().replace('UM', 'um') for x in suite.checksums],
                      [getattr(apt_pkg, "%s" % (x)) for x in [x.replace("sum", "") + "sum" for x in suite.checksums]]))
Though I'm sure this can be done in much more readable ways, it's doing what we need - but heck, it took me a while to get it. The change to get rid of gzip will probably be much more challenging / hard to get through. Let's see - in the few minutes after my mail, I already got some notices about possible breakage here. Fortunately those indices and the release file stuff are nicely separated settings, so if it turns out we only take the checksums drop into the other suites for now, that's doable.
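Coming back to that snippet: a more readable equivalent might look like this - just a sketch, relying on the same suite.checksums values mentioned in the comment above:

import apt_pkg

# suite.checksums contains values like "md5sum", "sha1", "sha256"
hashfuncs = {}
for name in suite.checksums:
    header = name.upper().replace('UM', 'um')                 # "md5sum" -> "MD5Sum", "sha256" -> "SHA256"
    func = getattr(apt_pkg, name.replace('sum', '') + 'sum')  # apt_pkg.md5sum, apt_pkg.sha1sum, ...
    hashfuncs[header] = func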

9 March 2016

Joerg Jaspert: Squeeze fully archived

While writing this, the last bits of squeeze that haven't been on archive.debian.org yet are being moved over there, that is security and the LTS suite. As soon as that script is done, some tiny archive magic will happen and over the next few dinstall runs, the rest of squeeze that was still there will no longer be on our regular mirror network anymore. If you still use it, now is the time to upgrade. Update: Forgot to mention, but yes, squeeze-backports(-sloppy) is also archived on archive.debian.org.

6 March 2016

Joerg Jaspert: New ftpsync release

It took nearly a year, but today a new ftpsync version got released. Most of the work for this release was done by weasel, with one new feature submitted by waldi; my work was mostly style fixes and a bit of documentation. And of course the release now. If you run a mirror, you will find the new version at the usual place, that is the project/ftpsync/ subdirectory. You may also want to subscribe to the debian-mirrors mailing list, as the mirror team will post more information about changes in ftpsync there.

20 July 2013

Luke Faraone: Joining the Debian FTPTeam

I'm pleased to say that I have joined the Debian FTPTeam as of the Friday before last. See Joerg Jaspert's announcement on debian-devel-announce.

The FTPTeam is responsible for maintaining the Debian software archive, and ensures that new software in Debian is high-quality and compliant with our policies.

As an "ftpassistant", I (along with Paul, Scott, Gergely, and others) will be helping to process the NEW queue, which is currently at a whopping 297 packages. Here's hoping we'll be able to get that number down over the coming weeks!

17 July 2013

Joerg Jaspert: tmux - tm

Wohoo, one more in that little series of mine, another update to tm. This time it gained getopts-style options - while retaining the old non-getopts style. It can now open more than one multisession to the same set of hosts, so you can connect multiple times to host1 host2 at the same time - no matter if you open them using the ms subcommand or the pre-defined sessions from the .tmux.d directory. I also took a bit of time and wrote some zsh completion for it. I'm pretty sure this can be done better and am happy for patches, but it does work for me, so for now I'm happy. You can, as usual, grab tm from my misc git repository; the zsh completion is in my zsh config repository - also at GitHub.

4 May 2013

Joerg Jaspert: Wheezy release

There was Lenny, there was Squeeze, now there is Wheezy. Another major release of Debian where I had the pleasure to do the ftpmaster work for the release. Like the last times, Mark joined to help with the work. But 2 FTPMasters aren't enough for one Wheezy, or so, so Ansgar had his first run aside from point releases. With the whole load of work the release team did in the past to prepare this, combined with all the changes in dak we had since Squeeze, it turned out to be rather easy for us. Again, a few moments and not a detailed log (times in CEST): right now (that is, 21:08 CEST) I am mostly waiting for the CD build to finish. The current schedule seems to have that done in some two and a half hours, after which we can push the mirrors and our press people can announce it. Except if Murphy wants to show up, but let's hope not. Thanks go out to everyone involved in preparing this release, be it in the past by fixing bugs, uploading packages, doing whatever was needed, as well as doing the work today.

17 April 2013

Joerg Jaspert: tmux - tm update

Just did a small update to my tm helper script. There is now a new variable, TMSESSHOST. If that is set to false, the hostname of the current machine won't be prepended to the generated session name; true is the default, as that was the behaviour in the past. tm can now also attach to any session, not just the ones it created itself, thus nearly entirely removing my need to ever call tmux directly. And cosmetically - window names for sessions defined in the .tmux.d directory are no longer plain "Multisession", but the session name as defined in the file. If you are interested, you can get it over here from my git repository of misc stuff.
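As a quick usage example (host name invented), the variable can simply be set for a single invocation:

# Open an ssh session without prefixing the local hostname to the session name
TMSESSHOST=false tm s host.example.org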

18 March 2013

Joerg Jaspert: tmux - tm

Just a small update for those that use my little tmux helper: you may want to fetch the latest version from my git repository. At least if you have to deal with a tmux version prior to 1.4 (like on squeeze), though I really recommend using a later one - 1.6 from backports or 1.7 (but that's not backported). The fix is small, just a better handling of the non-existing -V option to get the version information.

14 March 2013

Joerg Jaspert: zsh config - and prompt

As most people that are using that funny grey-on-black thing with the blinking white box AKA a commandline, I do have a heavily configured shell - zsh in my case. Some while ago my old configuration for it started to annoy me and so I set out to redo it, with the basic requirement to be flexible and able to adjust to various zsh versions (back as far as Lenny). I set out and looked at how others do it, took the (I hope) best bits and put them together in a way I like. Maybe others want to pick from me, so here I describe it a little; maybe someone is interested.

First - the config is completely inside .zsh. The only thing directly in my $HOME is a symlink from $HOME/.zshenv to $HOME/.zsh/zshenv.home. All in one place, nicer this way. The zshenv just sets ZDOTDIR, so zsh then uses the .zsh directory itself. Next, I am using "modules". Well, actually everything is split out into small(er) files, so it's way easier to maintain - and to overwrite stuff wherever I need to. So my .zshrc just loads those module files - and then goes and checks if it finds the same file inside a series of subdirectories. Those subdirectories are based on the hostname, the kernel name, the username, the domain name and the name of the distribution of the system - and a number of combinations of those. That system allows me, for example, to have a set of aliases defined only on Debian systems (like those to do with apt-get & co), or to define some extra variables if I log in to the host franck, domain debian.org. Or whatever more. I'm also on the way to doing something similar for all my dotfiles.

While I was working on that I finally got fed up with the fact that my prompt, which is based on Phil!'s, wasn't doing what I wanted on various of the hosts I access. Thankfully someone pointed me at a change to the grml zsh config: Frank Terbeck rewrote their prompt to make use of the zstyle system. Now that got me going; I like the idea of having the whole prompt easily configurable and changeable. My prompt has always shown slight differences depending on where the shell runs, which meant I had to carefully craft PS1 for every such change, and when one changes something one has to check all the places and adjust them too. No longer. I took Frank's work and extended it for me. I wanted to still have a design matching my old prompt (minus colors, I changed color themes anyway), but with the full flexibility the zstyle system can offer here. Now I am at a point where I am happy with it.

As an example, on the ftp-master host my prompt, while otherwise being the same as everywhere else, contains one extra item - displaying the current status of the archive (dinstall). If you want to try it, the setup file is available, but note that I don't give any warranty. :) "Works for me". I load it in my Prompts module, while having the prompt setup file in my function path. The 4 commented lines, my testcase, are a nice example of a custom token, though a real-life one - the setup for the ftp-master prompt with the one extra item - can be found over here. Comments? Bugfixes? Enhancements?

Update: Got asked for an example how to use my prompt to fake another. So here is what you need to do to use my prompt setup to imitate the prompt clint, as delivered with zsh:
autoload promptinit && promptinit
zstyle ':prompt:ganneff' vcs_info true
# Alternatively you can set whatever you like it to be.
zstyle ':prompt:ganneff' set_vcs_info_defaults true
zstyle ':prompt:ganneff' colors true
## no right prompt
zstyle ':prompt:ganneff:right:setup' use-rprompt false
## color of brackets depending on variable
zstyle ':prompt:ganneff:*:items:openbracket' pre '${${SSH_CLIENT+"${PR_YELLOW}"}:-"${PR_RED}"}'
zstyle ':prompt:ganneff:*:items:openbracket' post '${PR_NO_COLOR}'
zstyle ':prompt:ganneff:*:items:closebracket' pre '${${SSH_CLIENT+"${PR_YELLOW}"}:-"${PR_RED}"}'
zstyle ':prompt:ganneff:*:items:closebracket' post '${PR_NO_COLOR}'
zstyle ':prompt:ganneff:*:items:openanglebracket' pre '${${SSH_CLIENT+"${PR_YELLOW}"}:-"${PR_RED}"}'
zstyle ':prompt:ganneff:*:items:openanglebracket' post '${PR_NO_COLOR}'
zstyle ':prompt:ganneff:*:items:closeanglebracket' pre '${${SSH_CLIENT+"${PR_YELLOW}"}:-"${PR_RED}"}'
zstyle ':prompt:ganneff:*:items:closeanglebracket' post '${PR_NO_COLOR}'
## extra date format, and its color
zstyle ':prompt:ganneff:*:items:date' token '%D{%a %y/%m/%d %R %Z} '
zstyle ':prompt:ganneff:*:items:date' pre '${PR_CYAN}'
## pts. %l instead of %y, also color again
zstyle ':prompt:ganneff:*:items:pts' token '%l'
zstyle ':prompt:ganneff:*:items:pts' pre '${PR_GREEN}'
zstyle ':prompt:ganneff:*:items:pts' post '${PR_NO_COLOR}'
## host ends with :
zstyle ':prompt:ganneff:*:items:host' token '%m:'
## and a different style for the rc level
zstyle ':prompt:ganneff:*:items:rc' token '%(?..[%?%1v] )'
zstyle ':prompt:ganneff:*:items:rc' pre ''
## change color for other parts that differ to my default
zstyle ':prompt:ganneff:*:items:user' pre '${PR_GREEN}'
zstyle ':prompt:ganneff:*:items:host' pre '${PR_GREEN}'
zstyle ':prompt:ganneff:*:items:at' pre '${PR_GREEN}'
zstyle ':prompt:ganneff:*:items:path' pre '${PR_GREEN}'
zstyle ':prompt:ganneff:*:items:history' pre ''
## Show the shell level in a different way
zstyle ':prompt:ganneff:*:items:shell-level' pre ''
zstyle ':prompt:ganneff:*:items:shell-level' token 'zsh%(2L./$SHLVL.) '
## And history is just a bold number. Lazy here, not using pre and post
zstyle ':prompt:ganneff:*:items:history' token '%B%h%b '
## And the last part in prompt, a # or % depending on privileges
## Using standard zsh tokens for bold
zstyle ':prompt:ganneff:*:items:privileges' pre '%B'
zstyle ':prompt:ganneff:*:items:privileges' post '%b'
## Now there are two parts not defined by default, so lets create them
## Both are simple existing variables, no need for extra precmd functions
## system info
zstyle ':prompt:ganneff:extra:ostype' pre '${PR_CYAN}'
zstyle ':prompt:ganneff:extra:ostype' post '${PR_NO_COLOR}'
zstyle ':prompt:ganneff:extra:ostype' token "${MACHTYPE}/${OSTYPE}/$(uname -r)"
## zsh version
zstyle ':prompt:ganneff:extra:zshvers' pre '${PR_CYAN}'
zstyle ':prompt:ganneff:extra:zshvers' post '${PR_NO_COLOR}'
zstyle ':prompt:ganneff:extra:zshvers' token "${ZSH_VERSION}"
zstyle ':prompt:ganneff:left:full:setup' items \
    openbracket date closebracket openbracket pts closebracket openbracket ostype closebracket \
    openbracket zshvers closebracket newline openanglebracket user at host \
    path closeanglebracket newline \
    shell-level space history rc space vcs privileges space
prompt ganneff
And if you now tell me "Uh, that's so much more than just saying prompt clint", then you are right. But then: this way is flexible. As I wrote above, my prompt on the ftpmaster host is different. Actually, my prompt on all machines inside the domain debian.org differs, by having
zstyle ':prompt:ganneff:*:items:host' pre '${PR_YELLOW}'
in the Prompts definition for domain:debian.org. And that's enough to show me I'm on a .debian.org machine by a yellow hostname, instead of my default red. On the ftpmaster host I have the additional few lines:
# Want one more piece in my prompt here, dinstall status
zstyle ':prompt:ganneff:left:full:setup' items \
    ulcorner line openparentheses user at host pts closeparentheses line history \
    line dinstall line shell-level line flexline openparentheses path closeparentheses line urcorner newline \
    llcorner line rc openparentheses time closeparentheses line vcs line change-root pipe space
zstyle ':prompt:ganneff:extra:dinstall' pre '${PR_CYAN}'
zstyle ':prompt:ganneff:extra:dinstall' post '${PR_NO_COLOR}'
zstyle ':prompt:ganneff:extra:dinstall' token '$DINSTALL'
zstyle ':prompt:ganneff:extra:dinstall' precmd jj_update_dinstall
zmodload zsh/mapfile
jj_update_dinstall () {
    DINSTALL="${${(z)${(f)mapfile[/srv/ftp.debian.org/web/dinstall.status]}[2]}[3,99]}"
}
And woo, that's simple. I think. (There ought to be a way to just easily add the item at a defined place in the items zstyle, but meh, too lazy to look.)

2 March 2013

Joerg Jaspert: Goodbye Lenny

So this weekend was removal time, namely Lenny. I started out by getting all the different archives which had a Lenny release placed onto archive.debian.org, namely volatile, backports, security and of course the main archive. Actually, they got put onto one of the two hosts that hold this archive and are now syncing to the other. Today started with removing Lenny from those archives. As volatile is dead as an extra archive, that was easy - DSA turned off the machine. Backports and security got all their files removed in one big removal, after which I had them forget they ever dealt with Lenny. The main archive is a different story. Due to its size I can't simply remove all files at once. This would break every mirror, as they don't delete too many files at once; for the official mirror scripts that magic number is 40000. Both backports and security had less than that for their lenny suites, but the main archive has somewhere around 195k files that need to go away - slightly more than the limit. For that reason I set out to only delete parts of the files with each mirror push. Luckily our toolset supports this easily; I simply had to declare the lenny suites empty and let our clean-suite tool do its work. Worked flawlessly - if one considers 8 hours of runtime to mark the files as delete-able no flaw. It took another two hours to actually delete the first 20000 files out of our pool. The next mirror push will thus delete every lenny-related entry in our dists/ tree and some 20000 files out of pool. The following mirror pushes will continue the deletion, in batches of 10000, until all of lenny is gone.

1 March 2013

Joerg Jaspert: tmux - like screen, just nicer. Replacing clusterssh too.

By now I am a long-time happy user of tmux instead of screen. I started using it somewhere in 2011 and by now I only have one usage of screen left (direct ttyS0 access for one serial console). As usual I have a custom config for it, and if you are interested there is a link for you: tmux.conf. But the more interesting part, and why I blog, is that I also wrote a little helper around tmux. Ingeniously called tm. And a colleague said I should let other people use it too and place it somewhere accessible. So, err, here it is. Or better, the usage output of it; the actual script has a link somewhere below.
tmux helper by Joerg Jaspert <joerg@ganneff.de>
Call as: tm CMD [host]...
CMD is one of
 ls          List running sessions
 s           Open ssh session to host
 ms          Open multi ssh sessions to hosts, synchronizing input
 $anything   Either plain tmux session with name of $anything or
             session according to TMDIR file
TMDIR file:
Each file in $TMDIR defines a tmux session.
Filename is the commandline option $anything
Content is defined as:
  First line: Session name
  Second line: extra tmux options
  Any following line: A hostname to open a shell with in the normal
  ssh syntax. (ie [user@]hostname)
Environment variables recognized by this script:
TMPDIR - Where tmux stores its session information
         DEFAULT: If unset: /tmp
TMSORT - Should ms sort the hostnames, so it always opens the same
         session, no matter in which order hostnames are presented
         DEFAULT: true
TMOPTS - Extra options to give to the tmux call
         Note that this ONLY affects the final tmux call to attach
         to the session, not to the earlier ones creating it
         DEFAULT: -2
TMDIR  - Where are session information files stored
         DEFAULT: ${HOME}/.tmux.d
TMWIN  - Where does your tmux start numbering its windows?
         This script tries to find the information in your config,
         but as it only checks $HOME/.tmux.conf it might fail.
         So if your window numbers start at anything different to 0,
         like mine do at 1, then you can set TMWIN to 1

It is mainly useful for using tmux as a ssh-multiplexer and replacement for clusterssh, though it's happy to do any other tmux session too. As the help text may already have told you, you can use it to list existing tmux sessions (oh, wow!). Honestly, that one is there because "tm ls" is shorter than "tmux ls". More interestingly, "tm s HOSTNAME" opens a tmux session with the first window being a ssh shell to HOSTNAME. Should you later type "tm s HOSTNAME" again, it will attach to the existing session. But the real kicker for me (and colleagues) actually is the ms subcommand. ms being short for multi-session, it is my way of no longer using clusterssh. Basically you type "tm ms HOST1 user@HOST2" and it opens a session with one tmux window consisting of two panes, showing you host1 and host2 at the same time. And the input to both is synchronized, that is - all your input is sent to both of them. You aren't limited to just two hosts; do as many as you wish. Actually, I have no idea if tmux has an upper limit, but I did already have more than 100 hosts open at once. The major limitation there is not tmux - but simply the available screenspace, as all hosts are shown in ONE window! (The one advantage of clusterssh: it uses one xterm per host.) This is quite helpful in all kinds of administration work involving clusters - imagine installing a package on 5 hosts of a cluster, you usually want it on all of them - or any kind of situation where one wants to run one command on many systems. As I already said, I'm lazy. And so there is a way to easily define multisessions that one uses regularly, by creating files inside the directory behind $TMDIR, usually ${HOME}/.tmux.d. They are simple text files and let you easily define sessions you can then (re)open with a simple "tm whateveritsname". And if you want the whole script by now: tm
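As a small example of such a session file - say ${HOME}/.tmux.d/webcluster, with invented hosts and an empty second line standing for "no extra tmux options", following the format from the usage text above - which could then be opened with a plain "tm webcluster":

webcluster

admin@web1.example.org
admin@web2.example.org
root@db1.example.org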

11 November 2012

Nathan Handler: Introducing nmbot

While going through the NM Process, I spent a lot of time on https://nm.debian.org. At the bottom of the page, there is a TODO list of some new features they would like to implement. One item on the list jumped out at me as something I was capable of completing: "IRC bot to update stats in the #debian-newmaint channel topic". I immediately reached out to Enrico Zini, who was very supportive of the idea. He also explained how he wanted to expand on that idea and have the bot send updates to the channel whenever the progress of an applicant changes. Thanks to Paul Tagliamonte, I was able to get my hands on a copy of the nm.debian.org database (with private information replaced with dummy data). This allowed me to create some code to parse the database and create counts for the number of people waiting in the various queues. I also created a list of events that occurred in the last n days. Enrico then took this code and modified it to integrate it into the website. You can specify the number of days to show events for, and even have the information produced in JSON format. This information is generated upon requesting the page, so it is always up-to-date. It took a couple of rounds of revisions to ensure that the website was exposing all of the necessary information in the JSON export. At this stage, I converted the code to be an IRC bot. Based on prior experience, I decided to implement this as an irssi script. The bot is currently running as nmbot in #debian-newmaint on OFTC. Every minute, it fetches the JSON statistics to see if any new events have occurred. If they have, it updates the topic and sends announcements as necessary. While the bot is running, there are still a few more things on its TODO list. First, we need to move it to a stable and permanent home. Running it off of my personal laptop is fine for testing, but it is not a long term solution. Luckily, Joerg Jaspert (Ganneff) has graciously offered to host the bot. He also made the suggestion of converting the bot to a Supybot plugin so that it could be integrated into the existing ftpmaster bot (dak). The bot's code is currently pretty simple, so I do not expect too much difficulty in converting it to Python/Supybot. One last item on the list is something that Enrico is working on implementing. He is going to have the website generate static versions of the JSON output whenever an applicant's progress changes. The bot could then fetch this file, which would reduce the number of times the site needs to generate the JSON. The code for the bot is available in a public git repository, and feedback/suggestions are always appreciated.

20 June 2012

Joerg Jaspert: Work

Gnarf. Does someone happen to need a new Sysadmin - working from home?
