
28 December 2023

Antonio Terceiro: Debian CI: 10 years later

It was 2013, and I was on a break from work between Christmas and New Year. I had been working at Linaro for well over a year, on the LAVA project. I was living and breathing automated testing infrastructure, mostly for testing low-level components such as kernels and bootloaders, on real hardware. At that point I had also been a Debian contributor for quite a few years, and had become an official project member two years prior. Most of my involvement was in the Ruby team, where we were already consistently running upstream test suites during package builds. During that break, I put these two contexts together and came to the conclusion that Debian needed a dedicated service that would test the contents of the Debian archive. I was aware of the existence of autopkgtest, and started working on a very simple service that would later become Debian CI. In January 2014, debci was announced in that month's Misc Developer News, and later uploaded to Debian. It has been continuously developed for the last 10 years: it evolved from a single shell script running tests in a loop into a distributed system with 47 geographically-distributed machines as of this writing, became part of the official Debian release process gating migrations to testing, had 5 Summer of Code and Outreachy interns working on it, and processed more than 40 million test runs. In these years, Debian CI has received contributions from a lot of people, but I would like to give special credit to the following:

5 December 2021

Reproducible Builds: Reproducible Builds in November 2021

Welcome to the November 2021 report from the Reproducible Builds project. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is therefore to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. If you are interested in contributing to our project, please visit our Contribute page on our website.
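The consensus check described above is mechanically simple: independent rebuilders publish checksums of the artifacts they produce and compare them, and only bit-for-bit identical outputs count as reproduced. A minimal sketch in Python (the function names are my own, purely illustrative):

```python
import hashlib

def artifact_digest(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a build artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_reproduce(path_a, path_b):
    """Two independent builds agree iff their artifacts hash identically."""
    return artifact_digest(path_a) == artifact_digest(path_b)
```

Any difference at all, even a single embedded timestamp, makes the digests diverge, which is exactly why tools like diffoscope exist to explain *where* two builds differ.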
On November 6th, Vagrant Cascadian presented at this year's edition of the SeaGL conference, giving a talk titled Debugging Reproducible Builds One Day at a Time:
I'll explore how I go about identifying issues to work on, learn more about the specific issues, recreate the problem locally, isolate the potential causes, dissect the problem into identifiable parts, and adapt the packaging and/or source code to fix the issues.
A video recording of the talk is available on archive.org.
Fedora Magazine published a post written by Zbigniew Jędrzejewski-Szmek about how to Use Diffoscope in packager workflows, specifically around ensuring that new versions of a package do not introduce breaking changes:
In the role of a packager, updating packages is a recurring task. For some projects, a packager is involved in upstream maintenance, or well written release notes make it easy to figure out what changed between the releases. This isn't always the case, for instance with some small project maintained by one or two people somewhere on GitHub, and it can be useful to verify what exactly changed. Diffoscope can help determine the changes between package releases. [ ]

kpcyrd announced the release of rebuilderd version 0.16.3 on our mailing list this month, adding support for builds to generate multiple artifacts at once.
Lastly, we held another IRC meeting on November 30th. As mentioned in previous reports, due to the global events throughout 2020 etc. there will be no in-person summit event this year.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb made the following changes, including preparing and uploading versions 190, 191, 192, 193 and 194 to Debian:
  • New features:
    • Continue loading a .changes file even if the referenced files do not exist, but include a comment in the returned diff. [ ]
    • Log the reason if we cannot load a Debian .changes file. [ ]
  • Bug fixes:
    • Detect XML files as XML files if file(1) claims they are XML files or if they are named .xml. (#999438)
    • Don't duplicate file lists at each directory level. (#989192)
    • Don't raise a traceback when comparing nested directories with non-directories. [ ]
    • Re-enable test_android_manifest. [ ]
    • Don't reject Debian .changes files if they contain non-printable characters. [ ]
  • Codebase improvements:
    • Avoid aliasing variables if we aren't going to use them. [ ]
    • Use isinstance over type. [ ]
    • Drop a number of unused imports. [ ]
    • Update a bunch of %-style string interpolations into f-strings or str.format. [ ]
    • When pretty-printing JSON, mark the difference as being reformatted, additionally avoiding including the full path. [ ]
    • Import itertools top-level module directly. [ ]
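The string-interpolation modernisation mentioned in the list above is the routine rewrite of legacy %-style formatting into str.format or f-strings; a small illustrative example (not taken from the diffoscope codebase):

```python
# Three equivalent ways to build the same message; the cleanup moved
# code from the first style towards the last.
name, size = "control.tar.xz", 4096

old = "member %s has size %d" % (name, size)      # legacy %-style
fmt = "member {} has size {}".format(name, size)  # str.format
new = f"member {name} has size {size}"            # f-string (Python 3.6+)

assert old == fmt == new
```

f-strings keep the expression next to where it is used, which is why such conversions are popular low-risk codebase improvements.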
Chris Lamb also made an update to the command-line client for trydiffoscope, a web-based version of diffoscope, specifically only waiting for 2 minutes for try.diffoscope.org to respond in tests. (#998360) In addition, Brandon Maier corrected an issue where parts of large diffs were missing from the output [ ], Zbigniew Jędrzejewski-Szmek fixed some logic in the assert_diff_startswith method [ ] and Mattia Rizzolo updated the packaging metadata to denote that we support both Python 3.9 and 3.10 [ ], as well as making a number of warning-related changes [ ][ ]. Vagrant Cascadian also updated the diffoscope package in GNU Guix [ ][ ].

Distribution work In Debian, Roland Clobus updated the wiki page documenting Debian reproducible Live images to mention some new bug reports and also posted an in-depth status update to our mailing list. In addition, 90 reviews of Debian packages were added, 18 were updated and 23 were removed this month adding to our knowledge about identified issues. Chris Lamb identified a new toolchain issue, absolute_path_in_cmake_file_generated_by_meson.
Work has begun on classifying reproducibility issues in packages within the Arch Linux distribution. Similar to the analogous effort within Debian (outlined above), package information is listed in a human-readable packages.yml YAML file and a sibling README.md file shows how to classify packages too. Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report for openSUSE and Vagrant Cascadian updated a link on our website to link to the GNU Guix reproducibility testing overview [ ].

Software development The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Elsewhere in software development, Jonas Witschel updated strip-nondeterminism, our tool to remove specific non-deterministic results from a completed build, so that it does not fail on JAR archives containing invalid members with a .jar extension [ ]. This change was later uploaded to Debian by Chris Lamb.
reprotest is the Reproducible Builds project's end-user tool to build the same source code twice in widely different environments and check whether the binaries produced by the builds have any differences. This month, Mattia Rizzolo overhauled the Debian packaging [ ][ ][ ] and fixed a bug surrounding suffixes in the Debian package version [ ], whilst Stefano Rivera fixed an issue where the package tests were broken after the removal of diffoscope from the package's strict dependencies [ ].
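The core idea behind strip-nondeterminism, normalising sources of build nondeterminism such as the timestamps embedded in archive members, can be sketched in a few lines of Python. This is a simplified illustration of the technique, not the tool's actual implementation, and the fixed date is an arbitrary choice:

```python
import zipfile

FIXED_DATE = (1980, 1, 1, 0, 0, 0)  # arbitrary fixed timestamp, for illustration

def normalize_zip(src, dst):
    """Rewrite a zip/JAR archive with members sorted by name and all
    timestamps pinned, so archives built at different times compare equal."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for info in sorted(zin.infolist(), key=lambda i: i.filename):
            data = zin.read(info.filename)
            fixed = zipfile.ZipInfo(info.filename, date_time=FIXED_DATE)
            zout.writestr(fixed, data)
```

After normalisation, two JARs built from the same inputs at different times become byte-for-byte identical, which is precisely the property the reproducible-builds tooling verifies.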

Testing framework The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Document the progress in setting up snapshot.reproducible-builds.org. [ ]
    • Add the packages required for debian-snapshot. [ ]
    • Make the dstat package available on all Debian based systems. [ ]
    • Mark virt32b-armhf and virt64b-armhf as down. [ ]
  • Jochen Sprickerhof:
    • Add SSH authentication key and enable access to the osuosl168-amd64 node. [ ][ ]
  • Mattia Rizzolo:
    • Revert "reproducible Debian: mark virt(32|64)b-armhf as down", as the nodes were restored. [ ]
  • Roland Clobus (Debian live image generation):
    • Rename sid internally to unstable until an issue in the snapshot system is resolved. [ ]
    • Extend testing to include Debian bookworm too. [ ]
    • Automatically create the Jenkins view to display jobs related to building the Live images. [ ]
  • Vagrant Cascadian:
    • Add a Debian package set group for the packages and tools maintained by the Reproducible Builds maintainers themselves. [ ]


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

19 November 2021

Reproducible Builds (diffoscope): diffoscope 193 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 193. This version includes the following changes:
[ Chris Lamb ]
* Don't duplicate file lists at each directory level.
  (Closes: #989192, reproducible-builds/diffoscope#263)
* When pretty-printing JSON, mark the difference as such, additionally
  avoiding including the full path.
  (Closes: reproducible-builds/diffoscope#205)
* Codebase improvements:
  - Update a bunch of %-style string interpolations into f-strings or
    str.format.
  - Import itertools top-level directly.
  - Drop some unused imports.
  - Use isinstance(...) over type(...) ==
  - Avoid aliasing variables if we aren't going to use them.
[ Brandon Maier ]
* Fix missing diff output on large diffs.
[ Mattia Rizzolo ]
* Ignore a Python warning coming from a dependent library (triggered by
  supporting Python 3.10)
* Document that we support both Python 3.9 and 3.10.
You can find out more by visiting the project homepage.

8 October 2020

Dirk Eddelbuettel: RcppSimdJson 0.1.2: Upstream update

A new RcppSimdJson release arrived on CRAN yesterday, bringing along the simdjson 0.5.0 release that happened a few weeks ago. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is faster than CPU speed as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk). Besides the upstream update, not too much happened to our package itself since 0.1.1, though Brandon did help one user to seriously speed up his JSON processing. The (this time very short) NEWS entry follows.

Changes in version 0.1.2 (2020-10-07)
  • Upgraded to simdjson 0.5.0 (Dirk #49)

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

7 August 2016

Dirk Eddelbuettel: littler 0.3.1

The second release of littler as a CRAN package is now available, following in the now more than ten-year history of a package started by Jeff in the summer of 2006, and joined by me a few weeks later. littler is the first command-line interface for R and predates Rscript. It is still faster, and in my very biased eyes better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!). This release brings us fixes and enhancements from three other contributors, a couple of new example scripts, more robust builds, extended documentation and more -- see below for details. The NEWS file entry is below.
Changes in littler version 0.3.1 (2016-08-06)
  • Changes in examples
    • install2.r now passes on extra options past -- to R CMD INSTALL (PR #37 by Steven Pav)
    • Added rcc.r to run rcmdcheck::rcmdcheck()
    • Added (still simple) render.r to render (R)markdown
    • Several examples now support the -x or --usage flag to show extended help.
  • Changes in build system
    • The AM_LDFLAGS variable is now set and used too (PR #38 by Mattias Ellert)
    • Three more directories, used when an explicit installation directory is set, are excluded (also #38 by Mattias)
    • Travis CI is now driven via run.sh from our fork, and deploys all packages as .deb binaries using our PPA where needed
  • Changes in package
    • SystemRequirements now mentions the need for libR, i.e. an R built with a shared library so that we can embed R.
    • The docopt and rcmdcheck packages are now suggested, and added to the Travis installation.
    • A new helper function r() is now provided and exported so that the package can be imported (closes #40).
    • URL and BugReports links were added to DESCRIPTION.
  • Changes in documentation
    • The help output for installGithub.r was corrected (PR #39 by Brandon Bertelsen)
Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, as well as (soon) via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 December 2015

Antonio Terceiro: Bits from the Debian Continuous Integration project

It's been almost 2 years since the Debian Continuous Integration project was launched, and it has proven to be a useful resource for the development of Debian. I have previously made an introductory post, and this is an update on the latest developments.
Infrastructure upgrade Back in early 2014 when Debian CI was launched, there were fewer than 200 source packages with declared test suite metadata, and using a single worker machine polling the archive for updates and running tests sequentially in an infinite loop ("the simplest thing that could possibly work") was OK-ish. Then our community started an incredible, slow and persistent effort to prepare source packages for automated testing, and we now have almost 5,000 of them. The original, over-simplistic design had to be replaced.
The effort of transforming debci into a distributed system was started by Martin Pitt, who did a huge amount of work. In the latest months I was able to complete that work, to a point where I am confident in letting it run (mostly) unattended. We also had lots of contributions to the web UI from Brandon Fairchild, who was a GSoC intern in 2014, and continues to contribute to this date. All this work culminated in the migration from a single-worker model to a master/workers setup, currently with 10 worker nodes. On busy periods all of those worker nodes will go on for days with full utilization, but even then the turnaround between package upload and a test run is now a lot faster than it used to be.
Debian members can inspect the resource usage on those systems, as well as the length of the processing queue, by browsing to the corresponding munin instance (requires authentication via an SSL client certificate issued by sso.debian.org). The system is currently being hosted on an Amazon EC2 account sponsored by Amazon. The setup is fully automated and reproducible.
It is not fully (or at all) documented yet, but those interested should feel free to get in touch on IRC (OFTC, #debci).
Testing backend changed from schroot to lxc Together with the infrastructure updates, we also switched to using lxc instead of schroot as the backend. Most test suites should not be affected by this, but the default lxc settings might cause some very specific issues in a few packages. See for example #806542 ("liblinux-prctl-perl: autopkgtest failures: seccomp, capbset"). Adding support for KVM is also in the plans, and we will get to that at some point.
Learn more If you want to learn more about how you can add tests for your package, a good first start is the debci online documentation (which is also available locally if you install debci). You might also be interested in watching the live tutorial (WebM, 469 MB!) that was presented at DebConf15 earlier this year, full of tips and real examples from the archive. It would be awesome if someone wanted to transcribe that into a text tutorial ;-)
How to get involved There are a few ways you can contribute:
autodep8: if you are knowledgeable about a subset of packages that are very similar and can have their tests executed in a similar way, such as "$Language libraries", you might consider writing a test metadata generator so that each package does not need to declare a debian/tests/control file explicitly, requiring only the Testsuite: header in debian/control. Ruby and Perl are already covered, and there is initial support for NodeJS. Adding support for new types of packages is very easy. See the source repository. If you manage to add support for your favorite language, please get in touch so we can discuss whitelisting the relevant packages on ci.debian.net so that they will get their tests executed even before being uploaded with the proper Testsuite: control field.
autopkgtest: autopkgtest is responsible for actually running your tests, and you can use it to reproduce test runs locally.
debci: debci is the system running on ci.debian.net (version 1.0, currently in testing, is exactly what is running up there, minus a version number and a changelog entry). It can also be used to run private clones of ci.debian.net, e.g. for derivatives or internal Debian-related development. See for example the Ubuntu autopkgtest site.
Getting in touch For maintainer queries and general discussion: For the development of debci/autopkgtest/autodep8

6 September 2014

Russ Allbery: Accumulated hauls

I haven't made one of these in a long time, so I have some catching up from random purchases to do, which includes a (repurposed) nice parting gift from my previous employer and a trip to Powell's since I was in the area for DebConf14. This also includes the contents of the Hugo voter's packet, which contained a wide variety of random stuff even if some of the novels were represented only by excerpts. John Joseph Adams (ed.) The Mad Scientist's Guide to World Domination (sff anthology)
Roger McBride Allen The Ring of Charon (sff)
Roger McBride Allen The Shattered Sphere (sff)
Iain M. Banks The Hydrogen Sonata (sff)
Julian Barnes The Sense of an Ending (mainstream)
M. David Blake (ed.) 2014 Campbellian Anthology (sff anthology)
Algis Budrys Benchmarks Continued (non-fiction)
Algis Budrys Benchmarks Revisited (non-fiction)
Algis Budrys Benchmarks Concluded (non-fiction)
Edgar Rice Burroughs Carson of Venus (sff)
Wesley Chu The Lives of Tao (sff)
Ernest Cline Ready Player One (sff)
Larry Correia Hard Magic (sff)
Larry Correia Spellbound (sff)
Larry Correia Warbound (sff)
Sigrid Ellis & Michael Damien Thomas (ed.) Queer Chicks Dig Time Lords (non-fiction)
Neil Gaiman The Ocean at the End of the Lane (sff)
Max Gladstone Three Parts Dead (sff)
Max Gladstone Two Serpents Rise (sff)
S.L. Huang Zero Sum Game (sff)
Robert Jordan & Brandon Sanderson The Wheel of Time (sff)
Drew Karpyshyn Mass Effect: Revelation (sff)
Justin Landon & Jared Shurin (ed.) Speculative Fiction 2012 (non-fiction)
John J. Lumpkin Through Struggle, the Stars (sff)
L. David Marquet Turn the Ship Around! (non-fiction)
George R.R. Martin & Raya Golden Meathouse Man (graphic novel)
Ramez Naam Nexus (sff)
Eiichiro Oda One Piece Volume 1 (manga)
Eiichiro Oda One Piece Volume 2 (manga)
Eiichiro Oda One Piece Volume 3 (manga)
Eiichiro Oda One Piece Volume 4 (manga)
Alexei Panshin New Celebrations (sff)
K.J. Parker Devices and Desires (sff)
K.J. Parker Evil for Evil (sff)
Sofia Samatar A Stranger in Olondria (sff)
John Scalzi The Human Division (sff)
Jonathan Straham (ed.) Fearsome Journeys (sff anthology)
Vernor Vinge The Children of the Sky (sff)
Brian Wood & Becky Cloonan Demo (graphic novel)
Charles Yu How to Live Safely in a Science Fictional Universe (sff) A whole bunch of this is from the Hugo voter's packet, and since the Hugos are over, much of that probably won't get prioritized. (I was very happy with the results of the voting, though.) Other than that, it's a very random collection of stuff, including a few things that I picked up based on James Nicoll's reviews. Now that I have a daily train commute, I should pick up the pace of reading, and as long as I can find enough time in my schedule to also write reviews, hopefully there will be more content in this blog shortly.

1 June 2014

Antonio Terceiro: An introduction to the Debian Continuous Integration project

Debian is a big system. At the time of writing, looking at my local package list caches tells me that the unstable suite contains 21306 source packages, and 42867 binary packages on amd64. Between these 42867 binary packages, there is an unthinkable number of inter-package dependencies. For example, the dependency graph of the ruby package contains some 20-odd other packages. A new version of any of these packages can potentially break some functionality in the ruby package. And that dependency graph is very small. Looking at the dependency graph for, say, the rails package will make your eyes bleed. I tried it here, and GraphViz needed a PNG image of 7653×10003 pixels to draw it. It ain't pretty. Installing rails on a clean Debian system will pull in another 109 packages as part of the dependency chain. Again, as new versions of those packages are uploaded to the archive, there is a probability that a backwards-incompatible change, or even a bug fix which was being worked around, might make some functionality in rails stop working. Even if that probability is low for each package in the dependency chain, with enough packages the probability of any of them causing problems for rails is quite high. And still the rails dependency chain is not that big: libreoffice will pull in another 264 packages, gnome will pull in 1311 dependencies, and kde-full 1320 (!). With a system this big, problems will arrive, and that's a fact of life. As developers, what we can do is try to spot these problems as early as possible, and fix them in time to make a solid release with the high quality Debian is known for. While automated testing is not the proverbial Silver Bullet of Software Engineering, it is an effective way of finding regressions. Back in 2006, Ian Jackson started the development of autopkgtest as a tool to test Debian packages in their installed form (as opposed to testing packages using their source tree).
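The probability argument above can be made concrete. Assuming, simplistically, that each dependency breaks independently with the same small probability p, the chance that at least one of n dependencies breaks is 1 - (1 - p)^n, which grows quickly with n; the 1% figure below is an arbitrary illustration, not a measured rate:

```python
def prob_any_breaks(n_deps, p_break):
    """Probability that at least one of n_deps independent dependencies
    introduces a regression, each breaking with probability p_break."""
    return 1 - (1 - p_break) ** n_deps

# With a hypothetical 1% per-package chance of breakage per archive update:
#   rails-sized chain (109 deps)   -> about 0.67
#   libreoffice chain (264 deps)   -> about 0.93
#   kde-full chain   (1320 deps)   -> effectively 1.0
```

Real package breakages are of course neither independent nor uniform, but the shape of the curve explains why continuously testing the whole archive pays off.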
In 2011, the autopkgtest test suite format was proposed as a standard for the Debian project, in what we now know as the DEP-8 specification. Since then, some maintainers such as myself started experimenting with DEP-8 tests in their packages. There was an expectation in the air that someday, someone would run those tests for the entire archive, and that would be a precious source of QA information. During the holiday break last year, I decided to give it a shot. I initially called the codebase dep8. Later I renamed it to debci, since it could potentially also run other types of test suites in the future. Since early January, ci.debian.net has run an instance of debci for the Debian Project. Debian Continuous Integration will trigger tests at most 4 times a day, 3 hours after each dinstall run. It will update a local APT cache and look for packages that declare a DEP-8 test suite. Each package with a test suite will then have its test suite executed if there was any change in its dependency chain since the last test run. Existing test results are published at ci.debian.net every hour, and at the end of each batch a global status is updated. Maintainers can subscribe to a per-package Atom feed to keep up with their package test results. People interested in the overall status can subscribe to a global Atom feed of events. Since the introduction of Debian CI in mid-January 2014, we have seen an amazing increase in the number of packages with test suites. We had a little less than 200 packages with test suites back then, against around 350 now (early June 2014). The ratio of packages passing their test suite has also improved a lot, going from less than 50% to more than 75%. There is documentation available, including a FAQ for package maintainers with further information about how the system works, how to declare test suites in their packages and how to reproduce test runs locally.
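Declaring such a DEP-8 test suite is deliberately lightweight: a debian/tests/control file lists the test commands and their dependencies. A minimal sketch (the test name smoke is illustrative; see the DEP-8 specification for the full field list):

```
Tests: smoke
Depends: @
Restrictions: allow-stderr
```

Here debian/tests/smoke would be any executable that exits non-zero on failure, and @ in Depends expands to the binary packages built from the source, so the test always runs against the packages as installed rather than the source tree.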
Also available is development information about debci itself, to those inclined to help improve the system. This is just the beginning. debci is under a good rate of development, and you can expect to see a constant flux of improvements. In special, I would like to mention a few people who are giving amazing contributions to the project:

22 April 2014

Bits from Debian: Debian welcomes its 2014 GSoC students!

We're excited to announce that 19 students have been selected to work with Debian during the Google Summer of Code this year! Here is the list of accepted students and projects: As always, you will be able to follow their progress on the SoC coordination mailing-list Congratulations to all the students and let's make sure we all have an amazing summer!


27 February 2013

Sylvain Le Gall: planet.ocaml.org spring cleaning

Hi planet.ocaml.org. Just a quick post to thank Marek Kubica for his help with the planet.ocaml.org spring cleaning. Here are the feeds that have been removed: http://redlizards.com/blog/feed/?tag=ocaml http://blog.mestan.fr/feed/?cat=16 http://www.sairyx.org/tag/ocaml/feed/ http://blog.dbpatterson.com/rss http://www.nicollet.net/toroidal/ocaml/feed/ http://ocamlhackers.ning.com/profiles/blog/feed?tag=ocaml&xn_auth=no http://eigenclass.org/R2/feeds/rss2/all http://procrastiblog.com/category/ocaml/feed http://savonet.sourceforge.net/liquidsoap.rss Here is the feed that has been added: http://newblog.0branch.com/rss.xml Here are the feeds that have been updated: https://ocaml.janestreet.com/?q=rss.xml http://scattered-thoughts.net/atom.xml http://www.rktmb.org/feed/tag/ocaml/atom http://nleyten.com/feed/tag/ocaml/atom http://www.mega-nerd.com/erikd/Blog/index.rss20 http://y-node.com/blog/feeds/latest/ If you want your blog added back, please follow the howto on adding your feed to the planet. We did not remove any feed on purpose; this was just a way to get rid of a lot of 404s. And don't forget, planet.ocamlcore.org is now served by planet.ocaml.org! Update your feed reader.

28 March 2012

Andrew Pollock: [life/americania] Four days in New Orleans

Sarah's Mum had accrued too much annual leave and had to take some time off work, so Sarah did some (very mild) arm twisting and convinced her to come over for 3 weeks, and do a 5 day cruise to Mexico out of New Orleans. Unfortunately, my annual leave situation wasn't quite so abundant, and I had a lot going on at work, so regretfully I didn't join them on the cruise, and instead went to New Orleans for a four day weekend when they returned. From all reports, the cruise was very good. Zoe handled it well, although she did say "home" a lot. One of the two stops in Mexico was to check out some Mayan ruins, which looked awesome from the photos. The other stop involved a dolphin encounter. I was incredibly envious of all that they got to do, and would have loved to have gone with them, as I've never been on a cruise ship either. I can also report that no cats were lost during this bachelor stint. I had a night flight on the Wednesday evening to get there, which was scheduled to get in at around midnight, and I'd booked a motel room near the airport for that night, and we'd booked a vacation rental home for Thursday to Sunday nights. Unfortunately, my flight ended up being delayed something like 2.5 hours, so I didn't get into New Orleans until around 2am. The house we rented did the trick nicely. It was a small "shotgun" duplex in what looked like a nice neighbourhood. It was advertised as being close to the street car line, but they were doing some work on the tracks, so the street car didn't seem to be running as far down the line as it usually did, and it ended up being a bit more of a trek to get to it. It was also extremely slow, and there was a marathon on the Sunday, which closed everything down for a long time, making it a generally pretty unreliable form of transport.
We ended up renting a car for Saturday and Sunday, which was something of a saga in itself, as Enterprise didn't have any cars at the location we'd booked one, so after a couple of hours cooling our heels there (Zoe was incredibly well-behaved, all things considered), they shuttled us over to another location and we ended up with a minivan instead of a compact, which for the same price, allayed our concerns about being able to transport all of our luggage to the airport on Monday morning. We had a very early morning flight on Monday morning to come back, which got into SFO at around 9am, and I went directly to work from there. Thursday We all arrived at the house, separately. It ended up taking them 2 hours to disembark the ship when it came back into port, with Customs taking an eternity to process everyone. I think we went exploring the local area that afternoon, and took a street car into the city to check out Bourbon Street, having a Cajun dinner at Remoulade. Friday In the morning, we went to check out Lafayette Cemetery Number 2. Sarah took Zoe back to the house for a nap, and Sarah's Mum and I continued back to explore the French Quarter some more, walking down the length of Royal Street (which was vastly different from Bourbon Street, just one block over). We had lunch at the French Market. The cemetery was interesting, as pretty much all of the graves were these huge above ground tombs, that seemed to have multiple family members interred in them. Apparently the cemetery filled up quite quickly courtesy of a Yellow Fever outbreak. After lunch, Sarah's Mum and I continued wandering around the French Quarter. We went and took a look at the Mississippi River, and I had an encounter with a grifter who was so good at his job I couldn't bring myself to argue with him over the $20 he diddled me out of. We tried to get to the Civil War Museum, but it closed at 4pm. We looked at the Robert E. 
Lee Monument, which seemed to be draped in drunks, and then I think we rendezvoused with Sarah and Zoe back on Canal Street for dinner at The Court of Two Sisters (which apparently we were under-dressed for, as Sarah and her Mum said we were getting a lot of dirty looks from other patrons). Saturday On Saturday morning, we had the aforementioned car rental experience from hell, and by the time we had the car it was lunchtime, so Zoe napped in the car after lunch on the way out to Oak Alley Plantation, where we were introduced to the delightful beverage known as the mint julep, and took a tour of the house and wandered the grounds. Sunday On Sunday, Sarah and her Mum did a swamp tour, and Zoe and I went to the zoo. As I said earlier, there was a marathon that completely closed down Saint Charles Avenue, which is where the street cars run, so after walking down to where the street cars started operating (which ended up being most of the way down South Carrollton Avenue), the driver informed me that the street cars were queuing up at the corner of South Carrollton and Saint Charles, and I should get off her street car and get on the one at the front of the queue. I did this, but the driver of the front street car informed me that she wouldn't be leaving for an hour and half. At this point, I started considering a bus instead. Zoe and I went to check out the Mississippi River, which was quite close to where we were, and then I went back, and despite a street car having left (without any passengers) the driver of the current street car couldn't tell me when she'd be leaving, so I started walking down Saint Charles Avenue. Unfortunately, Zoe's going through a phase where she wants to be carried everywhere, so I was lugging her all over the place on my hip. Sarah didn't take a stroller with her, and instantly regretted it. Lesson learned. Eventually we managed to get onto a bus, which dropped us off at Audubon Park, which had Audubon Zoo at the other end of it. 
There was a playground near the Saint Charles Avenue end of the park, so Zoe had a bit of a play on that, and then we continued through the park to the zoo. Mercifully, the zoo had dodgy strollers for rent, and there was no way in the world I wasn't going to rent one of them, so that made getting around with Zoe a lot easier on my back. We had a really good time at the zoo. There was some sort of a music festival on in the parklands within the zoo grounds, and that included a jumping castle, which Zoe expressed a desire to have a go on. She had a fabulous time on it. I think she probably spent about 15 minutes in there, without any tears. I was very impressed. I took a brief video of some of her antics. It was getting close to Zoe's nap time, and she was getting tired, but fortunately Sarah and her Mum were able to pick us up from the zoo after their swamp tour and Zoe got to nap back at the house. Monday We had a very early start. Unfortunately, our flight (with United) was a couple of days after United and Continental officially merged, and despite having checked in online, we had to queue up with everyone else (for an extended period of time) to drop off our checked luggage. Then there was a 45 minute line for security screening. We pulled the "toddler going to melt down" card and jumped to the head of the line, but Sarah's Mum had to wait. The flight ended up being delayed because people were stuck in the security line. Overall impressions of New Orleans Fabulous architecture. There were so many gorgeous houses on Saint Charles Avenue and the surrounding area. I'd have loved to do an architecture tour. Crap (but cheap) public transport. $3 gets you a day pass. The street cars are cute, but slow and unpredictable. The drivers were remarkably unhelpful. The buses were okay. Great food. Zoe seemed to have a liking for the spicy stuff. 
I gave her some fresh alligator jerky, and after some initial coughing and spluttering at the spiciness of it, she came back for more. Post-Katrina recovery. We really only saw a very small part of the city, but there were still some houses with boarded up windows, and some vacant blocks where buildings had been demolished, but largely you'd not have been able to tell that large parts of the city had been underwater, from casual inspection. I really enjoyed the trip, even though it was brief, I feel like I got a good feel for the place. We were there just after Mardi Gras, and there were still beads everywhere. Draped all over fences. Over power lines. Trees on the parade route were absolutely covered in beads. I'd have loved to have been there for Mardi Gras. My friend Brandon, who is an excellent street photographer, took some great photos that capture some of it. Photos from Sarah's cruise and our time in New Orleans are here.

16 June 2011

Andrew Pollock: [life] The Afghan leopard gecko page on Wikipedia now has a photo

It was very much a team effort. My friend Brandon, who is an excellent photographer, came over and took some photos. Then my friend Sara, who is a bit of a Wikipedia contributor/editor/expert, showed me how to upload the photo to the Wikimedia Commons, and then it was just a case of gluing it all together by making the edit on the Wikipedia page. Then it was almost immediately deleted. It turns out that apparently the Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic licence (better known as CC BY-NC-SA 2.0) is somehow incompatible with Wikipedia & friends (I haven't figured out the exact ins and outs of why yet). I had to get Brandon to re-license the photo I used as just CC BY-SA 2.0 (basically ditching the non-commercial restriction). Then it magically reappeared (after some help from Sara with the response). Crikey, this Wikipedia contributing is tricky stuff. At least when Speck's owners get back to pick her up, I can tell them their gecko is famous.

10 May 2010

Brandon Holtsclaw: Ubuntu Developer Summit Keynote from Mark Shuttleworth


Mark Shuttleworth just kicked off UDS with the keynote. I have the audio, just under one hour in length, for everyone who could not attend the Ubuntu Developer Summit this time.

Keynote in MP3 ( Direct )
Keynote in MP3 ( Torrent )
Keynote in OGG ( Torrent )

He announced the next round of radical changes to the desktop, along with an Ubuntu "Lite" that Dell and other OEMs will install dual-boot, booting to a usable web browser in 7 seconds. And we're not talking about something coming up: this is available today by installing "Unity".

The Perfect 10, as Mark describes it, will consist of UNE, new icons, the new font and many other desktop improvements. He is pushing for a release on 10 October 2010, or 10.10.10. For those of you geeks out there (like me): binary 101010 = decimal 42 (HHGTTG reference).

This is going to be an exciting cycle !

UPDATE: Looks like Dropbox did not like the excessive traffic; I have updated the links.

2 May 2010

Brandon Holtsclaw: Lucid Release Party Recap

The Kansas City Ubuntu Lucid Release Party was a great get-together; it was nice seeing some other geeks around the KC metro area. Hopefully we'll make this a more frequent occurrence than once every 6 months.

Here is a picture from the event (sorry I didn't take more than one, and I'm not pictured since I was running the camera, but my empty chair is :P).



Hope everyone else has a great weekend, enjoy the new Release !

28 April 2010

Brandon Holtsclaw: Kansas City Ubuntu 10.04 Lucid Release Party Update

Only one more day until release and a few more until the Kansas City Ubuntu Release Party.

I have updated this page with all the needed details (it even has a section to RSVP if you wish to be counted).

A huge thanks goes out to the great people over at the Cowtown Computer Congress hackerspace for offering to host the event! Should be a great time.

23 April 2010

Brandon Holtsclaw: Kansas City: Lucid Release, and GPG Key signing

With only a few days until the Ubuntu 10.04 "Lucid" release, I'm starting to plan a release party here in the Kansas City metro area on the evening of Friday, April 30th 2010. I'm not 100% sure on the venue yet (ideas welcome), but I have set up a wiki page entry to get the discussion going.

I'd also like to set up a time beforehand (maybe an hour prior) to do a "key signing party" for anyone who would like to exchange information. If there are any Debian Developers in the area, I'd be thrilled to have one stop by to sign keys too, even if they could not (or would rather not) stay for the Lucid release party; I still need one to begin part 2 of the NM process.

Just drop a note in the comments or email me ( me@brandonholtsclaw.com ) if you need any more info.

20 April 2010

Brandon Holtsclaw: My Email Setup

A few people on IRC have asked me to elaborate on my email reading setup. (As far as hosting goes, I use Google Apps for your domain, but I won't be getting into the details of that.)

First, this will work for anyone who currently uses IMAP to fetch their mail, no matter the provider, even though I'll be highlighting Gmail because that's what I use. This also works well as a backup of your Gmail and/or IMAP email.

In short the setup is as follows:
IMAP Server --> offlineimap --> Local Maildir --> Evolution Reading Local Maildir

So let's get to the meat of this. First, install offlineimap with sudo apt-get install offlineimap, then let's configure it. Below is my config file; most everything should be straightforward:
[general]
accounts = brandonholtsclaw
maxsyncaccounts = 1
ui = Noninteractive.Basic
metadata = ~/.offlineimap

[Account brandonholtsclaw]
localrepository = gmailLocal
remoterepository = gmailRemote

[Repository gmailLocal]
type = Maildir
localfolders = /home/bholtsclaw/.maildir/brandonholtsclaw/
sep = /

[Repository gmailRemote]
type = Gmail
remoteuser = me@brandonholtsclaw.com
remotepass = XXXX
realdelete = no
# This translates folder names such that everything (including your Inbox)
# appears in the same folder (named root).
#nametrans = lambda foldername: re.sub('\[G.*ail\] ^\.INBOX\.*', '.', re.sub('^', './', foldername))
nametrans = lambda foldername: re.sub('^(\[G.*ail\] INBOX)', '.', foldername)
# This excludes some folders from being synced. You will almost
# certainly want to exclude 'All Mail', 'Trash', and 'Starred', at
# least. Note that offlineimap does NOT honor subscription details.
folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail', '[Gmail]/Trash', '[Gmail]/Spam', '[Gmail]/Starred']
Save that file as ~/.offlineimaprc and then create the directory ~/.maildir/
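If you're curious what the active nametrans line in that config actually does, here's a quick sketch you can run (the folder names are just illustrative examples, not ones from my account): it strips Gmail's "[Gmail] INBOX" prefix so the inbox lands at the root of the Maildir, and leaves every other folder name untouched.

```python
import re

# Same rewrite as the nametrans lambda in the config above:
# collapse the "[Gmail] INBOX" prefix into ".", i.e. the Maildir root.
def nametrans(foldername):
    return re.sub(r'^(\[G.*ail\] INBOX)', '.', foldername)

print(nametrans('[Gmail] INBOX'))  # the inbox maps to the root folder: "."
print(nametrans('Lists/debian'))   # other folders pass through unchanged
```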

Make sure to chmod 0600 ~/.offlineimaprc, because it stores your email password and you don't want anyone else reading it. Other than that, you are now all set: just set up a new email account in Evolution, tell it to use Maildir, point it at your ~/.maildir directory, and you're golden.

If anyone else uses a setup similar to this and has noticed something glaring I missed, please drop me a comment and I'll update the post (plus I might learn something new).

Cheers!
P.S. Don't forget to add a crontab entry to run offlineimap once in a while (every ~10 minutes) to check for new mail and sync your local changes back to the IMAP server.
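As a sketch of that crontab entry (the interval and UI choice here are just one way to do it; offlineimap's -o flag runs a single sync and exits rather than staying resident):

```shell
# Sync mail every 10 minutes; -o = run once and exit,
# -u Noninteractive.Quiet keeps cron mail quiet.
*/10 * * * * offlineimap -o -u Noninteractive.Quiet
```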

5 February 2010

Brandon Holtsclaw: My Friend Audio Business Converter Product

Audio information can be stored in a wide variety of formats, which vary in sound quality, file size, compatibility, etc. There are lots of extensions that cannot be read by some players, so many audiophiles are often confronted with compatibility issues. There are lots of audio conversion tools available on the web, but there are many more formats out there than they support. Most converters support only a few extensions and therefore fail to satisfy users' requirements.

Finding a high quality mp3 converter has long been a challenge. Happily, software developers have designed Factory Audio Converter. It converts audio files to MP3, WAV, WMA, OGG, APE, MP4, FLAC, MPC, AAC. The array of input formats is even broader: MP3, RA, RMM, RAM, RPM, RM, RMVB, WAV, OGG, CDA, APE, APL, MPC, MP+, WMA, FLAC, AAC, M4A, MP4, TTA, OFR, SPX, WV, XM, IT, S3M, MOD, MTM, UMX, AMR, MP3/OGG compressed MODs, MIDI, VQF, AU / SND, PAF, IFF, SVX, WAV, SF, VOC, W64, MAT4, MAT5, PVF, XI, HTK, CAF. If you are looking for a flexible audio conversion utility, this one is the best!

Good audio converters feature batch mode. If you have a lot of wma files and would like to save them as mp3, you need a fast-working and powerful wma to mp3 converter. Factory Audio Converter can convert hundreds of files within seconds. If you have gigs of music on your PC, you can set the utility to convert all of them and turn in. You will get up in the morning to see all your files saved in your preferred format!

By aid of Factory Audio Converter, you can convert wma to mp3 right from the desktop. Once installed on a PC, the program integrates into Windows seamlessly, and Convert to option appears in the right-button menu. All you need to do is right-click on a wma file and select Convert to. You can also run Factory Audio Converter from within other programs via command line.

Advanced users do not only want to convert wma to mp3. They want to do it easily! User-friendliness all but heads the top-importance list of characteristics. Factory Audio Converter has a robust and intuitive user interface. It does not take any background to understand how to use it. The program wizard will guide you through the whole procedure and help you select your settings. You may either skip it and leave the default settings or specify your own. Sound quality will depend on your choice of bitrate, samplerate and channel.

The more new features a program has, the more likely it is to enjoy demand. Factory Audio Converter incorporates a built-in audio player and CD ripper. You can listen to your tracks prior to conversion and compare output and input files. The CD ripper will rip your CD files and help you save them on any type of hard drive. Factory Audio Converter permits you to change wav to mp3 and save a lot of drive space. Now you can use any portable player and listen to your favorite musicians any time, wherever you are!

You can buy Factory Audio Converter at a price comparable to that of any other converter. However, few converters are comparable to Factory Audio Converter in terms of user friendliness and productiveness.

4 February 2010

Brandon Holtsclaw: More Women in Management, the More Profitable the Company

According to an article in the Washington Post, Weekly Edition, 9, by Katty Kay and Claire Shipman, companies that employ more women in upper level management are more profitable than those that rely heavily on male talent to run their businesses.

This exciting (but not surprising) news is not limited to the United States, but is true throughout the world.

And it's not only one study, but at least half a dozen, from a broad spectrum of organizations such as Columbia University, McKinsey & Co., Goldman Sachs and Pepperdine University, that document a clear relationship between women in senior management and corporate financial success. By all measures, more women in your company means better performance.

One of the points made in the Washington Post article was that men may naturally be more prone to risk taking and competition (thanks to the hormone testosterone), but women are better collaborators and better at achieving long-term results.

So men, if you want your businesses to flourish, hire and promote more qualified women. The results are in, and the statistics speak for themselves. Men are the hares and women are the tortoises, who will win the race steady and slow.
