Search Results: "ashley"

31 January 2024

Valhalla's Things: Macrame Bookbag

Posted on January 31, 2024
Tags: madeof:atoms, craft:macrame
a macrame bag in ~3 mm ecru yarn, with very irregular knots of different types, holding a book with a blue cover. The bottom part has a rigid single layer triangle and a fringe.

In late 2022 I prepared a batch of drawstring backpacks in cotton as reusable wrappers for Christmas gifts; however I didn't know what cord to use, didn't want to use paracord, and couldn't find anything that looked right in the local shops. With Christmas getting dangerously closer, I visited a craft materials website for unrelated reasons, found out that they sold macrame cords, and panic-bought a few types in the hope that at least one would work for the backpacks. I got lucky, and my first choice fitted just fine, and I was able to finish the backpacks in time for the holidays.

And then I had a box full of macrame cords in various sizes and types that weren't the best match for the drawstring in a backpack, and no real use for them.

I don't think I had ever done macrame, but I have made friendship bracelets in primary school, and a few Friendship Bracelets, But For Real Men So We Call Them Survival Bracelets(TM) more recently, so I didn't bother reading instructions or tutorials online; I just grabbed the Ashley Book of Knots to refresh myself on the knots used, and decided to make myself a small bag for an A6 book. I chose one of the thin, ~3 mm cords, Tre Sfere Macramé Barbante, of which there was plenty, so that I could stumble around with no real plan.

A loop of four cords, with a handle made of square knots that keeps it together.

I started by looping 5 m of cord, making iirc 2 rounds of a loop about the right size to go around the book with a bit of ease, then used the ends as filler cords for a handle: I wrapped them around the loop and worked square knots all over them to make a handle. Then I cut the rest of the cord into 40 pieces, each 4 m long, because I had no idea how much I was going to need (spoiler: I successfully got it wrong :D ).

I joined the cords to the handle with lark's head knots, 20 per side, and then I started knotting without a plan or anything, alternating between hitches and square knots, sometimes close together and sometimes leaving some free cord between them. And apparently I also completely forgot to take in-progress pictures.

I kept working on this for a few months, knotting a row or two now and then, until the bag was long enough for the book. Then I closed the bottom by taking one cord from the front and the corresponding one on the back and knotting them together (I don't remember how), and finally I made a rigid triangle of tight square knots with all of the cords, progressively leaving out a cord from each side, and cutting it in a fringe.

I then measured the remaining cords and saw that the shortest ones were about a meter long, but the longest ones were up to 3 meters: I could have cut them much shorter at the beginning (and maybe added a couple more cords). The leftovers will be used, in some way.

And then I postponed taking pictures of the finished object for a few months.

The same bag, empty and showing how the sides aren't straight.

Now the result is functional, but I have to admit it is somewhat ugly: not as much for the lack of a pattern (that I think came out quite fine) but because of how irregular the knots are; I'm not confident that the next time I will be happy with their regularity, either, but I hope I will improve, and that's one important thing.
And the other important thing is: I enjoyed making this, even if I kept interrupting the work, and I think that there may be some other macrame in my future.

30 July 2023

Russell Coker: Links July 2023

Phys.org has an interesting article about finding evidence for nanohertz gravity waves [1]. 1 nanohertz corresponds to a wavelength of 31.7 light years!

Wired has an interesting story about OpenAI saying that no further advances will be made with larger training models [2].

Bruce Schneier and Nathan Sanders wrote an insightful article about the need for government-run GPT type systems [3]. He focuses on the US, but having other countries/groups of countries do it would be good too. We could have a Chinese one, an EU one, etc. I don't think it would necessarily make sense for a small country like Australia to have one, but it would make a lot more sense than having nuclear submarines (which are much more expensive).

The Roadmap project is a guide for learning new technologies [4]. The content seems quite good.

Bigthink has an informative and darkly amusing article "Horror stories of cryonics: The gruesome fates of futurists hoping for immortality" [5].

From this month in Australia psilocybin (the active ingredient in Magic Mushrooms) can be prescribed for depression and MDMA (known as Ecstasy on the streets) can be prescribed for PTSD [6]. That's great news!

Slate has an interesting article about the Operation Underground Railroad organisation that purports to help sex-trafficked children [7]. This is noteworthy now with the controversy over the recent movie about that. Apparently they didn't provide much help for kids after they had been rescued, and at least some of the kids were trafficked specifically to fulfill the demand that they created by offering to pay for it. Vigilantes aren't as effective as law enforcement.

The ACCC is going to prevent Apple and Google from forcing app developers to give them a share of in-app purchases in Australia [8]. We need this in every country!

This site has links to open source versions of proprietary games [9].

Vice has an interesting article about the Hungarian neuroscientist Viktor Tóth, who taught rats to play Doom 2 [10]. The next logical step is to have mini tanks that they can use in real battlefields. Like the Mason's Rats episode of Love Death and Robots on Netflix.

Brian Krebs wrote a mind-boggling pair of blog posts about the Ashley Madison hack [11]. One of the people involved was a disgruntled Jewish ex-employee who sent anti-semitic harassment to the Jewish CEO and may have cooperated with anti-semitic organisations to harass him, but he killed himself (due to mental health problems) before the hack took place.

Long Now has an insightful blog post about digital avatars being used after the death of the people they are based on [12].

Tavis Ormandy's description of the zenbleed bug is interesting [13]. The technique for finding the bug is interesting, as is the information on how the internals of the CPUs in question work. I don't think this means AMD is bad; trying to deliver increasing performance while limited by the laws of physics is difficult, and mistakes are sometimes made. Let's hope the microcode updates are well distributed.

The Hacktivist documentary about Andrew Bunnie Huang is really good [14].

Bunnie's lecture about supply chain attacks is worth watching [15]. Most descriptions of this issue don't give nearly as much information. However bad you thought this problem was, after you watch this lecture you will realise it's worse than that!

18 January 2016

David Pashley: NullPointerExceptions in Xerces-J

Xerces is an XML library for several languages, but is a very common library in Java. I recently came across a problem with code intermittently throwing a NullPointerException inside the library:
java.lang.NullPointerException
        at org.apache.xerces.dom.ParentNode.nodeListItem(Unknown Source)
        at org.apache.xerces.dom.ParentNode.item(Unknown Source)
        at com.example.xml.Element.getChildren(Element.java:377)
        at com.example.xml.Element.newChildElementHelper(Element.java:229)
        at com.example.xml.Element.newChildElement(Element.java:180)
         
 
You may also find the NullPointerException in ParentNode.nodeListGetLength() and other locations in ParentNode. Debugging this was not helped by the fact that the xercesImpl.jar is stripped of line numbers, so I couldn't find the exact issue. After some searching, it appeared that the issue was down to the fact that Xerces is not thread-safe. ParentNode caches iterations through the NodeList of children to speed up performance and stores them in the Node's Document object. In multi-threaded applications, this can lead to race conditions and NullPointerExceptions. And because it's a threading issue, the problem is intermittent and hard to track down. The solution is to synchronise your code on the DOM, and this means the Document object, everywhere you access the nodes. I'm not certain exactly which methods need to be protected, but I believe it needs to be at least any function that will iterate a NodeList. I would start by protecting every access and testing performance, and removing some if needed.
/**
 * Returns the concatenation of all the text in all child nodes
 * of the current element.
 */
public String getText() {
    StringBuilder result = new StringBuilder();

    synchronized (m_element.getOwnerDocument()) {
        NodeList nl = m_element.getChildNodes();
        for (int i = 0; i < nl.getLength(); i++) {
            Node n = nl.item(i);

            if (n != null && n.getNodeType() == org.w3c.dom.Node.TEXT_NODE) {
                result.append(((CharacterData) n).getData());
            }
        }
    }
    return result.toString();
}
Notice the synchronized (m_element.getOwnerDocument()) block around the section that deals with the DOM. The NPE would normally be thrown on the nl.getLength() or nl.item() calls. Since putting in the synchronized blocks, we've gone from having 78 NPEs between 2:30am and 3:00am to having zero in the last 12 hours, so I think it's safe to say this has drastically reduced the problem.

23 April 2014

David Pashley: Working with development servers

I can't believe that this is not a solved problem by now, but my Google-fu is failing me. I'm looking for a decent, working extension for Chrome that can redirect a list of hosts to a different server while setting the Host: header to the right address. Everything I've found so far assumes that you're running the servers on different URLs. I'm using the same URL on different servers and don't want to mess around with /etc/hosts. Please tell me something exists to do this?
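Not a browser extension, but for quick command-line testing something along these lines should work; the IP address, hostname and path are placeholders:
# Send the request to a specific server while keeping the production Host: header
curl -H 'Host: www.example.com' http://192.0.2.10/some/path
# Newer curl releases can instead override DNS resolution for a single hostname
curl --resolve www.example.com:80:192.0.2.10 http://www.example.com/some/path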

16 April 2014

David Pashley: Bad Password Policies

After the whole Heartbleed fiasco, I've decided to continue my march towards improving my online security. I'd already begun the process of using LastPass to store my passwords and generate random passwords for each site, but I hadn't completed the process, with some sites still using the same passwords, and some having less than ideal strength passwords, so I spent some time today improving my password position. Here are some of the bad examples of password policy I've discovered today.

First up we have Live.com. A maximum of 16 characters from the Microsoft auth service. Seems to accept any character though.

Screenshot from 2014-04-15 21:36:57

This excellent example is from creditexpert.co.uk, one of the credit agencies here in the UK. They not only restrict to 20 characters, they restrict you to @, ., _ or . So much for teaching people how to protect themselves online.

Screenshot from 2014-04-15 17:38:28

Here's Tesco.com after attempting to change my password to QvHn#9#kDD%cdPAQ4&b&ACb4x%48#b. If you can figure out how this violates their rules, I'd love to know. And before you ask, I tried without numbers and that still failed, so it can't be the "three and only three" thing. The only other idea might be that they meant i.e. rather than e.g., but I didn't test that.

Screenshot from 2014-04-15 16:20:17

Edit: Here is a response from Tesco on Twitter:

Screenshot from 2014-04-16 07:47:58

Here's a poor choice from ft.com, refusing to accept non-alphanumeric characters. On the plus side they did allow the full 30 characters in the password.

Screenshot from 2014-04-15 15:22:08

The finest example of a poor security policy is a company who will remain nameless due to their utter lack of security. Not only did they not use HTTPS, they accepted a 30 character password and silently truncated it to 20 characters. The reason I know this is because when I logged out and tried to log in again and then used the forgot my password option, they emailed me the password in plain text.

I have also been setting up two-factor authentication where possible. Most sites use the Google Authenticator application on your mobile to give you a 6 digit code to type in, in addition to your password. I highly recommend you set it up too. There's a useful list of sites that implement 2FA and links to their documentation at http://twofactorauth.org/.

I realise that my choice of LastPass requires me to trust them, but I think the advantages outweigh the disadvantages of having many sites using the same passwords and/or low strength passwords. I know various people cleverer than me have looked into their system and failed to find any obvious flaws.

Remember people, when you implement a password policy, allow the following things: If you are going to place restrictions, please make sure the documentation matches the implementation, provide a client-side implementation to match and provide quick feedback to the user, and make sure you explicitly say what is wrong with the password, rather than referring back to the incorrect documentation. There are also many JS password strength meters available to show how secure the inputted passwords are. They are possibly a better way of providing feedback about security than having arbitrary policies that actually harm your security. As someone said to me on Twitter, it's not like "password is too strong" was ever a bad thing.

23 September 2013

David Pashley: A New Chapter

It's been a while since I posted anything to my personal site, but I figured I should update with the news that I'm leaving Brighton (and the UK) after nearly nine years living by the seaside. I'll be sad to leave this city, which is the greatest place to live in the country, but I have the opportunity to go explore the world and I'd be crazy to let it pass me by. So what am I doing? In ten short days, I plan to sell all my possessions bar those I need to live and work day to day, and will be moving to Spain for three months. I'm renting a flat in Madrid, where I'll continue to work for my software development business and set about improving both my Spanish and my fitness. If you want to follow my adventures, or read about the reasons for my change, then check out the Experimental Nomad website.

17 July 2012

Luciano Bello: there is no cabal.. but, what s a cabal?

During my long trip to Nicaragua I made progress on my reading: Quicksilver, by Neal Stephenson. In the Spanish edition the title is Azogue. But I'm assuming that you are not a Spanish speaker. Here is a small fragment (in English) I found there:
You must remember that the planters are short-sighted. They're all desperate to get out of Jamaica; they wake up every day expecting to find themselves, or their children, in the grip of some tropical fever. To import female Neegers would cost nearly as much as to import males, but the females cannot produce as much sugar, particularly when they are breeding. Daniel had finally recognized this voice as belonging to Sir Richard Apthorp, the second A in the CABAL.
It's a bit embarrassing that I only now realized where the word cabal comes from. And I'm posting this as a heads-up for everyone who knows there is no cabal in Debian, but doesn't know the origin of the word cabal. Stephenson changed the names of the historic Cabal, a group of high councillors of King Charles II of England, Scotland and Ireland, in 1668. In the novel, they are: John Comstock (Earl of Epsom), Louis Anglesey (Duke of Gunfleet), Knott Bolstrood (Count Penistone), Sir Richard Apthorp and Hugh Lewis (Duke of Tweed). In the real world they were:
Thomas Clifford, 1st Baron Clifford of Chudleigh (1630-1673).
Henry Bennet, 1st Earl of Arlington (1618-1685).
George Villiers, 2nd Duke of Buckingham (1628-1687).
Anthony Ashley Cooper, 1st Baron Ashley of Wimborne St Giles (1621-1683).
John Maitland, 1st Duke of Lauderdale (1616-1682).
This group, rather than the King, held the effective power in a royal council.

29 January 2011

David Pashley: Multiple Crimes

mysql> select "a" = "A";
+-----------+
  "a" = "A"  
+-----------+
          1  
+-----------+
1 row in set (0.00 sec)
WTF? (via Nuxeo)
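The surprise comes from MySQL's default collation being case-insensitive. As a rough sketch, forcing a byte-wise comparison with the BINARY operator gives the expected result; the command below assumes a default local MySQL login and works the same at the mysql> prompt:
mysql -e 'SELECT "a" = "A", "a" = BINARY "A";'
# first column: 1 (case-insensitive collation), second column: 0 (binary comparison)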
Read Comments (5)

7 September 2010

Brett Parker: Life, the Universe, and Everything (or something)

Forgive me readers, for I have sinned... or at least, not blogged for a looooong time. I've been reasonably busy, stopped working for Runtime Collective and started contracting for a small company in Portsmouth, it's been somewhat hectic, and the commute isn't brilliant, but it has given me a lot of time for reading... Really, mostly a lot of my time during the week is commuting, working, reading and sleeping, haven't had much time for anything else, and have mostly been knackered... I'm getting into the habit now, though, so am getting a bit more time for getting on with other things.

I have, however, gained a lot more fiction books in the last few months, and am averaging 2 or 3 books a week at the moment. Most enjoyed the first 2 books of the Demon Trilogy by Peter V Brett - have to wait till 2012 to get the last book, though - which is a loooong time! Actually, I seem to have made a habit of ending up starting trilogies recently where not all of the books are published; in the case of Feed by Mira Grant I managed to really screw up - not even the second book is due out until next year - but the first is really quite good. I've also got the first book of Mark C Newton's Legend of the Red Sun books, really enjoyed that, but am waiting for the second in paperback, as they're much easier to carry around and use on the tube. Have read the Millennium trilogy, and thoroughly enjoyed it. The first film, although a reasonable adaptation of the book, misses some good parts of the book, and doesn't leave you feeling as if you know the characters as much. Am also planning on working through all the Laundry books by Charles Stross, I picked up The Fuller Memorandum on a whim (OK, it was on a 2 for 1 in Waterstones...) and really enjoyed it, so am going to get the rest (when I start getting low on books again - I've currently got another 5 queued up, so it might not be for a couple of weeks).

So, other than that, what have I been up to? Well, played the last bits of DLC for Assassins Creed 2 on the PS3, and thoroughly looking forward to the release of AC3, played most of the way through Lego Harry Potter, wandered about and saw people... you know, the usual things! Oh, and of course, the August bank holiday weekend (28-30) was spent with good peoples at the ever fantastic Debian UK BBQ, where much Nero was imbibed, many people spoken to, and some gpg signing done - as always many thanks to Steve McIntyre for a fantastic weekend, to Collabora and Mythic Beasts for the beer, and to antibodyMX and Coding Craft Ltd for the food (Oi, Pashley, stop being crap and sort out your company website...).

17 March 2010

David Pashley: Letter to my MP regarding the Digital Economy Bill

I have just sent the following email to my MP, David Lepper MP, outlining my concerns about the Digital Economy Bill. I urge you to write to your MP with a similar letter. Open Rights Group's guide to writing to your MP
From: David Pashley <david@davidpashley.com>
To: David Lepper
Cc: 
Bcc: 
Subject: Digital Economy Bill
Reply-To: 
Dear Mr Lepper, 
I'm writing to you so express my concern at the Digital Economy Bill
which is currently working its way through the House of Commons. I
believe that the bill as it stands will have a negative effect on
the digital economy that the UK and in particular Brighton have
worked so hard to foster. 
Section 4-17 deals with disconnecting people reported as infringing
copyright. As it stands, this section will result in the possibility
that my internet connection could be disconnected as a result of the
actions of my flatmate. My freelance web development business is
inherently linked to my access of the Internet. I currently allow my
landlady to share my internet access with her holiday flat above me.
I will have to stop this arrangement for fear of a tourist's actions
jeopardising my business. 
This section will also result in many pubs and cafes, much
favoured by Brighton's freelancers, removing their free wifi. I
have often used my local pub's wifi when I needed a change of
scenery. I know a great many freelancers use Cafe Delice in the
North Laine as a place to meet other freelancers and discuss
projects while drinking coffee and working.
Section 18 deals with ISPs being required to prevent access to sites
hosting copyrighted material. The ISPs can insist on a court
injunction forcing them to prevent access. Unfortunately, a great
many ISPs will not want to deal with the costs of any court
proceedings and will just block the site in question. A similar law
in the United States, the Digital Millennium Copyright Act (DMCA)
has been abused time and time again by spurious copyright claims to
silence critics or embarrassments.  A recent case is Microsoft
shutting down the entire Cryptome.org website because they were
embarrassed by a document they had hosted.  There are many more
examples of abuse at http://www.chillingeffects.org/
A concern is that there's no requirement for the accuser to prove
infringement has occurred, nor is there a valid defense that a user
has done everything possible to prevent infringement. 
There are several ways to reduce copyright infringement of music and
movies without introducing new legislation. The promotion of legal
services like iTunes and spotify, easier access to legal media, like
Digital Rights Management free music. Many of the record labels and
movie studios are failing to promote competing legal services which
many people would use if they were aware of them. A fairer
alternative to disconnection is a fine through the courts. 
You can find further information on the effects of the Digital
Economy Bill at http://www.openrightsgroup.org/ and
http://news.bbc.co.uk/1/hi/technology/8544935.stm
The bill has currently passed the House of Lords and its first
reading in the Commons. There is a danger that without MPs demanding
to scrutinise this bill, this damaging piece of legislation will be
rushed through Parliament before the general election.
I ask you to demand your right to debate this bill and to amend the
bill to remove sections 4-18. I would also appreciate a response to
this email. If you would like to discuss the issues I've raised
further, I can be contacted on 01273 xxxxxx or 07966 xxx xxx or via
email at this address.
Thank you for your time.
-- 
David Pashley
david@davidpashley.com
Read Comments (0)

7 March 2010

David Pashley: Mod_fastcgi and external PHP

Has anyone managed to get a standard version of mod_fastcgi to work correctly with FastCGIExternalServer? There seems to be a complete lack of documentation on how to get this to work. I have managed to get it working by removing some code which appears to completely break AddHandler. However, people on the FastCGI list told me I was wrong for making it work. So, if anyone has managed to get it to work, please show me some working config.
Read Comments (1)

25 February 2010

David Pashley: Reducing Coupling between modules

In the past, several of my Puppet modules have been tightly coupled. A perfect example is Apache and Munin. When I install Apache, I want munin graphs set up. As a result my apache class has the following snippet in it:
munin::plugin { "apache_accesses": }
munin::plugin { "apache_processes": }
munin::plugin { "apache_volume": }
This should make sure that these three plugins are installed and that munin-node is restarted to pick them up. The define was implemented like this:
define munin::plugin (
      $enable = true,
      $plugin_name = false,
      ) {
   include munin::node
   file { "/etc/munin/plugins/$name":
      ensure => $enable ? {
         true => $plugin_name ? {
            false => "/usr/share/munin/plugins/$name",
            default => "/usr/share/munin/plugins/$plugin_name"
         },
         default => absent
      },
      links => manage,
      require => Package["munin-node"],
      notify => Service["munin-node"],
   }
}
(Note: this is a slight simplification of the define.) As you can see, the define includes munin::node, as it needs the definition of the munin-node service and package. As a result of this, installing Apache drags in munin-node on your server too. It would be much nicer if the apache class only installed the munin plugins if you also install munin on the server. It turns out that this is possible, using virtual resources. Virtual resources allow you to define resources in one place, but not make them happen unless you realise them. Using this, we can make the file resource in the munin::plugin virtual and realise it in our munin::node class. Our new munin::plugin looks like:
define munin::plugin (
      $enable = true,
      $plugin_name = false,
      ) {
   # removed "include munin::node"
   # Added @ in front of the resource to declare it as virtual
   @file { "/etc/munin/plugins/$name":
      ensure => $enable ? {
         true => $plugin_name ? {
            false => "/usr/share/munin/plugins/$name",
            default => "/usr/share/munin/plugins/$plugin_name"
         },
         default => absent
      },
      links => manage,
      require => Package["munin-node"],
      notify => Service["munin-node"],
      tag => munin-plugin,
   }
}
We add the following line to our munin::node class:
File<  tag == munin-plugin  >
The odd syntax in the munin::node class realises all the virtual resources that match the filter, in this case any that are tagged munin-plugin. We've had to define this tag ourselves, as the auto-generated tags don't seem to work. You'll also notice that we've removed the munin::node include from the munin::plugin define, which means that we no longer install munin-node just by using the plugin define. I've used a similar technique for logcheck, so additional rules are not installed unless I've installed logcheck. I'm sure there are several other places where I can use it to reduce such tight coupling between classes.
Read Comments (2)

22 December 2009

David Pashley: Maven and Grails 1.2 snapshot

Because I couldn't find the information anywhere else, if you want to use maven with Grails 1.2 snapshot, use:
mvn org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4:generate \
    -DarchetypeGroupId=org.grails \
    -DarchetypeArtifactId=grails-maven-archetype \
    -DarchetypeVersion=1.2-SNAPSHOT \
    -DgroupId=uk.org.catnip \
    -DartifactId=armstrong \
    -DarchetypeRepository=http://snapshots.maven.codehaus.org/maven2
Read Comments (0)

8 December 2009

Bastian Venthur: Printing

Funny coincidence that David writes about how well Linux and printers have gone together for 12 years, while it has apparently been impossible to print anything with CUPS in unstable for a week ;)

David Pashley: Conversations regarding printers

I just had the following conversation with my linux desktop:
Me: "Hi, I'd like to use my new printer please." Computer: "Do you mean this HP Laserjet CP1515n on the network?" Me: "Erm, yes I do." Computer: "Good. You've got a test page printing as we speak. Anything else I can help you with?"
Sadly I don't have any alternative modern operating systems to compare it to, but having done similar things with linux over the last 12 years, I'm impressed with how far we've come. Thank you to everyone who made this possible.
Read Comments (3)

13 October 2009

David Pashley: Tarballs explained

This entry was originally posted in slightly different form to Server Fault

If you're coming from a Windows world, you're used to using tools like zip or rar, which compress collections of files. In the typical Unix tradition of doing one thing and doing it well, you tend to have two different utilities: a compression tool and an archive format. People then use these two tools together to give the same functionality that zip or rar provide.

There are numerous different compression formats; the common ones used on Linux these days are gzip (sometimes known as zlib) and the newer, higher performing bzip2. Unfortunately bzip2 uses more CPU and memory to provide the higher rates of compression. You can use these tools to compress any file and, by convention, files compressed with these formats end in .gz and .bz2 respectively. You can use gzip and bzip2 to compress, and gunzip and bunzip2 to decompress, these formats.

There are also several different types of archive formats available, including cpio, ar and tar, but people tend to only use tar. These allow you to take a number of files and pack them into a single file. They can also include path and permission information. You can create and unpack a tar file using the tar command. You might hear these operations referred to as "tarring" and "untarring". (The name of the command comes from a shortening of Tape ARchive. Tar was an improvement on the ar format in that you could use it to span multiple physical tapes for backups.)
# tar -cf archive.tar list of files to include
This will create (-c) an archive in a file (-f) called archive.tar. (.tar is the conventional extension for tar archives.) You should now have a single file that contains five files ("list", "of", "files", "to" and "include"). If you give tar a directory, it will recurse into that directory and store everything inside it.
# tar -xf archive.tar
# tar -xf archive.tar list of files
This will extract (-x) the previously created archive.tar. You can extract just the files you want from the archive by listing them on the end of the command line. In our example, the second line would extract "list", "of" and "files", but not "to" and "include". You can also use
# tar -tf archive.tar
to get a list of the contents before you extract them. So now you can combine these two tools to replicate the functionality of zip:
# tar -cf archive.tar directory
# gzip archive.tar
You'll now have an archive.tar.gz file. You can extract it using:
# gunzip archive.tar.gz
# tar -xf archive.tar
We can use pipes to save us having an intermediate archive.tar:
# tar -cf - directory | gzip > archive.tar.gz
# gunzip < archive.tar.gz | tar -xf -
You can use - with the -f option to specify stdin or stdout (tar knows which one based on context). We can do slightly better because, in an apparent breaking of the "do one thing well" idea, tar has the ability to compress its output and decompress its input by using the -z argument (I say apparent, because it still uses the gzip and gunzip command line tools behind the scenes).
# tar -czf archive.tar.gz directory
# tar -xzf archive.tar.gz
To use bzip2 instead of gzip, use bzip2, bunzip2 and -j instead of gzip, gunzip and -z respectively (tar -cjf archive.tar.bz2). Some versions of tar can detect a bzip2 file archive when you use -z and do the right thing, but it is probably worth getting into the habit of being explicit. More info:
Read Comments (8)

11 October 2009

David Pashley: mod_proxy or mod_jk

This entry was originally posted in slightly different form to Server Fault

There are several ways to run Tomcat applications. You can either run Tomcat directly on port 80, or you can put a webserver in front of Tomcat and proxy connections to it. I would highly recommend using Apache as a front end. The main reason for this suggestion is that Apache is more flexible than Tomcat. Apache has many modules that would require you to code support yourself in Tomcat. For example, while Tomcat can do gzip compression, it's a single switch: enabled or disabled. Sadly you can not compress CSS or javascript for Internet Explorer 6. This is easy to support in Apache, but impossible to do in Tomcat. Things like caching are also easier to do in Apache.

Having decided to use Apache to front Tomcat, you need to decide how to connect them. There are several choices: mod_proxy (more accurately, mod_proxy_http in Apache 2.2, but I'll refer to this as mod_proxy), mod_jk and mod_jk2. Mod_jk2 is not under active development and should not be used. This leaves us with mod_proxy or mod_jk. Both methods forward requests from Apache to Tomcat. mod_proxy uses the HTTP that we all know and love. mod_jk uses AJP, a binary protocol. The main advantages of mod_jk are:

A slight disadvantage is that AJP is based on fixed sized chunks, and can break with long headers, particularly request URLs with a long list of parameters, but you should rarely be in a position of having 8K of URL parameters. (It would suggest you were doing it wrong. :) )

It used to be the case that mod_jk provided basic load balancing between two Tomcats, which mod_proxy couldn't do, but with the new mod_proxy_balancer in Apache 2.2, this is no longer a reason to choose between them.

The position is slightly complicated by the existence of mod_proxy_ajp. Between them, mod_jk is the more mature of the two, but mod_proxy_ajp works in the same framework as the other mod_proxy modules. I have not yet used mod_proxy_ajp, but would consider doing so in the future, as mod_proxy_ajp is part of Apache and mod_jk involves additional configuration outside of Apache.

Given a choice, I would prefer an AJP based connector, mostly due to my second stated advantage, more than the performance aspect. Of course, if your application vendor doesn't support anything other than mod_proxy_http, that does tie your hands somewhat. You could use an alternative webserver like lighttpd, which does have an AJP module. Sadly, my preferred lightweight HTTP server, nginx, does not support AJP and is unlikely ever to do so, due to the design of its proxying system.
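For reference, a minimal front-end sketch using mod_proxy_http on a Debian/Ubuntu-style Apache 2.2 layout; the /myapp context, port numbers and file path are illustrative rather than taken from any particular setup, and the commands need to be run as root:
# Enable the HTTP proxy modules (mod_proxy_ajp or mod_jk would be the AJP alternatives)
a2enmod proxy
a2enmod proxy_http
# Forward /myapp to a Tomcat instance listening on localhost
cat > /etc/apache2/conf.d/tomcat-proxy.conf <<'EOF'
ProxyPass        /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp
# mod_proxy_ajp equivalent: ProxyPass /myapp ajp://localhost:8009/myapp
EOF
/etc/init.d/apache2 reload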
Read Comments (3)

10 October 2009

David Pashley: Blog Copyright

To make things explicitly clear, my blog is copyrighted and licensed as "All rights reserved". It even says that at the footer of every page. That means you may not redistribute any content without my permission. Yes, this means you, Ross Beazley. I may allow aggregation sites to redistribute my content, but the only sites where I have given explicit permission are Planet Debian and Planet BNM. I am unlikely to be upset if your aggregation site links back to the original entry and does not carry advertising, and will probably give you permission. If both these conditions are not met, you do not have permission and will not be granted permission.
Read Comments (7)

David Pashley: Name-Based HTTPS

This entry was originally posted in slightly different form to Server Fault

There are two methods of using virtual hosting with HTTP: name based and IP based. IP based is the simplest, as each virtual host is served from a different IP address configured on the server, but this requires an IP address for every host, and we're meant to be running out. The better solution is to use the Host: header introduced in HTTP 1.1, which allows the server to serve the right host to the client from a single IP address.

HTTPS throws a spanner in the works, as the server does not know which certificate to present to the client during the SSL connection set up, because the client can't send the Host: header until the connection is set up. As a result, if you want to host more than one HTTPS site, you need to use IP-based virtual hosting. However, you can run multiple SSL sites from a single IP address using a couple of methods, each with their own drawbacks.

The first method is to have an SSL certificate that covers both sites. The idea here is to have a single SSL certificate that covers all the domains you want to host from a single IP address. You can either do this using a wildcard certificate that covers both domains or use Subject Alternative Name. Wildcard certificates would be something like *.example.com, which would cover www.example.com, mail.example.com and support.example.com. There are a number of problems with wildcard certificates. Firstly, every hostname needs to have a common domain, e.g. with *.example.com you can have www.example.com, but not www.example.org. Secondly, you can't reliably have more than one level of subdomain, i.e. you can have www.example.com, but not www.eu.example.com. This might work in earlier versions of Firefox (<= 3.0), but it doesn't work in 3.5 or any version of Internet Explorer. Thirdly, wildcard certificates are significantly more expensive than normal certificates if you want them signed by a root CA.

Subject Alternative Name is a method of using an extension to X509 certificates that lists alternative hostnames that are valid for that certificate. It involves adding a "subjectAltName" field to the certificate that lists each additional host you want covered by the certificate. This should work in most browsers; certainly every modern mainstream browser. The downside of this method is that you have to list every domain on the server that will use SSL. You may not want this information publicly available. You probably don't want unrelated domains to be listed on the same certificate. It may also be difficult to add additional domains at a later date to your certificate.

The second approach is to use something called SNI (Server Name Indication), which is an extension to TLS that solves the chicken and egg problem of not knowing which certificate to send to the client because the client hasn't sent the Host: header yet. As part of the TLS negotiation, the client sends the required hostname as one of the options. The only downside to this is client and server support. The support in browsers tends to be better than in servers. Firefox has supported it since 2.0. Internet Explorer supports it from 7 onwards, but only on Vista or later. Chrome only supports it on Vista or later too. Opera 8 and Safari 3.2.1 have support. Other browsers may not support it. The biggest problem preventing adoption is the server support. Until very recently neither of the two main webservers supported it. Apache gained SNI support as of 2.2.12, which was released in July 2009. As of writing, IIS does not support SNI in any version. nginx, lighttpd and Cherokee all support SNI.

Going forward, SNI is the best method for solving the name-based virtual hosting of HTTPS, but support might be patchy for a year or two yet. If you must do HTTPS virtual hosting without problems in the near future, IP based virtual hosting is the only option.
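As a quick way to check SNI and certificate names from the command line, something like the following should work; the hostnames and the certificate file name are placeholders:
# Ask a server for the certificate belonging to a specific name, using SNI
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null
# List the names a certificate actually covers (the CN plus any subjectAltName entries)
openssl x509 -in certificate.pem -noout -text | grep -A1 'Subject Alternative Name'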
Read Comments (14)

1 September 2009

David Pashley: Setting up gitosis on Jaunty

While git is a completely distributed revision control system, sometimes the lack of a central canonical repository can be annoying. For example, you might want to publish your repository publicly, so other people can fork your code, or you might want all your developers to push into (or have code pulled into) a central "golden" tree that you then use for automated building and continuous integration. This entry should explain how to get this all working on Ubuntu 9.04 (Jaunty).

Gitosis is a very useful git repository manager, which adds support for things like ACLs in pre-commits, and gitweb and git-daemon management. While it's possible to set all these things up by hand, gitosis does everything for you. It is nicely configured via git; to make configuration changes, you push the config file changes into the gitosis repository on the server.

Gitosis is available in Jaunty, but unfortunately there is a bug in the version in Jaunty, which means it doesn't work out of the box. Fortunately there is a fixed version in jaunty-proposed that fixes the main problem. This does mean that you need to add the following to your sources.list:
deb http://gb.archive.ubuntu.com/ubuntu/ jaunty-proposed universe
Run apt-get update && apt-get install gitosis. You should install 0.2+20080825-2ubuntu0.1 or later. There is another small bug in the current version too, as a result of git removing the git-$command scripts out of /usr/bin. Edit /usr/share/python-support/gitosis/gitosis/templates/admin/hooks/post-update and replace
git-update-server-info
with
git update-server-info
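If you prefer to make that edit non-interactively, a one-liner along these lines should do it, using the path given above; check the file first, as the location may differ between gitosis versions:
sudo sed -i 's/^git-update-server-info$/git update-server-info/' \
    /usr/share/python-support/gitosis/gitosis/templates/admin/hooks/post-update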
With these changes in place, we can now set up our gitosis repository. On the server you are going to use to host your central repositories, run:
sudo -H -u gitosis gitosis-init < id_rsa.pub
The id_rsa.pub file is a public ssh key. As I mentioned, gitosis is managed over git, so you need an initial user to clone and then push changes back into the gitosis repo; make sure this key belongs to a keypair that is available to the user you're going to use to configure gitosis. Now, on your local computer, you can clone the gitosis-admin repo using:
git clone gitosis@gitserver.example.com:gitosis-admin.git
If you look inside the gitosis-admin directory, you should find a file called gitosis.conf and a directory called keydir. The directory is where you can add ssh public keys for your users. The file is the configuration file for gitosis.
[gitosis]
loglevel = INFO
[group gitosis-admin]
writable = gitosis-admin
members = david@david
[group developers]
members = david@david
writable = publicproject privateproject
[group contributors]
members = george@wilber
writable = publicproject
[repo publicproject]
daemon = yes
gitweb = yes
[repo privateproject]
daemon = no
gitweb = no
This sets up two repositories, called publicproject and privateproject. It enables the public project to be available via the git protocol and in gitweb, if you have that installed. We also create two groups, developers and contributors. David has access to both projects, but George only has access to change publicproject. David can also modify the gitosis configuration. The users are the names of ssh keys (the last part of the line in id_dsa.pub or id_rsa.pub).

Once you've changed this file, you can run git add gitosis.conf to add it to the commit, git commit -m "update gitosis configuration" to commit it to your local repository, and finally git push to push your commits back up into the central repository. Gitosis should now update the configuration on the server to match the config file.

One last thing to do is to enable git-daemon, so people can anonymously clone your projects. Create /etc/event.d/git-daemon with the following contents:
start on startup
stop on shutdown
exec /usr/bin/git daemon \
   --user=gitosis --group=gitosis \
   --user-path=public-git \
   --verbose \
   --syslog \
   --reuseaddr \
   --base-path=/srv/gitosis/repositories/ 
respawn
You can now start this using start git-daemon. So now you need to start using your repository. You can either start with an existing project or an empty directory. Start by running git init, then git add $file to add each of the files you want in your project, and finally git commit to commit them to your local repository. The final task is to add a remote repository and push your code into it:
git remote add origin gitosis@gitserver.example.com:privateproject.git
git push origin master:refs/heads/master
In future, you should be able to do git push to push your changes back into the central repository. You can also clone a project using git or ssh, providing you have access, using the following commands. The first is for read-write access over ssh and the second uses the git protocol for read-only access. The git protocol uses TCP port 9418, so make sure that's available externally, if you want the world to be able to clone your repos.
git clone gitosis@gitserver.example.com:publicproject.git
git clone git://gitserver.example.com/publicproject.git
Setting up GitWeb is left as an exercise for the reader (and for myself, because I have yet to attempt to set it up).
Read Comments (1)
