java.lang.NullPointerException
    at org.apache.xerces.dom.ParentNode.nodeListItem(Unknown Source)
    at org.apache.xerces.dom.ParentNode.item(Unknown Source)
    at com.example.xml.Element.getChildren(Element.java:377)
    at com.example.xml.Element.newChildElementHelper(Element.java:229)
    at com.example.xml.Element.newChildElement(Element.java:180)

You may also find the NullPointerException in ParentNode.nodeListGetLength() and other locations in ParentNode. Debugging this was not helped by the fact that xercesImpl.jar is stripped of line numbers, so I couldn't find the exact issue. After some searching, it appeared that the issue comes down to the fact that Xerces is not thread-safe. ParentNode caches iterations through the NodeList of children to speed up performance, and stores the cache in the Node's Document object. In multi-threaded applications, this can lead to race conditions and NullPointerExceptions. And because it's a threading issue, the problem is intermittent and hard to track down. The solution is to synchronise your code on the DOM, and this means the Document object, everywhere you access the nodes. I'm not certain exactly which methods need to be protected, but I believe it needs to be at least any method that iterates a NodeList. I would start by protecting every access and testing performance, removing some locks if needed.
/**
 * Returns the concatenation of all the text in all child nodes
 * of the current element.
 */
public String getText() {
    StringBuilder result = new StringBuilder();
    synchronized (m_element.getOwnerDocument()) {
        NodeList nl = m_element.getChildNodes();
        for (int i = 0; i < nl.getLength(); i++) {
            Node n = nl.item(i);
            if (n != null && n.getNodeType() == org.w3c.dom.Node.TEXT_NODE) {
                result.append(((CharacterData) n).getData());
            }
        }
    }
    return result.toString();
}

Notice the synchronized (m_element.getOwnerDocument()) block around the section that deals with the DOM. The NPE would normally be thrown on the nl.getLength() or nl.item() calls. Since putting in the synchronized blocks, we've gone from having 78 NPEs between 2:30am and 3:00am to having zero in the last 12 hours, so I think it's safe to say this has drastically reduced the problem. The post NullPointerExceptions in Xerces-J appeared first on DavidPashley.com.
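Another way to keep the lock small is to take a snapshot of the children while holding the Document lock, then iterate the copy outside it. This is a minimal, self-contained sketch of that idea (the class name DomSnapshot and the sample XML are illustrative, not from the original code):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

public class DomSnapshot {
    // Copy the child nodes into a plain List while holding the Document
    // lock. Callers can then iterate the copy without touching the
    // NodeList cache that Xerces shares across threads.
    static List<Node> childrenSnapshot(Element e) {
        synchronized (e.getOwnerDocument()) {
            NodeList nl = e.getChildNodes();
            List<Node> copy = new ArrayList<>(nl.getLength());
            for (int i = 0; i < nl.getLength(); i++) {
                copy.add(nl.item(i));
            }
            return copy;
        }
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        "<root><a/>text<b/></root>".getBytes("UTF-8")));
        // Root has three children: <a/>, a text node, and <b/>.
        System.out.println(childrenSnapshot(doc.getDocumentElement()).size());
    }
}
```

The trade-off is a small allocation per call in exchange for holding the lock only for the duration of the copy rather than the whole traversal.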
"You must remember that the planters are short-sighted. They're all desperate to get out of Jamaica; they wake up every day expecting to find themselves, or their children, in the grip of some tropical fever. To import female Neegers would cost nearly as much as to import males, but the females cannot produce as much sugar, particularly when they are breeding." Daniel had finally recognized this voice as belonging to Sir Richard Apthorp, the second A in the CABAL.

It was a bit embarrassing to realise, reading this, where the word cabal comes from. I'm posting this as a heads-up for everyone who knows there is no cabal in Debian, but doesn't know the origin of the word. Stephenson changed the names of the historic Cabal, a group of high councillors of King Charles II of England, Scotland and Ireland, around 1668. In the novel they are: John Comstock (Earl of Epsom), Louis Anglesey (Duke of Gunfleet), Knott Bolstrood (Count Penistone), Sir Richard Apthorp and Hugh Lewis (Duke of Tweed). In the real world they were:
Thomas Clifford, 1st Baron Clifford of Chudleigh (1630-1673)
Henry Bennet, 1st Earl of Arlington (1618-1685)
George Villiers, 2nd Duke of Buckingham (1628-1687)
Anthony Ashley Cooper, 1st Baron Ashley of Wimborne St Giles (1621-1683)
John Maitland, 1st Duke of Lauderdale (1616-1682)
mysql> select "a" = "A";
+-----------+
| "a" = "A" |
+-----------+
|         1 |
+-----------+
1 row in set (0.00 sec)

WTF? (via Nuxeo)
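For what it's worth, this is documented behaviour rather than a bug: MySQL's default collations end in _ci, for case-insensitive, so string comparisons ignore case. A sketch of forcing a case-sensitive comparison, assuming a default *_ci collation is in effect:

```sql
-- Case-insensitive under the default *_ci collation:
SELECT 'a' = 'A';

-- Force a byte-wise, case-sensitive comparison instead:
SELECT 'a' = BINARY 'A';
```

The first query returns 1 and the second returns 0; you can also declare a column with a binary or *_cs collation to get this behaviour by default.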
From: David Pashley <david@davidpashley.com>
To: David Lepper
Cc:
Bcc:
Subject: Digital Economy Bill
Reply-To:

Dear Mr Lepper,

I'm writing to you to express my concern at the Digital Economy Bill which is currently working its way through the House of Commons. I believe that the bill as it stands will have a negative effect on the digital economy that the UK, and in particular Brighton, have worked so hard to foster.

Sections 4-17 deal with disconnecting people reported as infringing copyright. As they stand, these sections raise the possibility that my internet connection could be disconnected as a result of the actions of my flatmate. My freelance web development business is inherently linked to my access to the Internet. I currently allow my landlady to share my internet access with her holiday flat above me. I will have to stop this arrangement for fear of a tourist's actions jeopardising my business. These sections will also result in many pubs and cafes, much favoured by Brighton's freelancers, removing their free wifi. I have often used my local pub's wifi when I needed a change of scenery. I know a great many freelancers use Cafe Delice in the North Laine as a place to meet other freelancers and discuss projects while drinking coffee and working.

Section 18 deals with ISPs being required to prevent access to sites hosting copyrighted material. The ISPs can insist on a court injunction forcing them to prevent access. Unfortunately, a great many ISPs will not want to deal with the costs of any court proceedings and will just block the site in question. A similar law in the United States, the Digital Millennium Copyright Act (DMCA), has been abused time and time again by spurious copyright claims to silence critics or embarrassments. A recent case is Microsoft shutting down the entire Cryptome.org website because they were embarrassed by a document it hosted.
There are many more examples of abuse at http://www.chillingeffects.org/. A concern is that there is no requirement for the accuser to prove infringement has occurred, nor is there a valid defence available to a user who has done everything possible to prevent infringement.

There are several ways to reduce copyright infringement of music and movies without introducing new legislation: the promotion of legal services like iTunes and Spotify, and easier access to legal media, such as Digital Rights Management (DRM) free music. Many of the record labels and movie studios are failing to promote competing legal services which many people would use if they were aware of them. A fairer alternative to disconnection is a fine through the courts.

You can find further information on the effects of the Digital Economy Bill at http://www.openrightsgroup.org/ and http://news.bbc.co.uk/1/hi/technology/8544935.stm. The bill has currently passed the House of Lords and its first reading in the Commons. There is a danger that, without MPs demanding to scrutinise this bill, this damaging piece of legislation will be rushed through Parliament before the general election. I ask you to demand your right to debate this bill and to amend it to remove sections 4-18. I would also appreciate a response to this email. If you would like to discuss the issues I've raised further, I can be contacted on 01273 xxxxxx or 07966 xxx xxx or via email at this address.

Thank you for your time.

--
David Pashley
david@davidpashley.com
munin::plugin { "apache_accesses": }
munin::plugin { "apache_processes": }
munin::plugin { "apache_volume": }

This should make sure that these three plugins are installed and that munin-node is restarted to pick them up. The define was implemented like this:
define munin::plugin (
  $enable = true,
  $plugin_name = false,
) {
  include munin::node
  file { "/etc/munin/plugins/$name":
    ensure => $enable ? {
      true => $plugin_name ? {
        false   => "/usr/share/munin/plugins/$name",
        default => "/usr/share/munin/plugins/$plugin_name",
      },
      default => absent,
    },
    links   => manage,
    require => Package["munin-node"],
    notify  => Service["munin-node"],
  }
}

(Note: this is a slight simplification of the define.) As you can see, the define includes munin::node, as it needs the definition of the munin-node service and package. As a result of this, installing Apache drags in munin-node on your server too. It would be much nicer if the apache class only installed the munin plugins if you also install munin on the server. It turns out that it is possible, using virtual resources. Virtual resources allow you to define resources in one place, but not make them happen unless you realise them. Using this, we can make the file resource in munin::plugin virtual and realise it in our munin::node class. Our new munin::plugin looks like:
define munin::plugin (
  $enable = true,
  $plugin_name = false,
) {
  # removed "include munin::node"
  # Added @ in front of the resource to declare it as virtual
  @file { "/etc/munin/plugins/$name":
    ensure => $enable ? {
      true => $plugin_name ? {
        false   => "/usr/share/munin/plugins/$name",
        default => "/usr/share/munin/plugins/$plugin_name",
      },
      default => absent,
    },
    links   => manage,
    require => Package["munin-node"],
    notify  => Service["munin-node"],
    tag     => munin-plugin,
  }
}

We add the following line to our munin::node class:
File <| tag == munin-plugin |>

The odd syntax in the munin::node class realises all the virtual resources that match the filter; in this case, any that are tagged munin-plugin. We've had to define this tag ourselves, as the auto-generated tags don't seem to work. You'll also notice that we've removed the munin::node include from the munin::plugin define, which means that we no longer install munin-node just by using the plugin define. I've used a similar technique for logcheck, so additional rules are not installed unless I've installed logcheck. I'm sure there are several other places where I can use it to reduce such tight coupling between classes.
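Stripped of the munin specifics, the pattern is just a virtual declaration in one class and a tag-based collector in another. A minimal sketch (the class names, path and tag here are made up for illustration):

```puppet
class producer {
  # Declared virtually with @: nothing is managed unless
  # some other class realises this resource.
  @file { "/tmp/demo":
    ensure => present,
    tag    => "demo-tag",
  }
}

class consumer {
  # Realise every virtual File resource carrying our tag.
  # Only nodes that include this class actually get the file.
  File <| tag == "demo-tag" |>
}
```

Including only producer manages nothing; including both classes causes the file to be managed, which is exactly the decoupling we wanted between the apache and munin classes.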
mvn org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4:generate \
    -DarchetypeGroupId=org.grails \
    -DarchetypeArtifactId=grails-maven-archetype \
    -DarchetypeVersion=1.2-SNAPSHOT \
    -DgroupId=uk.org.catnip \
    -DartifactId=armstrong \
    -DarchetypeRepository=http://snapshots.maven.codehaus.org/maven2
Me: "Hi, I'd like to use my new printer please." Computer: "Do you mean this HP Laserjet CP1515n on the network?" Me: "Erm, yes I do." Computer: "Good. You've got a test page printing as we speak. Anything else I can help you with?"Sadly I don't have any alternative modern operating systems to compare it to, but having done similar things with linux over the last 12 years, I'm impressed with how far we've come. Thank you to everyone who made this possible.
# tar -cf archive.tar list of files to include

This will create (-c) an archive into a file (-f) called archive.tar. (.tar is the conventional extension for tar archives.) You should now have a single file that contains five files ("list", "of", "files", "to" and "include"). If you give tar a directory, it will recurse into that directory and store everything inside it.
# tar -xf archive.tar
# tar -xf archive.tar list of files

This will extract (-x) the previously created archive.tar. You can extract just the files you want from the archive by listing them on the end of the command line. In our example, the second line would extract "list", "of" and "files", but not "to" and "include". You can also use
# tar -tf archive.tar

to get a list of the contents before you extract them. So now you can combine these two tools to replicate the functionality of zip:
# tar -cf archive.tar directory
# gzip archive.tar

You'll now have an archive.tar.gz file. You can extract it using:
# gunzip archive.tar.gz
# tar -xf archive.tar

We can use pipes to save us having an intermediate archive.tar:
# tar -cf - directory | gzip > archive.tar.gz
# gunzip < archive.tar.gz | tar -xf -

You can use - with the -f option to specify stdin or stdout (tar knows which one based on context). We can do slightly better because, in a slight apparent breaking of the "do one job well" idea, tar has the ability to compress its output and decompress its input by using the -z argument. (I say apparent, because it still uses the gzip and gunzip commands behind the scenes.)
# tar -czf archive.tar.gz directory
# tar -xzf archive.tar.gz

To use bzip2 instead of gzip, use bzip2, bunzip2 and -j instead of gzip, gunzip and -z respectively (tar -cjf archive.tar.bz2). Some versions of tar can detect a bzip2 archive when you use -z and do the right thing, but it is probably worth getting into the habit of being explicit. More info:
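As a quick sanity check that the piped form and the built-in -z form are equivalent (the directory and file names here are made up for the example), both should produce archives listing the same members:

```shell
# Create a scratch directory with one file in it.
mkdir -p demo
echo hello > demo/file.txt

# Compress via a pipe, and via tar's built-in -z.
tar -cf - demo | gzip > piped.tar.gz
tar -czf builtin.tar.gz demo

# Both archives should list the same members.
tar -tzf piped.tar.gz
tar -tzf builtin.tar.gz
```

The byte-for-byte contents of the two .gz files may differ (gzip headers include metadata), but the archived members are identical.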
deb http://gb.archive.ubuntu.com/ubuntu/ jaunty-proposed universe

Run apt-get update && apt-get install gitosis. You should install 0.2+20080825-2ubuntu0.1 or later. There is another small bug in the current version too, as a result of git moving the git-$command scripts out of /usr/bin. Edit /usr/share/python-support/gitosis/gitosis/templates/admin/hooks/post-update and replace
git-update-server-info

with
git update-server-infoWith these changes in place, we can now set up our gitosis repository. On the server you are going to use to host your central repositories, run:
sudo -H -u gitosis gitosis-init < id_rsa.pubThe id_rsa.pub file is a public ssh key. As I mentioned, gitosis is managed over git, so you need an initial user to clone and then push changes back into the gitosis repo, so make sure this key belongs to a keypair you have available to the remote user you're going to configure gitosis. Now, on your local computer, you can clone the gitosis-admin repo using:
git clone gitosis@gitserver.example.com:gitosis-admin.gitIf you look inside the gitosis-admin directory, you should find a file called gitosis.conf and a directory called keydir. The directory is where you can add ssh public keys for your users. The file is the configuration file for gitosis.
[gitosis]
loglevel = INFO

[group gitosis-admin]
writable = gitosis-admin
members = david@david

[group developers]
members = david@david
writable = publicproject privateproject

[group contributors]
members = george@wilber
writable = publicproject

[repo publicproject]
daemon = yes
gitweb = yes

[repo privateproject]
daemon = no
gitweb = no

This sets up two repositories, called publicproject and privateproject. It makes the public project available via the git protocol and in gitweb, if you have that installed. We also create two groups, developers and contributors. David has access to both projects, but George only has access to change publicproject. David can also modify the gitosis configuration. The users are the names of ssh keys (the last part of the line in id_dsa.pub or id_rsa.pub). Once you've changed this file, you can run git add gitosis.conf to add it to the commit, git commit -m "update gitosis configuration" to commit it to your local repository, and finally git push to push your commits back up into the central repository. Gitosis should then update the configuration on the server to match the config file. One last thing to do is to enable git-daemon, so people can anonymously clone your projects. Create /etc/event.d/git-daemon with the following contents:
start on startup
stop on shutdown

exec /usr/bin/git daemon \
    --user=gitosis --group=gitosis \
    --user-path=public-git \
    --verbose \
    --syslog \
    --reuseaddr \
    --base-path=/srv/gitosis/repositories/
respawn

You can now start this using start git-daemon. So now you need to start using your repository. You can either start with an existing project or an empty directory. Start by running git init, then git add $file to add each of the files you want in your project, and finally git commit to commit them to your local repository. The final task is to add a remote repository and push your code into it.
git remote add origin gitosis@gitserver.example.com:privateproject.git
git push origin master:refs/heads/master

In future, you should be able to do git push to push your changes back into the central repository. You can also clone a project using git or ssh, providing you have access, using the following commands. The first is for read-write access over ssh and the second uses the git protocol for read-only access. The git protocol uses TCP port 9418, so make sure that's available externally if you want the world to be able to clone your repos.
git clone gitosis@gitserver.example.com:publicproject.git
git clone git://gitserver.example.com/publicproject.git

Setting up GitWeb is left as an exercise for the reader (and for myself, since I have yet to attempt it).