Search Results: "cmot"

28 June 2015

Russell Coker: RAID Pain

One of my clients has a NAS device. Last week they tried to do what should have been a routine RAID operation: they added a new larger disk as a hot-spare and told the RAID array to replace one of the active disks with the hot-spare. The aim was to replace the disks one at a time to grow the array. But one of the other disks had an error during the rebuild and things fell apart. I was called in after the NAS had been rebooted, when it was refusing to recognise the RAID.

The first thing that occurred to me is that maybe RAID-5 isn't a good choice for the RAID. While it's theoretically possible for a RAID rebuild to not fail in such a situation (the data that couldn't be read from the disk with an error could have been regenerated from the disk that was being replaced) it seems that the RAID implementation in question couldn't do it. As the NAS is running Linux I presume that at least older versions of Linux have the same problem. Of course if you have a RAID array of 7 disks running RAID-6 with a hot-spare then you only get the capacity of 4 disks. But RAID-6 with no hot-spare should be at least as reliable as RAID-5 with a hot-spare.

Whenever you recover from disk problems the first thing you want to do is make a read-only copy of the data. Then you can't make things worse. This is a problem when you are dealing with 7 disks; fortunately they were only 3TB disks and each had only 2TB in use. So I found some space on a ZFS pool and bought a few 6TB disks which I formatted as BTRFS filesystems. For this task I only wanted filesystems that support snapshots, so I could work on snapshots rather than on the original copy. I expect that at some future time I will be called in when an array of 6+ disks of the largest available size fails. This will be a more difficult problem to solve as I don't own any system that can handle so many disks.

I copied a few of the disks to a ZFS filesystem on a Dell PowerEdge T110 running kernel 3.2.68. Unfortunately that system seems to have a problem with USB: when copying from 4 disks at once each disk was reading about 10MB/s, and when copying from 3 disks each disk was reading about 13MB/s. It seems that the system has an aggregate USB bandwidth of 40MB/s, slightly greater than USB 2.0 speed. This made the process take longer than expected.

One of the disks had a read error, which was presumably the cause of the original RAID failure. dd has the option conv=noerror to make it continue after a read error. This initially seemed good, but the resulting file was smaller than the source partition. It seems that conv=noerror doesn't seek the output file to maintain input and output alignment. If I had a hard drive filled with plain ASCII that MIGHT even be useful, but for a filesystem image it's worse than useless. The only option was to repeatedly run dd with matching skip and seek options incrementing by 1K until it had passed the section with errors.

for n in /dev/loop[0-6] ; do echo $n ; mdadm --examine -v -v --scan $n | grep Events ; done

Once I had all the images I had to assemble them. The Linux Software RAID didn't like the array because not all the devices had the same event count. The way Linux Software RAID (and probably most RAID implementations) works is that each member of the array has an event counter that is incremented when disks are added or removed, and when data is written. If there is an error then after a reboot only disks with matching event counts will be used. The above command shows the Events count for all the disks.
Fortunately different event numbers aren't going to stop us. After assembling the array (which failed to run) I ran mdadm -R /dev/md1, which kicked some members out. I then added them back manually and forced the array to run. Unfortunately attempts to write to the array failed (presumably due to mismatched event counts).

Now my next problem is that I can make a 10TB degraded RAID-5 array which is read-only, but I can't mount the XFS filesystem because XFS wants to replay the journal. So my next step is to buy another 2*6TB disks to make a RAID-0 array to contain an image of that XFS filesystem.

Finally, backups are a really good thing.
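For readers who want to reproduce the dd step: here is a minimal sketch of the "matching skip and seek" approach described above. It is not taken from my actual session; the device and file names are made up, and a real run needs the usual care (double-check if= and of= before starting).

#!/bin/bash
# Copy a failing disk to an image file while keeping input and output aligned.
# On a read error dd stops; we then restart 1K past the point it reached.
src=/dev/sdg            # failing source disk (hypothetical name)
dst=/mnt/btrfs/sdg.img  # destination image on the scratch filesystem
n=0                     # current offset, counted in 1K blocks
while ! dd if="$src" of="$dst" bs=1024 skip=$n seek=$n conv=notrunc 2>/tmp/dd-err.log
do
    # dd reports how many full blocks it copied before the error
    copied=$(awk '/records in/ {print $1+0; exit}' /tmp/dd-err.log)
    n=$(( n + copied + 1 ))
    echo "read error, continuing at block $n" >&2
done

The resulting image files can then be attached as loop devices (losetup) before running the mdadm --examine loop shown above.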

28 April 2011

Adrian von Bidder: cmot died on April 17th 2011

Sadly, I have to make an end to this blog. Adrian - my husband - died on April 17th of a heart attack. Debian obituary

22 March 2011

Adrian von Bidder: On Bad Lenses

I, and maybe some others, wondered why people would (have to) send new lenses back so often, claiming that they got a bad copy. I couldn't imagine quality control being so bad that two out of three lenses are bad (at least that's the impression I got from reading the occasional photo discussion forum). It turns out that it is not only people needing to be pampered and having the impression that their second (identical) copy of the expensive new lens is better. Nor is it bad QA by the manufacturers alone. The fact is that today's high megapixel cameras are at the limit of the manufacturing precision that can be achieved at a reasonable price. Roger Cicala (of LensRentals.com, so he occasionally does have a lens or two at his place) wrote some very interesting articles about this topic, which I discovered just now as they were featured on canonrumors. (In case you're wondering: no, I don't see any issues with my new 70-200mm; it's just coincidence that I'm reading this right after getting a new lens. When I upgrade my camera body to something like a 5D MkII I will take a very close look at my Tokina 11-16 though, where I don't trust the AF 100%. On the 10MP Canon 40D it's a great lens, so I'm quite happy for now, but it seems strange that when I release and half-press the shutter again after having focused, the focus will sometimes move a tiny bit again. I haven't noticed this behaviour on the other lenses.)

10 March 2011

Adrian von Bidder: New Toy

Getting a birthday present is always nice, of course. This one made me itch to go outside and hunt for nice motifs, though ... Images are straight from the camera, just a bit cropped (look at the EXIF if you feel like it.) If you don't want to look at the EXIF: the beautiful new lens is the Canon EF 70-200mm 1:2.8 L IS II USM (like all manufacturers, Canon likes to collect funny abbreviations at the end of their product names, though I feel they're not as bad as some...), and I truly like what I'm seeing so far. That this lens has excellent sharpness goes without saying (especially on the 40D with a modest 10MPixel on a crop sensor); see the comparison with the old EF 100-300mm f/4.5-5.6 USM (the piccolo picture.) Beyond sharpness, there's the 1:2.8 aperture, the IS really keeps what it promises (and is absolutely silent), and the AF is quiet and ultra fast as well. I've only had it since today, so I took only a few shots; I'm confident that CA, flare resistance etc. will live up to the lens's reputation though. Now I'm really starting to think about a full frame camera with a higher MP count to take full advantage of my lenses (you may remember that I quite like decent equipment; the wideangle is said to be usable on full frame down to ca. 14mm although it's really designed for crop sensors...) The obvious question is whether I really can take better pictures just because I'm now lugging around a 6kg bag of stuff ... but honestly, having a telephoto lens without image stabilisation was a pain, so getting the old tele replaced was the obvious next step. Oh yes, and: see you in Banja Luka! :-)

3 March 2011

Adrian von Bidder: Btrfs data deduplication

Apparently, it's coming. I haven't tested these patches, though (and why yet another btrfs-foo command rather than something integrated with the btrfs command?), but together with my hacked-up dirvish (I added support to create btrfs snapshots instead of hardlinked trees; a rough sketch of the idea is below) this will save me a couple of gigabytes for all those backups of various servers (which I pretty much keep at the same releases.) The discussion on the btrfs mailing list got quite heated on the online vs. offline dedup issue (and also got very silly IMO, since nobody said online dedup shouldn't be supported; it's just not written yet...). What nobody mentioned was: how much memory is the hash index of an online dedup daemon going to consume, and how much CPU cache will it burn? This would be my main concern since my NAS only has 512MB of memory, and also needs to do NAT, VPN and DNS (yes, I'm a home user. I'd like to get the public IP off the NAS, but I'll have to buy some box to do this first...)
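The idea behind the dirvish hack is simple enough; here is a rough sketch of what a snapshot-based backup image could look like (not my actual patch, with made-up vault and host names, and without any of the error handling a real backup script needs):

#!/bin/bash
# Sketch only: keep each day's backup image as a btrfs snapshot instead of a
# hardlinked tree.  The vault directory must live on a btrfs filesystem.
vault=/srv/backup/laeggerli_home        # illustrative path
today=$(date +%Y%m%d)
prev=$(ls -1d "$vault"/2* 2>/dev/null | tail -n 1)   # most recent image, if any
if [ -n "$prev" ]; then
    # start from a snapshot of the previous image: unchanged files keep
    # sharing their extents with the older images
    btrfs subvolume snapshot "$prev" "$vault/$today"
else
    btrfs subvolume create "$vault/$today"
fi
rsync -aH --delete laeggerli:/home/ "$vault/$today/"

Offline deduplication would then work on top of this, sharing extents that are identical across the images of different (but similar) servers.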

2 March 2011

Adrian von Bidder: Open Letter to Gianugo Rabellino

Dear Mr. Rabellino, Actions speak louder than words. So stop giving interviews and do your homework. Release documentation without license and patent threats surrounding it (how about all the protocols related to Sharepoint and Exchange?) Support stuff like Ogg Theora in HTML 5 for IE and the OpenDocument format in MS Office. Admit that you just bought the OOXML standard and that you can't even implement it yourself. Hand over your patent portfolio to the Open Invention Network. Use your cash reserves to buy Nvidia and then release a proper driver for their GPUs (obviously, I'm talking about a driver for the X window system under appropriate licenses, not a Windows driver.) If you are ever challenged that Microsoft should do something like this, don't come up with lame excuses like "I would like to do this but the company won't let me." Either you can move your company, or admit defeat and resign instead of pretending to care about the relationship with the Free Software communities. Your company has a long, long history and needs to make good on it. Talking won't change any of this. Being forced to take some (very small) steps towards openness by court decisions won't change any of this. Probably even being a model citizen for 5 years won't change much, because people have long memories; but hey, you worked hard to get this reputation.

10 February 2011

Adrian von Bidder: Freedom

LWN Coverage of Eben Moglen's FOSDEM keynote. A very thoughtful article on freedom and how to get there. While we can, technically, build these free systems that Eben Moglen talks about, history has shown that we won't. People don't care until it's too late, until the police are on their doorstep with an arrest warrant (if the regime cares about such details at all.) So it's up to the small minority of people who actually care and are prepared to pay the price (in money, or in being isolated by not being on Facebook or whatever) to create systems that are both Free and also appealing to the uneducated masses. People will move to Free systems only when they're much cheaper and also offer more value as perceived by them: in terms of features, or coolness, or whatever. Bling. We're almost there in terms of software (Firefox managed to get noticeable market share, and people at least talk about OpenOffice / LibreOffice.) We're nowhere close in terms of networks and content, though. Wikipedia and OpenStreetMap are the rare exceptions, but there's not much in terms of social networking, searching, messaging, collaboration platforms etc.: all firmly in corporate hands. I'm not sure that Moglen's Freedom Box idea will succeed either: so far, I can't see the must-have element for people who don't care about these things that might change the game.

29 January 2011

Adrian von Bidder: Sci-Fi classics (and other stuff)

I again find myself spending time watching movies ... catching up on all those friends who chastise me for not having seen whatever movie we're talking about. For example, the first three Terminator movies. I'm always fond of old-style science fiction, so the first of these is quite cute. And then there's the liquid metal effect introduced in Judgment Day, which also is cool, and at the time certainly was at the bleeding edge of what was possible in CG. But by the third movie the concept itself is surely showing its age; although it's still well-made action, I didn't enjoy it as much. I haven't got hold of Salvation, nor of Bruno Mattei's unofficial Terminator 2. Speaking of old-fashioned sci-fi: Blade Runner has a very nice retro look in the buildings and furnishings. And, although this is more because it also was made in the early 80s, very retro computer consoles :-) Fun to watch, but I'm not quite happy with how the plot turns out in the end. But then, it's a Hollywood production, so should I be surprised? But there's not only sci-fi ... The Shawshank Redemption is a gripping story about an innocent man serving a 20-year sentence for murder. A banker enters the rough world of prison, and bets are taken on how long he'll last. But obviously it turns out that he's really quite tough ... I'm getting good at this ... "Jim Jarmusch", I thought, after about 20 seconds of the opening credits of Down By Law. While it is (again) about innocent people in prison, the focus here is solely on the interaction of the three people sharing a cell, excluding almost everything else. Done beautifully in black and white, and while it's not fast-paced in any way, the plot has a steady flow to it.

Adrian von Bidder: Execute stuff from dhcpd.conf

Since my NAS at home also acts as DHCP server, the obvious idea was to back up my PC and my laptop whenever they're switched on. Sadly, the documentation on how to do this from dhcpd.conf (instead of watching log files and reacting to log messages) is quite hidden, so here it is: I found the "execute" keyword (see the dhcp-eval manpage), and I found a write-up by Tim Gustafson which allowed me to pull it off (although, since I only have the two computers, I opted for an execute statement for each and hardcoded the client in the call to the backup script, so I don't know if the address parsing stuff he does is correct.) So the host statements for me look just like this:
  host laeggerli-wifi {
    hardware ethernet 00:22:69:aa:bb:cc;
    fixed-address 172.23.5.19;
    on commit {
      execute("/usr/local/sbin/dhcp-run-backup", "commit", "laeggerli");
    }
  }
And the script to start dirvish is similarly simple, except that it needs to fork to the background to make sure dhcpd is not blocked. The sleep 600 is based on the theory that if I'm on my way out in the morning and just need to check mail quickly, that will be less than 10 min, whereas if I'm still online after 10 min, there's a good chance that the backup will go through. Obviously, this could be improved...
#! /bin/bash
# called by dhcpd.conf
# arguments:
#  $1 -> "commit" if called from dhcpd.conf, fork into background
#     -> "run"    if called internally from first instance
#  $2 -> client host
if [ "$1" == "commit" ]; then
    # fork to background
    $0 run "$2" >> /var/log/dhcpbackup.log 2>&1 &
    exit 0
fi
if [ "$1" != "run" ]; then
    echo error
    exit 1
fi
# figure out which host:
host="$2"
if [ "$host" != "laeggerli" -a "$host" != "faehrimaa" ]; then
    echo "Unknown host: $2"
    exit 1
fi
# did backup already run today?
d=$(date +%Y%m%d)
if [ -d "/srv/backup/${host}_home/$d" ]; then
    exit 0
fi
# wait 10min before actually running the backup
# (if the computer still runs after 10min, it'll likely run for longer...)
sleep 600
# give up if the host is no longer reachable
ping -n -w 3 $host >/dev/null 2>&1 || exit 0
agentpid="/var/run/dirvish/ssh-agent-$host.pid"
[ -f "$agentpid" ] && \
    kill $(< "$agentpid") 2>/dev/null
mkdir /var/run/dirvish >/dev/null 2>&1
eval $(ssh-agent) >/dev/null 2>&1
echo $SSH_AGENT_PID > "$agentpid"
ssh-add /etc/dirvish/ssh-key >/dev/null 2>&1
/usr/sbin/dirvish --vault ${host}_home
/usr/sbin/dirvish --vault ${host}_root
/usr/sbin/dirvish-expire --quiet --vault ${host}_home
/usr/sbin/dirvish-expire --quiet --vault ${host}_root
kill $SSH_AGENT_PID
rm "$agentpid"
(I don't claim any rights on any of it, it's trivial enough.)

28 January 2011

Adrian von Bidder: How user support should go

My Blackberry is Not Working! If you haven't seen this yet, you really, really, really want to watch it. Safe for work and everything.

21 January 2011

Adrian von Bidder: Wasted developer resources?

I had some hope when I read Girish Ramakrishnan's blog post that starts with "To my knowledge, there are 3 Qt based JSON parsers out there". But I was really disappointed: instead of trying to get a consolidation going, or at least highlighting the need for three different parsers, he announces yet another implementation. And, to make matters worse, it is intended to be statically linked wherever it is used. A friendly wave to all the security conscious engineers who will now have to hunt down and kill security issues in various places wherever this JSON parser was used, in various different versions, possibly with local modifications. Girish, please do not take this as a personal attack, but what you're doing is just bad engineering practice. I don't claim qjsonparser is buggy. I haven't even looked at the code. But let's face it: bugs happen, and JSON is often passed over the net, so any parser is attack surface. So it should be as easy as possible to get fixed versions of the code out to the users. The way to do this is to let distribution builders be aware of where the code in question is used, and to get fixed versions of it distributed easily. In other words: such code should always be in a shared library. Take, for example, the history of xpdf/poppler: many people spent countless hours chasing copies of xpdf code in many applications before they finally had enough, forked xpdf (if I have the history correctly) and created the poppler library which is now widely used. Now security issues with the PDF parser require one security fix, not 10.

4 January 2011

Adrian von Bidder: Cyrus

Just say no. Filed under Debian since this is a Univention system, which is based on Debian (still etch, though.) And what specifically annoyed me today: I knew from other experiences that one shouldn't use Cyrus, but it can't be said often enough... And since I usually don't use it, it amazes me anew every time I have to babysit an installation. While I don't have a similarly big installation to compare it with, I've found Dovecot to be very nice. Admittedly it doesn't have as many features.

12 December 2010

Adrian von Bidder: Order Your Debian Swirl Umbrella Now

Ok, here we go: it seems that I'll receive the umbrella (big picture) in the first week of June, so I'm taking orders now. Please read this posting carefully if you want a Debian umbrella. Update IV 2010-12-12: Orders are now processed via debian.ch, so just go over there for your umbrella. I still have a very few umbrellas here in Basel, so if you want to pick one up locally you're still welcome. About CHF 5 to 6 per umbrella will go to debian.ch (where it is held as official Debian money under the authority of the DPL.) (Old version of this article removed. You're still welcome to send money to my bank account, but you won't get an umbrella in return.)

20 November 2010

Adrian von Bidder: Tool: incron

One in the "obvious, now that you mention it" category. The package description is good enough:
incron is an "inotify cron" system. It works like the regular cron but is driven by filesystem events instead of time events. This package provides two programs, a daemon called "incrond" (analogous to crond) and a table manipulator "incrontab" (like "crontab").
Where "filesystem events" means anything that is reported by inotify; see the inotify(7) manpage. I didn't test and/or use it, since I stumbled on it while searching for something completely different, but it sure sounds useful. The important feature not mentioned in the package description: can it limit how often an event triggers a script execution? Reading the manpage, it doesn't appear so, but there's IN_NO_LOOP to ... disable monitoring events until the current one is completely handled (until its child process exits). Which obviously opens up all kinds of race conditions. So I guess this tool needs to be used with care. Still, I guess a good candidate is monitoring /etc/aliases to run newaliases on change (see the sketch below).
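For illustration only (as said, I haven't actually tried incron): an incrontab entry follows the pattern path, event mask, command, so the /etc/aliases idea would presumably look roughly like this (the newaliases path may differ on your system):

# incrontab entry (untested sketch): rebuild the alias database whenever
# /etc/aliases has been written to and closed
/etc/aliases IN_CLOSE_WRITE /usr/bin/newaliases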

6 November 2010

Adrian von Bidder: Some Gigabytes

For some reason, 60G of free space on my notebook suddenly turned into 60G of movies. Brownian motion, perhaps. Unfortunately, in this particular case, I absolutely don't like to leave a movie (or book, for that matter) unfinished, so I watched Fear and Loathing in Las Vegas (directed by Terry Gilliam, starring Johnny Depp; so far, so good) from start to finish. After the first half hour I started wondering if the plot was going to start. The reptile party in the hotel lobby was a short entertaining interlude, but right up to the end, no plot manifests. I'm left wondering what this was all about. Quantum of Solace, on the other hand, is good entertainment. Probably not one of the best, but still worth watching if you like her majesty's secret agent. Going back to movies I've seen before is something I like to do as well, so having another evening in the company of the unforgettable Leon (the professional) was time well spent. I didn't give up on Terry Gilliam yet and met The Fisher King. While the story is completely different, it does feel a bit similar to 12 Monkeys in terms of set design and atmosphere. Speaking of Terry Gilliam: I'm curious how his current attempt at The Man Who Killed Don Quixote turns out; so far I've only seen him lose this movie in the documentary about the aborted last attempt, Lost in La Mancha. On the funny side, Dogma and especially The Wedding Crashers were worth watching. If you have to pick one, it's the latter. Dogma feels a bit forced in some places. Returning to the opening theme: seeing Inland Empire was another evening that left me confused. The main story is interesting enough, but I have to admit that I just couldn't follow where all the other sub-plots tie in, or if they even are supposed to. And I'm not only talking about the rabbit family (those felt a bit like the Middle of the Film in The Meaning of Life, and were quite in order.)

22 October 2010

Adrian von Bidder: SuperMicro BMC / IPMI: Can I Get In?

So I got a SuperMicro A+ Server 1012G-MTF today (seems to be a very nice unit for a decent price) and am preparing it to take over fortytwo.ch and related services. Now this thing has got IPMI / BMC with remote management and KVM (both serial console and full graphical console with virtual CD-ROM etc.); it works very nicely. Basically the only thing I miss is the ability to disable services I don't need and/or to restrict access to certain IP addresses. (No, I don't have the BMC on a public IP, but still...) So the question is: has anybody worked out how to hack the IPMI firmware for the H8SGL-F mainboard, or what kind of file system it uses? Or how one could drop from the BMC command line to a /bin/sh prompt on the running system? A blog entry at Serverfault suggests it's been done but doesn't say how. (Running strings on the firmware binary shows the string "Photoshop ICC profile" near the end. I'm not sure if I want to know the story ... ;-)

16 September 2010

Adrian von Bidder: Remote Controlled

The Apple universe is happy to live with whatever Steve Jobs allows in. Numerous examples going back at least 20 years are available; most recently Apple's legal action against a manufacturer offering compatible batteries and now an external harddisk for iPods. Microsoft is obviously the traditional enemy for many (personally, I think there's not much difference between MS, Apple and Google anymore.) So it's no surprise that MS Exchange plays a role in the story about the remote wipe function of many smartphones. Just being connected to an MS Exchange server allows the administrator of that server to remotely factory-reset the smartphone. I'm not sure if there's a complete list of vulnerable smartphones, but at least iOS, webOS and Android based phones are affected. Personally, I'd say this is a serious security issue. But of course, according to the manufacturers, it is a function for the convenience of network administrators so they can wipe stolen phones remotely. Update: Nico: if I'm somebody who is really after your data, I've got your smartphone and I'm clever enough not to connect it to the net before I get the data out. So you're only catching the "oh look what I accidentally found" case, which I think is really just giving you a false sense of security. Anonymous: Google is the company with the "Do Not Be Evil" company motto and the very good press and marketing team touting everywhere how much they do for the good of everybody. Except when it cuts into their earnings. Like really cooperating with the Linux kernel folks. Like living up to what they announced they'd do in China. It's also the company covertly collecting WLAN data, and only admitting to exactly as little as they can get away with when it comes into the open. And, in the end, simple greed will do the rest: it's a powerful big company with plenty of very well paid career positions, and in my experience such a company tends to attract people who will use this power in any way they can to increase their income. At this level of business, a company motto like Google's is just a few meaningless words. The real company motto is Earn Money. In Whatever Way. Other examples are told by anybody who has needed to interact with their so-called customer support, either because they got their free account locked or because they had a specific request as a paying customer placing ads on Google (and I'm talking about a company spending at least several CHF 10k per year on ads, not just a small nobody.)

10 September 2010

Adrian von Bidder: Some Fun

Watching the Star Wars trilogies (both old and new) has finally closed one of the major (in at least some of my colleagues' view) holes in my cinematographic education. (I had seen some of the new trilogy when it came out, but not all.) Oh well ... while it's quite enjoyable to watch, it's probably at least as important to have seen them just to spot the references all over the place. Repo Man was where I found myself literally laughing out loud all by myself. And not only because of the vintage special effects (1984). The Straight Story (Lynch) is just beautiful. Alvin Straight is your proverbial old uncle who may be a bit funny in the head but you just have to like him... If you like mind control stories, The Manchurian Candidate is for you. Now that would be a way for Debian to achieve world domination. Worth thinking about? Speaking of American politics, this reminds me of All the President's Men, which I saw (and liked) a long time ago. I don't know enough about history to tell how close it is to reality, but it's at least a well made political thriller. "Based On A True Story" is the term usually used, isn't it?

8 July 2010

Adrian von Bidder: What's the Purpose Here?

Hello,
Just to let you know that we sent to you some possible interesting informations, but it seems it has been discarded in undesired mails.
Regards. That was the full content of an email I got today, with my email address in both To: and From: headers. It's certainly undesired email, but I just can't see why somebody would send it out. Verifying an address list to see how many bounces? Just being silly?
