Search Results: "dmz"

1 November 2020

Paul Wise: FLOSS Activities October 2020

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Spam: reported 2 Debian bug reports and 147 Debian mailing list posts
  • Patches: merged libicns patches
  • Debian packages: sponsored iotop-c
  • Debian wiki: RecentChanges for the month
  • Debian screenshots:

Administration
  • Debian: get us removed from an RBL
  • Debian wiki: reset email addresses, approve accounts

Communication

Sponsors The pytest-rerunfailures/pyemd/morfessor work was sponsored by my employer. All other work was done on a volunteer basis.

28 March 2020

François Marier: How to get a direct WebRTC connection between two computers

WebRTC is a standard real-time communication protocol built directly into modern web browsers. It enables the creation of video conferencing services which do not require participants to download additional software. Many services make use of it and it almost always works out of the box. The reason it just works is that it uses a protocol called ICE to establish a connection regardless of the network environment. What that means, however, is that in some cases, your video/audio connection will need to be relayed (using end-to-end encryption) to the other person via a third-party TURN server. In addition to adding extra network latency to your call, that relay server might become overloaded at some point and drop or delay packets coming through. Here's how to tell whether or not your WebRTC calls are being relayed, and how to ensure you get a direct connection to the other host.

Testing basic WebRTC functionality
Before you place a real call, I suggest using the official test page which will test your camera, microphone and network connectivity. Note that this test page makes use of a Google TURN server which is locked to particular HTTP referrers and so you'll need to disable privacy features that might interfere with this:
  • Brave: Disable Shields entirely for that page (Simple view) or allow all cookies for that page (Advanced view).
  • Firefox: Ensure that network.http.referer.spoofSource is set to false in about:config, which it is by default.
  • uMatrix: The "Spoof Referer header" option needs to be turned off for that site.

Checking the type of peer connection you have
Once you know that WebRTC is working in your browser, it's time to establish a connection and look at the network configuration that the two peers agreed on. My favorite service at the moment is Whereby (formerly Appear.in), so I'm going to use that to connect from two different computers:
  • canada is a laptop behind a regular home router without any port forwarding.
  • siberia is a desktop computer in a remote location that is also behind a home router, but in this case its internal IP address (192.168.1.2) is set as the DMZ host.

Chromium
For all Chromium-based browsers, such as Brave, Chrome, Edge, Opera and Vivaldi, the debugging page you'll need to open is called chrome://webrtc-internals. Look for RTCIceCandidatePair lines and expand them one at a time until you find the one which says:
  • state: succeeded (or state: in-progress)
  • nominated: true
  • writable: true
Then from the name of that pair (N6cxxnrr_OEpeash in the above example) find the two matching RTCIceCandidate lines (one local-candidate and one remote-candidate) and expand them. In the case of a direct connection, I saw the following on the remote-candidate:
  • ip shows the external IP address of siberia
  • port shows a random number between 1024 and 65535
  • candidateType: srflx
and the following on local-candidate:
  • ip shows the external IP address of canada
  • port shows a random number between 1024 and 65535
  • candidateType: prflx
These candidate types indicate that a STUN server was used to determine the public-facing IP address and port for each computer, but the actual connection between the peers is direct. On the other hand, for a relayed/proxied connection, I saw the following on the remote-candidate side:
  • ip shows an IP address belonging to the TURN server
  • candidateType: relay
and the same information as before on the local-candidate.

Firefox
If you are using Firefox, the debugging page you want to look at is about:webrtc. Expand the top entry under "Session Statistics" and look for the line (should be the first one) which says the following in green:
  • ICE State: succeeded
  • Nominated: true
  • Selected: true
then look in the "Local Candidate" and "Remote Candidate" sections to find the candidate type in brackets.

Firewall ports to open to avoid using a relay
In order to get a direct connection to the other WebRTC peer, one of the two computers (in my case, siberia) needs to open all inbound UDP ports since there doesn't appear to be a way to restrict Chromium or Firefox to a smaller port range for incoming WebRTC connections. This isn't great and so I decided to tighten that up in two ways by:
  • restricting incoming UDP traffic to the IP range of siberia's ISP, and
  • explicitly denying incoming traffic to the UDP ports I know are open on siberia.
To get the IP range, start with the external IP address of the machine (I'll use the IP address of my blog in this example: 66.228.46.55) and pass it to the whois command:
$ whois 66.228.46.55 | grep CIDR
CIDR:           66.228.32.0/19
To get the list of open UDP ports on siberia, I sshed into it and ran nmap:
$ sudo nmap -sU localhost
Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-28 15:55 PDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000015s latency).
Not shown: 994 closed ports
PORT      STATE         SERVICE
631/udp   open filtered ipp
5060/udp  open filtered sip
5353/udp  open          zeroconf
Nmap done: 1 IP address (1 host up) scanned in 190.25 seconds
I ended up with the following in my /etc/network/iptables.up.rules (ports below 1024 are denied by the default rule and don't need to be included here):
# Deny all known-open high UDP ports before enabling WebRTC for canada
-A INPUT -p udp --dport 5060 -j DROP
-A INPUT -p udp --dport 5353 -j DROP
-A INPUT -s 66.228.32.0/19 -p udp --dport 1024:65535 -j ACCEPT
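To load and verify rules like these without a reboot, something along the following lines should work, assuming the file is a complete iptables-restore ruleset (i.e. the -A lines above wrapped in the usual *filter ... COMMIT block):
$ sudo iptables-restore < /etc/network/iptables.up.rules
$ sudo iptables -L INPUT -n -v | grep udp
The second command is just a sanity check: the packet counters on the DROP and ACCEPT rules make it easy to confirm which rule incoming WebRTC traffic is actually hitting.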

12 April 2017

Daniel Pocock: What is the risk of using proprietary software for people who prefer not to?

Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used.

A question of leadership
Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype. Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did? When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future. Research has shown that when communicating with people, what they see and how you communicate is ninety-three percent of the impression you make. What you actually say to them is only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application or a product or service that is obviously proprietary, like an iPhone or Facebook, will have far more impact than the words you say. It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it.

The many faces of proprietary software
One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer.

There is no need to give up
Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not. I don't think Jonas' blog intended to sanction this level of complacency. Every time you come across a piece of software, it is worth considering whether a free alternative exists and whether the software is really needed at all.

An orderly migration to free software
In our professional context, most software developers come across proprietary software every day in the networks operated by our employers and their clients. Sometimes we have the opportunity to influence the future of these systems.
There are many cases where telling the client to go cold-turkey on their proprietary software would simply lead to the client choosing to get advice from somebody else. The free software engineer who looks at the situation strategically may find that it is possible to continue using the proprietary software as part of a staged migration, gradually helping the user to reduce their exposure over a period of months or even a few years. This may be one of the scenarios where Jonas is sanctioning the use of proprietary software. On a technical level, it may be possible to show the client that we are concerned about the dangers but that we also want to ensure the continuity of their business. We may propose a solution that involves sandboxing the proprietary software in a virtual machine or a DMZ to prevent it from compromising other systems or "calling home" to the vendor. As well as technical concerns about a sudden migration, promoters of free software frequently encounter political issues as well. For example, the IT manager in a company may be five years from retirement and not concerned about his employer's long-term ability to extricate itself from a web of Microsoft licenses after he or she has the freedom to go fishing every day. The free software professional may need to invest significant time winning the trust of senior management before he is able to work around a belligerent IT manager like this.

No deal is better than a bad deal
People in the UK have probably encountered the expression "No deal is better than a bad deal" many times already in the last few weeks. Please excuse me for borrowing it. If there is no free software alternative to a particular piece of proprietary software, maybe it is better to simply do without it. Facebook is a great example of this principle: life without social media is great and rather than trying to find or create a free alternative, why not just do something in the real world, like riding motorcycles, reading books or getting a cat or dog?

Burning bridges behind you
For those who are keen to be the visionaries and leaders in a world where free software is the dominant paradigm, would you really feel satisfied if you got there on the back of proprietary solutions? Or are you concerned that taking such shortcuts is only going to put that vision further out of reach? Each time you solve a problem with free software, whether it is small or large, in your personal life or in your business, the process you went through strengthens you to solve bigger problems the same way. Each time you solve a problem using a proprietary solution, not only do you miss out on that process of discovery but you also risk conditioning yourself to be dependent in future. For those who hope to build a successful startup company or be part of one, how would you feel if you reach your goal and then the rug is pulled out from underneath you when a proprietary software vendor or cloud service you depend on changes the rules? Personally, in my own life, I prefer to avoid and weed out proprietary solutions wherever I can and force myself to either make free solutions work or do without them. Using proprietary software and services is living your life like a rat in a maze, where the oligarchs in Silicon Valley can move the walls around as they see fit.

13 December 2015

Robert Edmonds: Works with Debian: Intel SSD 750, AMD FirePro W4100, Dell P2715Q

I recently installed new hardware in my primary computer running Debian unstable. The disk used for the / and /home filesystems was replaced with an Intel SSD 750 series NVM Express card. The graphics card was replaced by an AMD FirePro W4100 card, and two Dell P2715Q monitors were installed.

Intel SSD 750 series NVM Express card
This is an 800 GB SSD on a PCI-Express x4 card (model number SSDPEDMW800G4X1) using the relatively new NVM Express interface, which appears as a /dev/nvme* device. The stretch alpha 4 Debian installer was able to detect and install onto this device, but grub-installer 1.127 on the installer media was unable to install the boot loader. This was due to a bug recently fixed in 1.128:
grub-installer (1.128) unstable; urgency=high
  * Fix buggy /dev/nvme matching in the case statement to determine
    disc_offered_devfs (Closes: #799119). Thanks, Mario Limonciello!
 -- Cyril Brulebois <kibi@debian.org>  Thu, 03 Dec 2015 00:26:42 +0100
I was able to download and install the updated .udeb by hand in the installer environment and complete the installation. This card was installed on a Supermicro X10SAE motherboard, and the UEFI BIOS was able to boot Debian directly from the NVMe card, although I updated to the latest available BIOS firmware prior to the installation. It appears in lspci like this:
02:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)
(prog-if 02 [NVM Express])
    Subsystem: Intel Corporation SSD 750 Series [Add-in Card]
    Flags: bus master, fast devsel, latency 0
    Memory at f7d10000 (64-bit, non-prefetchable) [size=16K]
    Expansion ROM at f7d00000 [disabled] [size=64K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
    Capabilities: [60] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [150] Virtual Channel
    Capabilities: [180] Power Budgeting <?>
    Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
    Capabilities: [270] Device Serial Number 55-cd-2e-41-4c-90-a8-97
    Capabilities: [2a0] #19
    Kernel driver in use: nvme
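For reference, installing the fixed grub-installer .udeb by hand from the installer's shell (as mentioned above) looks roughly like the following; the URL and filename are illustrative and the udpkg invocation is from memory, so treat this as a sketch rather than a recipe:
~ # cd /tmp
~ # wget http://ftp.debian.org/debian/pool/main/g/grub-installer/grub-installer_1.128_amd64.udeb
~ # udpkg -i grub-installer_1.128_amd64.udeb
Re-running the boot loader installation step from the installer menu should then pick up the new version.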
The card itself appears very large in marketing photos, but this is a visual trick: the photographs are taken with the low-profile PCI bracket installed, rather than the standard height PCI bracket which it ships installed with. smartmontools fails to read SMART data from the drive, although it is still able to retrieve basic device information, including the temperature:
root@chase 0 :~# smartctl -d scsi -a /dev/nvme0n1
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.3.0-trunk-amd64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor:               NVMe
Product:              INTEL SSDPEDMW80
Revision:             0135
Compliance:           SPC-4
User Capacity:        800,166,076,416 bytes [800 GB]
Logical block size:   512 bytes
Rotation Rate:        Solid State Device
Logical Unit id:      8086INTEL SSDPEDMW800G4                     1000CVCQ531500K2800EGN  
Serial number:        CVCQ531500K2800EGN
Device type:          disk
Local Time is:        Sun Dec 13 01:48:37 2015 EST
SMART support is:     Unavailable - device lacks SMART capability.
=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     31 C
Drive Trip Temperature:        85 C
Error Counter logging not supported
[GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
Device does not support Self Test logging
root@chase 4 :~# 
Simple tests with cat /dev/nvme0n1 >/dev/null and iotop show that the card can read data at about 1 GB/sec, about twice as fast as the SATA-based SSD that it replaced. apt/dpkg now run about as fast on the NVMe SSD as they do on a tmpfs. Hopefully this device doesn't at some point require updated firmware, like some infamous SSDs have.

AMD FirePro W4100 graphics card
This is a graphics card capable of driving multiple DisplayPort displays at "4K" resolution and at a 60 Hz refresh rate. It has four Mini DisplayPort connectors, although I only use two of them. It was difficult to find a sensible graphics card. Most discrete graphics cards appear to be marketed towards video gamers who apparently must seek out bulky cards that occupy multiple PCI slots and have excessive cooling devices. (To take a random example, the ASUS STRIX R9 390X has three fans and brags about its "Mega Heatpipes".) AMD markets a separate line of "FirePro" graphics cards intended for professionals rather than gamers, although they appear to be based on the same GPUs as their "Radeon" video cards. The AMD FirePro W4100 is a normal half-height PCI-E card that fits into a single PCI slot and has a relatively small cooler with a single fan. It doesn't even require an auxiliary power connection and is about the same dimensions as older video cards that I've successfully used with Debian. It was difficult to determine whether the W4100 card was actually supported by an open source driver before buying it. The word "FirePro" appears nowhere on the webpage for the X.org Radeon driver, but I was able to find "CAPE VERDE" listed as an engineering name which appears to match the "Cape Verde" code name for the FirePro W4100 given on Wikipedia's List of AMD graphics processing units. This explains the "verde" string that appears in the firmware filenames requested by the kernel (available only in the non-free/firmware-amd-graphics package):
[drm] initializing kernel modesetting (VERDE 0x1002:0x682C 0x1002:0x2B1E).
[drm] Loading verde Microcode
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_pfp.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_me.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_ce.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_rlc.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_mc.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_smc.bin
The card appears in lspci like this:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde GL [FirePro W4100]
(prog-if 00 [VGA controller])
    Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 2b1e
    Flags: bus master, fast devsel, latency 0, IRQ 55
    Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Memory at f7e00000 (64-bit, non-prefetchable) [size=256K]
    I/O ports at e000 [size=256]
    Expansion ROM at f7e40000 [disabled] [size=128K]
    Capabilities: [48] Vendor Specific Information: Len=08 <?>
    Capabilities: [50] Power Management version 3
    Capabilities: [58] Express Legacy Endpoint, MSI 00
    Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
    Capabilities: [150] Advanced Error Reporting
    Capabilities: [200] #15
    Capabilities: [270] #19
    Kernel driver in use: radeon
The W4100 appears to work just fine, except for a few bizarre error messages that are printed to the kernel log when the displays are woken from power saving mode:
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
There don't appear to be any ill effects from these error messages, though. I have the following package versions installed:
 / Name                          Version             Description
+++-=============================-===================-================================================
ii  firmware-amd-graphics         20151207-1          Binary firmware for AMD/ATI graphics chips
ii  linux-image-4.3.0-trunk-amd64 4.3-1~exp2          Linux 4.3 for 64-bit PCs
ii  xserver-xorg-video-radeon     1:7.6.1-1           X.Org X server -- AMD/ATI Radeon display driver
The Supermicro X10SAE motherboard has two PCI-E 3.0 slots, but they're listed as functioning in either "16/NA" or "8/8" mode, which apparently means that putting anything in the second slot (like the Intel 750 SSD, which uses an x4 link) causes the video card to run at a smaller x8 link width. This can be verified by looking at the widths reported in the "LnkCap" and "LnkSta" lines in the lspci -vv output:
root@chase 0 :~# lspci -vv -s 01:00.0 | egrep '(LnkCap|LnkSta):'
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
        LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
root@chase 0 :~# 
I did not notice any visible artifacts or performance degradation because of the smaller link width. The sensors utility from the lm-sensors package is capable of reporting the temperature of the GPU:
root@chase 0 :~# sensors radeon-pci-0100
radeon-pci-0100
Adapter: PCI adapter
temp1:        +55.0 C  (crit = +120.0 C, hyst = +90.0 C)
root@chase 0 :~# 
Dell P2715Q monitors
Two new 27" Dell monitors with a native resolution of 3840x2160 were attached to the new graphics card. They replaced two ten-year-old Dell 2001FP monitors with a native resolution of 1600x1200 that had experienced burn-in, providing 4.32 times as many pixels. (TV and monitor manufacturers now shamelessly refer to the 3840x2160 resolution as "4K" resolution even though neither dimension reaches 4000 pixels.) There was very little to set up beyond plugging the DisplayPort inputs on these monitors into the DisplayPort outputs on the graphics card. Most of the setup involved reconfiguring software to work with the very high resolution. X.org, for tl;dr CLOSED NOTABUG reasons, doesn't set the DPI correctly. These monitors have ~163 DPI resolution, so I added -dpi 168 to /etc/X11/xdm/Xservers. (168 is an even 1.75x multiple of 96.) Software like Google Chrome and xfce4-terminal rendered fonts and graphical elements at the right size, but other software like notion, pidgin, and virt-manager did not fully understand the high DPI. E.g., pidgin renders fonts at the correct size, but icons are too small. The default X cursor was also too small. To fix this, I installed the dmz-cursor-theme package, ran update-alternatives --config x-cursor-theme and selected /usr/share/icons/DMZ-Black/cursor.theme as the cursor theme. Overall, these displays are much brighter and more readable than the ones they replaced.
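For reference, the tweak to /etc/X11/xdm/Xservers amounts to appending the -dpi option to the X server line; the exact line differs between installations, but on a Debian system it ends up looking something like this:
:0 local /usr/bin/X :0 vt7 -nolisten tcp -dpi 168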

2 May 2015

Patrick Schoenfeld: Inbox: Zeroed.

E-Mail is a pest, a big time killer wasting your and my time each and every day. Of course it is also a valuable tool, one that no one can renounce. So how can it be of more use than trouble? So far I've followed a no-delete policy when it comes to my mails, since space was not a problem at all. But it developed into a big nasty pile of mails that brought regular distraction each time I looked at my inbox. So I decided to adopt the Inbox Zero concept.

Step 1: Get the pile down
My e-mails had piled up over the years, so I had around 10000 mails in my inbox, with some hundred being unread. I needed to get this pile down and started with the most recent mails, trying to identify clusters of mails, filtering for them and then following these steps: Since it wasn't possible to decide on a course for every mail (that would be a bit like hoovering in the desert), I did this only for the first 1000 mails or so. All mails older than a month were marked read and moved to the archive immediately after. Another approach would be to move all mails to a folder called DMZ and go to step 2.

Step 2: Prepare for implanting some habits
Most mails are the opposite of good old hackish Perl code: read only. They are easy to act on when they come around: just archive or delete them. But the rest will be what steals your time. Some mails require action, either immediately or in a while; some wait for a schedule, e.g. flight information or reservation mails and such. Whatever the reason is, you want to keep them around, because they still have a purpose. There are various filing systems for those mails, most of them GTD variants. As a Gmail user I found this variant, with multiple inboxes in a special Gmail view, interesting and am now giving it a try. One word about the archive folders: I can highly recommend reducing the number of folders you archive to as much as possible.

Step 3: Get into the habit
Now to the hard part: getting into the habit of acting on your inbox. Do it regularly, maybe every hour or so, and be prepared to make quick decisions. Act on any mail immediately, which means either file/delete it, reply to it (if this is what takes less time) or mark it according to the filing system prepared in step 2. And if no mails arrived, then it's a good moment to review your marked mails to see if any of them can be further processed. Now let's see whether my inbox will still be zeroed a month from now.

12 July 2013

Daniel Pocock: Practical VPNs with strongSwan, Shorewall, Linux firewalls and OpenWRT routers

There is intense interest in communications privacy at the moment thanks to the Snowden scandal. Open source software has offered credible solutions for privacy and encryption for many years. Sadly, making these solutions work together is not always plug-and-play. In fact, secure networking, like VoIP, has been plagued by problems with interoperability and firewall/filtering issues although now the solutions are starting to become apparent. Here I will look at some of them, such as the use of firewall zones to simplify management or the use of ECDSA certificates to avoid UDP fragmentation problems. I've drawn together a lot of essential tips from different documents and mailing list discussions to demonstrate how to solve a real-world networking problem.

A typical network scenario and requirements
Here is a diagram of the network that will be used to help us examine the capabilities of these open source solutions. Some comments about the diagram:
  • The names in square brackets are the zones for Shorewall, they are explained later.
  • The dotted lines are IPsec tunnels over the untrusted Internet.
  • The road-warrior users (mobiles and laptops) get virtual IP addresses from the VPN gateway. The branch offices/home routers do not use virtual IPs.
  • The mobile phones are largely untrusted: easily lost or stolen, many apps have malware, they can only tunnel to the central VPN and only access a very limited range of services.
  • The laptops are managed devices so they are trusted with access to more services. For efficiency they can connect directly to branch office/home VPNs as well as the central server.
  • Smart-phone user browsing habits are systematically monitored by mobile companies with insidious links into mass-media and advertising. Road-warriors sometimes plug-in at client sites or hotels where IT staff may monitor their browsing. Therefore, all these users want to tunnel all their browsing through the VPN.
  • The central VPN gateway/firewall is running strongSwan VPN and Shorewall firewall on Linux. It could be Debian, Fedora or Ubuntu. Other open source platforms such as OpenBSD are also very well respected for building firewall and VPN solutions, but Shorewall, which is one of the key ingredients in this recipe, only works on Linux at present.
  • The branch office/home network could be another strongSwan/Shorewall server or it could also be an OpenWRT router
  • The default configuration for most wifi routers creates a bridge joining wifi users with wired LAN users. Not here, it has been deliberately configured as an independent network. Road-warriors who attach to the wifi must use VPN tunnelling to access the local wired network. In OpenWRT, it is relatively easy to make the Wifi network an independent subnet. This is an essential security precaution because wifi passwords should not be considered secure: they are often transmitted to third parties, for example, by the cloud backup service in many newer smart phones.
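As a rough illustration of that last point, an OpenWRT configuration where the wifi is routed on its own subnet (instead of being bridged into the LAN) can look something like the following; the interface name, addresses and SSID are placeholders:
# /etc/config/network (excerpt)
config interface 'wifi'
        option proto 'static'
        option ipaddr '10.66.0.1'
        option netmask '255.255.255.0'
# /etc/config/wireless (excerpt)
config wifi-iface
        option device 'radio0'
        option mode 'ap'
        option ssid 'example-ap'
        option encryption 'psk2'
        option network 'wifi'
OpenWRT's own firewall configuration can then treat the 'wifi' network as a separate zone, mirroring the zone-based approach used with Shorewall below.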
Package mayhem
The major components are packaged on all the major Linux distributions. Nonetheless, in every case, I found that it is necessary to re-compile fresh strongSwan packages from sources. It is not so hard to do but it is necessary and worth the effort. Here are related blog entries where I provide the details about how to re-build fresh versions of the official packages with all necessary features enabled:

Using X.509 certificates as a standard feature of the VPN
For convenience, many people building a point-to-point VPN start with passwords (sometimes referred to as pre-shared keys (PSK)) as a security mechanism. As the VPN grows, passwords become unmanageable. In this solution, we only look at how to build a VPN secured by X.509 certificates. The certificate concept is not hard. In this scenario, there are a few things that make it particularly easy to work with certificates:
  • strongSwan comes with a convenient command line tool, ipsec pki. Many of its functions are equivalent to things you can do with OpenSSL or GnuTLS. However, the ipsec pki syntax is much more lightweight and simply provides a convenient way to do the things you need to do when maintaining a VPN (a short sketch follows this list).
  • Many of the routine activities involved in certificate maintenance can be scripted. My recent blog about using Android clients with strongSwan gives a sample script demonstrating how ipsec pki commands can be used to prepare a PKCS#12 (.p12) file that can be easily loaded into an Android device using the SD-card.
  • For building a VPN, there is no need to use a public certificate authority such as Verisign. Consequently, there is no need to fill in all their forms or make any payments for each device/certificate. Some larger organisations do choose to outsource their certificate management to such firms. For smaller organisations, an effective and sometimes better solution can be achieved by maintaining the process in-house with a private root CA.
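As a minimal sketch of that in-house workflow (the names, lifetimes and DNs are only examples, and the exact options should be checked against the ipsec pki man pages for your strongSwan version):
# create a private root CA using an ECDSA key (see the next section for why ECDSA)
ipsec pki --gen --type ecdsa --size 384 --outform pem > caKey.pem
ipsec pki --self --ca --lifetime 3650 --type ecdsa --in caKey.pem \
        --dn "O=Example Widgets, CN=Example Widgets Root CA" --outform pem > caCert.pem
# issue a certificate for one peer, putting the firewall zone name in the OU
ipsec pki --gen --type ecdsa --size 384 --outform pem > laptop1Key.pem
ipsec pki --pub --in laptop1Key.pem --type ecdsa | \
        ipsec pki --issue --lifetime 1460 --cacert caCert.pem --cakey caKey.pem \
        --dn "OU=vpn_b, CN=laptop1.example.org" --san laptop1.example.org \
        --outform pem > laptop1Cert.pem
The peer's certificate and key then go into /etc/ipsec.d/certs and /etc/ipsec.d/private, with caCert.pem in /etc/ipsec.d/cacerts.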
UDP fragmentation during IPsec IKEv2 key exchange and ECDSA
A common problem for IPsec VPNs using X.509 certificates is the fragmentation of key exchange datagrams during session setup. Sometimes it works, sometimes it doesn't. Various workarounds exist, such as keeping copies of all certificates from potential peers on every host. As the network grows, this becomes inconvenient to maintain and to some extent it eliminates the benefits of using PKI. Fortunately, there is a solution: Elliptic Curve Cryptography (ECC). Many people currently use RSA key-pairs. Best practice suggests using RSA keys of at least 2048 bits and often 4096 bits. Using ECC with a smaller 384-bit key is considered to be equivalent to a 7680 bit RSA key pair. Consequently, ECDSA certificates are much smaller than RSA certificates. Furthermore, at these key sizes, the key exchange packets are almost always smaller than the typical 1500 byte MTU. A further demand for ECDSA is arising due to the use of ECC within smart cards. Many smartcards don't support any RSA key larger than 2048 bits. The highly secure 384-bit ECC key is implemented in quite a few common smart cards. Smart card vendors have shown a preference for the ECC keys due to the US Government's preference for ECC and the lower computational overheads make them more suitable for constrained execution environments. Anyone who wants to use smart cards as part of their VPN or general IT security now, or in the future, needs to consider ECC/ECDSA.

Making the network simple with Shorewall zones
For this example, we are not just taking some easy point-to-point example. We have a real-world, multi-site, multi-device network with road warriors. Simplifying this architecture is important to help us understand and secure it. The solution? Each of these is abstracted to a "zone" in Shorewall. In the diagram above, the zone names are in square brackets. The purpose of each zone is described below:
Zone name Description
loc This is the private LAN and contains servers like databases, private source code repositories and NFS file servers
dmz This is the DMZ and contains web servers that are accessible from the public internet. Some of these servers talk to databases or message queues in the LAN network loc
vpn_a These are road warriors that are not very trustworthy, such as mobile devices. They are occasionally stolen and usually full of spyware (referred to by users as "apps"). They have limited access to ports on some DMZ servers, e.g. for sending and receiving mail using SMTP and IMAP (those ports are not exposed to the public Internet at large). They use the VPN tunnel for general internet access/browsing, to avoid surveillance by their mobile carrier.
vpn_b These are managed laptops that have a low probability of malware infection. They may well be using smart cards for access control. Consequently, they are more trusted than the vpn_a users and have access to some extra intranet pages and file servers. Like the smart-phone users, they use the VPN tunnel for general internet access/browsing, to avoid surveillance by third-party wifi hotspot operators.
vpn_c This firewall zone represents remote sites with managed hardware, such as branch offices or home networks with IPsec routers running OpenWRT.
cust These are servers hosted for third-parties or collaborative testing/development purposes. They have their own firewall arrangements if necessary.
net This zone represents traffic from the public Internet.
A practical Shorewall configuration
Shorewall is chosen to manage the iptables and ip6tables firewall rules. Shorewall provides a level of abstraction that makes netfilter much more manageable than manual iptables scripting. The Shorewall concept of zones is very similar to the zones implemented in OpenWRT and this is an extremely useful paradigm for firewall management. Practical configuration of Shorewall is very well explained in the Shorewall quick start. The one thing that is not immediately obvious is a strategy for planning the contents of the /etc/shorewall/policy and /etc/shorewall/rules files. The exact details for making it work effectively with a modern IPsec VPN are not explained in a single document, so I've gathered those details below as well. An effective way to plan the Shorewall zone configuration is with a table like this:
Destination zone:   loc   dmz   vpn_a   vpn_b   vpn_c   cust   net
Source zone:
  loc     \
  dmz     ?  \  X  X  X
  vpn_a   ?  ?  \  X  X
  vpn_b   ?  ?  X  \  X
  vpn_c   X  \
  cust    X  ?  X  X  X  \
  net     X  ?  X  X  X  \
The symbols in the table are defined:
Symbol    Meaning
(blank)   ACCEPT in policy file
X         REJECT or DROP in policy file
?         REJECT or DROP in policy file, but ACCEPT some specific ports in the rules file
Naturally, this modelling technique is valid for both IPv4 and IPv6 firewalling (with Shorewall6). Looking at the diagram in two dimensions, it is easy to spot patterns. Each pattern can be condensed into a single entry in the rules file. For example, it is clear from the first row that the loc zone can access all other zones. That can be expressed very concisely with a single line in the policy file:
loc all ACCEPT
Specific Shorewall tips for use with IPsec VPNs and strongSwan
Shorewall has several web pages dedicated to VPNs, including the IPsec-specific documentation. Personally, I found that I had to gather a few details from several of these pages to make an optimal solution. Here are those tips:
  • Ignore everything about the /etc/shorewall/tunnels file. It is not needed and not used any more
  • Name the VPN zones (we call them vpn_a, vpn_b and vpn_c) in the zones file but there is no need to put them in the /etc/shorewall/interfaces file.
  • The /etc/shorewall/hosts file is not just for hosts and can be used to specify network ranges, such as those associated with the VPN virtual IP addresses. The ranges you put in this file should match the rightsourceip pool assignments in strongSwan's /etc/ipsec.conf
  • One of the examples suggests using mss=1400 in the /etc/shorewall/zones file. I found that is too big and leads to packets being dropped in some situations. To start with, try a small value such as 1024 and then try larger values later after you prove everything works. Setting mss for IPsec appears to be essential.
  • Do not use the routefilter feature in the /etc/shorewall/interfaces file as it is not compatible with IPsec
Otherwise, just follow the typical examples from the Shorewall quick start guide and configure it to work the way you want. Here is an example /etc/shorewall/zones file:
fw firewall
net ipv4
dmz ipv4
loc ipv4
cust ipv4
vpn_a ipsec mode=tunnel mss=1024
vpn_b ipsec mode=tunnel mss=1024
vpn_c ipsec mode=tunnel mss=1024
Here is an example /etc/shorewall/hosts file describing the VPN ranges from the diagram:
vpn_a eth0:10.1.100.0/24 ipsec
vpn_b eth0:10.1.200.0/24 ipsec
vpn_c eth0:192.168.1.0/24 ipsec
Here is an example /etc/shorewall/policy file based on the table above:
loc all ACCEPT
vpn_c all ACCEPT
cust net ACCEPT
net cust ACCEPT
all all REJECT
Here is an example /etc/shorewall/rules file based on the network:
SECTION ALL
# allow connections to the firewall itself to start VPNs:
# Rule  source    dest    protocol/port details
ACCEPT   all       fw                ah
ACCEPT   all       fw                esp
ACCEPT   all       fw                udp 500
ACCEPT   all       fw                udp 4500
# allow access to HTTP servers in DMZ:
ACCEPT   all       dmz               tcp 80
# allow connections from HTTP servers to MySQL database in private LAN:
ACCEPT   dmz       loc:10.2.0.43     tcp 3306
# allow connections from all VPN users to IMAPS server in private LAN:
ACCEPT vpn_a,vpn_b,vpn_c loc:10.2.0.58 tcp 993
# allow VPN users (but not the smartphones in vpn_a) to the
# PostgresQL database for PostBooks accounting system:
ACCEPT vpn_b,vpn_c loc:10.2.0.48      tcp 5432
SECTION ESTABLISHED
ACCEPT   all       all
SECTION RELATED
ACCEPT   all       all
Once the files are created, Shorewall can be easily activated with:
# shorewall compile && shorewall restart
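Shorewall also ships a built-in configuration check, which catches most syntax and zone/interface mistakes before anything touches the live ruleset:
# shorewall check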
strongSwan IPsec VPN setup
Like Shorewall, strongSwan is also very well documented and I'm just going to focus on those specific areas that are relevant to this type of VPN project.
  • Allowing the road-warriors to send all their browsing traffic over the VPN means including leftsubnet=0.0.0.0/0 in the VPN server's /etc/ipsec.conf file. Be wary though: sometimes the road-warriors start sending the DHCP renewal over the tunnel instead of to their local DHCP server.
  • As we are using Shorewall zones for firewalling, you must set the options leftfirewall=no and lefthostaccess=no in ipsec.conf. Shorewall already knows about the remote networks as they are defined in the /etc/shorewall/hosts file and so firewall rules don't need to be manipulated each time a tunnel goes up or down.
  • As discussed above, X.509 certificates are used for peer authentication. In the certificate Distinguished Name (DN), store the zone name in the Organizational Unit (OU) component, for example, OU=vpn_c, CN=gw.branch1.example.org
  • In the ipsec.conf file, match the users to connections using wildcard specifications such as rightid="OU=vpn_a, CN=*"
  • Put a subjectAltName with hostname in every certificate. The --san option to the ipsec pki commands adds the subjectAltName.
  • Keep the certificate distinguished names (DN) short, this makes the certificate smaller and reduces the risk of fragmented authentication packets. Many examples show a long and verbose DN such as C=GB, O=Acme World Wide Widget Corporation, OU=Engineering, CN=laptop1.eng.acme.example.org. On a private VPN, it is rarely necessary to have all that in the DN, just include OU for the zone name and CN for the host name.
  • As discussed above, use the ECDSA scheme for keypairs (not RSA) to ensure that the key exchange datagrams don't suffer from fragmentation problems. For example, generate a keypair with the command ipsec pki --gen --type ecdsa --size 384 > user1Key.der
  • Road warriors should have leftid=%fromcert in their ipsec.conf file. This forces them to use the Distinguished Name and not the subjectAltName (SAN) to identify themselves.
  • For road warriors who require virtual tunnel IPs, configure them to request both IPv4 and IPv6 addresses (dual stack) with leftsourceip=%config4,%config6 and on the VPN gateway, configure the ranges as arguments to rightsourceip
  • To ensure that roadwarriors query the LAN DNS, add the DNS settings to strongswan.conf and make sure road warriors are using a more recent strongSwan version that can dynamically update /etc/resolv.conf. Protecting DNS traffic is important for privacy reasons. It also makes sure that the road-warriors can discover servers that are not advertised in public DNS records.
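For the DNS point above, a minimal strongswan.conf fragment on the gateway might look like the following (the resolver addresses are placeholders; on strongSwan 5.x the charon.dns1/dns2 options define the DNS servers handed to clients through the IKEv2 configuration payload):
charon {
        dns1 = 10.2.0.53
        dns2 = 10.2.0.54
}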
Central firewall/VPN gateway configuration
Although they are not present in the diagram, IPv6 networks are also configured in these strongSwan examples. It is very easy to combine IPv4 and IPv6 into a single /etc/ipsec.conf file. As long as the road-warriors have leftsourceip=%config4,%config6 in their own configurations, they will operate dual-stack IPv4/IPv6 whenever they connect to the VPN. Here is an example /etc/ipsec.conf for the central VPN gateway:
config setup
        charonstart=yes
        charondebug=all
        plutostart=no
conn %default
        ikelifetime=60m
        keylife=20m
        rekeymargin=3m
        keyingtries=1
        keyexchange=ikev2
conn branch1
        left=198.51.100.1
        leftsubnet=203.0.113.0/24,10.0.0.0/8,2001:DB8:12:80::/64
        leftcert=fw1Cert.der
        leftid=@fw1.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid=@branch1-fw.example.org
        rightsubnet=192.168.1.0/24
        auto=add
conn rw_vpn_a
        left=198.51.100.1
        leftsubnet=0.0.0.0/0,::0/0
        leftcert=fw1Cert.der
        leftid=@fw1.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid="OU=vpn_a, CN=*"
        rightsourceip=10.1.100.0/24,2001:DB8:1000:100::/64
        auto=add
conn rw_vpn_b
        left=198.51.100.1
        leftsubnet=0.0.0.0/0,::0/0
        leftcert=fw1Cert.der
        leftid=@fw1.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid="OU=vpn_b, CN=*"
        rightsourceip=10.1.200.0/24,2001:DB8:1000:200::/64
        auto=add
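Once the gateway configuration and certificates are in place, the usual ipsec front-end can be used to load and inspect everything; for example:
# pick up changes to ipsec.conf and ipsec.secrets
ipsec update
ipsec rereadsecrets
# list loaded certificates and connections, plus any established SAs
ipsec listcerts
ipsec statusall
The road-warrior and branch connections above use auto=add, so the gateway simply waits for the remote peers to initiate.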
Sample branch office or home router VPN configuration
Here is an example /etc/ipsec.conf for the Linux server or OpenWRT VPN at the branch office or home:
conn head_office
        left=%defaultroute
        leftid=@branch1-fw.example.org
        leftcert=branch1-fwCert.der
        leftsubnet=192.168.1.0/24,2001:DB8:12:80::/64
        leftfirewall=no
        lefthostaccess=no
        right=fw1.example.org
        rightid=@fw1.example.org
        rightsubnet=203.0.113.0/24,10.0.0.0/8,2001:DB8:1000::/52
        auto=start
# notice we only allow vpn_b users, not vpn_a
# these users are given virtual IPs from our own
# 192.168.1.0 subnet
conn rw_vpn_b
        left=branch1-fw.example.org
        leftsubnet=192.168.1.0/24,2001:DB8:12:80::/64
        leftcert=branch1-fwCert.der
        leftid=@branch1-fw.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid="OU=vpn_b, CN=*"
        rightsourceip=192.168.1.160/27,2001:DB8:12:80::8000/116
        auto=add
Further topics
Shorewall and Shorewall6 don't currently support a unified configuration. This can make it slightly tedious to duplicate rules between the two IP variations. However, the syntax for IPv4 and IPv6 configuration is virtually identical. Shorewall only currently supports Linux netfilter rules. In theory it could be extended to support other types of firewall API, such as pf used by OpenBSD and the related BSD family of systems.
A more advanced architecture would split the single firewall into multiple firewall hosts, like the inner and outer walls of a large castle. The VPN gateway would also become a standalone host in the DMZ. This would require more complex routing table entries.
Smart cards with PIN numbers provide an effective form of two-factor authentication that can protect certificates for remote users. Smart cards are documented well by strongSwan already, so I haven't repeated any of that material in this article.
Managing a private X.509 certificate authority in practice may require slightly more effort than I've described, especially when an organisation grows. Small networks and home users don't need to worry about these details too much, but for most deployments it is necessary to consider things like certificate revocation lists and special schemes to protect the root certificate's private key. EJBCA is one open source project that might help.
Some users may want to consider ways to prevent the road-warriors from accidentally browsing the Internet when the IPsec tunnel is not active. Such policies could be implemented with some basic iptables firewall rules in the road-warrior devices.

Summary
Using these strategies and configuration tips, planning and building a VPN will hopefully be much simpler. Please feel free to ask questions on the mailing lists for any of the projects discussed in this blog.

12 December 2012

C.J. Adams-Collier: For the DMZ

image

12 July 2012

Ritesh Raj Sarraf: Reporting bugs with Apport

So yet another bug reporting tool. :-) When I prepared Apport for Debian, I wasn't sure how it would look going forward. If you look at the current one lying in experimental, it just detects your crashes and pops up in your systray. It doesn't have the mechanism to interact with the BTS. So, like the title of this post says, this is the first worthy feature for Apport in the context of Debian.
[screenshot: Apport detail report]
Nothing special here. You would have seen a similar window before if you have installed apport. What changes with this release is that now, when you check "Send an error report to help fix this problem", it will really file a bug report on the BTS. Here's what the emailed bug report will look like:
Package: leafnode
Version: 2.0.0.alpha20090406a-1
=============================
ProblemType: Crash
Architecture: amd64
Date: Tue Jul  3 00:08:02 2012
Dependencies:
 adduser 3.113+nmu3
 base-passwd 3.5.24
 cron 3.0pl1-123
 debconf 1.5.44
 debianutils 4.3.1
 dpkg 1.16.4.3
 gcc-4.7-base 4.7.1-2
 libbz2-1.0 1.0.6-3
 libc-bin 2.13-33
 libc6 2.13-33
 libclass-isa-perl 0.36-3
 libdb5.1 5.1.29-4
 libfile-copy-recursive-perl 0.38-1
 libgcc1 1:4.7.1-2
 libgdbm3 1.8.3-11
 liblzma5 5.1.1alpha+20120614-1
 libpam-modules 1.1.3-7.1
 libpam-modules-bin 1.1.3-7.1
 libpam-runtime 1.1.3-7.1
 libpam0g 1.1.3-7.1
 libpcre3 1:8.30-5
 libpopt0 1.16-7
 libselinux1 2.1.9-5
 libsemanage-common 2.1.6-6
 libsemanage1 2.1.6-6
 libsepol1 2.1.4-3
 libswitch-perl 2.16-2
 libustr-1.0-1 1.0.4-3
 libwrap0 7.6.q-23
 logrotate 3.8.1-4
 lsb-base 4.1+Debian7 [modified: lib/lsb/init-functions]
 multiarch-support 2.13-33
 netbase 5.0
 openbsd-inetd 0.20091229-2
 passwd 1:4.1.5.1-1
 perl 5.14.2-12
 perl-base 5.14.2-12
 perl-modules 5.14.2-12
 sensible-utils 0.0.7
 tar 1.26-4
 tcpd 7.6.q-23
 update-inetd 4.43
 zlib1g 1:1.2.7.dfsg-13
Disassembly:
 => 0x7f8e69738475 <*__GI_raise+53>:	cmp    $0xfffffffffffff000,%rax
    0x7f8e6973847b <*__GI_raise+59>:	ja     0x7f8e69738492 <*__GI_raise+82>
    0x7f8e6973847d <*__GI_raise+61>:	repz retq 
    0x7f8e6973847f <*__GI_raise+63>:	nop
    0x7f8e69738480 <*__GI_raise+64>:	test   %eax,%eax
    0x7f8e69738482 <*__GI_raise+66>:	jg     0x7f8e69738465 <*__GI_raise+37>
    0x7f8e69738484 <*__GI_raise+68>:	test   $0x7fffffff,%eax
    0x7f8e69738489 <*__GI_raise+73>:	jne    0x7f8e697384a2 <*__GI_raise+98>
    0x7f8e6973848b <*__GI_raise+75>:	mov    %esi,%eax
    0x7f8e6973848d <*__GI_raise+77>:	nopl   (%rax)
    0x7f8e69738490 <*__GI_raise+80>:	jmp    0x7f8e69738465 <*__GI_raise+37>
    0x7f8e69738492 <*__GI_raise+82>:	mov    0x34e97f(%rip),%rdx        # 0x7f8e69a86e18
    0x7f8e69738499 <*__GI_raise+89>:	neg    %eax
    0x7f8e6973849b <*__GI_raise+91>:	mov    %eax,%fs:(%rdx)
    0x7f8e6973849e <*__GI_raise+94>:	or     $0xffffffff,%eax
    0x7f8e697384a1 <*__GI_raise+97>:	retq
DistroRelease: Debian 7.0
ExecutablePath: /usr/sbin/fetchnews
ExecutableTimestamp: 1265584779
Package: leafnode 2.0.0.alpha20090406a-1
PackageArchitecture: amd64
ProcCmdline: /usr/sbin/fetchnews
ProcCwd: /
ProcEnviron:
 LANGUAGE=en_US:en
 LC_TIME=en_IN.UTF-8
 LC_MONETARY=en_IN.UTF-8
 PATH=(custom, no user)
 LC_ADDRESS=en_IN.UTF-8
 LANG=en_US.UTF-8
 LC_TELEPHONE=en_IN.UTF-8
 LC_NAME=en_IN.UTF-8
 SHELL=/bin/sh
 LC_MEASUREMENT=en_IN.UTF-8
 LC_NUMERIC=en_IN.UTF-8
 LC_PAPER=en_IN.UTF-8
ProcMaps:
 00400000-00421000 r-xp 00000000 08:06 4464934                            /usr/sbin/fetchnews
 00621000-00622000 rw-p 00021000 08:06 4464934                            /usr/sbin/fetchnews
 00622000-00623000 rw-p 00000000 00:00 0 
 00be4000-00c05000 rw-p 00000000 00:00 0                                  [heap]
 7f8e68edc000-7f8e68eef000 r-xp 00000000 08:06 1179776                    /lib/x86_64-linux-gnu/libresolv-2.13.so
 7f8e68eef000-7f8e690ee000 ---p 00013000 08:06 1179776                    /lib/x86_64-linux-gnu/libresolv-2.13.so
 7f8e690ee000-7f8e690ef000 r--p 00012000 08:06 1179776                    /lib/x86_64-linux-gnu/libresolv-2.13.so
 7f8e690ef000-7f8e690f0000 rw-p 00013000 08:06 1179776                    /lib/x86_64-linux-gnu/libresolv-2.13.so
 7f8e690f0000-7f8e690f2000 rw-p 00000000 00:00 0 
 7f8e690f2000-7f8e690f7000 r-xp 00000000 08:06 1180036                    /lib/x86_64-linux-gnu/libnss_dns-2.13.so
 7f8e690f7000-7f8e692f6000 ---p 00005000 08:06 1180036                    /lib/x86_64-linux-gnu/libnss_dns-2.13.so
 7f8e692f6000-7f8e692f7000 r--p 00004000 08:06 1180036                    /lib/x86_64-linux-gnu/libnss_dns-2.13.so
 7f8e692f7000-7f8e692f8000 rw-p 00005000 08:06 1180036                    /lib/x86_64-linux-gnu/libnss_dns-2.13.so
 7f8e692f8000-7f8e692fa000 r-xp 00000000 08:06 1183392                    /lib/libnss_mdns4_minimal.so.2
 7f8e692fa000-7f8e694f9000 ---p 00002000 08:06 1183392                    /lib/libnss_mdns4_minimal.so.2
 7f8e694f9000-7f8e694fa000 rw-p 00001000 08:06 1183392                    /lib/libnss_mdns4_minimal.so.2
 7f8e694fa000-7f8e69505000 r-xp 00000000 08:06 1180055                    /lib/x86_64-linux-gnu/libnss_files-2.13.so
 7f8e69505000-7f8e69704000 ---p 0000b000 08:06 1180055                    /lib/x86_64-linux-gnu/libnss_files-2.13.so
 7f8e69704000-7f8e69705000 r--p 0000a000 08:06 1180055                    /lib/x86_64-linux-gnu/libnss_files-2.13.so
 7f8e69705000-7f8e69706000 rw-p 0000b000 08:06 1180055                    /lib/x86_64-linux-gnu/libnss_files-2.13.so
 7f8e69706000-7f8e69883000 r-xp 00000000 08:06 1179700                    /lib/x86_64-linux-gnu/libc-2.13.so
 7f8e69883000-7f8e69a83000 ---p 0017d000 08:06 1179700                    /lib/x86_64-linux-gnu/libc-2.13.so
 7f8e69a83000-7f8e69a87000 r--p 0017d000 08:06 1179700                    /lib/x86_64-linux-gnu/libc-2.13.so
 7f8e69a87000-7f8e69a88000 rw-p 00181000 08:06 1179700                    /lib/x86_64-linux-gnu/libc-2.13.so
 7f8e69a88000-7f8e69a8d000 rw-p 00000000 00:00 0 
 7f8e69a8d000-7f8e69a95000 r-xp 00000000 08:06 1180031                    /lib/x86_64-linux-gnu/libcrypt-2.13.so
 7f8e69a95000-7f8e69c94000 ---p 00008000 08:06 1180031                    /lib/x86_64-linux-gnu/libcrypt-2.13.so
 7f8e69c94000-7f8e69c95000 r--p 00007000 08:06 1180031                    /lib/x86_64-linux-gnu/libcrypt-2.13.so
 7f8e69c95000-7f8e69c96000 rw-p 00008000 08:06 1180031                    /lib/x86_64-linux-gnu/libcrypt-2.13.so
 7f8e69c96000-7f8e69cc4000 rw-p 00000000 00:00 0 
 7f8e69cc4000-7f8e69cc6000 r-xp 00000000 08:06 1180047                    /lib/x86_64-linux-gnu/libdl-2.13.so
 7f8e69cc6000-7f8e69ec6000 ---p 00002000 08:06 1180047                    /lib/x86_64-linux-gnu/libdl-2.13.so
 7f8e69ec6000-7f8e69ec7000 r--p 00002000 08:06 1180047                    /lib/x86_64-linux-gnu/libdl-2.13.so
 7f8e69ec7000-7f8e69ec8000 rw-p 00003000 08:06 1180047                    /lib/x86_64-linux-gnu/libdl-2.13.so
 7f8e69ec8000-7f8e69ed5000 r-xp 00000000 08:06 1179893                    /lib/x86_64-linux-gnu/libpam.so.0.83.0
 7f8e69ed5000-7f8e6a0d4000 ---p 0000d000 08:06 1179893                    /lib/x86_64-linux-gnu/libpam.so.0.83.0
 7f8e6a0d4000-7f8e6a0d5000 r--p 0000c000 08:06 1179893                    /lib/x86_64-linux-gnu/libpam.so.0.83.0
 7f8e6a0d5000-7f8e6a0d6000 rw-p 0000d000 08:06 1179893                    /lib/x86_64-linux-gnu/libpam.so.0.83.0
 7f8e6a0d6000-7f8e6a112000 r-xp 00000000 08:06 1182916                    /lib/x86_64-linux-gnu/libpcre.so.3.13.1
 7f8e6a112000-7f8e6a312000 ---p 0003c000 08:06 1182916                    /lib/x86_64-linux-gnu/libpcre.so.3.13.1
 7f8e6a312000-7f8e6a313000 rw-p 0003c000 08:06 1182916                    /lib/x86_64-linux-gnu/libpcre.so.3.13.1
 7f8e6a313000-7f8e6a333000 r-xp 00000000 08:06 1180134                    /lib/x86_64-linux-gnu/ld-2.13.so
 7f8e6a503000-7f8e6a507000 rw-p 00000000 00:00 0 
 7f8e6a530000-7f8e6a532000 rw-p 00000000 00:00 0 
 7f8e6a532000-7f8e6a533000 r--p 0001f000 08:06 1180134                    /lib/x86_64-linux-gnu/ld-2.13.so
 7f8e6a533000-7f8e6a534000 rw-p 00020000 08:06 1180134                    /lib/x86_64-linux-gnu/ld-2.13.so
 7f8e6a534000-7f8e6a535000 rw-p 00000000 00:00 0 
 7fffbc06e000-7fffbc08f000 rw-p 00000000 00:00 0                          [stack]
 7fffbc1ff000-7fffbc200000 r-xp 00000000 00:00 0                          [vdso]
 ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
ProcStatus:
 Name:	fetchnews
 State:	S (sleeping)
 Tgid:	6872
 Pid:	6872
 PPid:	6871
 TracerPid:	0
 Uid:	9	9	9	9
 Gid:	9	9	9	9
 FDSize:	64
 Groups:	9 
 VmPeak:	   21440 kB
 VmSize:	   21276 kB
 VmLck:	       0 kB
 VmPin:	       0 kB
 VmHWM:	     984 kB
 VmRSS:	     984 kB
 VmData:	     380 kB
 VmStk:	     136 kB
 VmExe:	     132 kB
 VmLib:	    2132 kB
 VmPTE:	      64 kB
 VmSwap:	       0 kB
 Threads:	1
 SigQ:	0/23227
 SigPnd:	0000000000000000
 ShdPnd:	0000000000000000
 SigBlk:	0000000000000000
 SigIgn:	0000000000000000
 SigCgt:	0000000000000000
 CapInh:	0000000000000000
 CapPrm:	0000000000000000
 CapEff:	0000000000000000
 CapBnd:	ffffffffffffffff
 Cpus_allowed:	f
 Cpus_allowed_list:	0-3
 Mems_allowed:	00000000,00000001
 Mems_allowed_list:	0
 voluntary_ctxt_switches:	6
 nonvoluntary_ctxt_switches:	1
Registers:
 rax            0x0	0
 rbx            0x0	0
 rcx            0xffffffffffffffff	-1
 rdx            0x6	6
 rsi            0x1ad8	6872
 rdi            0x1ad8	6872
 rbp            0x0	0x0
 rsp            0x7fffbc08d4f8	0x7fffbc08d4f8
 r8             0x7f8e6a504700	140249645729536
 r9             0x6d6f642064656966	7885631562835126630
 r10            0x8	8
 r11            0x246	582
 r12            0x0	0
 r13            0x7fffbc08d920	140736348084512
 r14            0x0	0
 r15            0x0	0
 rip            0x7f8e69738475	0x7f8e69738475 <*__GI_raise+53>
 eflags         0x246	[ PF ZF IF ]
 cs             0x33	51
 ss             0x2b	43
 ds             0x0	0
 es             0x0	0
 fs             0x0	0
 gs             0x0	0
Signal: 6
SourcePackage: leafnode
Stacktrace:
 #0  0x00007f8e69738475 in *__GI_raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
         pid = <optimized out>
         selftid = <optimized out>
 #1  0x00007f8e6973b6f0 in *__GI_abort () at abort.c:92
         act = {__sigaction_handler = {sa_handler = 0x7f8e69850d3d, sa_sigaction = 0x7f8e69850d3d}, sa_mask = {__val = {140736348083740, 140249634747968, 0, 0, 140736348084512, 140249631083424, 140249645738440, 0, 4294967295, 206158430232, 1, 6427272, 0, 0, 0, 0}}, sa_flags = 1781664242, sa_restorer = 0x1}
         sigs = {__val = {32, 0 <repeats 15 times>}}
 #2  0x0000000000416292 in ?? ()
 No symbol table info available.
 #3  0x0000000000411c80 in ?? ()
 No symbol table info available.
 #4  0x0000000000406952 in ?? ()
 No symbol table info available.
 #5  0x00007f8e69724ead in __libc_start_main (main=<optimized out>, argc=<optimized out>, ubp_av=<optimized out>, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffbc08d918) at libc-start.c:228
         result = <optimized out>
         unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, 498162418289285118, 4206480, 140736348084512, 0, 0, -498021567843461122, -435434157313288194}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x418360, 0x7fffbc08d928}, data = {prev = 0x0, cleanup = 0x0, canceltype = 4293472}}}
         not_first_call = <optimized out>
 #6  0x0000000000402fb9 in ?? ()
 No symbol table info available.
 #7  0x00007fffbc08d918 in ?? ()
 No symbol table info available.
 #8  0x000000000000001c in ?? ()
 No symbol table info available.
 #9  0x0000000000000001 in ?? ()
 No symbol table info available.
 #10 0x00007fffbc08ee96 in ?? ()
 No symbol table info available.
 #11 0x0000000000000000 in ?? ()
 No symbol table info available.
StacktraceTop:
 ?? ()
 ?? ()
 ?? ()
 __libc_start_main (main=<optimized out>, argc=<optimized out>, ubp_av=<optimized out>, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffbc08d918) at libc-start.c:228
 ?? ()
ThreadStacktrace:
 .
 Thread 1 (LWP 6872):
 #0  0x00007f8e69738475 in *__GI_raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
         pid = <optimized out>
         selftid = <optimized out>
 #1  0x00007f8e6973b6f0 in *__GI_abort () at abort.c:92
         act = {__sigaction_handler = {sa_handler = 0x7f8e69850d3d, sa_sigaction = 0x7f8e69850d3d}, sa_mask = {__val = {140736348083740, 140249634747968, 0, 0, 140736348084512, 140249631083424, 140249645738440, 0, 4294967295, 206158430232, 1, 6427272, 0, 0, 0, 0}}, sa_flags = 1781664242, sa_restorer = 0x1}
         sigs = {__val = {32, 0 <repeats 15 times>}}
 #2  0x0000000000416292 in ?? ()
 No symbol table info available.
 #3  0x0000000000411c80 in ?? ()
 No symbol table info available.
 #4  0x0000000000406952 in ?? ()
 No symbol table info available.
 #5  0x00007f8e69724ead in __libc_start_main (main=<optimized out>, argc=<optimized out>, ubp_av=<optimized out>, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffbc08d918) at libc-start.c:228
         result = <optimized out>
         unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, 498162418289285118, 4206480, 140736348084512, 0, 0, -498021567843461122, -435434157313288194}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x418360, 0x7fffbc08d928}, data = {prev = 0x0, cleanup = 0x0, canceltype = 4293472}}}
         not_first_call = <optimized out>
 #6  0x0000000000402fb9 in ?? ()
 No symbol table info available.
 #7  0x00007fffbc08d918 in ?? ()
 No symbol table info available.
 #8  0x000000000000001c in ?? ()
 No symbol table info available.
 #9  0x0000000000000001 in ?? ()
 No symbol table info available.
 #10 0x00007fffbc08ee96 in ?? ()
 No symbol table info available.
 #11 0x0000000000000000 in ?? ()
 No symbol table info available.
Title: fetchnews crashed with SIGABRT in __libc_start_main()
Uname: Linux 3.4-trunk-amd64 x86_64
UserGroups: 
The first 2 lines should be enough for the BTS server to file the bug report against the correct package and the correct version. If you paid attention to the screenshot and the email report, you will notice that the email report does not include the CoreDump section. This was left out intentionally, as we currently might not need it. If there is a need, the data is always available in the user's local apport database, and the developer or user can retrieve it as needed. And if a day comes when we really want it by default, that too is possible: the same email report would then include a CoreDump section like the following:
CoreDump: base64 
H4sICAAAAAAC/0NvcmVEdW1wAOx9C0AU1f7/7LLAggirouF7VDQ0xcVXWGqLoq4Kur6KMmV5CoqwwqJYVquggYqhlVm3B3mt7OWlbnXtZeubHtfobc9Lb6xUemiU5f7PmfkcmBl3VMzbvff3Px89fPb7nfOeM2fOOfOdMzeNT5pgNBgEBpMwRmiRBMEmnA6bEC8MbfYvI6eN4BePBfvX0zQC6Q9RJyULY6NKpuGC6A8rO56uE07wH44lI7YynINlc6cmnFE3n1L5nNB7j+Ssv8lPelbDaeEkuBCu8ag6HENjz9PCmZTh6sNz/aanVz4PS6+V4aoQTohQhxM1rK2XGoQTI/yn5xD814sX4VyacGerFxbOObh15atj6Z17OKl89Qjn0Qkn6pSvEeGqB59z+YKU4aqmtS6fQgDS0wlXo5NPC8I5HK07DyyczdW68yCy9FytK58V4Zw64eou8l8+G8JZy/2Xj51Abfmaw21Th7Np+LTrD+Fc21p5/SGcRxPOoX8dye0T4ep00vPotU92Hh5u3Xln4Wwvt6587A7jaGU4C8I5X/afz3qd/pqFs73WuutWZOm91rr20hzuu9aVz4pwru9adz3YEM6jE65Op3wOdh6OtO68s3C2tgtaVT4nS+/cw0nlc7H61AnnDfBfPg/Cie0WnOv5C1KFi2tdPqsQzqoTTtAZ91SzcI4Frepf6tlIbeaCc71PyyECcXxm686fBeGsrQwnIpytleGsCFcX9IJPVT6jmrXnwRHM2q863NnOn5MNbJ0+irOFo2EcBjn8uGkzxgvN/ZuMID8t7lAXQXiPuHfhLjTE983qMmswAVVmhpydm5UpZhYsEiaq254PCFLkW8nfQF9cViT1HUM18Te9LefjEhahJv6zgV3DtI7LMaJhdSzMwMFo+cIIgdPWbXaWOyMnP2tpEbwPLi4qHFyUnps/uPmI+EfqOhBD/SBF3nq19J1S+VmT8Z389iVVm8CBdhAzwQHNl 7xNNReLV6SpPH/NUzH0OWZNHkPAVYPl+Npo9GEaua1GDtfIF2ni7wz+5QP5fOOOIXxxTJYjWPgP/bfLQHSLRkUd3hTgv749PWm7FP7r0FcDP15U/cmO9fL4jLFNgzkaiBr4zwWL//QaYtey3vH/NtBrxEbbBHFJk6bOTuFt4v9cmwj4N/vn4ODg4ODg4ODg4OD4/wE3aZ7/G/H8n60B2aDfFmVs9kOf/5vJ325CV2n+HSgol59tKvYiasZmxRzNJCeIhG0q7gY1Y4OCA1UlsKl4b5hRxWxhu2Vdmq0Dp6vYi8UxS5SgCmdEuBiEi4F

Now, is this going to be useful? Will we get flooded with bug reports?
 
Usefulness: I think this will uncover many bugs that are going unnoticed today. Take this example itself. This bug was detected in the leafnode package's fetchnews binary. That binary does its job quietly in the background, fetching news. I was never aware that it had been crashing. As a system-wide monitoring tool, apport was able to detect, trap and inform the user of the crash. So I think this will be useful and will improve the overall quality.
Abuse/Flood: This is something I can't predict. Perhaps it will bring challenges if people just blindly click the Send Report button. One option could be to have the "Send Report" checkbox unchecked by default. That should hopefully lower the chances.
 
What do you think? Let me know your comments.
This change will soon be pushed to apport in experimental. Hopefully this weekend.
 
I want to wrap up this post with a thank-you note to Martin Pitt and his team, who created Apport with such simplicity. It took me 30 lines of code to adapt it to Debian. My intent with Apport is to keep the changes to a minimum so that we can always leverage new features and fixes that Apport brings in. So, as far as I can manage, there will be no drift from its original shape.

16 June 2011

Raphaël Hertzog: Installing GNOME 3 on Debian 6.0 Squeeze? No, sorry

Ever since I blogged about the status of GNOME 3 in Debian experimental, my web logs show that many people are looking for ways to try out GNOME 3 with Debian Squeeze. No GNOME 3 for Debian 6.0 Don't hold your breath, it's highly unlikely that anyone in the Debian GNOME team will prepare backports of GNOME 3 for Debian 6.0 Squeeze. It's already difficult enough to do everything right in unstable with a solid upgrade path from the current versions in Squeeze. But if you are brave enough to want to install GNOME 3 with Debian 6.0 on your machine, then I would suggest that you're the kind of person who should run Debian testing instead (or even Debian unstable, it's not so horrible). That's what most people who like to run recent versions of software do. How to run Debian testing You're convinced and want to run Debian testing? It's really easy, just edit your /etc/apt/sources.list and replace "stable" with "testing". A complete file could look like this:
# Main repository
deb http://ftp.debian.org/debian testing main contrib non-free
deb-src http://ftp.debian.org/debian testing main contrib non-free
# Security updates
deb http://security.debian.org/ testing/updates main contrib non-free
Now you should be able to run apt-get dist-upgrade and end up with a testing system. How to install GNOME 3 on Debian testing aka wheezy If you want to try GNOME 3 before it has landed in testing, you'll have to add unstable and experimental to your sources.list:
deb http://ftp.debian.org/debian unstable main contrib non-free
deb http://ftp.debian.org/debian experimental main contrib non-free
You should not install GNOME 3 from experimental if you're not ready to deal with some problems and glitches. Beware: once you have upgraded to GNOME 3 it will be next to impossible to go back to GNOME 2.32 (you can try it, but it's not officially supported by Debian).
To avoid upgrading all your packages to unstable, you will tell APT to prefer testing with the APT::Default-Release directive:
# cat >/etc/apt/apt.conf.d/local <<END
APT::Default-Release "testing";
END
To allow APT to upgrade the GNOME packages to unstable/experimental, you will also install the following pinning file as /etc/apt/preferences.d/gnome:
Package: *gnome* libglib2.0* *vte* *pulse* *peas* libgtk* *gjs* *gconf* *gstreamer* alacarte *brasero* cheese ekiga empathy* gdm3 gcalctool baobab *gucharmap* gvfs* hamster-applet *nautilus* seahorse* sound-juicer *totem* remmina vino gksu xdg-user-dirs-gtk dmz-cursor-theme eog epiphany* evince* *evolution* file-roller gedit* metacity *mutter* yelp* rhythmbox* banshee* system-config-printer transmission-* tomboy network-manager* libnm-* update-notifier shotwell liferea *software-properties* libunique-3.0-0 libseed-gtk3-0 libnotify* libpanel-applet-4-0 libgdata11 libcamel* libcanberra* libchamplain* libebackend* libebook* libecal* libedata* libegroupwise* libevent* gir1.2-* libxklavier16 python-gmenu libgdict-1.0-6 libgdu-gtk0
Pin: release experimental
Pin-Priority: 990
Package: *gnome* libglib2.0* *vte* *pulse* *peas* libgtk* *gjs* *gconf* *gstreamer* alacarte *brasero* cheese ekiga empathy* gdm3 gcalctool baobab *gucharmap* gvfs* hamster-applet *nautilus* seahorse* sound-juicer *totem* remmina vino gksu xdg-user-dirs-gtk dmz-cursor-theme eog epiphany* evince* *evolution* file-roller gedit* metacity *mutter* yelp* rhythmbox* banshee* system-config-printer transmission-* tomboy network-manager* libnm-* update-notifier shotwell liferea *software-properties* libunique-3.0-0 libseed-gtk3-0 libnotify* libpanel-applet-4-0 libgdata11 libcamel* libcanberra* libchamplain* libebackend* libebook* libecal* libedata* libegroupwise* libevent* gir1.2-* libxklavier16 python-gmenu libgdict-1.0-6 libgdu-gtk0
Pin: release unstable
Pin-Priority: 990
Package: *
Pin: release experimental
Pin-Priority: 150
Note that I used Pin-Priority: 990 this time (while I used 500 in the article explaining how to install GNOME 3 on top of unstable); that's because you want these packages to have the same priority as those of testing, and those have a priority of 990 instead of 500 due to the APT::Default-Release setting. You're done, your next dist-upgrade should install GNOME 3. It will pull a bunch of packages from unstable too, but that's expected since the packages required by GNOME 3 are spread between unstable and experimental.
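If you want to check that the pinning behaves as intended before running the upgrade, the standard APT tools will show you (gnome-shell below is only an example package to inspect):
apt-cache policy gnome-shell
apt-get -s dist-upgrade
The first command shows the candidate version and which pin priorities apply; the second simulates the dist-upgrade without changing anything.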


18 April 2011

Raphaël Hertzog: Status update of GNOME 3 in Debian experimental

Last week's post generated a lot of interest so I will make a small update to keep you posted on the status of GNOME 3 in Debian experimental. Experimental is not for everybody But first let me reiterate this: GNOME 3 is in Debian experimental because it's a work in progress. You should not install it if you can't live with problems and glitches. Beware: once you have upgraded to GNOME 3 it will be next to impossible to go back to GNOME 2.32 (you can try it, but it's not officially supported by Debian). Even with the fallback mode, you won't get the same experience as what you had with GNOME 2.32. Many applets are not yet ported to the newest gnome-panel API. So do not upgrade to it if you're not ready to deal with the consequences. It will come to Debian unstable and to Debian testing over time and it should be in better shape by then. Good progress made Most of the important modules have been updated to 3.0. You can see the progress here. The exception is gdm, it still needs to be updated; the login screen looks quite ugly right now when using GNOME 3. Frequently Asked Questions and Common Problems Why do links always open in epiphany instead of iceweasel? You need to upgrade to the latest version of libglib2.0-0, gvfs and gnome-control-center in experimental. Then you can customize the default application used in the control center (under System Information > Default applications). You might need to switch to iceweasel 4.0 in experimental to have iceweasel appear in the list of browsers. Or you can edit ~/.local/share/applications/mimeapps.list and put x-scheme-handler/http=iceweasel.desktop;epiphany.desktop; in the Added Associations section (replace the corresponding line if it already exists and lists epiphany only). The theme looks ugly, and various icons are missing. Ensure that you have installed the latest version of gnome-themes-standard, gnome-icon-theme and gnome-icon-theme-symbolic. The network icon in the Shell does not work. Ensure you have upgraded both network-manager-gnome and network-manager to the experimental version. Some applications do not start at all. If an application loads both GTK2 and GTK3, it exits immediately with a clear message on the standard error output (Gtk-ERROR **: GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported.). It usually means that one of the libraries used by that application uses a different version of GTK+ than the application itself. You should report those problems to the Debian bug tracking system if you find any. Some people also reported failures of all GTK+ applications while using the Oxygen themes. Switching to another theme should help. BTW, the default theme in GNOME 3 is called Adwaita. Where are my icons on the desktop? They are gone, it's by design. But you can reenable them with gsettings set org.gnome.desktop.background show-desktop-icons true and starting nautilus (if it's not already running). (Thanks to bronte for the information) Why do I see all applications twice in the shell? The package menu-xdg generates a desktop file from the Debian menu information; those are in a menu entry that is hidden by default in the old GNOME menu. GNOME Shell doesn't respect those settings and displays all .desktop files. Remove menu-xdg and you will get a cleaner list of applications. APT pinning file for the brave Since last week, we got APT 0.8.14 in unstable and it supports pattern matching for package names in pinning files.
So I can give you a shorter and more complete pinning file thanks to this:
Package: *gnome* libglib2.0* *vte* *pulse* *peas* libgtk* *gjs* *gconf* *gstreamer* alacarte *brasero* cheese ekiga empathy* gdm3 gcalctool baobab *gucharmap* gvfs* hamster-applet *nautilus* seahorse* sound-juicer *totem* remmina vino gksu xdg-user-dirs-gtk dmz-cursor-theme eog epiphany* evince* evolution* file-roller gedit* metacity *mutter* yelp* rhythmbox* banshee* system-config-printer transmission-* tomboy network-manager* libnm-* update-notifier shotwell liferea *software-properties* libunique-3.0-0 libseed-gtk3-0 libnotify* libpanel-applet-4-0 libgdata11 libcamel* libcanberra* libchamplain* libebackend* libebook* libecal* libedata* libegroupwise* libevent* gir1.2-*
Pin: release experimental
Pin-Priority: 500
Package: *
Pin: release experimental
Pin-Priority: 150
Putting the file above in /etc/apt/preferences.d/gnome and having experimental enabled in /etc/apt/sources.list should be enough to enable apt-get dist-upgrade to upgrade to GNOME 3 in experimental. But if you have packages depending on libimobiledevice1, you might have to wait until #620065 is properly fixed so that libimobiledevice2 is co-installable with libimobiledevice1. Update: integrated the explanation on how to reenable the desktop icons thanks to bronte's comment.
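In practice, "having experimental enabled" just means a line like the following in /etc/apt/sources.list (the same mirror used elsewhere in these posts):
deb http://ftp.debian.org/debian experimental main
followed by the usual:
apt-get update
apt-get dist-upgrade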


11 April 2011

Raphaël Hertzog: Journey of a new GNOME 3 Debian packager

With all the buzz around GNOME 3, I really wanted to try it out for real on my main laptop. It usually runs Debian Unstable but that's not enough in this case, GNOME 3 is not fully packaged yet and it's only in experimental for now. I asked Josselin Mouette (of the pkg-gnome team) when he expected it to be available and he could not really answer because there's lots of work left. Instead Roland Mas gently answered me "Sooner if you help". :-) First steps as a GNOME packager This is pretty common in free software and for once I followed the advice, I spent most of Sunday helping out with GNOME 3 packaging. I have no prior experience with GNOME packaging but I'm fairly proficient in Debian packaging in general, so when I showed up on #debian-gnome (irc.debian.org) on Sunday morning, Josselin quickly added me to the team on alioth.debian.org. Still being a pkg-gnome rookie, I started by reading the documentation on pkg-gnome.alioth.debian.org. This is enough to know where to find the code in the SVN repository, and how to do releases, but it doesn't contain much information about what you need to know to be a good GNOME packager. It would have been great to have some words on introspection and what it changes in terms of packaging, for instance. Josselin suggested that I start with one of the modules that was not yet updated at all (most packages have a pre-release version, usually 2.91, in experimental, but some are still at 2.30). Packages updated and problems encountered (You can skip this section if you're not into GNOME packaging.) So I picked up totem. I quickly updated totem-pl-parser as a required build-dependency and made my first mistake by uploading it to unstable (it turns out it's not a problem for this specific package). Totem itself was more complicated even if some preliminary work was already in the subversion repository. It introduces a new library which required a new package and I spent a long time debugging why the package would not build in a minimalistic build environment. Indeed, while the package was building fine in my experimental chroot, I took care to build my test packages like the auto-builders would do with sbuild (in a sid environment + the required build-dependencies from experimental) and there it was failing. In fact it turns out pkg-config was failing because libquvi-dev was missing (and it was required by totem-pl-parser.pc) but this did not leave any error message in config.log. Next, I decided to take care of gnome-screensaver as it was not working for me (I could not unlock the screen once it was activated). When built in my experimental chroot, it was fine, but when built in the minimalistic environment it was failing. Turns out /usr/lib/gnome-screensaver/gnome-screensaver-dialog was loading both libgtk2 and libgtk3 at the same time and was crashing. It's not linked against libgtk2 but it was linked against the unstable version of libgnomekbdui which is still using libgtk2. Bumping the build-dependency on libgnomekbd-dev fixed the problem. In the evening, I took care of mutter and gnome-shell, and did some preliminary work on gnome-menus. Help is still welcome There's still lots of work to do, you're welcome to do like me and join to help. Come on #debian-gnome on irc.debian.org, read the documentation and try to update a package (and ask questions when you don't know).
Installation of GNOME 3 from Debian experimental You can also try GNOME 3 on your Debian machine, but at this point I would advise to do it only if you're ready to invest some time in understanding the remaining problems. It's difficult to cherry-pick just the required packages from experimental; I tried it and at the start I ended up with a bad user experience (important packages like gnome-themes-standard or gnome-icon-theme not installed/updated and similar issues). To help you out with this, here's a file that you can put in /etc/apt/preferences.d/gnome to allow APT to upgrade the most important GNOME 3 packages from experimental:
Package: gnome gnome-desktop-environment gnome-core alacarte brasero cheese ekiga empathy gdm3 gcalctool gconf-editor gnome-backgrounds gnome-bluetooth gnome-media gnome-netstatus-applet gnome-nettool gnome-system-monitor gnome-system-tools gnome-user-share baobab gnome-dictionary gnome-screenshot gnome-search-tool gnome-system-log gstreamer0.10-tools gucharmap gvfs-bin hamster-applet nautilus-sendto seahorse seahorse-plugins sound-juicer totem-plugins remmina vino gksu xdg-user-dirs-gtk gnome-shell gnome-panel dmz-cursor-theme eog epiphany-browser evince evolution evolution-data-server file-roller gedit gnome-about gnome-applets gnome-control-center gnome-disk-utility gnome-icon-theme gnome-keyring gnome-menus gnome-panel gnome-power-manager gnome-screensaver gnome-session gnome-settings-daemon gnome-terminal gnome-themes gnome-user-guide gvfs gvfs-backends metacity mutter nautilus policykit-1-gnome totem yelp gnome-themes-extras gnome-games libpam-gnome-keyring rhythmbox-plugins banshee rhythmbox-plugin-cdrecorder system-config-printer totem-mozilla epiphany-extensions gedit-plugins evolution-plugins evolution-exchange evolution-webcal gnome-codec-install transmission-gtk avahi-daemon tomboy network-manager-gnome gnome-games-extra-data gnome-office update-notifier shotwell liferea epiphany-browser-data empathy-common nautilus-sendto-empathy brasero-common
Pin: release experimental
Pin-Priority: 500
Package: *
Pin: release experimental
Pin-Priority: 150
The list might not be exhaustive and sometimes you will have to give supplementary hints to apt for the upgrade to succeed, but it's better than nothing. I hope you find this useful. I'm enjoying my shiny new GNOME 3 desktop and it's off to a good start. My main complaint is that hamster-applet (time tracker) has not yet been integrated in the shell.
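As for those "supplementary hints": when a single package refuses to move, it can usually be pulled in explicitly from experimental with apt-get's target-release option, for example (the package name below is purely illustrative):
apt-get install -t experimental gnome-shell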


22 November 2010

Petter Reinholdtsen: Lenny->Squeeze upgrades of the Gnome and KDE desktop, now with apt-get autoremove

Michael Biebl suggested to me on IRC that I change my automated upgrade testing of the Lenny Gnome and KDE Desktop to do apt-get autoremove when using apt-get. This seemed like a very good idea, so I adjusted my test scripts and can now present the updated result from today: This is for Gnome: Installed using apt-get, missing with aptitude
apache2.2-bin aptdaemon baobab binfmt-support browser-plugin-gnash cheese-common cli-common cups-pk-helper dmz-cursor-theme empathy empathy-common freedesktop-sound-theme freeglut3 gconf-defaults-service gdm-themes gedit-plugins geoclue geoclue-hostip geoclue-localnet geoclue-manual geoclue-yahoo gnash gnash-common gnome gnome-backgrounds gnome-cards-data gnome-codec-install gnome-core gnome-desktop-environment gnome-disk-utility gnome-screenshot gnome-search-tool gnome-session-canberra gnome-system-log gnome-themes-extras gnome-themes-more gnome-user-share gstreamer0.10-fluendo-mp3 gstreamer0.10-tools gtk2-engines gtk2-engines-pixbuf gtk2-engines-smooth hamster-applet libapache2-mod-dnssd libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libart2.0-cil libboost-date-time1.42.0 libboost-python1.42.0 libboost-thread1.42.0 libchamplain-0.4-0 libchamplain-gtk-0.4-0 libcheese-gtk18 libclutter-gtk-0.10-0 libcryptui0 libdiscid0 libelf1 libepc-1.0-2 libepc-common libepc-ui-1.0-2 libfreerdp-plugins-standard libfreerdp0 libgconf2.0-cil libgdata-common libgdata7 libgdu-gtk0 libgee2 libgeoclue0 libgexiv2-0 libgif4 libglade2.0-cil libglib2.0-cil libgmime2.4-cil libgnome-vfs2.0-cil libgnome2.24-cil libgnomepanel2.24-cil libgpod-common libgpod4 libgtk2.0-cil libgtkglext1 libgtksourceview2.0-common libmono-addins-gui0.2-cil libmono-addins0.2-cil libmono-cairo2.0-cil libmono-corlib2.0-cil libmono-i18n-west2.0-cil libmono-posix2.0-cil libmono-security2.0-cil libmono-sharpzip2.84-cil libmono-system2.0-cil libmtp8 libmusicbrainz3-6 libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil libopal3.6.8 libpolkit-gtk-1-0 libpt2.6.7 libpython2.6 librpm1 librpmio1 libsdl1.2debian libsrtp0 libssh-4 libtelepathy-farsight0 libtelepathy-glib0 libtidy-0.99-0 media-player-info mesa-utils mono-2.0-gac mono-gac mono-runtime nautilus-sendto nautilus-sendto-empathy p7zip-full pkg-config python-aptdaemon python-aptdaemon-gtk python-axiom python-beautifulsoup python-bugbuddy python-clientform python-coherence python-configobj python-crypto python-cupshelpers python-elementtree python-epsilon python-evolution python-feedparser python-gdata python-gdbm python-gst0.10 python-gtkglext1 python-gtksourceview2 python-httplib2 python-louie python-mako python-markupsafe python-mechanize python-nevow python-notify python-opengl python-openssl python-pam python-pkg-resources python-pyasn1 python-pysqlite2 python-rdflib python-serial python-tagpy python-twisted-bin python-twisted-conch python-twisted-core python-twisted-web python-utidylib python-webkit python-xdg python-zope.interface remmina remmina-plugin-data remmina-plugin-rdp remmina-plugin-vnc rhythmbox-plugin-cdrecorder rhythmbox-plugins rpm-common rpm2cpio seahorse-plugins shotwell software-center system-config-printer-udev telepathy-gabble telepathy-mission-control-5 telepathy-salut tomboy totem totem-coherence totem-mozilla totem-plugins transmission-common xdg-user-dirs xdg-user-dirs-gtk xserver-xephyr
Installed using apt-get, removed with aptitude
cheese ekiga eog epiphany-extensions evolution-exchange fast-user-switch-applet file-roller gcalctool gconf-editor gdm gedit gedit-common gnome-games gnome-games-data gnome-nettool gnome-system-tools gnome-themes gnuchess gucharmap guile-1.8-libs libavahi-ui0 libdmx1 libgalago3 libgtk-vnc-1.0-0 libgtksourceview2.0-0 liblircclient0 libsdl1.2debian-alsa libspeexdsp1 libsvga1 rhythmbox seahorse sound-juicer system-config-printer totem-common transmission-gtk vinagre vino
Installed using aptitude, missing with apt-get
gstreamer0.10-gnomevfs
Installed using aptitude, removed with apt-get
[nothing]
This is for KDE: Installed using apt-get, missing with aptitude
ksmserver
Installed using apt-get, removed with aptitude
kwin network-manager-kde
Installed using aptitude, missing with apt-get
arts dolphin freespacenotifier google-gadgets-gst google-gadgets-xul kappfinder kcalc kcharselect kde-core kde-plasma-desktop kde-standard kde-window-manager kdeartwork kdeartwork-emoticons kdeartwork-style kdeartwork-theme-icon kdebase kdebase-apps kdebase-workspace kdebase-workspace-bin kdebase-workspace-data kdeeject kdelibs kdeplasma-addons kdeutils kdewallpapers kdf kfloppy kgpg khelpcenter4 kinfocenter konq-plugins-l10n konqueror-nsplugins kscreensaver kscreensaver-xsavers ktimer kwrite libgle3 libkde4-ruby1.8 libkonq5 libkonq5-templates libnetpbm10 libplasma-ruby libplasma-ruby1.8 libqt4-ruby1.8 marble-data marble-plugins netpbm nuvola-icon-theme plasma-dataengines-workspace plasma-desktop plasma-desktopthemes-artwork plasma-runners-addons plasma-scriptengine-googlegadgets plasma-scriptengine-python plasma-scriptengine-qedje plasma-scriptengine-ruby plasma-scriptengine-webkit plasma-scriptengines plasma-wallpapers-addons plasma-widget-folderview plasma-widget-networkmanagement ruby sweeper update-notifier-kde xscreensaver-data-extra xscreensaver-gl xscreensaver-gl-extra xscreensaver-screensaver-bsod
Installed using aptitude, removed with apt-get
ark google-gadgets-common google-gadgets-qt htdig kate kdebase-bin kdebase-data kdepasswd kfind klipper konq-plugins konqueror ksysguard ksysguardd libarchive1 libcln6 libeet1 libeina-svn-06 libggadget-1.0-0b libggadget-qt-1.0-0b libgps19 libkdecorations4 libkephal4 libkonq4 libkonqsidebarplugin4a libkscreensaver5 libksgrd4 libksignalplotter4 libkunitconversion4 libkwineffects1a libmarblewidget4 libntrack-qt4-1 libntrack0 libplasma-geolocation-interface4 libplasmaclock4a libplasmagenericshell4 libprocesscore4a libprocessui4a libqalculate5 libqedje0a libqtruby4shared2 libqzion0a libruby1.8 libscim8c2a libsmokekdecore4-3 libsmokekdeui4-3 libsmokekfile3 libsmokekhtml3 libsmokekio3 libsmokeknewstuff2-3 libsmokeknewstuff3-3 libsmokekparts3 libsmokektexteditor3 libsmokekutils3 libsmokenepomuk3 libsmokephonon3 libsmokeplasma3 libsmokeqtcore4-3 libsmokeqtdbus4-3 libsmokeqtgui4-3 libsmokeqtnetwork4-3 libsmokeqtopengl4-3 libsmokeqtscript4-3 libsmokeqtsql4-3 libsmokeqtsvg4-3 libsmokeqttest4-3 libsmokeqtuitools4-3 libsmokeqtwebkit4-3 libsmokeqtxml4-3 libsmokesolid3 libsmokesoprano3 libtaskmanager4a libtidy-0.99-0 libweather-ion4a libxklavier16 libxxf86misc1 okteta oxygencursors plasma-dataengines-addons plasma-scriptengine-superkaramba plasma-widget-lancelot plasma-widgets-addons plasma-widgets-workspace polkit-kde-1 ruby1.8 systemsettings update-notifier-common
Running apt-get autoremove made the results using apt-get and aptitude a bit more similar, but there are still quite a lot of differences. I have no idea what packages should be installed after the upgrade, but I hope those who do can have a look.
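For reference, the apt-get side of such an upgrade run boils down to something like the following; this is a rough sketch of the kind of commands being compared, not the author's actual test scripts:
sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade -y
apt-get autoremove -y
The last step is the one added following Michael Biebl's suggestion.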

20 November 2010

Petter Reinholdtsen: Lenny->Squeeze upgrades, apt vs aptitude with the Gnome and KDE desktop

I'm still running upgrade testing of the Lenny Gnome and KDE Desktop, but have not had time to spend on reporting the status. Here is a short update based on a test I ran 20101118. I still do not know what a correct migration should look like, so I report any differences between apt and aptitude and hope someone else can see if anything should be changed. This is for Gnome: Installed using apt-get, missing with aptitude
apache2.2-bin aptdaemon at-spi baobab binfmt-support browser-plugin-gnash cheese-common cli-common cpp-4.3 cups-pk-helper dmz-cursor-theme empathy empathy-common finger freedesktop-sound-theme freeglut3 gconf-defaults-service gdm-themes gedit-plugins geoclue geoclue-hostip geoclue-localnet geoclue-manual geoclue-yahoo gnash gnash-common gnome gnome-backgrounds gnome-cards-data gnome-codec-install gnome-core gnome-desktop-environment gnome-disk-utility gnome-screenshot gnome-search-tool gnome-session-canberra gnome-spell gnome-system-log gnome-themes-extras gnome-themes-more gnome-user-share gs-common gstreamer0.10-fluendo-mp3 gstreamer0.10-tools gtk2-engines gtk2-engines-pixbuf gtk2-engines-smooth hal-info hamster-applet libapache2-mod-dnssd libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libart2.0-cil libatspi1.0-0 libboost-date-time1.42.0 libboost-python1.42.0 libboost-thread1.42.0 libchamplain-0.4-0 libchamplain-gtk-0.4-0 libcheese-gtk18 libclutter-gtk-0.10-0 libcryptui0 libcupsys2 libdiscid0 libeel2-data libelf1 libepc-1.0-2 libepc-common libepc-ui-1.0-2 libfreerdp-plugins-standard libfreerdp0 libgail-common libgconf2.0-cil libgdata-common libgdata7 libgdl-1-common libgdu-gtk0 libgee2 libgeoclue0 libgexiv2-0 libgif4 libglade2.0-cil libglib2.0-cil libgmime2.4-cil libgnome-vfs2.0-cil libgnome2.24-cil libgnomepanel2.24-cil libgnomeprint2.2-data libgnomeprintui2.2-common libgnomevfs2-bin libgpod-common libgpod4 libgtk2.0-cil libgtkglext1 libgtksourceview-common libgtksourceview2.0-common libmono-addins-gui0.2-cil libmono-addins0.2-cil libmono-cairo2.0-cil libmono-corlib2.0-cil libmono-i18n-west2.0-cil libmono-posix2.0-cil libmono-security2.0-cil libmono-sharpzip2.84-cil libmono-system2.0-cil libmtp8 libmusicbrainz3-6 libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil libopal3.6.8 libpolkit-gtk-1-0 libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libpt2.6.7 libpython2.6 librpm1 librpmio1 libsdl1.2debian libservlet2.4-java libsrtp0 libssh-4 libtelepathy-farsight0 libtelepathy-glib0 libtidy-0.99-0 libxalan2-java libxerces2-java media-player-info mesa-utils mono-2.0-gac mono-gac mono-runtime nautilus-sendto nautilus-sendto-empathy openoffice.org-writer2latex openssl-blacklist p7zip p7zip-full pkg-config python-4suite-xml python-aptdaemon python-aptdaemon-gtk python-axiom python-beautifulsoup python-bugbuddy python-clientform python-coherence python-configobj python-crypto python-cupshelpers python-cupsutils python-eggtrayicon python-elementtree python-epsilon python-evolution python-feedparser python-gdata python-gdbm python-gst0.10 python-gtkglext1 python-gtkmozembed python-gtksourceview2 python-httplib2 python-louie python-mako python-markupsafe python-mechanize python-nevow python-notify python-opengl python-openssl python-pam python-pkg-resources python-pyasn1 python-pysqlite2 python-rdflib python-serial python-tagpy python-twisted-bin python-twisted-conch python-twisted-core python-twisted-web python-utidylib python-webkit python-xdg python-zope.interface remmina remmina-plugin-data remmina-plugin-rdp remmina-plugin-vnc rhythmbox-plugin-cdrecorder rhythmbox-plugins rpm-common rpm2cpio seahorse-plugins shotwell software-center svgalibg1 system-config-printer-udev telepathy-gabble telepathy-mission-control-5 telepathy-salut tomboy totem totem-coherence totem-mozilla totem-plugins transmission-common xdg-user-dirs xdg-user-dirs-gtk xserver-xephyr zip
Installed using apt-get, removed with aptitude
arj bluez-utils cheese dhcdbd djvulibre-desktop ekiga eog epiphany-extensions epiphany-gecko evolution-exchange fast-user-switch-applet file-roller gcalctool gconf-editor gdm gedit gedit-common gnome-app-install gnome-games gnome-games-data gnome-nettool gnome-system-tools gnome-themes gnome-utils gnome-vfs-obexftp gnome-volume-manager gnuchess gucharmap guile-1.8-libs hal libavahi-compat-libdnssd1 libavahi-core5 libavahi-ui0 libbind9-50 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcurl3 libdirectfb-1.0-0 libdmx1 libdvdread3 libedata-cal1.2-6 libedataserver1.2-9 libeel2-2.20 libepc-1.0-1 libepc-ui-1.0-1 libexchange-storage1.2-3 libfaad0 libgadu3 libgalago3 libgd2-noxpm libgda3-3 libgda3-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnome-desktop-2 libgnome-pilot2 libgnomecups1.0-1 libgnomeprint2.2-0 libgnomeprintui2.2-0 libgpod3 libgraphviz4 libgtk-vnc-1.0-0 libgtkhtml2-0 libgtksourceview1.0-0 libgtksourceview2.0-0 libgucharmap6 libhesiod0 libicu38 libisccc50 libisccfg50 libiw29 libjaxp1.3-java-gcj libkpathsea4 liblircclient0 libltdl3 liblwres50 libmagick++10 libmagick10 libmalaga7 libmozjs1d libmpfr1ldbl libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpisock9 libpisync1 libpoppler-glib3 libpoppler3 libpt-1.10.10 libraw1394-8 libsdl1.2debian-alsa libsensors3 libsexy2 libsmbios2 libsoup2.2-8 libspeexdsp1 libssh2-1 libsuitesparse-3.1.0 libsvga1 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libvoikko1 libxalan2-java-gcj libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common rhythmbox seahorse sound-juicer swfdec-gnome system-config-printer totem-common totem-gstreamer transmission-gtk vinagre vino w3c-dtd-xhtml wodim
Installed using aptitude, missing with apt-get
gstreamer0.10-gnomevfs
Installed using aptitude, removed with apt-get
[nothing]
This is for KDE: Installed using apt-get, missing with aptitude
autopoint bomber bovo cantor cantor-backend-kalgebra cpp-4.3 dcoprss edict espeak espeak-data eyesapplet fifteenapplet finger gettext ghostscript-x git gnome-audio gnugo granatier gs-common gstreamer0.10-pulseaudio indi kaddressbook-plugins kalgebra kalzium-data kanjidic kapman kate-plugins kblocks kbreakout kbstate kde-icons-mono kdeaccessibility kdeaddons-kfile-plugins kdeadmin-kfile-plugins kdeartwork-misc kdeartwork-theme-window kdeedu kdeedu-data kdeedu-kvtml-data kdegames kdegames-card-data kdegames-mahjongg-data kdegraphics-kfile-plugins kdelirc kdemultimedia-kfile-plugins kdenetwork-kfile-plugins kdepim-kfile-plugins kdepim-kio-plugins kdessh kdetoys kdewebdev kdiamond kdnssd kfilereplace kfourinline kgeography-data kigo killbots kiriki klettres-data kmoon kmrml knewsticker-scripts kollision kpf krosspython ksirk ksmserver ksquares kstars-data ksudoku kubrick kweather libasound2-plugins libboost-python1.42.0 libcfitsio3 libconvert-binhex-perl libcrypt-ssleay-perl libdb4.6++ libdjvulibre-text libdotconf1.0 liberror-perl libespeak1 libfinance-quote-perl libgail-common libgsl0ldbl libhtml-parser-perl libhtml-tableextract-perl libhtml-tagset-perl libhtml-tree-perl libio-stringy-perl libkdeedu4 libkdegames5 libkiten4 libkpathsea5 libkrossui4 libmailtools-perl libmime-tools-perl libnews-nntpclient-perl libopenbabel3 libportaudio2 libpulse-browse0 libservlet2.4-java libspeechd2 libtiff-tools libtimedate-perl libunistring0 liburi-perl libwww-perl libxalan2-java libxerces2-java lirc luatex marble networkstatus noatun-plugins openoffice.org-writer2latex palapeli palapeli-data parley parley-data poster psutils pulseaudio pulseaudio-esound-compat pulseaudio-module-x11 pulseaudio-utils quanta-data rocs rsync speech-dispatcher step svgalibg1 texlive-binaries texlive-luatex ttf-sazanami-gothic
Installed using apt-get, removed with aptitude
amor artsbuilder atlantik atlantikdesigner blinken bluez-utils cvs dhcdbd djvulibre-desktop imlib-base imlib11 kalzium kanagram kandy kasteroids katomic kbackgammon kbattleship kblackbox kbounce kbruch kcron kdat kdemultimedia-kappfinder-data kdeprint kdict kdvi kedit keduca kenolaba kfax kfaxview kfouleggs kgeography kghostview kgoldrunner khangman khexedit kiconedit kig kimagemapeditor kitchensync kiten kjumpingcube klatin klettres klickety klines klinkstatus kmag kmahjongg kmailcvt kmenuedit kmid kmilo kmines kmousetool kmouth kmplot knetwalk kodo kolf kommander konquest kooka kpager kpat kpdf kpercentage kpilot kpoker kpovmodeler krec kregexpeditor kreversi ksame ksayit kshisen ksig ksim ksirc ksirtet ksmiletris ksnake ksokoban kspaceduel kstars ksvg ksysv kteatime ktip ktnef ktouch ktron kttsd ktuberling kturtle ktux kuickshow kverbos kview kviewshell kvoctrain kwifimanager kwin kwin4 kwordquiz kworldclock kxsldbg libakode2 libarts1-akode libarts1-audiofile libarts1-mpeglib libarts1-xine libavahi-compat-libdnssd1 libavahi-core5 libavc1394-0 libbind9-50 libbluetooth2 libboost-python1.34.1 libcucul0 libcurl3 libcvsservice0 libdirectfb-1.0-0 libdjvulibre21 libdvdread3 libfaad0 libfreebob0 libgd2-noxpm libgraphviz4 libgsmme1c2a libgtkhtml2-0 libicu38 libiec61883-0 libindex0 libisccc50 libisccfg50 libiw29 libjaxp1.3-java-gcj libk3b3 libkcal2b libkcddb1 libkdeedu3 libkdegames1 libkdepim1a libkgantt0 libkleopatra1 libkmime2 libkpathsea4 libkpimexchange1 libkpimidentities1 libkscan1 libksieve0 libktnef1 liblockdev1 libltdl3 liblwres50 libmagick10 libmimelib1c2a libmodplug0c2 libmozjs1d libmpcdec3 libmpfr1ldbl libneon27 libnm-util0 libopensync0 libpisock9 libpoppler-glib3 libpoppler-qt2 libpoppler3 libraw1394-8 librss1 libsensors3 libsmbios2 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libxalan2-java-gcj libxerces2-java-gcj libxtrap6 lskat mpeglib network-manager-kde noatun pmount tex-common texlive-base texlive-common texlive-doc-base texlive-fonts-recommended tidy ttf-dustin ttf-kochi-gothic ttf-sjfonts
Installed using aptitude, missing with apt-get
dolphin kde-core kde-plasma-desktop kde-standard kde-window-manager kdeartwork kdebase kdebase-apps kdebase-workspace kdebase-workspace-bin kdebase-workspace-data kdeutils kscreensaver kscreensaver-xsavers libgle3 libkonq5 libkonq5-templates libnetpbm10 netpbm plasma-widget-folderview plasma-widget-networkmanagement xscreensaver-data-extra xscreensaver-gl xscreensaver-gl-extra xscreensaver-screensaver-bsod
Installed using aptitude, removed with apt-get
kdebase-bin konq-plugins konqueror

30 January 2010

Josselin Mouette: Please save the graphical installer

The current state of g-i, the graphical version of the Debian installer, is very concerning. Currently, the GTK+ version in squeeze (2.18 and soon 2.20) has very serious bugs in the DirectFB backend, which make it unusable for g-i. Because of that, the first alpha version of d-i will ship without graphical installer support. Unless someone steps up and does something, this will be the end of the graphical installer. Among other things, it means the end of support for several languages: Indic scripts, Thai, Amharic, and all RTL languages. Option 1: fix GTK+ DirectFB support Until now, we always found good-willed volunteers to fix GTK+ so that g-i worked again. I'd like to thank Attilio Fiandrotti and Sven Neumann for their past work, but unfortunately it seems they have better things to do now. If someone takes over their work and hacks on GTK+ to get it to work correctly again on DirectFB, we will be able to go on this way at least for the squeeze release. This requires someone with serious DirectFB knowledge who will not be afraid to dig into the GDK internals. Option 2: switch to X11 If GTK+ doesn't work on DirectFB, there is another plan, but it needs to happen fast. It should be possible to make the installer work on X11. This has the advantage that we know X11 works fine and is maintained in the long term, and so does the X11 GTK+ backend. It also has the drawback of making the installation media slightly larger. This requires quite some work on udebs: it looks like a lot, but there's nothing complicated in it. Anyone familiar enough with Debian can do this, with a little support from the maintainers of said packages. So this could well be you, assuming you're interested in keeping g-i alive. Alternatives Other possibilities to support complex languages include:

21 August 2008

Josselin Mouette: Don't worry, your data is safe

Computer security is generally defined as the following:
  1. The data should remain available to those who need it.
  2. The data should not be available to those who don't have the right to.
The critical software component for the first item is your well-known nightmare: the backup software. Backups are easy! In many corporate environments, here is how you ensure you have backups: Isn't that cool? So little work to have all your data completely safe! Wait, if you look more closely, there is something wrong here. Nowhere in this process did you specify credentials to the backup guy, nor did you add anything to identify the backup server. Or to say it otherwise, once you have installed the backup agent, the backup server has full access to your data. But you didn't even specify anywhere the IP of the backup server. So this means that anyone on the network has full access to your data. Which, while trying to make your data secure, actually opens a gaping security hole on your system. Security by nullity When you install a well-designed backup agent, like Bacula, you have to specify a password for each client and make this password known to the backup server. A simple authentication protocol ensures that only the backup server is able to back up your data. However, when you install, for example, the HP Data Protector agent, it starts listening on a TCP socket and (no kidding!) binds it to a restricted shell which has access to a small list of commands. The backup server only needs to connect to this TCP socket and issue commands. While this has the great advantage of simplifying the development process of the backup server, such software has a name: a rootkit. Several other proprietary backup programs have different implementations, like RPC-based or proprietary protocols, but the basis remains the same: you connect to the TCP port, and you have a way to read absolutely any file on the system. Of course, there are so-called security options that you can buy besides the disk agent or the nifty web interface. Yes, when you buy software that is critical for your security, the very lowest level of security that you'd expect from any software not turning your box into a self-service is a paying option. In the end, what prevents your data from being available to anyone on the intarweb is the ultimate solution to everything(tm): the firewall. As people are not stupid enough to use the same backup system in the DMZ, you can't simply bounce from it after having used a hole in the lousy PHP code that is never updated. (Well, that will still work in many cases since people actually are stupid enough.) No, what makes it very fun is the backup network. The hackup network A tape library is expensive hardware, therefore you only want to buy one. However, when you have several departments or, in outsourced environments, several clients, you can't simply access the backup agents on all these servers, because of all these evil routers and firewalls that are blocking you. The solution is trivial: use the second network card on all servers to connect to a backup network. The backup network is entirely managed by backup people who have no idea of network security, and is routed all the way down to the backup server. If you are lucky, some ACLs set up by the network guy who plugged in the switch will prevent some parts of the network from accessing some others. In all cases, it's very likely that many people in the company (or from other companies!) have full IP access to all your private servers where sensitive data is stored. That includes access to your databases set up without a password, and of course to your highly secure backup agent. 
That generally also includes full IP access to the poorly maintained backup server, on which security updates are never installed. But don't worry. Your data is safe. Note: no, this is not a worst-case scenario. Such badly-designed software is widespread. Such practice is widespread in several companies I know.
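To make the contrast concrete, here is roughly what the per-client authentication described above looks like on a Bacula client; the resource names and the password are placeholders, a sketch rather than a drop-in configuration:
# /etc/bacula/bacula-fd.conf (client side)
Director {
  # only this director, presenting this shared secret, may drive the file daemon
  Name = backup-dir
  Password = "long-random-per-client-key"
}
FileDaemon {
  Name = client1-fd
  FDport = 9102
}
The same name/password pair has to appear in the matching Client resource on the backup server, which is exactly the credential step that the open-TCP-socket agents skip.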

30 May 2008

Joey Hess: locking down ssh authorized keys

The way .ssh/authorized_keys is typically used is not secure. Because using it securely is hard, and dumping in passwordless ssh keys is easy. I spent about 5 hours today locking down my authorized_keys. rsync The only easy case is a rsync of a single directory. Follow these good instructions. If you need to rsync multiple separate directories, it's easy to find several documents involving a validate-rsync.sh. Do not use it, it is insecure -- it allows rsync to be run with any parameters. Including parameters that allow the remote system to rsync in a new ~/.ssh/authorized_keys. Oops. (You can probably also trick validate-rsync.sh into running other arbitrary commands.) To be secure, you have to check the rsync parameters against some form of whitelist (a sketch of this approach appears at the end of this post). git Locking down git should be easy. Use git-shell. Unless you also want to be able to log in to an interactive shell on the same account. To do that, you'll need to generate a separate ssh key for git. Both git servers I use are named git.*, so I set it up like this in ~/.ssh/config on the client:
Host git.*
    IdentityFile ~/.ssh/id_rsa.git
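To create that dedicated key in the first place, something along these lines works (the key type and comment are just an example):
ssh-keygen -t rsa -f ~/.ssh/id_rsa.git -C "git-only key"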
Then on the server, I added the key to authorized_keys, prefixed with this:
command="perl -e 'exec qw(git-shell -c), $ENV SSH_ORIGINAL_COMMAND '"
(I also tried the simpler command="git-shell -c $SSH_ORIGINAL_COMMAND"; but it didn't work with alioth's old version of openssh, and I didn't want to worry about exposing SSH_ORIGINAL_COMMAND to the shell.) Before arriving at that, I took a detour to using gitosis. It can make it impressively easy to manage locked down git over ssh, but you have to fit how it does things. Including repository layout, and a dedicated account. So I spent several hours changing my git repo layout. In the end, I decided gitosis was too complicated and limiting for what I needed, but I do recommend checking it out. svn Like git, this is fairly easy, since svn provides svnserve. The svnbook documents how to use it in authorized_keys. Basically, just use
command="svnserve -t"
A similar approach to the one used for git can be used here if you want to have a dedicated ssh key that causes svnserve to be run. d-i daily builds d-i has a d-i-unpack-helper that can be put in authorized_keys. unison Can probably be locked down similarly to rsync, but I haven't tried yet. duplicity The only way seems to be not using duplicity over ssh at all, and instead using a different transport. too hard What if you want to allow locked down git, and an rsync, and maybe unison too, all to the same account? You'll end up with some nasty mishmash of all of the above, and there are plenty more things using ssh as a transport that need other techniques to lock them down. Until this becomes easier, a majority will just dump passwordless ssh keys in ~/.ssh/authorized_keys, creating exploitable trust paths that don't need to exist.
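Here is the whitelist sketch mentioned in the rsync section above. The wrapper path, the authorized_keys options and the exact rsync server string are assumptions to adapt to your own setup, not something copied from a working configuration. The forced command goes in ~/.ssh/authorized_keys on the server:
command="/usr/local/bin/validate-rsync",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... backup@client
And the wrapper itself:
#!/bin/sh
# /usr/local/bin/validate-rsync (sketch): allow only the exact rsync server
# invocation for the intended transfer, refuse and log everything else.
# Capture the expected string once by logging $SSH_ORIGINAL_COMMAND from a test run.
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server -logDtpre.iLsfx . /srv/backup/")
        exec $SSH_ORIGINAL_COMMAND   # word-splitting is intentional: it rebuilds the argument list
        ;;
    *)
        echo "Rejected: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac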

23 January 2008

Russell Coker: Asus EeePC as a Router

It seems to me that the Asus EeePC (a $AU499 ultra-light laptop with only flash storage) would make a decent router. Often full desktop PCs are used as routers because they run the most common software and have standard interfaces. There have been dedicated router devices with flash for a long time, but without the ability to connect a standard monitor and keyboard they were always more difficult to manage than general purpose PCs. Also dedicated routers have limited RAM and storage which often does not permit running a standard OS. According to the best review I could find [1] the EeePC has a minimum of 256M of RAM, a Celeron-M CPU (32bit Intel), and a minimum of 2G of flash storage. This hardware is more than adequate to run most server software that you might want to run (my current router/firewall/server has 192M of RAM). It could run a web server, a mail server, or any other general server stuff. It comes pre-loaded with a modified version of Debian so you get all the Debian software (which incidentally means more pre-packaged software than is available for any other distribution). Bigger versions are common; I believe that the $AU499 version has 512M of RAM and 4G of flash - I’m not sure that I could even obtain a lesser EeePC in Australia. The up-side of flash is that it doesn’t take much power (having a low-power device in whatever confined space ends up housing your router is a good thing; the EeePC is listed as using less than 20W no matter what the load and idling at as little as 14W) and that it doesn’t tend to break when dropped or have any moving parts to wear out. The down-side of flash is that a sufficient number of writes will destroy it. Obviously swap is not a suitable use for a flash storage device. But a small mail server (suitable for the needs of a home or a small office) should be fine with it. Squid is commonly run on router devices; to run it on an EeePC I would be inclined to buy a 1G USB flash device for the cache, then if Squid’s use destroyed the flash storage it would be easy to spend $20 and buy another. The EeePC has three USB ports and a built-in Ethernet port. I believe that the minimum number of Ethernet ports for a firewall is three; this means either one for the uplink, one for the DMZ, and one for your LAN, or two for uplinks (with a redundant uplink) and one for the LAN. The three USB ports allow using two USB Ethernet devices to provide the minimum three Ethernet ports and one for USB flash storage. One notable advantage of using a laptop as a server is the built-in UPS (the laptop battery). Many people have put old laptops into service as servers for this reason, but usually an old battery gives no more than about 30 minutes of power, while a new EeePC should be able to last for more than 3 hours without mains power. Using a second-hand laptop as a server is usually not viable in a corporate environment as laptops are all different. Repairing an old desktop PC is easy; repairing an old laptop is unreasonably difficult and often more expensive than replacing it. The low price of the EeePC makes it easily affordable (cheaper than some of the desktop machines that might be used for such a purpose) and the fact that it is purchased new at such a price means that you get warranty support etc. 
It seems to me that a significant disadvantage of using an EeePC (or anything other than a real server) for a server task is that it lacks ECC RAM [2], but as I’m not aware of any device with ECC RAM that is sold for the purpose of running as a low-end router, or sold at a price point such that it could be used as one, I guess that’s not a big disadvantage for that purpose. But the lack of ECC RAM and the lack of useful support for RAID do make it a little less suitable for use as a mail server. Finally, if you are going to run a VPN then it would be handy to be able to secure the hardware against malicious activity when the office is closed. An EeePC will fit nicely in most safes… Please let me know what you think about this idea. I’ll probably start installing some such machines for clients in the near future if ASUS can keep up with the demand.
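To make the three-port layout above concrete, a Debian /etc/network/interfaces for such a box might look roughly like this (interface names and addresses are purely illustrative):
auto eth0
iface eth0 inet dhcp
    # built-in Ethernet: uplink to the ISP
auto eth1
iface eth1 inet static
    # first USB Ethernet adapter: DMZ
    address 192.168.10.1
    netmask 255.255.255.0
auto eth2
iface eth2 inet static
    # second USB Ethernet adapter: LAN
    address 192.168.20.1
    netmask 255.255.255.0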

14 December 2007

Martin F. Krafft: Finally on the right track

It's now three weeks since I am officially a Ph.D. student at the University of Limerick, but I've been too busy to do anything about it. Today, I finally updated my research webpage and sat down to tell you the tale of how I successfully defended my research, but failed to leave the country the next day. As is customary in the British system, I initially enrolled at UL as a Master student. In October, I then applied for a transfer to the Ph.D. track, submitted a transfer report and prepared for a live defence, according to the UL code of practice. On 23 November, I presented my research to my examiners, Dr. Pär Ågerfalk and Patrick Healy, who found my work to be at Ph.D. level and approved of my transfer. I was moderately nervous going into the presentation, not really knowing what was expected of me. In the end, however, the presentation went well and the ensuing discussion provided valuable feedback. Thank you, Pär and Paddy, as well as Norah (who chaired), and Brian (my supervisor) for all your time, help, and support. After the defence, Mel waited outside with a beer in hand, and it being Friday and all, I happily popped the cap and drank. Plans to head to The Stables gave way to chilling on the couch in 110 (my home away from home), before we headed to town for the Kíla gig. We arrived early, had some beers, enjoyed the show, had some more beers, went home, and had some more beers to Guitar Hero, and when I crawled into bed, I was shocked to find out that I had lost all track of time and it was in fact already 5:30 on Saturday morning, instead of the perceived 2 o'clock. See, the source of this shock was a plane that left Dublin the next day around noon, and a rental car that was to get me there, leaving the house at 8. But when my alarm shattered my drunk dreams at 7:30, it quickly became clear (or well, everything was blurred, actually) that I would have a hard time getting into the car, and I would certainly never manage to navigate it to Dublin, a three hour drive. So I called up the airline and the rental company and rescheduled for the next day (there is only one flight from Dublin to Zurich a day), and went back to dreamland. I am entirely unsure whether my Ph.D. transfer defence was enough of a reason to get smashed. I still had a good time though, as always in Ireland, especially with this crowd. NP: The Flower Kings: The Rainmaker

20 October 2007

Jordi Mallach: You might get an email from me tonight

Sometime in August, I said I would watch the Inbox Zero talk later on that day. Well, I finally did today. And I'm ready to mass-murder my (now not so) fat inbox folder, start from scratch, and become a good boy. In fact, I've been on probation for a few weeks. While I wasn't watching the talk (which is pretty insightful and fun, and useful if you also have these horrid mail handling problems) I did roll up my sleeves a few times and worked on reducing the problem. After a few rounds of fighting, things were looking slightly better. I deleted TONS of spam which was still sitting in there. I deleted entire threads of list mail which for some reason weren't being filtered properly. I archived a lot of random, misc email. I even replied to some job offers, for a change. I fixed my .procmailrc a little to get rid of lots of useless stuff that appears in my mail. It got better, but not entirely better. I went from ~6600, which was probably the figure when I said "Enough!", to around 2580. It's still a lot, and I can still get rid of a lot more with easy pattern searches in mutt. The good news is that, for the first time in ages, the number of emails in the mailbox has stayed stable for more than a month. I tell you: I'm proud! So Merlin gets asked in the talk what to do when you've been a naughty boy for a long time, and you've ended up with this HUGE mailbox you can't handle anymore. His answer was what some people suggested in blog comments: put it aside, start from zero. Merlin calls it mail-DMZ, and that's probably what I'll do in a few hours, admittedly with a sentiment of guilt deep in my chest. And from that point, I'll have my mailbox be a TODO list. Delete. Defer. Delegate. Respond. Do. Simple! Other Planet Debian participants like joeyh commented that something that really helps is reducing the number of times you poll for email. For me, that means
set daemon      1800            # Poll every 30 minutes
when it was 5 minutes before. I hope I won't find myself issuing awaken commands often...

I remember when, more than five years ago, having more than 100 mails made me feel bad and go clean up. After some vacation, it went up to 150. Then Christmas came along, 300, until I found myself nearing 7000 last summer. Before moving my junk to a demilitarised mailbox, I'm having some fun replying to some email. The first one in my mailbox is from a member of a Catalan "Mallach" family.
From: Conchita Broquetas <familia_mallach_broquetas@yahoo.es>
Subject: Hola!
To: jordi@sindominio.net
Date: Sun, 17 Jun 2001 16:55:17 +0200 (CEST)
who discovered there was a "Jordi Mallach" other than his brother on the Internet. Apparently we had an exchange about where our families came from (Mallach is anything but a common surname... anywhere, and my family has always wondered where it came from). So that's more than 6 years ago. I think I'd love to get a reply to some email I sent years ago that has been sitting in a mailbox ever since because "I need to reply to this sometime". I think the Mallach-Broquetas are getting one tonight.

If you think I'm dumping random thoughts into a vim buffer, it's probably because I'm feeling sad today. Sorry, but I feel like typing, and I don't have a typewriter with me. Speaking of sad, nothing beats the next email, which sat for a dramatic 6 months in my messy inbox until I found it in the worst of possible scenarios. Let's go back to late February 2004, when I had no job and didn't have a clue what to do with my life.
From: Mark Shuttleworth <mark@hbd.com>
Subject: New project to discuss
To: Jordi Mallach <jordi@debian.org>
Date: Sun, 29 Feb 2004 18:33:51 +0000
[...]
I'm hiring a team of debian developers to work full time on a new
distribution based on Debian. We're making internationalisation a prime
focus, together with Python and regular release management. I've discussed
it with a number of Debian leaders and they're all very positive about it.
[...]
I'm not sure if I totally missed it as it came in, or if I skimmed through it and thought "WTF?! Dude on crack" or just forgot "I need to reply to this email", but I'd swear it was the former. Not long after, no-name-yet.com popped up and the rumours started spreading around Debian channels. Luckily, I got a job at LliureX two months later, where I worked for the following two years, but that's another story.

I guess it was July or so when Ubuntu was made public, and Mark and his secret team organised a conference (blog entries [1] [2] [3] [4] [5]), just before the Warty release, and I was invited to it, for the same reasons I got that email. During that conference, probably because Mark sent me some email and I applied a filter to get to it, I found the lost email, and felt like digging a hole to hide in for a LONG while. I couldn't believe the incredible opportunity I had missed. I went to Mark and said "hey, you're not going to believe this", and he did look quite surprised at someone being such an idiot. I wonder if I should reply to his email today...
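PS: as mentioned above, here is a rough sketch of the kind of procmail rules I mean. The spam header, list address, and folder names are just illustrative examples, not my actual setup:
# excerpt from a hypothetical ~/.procmailrc
MAILDIR=$HOME/Mail
# file mail tagged as spam (by SpamAssassin) out of the inbox
:0:
* ^X-Spam-Flag: YES
spam
# file list mail into its own folder, keyed on the List-Id header
:0:
* ^List-Id:.*debian-devel\.lists\.debian\.org
lists/debian-devel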

22 July 2007

Petr Rockai: toolbox upgrades

In the past something-over-half-a-year (wow, time flies like crazy), I have shuffled my (virtual) toolbox fairly significantly.

In the desktop department, I no longer use the full KDE desktop, only a few applications from KDE 3 (mostly konqueror, plus some of kdeedu). I still use amarok, although by now mostly just to manage my iPod, since it is outright laptop-hostile. Most of the time, it's better (and much more power-efficient) to use the iPod hooked into USB for power, at least as long as the music is there. For the rest, I now use command-line players to play single albums (just music123 for most formats).

For the window manager, I have opted for xmonad, after briefly using ion3 and a faint attempt at wmii (which didn't quite work for me). Xmonad, however, works great and its simplicity is quite amazing. Despite its absolute minimalism, it is, after some getting used to, much more usable than the rest of the WMs I have used. For a terminal, I use urxvt, which is lighter and apparently also faster than konsole (I may reconsider that choice when the KDE 4 konsole comes out). With xmonad, I don't really need tabs anymore; I'm not sure why, but it may be related to a more general change in my workflow.

In the version control department, I am now using darcs for all the projects where I get to decide (a quick sketch of the basic commands is at the end of this post). I no longer use svk, since I no longer really know how to: not having worked with it for a while, I lost the ability to use it efficiently (which possibly hints at the fact that it is somewhat too complex). On the other hand, I am much happier with darcs, also because it is much more forgiving about mistakes and generally very deterministic. It requires some insight to be used efficiently, I suppose, but this is mostly due to differences from more traditional systems (I believe). Today, I finally moved my ikiwiki to darcs, which was the last thing using subversion/svk (well, I have only been able to use it through ssh and local svn for some months now, so I generally didn't use it at all). This hopefully means that I will be able to add more content to this page and blog as well, since it is less effort now.

I am still using emacs (GNU emacs 22; xemacs didn't quite work for me when I tried it, as I have a somewhat nontrivial setup). I intend to try out vimpulse at some point, in addition to viper, to supersede my emacs-lisp hacks for visual line selection.

I have also started working on replacing my internet-facing services, since they are running in an "emergency" (i.e., not really working well) mode after the xen system broke down completely. It was mostly un-upgradable due to the fragility of the xen packages and of the whole xen system, and since the setup relied on working xen for even basic internet connectivity, upgrades were extremely painful. I am migrating to a solution based on user-mode-linux, and moving all the services to a less power-hungry and less noisy machine (an oldish Pentium 3 box). When this is complete, I should be able to read my private mail sanely again, since currently the spam filters don't work and my inbox is mostly trash. This is unrelated to the xen breakdown: my mail used to be handled by a machine at a company I used to do admin work for. However, they removed my accounts (and therefore access to my mail) without a line of prior notice. I still have control over my domain at least (although there will be a problem when it gets close to expiring; I will have to arrange a transfer with said company, I suppose). Moral of the story: never do that.
(Yeah, and in the spirit of the post, I am still using mutt for reading mail.)
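As promised above, here is a quick sketch of the everyday darcs commands I have in mind; the file name and repository path are made up purely for illustration:
darcs initialize                          # turn the current directory into a darcs repository
darcs add notes.txt                       # start tracking a file (example name)
darcs record -a -m "initial import"       # record all pending changes as one named patch
darcs push example.org:/srv/darcs/notes   # push patches to a (hypothetical) remote repository over ssh
darcs unrecord                            # take back a recorded patch, keeping the changes in the working tree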
