Search Results: "danw"

28 October 2022

Antoine Beaupré: Debating VPN options

In my home lab(s), I have a handful of machines spread around a few points of presence, with mostly residential/commercial cable/DSL uplinks, which means, generally, NAT. This makes monitoring those devices kind of impossible. While I do punch holes for SSH, using jump hosts gets old quick, so I'm considering adding a virtual private network (a "VPN", not a VPN service) so that all machines can be reachable from everywhere. I see three ways this can work:
  1. a home-made Wireguard VPN, deployed with Puppet
  2. a Wireguard VPN overlay, with Tailscale or equivalent
  3. IPv6, native or with tunnels
So which one will it be?

Wireguard Puppet modules

As is (unfortunately) typical with Puppet, I found multiple different modules for managing Wireguard.
module    | score | downloads | release    | stars | watch | forks | license  | docs | contrib | issue | PR   | notes
halyard   | 3.1   | 1,807     | 2022-10-14 | 0     | 0     | 0     | MIT      | no   |         |       |      | requires firewall and Configvault_Write modules?
voxpupuli | 5.0   | 4,201     | 2022-10-01 | 2     | 23    | 7     | AGPLv3   | good | 1/9     | 1/4   | 1/61 | optionally configures ferm, uses systemd-networkd, recommends systemd module with manage_systemd to true, purges unknown keys
abaranov  | 4.7   | 17,017    | 2021-08-20 | 9     | 3     | 38    | MIT      | okay | 1/17    | 4/7   | 4/28 | requires pre-generated private keys
arrnorets | 3.1   | 16,646    | 2020-12-28 | 1     | 2     | 1     | Apache-2 | okay | 1       | 0     | 0    | requires pre-generated private keys?
The voxpupuli module seems to be the most promising. The abaranov module is more popular and has more contributors, but it also has more open issues and PRs. More critically, the voxpupuli module was written after the abaranov author failed to respond to a PR from the voxpupuli author that tried to add more automation (namely private key management). It looks like setting up a Wireguard network could be as simple as this on node A:
wireguard::interface { 'wg0':
  source_addresses => ['2003:4f8:c17:4cf::1', '149.9.255.4'],
  public_key       => $facts['wireguard_pubkeys']['nodeB'],
  endpoint         => 'nodeB.example.com:53668',
  addresses        => [{'Address' => '192.168.123.6/30'}, {'Address' => 'fe80::beef:1/64'}],
}
This configuration comes from this pull request I sent to the module to document how to use that fact. Note that the addresses used here are examples that shouldn't be reused and do not conform to RFC 5737 ("IPv4 Address Blocks Reserved for Documentation": 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3)) or RFC 3849 ("IPv6 Address Prefix Reserved for Documentation": 2001:DB8::/32), but that's another story. (To avoid bootstrapping problems, the resubmit-facts configuration could be used so that other nodes' facts are more immediately available.)

One problem with the above approach is that you explicitly need to take care of routing, network topology, and addressing. This can get complicated quickly, especially if you have lots of devices, behind NAT, in multiple locations (which is basically my life at home, unfortunately). Concretely, basic Wireguard only supports one peer behind NAT. There are some workarounds for this, but they generally imply a relay server of some sort, or some custom registry; it's kind of a mess.
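
To make the NAT point concrete, here is a minimal sketch (mine, not from the module's documentation) of what the equivalent manual setup looks like on node A with plain wireguard-tools, reusing the example addresses above; persistent-keepalive is the knob that keeps a single NATed peer reachable, by refreshing the NAT mapping before it expires:

# generate this node's keypair (nodeB.pub is assumed fetched out of band)
umask 077
wg genkey > nodeA.key
wg pubkey < nodeA.key > nodeA.pub
# create the interface and assign the tunnel addresses
ip link add wg0 type wireguard
ip addr add 192.168.123.6/30 dev wg0
ip addr add fe80::beef:1/64 dev wg0
wg set wg0 private-key ./nodeA.key
# allowed-ips doubles as the routing table for the tunnel; the keepalive
# makes the NATed side send a packet every 25 seconds
wg set wg0 peer "$(cat nodeB.pub)" \
    endpoint nodeB.example.com:53668 \
    allowed-ips 192.168.123.4/30,fe80::/64 \
    persistent-keepalive 25
ip link set wg0 up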

And this is where overlay networks like Tailscale come in.

Tailscale

Tailscale is basically designed to deal with this problem. It's not fully open source, but pretty close, and they have an interesting philosophy behind that. The client is open source, and there is an open source version of the server side, called headscale. They have recently (late 2022) hired the main headscale developer while promising to keep supporting it, which is pretty amazing. Tailscale provides an overlay network based on Wireguard, where each peer basically has a peer-to-peer encrypted connection, with automatic key rotation. They also ship a multitude of applications and features on top of that, like file sharing, keyless SSH access, and so on. The authentication layer is based on an existing SSO provider: you don't register a new account with Tailscale, you log in with Google, Microsoft, or GitHub (which, really, is still Microsoft). The Headscale server ships with many of those features out of the box:
  • Full "base" support of Tailscale's features
  • Configurable DNS
    • Split DNS
    • MagicDNS (each user gets a name)
  • Node registration
    • Single-Sign-On (via Open ID Connect)
    • Pre authenticated key
  • Taildrop (File Sharing)
  • Access control lists
  • Support for multiple IP ranges in the tailnet
  • Dual stack (IPv4 and IPv6)
  • Routing advertising (including exit nodes)
  • Ephemeral nodes
  • Embedded DERP server (AKA NAT-to-NAT traversal)
Neither project (client or server) is in Debian (RFP 972439 for the client, none filed yet for the server), which makes deploying this for my use case rather problematic. Their install instructions are basically a curl|bash, but they also provide packages for various platforms. Their Debian install instructions are surprisingly good, and check most of the third-party checklist we're trying to establish. (It's missing a pin.) There's also a Puppet module for Tailscale, naturally.

What I find a little disturbing with Tailscale is that you not only need to trust Tailscale with authorizing your devices, you also basically delegate that trust to the SSO provider. So, in my case, GitHub (or anyone who compromises my account there) can penetrate the VPN. A little scary.

Tailscale is also kind of an "all or nothing" thing. They have MagicDNS, file transfers, all sorts of things, but those things require you to hook up your resolver with Tailscale. In fact, Tailscale kind of assumes you will use their nameservers, and has gone to great lengths to figure out how to do that. And naturally, here, it doesn't seem to work reliably; my resolv.conf somehow gets replaced and the magic resolution of the ts.net domain fails. (I wonder why we can't opt in to just publicly resolving the ts.net domain. I don't care if someone can enumerate the private IP addresses or machines in use in my VPN, at least not as much as I care about fighting with resolv.conf everywhere.)

Because I mostly have access to the routers on the networks I'm on, I don't think I'll be using Tailscale in the long term. But it's pretty impressive stuff: in the time it took me to even review the Puppet modules to configure Wireguard (which is what I'll probably end up doing), I was up and running with Tailscale (but with a broken DNS, naturally). (And yes, basic Wireguard won't bring me DNS either, but at least I won't have to trust Tailscale's Debian packages, and Tailscale, and Microsoft, and GitHub with this thing.)
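
For the record, "up and running" really is only a couple of commands. This is a sketch of the quick path, using the curl|bash installer mentioned above rather than the pinned apt repository a careful deployment would use:

# Tailscale's official install script (their Debian apt instructions
# are the more careful route)
curl -fsSL https://tailscale.com/install.sh | sh
# bring the node up; this prints an SSO URL to authorize the device
sudo tailscale up
# inspect the overlay: peers, tailnet IPs, connection state
tailscale status
tailscale ip -4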

IPv6

IPv6 is actually what is supposed to solve this. No NAT or port forwarding crap: just real IPs everywhere. The problem is that even though IPv6 adoption is still growing, it's kind of reaching a plateau at around 40% world-wide, with Canada lagging behind at 34%. It doesn't help that major ISPs in Canada (e.g. Bell Canada, Videotron) don't care at all about IPv6 (e.g. Videotron, in beta since 2011). So we can't rely on those companies to do the right thing here.

The typical solution is to use a tunnel like HE's tunnelbroker.net. It's kind of tricky to configure, but once it's done, it works. You get end-to-end connectivity as long as everyone on the network is on IPv6. And that's really where the problem lies: the second one of your nodes can't set up such a tunnel, you're kind of stuck and the whole thing breaks down. IPv6 tunnels also don't give you the kind of security a VPN provides, naturally. The other downside of a tunnel is that you don't really get peer-to-peer connectivity: you go through the tunnel, so you can expect higher latencies and possibly lower bandwidth as well. Also, HE.net doesn't currently charge for this service (and they've been doing this for a long time), but that could change in the future (just like Tailscale, that said). Concretely, the latency difference is rather minimal. Google:
--- ipv6.l.google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,8ms
RTT[ms]: min = 13, median = 14, p(90) = 14, max = 15
--- google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,0ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14
In the case of GitHub, latency is actually lower over IPv6, interestingly:
--- ipv6.github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 134,6ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14
--- github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 293,1ms
RTT[ms]: min = 29, median = 29, p(90) = 29, max = 30
That is because HE.net peers directly with my ISP and with Fastly (which is behind GitHub.com's IPv6, apparently?), so it's only 6 hops away. Over IPv4, the ping goes through New York before landing in AWS's Ashburn, Virginia datacenters, for a whopping 13 hops...

I managed to set up an HE.net tunnel at home, because I also need IPv6 for other reasons (namely debugging at work). My first attempt at setting this up in the office failed, but once I found the openwrt.org guide, it worked... for a while, and I was able to produce the above, encouraging, mini benchmarks. Unfortunately, a few minutes later, IPv6 just went down again. And the problem with that is that many programs (and especially OpenSSH) do not respect the Happy Eyeballs protocol (RFC 8305), which means various mysterious hangs at random times in random applications. It's kind of a terrible user experience, on top of breaking the one thing it's supposed to do, of course, which is to give me transparent access to all the nodes I maintain.

Even worse, it would still be a problem for other remote nodes I might set up, where I might not have access to the router to configure the tunnel. It's also not absolutely clear what happens if you set up the same tunnel in two places... Presumably, something is smart enough to distribute only a part of the /48 block selectively, but I don't really feel like going that far, considering how flaky the setup is already.
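
For reference, the tunnel itself is just a 6in4 (sit) interface. On a plain Linux host, the guides boil down to roughly the following, with RFC 3849 documentation addresses standing in for the server address and client prefix that tunnelbroker.net actually assigns:

# $HE_SERVER_V4 is HE's tunnel server IPv4, $LOCAL_V4 your public IPv4
ip tunnel add he-ipv6 mode sit remote "$HE_SERVER_V4" local "$LOCAL_V4" ttl 255
ip link set he-ipv6 up
# client end of the assigned point-to-point /64 (placeholder address)
ip addr add 2001:db8:1::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6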

Other options

If this post sounds a little biased towards IPv6 and Wireguard, it's because it is. I would like everyone to migrate to IPv6 already, and Wireguard seems like a simple and sound system. I'm aware of many other options to build VPNs. So before anyone jumps in and says "but what about...", do know that I have personally experimented with:
  • tinc: nice, automatic meshing, used for the Montreal mesh; serious design flaws in the crypto make it generally unsafe to use; supposedly, v1.1 (or 2.0?) will fix this, but that's been promised for over a decade now
  • ipsec, specifically strongswan: hard to configure (especially to configure correctly!), even harder to debug, otherwise really nice because transparent (e.g. no need for special subnets), used at work, but also considering a replacement there because it's a major barrier to entry when training new staff
  • OpenVPN: mostly used as a client for VPN services like Riseup VPN or Mullvad, mostly relevant for client-server configurations, not really peer-to-peer, shared secrets or TLS, kind of a hassle to maintain, see also SoftEther for an alternative implementation
All of those solutions have significant problems and I do not wish to use any of them for this project. Also note that Tailscale is only one of many projects layered over Wireguard to do that kind of thing; see this LWN review for others (basically NetBird, Firezone, and Netmaker).

Future work

These are options that came up after writing this post and might warrant further examination in the future.
  • Meshbird, a "distributed private networking" tool with little information about how it actually works other than being "encrypted with strong AES-256"
  • Nebula, "A scalable overlay networking tool with a focus on performance, simplicity and security", written by Slack people to replace IPsec (docs); runs as an overlay for Slack's 50k node network; only packaged in Debian experimental, lagging behind upstream (1.4.0, from May 2021, vs upstream's 1.6.1 from September 2022); requires a central CA; written in Golang; I'm in "wait and see" mode for now
  • n2n: "layer two VPN", seems packaged in Debian but inactive
  • ouroboros: "peer-to-peer packet network prototype", sounds and seems complicated
  • QuickTUN is interesting because it's just a small wrapper around NaCL, and it's in Debian... but maybe too obscure for my own good
  • unetd: Wireguard-based full mesh networking from OpenWRT, not in Debian
  • vpncloud: "high performance peer-to-peer mesh VPN over UDP supporting strong encryption, NAT traversal and a simple configuration", sounds interesting, not in Debian
  • Yggdrasil: actually a pretty good match for my use case, but I didn't think of it when starting the experiments here; packaged in Debian, with the Golang version planned, Puppet module; major caveat: nodes exposed publicly inside the global mesh unless configured otherwise (firewall suggested), requires port forwards, alpha status

Conclusion

Right now, I'm going to deploy Wireguard tunnels with Puppet. It seems like kind of a pain in the back, but it's something I will be able to reuse for work, possibly completely replacing strongswan. I have another Puppet module for IPsec which I was planning to publish, but now I'm thinking I should just abandon that and replace everything with Wireguard, assuming we still need VPNs at work in the future. (I have a number of reasons to believe we might not need any in the near future anyway...)

29 September 2022

Russell Coker: Links September 2022

Tony Kern wrote an insightful document about the crash of a B-52 at Fairchild air base in 1994 as a case study of failed leadership [1]. Cory Doctorow wrote an insightful Medium article, We Should Not Endure a King, describing the case for anti-trust laws [2]. We need them badly. There's an insightful Guardian article about the way reasonable responses to the bad situations people are in are diagnosed as mental health problems [3]. Providing better mental healthcare is good, but the government should also work on poverty etc. Cory Doctorow wrote an insightful Locus article about some of the issues that have to be dealt with in applying anti-trust legislation to tech companies [4]. We really need this to be done. Ars Technica has an interesting article about Stable Diffusion, an open source ML system for generating images [5]; the results it can produce are very impressive. One interesting thing is that the license has a set of conditions for usage which preclude exploiting or harming minors or generating false information [6]. This means it will need to go in the non-free section of Debian at best. Dan Wang wrote an interesting article on optimism as human capital [7], which covers the reasons that people feel inspired to create things.

11 December 2013

Gustavo Noronha Silva: WebKitGTK+ hackfest 5.0 (2013)!

For the fifth year in a row the fearless WebKitGTK+ hackers have gathered in A Coruña to bring GNOME and the web closer. Igalia has organized and hosted it as usual, welcoming a record 30 people to its office. The GNOME Foundation has sponsored my trip, allowing me to fly the cool 18-seat propeller airplane from Lisbon to A Coruña, which is a nice adventure, and have pulpo á feira for dinner, which I simply love! That in addition to enjoying the company of so many great hackers.
Web with wider tabs and the new prefs dialog

The goals for the hackfest have been ambitious, as usual, but we made good headway on them. Web the browser (AKA Epiphany) has seen a ton of little improvements, with Carlos splitting the shell search provider to a separate binary, which allowed us to remove some hacks from the session management code of the browser. It also makes testing changes to Web more convenient again. Jon McCann has been pounding at Web's UI, making it more sleek, with tabs that expand to make better use of available horizontal space in the tab bar, and new dialogs for preferences, cookies and password handling. I have made my tiny contribution by making it no longer keep around tabs that were created just for what turned out to be a download. For this last day of the hackfest I plan to also fix an issue with text encoding detection and help track down a hang that happens upon page load.
Martin Robinson and Dan Winship hack

Martin Robinson and myself have as usual dived into the more disgusting and wide-reaching maintainership tasks that we have lots of trouble pushing forward in our day-to-day lives. Porting our build system to CMake has been one of these long-term goals, not because we love CMake (we don't) or because we hate autotools (we do), but because it should make people's lives easier when adding new files to the build, and should also make our build less hacky and quicker. It is sad to see how slow our build can be when compared to something like Chromium, and we think a big part of the problem lies in how complex and dumb autotools and make can be. We have picked up a few of our old branches, brought them up to date and landed them, which now lets us build the main WebKit2GTK+ library through CMake in trunk. This is an important first step, but there's plenty to do.
Hackers take advantage of the icecream network for faster builds

Under the hood, Dan Winship has been pushing HTTP/2 support for libsoup forward, with a dead-tree version of the spec by his side. He is refactoring libsoup internals to accommodate the new code paths. Still on the HTTP front, I have been updating soup's MIME type sniffing support to match the newest living specification, which includes specifications for several new types and a new security feature introduced by Internet Explorer and later adopted by other browsers. The huge task of preparing the ground for one process per tab (or other kinds of process separation; this will still be a topic of discussion for a while) has been pushed forward by several hackers, with Carlos Garcia and Andy Wingo leading the charge.
Jon and Guillaume battling code

Other than that I have been putting in some more work on improving the integration of the new Web Inspector with WebKitGTK+. Carlos has reviewed the patch to allow attaching the inspector to the right side of the window, but we have decided to split it in two: one part providing the functionality, and one the API that will allow browsers to customize how that is done. There's a lot of work to be done here; I plan to land at least this first patch during the hackfest. I have also fought one more battle in the never-ending User-Agent sniffing war, in which we cannot win, it looks like.
Hackers chillin' at A Coruña

I am very happy to be here for the fifth year in a row, and I hope we will be meeting here for many more years to come! Thanks a lot to Igalia for sponsoring and hosting the hackfest, and to the GNOME foundation for making it possible for me to attend! See you in 2014!

30 January 2013

Russell Coker: SE Linux Things To Do

At the end of my talk on Monday about the status of SE Linux [1] I described some of the things that I want to do with SE Linux in Debian (and general SE Linux stuff). Here is a brief summary of some of them:

One thing I've wanted to do for years is to get X Access Controls working in Debian. This means that two X applications could have windows on the same desktop but be unable to communicate with each other by any of the X methods (this includes screen capture and clipboard). It seems that the Fedora people are moving to sandboxing processes with Xephyr for X access (see Dan Walsh's blog post about sandbox -X [2]). But XAce will take a lot of work, and time is always an issue.

An ongoing problem with SE Linux (and most security systems) is the difficulty of running applications with minimum privilege. One example of this is utility programs which can be run by multiple programs: if a utility is usually run by a process that is privileged then we probably won't notice that it requires excess privileges until it's run in a different context. This is a particular problem when trying to restrict programs that may be run as part of a user session. A common example is programs that open files read-write when they only need to read them; if the program then aborts when it can't open the file in question, we will have a problem when it's run from a context that doesn't grant it write access.

To deal with such latent problems I am considering ways of analysing the operation of systems to try to determine which programs request more access than they really need. During my talk I discussed the possibility of using a shared object to log file open/read/write calls to find such latent problems. A member of the audience suggested static code analysis, which seems useful for some languages but doesn't seem likely to cover all necessary languages. Of course the benefit of static code analysis is that it will catch operations that the program doesn't perform in a test environment; error handling is one particularly important corner case in this regard.
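
The shared-object idea is easy to prototype with LD_PRELOAD. Here is a minimal sketch of my own (not from the talk) that logs whether each open() asks for read or write access, which is enough to spot the opened-read-write-but-only-read pattern; a real tool would also wrap openat(), open64() and fopen():

cat > logopen.c <<'EOF'
#define _GNU_SOURCE
#include <stdio.h>
#include <stdarg.h>
#include <fcntl.h>
#include <dlfcn.h>

/* Intercept open() and log the access mode before calling the real one. */
int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
    mode_t mode = 0;
    if (flags & O_CREAT) {       /* the mode argument only exists with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, int);  /* mode_t is promoted in varargs */
        va_end(ap);
    }
    fprintf(stderr, "open(%s, %s)\n", path,
            (flags & (O_WRONLY | O_RDWR)) ? "write" : "read");
    return real_open(path, flags, mode);
}
EOF
gcc -shared -fPIC logopen.c -o logopen.so -ldl
LD_PRELOAD=$PWD/logopen.so some-program 2> open.log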

26 January 2012

Russell Coker: Links January 2012

Cops in Tennessee routinely steal cash from citizens [1]. They are ordered to do so and in some cases their salary is paid from the cash that they take. So they have a good reason to imagine that any large sum of money is drug money and take it. David Frum wrote an insightful article for NY Mag about the problems with the US Republican Party [2]. TreeHugger.com has an interesting article about eco-friendly features on some modern cruise ships [3]. Dan Walsh describes how to get the RSA SecurID PAM module working on a SE Linux system [4]. It's interesting that RSA was telling everyone to turn off SE Linux and shipping a program that was falsely marked as needing an executable stack and which uses netstat instead of /dev/urandom for entropy. Really the only way RSA could do worse would be to fall victim to an Advanced Persistent Attack :-# The Long Now has an interesting summary of a presentation about archive.org [5]. I never realised the range of things that archive.org stores; I will have to explore that if I find some spare time! Jonah Lehrer wrote a detailed and informative article about the way that American high school students receive head injuries playing football [6]. He suggests that it might eventually be the end of the game as we know it. François Marier wrote an informative article about optimising PNG files [7]; optipng is apparently the best option at the moment, but it doesn't do everything you might want. Helen Keeble wrote an interesting review of Twilight [8]. The most noteworthy thing about it IMHO is that she tries to understand teenage girls who like the books and movies. Trying to understand young people is quite rare. Jon Masters wrote a critique of the concept of citizen journalism and described how he has two subscriptions to the NYT as a way of donating to support quality journalism [9]. The only comment on his post indicates a desire for biased news (such as Fox), which shows the reason why most US media is failing at journalism. Luis von Ahn gave an interesting TED talk about crowd-sourced translation [10]. He starts by describing CAPTCHAs and the way that his company ReCAPTCHA provides the CAPTCHA service while also using people's time to digitise books. Then he describes his online translation service and language education system DuoLingo, which allows people to learn a second language for free while translating text between languages [11]. One of the benefits of this is that people don't have to pay to learn a new language, so poor people can learn other languages too: great for people in developing countries who want to learn first-world languages! DuoLingo is in a beta phase at the moment but they are taking some volunteers. Cory Doctorow wrote an insightful article for Publishers Weekly titled Copyrights vs Human Rights [12], which is primarily about SOPA. Naomi Wolf wrote an insightful article for The Guardian about the Occupy movement; among other things, the highest levels of the US government are using the DHS as part of the crackdown [13]. Naomi's claim is that the right-wing and government attacks on the Occupy movement are due to the fact that they want to reform the political process and prevent corruption. John Bohannon gave an interesting and entertaining TED talk about using dance as part of a presentation [14]. He gave an example of using dancers to illustrate some concepts related to physics and then spoke about the waste of PowerPoint. Joe Sabia gave an amusing and inspiring TED talk about the technology of storytelling [15]. He gave the presentation with live actions on his iPad to match his words, a difficult task to perform successfully. Thomas Koch wrote an informative post about some of the issues related to binary distribution of software [16]. I think the problem is even worse than Thomas describes.

27 February 2007

Edd Dumbill: OpenID and microformats support on XTech site

Thanks in no small part to the advocacy of Simon Willison, I've just OpenID-enabled the XTech web site.

OpenID log in box on Expectnation

Users can now create their accounts using an OpenID, or associate an OpenID with an existing account. A single sign-on solution like OpenID solves an important problem for us, as most people tend to interact with our conference web sites in only one or two time periods each year. While we've gone to the trouble of making retrieving a password easy, there's still the mental burden on the user of setting up the account and noting it down somewhere. As a measure of the impact of this on me personally: I habitually save registration confirmation emails in a certain mail folder. Since 1997 I have collected no fewer than 572 of these, and I'm sure some have been missed! One other cool thing about OpenID is that finally I can get the identity I wish to have. No longer do I have to be a compulsive early adopter of every service just to get the name edd. (Well, as long as said service integrates OpenID of course!) Personal branding is an important attractive aspect of OpenID.

Implementation

Implementing OpenID using the Ruby ruby-openid gem was quite straightforward, as was the logical integration into our user models. I've not been the only one following this path recently, as illustrated by this post on Rails OpenID integration from Dan Webb. The harder problem of deploying OpenID lies in making the user interface work well: ultimately that will have a huge influence over its uptake. We've made a decent first go of it in Expectnation, but I'm sure we'll evolve and improve it over time. The main puzzling thing is how obvious to make the OpenID facility, given its relatively small take-up right now. We don't want to confuse normal users too much by using it.

Microformats

When I did my behind-the-scenes piece on the building of the XTech schedule last week, one feature I didn't discuss was the support for microformats we have in the schedule and on the session pages. If you use a tool such as Operator, you can easily save talk times to your calendar while reading the schedule.
XTech schedule microformats

I'm personally a little late to the microformats party. Being a fan of pragmatic RDF, I didn't see much need for microformats right away. However, with tools like Operator I can honestly say that the use of microformats does enhance the XTech schedule. My impressions of microformats (in particular hCalendar and hCard) from using them are mixed. On the plus side, it was very easy to do. On the negative side, I found them restrictive in the sense that for the metadata to be present in the hCalendar object, it needs to be part of the HTML presentation. So, while microformats are meant to be about making human readable data useful for computers, they can have a tail-wagging effect on the human markup. Let me elaborate. In the conference schedule there is a grid overview. For readability here we want to keep the details down to a minimum in each box. There is definitely no need to repeat the date of each presentation when you can see there's a grid per day. But also we want to have microformats available in the page so users can use the grid to pick off talks to add to their calendar. The only details you currently get from the microformat are those you physically include inside the div marked as vevent. This means we can't embed the full details, such as the talk description. It also means I indulge in some dubious markup practices (an empty abbr element) in order to get the date and time into each hCalendar object. It seems to me that this could be ameliorated by more intelligent user agent behaviour. Each of my hCalendar events is given a URL. At the end of that URL is a full description of the event, using microformats. So, as long as I reference the URL in a summary page, the user agent can beetle off and pull down the full information, in much the same sort of way that FOAF uses the rdfs:seeAlso property. So, remove the expectation that microformats provide complete data, and I'm sold.

Other schedule features: iCal, Upcoming.org
Of course, we have iCalendar support in the XTech schedule, so you can subscribe conventionally using iCal, Evolution or a similar program. Aaron Straup Cope took the iCalendar file and uploaded each event into Upcoming.org. If you look at the upcoming events tagged xtech07, you see the results of his work. This foreshadows some of the social elements we plan to add to Expectnation itself: indicating your intent to attend a talk, and adding comments to it. As a program chair I'm finding this quite fascinating to watch.

17 December 2006

Clint Adams: Next week: why everyone should go to tech shows topless

Kathy Sierra writes about how conference T-shirt distributors are not trying to make their attendees look sexy enough, by providing tees that attempt to hide breasts. She doesn't mention underwear though, and this suffers from exactly the same problem. The free logo-branded boxers you get at these things are almost always poorly-made and ill-fitting. Why not provide Speedos or bikini bottoms or anything which will tightly-hug the pubic region and showcase one's junk? Does this guy want to hide his genitalia? No, he wants to strut, proud and trouserless, across the expo floor, knowing full well how much the conference organizers care about him. Bless his courage and rejection of societal norms.

28 February 2006

Erich Schubert: On AppArmor vs. SELinux

Some might have read recent news such as Novell SELinux killer rattles Red Hat, or Dan Walsh's critique of Novell's AppArmor release, concerned with "unix like fragmentation in the security sector". While I also do think that SELinux is both more mature in the core system and more powerful than AppArmor (with a big plus being that SELinux is in the vanilla kernel), I do think that AppArmor can quickly become a true SELinux killer, just by being better documented and easier to use.

SELinux has serious deficiencies in documentation and development community. Almost all the available SELinux documentation is based around the policy as published by the NSA, which is "superseded by the reference policy project". This is the policy currently in Debian and used in the Gentoo SELinux docs - which hasn't received any updates in months now. The newer "reference policy" is updated every few days, by exporting Tresys' internal SVN into a public CVS on SourceForge.

Dan Walsh claimed "multiple distributions shipping with SELinux including Fedora Core (2, 3, 4 and soon 5), Red Hat Enterprise Linux 4, Gentoo, Debian, Ubuntu, Suse and Slackware", which is not entirely true. SuSE has AppArmor now, Fedora and RHEL are pretty much the same, and apparently neither Gentoo, Debian, Ubuntu nor Slackware are up to date with SELinux, or actually involved in the current development. So that basically makes one distribution using current SELinux and one distribution using AppArmor... Looks like a tie to me.

The development side is pretty much a tie as well. AppArmor was developed by a small company called Immunix, and is now backed by big Novell, owner of SuSE. Current SELinux is mostly developed by a small company called Tresys, and somewhat backed and used by RedHat. Both have the feeling of "closed door" commercial development, which may be the reason why it reminds some people of the old Unix wars. Both of course claim to do open development, with for example the current SELinux Symposium. But if you look closely at the agenda and the speakers, it's fairly obvious that this is pretty much a business meeting, with some university speakers talking about the security concepts used. Just one quote from the site:
Developer Summit
An invitation only meeting for the core developers of SELinux to discuss future plans for SELinux and upcoming technologies.
The winner of this "war" between AppArmor and SELinux will be whoever manages to incorporate community development best, and gets the other distributions like Debian, Ubuntu and Slackware to support their efforts. Currently neither of them gives the impression of actively supporting those distributions, which is really bad. Widespread adoption is also where grSecurity has failed before.