Search Results: "debacle"

18 March 2024

Gunnar Wolf: After miniDebConf Santa Fe

Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province, Argentina, just across the river from Paraná, where I have spent almost six beautiful months I will never forget. Around 500 kilometers north of Buenos Aires, Santa Fe and Paraná are separated by the beautiful and majestic Paraná river, which flows from Brazil, marks the Eastern border of Paraguay, and continues within Argentina as the heart of the litoral region of the country, until it merges with the Uruguay river (you guessed right: the river marking the Eastern border of Argentina, first with Brazil and then with Uruguay), and they become the Río de la Plata. This was a short miniDebConf: we were lent the APUL union's building for the weekend (thank you very much!); during Saturday, we had a cycle of talks, and on Sunday we had more of a hacklab logic, with some unstructured time for each of us to work on our own projects, and to talk and have a good time together. We were five Debian people attending: {santiago, debacle, eamanu, dererk, gwolf}@debian.org. My main contact to kickstart the organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they put us in contact with Guillermo and Gabriela, from the APUL non-teaching-staff union of said university. We had the following set of talks (for which there is a promise of an electronic record, as APUL was kind enough to record them! Of course, I will push them to our usual conference video archiving service as soon as I get them):
Hour | Title (Spanish) | Title (English) | Presented by
10:00-10:25 | Introducción al Software Libre | Introduction to Free Software | Martín Bayo
10:30-10:55 | Debian y su comunidad | Debian and its community | Emmanuel Arias
11:00-11:25 | ¿Por qué sigo contribuyendo a Debian después de 20 años? | Why am I still contributing to Debian after 20 years? | Santiago Ruano
11:30-11:55 | Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? | My identity and the Debian project: What is the OpenPGP keyring and why? | Gunnar Wolf
12:00-13:00 | Explorando las masculinidades en el contexto del Software Libre | Exploring masculinities in the context of Free Software | Gora Ortiz Fuentes - José Francisco Ferro
13:00-14:30 | Lunch | |
14:30-14:55 | Debian para el día a día | Debian for our every day | Leonardo Martínez
15:00-15:25 | Debian en las Raspberry Pi | Debian on the Raspberry Pi | Gunnar Wolf
15:30-15:55 | Device Trees | Device Trees | Lisandro Damián Nicanor Pérez Meyer (videoconference)
16:00-16:25 | Python en Debian | Python in Debian | Emmanuel Arias
16:30-16:55 | Debian y XMPP en la medición de viento para la energía eólica | Debian and XMPP in wind measurement for wind energy | Martin Borgert
As always happens, DebConf, miniDebConf and other Debian-related activities are fun, productive, and a great opportunity to meet our decades-long friends again. Let's see what comes next!

26 February 2021

Ritesh Raj Sarraf: Wayland KDE X11

KDE Impressions These days, I often hear a lot about Wayland, and how much effort is being put into it; not just by the embedded world but also by the usual desktop systems, namely KDE and GNOME. In the recent past, I switched back to KDE and have been (very) happy about the switch. Even though the KDE 4 (and initial KDE 5) debacle burnt many, coming back to a usable KDE desktop is always a delight. It makes me feel at home with the elegance, and at the same time the flexibility, it provides. It feels so nice to draft this blog article from KWrite + VI Input Mode. Thanks go to the great work of the Debian KDE Team, and Norbert Preining in particular, who has helped bring very up-to-date KDE packages into Debian. Right now, I'm on a Plasma 5.21.1 desktop, which is recent by all standards.

Wayland Almost everyone in the Linux world these days is busy integrating Wayland as the primary display service. I am not sure what the current status on the GNOME side is, but I definitely keep trying KDE + Wayland with every release. I keep trying with every release because it is still not ready for daily use, and it keeps sending me back to X11, no matter how dated some may call it. The fact is, X11 still shines for me as an end user. The glitches with Wayland (based on this week's test on Plasma 5.21.1):
  • Horrible performance compared to X11
  • Very crashy, especially when hotplugging a secondary display. Plasma would just crash. X11 is very resilient to such things; part of the reason, I think, is the age of the codebase.
  • Many, many applications still need to be fixed for Wayland, or Wayland needs to accommodate them in some way. XWayland does not really feel like the answer.
And while KDE keeps urging users to switch to Wayland, as that's where all the new enhancements and fixes go, someone like me still needs to stick to X11 for the time being. So to get my shiny new LG 27" 4K monitor (3840x2160 60.00*+) to work without too many glitches, I had to live with an alias:
$ alias | grep xrandr
alias rrs_xrandr_lg='xrandr --output DP-1 --mode 3840x2160 --scale .75x.75'
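Which output to address (DP-1 above) differs from machine to machine; assuming an X11 session, something like this finds the connected output names:
$ xrandr --query | grep -w connected    # lists connected outputs and their current modes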

Plasma 5.21 On the brighter side, the Plasma 5.21.1 release brings some nice enhancements in other areas.
  • I'm now able to make use of tighter integration with systemd/cgroups, with better organization and management of processes overall.
  • The new Plasma theme, Breeze Twilight, is a good blend of Light + Dark.
I also appreciate the work put in by Michail Vourlakos. The KDE project is lucky to have a developer/designer like him. His vision of, and work on, the KDE desktop go well beyond what I could do justice to in writing.
$ usystemctl status plasma-plasmashell.service 
  plasma-plasmashell.service - KDE Plasma Workspace
     Loaded: loaded (/usr/lib/systemd/user/plasma-plasmashell.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-02-26 18:34:23 IST; 13s ago
   Main PID: 501806 (plasmashell)
      Tasks: 21 (limit: 18821)
     Memory: 759.8M
        CPU: 13.706s
     CGroup: /user.slice/user-1000.slice/user@1000.service/session.slice/plasma-plasmashell.service
              501806 /usr/bin/plasmashell --no-respawn
             
Feb 26 18:35:00 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:21 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:49 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:57 priyasi plasmashell[501806]: qml: recreating buttons

OBS - Open Build Service I should also thank the openSUSE folks for their work on OBS. It has given Debian a close equivalent (or better, in my experience) of PPAs, and that is what has enabled developers like Norbert to deliver the entire KDE suite easily and quickly.

OBS - Some detail Christian asked for some more details on the OBS side of things, from my point of view. I'm updating this article with it because the comment system may not always be reliable and I hate losing content. Having used OBS myself, along with others in the Debian community who are making use of it, I certainly think we as a project should consider making use of OBS. Given that OBS is Free Software, it is a perfect fit for Debian. GitLab is another example of what we've made available in Debian. OBS is divided into multiple parts:
  • OBS Server
  • OBS DoD service
  • OBS Publisher
  • OBS Workers
  • OBS Warden
  • OBS Rep Server
For every Debian release I care about, I add an OBS project per release. So I have OBS projects for Sid, Bullseye, Buster and Jessie. Now, say you have a package, 'foo'. You prep your package and enable all the releases that you want to build the package for. The same package then gets built, in separate clean environments, for every release I mentioned above. You don't have to manually trigger the build for every release/architecture. You add the releases (as projects) in OBS, set their supported architectures, and then add those enabled release projects as build targets for your package (a sketch of the day-to-day client workflow follows the list below). Every build involves:
  • Creating a new chroot for each build
  • Building the package
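For concreteness, the day-to-day workflow with the osc command-line client looks roughly like this. This is a hedged sketch: the project and package names (home:rrs:Sid, foo) are made up for illustration, and the repository name depends on how the release project was configured.
$ osc checkout home:rrs:Sid/foo     # fetch the package working copy from OBS (hypothetical project/package)
$ cd home:rrs:Sid/foo
$ osc add foo_1.0-1.dsc foo_1.0.orig.tar.gz foo_1.0-1.debian.tar.xz
$ osc commit -m "Import foo 1.0-1"  # upload; OBS schedules builds for every enabled repository/architecture
$ osc results                       # per-repository, per-architecture build status
$ osc build Debian_Unstable x86_64  # optionally reproduce the clean chroot build locally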
Builds can be scattered across multiple hosts, known as workers in OBS terminology. Your workers are independent machine entities, supporting different architectures. The machines can be bare-metal ones, VMs, even containers, so this allows for very nice scale-in and scale-out. There may be auto-scaling too, but that is something worth investigating. Think of things like cross-architecture builds. Let's assume the cloud vendors decide to donate resources to the Debian project. We could enable OBS worker instances on the respective clouds (different architectures) and plug them into the master OBS instance that Debian hosts. Fully distributed. Similarly, big hardware vendors willing to donate compute resources could house them on their premises and Debian could just as easily establish a connection to them. All of this is just a TCP connection away. So when I look at the features of OBS, from the point of view of Debian, I like it more. Extensibility won't be an issue: supporting a new Debian release would just be a matter of bootstrapping that release as a project in OBS, and then all is done. Setting up the target release project is a one-time job, and then everyone can leverage it. PPAs were a long-craved feature missing in Debian, in my opinion. OBS allows us not just to fill that gap but also to extend it in a very easy way. Andrew Lee gave a nice video presentation about this at DebConf 20.

17 January 2017

Ritesh Raj Sarraf: Linux Tablet-Mode Usability

In my ongoing quest to get Tablet-Mode working on my Hybrid machine, here's how I've been living with it so far. My intent is to continue using Free Software for both use cases. My wishful thought is to use the same software under both use cases.
  • Browser: On the browser front, things are pretty decent. Chromium has good support for Touchscreen input. Most of the Touchscreen use cases work well with Chromium. On the Firefox side, after a huge delay, finally, Firefox seems to be catching up. Hopefully, with Firefox 51/52, we'll have a much more usable Touchscreen browser.
  • Desktop Shell: One of the reasons for migrating to GNOME was its touch support. From what I've explored so far, GNOME is the only desktop shell that has native touch support. The feature isn't complete yet, but it is fairly usable.
    • Given that GNOME has native touchscreen support, it makes sense to use the GNOME equivalents of tools for common use cases. Most of these tools inherit their touchscreen capabilities from the underlying GNOME libraries.
    • File Manager: Nautilus has decent support for touch, as a file manager. The only annoying bit is the lack of a right-click equivalent, or, in touch input terms, a long press.
    • Movie Player: There's a decent movie player, based on GNOME libs; GNOME MPV. In my limited use so far, this interface seems to have good support. Other contenders are:
      • SMPlayer is based on Qt libs. So initial expectation would be that Qt based apps would have better Touch support. But I'm yet to see any serious Qt application with Touch input support. Back to SMPlayer, the dev is pragmatic enough to recognize tablet-mode users and as such has provided a so called "Tablet Mode" view for SMPlayer (The tooltip did not get captured in the screenshot).
      • MPV doesn't come with a UI but has basic management via OSD. And in my limited usage, the OSD implementation does seem capable of taking touch input.
  • Books / Documents: GNOME Documents/Books is very basic in what it has to offer, to the point that it is not very useful. But since it is based on the same GNOME libraries, it enjoys native touch input support. Calibre, on the other hand, is feature rich. But it is based on (Py)Qt. Touch input is said to work on Windows; for Linux, there's no support yet. The good thing about Calibre is that it has its own UI, which is pretty decent in a Tablet-Mode Touch workflow.
  • Photo Management: With compact digital devices commonly available, digital content (Both Photos and Videos) is on the rise. The most obvious names that come to mind are Digikam and Shotwell.
    • Shotwell saw its reincarnation in the recent past. From what I recollect, it does have touch support but was lacking quite a bit in terms of features, as compared to Digikam.
    • Digikam is an impressive tool for digital content management. While Digikam is a KDE project, thankfully it does a great job of keeping its KDE dependencies to a bare minimum. But given that Digikam builds on KDE/Qt libs, I haven't had much success in getting a good touch input solution for Tablet Mode. To make it barely usable in Tablet-Mode, one can choose a theme preference with bigger toolbars, labels and scrollbars, which makes touch input a workable workaround. As you can see, I've configured the Digikam UI with Text alongside Icons for easy touch input.
  • Email: The most common use case. With Gmail and friends, many believe standalone email clients are no longer needed. But there will always be users like us who prefer email offline, encrypted email, and their own email domains. Many of these are still doable with free services like Gmail, but still.
    • Thunderbird shows its age at times. And given the state of Firefox in getting touch support (and GTK3 port), I see nothing happening with TB.
    • KMail was something I discontinued while still on KDE. The debacle that KDEPIM was is something I'd always avoid in the future. A complete waste of time/resources in building, testing, reporting and follow-ups.
    • Geary is another email client that recently saw its reincarnation. I explored Geary recently. It enjoys benefits similar to the rest of the applications using GNOME libraries. There was one touch input bug I found, but otherwise Geary's feature set was limited in comparison to Evolution.
    • Migrating to Evolution, as part of migrating to GNOME, was not easy. GNOME's philosophy is to keep things simple and limited. In doing that, they restrict possible flexibilities that users may find obvious. This design philosophy is easily visible across all applications of the GNOME family, and Evolution is no different. Hence, coming from TB to E was a small unlearning + relearning curve. And since Evolution uses the same GNOME libraries, it enjoys similar benefits. Touch input support in Evolution is fairly good. The missing bit is the new toolbar and menu structure that many have noticed in the newer GNOME applications (Photos, Documents, Nautilus etc.). If only Evolution (and the GNOME family) had the option of customization beyond the developer/project's view, there wouldn't be any wishful thoughts.
      • Above is a screenshot of two windows of Evolution. Even in its current form, Evolution is a gem at times. My RSS feeds are stored in a VFolder in Evolution, so that I can read them when offline. RSS feeds are something I read in Tablet-Mode. On the right is an Evolution window with larger fonts, while on the left, Evolution still retains its default font size. This current behavior helps me get Tablet-Mode touch working to an extent. In my wishful thoughts, I wish Evolution provided the flexibility to change toolbar icon sizes; that would make it much easier to hit the delete button when in Tablet Mode. A simple Tablet Mode button, like what SMPlayer has done, would keep users sticking with Evolution.
My wishful thought is that people write (free) software thinking more about usability across toolkits and desktop environments. Otherwise, the year of the Linux desktop, laptop, tablet, in my opinion, is yet to come. And please don't rip apart tools when porting them to newer versions of the toolkits. When you rip up a tool, you also rip up all the QA, bug reporting and testing that was done over the years. Here's an example of a tool (Goldendict), so well written: written in Qt, running under GNOME, and serving over the Chromium interface. In this whole exercise of getting a hybrid working setup, I also came to realize that there does not yet seem to be a standardized interface to determine the current operating mode of a running hybrid machine. From what we have explored so far, every product has its own way of doing it. Most hybrids come pre-installed and supported with Windows only, so their mode detection logic seems to be proprietary too. In case anyone is aware of a standard interface, please drop a note in the comments.

8 November 2014

Martin-Éric Racine: On the Joey debacle

Looking back, I cannot think of a single moment when Joey wouldn't have shown the utmost patience and courtesy towards anyone involved in Debian, even towards mere users filing sometimes senseless bug reports against his packages. From this perspective, I cannot help but venture that whatever chain of events led to Joey's decisions essentially means one thing: Debian must have seriously gotten off-course for someone who has been involved for so long to call it quits. As for the current situation at hand, while I admittedly haven't followed too closely who or what caused Joey's decision, I nonetheless cannot help but feel that whoever pushed Joey's buttons so hard as to make him decide to leave Debian ought to be the one(s) kicked out of Debian instead.

20 June 2013

Daniel Pocock: Wrong cloud: Victorian schools and Ultranet

wrong cloud
"The blood, sweat and tears that has gone into the Ultranet and the work teachers put in it's soul destroying. I have to face parents who took me on face value when I said: 'This is the best thing since sliced bread every school is going to be using it in the future."
Those are the words of a school principal at Alkira Secondary College in Victoria, Australia. His school is receiving a real-life lesson in the risks of cloud computing. Ultranet, the state-wide private cloud project for schools, is at risk of being cancelled in 10 days due to budget cuts. It is not clear whether the vendor, NEC, is obliged to release data back to the users if the original four-year contract is not extended. Commercial, confidential discussions It appears that the possibility of ending the contract was not supposed to be public knowledge. If the finance department had kept their intentions secret then it is possible that users would not even have had the opportunity to try and manually extract some of their data, as they are rushing to do now. Instead of getting on with the business of education itself, teachers and students are having to waste time trying to individually download student projects, photos, feedback and other metadata from the system in an ad-hoc manner. An opportunity for free and open source software This debacle may well be a huge opportunity for the free and open source software community to remind people that the value of retaining control over user data cannot be forgotten when evaluating outsourced, private cloud solutions that claim to be "free" or offer low costs to get started. There were apparently 18 schools who initially trialed the program and many more have gradually joined up. This would be the perfect opportunity for advocates of genuinely free software solutions to contact them and let them know they are not alone and that many of us have anticipated these problems for years. While central government administrators tend to be hooked on the idea of administering big contracts with 'trusted' vendors, these local school administrators and the communities around them are probably more receptive to the free software message than at any other time. List of schools using Ultranet Many of the schools using Ultranet can be found with Google, as it appears to be integrated into their web sites. Click here for a Google search that finds many of the schools and related technical pages, support documents and other material. A message to schools everywhere For schools facing this crisis or considering adopting such services in other countries, I would share this very simple advice: if you are offered a solution that doesn't give you complete control to conveniently extract all of your data in bulk whenever you want and use it in your own software on any computer, then it is not a good foundation to build on. The free software community has been actively working to provide truly free solutions for schools and I would strongly encourage schools to look at projects like Skolelinux/Debian-Edu and k12Linux that have been built by educators who also have a deep understanding of software and data freedom issues.

27 October 2012

Riku Voipio: My 5 eurocents on the Raspberry Pi OpenGL driver debacle

The Linux community is rather unenthusiastic about the open source shim for the Raspberry Pi GPU. I think the backlash is a bit unfair, even if the marketing from Raspberry was pompous: it is still a step in the right direction. More Raspberry Pi hardware enablement code is open source than before. A step in the wrong direction is "locking firmware", where loadable firmware is made read-only by storing it in a ROM. It has been planned by the GTA04 phone project, and the Raspberry foundation is also considering making a locked GPU ROM device. This is madness that needs to stop. It may fulfill the letter of the FSF guidelines, but it is firmly against the spirit. It does not increase freedom: it takes away the vendor's ability to provide bugfixes to the firmware, and the community's possibility of reverse engineering the firmware.

28 November 2010

Christian Perrier: [life] a pique...

Yesterday night, we were at the Stade de France to witness the debacle of the French national rugby team against Australia (16-59). 7 tries to 1. For someone who loves rugby and knows the high level of players we have in France (the Top 14 is a collection of great games every week), it hurts. Maybe, next time, there will be a few more Toulouse players on the field and a few less people who have nothing to do there (should I name a few? Andreu, Huget, Porical...). The only good point of that game was (as usual) the scrum's first row (at least for an hour... then even there we got crushed by the Australians). The French winter was even here to help, but apparently the yellow/green folks didn't really care. Australia has a really impressive team, particularly their back line. We'll be having a hard time at the RWC in New Zealand next year. Big round of applause at the end of the game for that team. For our team: a pique.....fort!

26 January 2009

Andrew Pollock: [life] Too much deja vu

Seriously, every time Sarah has surgery, it all goes pear shaped. The bottom (last time it was towards the top) of Sarah's incision from her recent wire removal surgery was looking a bit red, swollen and hard on Friday, so she went in to the clinic and got them to take a look at it. They decided that a knot in the sutures was trying to work its way out, and causing inflammation or something, so just like last time, they decided to reopen the incision a bit. They did that, and cut the knot out, and rather than packing with gauze, they just taped the small opening closed with some steristrips and covered it with gauze and sent her on her way. Somewhat fortunately (for me) Sarah went into the clinic during the day while I was at work, so I missed participating in the fun and games. They didn't use any local anaesthetic, but Sarah said it didn't hurt too much. That night, though, she was in a reasonable amount of pain, again, similar to last time, except it was a band across her upper abdomen, below her breasts, instead of chest pain. Mindful of the staph infection disaster from last time, we trotted off to the ER on Saturday afternoon, hoping to nip this in the bud. They did some blood work, which came back clear, and gave her some IV pain medication, which didn't seem to do much, and were generally scratching their heads as to the cause of the pain. Fortunately, one of the surgeons involved with Sarah's wire removal from 2.5 weeks previously happened to be on call, so he got called in. He had a bit of a poke around the opening with a long cotton bud, and declared that it was possibly infected, because there was a bit of cloudy fluid discharged when he poked it in. Oh, and the cotton bud disappeared a good centimetre or more under the skin further up the incision, which indicated an abscess was forming. So, for the second time, I got to watch my wife being cut open with a scalpel. I think the local anaesthetic injections were the most painful part for her. He opened the incision up a bit further, and packed it with some Betadine-soaked gauze, and we got sent home with the same instructions as last time this happened: I had to repack the wound with saline-soaked gauze two to three times a day and let it heal from the bottom up. She was also prescribed some antibiotics. She's got about a 2cm incision just below her breasts. It looks a lot deeper than the hole from last year's incision. So we got home about 6pm last night, and went to bed around 11pm. Sarah lasted a couple of hours before she woke up in even more pain than before. She called up the hospital and had the same surgeon paged, and he advised her to come in to the ER again. So at I think about 2am, we headed back. They took some more blood, did a CT scan, which came back clear, and decided to admit her. They're giving her IV antibiotics and pain medication. Initially she didn't want to have any more IV pain medication because it didn't seem to be working, and it tends to make her nauseous. She wasn't keen on dealing with throwing up with the amount of pain she was having. It hurt to move as it was. With a mixture of IV anti-nausea drugs and the IV pain medication, the pain has finally come under control though. She did have one bout of vomiting this evening, which didn't seem to cause her too much discomfort, and she's getting in and out of bed more comfortably. They're going to keep her in overnight.
She has a post-operative followup tomorrow in the clinic, so I expect they'll keep her in the hospital until at least then, depending on how things go. Her cardiologist, intimately familiar with last year's debacle, dropped in this afternoon. He's wondering, given how this has happened twice now, if Sarah has some sort of allergy to the type of absorbable suture they've used. Apparently there are different types of sutures, so if she ever has any other surgery, we'll have to get them to try a different type. It'll be interesting to see how she goes tonight, as the pain seems to get worse overnight. She hasn't had any IV pain medication since about 1pm, and she wanted to see how she goes without it, because if she gets discharged tomorrow, she'll have to rely on oral stuff anyway. I think it definitely gets harder for her to manage her pain when the pain is causing her to lose sleep. She can only deal with having a couple of nights' worth of bad sleep before it all gets too much. So I'm really hoping she gets a good night's sleep tonight. Not that hospitals are renowned for that...

21 January 2009

Martin-Éric Racine: Uninformed shoppers blaming Ubuntu: a brief TODO list for Canonical

My own perception of the debacle about the end-user who decided to cancel her college enrollment because she could not get Microsoft products to install on Ubuntu is threefold: Personally, I think that Canonical needs to hire an individual who understands the above three aspects and, most of all, how to remedy them, as its next OEM Channel Manager, if they truly want to increase Ubuntu's market penetration.

13 December 2008

David Welton: Startups and Work: Europe vs the US

Michael Arrington of Techcrunch comments on the US vs Europe in terms of startups: Joie De Vivre: The Europeans Are Out To Lunch It's a pretty rough and coarse-grained view of things, but there's a grain of truth there. I've written about this a little bit before. It's something that I feel qualified to talk about, having lived and worked in both the US and Europe (Italy and Austria to be precise), and in that time, generally gravitated towards startups. Here's my quick take on a list of what's good and bad about each. Please note that these are of course not true for everyone, that things are changing, but are still things I think are generally true, even when I could think of several counterexamples for some of them myself. Also, it's very important to keep in mind how diverse "Europe" is, so most of what I write is really about Italy, and to a much smaller degree, Austria. I'd be very curious to hear your own experiences in the comments. Europe Bad
  • Less of a startup culture and mentality. It's more typical to get a "job for life" and hang on to it for all you're worth. Many Italians are tremendously creative, industrious, inventive people, but are going to find it more difficult to express that in some form of business.
  • The side effects of this mean that there are fewer people to talk with, and network with, fewer potential employees willing to risk a startup, and so on. For instance, people are at times more suspicious of a new company - both clients and suppliers.
  • As I mentioned in my other article, it takes a lot more money to get started in many European countries - something like 10,000 Euro in Italy. Other places like the UK are cheaper, and apparently Germany is introducing some legislation to ease the burden on new companies. I really hope this changes in Europe because it's such an easy change to make: don't extract money from companies until they're making it.
  • More bureaucracy. I think that higher taxes, once you are profitable, might be worth paying for the social system you get in Europe, but the cost of sorting out paperwork falls inordinately on smaller, newer companies. Big, established firms can hire people to deal with all the rules and regulations, and probably have contacts in the government that can help them out in some cases. Smaller firms are the ones whose time is really going to be wasted running around to different offices trying to figure out what they have to do.
  • Smaller, fragmented markets. Localization is not a lot of fun in some ways, and trying to translate everything into all the languages of the European Union is a huge undertaking. In the US, you get a huge market with just English and the US Dollar. Even beyond language issues, the culture changes less in the US from place to place, meaning that you have a more homogeneous target.
  • Lack of acceptance of failure, both culturally and institutionally. If you go bankrupt in Italy, it's a very serious problem. Apparently (although this is second hand, I'm not 100% sure of it - maybe someone can confirm whether it's actually true?), you can even lose the right to vote. Plenty of people in the US try several times before they get it right.
Good
  • Even if your undertaking fails, you still have health care. Likewise, there are other bits and pieces of social support (that change from country to country) that mean you're probably not going to land quite so hard on your rear if things go wrong.
  • Lots of smart, educated people. I never lacked for plenty of smart people to talk shop with in Padova, and didn't miss the California bay area at all from that point of view. Open source is really big in Europe - perhaps, in part, because for many people it's a better avenue for their talents than creating a business, when doing so is difficult and less common.
  • Work/life balance. Sorry Mike, but the amount of people who are truly going to strike it rich is pretty small. If they want to work hard, great, but it's nice to have some other options, in terms of good, lasting friendships (rather than everything revolving around work), knowing plenty of people from outside your field, people living in and belonging to a community (how many people spend their whole lives in the bay area?). When I moved from San Francisco to Padova in 2000, I went from a world gone mad with money and the dot com craze to one where there were rich and poor, families, young people, old people, and many people who were not working for some dot com. That's not to say that people don't work quite hard in Italy - two hour lunches are a thing of the past for most anyone I know.
  • Smaller fragmented markets can be an advantage, too. By the time some valley-based startup finally gets around to noticing that languages other than English exist, it's possible to capture a smaller market. Ok, so you won't be the next Google that way, but there's good money to be had in doing so.
  • In some ways, the staid, established, don't rock the boat way of doing things in some industries may present big opportunities for outsiders to come in and pull the carpet out from under everyone. This is especially true of internet/web companies that can get started quickly and cheaply.
US Bad
  • If you're not careful, you can get completely sucked in to work - your life revolves around it, your friends are mostly work friends, and if something happens to your job, a lot of that goes poof. This is especially true in places like the bay area, where so many people are 'transients' - just there for a few years, without any real roots in the area. This is ok if you'll potentially make a big pile of money, but long term it's unhealthy.
  • Less of a sense of building for the long term. Personally, I don't want a "job for life", but I think there is some value in loyalty (the genuine sort, not the sort created by legally not being able to lose a job) between employer and employee that has been lost in the US. Being able to count on someone growing with your company, and as an employee, knowing that your company will do what it can to help you out even in tough times are things that capture some value. The US seems to be heading towards a "Coasian" world where everyone is a freelancer, and while that efficiency is hard to deny, I wonder what is lost in the process.
  • The competition is tough. Sure, it is in Europe too, but in Europe everyone may compete quite hard during the year, but still all take that one month vacation (although that is becoming a bit less common, especially amongst people my age), whereas in the US almost no one gets that kind of benefit - even if you wanted to take unpaid time off, people would look askance at that kind of behavior. That said, if you need to stay open in, say, August, in Italy, to compete, you are in big trouble, because for many businesses it's simply impossible because it's a chain reaction: all your suppliers and clients shut down, so you really have no choice.
Good
  • There are certainly bureaucratic obstacles in the US, but they are much more manageable, and don't hit smaller/newer companies quite so badly.
  • It's cheap to get started. As per my other article, the actual state filing fee in Oregon for an LLC is $55, and that's all you really need.
  • The culture is definitely there, especially in the right places like the Bay Area, but even in plenty of others.
  • You aren't bound to employees for life, and it's easy to find freelancers. I know I'm contradicting myself, but I think it's actually a complex issue, and there are definite advantages to not having people who expect or at least want to find that job for life.
  • A large target market. Go online in one state, and you can, for the most part, deal with customers all over the US, in one language.
  • Things turn around faster. I have more of a sense that when there are problems, they get fixed. Companies (the debacle of the automakers notwithstanding) are more often allowed to fail, or at least put in Chapter 11. Sometimes this means that when things are bad, they get really bad, but also turn around and get better sooner.
Conclusions An interesting example of all this from my own experience was Linuxcare. The company was, culturally, a Bay Area startup, with the headquarters in San Francisco, founded by very startup oriented guys (one of whom, Dave Sifry, went on to do technorati, and is currently working on yet another venture). However, most of the actual open source talent was in our satellite offices, in Australia, Italy and Canada, places where, perhaps, people were not so distracted by the prospect of "make money fast!!! hurry!!!" and so had time to work, learn, and create some really great open source code. To be honest, I find myself torn. Business-wise, I prefer the US. However, outside of that, there's a lot to be said for Europe. I also think that some of what's good about business in the US is coming to Europe, albeit slowly in some cases. People my age here can see what's going on elsewhere, and try and copy what they like. A lot of what's good about Europe, though, might be more difficult to import into the US, especially the livability of the cities. In any case, I can conclude that it's a complex, difficult topic best discussed over a glass of wine.

16 September 2008

Mike Hommey: Finally, some sense

The Firefox EULA debacle is over. While this is nice, especially because they retracted, there are several things at stake here. Update: there is a nice article on the EULA issue on Groklaw.

23 May 2008

Lucas Nussbaum: FOSSCAMP

Last week-end, I was in Prague for FOSSCAMP, a Canonical-sponsored event aiming at bringing together people from various Free Software projects. Such an event is a very good idea, especially after the OpenSSL debacle, where we saw how difficult it is to build good relationships with other Free Software projects. The event was organized in a rather interesting way: the attendees make up the schedule as the event happens, by adding sessions to the timetable using marker pens on a whiteboard (see picture, hi Jorge!). If we have a spare BOF room at Debconf, it would be great to do the same thing: I always find it frustrating that many important discussions at Debconf happen outside BOFs/lectures, and are so easy to miss, simply because you submit talk proposals months before Debconf, so you can’t know yet what will be the hot stuff when Debconf finally arrives :-) It was also interesting to see all the different ways people organized BOFs. It might be interesting to write a list of DOs and DON'Ts about BOFs (not that the sessions weren't of good quality!). Does anyone know if such a list already exists? And finally, as usual, it was great to meet all those people from this nice community again. If you still believe that some projects are fighting with each other, you need to attend such an event. And special thanks go to Ondřej Čertík for guiding me through a visit of Prague!

19 May 2008

Tollef Fog Heen: New backup system!

(This post is mostly as a reminder to myself on how I've set up my backup system. It should probably go on a wiki instead so I can keep it up to date.) After the recent OpenSSL debacle in Debian and Ubuntu, I found that all my backups were encrypted with something amounting to a well-known secret key. Ouch. I was not entirely happy with how my old backup system worked either (it was based on boxbackup). In particular, the on-disk format was opaque, the tools needed to access it were not particularly user-friendly and I had to run Yet Another CA for managing the keys for it. After looking around a little, I settled on rdup which is a tool very much written in the unix tradition of "do one thing and do it well". As it reads on the home page:
The only backup program that doesn't make backups!
(which is almost true). It keeps a list of information about which files have been backed up locally on the machine to be backed up, including some meta-information such as file size and permissions, so it can take a new backup if any of those change. For more details, read the web page and the source. rdup is more of a framework for making your own backup system than a complete system in its own right, so this post is really about how I have customised it. First, I want my backups to be encrypted, and rdup supports encryption (both GPG and mcrypt). I'm lazy, so I settled on what rdup-simple gives me, which is mcrypt. Key generation is easy enough: head -c 56 /dev/random > /root/backup-$(hostname).crypt.key and then a chmod 600 to avoid it being world-readable. In /root/.ssh/config, I put
Host backup-$hostname
Hostname $backupserver.err.no
User backup-$hostname
IdentityFile /root/.ssh/id_rsa_rdup
ProxyCommand pv -L 40k -q | nc %h %p
so as to make it fairly easy to move stuff around and to make it pick up the right identity. The last bit is a trick to rate-limit it so it doesn't saturate my DSL. pv has a wonderful -R switch which lets me change the arguments to an already-running pv, if I want to do that. I ran ssh-keygen -t rsa -f /root/.ssh/id_rsa_rdup to generate an SSH key. The public key got put into /home/backup-$hostname/.ssh/authorized_keys on the backup server, so the line reads like:
command="/usr/local/bin/rdup-ssh-wrapper",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3N
The /usr/local/bin/rdup-ssh-wrapper is a small perl wrapper which only allows the rdup commands and sanitises the command line somewhat. Since I don't want to make a backup of all bits on my machines, I have an exclude file, which lives in /root/rdup-exclude. It is just a list of regexes of files to ignore. To actually make a backup, I run something like for p in /etc /home /var; do rdup-simple -v -a -z -E /root/rdup-exclude -k /root/backup-$(hostname).crypt.key $p ssh://backup-$(hostname)/srv/backup/$(hostname)/$p ; done which then goes on for a while. It gives me nice structures with hard-linked files to avoid using more disk space than needed. I can then just have a small find(1) script that prunes old backups I no longer need.
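That prune step can be as simple as the following (a minimal sketch, run on the backup server; the directory layout under /srv/backup and the 30-day retention are assumptions, not necessarily what rdup-simple produces here):
# for each host directory, drop dated backup trees older than 30 days (assumed layout and retention)
for h in /srv/backup/*; do
  find "$h" -mindepth 1 -maxdepth 1 -type d -mtime +30 -print0 | xargs -0 -r rm -rf
done
# files hard-linked into newer backups keep their data; only blocks referenced nowhere else are freed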

17 May 2008

Josselin Mouette: Some lessons to learn

There are obviously some things we need to keep in mind if we don't want something like the OpenSSL debacle to happen again. It doesn't mean we need to throw stones, nor to rush into changing our processes without thinking. However, there are already some things that should be obvious but unfortunately are not.
  1. Shipping a giant diff.gz that contains all changes in one lump, putting security fixes, policy fixes, bug fixes, cosmetic changes and autotools files at the same level, is not something we should accept anymore. Improvements in the dpkg-source format are very welcome in this direction, but they are useless if maintainers don't use them (see the sketch after this list). Neither a VCS nor a build tool will be able to know which line of the changes is related to which bug. Only the maintainer can.
  2. Core packages should all have co-maintainers. This is pretty much stating the obvious, and is much easier said than done. The OpenSSL case is one of the best examples here: Kurt is not one of those who refuse help, but frankly, would you want to maintain that package? Having already maintained packages with messy code, upstreams not understanding at all the needs of a distributor, avalanches of security alerts and randomly-changing ABIs, I can tell you this is not fun the way hacking on a desktop environment or a device driver can be. The only sane reason to do it is that you need the package to work. The only visible result you get from your work is that programs are not randomly crashing.
    I have no magic recipe to propose so that more people help with such packages, and that's where we need to be really innovative. Cross-distribution teams, mandatory co-maintainership of a core package for each DD: these (and all the ideas I have not heard of) are the experiments we should start now.
  3. Patching bad code leads to unpredictable results. What maintainer of a complex package has not introduced a new bug while trying to fix another one? Even when a piece of code is maintained by uncooperative developers, is not commented, uses arcane variable names or is impossible to understand without having contributed 3 winning entries to the IOCCC, it needs to evolve. And in these cases, it is only a matter of time until such things happen.
    Don't get me wrong: I'm not trying to put the blame on upstream here. They have contributed very valuable code to the community and their work helped considerably in the spread of cryptography. It's just that their code is not enough for our needs. If we can't patch it safely (and I'm now convinced we can't), maybe we need to focus on alternatives and help them get adopted by crypto-related packages. The code in GnuTLS and NSS is not necessarily better, but most (if not all) patches Debian needs to apply to them are build and portability fixes.
  4. Unless Debian-specific, 1 patch = 1 bug in the upstream tracker. This should be obvious, but given the number of patches that are never forwarded, it doesn't seem so. Not only should you give upstream a chance to review the patch, but you need them to track it, and you must give them the chance to review it anytime someone else stumbles on a similar issue. If upstream does not have a bug tracker, they probably think their software has no bugs. Which means they are not trustworthy, and we go back to point 3.
  5. We need to give more priority to security. Issues in the security team now seem fixed for good and they have been doing awesome work. There isn't much left to do so that packages are all built with security-hardening features, but it still needs to be done. And there is much more to do so that we can provide, out of the box, a decent SELinux setup, or, if that turns out unrealistic, a decent system hardening setup using another framework. I know the SELinux zealots will jump on their high horses to explain that their framework is better, but the current situation, where it is impossible for the average system engineer to set up a Debian-based MAC system, is much worse than having a suboptimal setup that already works.
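To illustrate point 1, this is roughly what the split-out layout looks like with the 3.0 (quilt) source format that later materialized; the patch names below are made up, and the point is simply one logical change per patch, each traceable to a bug:
  debian/source/format          # contains the single line: 3.0 (quilt)
  debian/patches/series         # lists the patches below, applied in order
  debian/patches/0001-fix-CVE-entropy-seeding.patch
  debian/patches/0002-manpage-section-policy-fix.patch
  debian/patches/0003-debian-specific-default-config.patch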
All in all, this incident has a great impact on Debian's image. If we don't react accordingly, adapting our processes and our system to match what our users expect from us (and they expect the best), they will turn away from us, with very good reasons to do so. Update: It seems OpenSSL does have a bug tracker. Thanks Kurt for pointing me to it.

15 May 2008

Daniel Burrows: Worst Debian day ever.

Regarding the OpenSSL debacle, Julien Blache writes:
Worst Debian day ever since the 2003 compromise. And that was a BAD one.
I disagree. This is far, far worse than the 2003 compromise. The compromise was scary, but the key updates for users were straightforward and as far as we know, user security was never actually compromised. In contrast, every single user who uses Debian or Ubuntu for anything serious is now (Update: and has been for two years) vulnerable to attacks on their supposedly secure cryptography [0] unless they perform a labor-intensive and error-prone series of steps to regenerate all their cryptographic keys -- not to mention finding a secure way to distribute their public keys to everyone who needs them! [0]: Update: I should perhaps make it clear that I mean anyone who generated a key on such a system. But in a way this makes things worse: unless you're willing to regenerate every key in sight, you need to check each key manually, which means that there's a nontrivial chance that you'll overlook one and leave yourself vulnerable...
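For the SSH part of that cleanup, the distributions shipped a blacklist checker at the time. A hedged sketch, assuming the openssh-blacklist package and the ssh-vulnkey tool as documented back then (the exact options are from memory):
$ ssh-vulnkey              # check the current user's keys and authorized_keys against the blacklist
$ sudo ssh-vulnkey -a      # as root, check the host keys and every user's keys on the system
# anything reported as COMPROMISED must be regenerated, and the new public key redistributed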

Julien Blache: Obligatory loldebian post

Because a lolcat is worth a thousand jokes, here are 3 of them. Thanks to rominet for coming up with those :-)

14 May 2008

Julien Blache: Of course it's far worse. Did I tell otherwise?

Dear Daniel, It looks like you are referring to my post, though you got my name wrong so that wasn’t immediately obvious. Of course this is far worse than the 2003 compromise in terms of the direct, known and quantifiable impact it has on our users. I don’t think I stated otherwise, so I hardly see why your post starts with “I disagree”.

27 February 2008

Sergio Talens-Oliag: Tips & Tricks: plone, nginx and path rewriting

The problem On a couple of Debian Etch systems we have a Plone site that is published using a backport of the nginx web server. The Zope instance is running on the standard port and serves the Plone contents under the /plone path. Initially we were publishing the site to the external world as an https site served by nginx, using the following entry in the configuration:
  location /plone/ {
    proxy_pass http://plone:9673;
    include    /etc/nginx/proxy.conf;
  }
The proxy.conf contents are quite standard:
  # proxy.conf
  proxy_redirect                  off;
  proxy_set_header                Host $host;
  proxy_set_header                X-Real-IP $remote_addr;
  proxy_set_header                X-Forwarded-For $proxy_add_x_forwarded_for;
  client_max_body_size            0;
  client_body_buffer_size         128k;
  proxy_connect_timeout           90;
  proxy_send_timeout              90;
  proxy_read_timeout              90;
  proxy_buffer_size               4k;
  proxy_buffers                   4 32k;
  proxy_busy_buffers_size         64k;
  proxy_temp_file_write_size      64k;
With these settings we see the /plone contents using the same path that is used by the Zope instance, but after testing we decided to change the /plone path and serve the contents under the /web path. The Wrong Solution The first option I thought of was quite simple: rename Zope's plone object to web. Seems reasonable and simple for someone without Zope experience (I don't administer the internals of the Zope/Plone site), but now I know that it is a very big mistake, because renaming objects in Zope is not cheap, as it implies that the server has to modify all the contents of the renamed object, and the operation can take a very long time. In my ignorance I tried to rename the plone object using the Zope administrative interface and after a minute or so I cancelled the page load that was running in my browser, thinking that I had cancelled the rename operation. To make a long story short, the operation was still running and after several hours the folder was renamed (in fact I only noticed when the good solution broke, as I had already solved the problem using the next method), but something went wrong and part of the site functionality was broken... the final solution to the debacle has been to recover a backup of the Zope instance older than the rename operation and continue from that copy. The Right Solution (TM) It seems that Zope has a couple of systems to do Virtual Hosting, and the best option is the use of the product called Virtual Host Monster, a weird and confusing system (IMHO, of course) that does the job once the right configuration settings are in place. The best solution to our problem was to modify the requests done by the reverse proxy without touching anything on the Plone site (the original site already had a Virtual Host Monster object installed, and that was the only thing we would have needed to add). The nginx configuration for the new /web path is the following:
  location /web/ {
    proxy_pass http://plone:9673/plone/VirtualHostRoot/_vh_web/;
    include    /etc/nginx/proxy.conf;
  }
With this change, when the user asks for anything under the /web/ path, the Zope server gets the contents by traversing the /plone object and appending to it the elements that appear after the VirtualHostRoot component, ignoring components that start with the _vh_ prefix (the protocol and host name of the requests are not modified, as we did not touch that). Once the object is found, the server rewrites the URLs included in the HTML files using the path components that appear after the VirtualHostRoot one, including the suffix of the components that start with the _vh_ prefix. For example, when the Zope server receives a request for a URL like:
  http://plone:9673/plone/VirtualHostRoot/_vh_web/home
it publishes the content found on:
  http://plone:9673/plone/home
but the HTML files returned assume that their base URL is:
  http://plone:9673/web/home

16 January 2008

Andrew Pollock: [tech] Linux software RAID hates me

After the debacle last time I tried to grow the size of my existing RAID1 when I put new disks in daedalus, I thought this time I'd do my homework. I did some research, I found out the way I should have done it. I did a practice run on a USB key. I fully planned how I was going to do it:
mdadm /dev/md2 --fail /dev/sdb3
<delete /dev/sdb3, recreate at new full size>
reboot
mdadm /dev/md2 --add /dev/sdb3
<wait for sync>
mdadm /dev/md2 --fail /dev/sda3
<delete /dev/sda3, recreate at new full size>
reboot
mdadm /dev/md2 --add /dev/sda3
<wait for sync>
mdadm --grow /dev/md2
<wait for sync>
pvresize /dev/md2
Everything went as planned, until I went to grow the RAID1 volume. It still thought the underlying device was the same size. There was nothing to grow. So at this point, I decided to do something similar to what I did last time to get around the failing disk, and should have done last time anyway. I broke the mirror, created a new degraded RAID1 using the full size of the new partition on the half I pulled out of the mirror, and did a pvmove from the old non-full-sized degraded mirror to the new full-sized degraded mirror. All of that went swimmingly until the pvmove was around 50% complete, when the kernel decided to oops spectacularly. I had to power cycle daedalus to get it back under control, and even in single-user mode, without me doing anything, the kernel started oopsing again. Dammit. I had to boot into emergency-mode (insert standard gripe about Debian's single-user mode being far too non-singular here), then I could resume the pvmove without any further oopsing. After that completed, I was able to ditch the old non-full-sized degraded RAID1 device and resync the new one onto the old partition. There was still some minor filesystem corruption, more likely because I had everything mounted at the time of the crash. Yes, I still haven't learned not to do this kind of thing in multi-user mode. It seems every time I try to minimise the size and duration of an outage, it bites me in the arse. Even though I should have been able to move open logical volumes between physical volumes, the kernel oops seemed to be in the dm_mirror code. daedalus is running a fairly old kernel. The annoying thing is that getting some additional disk space on board was the dependency for doing a general upgrade of all of the software on it. Argh. Anyway, it's done. I hope not to have to go through this again. I just have to sit through a potentially nail-biting remote upgrade of Debian now, and I should be good for a couple more years hopefully.
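For the record, the degraded-mirror workaround described above looks roughly like this (a hedged sketch with placeholder device and volume group names, not the exact commands run on daedalus):
mdadm /dev/md2 --fail /dev/sdb3
mdadm /dev/md2 --remove /dev/sdb3              # break the mirror, freeing one half
# repartition sdb3 to the new full size, then build a degraded RAID1 on it
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb3 missing
pvcreate /dev/md3
vgextend vg_daedalus /dev/md3                  # vg_daedalus is a placeholder VG name
pvmove /dev/md2 /dev/md3                       # migrate the LVM extents to the larger PV
vgreduce vg_daedalus /dev/md2
pvremove /dev/md2
mdadm --stop /dev/md2
# repartition sda3 to the full size as well, then resync the new mirror onto it
mdadm /dev/md3 --add /dev/sda3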
