Search Results: "nhandler"

15 September 2020

Molly de Blanc: Actions, Inactions, and Consequences: Doctrine of Doing and Allowing W. Quinn

There are a lot of interesting and valid things to say about the philosophy and actual arguments of "Actions, Inactions, and Consequences: Doctrine of Doing and Allowing" by Warren Quinn. Unfortunately for me, none of them are things I feel particularly inspired by. I'm much more attracted to the many things implied in this paper. Among them is the role of social responsibility in making moral decisions. At various points in the text, Quinn makes brief comments about how we have roles that we need to fulfill for the sake of society. These roles carry with them responsibilities that may supersede our regular moral responsibilities. Examples Quinn gives include being a private lifeguard (responsible for the life of one particular person) and being a trolley driver (responsible for making sure the train doesn't kill anyone). This is part of what has led me to brush Quinn off as another classist. Still, I am interested in the question of whether social responsibilities are more important than moral ones, or whether there are times when this might be the case.

One of the things I maintain is that we cannot be the best versions of ourselves because we are not living in societies that value our best selves. We survive capitalism. We negotiate climate change. We make decisions to trade the ideal for the functional. For me, this frequently means I click through terms of service, agree to surveillance, and partake in the use and proliferation of oppressive technology. I also buy an iced coffee that comes in a single-use plastic cup; I shop at the store with questionable labor practices; I use Facebook. But also, I don't give money to panhandlers. I see suffering and I let it pass. I do not get involved or take action in many situations because I have a pass not to. These things make society work as it is, and they make me work within society. This is a self-perpetuating, mutually abusive, co-dependent relationship.
I must tell myself stories about how it is okay that I am preferring the status quo and buying into the system, because I need to do so to survive within it, and because other people rely on the system as it stands to survive; that is how they know to survive. Among other things, I am worried about the psychic damage this causes us. When we view ourselves as social actors rather than moral actors, we tell ourselves it is okay to take non-moral actions (or inactions); however, we carry within ourselves intuitions and feelings about what is right, just, and moral. We ignore these in order to act in our social roles. From the perspective of the individual, we're hurting ourselves and suffering for the sake of benefiting and perpetuating a caustic society. From the perspective of society, we are perpetuating something that is not just less than ideal, but actually not good, because it is based on allowing suffering.[1]

[1] This is for the sake of this text. I don't know if I actually feel that this is correct. My goal was to keep this to 500 words, so I am going to stop here.

26 August 2017

Nathan Handler: freenode #live

For those of you who might not be aware, freenode is an IRC network that caters to free and open source projects. We have had a goal for a number of years to hold freenode #live, an in-person conference where our staff, projects, FOSS supporters, and IRC enthusiasts could all get together in one place. Thanks to some very generous sponsorship, primarily by Private Internet Access, this conference is finally happening! The first ever freenode #live conference will take place October 28 - 29, 2017 in Bristol, United Kingdom. This conference will only be a success with support from the community. Luckily, there are many ways to help out.
  1. Register to attend. We are working on putting together a great lineup of speakers that you will not want to miss. Registering allows us to finalize many logistical details surrounding the event to make sure everything runs smoothly.
  2. Do you represent a FOSS group? We would love to have your group present at the event. There will be an exhibitor hall where you can advertise your organization and attract new users and contributors.
  3. Submit a talk. Our call for proposals is still open. New and experienced speakers are welcome. Feel free to contact 2017-team@freenode.live and we would be happy to work with you to come up with a talk idea or provide feedback.
  4. If you or your company are interested in helping to sponsor this event, please reach out to exhibit@freenode.live. We have opportunities for budgets of all sizes.
I look forward to hopefully seeing you at freenode #live!

9 October 2016

Nathan Handler: Ohio Linux Fest

This weekend, I traveled to Columbus, Ohio to attend Ohio Linux Fest. I departed San Francisco early on Thursday. It was interesting getting to experience the luxurious side of flying as I enjoyed a mimosa in the American Express Centurion lounge for the first time. I even happened to cross paths with Corey Quinn, who was on his way to DevOpsDays Boise. While connecting in Houston, I met up with the always awesome José Antonio Rey, who was to be my travel companion for this trip. The long day of travel took its toll on us, so we had a lazy Friday morning before checking in for the conference around lunch time. I was not that interested in the afternoon sessions, so I spent the majority of the first day helping out at the Ubuntu booth and catching up with friends and colleagues. The day ended with a nice Happy Hour sponsored by Oracle.

Saturday was the main day for the conference. Ethan Galstad, Founder and CEO of Nagios, started the day with a keynote about Becoming the Next Tech Entrepreneur. Next up was Elizabeth K. Joseph with A Tour of OpenStack Deployment Scenarios. While I've read plenty about OpenStack, I've never actually used it before. As a result, this demo and introduction was great to watch. It was entertaining to watch her log in to CirrOS with the default password of cubswin:), as the Chicago Cubs are currently playing the San Francisco Giants in the National League Divisional Series (and winning). Unfortunately, I was not able to win a copy of her new Common OpenStack Deployments book, but it was great getting to watch her sign copies for other attendees after all of the hard work that went into writing the book. For lunch, José, Elizabeth, and Svetlana Belkin all gathered together for an informal Ubuntu lunch. Finally, it was time for me to give my talk. This was the same talk I gave at FOSSCON, but this time, I had a significantly larger audience.
Practice definitely makes perfect, as my delivery was a lot better the second time giving this talk. Afterwards, I had a number of people come up to me to let me know that they really enjoyed the presentation. Pro tip: if you ever attend a talk, the speaker will really appreciate any feedback you send their way. Even if it is a simple "Thank You", it really means a lot. One of the people who came up to me after the talk was Unit193. We have known each other through Ubuntu for years, but there has never been an opportunity to meet in person. I am proud to be able to say with 99% confidence that he is not a robot, and is in fact a real person. Next up was a lesson about the /proc filesystem. While I've explored it a bit on my own before, I still learned a few tips and tricks about information that can be gained from the files in this magical directory. Following this was a talk about Leading When You're Not the Boss. It was even partially taught by a dummy (the speaker was a ventriloquist). The last regular talk of the day was one of the more interesting ones I attended. It was a talk by Patrick Shuff from Facebook about how they have built a load balancer that can handle a billion users. The slide deck was well-made with very clear diagrams. The speaker was also very knowledgeable and dealt with the plethora of questions he received. Prior to the closing keynote was a series of lightning talks. These served as a great means to get people laughing after a long day of talks. The closing keynote was given by father and daughter Joe and Lily Born about The Democratization of Invention. Both of them had very interesting stories, and Lily was quite impressive given her age. We skipped the Nagios After Party in favor of a more casual pizza dinner. Overall, it was a great conference, and I am very glad to have had the opportunity to attend.
A big thanks to Canonical and the Ubuntu Community for funding my travel through the Ubuntu Community Fund and to the Ohio Linux Fest staff for allowing me the opportunity to speak at such a great conference.

6 October 2016

Nathan Handler: FOSSCON

This post is long past due, but I figured it is better late than never. At the start of the year, I set a goal to get more involved with attending and speaking at conferences. Through work, I was able to attend the Southern California Linux Expo (SCALE) in Pasadena, CA in January. I also got to give a talk at O'Reilly's Open Source Convention (OSCON) in Austin, TX in May. However, I really wanted to give a talk about my experience contributing in the Ubuntu community. José Antonio Rey encouraged me to submit the talk to FOSSCON. While I've been aware of FOSSCON for years thanks to my involvement with the freenode IRC network (which has had a reference to FOSSCON in the /motd for years), I had never actually attended it before. I also wasn't quite sure how I would handle traveling from San Francisco, CA to Philadelphia, PA. Regardless, I decided to go ahead and apply. Fast forward a few weeks, and imagine my surprise when I woke up to an email saying that my talk proposal was accepted. People were actually interested in me and what I had to say. I immediately began researching flights. While they weren't crazy expensive, they were still more money than I was comfortable spending. Luckily, José had a solution to this problem as well; he suggested applying for funding through the Ubuntu Community Donations fund. While I've been an Ubuntu Member for over 8 years, I've never used this resource before. However, I was happy when I received a very quick approval. The conference itself was smaller than I was expecting. However, it was packed with lots of friendly and familiar faces of people I've interacted with online and in person over the years at various Open Source events. I started off the day by learning from José how to use Juju to quickly setup applications in the cloud.
While Juju has definitely come a long way over the last couple of years, and it appears to be quite easy to learn and use, it still appears to be lacking some of the features needed to take full control over how the underlying applications interact with each other. However, I look forward to continuing to watch it grow and mature. Next up, we had a lunch break. There was no catered lunch at this conference, so we decided to get some cheesesteak at Abner's (is any trip to Philadelphia complete without cheesesteak?). Following lunch, I took some time to make a few last minute changes to my presentation and rehearse a bit. Finally, it was time. I got up in front of the audience and gave my presentation. Overall, I was quite pleased. It was not perfect, but for the first time giving the talk, I thought it went pretty well. I will work hard to make it even better for next time. Following my talk was a series of brief lightning talks prior to the closing keynote. Another long time friend of mine, Elizabeth Krumbach Joseph, was giving the keynote about listening to the needs of your global open source community. While I have seen her speak on several other occasions, I really enjoyed this particular talk. It was full of great examples and anecdotes that were easy for the audience to relate to and start applying to their own communities. After the conference, a few of us went off and played tourist, paying the Liberty Bell a visit before concluding our trip in Philadelphia. Overall, I had a great time at FOSSCON. It was great being reunited with so many friends. A big thank you to José for his constant support and encouragement and to Canonical and the Ubuntu Community for helping to make it possible for me to attend this conference. Finally, thanks to the terrific FOSSCON staff for volunteering so much time to put on this great event.

3 November 2015

Nathan Handler: Ubuntu California 15.10 Release Party

On Thursday, October 22, 2015, Ubuntu 15.10 (Wily Werewolf) was released. To celebrate, some San Francisco members of the Ubuntu California LoCo Team held a small release party. Yelp was gracious enough to host the event and provide us with food and drinks. Canonical also sent us a box of swag for the event. Unfortunately, it did not arrive in time. Luckily, James Ouyang had some extra goodies from a previous event for us to hand out. Despite having a rather small turnout for the event, it was still a fun night. Several people borrowed the USB flash drives I had setup with Ubuntu, Kubuntu, and Xubuntu 15.10 in order to install Ubuntu on their machines. Other people were happy to play around with the new release in a virtual machine on my computer. Overall, it was a good night. Hopefully, we can put together an even better and larger release party for Ubuntu 16.04 LTS.

31 March 2013

Nathan Handler: Debian Mobile

For one of my classes, the final project is to work on a personal project for four weeks. Since I have never done mobile programming before, I decided I would use this project as a chance to teach myself the basics of Android development. Paul Tagliamonte recently blogged about a Google Summer of Code project idea for a Debian Android application. It was this post that motivated me to create a Debian Android application for my final project. We are only in the middle of week one, but my plan is to have at least a basic BTS and PTS interface done by the end of the project. Then, depending on a few things, I hope to turn it into a Google Summer of Code project or simply continue developing it on my own. As I mentioned, I am only a few days into the project. One of the first hurdles I encountered was interacting with the SOAP interfaces available for the PTS and BTS. There are a few examples available on the wiki, but there are not any for Java. A Google search also failed to return any clear and simple results. As a result, I decided I would create a Python script that would accept requests from the Java Android application via a REST API and pass them on to the SOAP API before returning the result as JSON. Before I could finish writing the script, Paul mentioned that he had already written a similar program a few months ago. I immediately stopped working on my script and began preparing a patch for Paul's program to add support for some missing PTS SOAP functions as well as the entire BTS. After a few hours, I had the ability to query the PTS and BTS via a REST API from Java. While I normally prefer to use VIM for my programming, I decided to follow Google's recommendation and use Eclipse. I was able to follow their guide to get the SDK setup and a simple demo application running. One of the few hurdles I encountered was when I wanted to make web requests. Because such requests can stall, Android requires you to perform them in a separate thread.
This only becomes apparent when you try to test your application on an actual device; the emulator will happily run the process in the main thread. Regardless, I was eventually able to modify the demo application to allow me to make requests to my Python REST server and then parse and display the result. I have some screenshots of the application running: the main screen, the list of API functions, a request with information entered, and the result. If you have any feature suggestions, questions, or comments, feel free to send me an email (nhandler@debian.org). The source code for the Android application is currently not available, but I plan to put it up in a git repository sometime in the near future. The source code for the Python script that provides a REST API to access the PTS and BTS SOAP interface is available on GitHub.
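The REST-to-SOAP bridge idea described above can be sketched with the standard library alone. To be clear about assumptions: the `/service/method/argument` URL scheme and the `query_soap()` stub below are hypothetical stand-ins; the real script forwarded requests to the Debian PTS/BTS SOAP interfaces through a SOAP client library.

```python
# Minimal sketch of a REST proxy in front of a SOAP API, stdlib only.
# query_soap() is a placeholder for the real PTS/BTS SOAP call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def query_soap(service, method, arg):
    # A real implementation would invoke the SOAP API here and return
    # its result as a plain Python structure.
    return {"service": service, "method": method, "arg": arg}

class RestProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /bts/get_status/123456
        parts = self.path.strip("/").split("/")
        if len(parts) != 3:
            self.send_error(404, "expected /service/method/argument")
            return
        body = json.dumps(query_soap(*parts)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), RestProxy).serve_forever()
```

With a bridge like this, the Java side only needs an HTTP GET plus a JSON parser, and (as noted above) that request still has to run off the main thread on a real device.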

26 December 2012

Gregor Herrmann: goodbye nanoblogger, hello chronicle

after having used nanoblogger since august 2005, I decided to switch my blog engine. the reason was mainly that nanoblogger gets terribly slow with more than a handful of entries. inspired by nhandler, I took another look at chronicle, which had caught my interest before already. it's simple, produces static html pages from text files that I can keep in git, & it's written in perl :) let's see how many years chronicle will accompany me

16 November 2012

Paul Tagliamonte: pathfinding debian's social connections

As a followup to my toy over at debtree, I've taken some suggestions from my buddy Arno and found some time in the hotel during a visit for work up to Canada with the Open North folks hacking on open13. The hack is called debfriends (I need to purge personal details from the source tree before I can push it to VCS, since it uses the nm.d.o dump. Sorry, I should have been better about that.) I've posted a cute front-end on my staging machine to play with over at graph.lucifer.pault.ag. There's also an API, but it has all the data in the NM dump exposed. If you're a DD who will treat this data right, I can add a key (it's not secure, like, at all). I'm going to publish the source ASAP so that others can stand up an instance if they'd like. It's a cute toy, though! In the tradition of using nhandler for my examples, here's an example path from pabs to Nathan: pabs > nhandler. Also, we're a shockingly well connected bunch. Well done :) Feel free to play with it!

12 November 2012

Stefano Zacchiroli: Debian newcomer experience survey

In recent times we have worked quite a bit to improve the NM process, i.e. the process newcomers go through to become members of the Debian Project. As it happens, I've just read Nathan's recent post on his NM experience, and I think it is a perfect example of the joining experience we are trying to offer to all newcomers. But examples, be they positive or negative, are only anecdotal. To evaluate a process one needs actual data and someone analyzing them, ideally with a scientific approach. This is why I'm happy to host below a guest blog post by Kevin Carillo, who is doing a pretty thorough scientific study about how newcomers join a wide range of Free Software projects, including KDE, OpenSUSE, GNOME, and Debian, of course! TL;DR: if you started contributing to Debian after January 2010, there's a survey for you; participating will help us improve the NM process even further. Kevin's guest blog post follows.
Newcomer experience in Debian and other FOSS communities - Survey

My name is Kevin Carillo. I am a PhD student currently living in Wellington (New Zealand) and I am doing some research on Free/Open Source Software communities. If you have started contributing to the Debian project after January 2010 (within approximately the last 3 years), I would like to kindly request your help. I am interested in hearing from people who are either technical or non-technical contributors, and who have had either positive or negative newcomer experiences. The purpose of the research is to work out how newcomers to a FOSS community become valued sustainable contributors. The survey is online and will be available until Tuesday, 27 November, 2012.

Inspiration from Debian New Member

Debian is a successful community that keeps attracting new contributors and that relies on a very unique way to handle the integration of new contributors: the New Member process. The idea behind the NM process is that it is a filtering procedure that retains only the individuals who have the potential to become valued sustainable contributors in Debian. Within Debian, there is a lot of enthusiasm and pride around the NM process, as it seems to be functioning pretty well. But the question is: is this really enough to ensure that Debian remains a healthy and growing community? How does it compare to the way newcomers are integrated in other large projects such as KDE or GNOME, or in other non-Linux-related communities such as Mozilla? I have to admit that the Debian NM process has been among the main sources of inspiration that made me embark on this research project. I have been consistently impressed when talking to people who had gone through the process, as all of them came out of it with a real passion for the project and love for its community. When reflecting on the reasons why the NM process succeeds, I have a feeling it is an instance of ritualized socialization.
In other words, barriers and initiation rituals that require some effort from newcomers generate members with higher commitment and a stronger sense of identification with the Debian community.

What do newcomers really experience?

The main assumption that motivated this project is that attracting new members has become crucial for a large majority of FOSS communities, but this is not a sufficient condition to ensure the success and prosperity of a project. A proportion of a community's newcomers must contribute to the well-being and growth of the community. Keeping all that in mind, FOSS projects thus have to do a good job at "socializing" their newcomers and turning them into 'good' contributors. Doing a good job here means that FOSS projects shall ensure that they help generate citizenship-like behaviors from newcomers by designing appropriate newcomer programs and procedures. FOSS communities rely on a wide array of initiatives to facilitate the integration of newcomers, but it seems like the other side of the coin is less understood: what do newcomers really experience? And how does this influence their contributions and actions within a project?

How is this study going to help Debian?

The data will help gain insights about the experience of newcomers within the Debian community. In addition, it will allow us to understand how to design effective newcomer initiatives to ensure that Debian will remain a successful and healthy community. The dataset will be released under a share-alike ODbL license so that Debian contributors can extract as much value as possible from the data. Since this survey also involves other large FOSS projects such as Mozilla, KDE, GNOME, Ubuntu, Gentoo, OpenSUSE, and NetBSD, it will also be possible to compare practices across projects in order to identify what works and what does not when facilitating the integration of newcomers.

About the survey

This survey is anonymous.
The raw dataset of survey responses will be released under the ODbL. Since all the questions but one are optional, you are free to control the amount of information you give away about yourself. I expect the survey to take around 20 minutes of your time. If you know members of the Debian community who you think would be interested in completing it, please do not hesitate to let them know about this research. I will post news about my progress with this research, and its results, on my blog. Don't hesitate to contact me. --- Kevin Carillo

11 November 2012

Nathan Handler: Debian Developer

Today, I officially got approved by the Debian Account Managers as a Debian Developer (still waiting on keyring-maint and DSA). Over the years, I have seen many people complain about the New Member Process. The most common complaint was with regards to the (usually) long amount of time the process can take to complete. I am writing this blog post to provide one more perspective on this process. Hopefully, it will prove useful to people considering starting the New Member Process. The most difficult part about the New Member Process for me had to do with getting my GPG key signed by Debian Developers. I have not been able to attend any large conferences, which are great places to get your key signed. I also have not been able to meet up with the few Debian Developers living in/around Chicago. As a result, I was forced to patiently wait to start the NM process. This waiting period lasted a couple of years. It wasn't until this October, at the Association for Computing Machinery at Urbana-Champaign's Reflections Projections Conference, that this changed. Stefano Zacchiroli was present to give a talk about Debian. Asheesh Laroia was also present to lead an OpenHatch Workshop about contributing to open source projects. Both of these Developers were more than willing to sign my key when I asked. If you look at my key, you will see that these signatures were made on October 7 and October 9, 2012. With the signatures out of the way, the next step in the process was to actually apply. Since I did not already have an account in the system, I had to send an email to the Front Desk and have them enter my information into the system. Details on this step, along with a sample email are available here. Once I was in the system, the next step was to get some Debian Developers to serve as my advocates. Advocates should be Debian Developers you have worked closely with, and usually include your sponsor(s).
If these people believe you are ready to become a Debian Developer, they write a message describing the work you have been doing with them and why they feel you are ready. Paul Tagliamonte had helped review and sponsor a number of my uploads. I had been working with him for a number of years, and he really encouraged and helped me to reach this milestone. He served as my first advocate. Gregor Herrmann is responsible for getting me started in contributing to Debian. When I first tried to get involved, I had a hard time finding sponsors for my uploads and bugs to work on. Eventually, I discovered the Debian Perl Group. This team collectively maintains most of the Perl modules that are included in the Debian repositories. Gregor and the other Debian Developers on the team were really good about reviewing and sponsoring uploads in a very timely manner. This allowed me to learn quickly and make a number of contributions to Debian. He served as my second advocate. With my advocacy messages taken care of, the next step in the process was for the Front Desk to assign me an Application Manager and for the Application Manager to accept the appointment. Thijs Kinkhorst was appointed as my Application Manager. He also agreed to take on this task. For those of you who might not know, the Application Manager is in charge of asking the applicant questions, collecting information, and ultimately making a recommendation to the Debian Account Managers about whether or not they should accept the applicant as a Developer. They can go about this in a variety of ways, but most choose to utilize a set of template questions that are adjusted slightly on a per-applicant basis. Remember that period of waiting to get my GPG key signed? I had used that time to go through and prepare answers to most of the template questions. This served two purposes. First, it allowed me to prove to myself that I had the knowledge to become a Debian Developer.
Second, it helped to greatly speed up the New Member process once I actually applied. There were some questions that were added/removed/modified, but by answering the template questions beforehand, I had become quite familiar with documents such as the Debian Policy and the Debian Developer's Reference. These documents are the basis for almost all questions that are asked. After several rounds of questions, testing my knowledge of Debian's philosophy and procedures as well as my technical knowledge and skills, some of my uploads were reviewed. This is a pretty standard step. Be prepared to explain any changes you made (or chose not to make) in your uploads. If you have any outstanding bugs or issues with your packages, you might also be asked to resolve them. Eventually, once your Application Manager has collected enough information to ensure you are capable of becoming a Debian Developer, they will send their recommendation and a brief biography about you to the debian-newmaint mailing list and forward all information and emails from you to the Debian Account Managers (after the Front Desk checks and confirms that all of the important information is present). The Debian Account Managers have the actual authority to approve new Debian Developers. They will review all information sent to them and reach their own decision. If they approve your application, they will submit requests for your GPG key to be added to the Debian Keyring and for an account to be created for you in the Debian systems. At this point, the New Member process is complete. For me, it took exactly 1 month from the time I officially applied to become a Debian Developer until the time of my application being approved by the Debian Account Managers. Hopefully, it will not be long until my GPG key is added to the keyring and my account is created. I feel the entire process went by very quickly and was pain-free.
Hopefully, this blog post will help to encourage more people to apply to become Debian Developers.

Nathan Handler: Introducing nmbot

While going through the NM Process, I spent a lot of time on https://nm.debian.org. At the bottom of the page, there is a TODO list of some new features they would like to implement. One item on the list jumped out at me as something I was capable of completing: IRC bot to update stats in the #debian-newmaint channel topic. I immediately reached out to Enrico Zini who was very supportive of the idea. He also explained how he wanted to expand on that idea and have the bot send updates to the channel whenever the progress of an applicant changes. Thanks to Paul Tagliamonte, I was able to get my hands on a copy of the nm.debian.org database (with private information replaced with dummy data). This allowed me to create some code to parse the database and create counts for the number of people waiting in the various queues. I also created a list of events that occurred in the last n days. Enrico then took this code and modified it to integrate it into the website. You can specify the number of days to show events for, and even have the information produced in JSON format. This information is generated upon requesting the page, so it is always up-to-date. It took a couple of rounds of revisions to ensure that the website was exposing all of the necessary information in the JSON export. At this stage, I converted the code to be an IRC bot. Based on prior experience, I decided to implement this as an irssi script. The bot is currently running as nmbot in #debian-newmaint on OFTC. Every minute, it fetches the JSON statistics to see if any new events have occurred. If they have, it updates the topic and sends announcements as necessary. While the bot is running, there are still a few more things on its TODO list. First, we need to move it to a stable and permanent home. Running it off of my personal laptop is fine for testing, but it is not a long term solution. Luckily, Joerg Jaspert (Ganneff) has graciously offered to host the bot.
He also made the suggestion of converting the bot to a Supybot plugin so that it could be integrated into the existing ftpmaster bot (dak). The bot's code is currently pretty simple, so I do not expect too much difficulty in converting it to Python/supybot. One last item on the list is something that Enrico is working on implementing. He is going to have the website generate static versions of the JSON output whenever an applicant's progress changes. The bot could then fetch this file, which would reduce the number of times the site needs to generate the JSON. The code for the bot is available in a public git repository, and feedback/suggestions are always appreciated.
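The bot's core poll-and-diff logic can be sketched in a few lines. As a caveat, the `uid` event key and the queue names below are illustrative assumptions, not the actual nm.debian.org JSON schema; they only show the shape of the approach.

```python
# Sketch of nmbot's polling logic. The "uid" key and queue names are
# hypothetical stand-ins for the real nm.debian.org JSON export fields.

def diff_events(previous, current):
    """Return the events in `current` that have not been announced yet."""
    seen = {event["uid"] for event in previous}
    return [event for event in current if event["uid"] not in seen]

def format_topic(counts):
    """Render per-queue applicant counts into a channel-topic string."""
    return " | ".join("%s: %d" % (name, counts[name]) for name in sorted(counts))

# Once a minute the bot would fetch the JSON export, announce
# diff_events(last_seen, fetched), set the topic from the queue counts,
# and remember the fetched events for the next round.
```

Keeping the diffing pure like this also makes it easy to carry over when porting the bot from irssi to a Supybot plugin, as suggested above.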

30 October 2012

Nathan Handler: Debian Screencast Challenge

Today in #debian-mentors, a brief discussion about screencasts came up. Some people felt that a series of short screencasts might be a nice addition to the existing Debian Developer documentation. After seeing the popularity of Ubuntu's various video series over the years, I tend to agree that video documentation can be a nice way to supplement traditional text-based documentation. No, this is not designed to replace the existing documentation; it is merely designed to provide an alternative method for people to learn the material. From past experience, I know that if I were to start making screencasts by myself, I would end up making one or two before getting bored and moving on to a new project. That is why I am posing a challenge to the Debian community: I will make one screencast for every screencast made by someone in the Debian community. These screencasts can and should be short (only a couple of minutes in length) and focus on accomplishing one small task or explaining a single concept related to Debian Development. I discovered a Debian Screencasts page on the Debian wiki that links to Ubuntu's guide to creating screencasts. While I have done some traditional documentation work in the past, I have never really tried doing screencasts. As a result, this will be a bit of a learning experience for me at first. If you are interested in taking me up on this challenge, simply send me an email (nhandler [at] ubuntu [dot] com) or poke me on IRC (nhandler on freenode and OFTC). This will allow us to coordinate and plan the screencasts. As I mentioned earlier, I have no experience creating screencasts, so if you are familiar with the Debian Development processes and think this challenge sounds interesting, feel free to contact me.

29 October 2012

Nathan Handler: Hello Chronicle!

Over the years, I have had many different blogs. I mainly use them to post updates regarding my work in the FOSS community, with most of these relating to Ubuntu/Debian. My most recent blog was a WordPress instance I ran on a VPS provided by a friend. This worked pretty well, but for various reasons, I no longer have access to that box. Earlier today, I started thinking about launching a new blog. The first problem I had to overcome was deciding where to host it. Many of the groups I am involved with provide members with a ~/public_html directory that can be used to host simple HTML+Javascript+CSS websites. However, most of these sites do not have the ability to use PHP, CGI, or access databases. I realized rather quickly that if I were to find a blogging platform that I could run from a ~/public_html directory on one of those machines, I would never again have the problem of not having a host for my blog. There is a near-infinite number of free web hosts that can handle simple HTML sites. One key feature of any blog is the ability for readers to subscribe to it. Usually, the blogging platform handles automatically updating the feed when a new post is published. However, my simple HTML blog would not be able to do that. Since I knew I did not want to manually update the feeds, I started looking for a way to generate the feed locally on my personal machine and then push it to the remote site that is hosting my blog. About this time, I started compiling a list of features I would need to make this blogging platform usable. After some searching, I found an application developed by Steve Kemp called Chronicle. Chronicle supports all of the features I described plus many more. It is also very quick and easy to set up. If you are running Debian, Chronicle is available from the repositories. Start by installing Chronicle.
Note that these steps should all be done on your personal machine, not the machine that is going to host your blog.
sudo apt-get install chronicle
There are a few other packages that you can install to extend the functionality of Chronicle
sudo apt-get install memcached libcache-memcached-perl libhtml-calendarmonthsimple-perl libtext-vimcolor-perl libtext-markdown-perl
Next, create a few folders to hold our blog. We will store our raw blog posts in ~/blog/input and have the generated HTML versions go in ~/blog/output.
mkdir -p ~/blog/{input,output}
Now, we need to configure some preferences. We will base our preferences on the global configuration file shipped with the package.
cp /etc/chroniclerc ~/.chroniclerc
Open up the new ~/.chroniclerc file in your favorite editor. It is pretty well commented, but I will still explain the changes I made. input is the path to where your blog posts can be found. These are the raw blog posts, not the generated HTML versions of the posts. In our example, this should be set to ~/blog/input. pattern tells Chronicle which files in the input directory are blog posts. We will use a .txt extension to denote these files. Therefore, set pattern to *.txt. output is the directory where Chronicle should store the generated HTML versions of the blog posts. This should be ~/blog/output. theme-dir is the directory that holds all of the themes. Chronicle ships with a couple of sample themes that you can use. Set this to /usr/share/chronicle/themes. theme is the name of the theme you want to use. We will just use the default theme, so set this to default. entry-count specifies how many blog posts should be displayed on the HTML index page for the blog. I like to keep this fairly small, so I set mine to 10. rss-count is similar to entry-count, but it specifies the number of blog posts to include in the RSS feed. I also set this to 10. format specifies the format used in the raw blog posts. I write mine in markdown, but multimarkdown, textile, and html are also supported. Since most ~/public_html directories do not support CGI, we will disable comments by setting no-comments to 1. suffix gets appended to the filenames of all of the generated blog posts. Set this to .html. url_prefix is the first part of the URL for the blog. If the index.html file for the blog is going to be available at http://example.com/~user/blog/ , set this to http://example.com/~user/blog/. blog_title is whatever you want your blog to be called. The title is displayed at the top of the site. Set it to something like Your Name's Blog. post-build is a very useful setting. It allows us to have Chronicle perform some action after it finishes running.
We will use this to copy our blog to the remote host's public_html directory. Set this field to something like scp -r output/* user@example.com:~/public_html/blog. You will probably want to set up SSH so that you do not need to enter a password each time. Finally, date-archive-path will allow you to get the nice year/month directories to sort your posts. Set this to 1. Your final ~/.chroniclerc should look something like this:
input = ~/blog/input
pattern = *.txt
output = ~/blog/output
theme-dir = /usr/share/chronicle/themes
theme = default
entry-count = 10
rss-count = 10
format = markdown
no-comments = 1
suffix = .html
url_prefix = http://example.com/~user/blog/
blog_title = Your Name's Blog
post-build = scp -r output/* user@example.com:~/public_html/blog
date-archive-path = 1
You are now ready to start writing your first blog post. Open up ~/blog/input/MyFirstBlogPost.txt in your favorite text editor. Modify it to look like this:
Title: My First Blog Post
Tags: MyTag
Date: October 29, 2012
This is my **first** blog post.
Notice the metadata stored at the top? This is required for every post. Chronicle uses it to figure out the post title, tags, and date. It has some logic to try and guess this information if missing, but it is probably safest if you specify it. The empty line between the metadata and the post content is also required. Remember, we configured Chronicle to treat these as Markdown files, so feel free to utilize any Markdown formatting you wish. Finally, run chronicle from a terminal. If all goes well, it should generate your blog and copy the files to the remote host. You can now go to http://example.com/~user/blog to view it. One final suggestion would be to store the entire blog in a git repository. This will allow you to utilize all of the features of a VCS, such as reverting any accidental changes or allowing multiple people to collaborate on a blog post at the same time.
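The header format above is simple enough to parse by hand. Here is a small Python sketch of the idea: a block of "Key: value" lines, a blank line, then the Markdown body. This is a simplified illustration of the format, not Chronicle's actual (Perl) parser.

```python
def parse_post(text):
    """Split a Chronicle-style post into a metadata dict and a body string.

    Assumes the metadata block is separated from the body by one blank
    line, as in the example above."""
    header, _, body = text.partition("\n\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip().lower()] = value.strip()
    return meta, body

post = """Title: My First Blog Post
Tags: MyTag
Date: October 29, 2012

This is my **first** blog post."""

meta, body = parse_post(post)
print(meta["title"])  # My First Blog Post
print(meta["tags"])   # MyTag
```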

23 August 2011

Nathan Handler: Google Summer of Code Report #5

This is my fifth and final report about dextools for the 2011 Google Summer of Code program. During the summer, I worked with Matt Zimmerman and Stefano Zacchiroli to create a replacement web dashboard for the Debian Derivatives Exchange (DEX). The dashboard displays a list of projects and their respective tasks and then allows users to easily update the status of these tasks. The dashboard also contains two graphs for tracking the progress of projects and providing instant recognition of all contributions to a project. At the start of the summer, I knew that I wanted to work on improving the DEX infrastructure. I had worked with the team on the ancient-merges project, and while a lot of good work was accomplished, the process for tracking our progress was a bit clunky and difficult to interact with. Anyone who read my initial proposal for the Summer of Code would probably agree that it was quite vague. I really did not have a solid plan for what I wanted to accomplish. This began to change after my initial phone call with my mentor, Matt Zimmerman. We decided that the first thing I would work on would be a basic dashboard. Our plan was to have all of the data stored on the Debian BTS. This would allow Debian Maintainers to benefit from DEX without needing to learn about DEX's tools and workflows. The dashboard would support multiple derivatives and multiple projects for each derivative. Each project would be made up of a list of tasks, where each task is linked to a BTS bug. I prepared a mockup and some initial code based on that plan. The dashboard was able to successfully display a list of BTS bugs and their information. It generated the lists by assuming that bugs would be usertagged with a user of debian-derivatives@lists.debian.org and a tag in the format of dex-<distribution>-<project>.
At this point, we started trying to make sure that the dashboard would work for things like the ancient-patches, python2.7, and the upcoming large-merges projects. It did not take long to determine that it would not work for all of these things. The ancient-patches project started with a simple list of patch names. While most of these patches eventually ended up as bugs on the BTS, they did not start this way. The dashboard would need to support specifying a list of task names. This meant that it would also need to store data itself and that not all data could be located on the BTS. This is when we first started to define what a task is. The dashboard tracks tasks. It does not directly track bugs or patches. A task is nothing more than a title and status with an optional assignee, note, and bug. This made it simple to convert a list of patch names or a list of bug numbers to a list of tasks. During the early stages of the project, the dashboard moved around a lot. We hoped to have it end up on dex.alioth.debian.org, but we were not sure whether Alioth would meet all of our needs. This was right around the time of the Alioth migration. Thanks to some tips, I figured out that I could use a simple cronjob on wagner to pull the git repository from vasks. This would allow the running instance of the dashboard to always be up-to-date. The Alioth admins have also been quite supportive. They have installed several additional packages for me that were necessary to keep the dashboard running. From the start, Matt felt it was important to have some public documentation about the project. This would allow us to point people to something other than my blog posts. I decided to put this information on the wiki. At one point, I also had a copy of the wiki page stored in the git repository. This file was updated via a cronjob, and I then committed and pushed it when I had code changes. I ultimately decided that the best approach would be to maintain the page on the wiki.
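The task model described in this report is small enough to sketch directly. Here is a minimal Python version; the field names mirror the description above (title, status, optional assignee, note, and bug), but the exact names and defaults are assumptions, not the dashboard's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """A dashboard task: title and status, plus optional extras."""
    title: str
    status: str = "incomplete"       # assumed default status
    assignee: Optional[str] = None
    note: Optional[str] = None
    bug: Optional[int] = None        # BTS bug number, when one exists

# Both patch names and BTS bug numbers convert naturally into tasks:
patches = ["fix-ftbfs.patch", "drop-old-depend.patch"]
tasks = [Task(title=p) for p in patches]
tasks.append(Task(title="#612345", bug=612345))
print(len(tasks), tasks[0].status)
```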
A basic README file is included in the git repository that simply links to the wiki page. On wagner, I have a cronjob running to pull a copy of the wiki page so that it can be accessed via the web. Since Matt was traveling/moving this summer, he arranged for Stefano Zacchiroli to fill in for him as my mentor temporarily. It was at this point in the summer that I began working on the first graph script. The goal was to have a script to parse the list of tasks and generate a graph showing the number of open tasks versus time. This would allow us to easily track our progress, estimate when a project will be complete, and detect periods of slowing down. We decided that while we liked the Ubuntu burndown charts, they were a bit more complex than we needed, so we did not reuse their code. A dashboard that can't be modified is not that useful. An early goal of this project was to allow users to update the dashboard via the web. This proved to be a bit more difficult than I initially thought. First, all of the HTML files that make up the dashboard are generated by a script. This means that if you modify one of the HTML files directly, all changes will be lost the next time the script runs. When you make a change on a web form and hit submit, you usually expect to see the changes the next time you visit the page. In order to accomplish this, I made the form processing script directly modify the HTML files. However, at this point, we did not have a way to locally store extended information about tasks, causing the Python script to delete all changes made via the web form. This issue was rectified fairly quickly. There is now a changes file that stores all information submitted via the web form. The Python script reads this file and applies the changes when it is generating the HTML files. A second problem concerned the text inputs that were being used in most of the cells on the table. By default, some fields would have text cut off if the string was too long.
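The changes-file mechanism might be sketched like this: the generator reads stored form submissions and overlays them on the task list before writing the HTML. The JSON-lines file format and the field names used here are assumptions for illustration; the report does not specify how the real changes file is structured.

```python
import json
import os
import tempfile

def apply_changes(tasks, changes_path):
    """Overlay stored web-form edits onto a task list.

    Assumes one JSON object per line, each with hypothetical keys
    "title" (which task), "field" (what changed), and "value"."""
    with open(changes_path) as f:
        for line in f:
            change = json.loads(line)
            for task in tasks:
                if task["title"] == change["title"]:
                    task[change["field"]] = change["value"]
    return tasks

# Demo with a throwaway changes file:
path = os.path.join(tempfile.mkdtemp(), "changes")
with open(path, "w") as f:
    f.write(json.dumps({"title": "fix-ftbfs", "field": "status",
                        "value": "complete"}) + "\n")
tasks = [{"title": "fix-ftbfs", "status": "incomplete"}]
print(apply_changes(tasks, path)[0]["status"])  # complete
```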
Text inputs have a size attribute that is meant to specify the number of visible characters. I attempted to set this attribute in a script to the length of the input's value. Strangely, the inputs ended up with a lot of extra whitespace following the text. This resulted in the table being stretched horizontally. After many hours of research and testing, I was unable to find a solution to this problem. We ultimately decided to simply remove the size attribute and accept that text will be cut off. My code got some early testing from Allison Randal and other people working on the dex-ubuntu-python2.7 project. They were a huge help in providing feedback and finding bugs in the code. While the dashboard was unable to meet all of their needs (due to being in the early stages of development), I hope that it at least helped to make it easier to track the project's status. Whenever a script is accepting input from a user, it is important to validate it. Although the dashboard uses a select box to limit the choices for the status, it is still trivial to submit an arbitrary status for a task. That is why I added some validation to the form processing script. It will ignore any unknown values, ensuring that all tasks have a valid status. The other fields are a bit more tricky. They were designed to be arbitrary text fields. As a result, almost anything can be entered. There is no way to tell if something is a title or a person. We thought of a few ways we could change this, but most of them involved locking down the dashboard and restricting changes. The old dashboard treated the fields as arbitrary text, so we decided to not include any validation in the new one. One popular feature request was the ability to link directly to a project. Early versions of the dashboard had one main dex.html page that used javascript to pull in the various projects. To get around this, I first tried using the query string to allow users to specify a distribution and project.
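The status whitelist is the simplest kind of validation to sketch. The three status names are the ones these reports mention ("complete", "in progress", "incomplete"); the function shape and the fallback behavior are assumptions for illustration.

```python
VALID_STATUSES = {"complete", "in progress", "incomplete"}

def validate_status(value, default="incomplete"):
    """Accept only known status values; silently fall back to a default
    for anything else, so a crafted form submission cannot inject an
    arbitrary status into the dashboard."""
    value = value.strip().lower()
    return value if value in VALID_STATUSES else default

print(validate_status("Complete"))            # complete
print(validate_status("<script>alert(1)"))    # incomplete
```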
While this worked, it resulted in long and ugly URLs. It took a while for me to be able to implement a cleaner solution. However, I eventually split each project into its own static HTML file. This meant that people could simply copy the URL from the address bar to link to a particular project. The main reason for doing this was that we felt the typical person would be interested in working on a specific project; most people won't be interested in navigating between the different projects. This change also allowed links that utilized the #graph anchor to function properly. Before, due to the way the page went about loading the project, trying to specify a specific project and #graph in the URL did not work properly. There is a plugin for jQuery that allows tables to be sorted by particular columns. For tables that just contain text, this works fine without any issues. However, once I started using select boxes and text inputs, things got a bit messed up. After some research, I finally figured out how to use addParser to instruct the tablesorter plugin in how to sort each column in the table. In some of the earlier versions, I used some zebra stripes to make the table easy to read. Thanks to a suggestion from Paul Wise (pabs), I got rid of this striping and modified the code to color the rows based on the task's status. Complete tasks are green, in progress tasks are yellow, and incomplete tasks are red. This makes it very easy to find tasks that still need work as well as measure the overall status of a project. Sometimes, a large task list can feel very intimidating. That is why I added the ability to hide completed tasks with the check of a box. Matt wanted to take this one step further; he wanted the ability to also hide tasks with a closed bug. This is why you will find two checkboxes at the top of the dashboard that allow you to customize which tasks are visible.
Eventually, I might add support for specifying this in the URL to allow groups to link to a particular view of the dashboard. In order to make it as easy as possible for new contributors to get involved with DEX projects, I wanted to have a way to document what a project is about and how to handle tasks. I decided to use the wiki for this. All of the per-project documentation lives at http://wiki.debian.org/DEX/<distribution>/<project>.
You can even take advantage of wiki markup to format the documentation. The script will parse the HTML files generated by the wiki, do some minor cleanup, and then display the documentation at the top of the page. My original plan was to use the ?action=raw output, but the lack of formatting proved too restrictive. My second plan was to try and use something like BeautifulSoup or a regex to filter non-whitelisted tags out of the wiki-generated HTML. This failed and proved to be unnecessary. The wiki does a pretty decent job of preventing malicious markup from ending up on the dashboard. There is a second graph that is displayed on the dashboard. This graph shows the number of tasks each person has completed. The graph is sorted so that the tallest bars are on the left, and the goal is to provide some immediate recognition of work done by contributors. Since we essentially allow anyone to go ahead and modify the dashboard, we knew it was important to have a method in place for dealing with abuse. Any malicious edits made on the wiki can easily be reverted. This idea of reverting to an earlier revision is what inspired me to use a second local git repository to store the projects/ directory. This directory stores all of the changes made via the web interface. Whenever the form is submitted or the script runs to update the list of tasks, any changes made to the projects/ directory are committed. This means that if a malicious user decides to delete everything on the dashboard, we can quickly and easily revert the changes. The revision history might also be interesting to analyze at a future date. When multiple people are collaborating on a project, an issue tracker can be quite useful. For some reason, we did not set one up until quite late into the project. Despite that, we are still using it to track known bugs and feature requests. The issue tracker will become even more useful once the dashboard starts to be utilized by more people.
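The data behind the contribution graph described above reduces to a per-assignee count of completed tasks, sorted tallest-first. A Python sketch, with the task dictionary keys as illustrative assumptions:

```python
from collections import Counter

def completed_by_assignee(tasks):
    """Count completed tasks per assignee, tallest bar first, mirroring
    the recognition graph described in the report. Tasks without an
    assignee are skipped."""
    counts = Counter(t["assignee"] for t in tasks
                     if t.get("status") == "complete" and t.get("assignee"))
    return counts.most_common()

tasks = [
    {"title": "a", "status": "complete", "assignee": "alice"},
    {"title": "b", "status": "complete", "assignee": "alice"},
    {"title": "c", "status": "complete", "assignee": "bob"},
    {"title": "d", "status": "incomplete", "assignee": "bob"},
]
print(completed_by_assignee(tasks))  # [('alice', 2), ('bob', 1)]
```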
I spent a fair bit of time working on having the dashboard immediately respond to changes. For example, if you change the status of a task, the row color will instantly update. The form will also be submitted in the background, eliminating the need to hit submit to save the change. I also use ajax in some new dialogs that allow the user to create new projects and/or tasks via the web. While these forms are submitted in the background via ajax, I am unable to have the changes show up on the dashboard immediately. The generation of the task list is a bit slow, so having the script run more often is not really a good idea. The next step for the dashboard is to really open it up for testing. We hope to start up a new DEX project shortly. This will allow us to see which features work and which need fixing. It will also help us find some of the bugs that are probably hiding in the code. Finally, if you are interested in helping out with DEX or dextools, my code is available in a git repository (http://anonscm.debian.org/gitweb/?p=dex/gsoc2011.git) on alioth. There is also the issue tracker (https://alioth.debian.org/tracker/index.php?group_id=100600&atid=413120) where you can report bugs and request new features. Finally, you can join the debian-derivatives mailing list, join #debian-derivatives on OFTC, or contact me directly via email or on IRC (nhandler). I have really enjoyed working on dextools this summer. I would like to thank Matt Zimmerman, Stefano Zacchiroli, and everyone else who helped with the project. I look forward to working with all of you more in the future.

1 August 2011

Nathan Handler: Google Summer of Code Report #4

Due to the midterm evaluation, this report will cover a span of 4 weeks. As a result, it will be quite a bit longer than my previous reports. One of the first things I did after my last report was create a FAQ for the project. At first, the FAQ was a plain text file living only in the git repository. Then, it moved to being maintained on the Debian wiki with a cronjob pulling it into my local repository for me to commit and push. Finally, the cronjob moved to wagner. The FAQ is not actually committed to the git repository, but it is available in the dextools instance running on wagner or on the wiki. There is also now a basic README file that is part of the git repository that simply points people to the FAQ on the wiki (so people who branch the repository can easily find the documentation). For those of you who have been reading my other reports and/or testing the dashboard, you have probably noticed the problems I was having getting the text boxes sized properly. After spending countless hours researching and testing different solutions, we finally decided to give up on this for the time being. Therefore, all of the size attributes have been removed from the text inputs, and the background color has been reset to its default value. This means that we will no longer experience the issue of very wide text boxes and a horizontally stretched table. However, it also means that long strings will get cut off when displayed in a text input. Since my initial mockup design at the start of the summer, I have had the rows of the table zebra-striped. Based on some feedback from Paul Wise (pabs), I changed this so that the row colors are based on the status of the task. Open tasks are red, closed tasks are green, and in progress tasks are yellow. This makes it much easier to quickly see the progress of the project and find tasks you are interested in. My original dashboard consisted of one main dex.html file. This file used some javascript to pull in each project's task list.
We felt that a much more common use case would involve a user only interested in one particular project. As a result, they would not be likely to use the select boxes to change projects. This ultimately led to me creating standalone dex.html dashboards for each project. These per-project dashboards were identical to the original dex.html, except they did not include the select boxes at the top for changing projects. Eventually, I decided to completely get rid of the original dex.html dashboard. That file now consists of a simple list of links to all of the projects sorted by distribution. Another advantage of this change is that it is much easier to link to a specific project. For example, you might now link to /projects/dex-ubuntu-ancient-patches/dex.html (you can get the URL by simply copying it from the address bar). Another advantage is that you can now link directly to a specific project's graphs (just append #graph). Another feature that I am still working on is per-project documentation. From the start, we wanted to make it easy for new contributors to get involved in DEX. One way we can do this is by having clear documentation about what the project is and how to deal with tasks. I decided that the easiest approach would be to have this documentation live on the wiki. This allows users to use the familiar wiki markup language to format their text, and it gives us all of the other benefits a wiki provides. Documentation will live at /DEX/<distribution>/<project>. The dashboard will then take care of pulling in this documentation and displaying it at the top of the proper page. For a while, I was thinking that it would be necessary to filter the HTML produced by the wiki to prevent malicious code from ending up on the dashboard. However, after playing around for a bit, I think that it should be pretty safe to just use the HTML as-is (if someone spams the wiki, we can always revert the change).
I am currently finishing up some of the code to handle this, and it should be live sometime this week (a partially working version is live on wagner). There are also two new checkboxes at the top of each dashboard. These allow users to hide completed tasks and/or tasks with completed bugs. This is useful for projects that are close to being complete, as it allows users to quickly view the few remaining tasks. This feature was requested by several people, and I am glad to finally be able to implement it for them. There is also a second graph on each page. This graph is a bar graph that shows the number of tasks each person has completed. It gets the people from the assignee column for the project, and it only counts tasks with a status of "Complete". The graph is updated whenever the getbugs.py script runs to update the task listing (currently every 10 minutes). The bars are sorted from tallest to shortest. The idea behind this graph is to recognize the people working on a project. The form processing script had not been working on wagner. This was mainly due to a permission issue. Thanks to a hint from Raphaël Hertzog, I was able to use setfacl to fix this. The form can now be submitted, and it should successfully save all changes made via the web form. This involved some updates to support the per-project dex.html files. The form processing script will also ignore any status that is not "complete", "in progress", or "incomplete", which prevents people from messing up the dashboard by submitting invalid information. Finally, the script will now try to use the HTTP_REFERER header to send the user back to the correct page. If this header is not set, the user will be sent back to the main dex.html index page. Sorting the table has been partially broken since I started including text inputs. I finally got this issue resolved, and I should now be able to sort by text inputs, select boxes, or anything else I might end up including in the table.
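The HTTP_REFERER fallback described above is a one-line decision. A minimal Python sketch, assuming a CGI/WSGI-style environ dict; the fallback path is illustrative:

```python
def redirect_target(environ, fallback="/dex.html"):
    """Send the user back to the referring page after a form submission,
    or to the main index page when HTTP_REFERER is missing or empty."""
    return environ.get("HTTP_REFERER") or fallback

print(redirect_target({"HTTP_REFERER": "/projects/dex-ubuntu-python2.7/dex.html"}))
print(redirect_target({}))  # /dex.html
```

Note that browsers are free to omit the Referer header, which is exactly why the fallback matters.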
You can test this by clicking on any header in the table. I will probably add a tiny image to make it more clear which header is being used to sort the table. By default, it is sorted based on the status and then the task name. The status column now has a select box for setting the status. Since the row coloring is done via javascript, the minute you change the status in the select box, the row color will change to reflect the new status. Keep in mind, for the time being, any change you make via the web form will be lost if you do not hit the Submit button at the top of the form. That covers most of the dashboard changes that have taken place since my last report. However, there were also some other changes that are worth noting. Matt Zimmerman, my original mentor, settled down from his traveling and resumed serving as my mentor for the project. I would like to thank Stefano Zacchiroli for all of his help this summer. I look forward to working with him more in the future. I have also started using the issue tracker feature on alioth for dextools. Right now, it is mainly being used to help me keep track of bugs and features that I need to deal with. I am hoping that as dextools starts to be used by more people that they will also help report issues on the tracker. This issue tracker is a good source of information for people interested in knowing what I will be working on in the future. I will briefly talk about some of the more important features that I will be working on. The first issue is one that I have already touched upon in this report. I want to sort out the per-project documentation. This is rather important, as it will most likely be utilized by all projects using the dashboard. It is also essential if we are to get new contributors involved in DEX projects. Second, I want to setup some method for backing up the projects/ directory on wagner. 
This directory is not maintained in the git repository, and it contains all of the information submitted via the web form and generated by the scripts. My current plan is to have this directory maintained in its own VCS. Every time getbugs.py runs or the form is submitted, a new commit will be made. This will allow us to easily revert any changes (spam) made to the dashboard. Since we do not foresee the dashboard being the target of much abuse, any user will be able to modify data via the web form. I also want to add the ability for users to create new projects and tasks via the web. I envision this as a separate form where they enter the distribution name, project name, a list of BTS bug numbers, and a list of any additional tasks. The script will then take care of applying the necessary usertag to the bugs and/or creating a tasks file in the projects directory for the non-BTS tasks. Doing this will make starting a new project trivial, and will eliminate the need for VCS access to create projects that do not utilize BTS bugs. Finally, I want to eliminate the need for the user to hit the Submit button to save their changes. Any change the user makes should be submitted and saved instantly using AJAX. This will also help eliminate some confusion that was caused by the row color instantly changing when the status is changed. Some users thought that the color change meant that the change was saved when it really was not. Matt asked me the other day what features I think need to get added before the dashboard is really ready for public use. While the features and issues I mentioned above will help make the dashboard easier to use, the dashboard is still perfectly usable without them. There are only a few more weeks left in the Google Summer of Code 2011. By that time, all of these features and bugs should be sorted out, and we should be able to properly announce the dashboard and make it available for use on all DEX projects.
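The commit-per-change plan for the projects/ directory can be sketched with plain git commands driven from Python. This is an illustration of the approach, not the actual dextools code; the function name and the skip-when-clean behavior are assumptions.

```python
import subprocess

def commit_projects_dir(path, message):
    """Stage and commit any pending changes in the projects/ directory.

    Called once per form submission or getbugs.py run, so every edit
    becomes its own commit and spam can be reverted with git revert.
    Returns True if a commit was made, False if the tree was clean."""
    subprocess.check_call(["git", "-C", path, "add", "-A"])
    status = subprocess.check_output(["git", "-C", path, "status", "--porcelain"])
    if status.strip():
        subprocess.check_call(["git", "-C", path, "commit", "-q", "-m", message])
        return True
    return False
```

Running this from a cronjob or CGI script keeps the history linear; the main caveat is making sure the web server's user has write access to the repository (the same kind of permission issue setfacl solved on wagner).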
That does not mean that the dashboard will be "done". Instead, it will probably result in many new bugs and feature requests being submitted. However, at that point, I will make a very strong effort to keep the dashboard stable and to prevent data loss. I am looking forward to finally finishing up this project and seeing it used by the DEX team. As always, if you are interested in helping out with DEX or dextools, my code is available in a git repository on alioth. There is also the new issue tracker where you can report bugs and request new features. Finally, you can join the debian-derivatives mailing list, join #debian-derivatives on OFTC, or contact me directly via email or on IRC (nhandler).

4 July 2011

Nathan Handler: Google Summer of Code Report #3

Since my last report, a lot of visible changes have been made. First, if you edit the value of any of the text inputs on the dashboard table, your changes will be saved when you hit the submit button. This feature still needs a bit of work, though. For example, I noticed during testing that if you submit the form multiple times, you will sometimes end up with near-duplicate entries in the changes file (the file that stores user-submitted changes to the table). I also need to implement some checks to ensure that the values submitted are actually valid (i.e. we probably only want to allow a few possible values for the status column).

Another big improvement is the addition of a graph at the bottom of each project's page. This graph tracks the number of open tasks versus time. The graph is re-generated once each day based on a progress file. This file simply lists a date and the number of open tasks on each line. While the graph does correctly show a line tracking the progress, it still has some issues to sort out. For example, the x-axis does not properly display a time increment. Work on the graph was blocked for a while because I did not have a fully-functioning dashboard. Now that the dashboard is functioning, I should be able to start fixing these graph issues.

Several people approached me to request the ability to link directly to a particular project. That way, users do not need to mess around with the select boxes. Above each table is a title. If you click on the title, the URL should update to link directly to the project. This is done with the distro and project query parameters. For example, a link to dex.html?distro=ubuntu&project=ancient-patches will take you directly to Ubuntu's ancient-patches project. Attempting to specify an invalid distro or project will cause the query parameters to be ignored. While looking at the table title, you will also notice a new (Graph) link.
This link will take you directly to the bottom of the page so you can view the graph. Currently, it is not possible to link directly to the graph anchor tag of a specific project; attempting to do so will cause the #graph portion of the URL to be ignored. However, if you click on the actual graph, it will load a full-size graph.png that you can link to.

One rather significant issue that has shown up is that the table is very wide. On most monitors, you will have to scroll horizontally to view all of the columns. This is because most of the cells are made up of text inputs (they don't have a background or border, but if you click on some text in the table, you should see the box) rather than plain text. The size parameter for a text input is meant to specify how many characters should be visible. However, while I have the size parameter set correctly for each input, the inputs are showing up much wider than the text. I will continue to play around with this, since excluding the size parameter results in some values getting cut off.

As mentioned in my last report, I have continued to work with Allison Randal to make the dashboard more useful. I have received feedback directly from her and indirectly from other people working on the python2.7 project. During my last phone call with my mentor, Stefano Zacchiroli, we agreed that it would be very helpful to spend some time documenting how to add a new project to the dashboard and how to utilize all of the dashboard's features. The goal is to not require people to go through me or any other individual to be able to use the dashboard. So while the Summer of Code is meant to focus on programming, I will probably spend some time on documentation, as a program is rather useless if nobody knows how to use it.

Another bug that I will be working on has to do with sorting the table. I had this feature working early on in the project.
However, the sorting broke when I started using text inputs instead of plain text. I have done some research, and I hope to get the sorting re-implemented in the near future.

One other feature that has been requested by several people is the ability to hide completed tasks. This has been on my todo list for a while. Matt Zimmerman wanted to take this a step further: he felt that the status of the task should be separate from the bug. This means that you can have an open task with a closed bug or a closed task with an open bug. The user could then have some more control over what data they are seeing. I will probably add some check boxes to control whether certain information is displayed. The user will also be able to specify this directly in the URL, which means a team could link directly to all open tasks in a project.

It is also worth noting that, thanks to some help from the fantastic Alioth admins, I now have a live instance of the dashboard running on Alioth. While the URL might change in the future, I hope to keep it running there.

If you are interested in helping out with DEX or dextools, my code is available in a git repository on alioth. You can also join the debian-derivatives mailing list, join #debian-derivatives on OFTC, or contact me directly via email or on IRC (nhandler).
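The separate task and bug statuses described above could be filtered along these lines (a sketch with hypothetical field names; the real dashboard may store tasks differently):

```python
def filter_tasks(tasks, show_closed_tasks=False, show_closed_bugs=True):
    """Filter dashboard tasks the way the planned check boxes might.

    Task status and bug status are kept separate, so an open task can
    reference a closed bug and vice versa. `tasks` is a list of dicts
    with hypothetical "task_status" and "bug_status" keys.
    """
    visible = []
    for task in tasks:
        # Hide finished tasks unless the user opted in to seeing them
        if not show_closed_tasks and task.get("task_status") == "closed":
            continue
        # Optionally hide tasks whose associated bug is already closed
        if not show_closed_bugs and task.get("bug_status") == "closed":
            continue
        visible.append(task)
    return visible
```

A team could then link to "all open tasks" simply by mapping URL query parameters onto these keyword arguments.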

20 June 2011

Nathan Handler: Google Summer of Code Report #2

A lot of work has taken place since my last report. First, since my original mentor, Matt Zimmerman, is going to be moving, he helped arrange for Stefano Zacchiroli to fill in for him temporarily. I have had a call with Stefano, and I am looking forward to working with him more over the next couple of weeks.

Shortly after my first report, we realized that having the dashboard only display bugs severely limited its usefulness. In our original design, we had it pulling a list of bugs with specific usertags from the BTS. This forced all information to be added to bugs, which we viewed as a good thing, since it made the information available to the Debian Maintainers without requiring them to familiarize themselves with DEX. The issue was that, whether looking back at the ancient-patches project or forward to the big merges project, a large portion of the work will be triaging and determining whether or not the changes are even applicable in Debian. If they aren't, there is no reason for a bug to be filed on the BTS.

As a result, I spent a fair amount of time re-writing most of the dashboard. It still supports finding bugs on the BTS that have been tagged with a specific DEX usertag, but it is now task-based rather than bug-based. A task can be a bug number, a package name, a patch name, or anything else; it is merely something that needs to be done for a project. Each task has a name, an assignee, a status, a note, and a bug associated with it.

There are two ways to define a list of tasks for a project. The first is to tag bugs on the BTS with a user of debian-derivatives@lists.debian.org and a tag of the form dex-<distribution>-<project> (i.e. dex-ubuntu-ancient-patches). The script will convert the bugs to tasks and automatically set all of the fields. The second method is to specify the list of tasks in a plain text file on the server the scripts are being run on. Eventually, this list will be able to be specified via a web interface.
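Splitting a usertag in the dex-<distribution>-<project> form back into its parts could be sketched like this (treating the first hyphen-separated component after the prefix as the distribution is an assumption, since project names such as ancient-patches contain hyphens themselves):

```python
def parse_dex_usertag(tag):
    """Split a DEX usertag into (distribution, project).

    Tags follow the dex-<distribution>-<project> convention, e.g.
    dex-ubuntu-ancient-patches. The distribution is assumed to be the
    single component right after the "dex-" prefix, with everything
    after it forming the project name.
    """
    if not tag.startswith("dex-"):
        raise ValueError("not a DEX usertag: %r" % tag)
    distribution, _, project = tag[len("dex-"):].partition("-")
    if not distribution or not project:
        raise ValueError("malformed DEX usertag: %r" % tag)
    return distribution, project
```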
Currently, you are only able to specify the task name in the file. This means that all of the other fields for a task created in this manner are initially blank. I am still deciding on the best way to specify additional information about each task in the file. Once I decide on a method, it should not be too difficult to modify the script to parse and store that additional information.

For the ancient-patches DEX project, everyone involved had to update a status file stored in a VCS on alioth. This was rather annoying, so we decided early on that we wanted to prevent it from being necessary for future projects. This means that the dashboard needs a way for users to modify the information on it. For tasks that are based on bugs, this is rather easy; you simply use the BTS email interface to adjust the bug. However, for other tasks, we needed the ability to modify them on the dashboard.

I have managed to get this partially implemented. Currently, all cells in the table are text input fields (with CSS being used to hide the boxes). This means that you can click on any cell and edit its contents. Hitting the submit button will execute a script to save these changes. This is the tricky part: the dashboard is a collection of HTML files generated by a Python script that is meant to be run by cron every n minutes. When you update something via a web interface, you usually expect to see the changes instantly, which means we can't wait for the Python script to run again and re-generate the HTML files. As a result, I currently have it set up so that when you submit the form, it will go and modify the HTML files to reflect the changes. The next time cron runs the Python script, the HTML files will be re-created, and the changes made by the user will be lost. Once I come up with a method for storing additional information about a task, it shouldn't be too hard to make changes made by a user via the web interface permanent.
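One possible extension of the task file, sketched purely as an illustration (the pipe-separated key=value syntax is hypothetical; as noted above, the final format for extra fields was still undecided):

```python
def parse_task_line(line):
    """Parse one line of a project's plain-text task file.

    The current real format is just a task name per line. This sketch
    additionally accepts optional "key=value" fields after pipe
    separators, e.g. "merge-foo | assignee=nhandler | status=open".
    Unknown keys are ignored; missing fields stay blank, matching the
    behaviour described for name-only tasks.
    """
    parts = [part.strip() for part in line.split("|")]
    task = {"name": parts[0], "assignee": "", "status": "", "note": "", "bug": ""}
    for extra in parts[1:]:
        key, _, value = extra.partition("=")
        key = key.strip()
        if key in task:
            task[key] = value.strip()
    return task
```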
I also did some work on a script to generate graphs. There are two types of graphs I hope to display on the dashboard. One graph will help show which individuals have done the most work on a project; I have not started working on it yet. The second type of graph is meant to show the number of open tasks versus time. This will allow people to quickly see the progress of a project, estimate how long the project will take to complete, and notice if things start to slow down. I have some initial code for this graph using matplotlib. However, due to the many changes that were made to the dashboard, I had to hold off a bit on this graph, as it relies on certain information provided by the dashboard. I hope to sort out the last few issues with it, such as getting the axes to display properly, and then merge it with the dashboard sometime in the next few weeks.

Allison Randal approached me a while ago to see if my Summer of Code project might be in a state where it could be used to assist with a Python 2.7 DEX project that she was working on. For a while, I did not think that it would be ready. However, last week, I helped her get the related bugs usertagged appropriately, and they now show up in the DEX dashboard. While the dashboard is still not complete, it does allow her and everyone else working on the project to clearly see the tasks (bugs) that they need to focus on and the current status of each of them. I plan to incorporate any feedback I receive from people working on this project into my designs.

All of my code is available in a git repository on alioth. Keep in mind that it should not be considered stable at this point. Further status updates will be posted on my blog, and I will try to keep the wiki page updated. I am still trying to work out where we will host the actual dashboard. In the meantime, you can try out my testing version of it.
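A rough sketch of the open-tasks-versus-time graph using matplotlib. The progress file is assumed to hold one "YYYY-MM-DD count" pair per line; that exact date format, and all function names here, are illustrative rather than my actual code. Setting an explicit date locator and formatter is one way to deal with axes not displaying properly:

```python
from datetime import datetime

def parse_progress(lines):
    """Parse progress-file lines of the assumed form "YYYY-MM-DD count"."""
    dates, counts = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        day, count = line.split()
        dates.append(datetime.strptime(day, "%Y-%m-%d"))
        counts.append(int(count))
    return dates, counts

def plot_progress(progress_path, out_path):
    """Render the open-tasks-vs-time graph to a PNG."""
    import matplotlib
    matplotlib.use("Agg")  # render headlessly, e.g. from a cron job
    import matplotlib.pyplot as plt
    import matplotlib.dates as mdates

    with open(progress_path) as f:
        dates, counts = parse_progress(f)
    fig, ax = plt.subplots()
    ax.plot(dates, counts, marker="o")
    # Explicit locator/formatter makes the x-axis show real time increments
    ax.xaxis.set_major_locator(mdates.AutoDateLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m-%d"))
    fig.autofmt_xdate()
    ax.set_ylabel("open tasks")
    fig.savefig(out_path)
```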
During the next few weeks, I will be focusing my efforts on coming up with a method for storing additional task information. That will allow me to support permanently storing changes made by users via the web interface. I also hope to finish the first graph script and merge it with the dashboard, and to find a permanent home for the dashboard, as having it on my personal server is not a viable long-term solution.

Finally, if anyone is interested in getting involved with DEX, feel free to join the debian-derivatives mailing list, join #debian-derivatives on OFTC, or contact me via email or on IRC (nhandler).

17 June 2011

Nathan Handler: Google Summer of Code Update #4

Since Matt is going to be busy the next few weeks moving and traveling, he arranged for Stefano Zacchiroli to fill in for him temporarily as my mentor. I must say, I am quite lucky to have the chance to have both Matt and Stefano as mentors this summer.

Yesterday, I had my first call with Stefano. I started by filling him in on the current status of the project. We discussed how the dashboard is going to be based on tasks rather than bugs. He reminded me that we want to make sure all useful information makes its way back to the BTS in a fast and easy way (ideally automated). Most of the information that will be stored in the dashboard probably won't be that useful to the package maintainers. However, there will be a column for DEX members to add notes about tasks, and this column could potentially contain some useful information. That is one reason the dashboard will parse this column for bug numbers and then display extended information about the bug. Hopefully, this will encourage DEX members to send information to the BTS when relevant. Having the bug numbers will also allow us to automate sending information from the DEX dashboard to the BTS if we choose to.

I also spent some time this week working on a script to generate some graphs. These graphs will allow us to track the number of remaining tasks in a project vs. time. Stefano mentioned possibly borrowing some code from the Ubuntu burndown charts, but right now, I think the Ubuntu charts might be a bit more complicated than what we need. We can always borrow some features/code down the road if necessary.

It is also just about time for me to prepare my second formal report for the Summer of Code. I will use that to showcase the current status of the code and make it available for people to test and review.

11 June 2011

Nathan Handler: Google Summer of Code Update #3

Last week was a rather slow week. I did some work on allowing the dashboard to store and process information that does not come from the BTS. This exposed a few issues with the way we were storing and presenting information. After talking to Matt, we decided that the dashboard should deal with task objects. These tasks would contain basic information such as a name, the person assigned to it, a status, and possibly a link to a bug. This means that we would be able to start up a new DEX project by simply choosing a project name and creating a basic text file containing a list of tasks (package names, patches, bugs, etc.).

I also have been trying to get a hold of some people who participated in the last DEX project. I am interested in getting more feedback about what went well, problems they encountered, and tools that might have made things easier. I am also talking to some people who are currently working on starting up new DEX projects to try to make dextools align with as many workflows as possible. If you fall into either of these categories, I would really appreciate it if you contacted me.

There is also a new wiki page that I created so that people who are not interested in me or the Google Summer of Code can quickly see what dextools is about and monitor its progress. I will be adding more information over the next few days.

Currently, I am continuing to work on the dashboard and talking to people about their DEX experiences. I have also begun working on some scripts to generate graphs that will go on the dashboard. These graphs will allow us to quickly check our progress on different projects. They will also allow us to see and acknowledge individuals working on the tasks. I will add more information about these graphs to this blog and to the dextools wiki page.

