Search Results: "rudy"

2 April 2015

Rudy Godoy: No SSH for you, or how to fix OpenRG routers

I find it interesting to be faced with a taken-for-granted situation turned on its head, particularly in the tech world. This time the story is about an Internet ADSL connection that allows all traffic but SSH. Yes, you read that correctly. It was very bizarre to confirm that this was a real-life situation. I don't claim to be a networking expert, but at least I like to think I'm well educated. After a few minutes I focused my efforts on the ADSL router/modem's networking configuration. The device is provided by Movistar (formerly Telefonica) and it runs OpenRG. I discovered that other people have experienced the same issue, and what Movistar did was basically replace the device; of course the problem is gone after that. So, this post is dedicated to those who don't give up. Following the steps below will allow outbound SSH traffic on an OpenRG-based device.

OpenRG device specs
Software Version: 6.0.18.1.110.1.52
Release Date: Oct 7 2014
Diagnostic

When you run the command below, it shows nothing but a timeout. Even when you SSH into the router itself, it doesn't establish a connection.
ssh -vv host.somewhere.com
Solution

Change the router's SSH service port. This step allows you to access the console-based configuration for the router (I haven't found any way to do the steps described below from the web management interface). To do so, go to System > Management > SSH and update the service port to something other than 22, for instance 2222.
OpenRG SSH service configuration

Connect to the SSH interface

Once you have changed the SSH service port, you can access it from an SSH client.
ssh -p 2222 admin@192.168.1.1
admin@192.168.1.1's password: 
OpenRG>
Once you have the console prompt, issue the following commands to allow outbound SSH traffic coming from the LAN and WiFi networks. After the last command, which saves and applies the device's configuration, you should be able to SSH from any computer on your network to the Internet (thanks to tips from inkhorn).
OpenRG> conf set fw/policy/0/chain/fw_br0_in/rule/0/enabled 0
Returned 0
OpenRG> conf set fw/policy/0/chain/fw_br1_in/rule/0/enabled 0
Returned 0
OpenRG> conf reconf 1
Returned 0

20 December 2014

Rudy Godoy: Apache Phoenix for Cloudera CDH

Apache Phoenix is a relational database layer over HBase, delivered as a client-embedded JDBC driver, targeting low-latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets.

What the above statement means for developers or data scientists is that you can talk SQL to your HBase cluster. Sounds good, right? Setting up Phoenix on Cloudera CDH can be really frustrating and time-consuming, so I wrapped up references from across the web, together with my own findings, to make both play nice.

Building Apache Phoenix

Because of dependency mismatches in the pre-built binaries, supporting Cloudera's CDH requires building Phoenix against the component versions that match the CDH deployment. The CDH version I used is CDH4.7.0; this guide applies to any CDH4+ version. Note: you can find CDH component versions in the CDH Packaging and Tarball Information section of the Cloudera Release Guide. Current release information (CDH5.2.1) is available in this link.

Preparing the Phoenix build environment

Phoenix can be built using Maven or Gradle. General instructions can be found on the Building Phoenix Project webpage. Before building Phoenix you need a working build environment: a JDK, git and Maven (or Gradle).

Checkout the correct Phoenix branch

Phoenix has two major release versions: 3.x, which works with HBase 0.94 (the series CDH4 ships), and 4.x, which targets HBase 0.98. Since CDH4.7.0 bundles HBase 0.94.15, we use the 3.x line. Clone the Phoenix git repository:
git clone https://github.com/apache/phoenix.git
Work with the correct branch
git fetch origin
git checkout 3.2
Modify dependencies to match CDH

Before building Phoenix, you will need to modify the dependencies to match the version of CDH you are trying to support. Edit phoenix/pom.xml and make the following changes. First, add Cloudera's Maven repository:
+  <repository>
+  <id>cloudera</id>
+  <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
+  </repository>
Change component versions to match CDH's:
  
-  <hadoop-one.version>1.0.4</hadoop-one.version>
-  <hadoop-two.version>2.0.4-alpha</hadoop-two.version>
+  <hadoop-one.version>2.0.0-mr1-cdh4.7.0</hadoop-one.version>
+  <hadoop-two.version>2.0.0-cdh4.7.0</hadoop-two.version>
  <!-- Dependency versions -->
-  <hbase.version>0.94.19</hbase.version>
+  <hbase.version>0.94.15-cdh4.7.0</hbase.version>
  <commons-cli.version>1.2</commons-cli.version>
-  <hadoop.version>1.0.4</hadoop.version>
+  <hadoop.version>2.0.0-cdh4.7.0</hadoop.version>
  <pig.version>0.12.0</pig.version>
  <jackson.version>1.8.8</jackson.version>
  <antlr.version>3.5</antlr.version>
  <log4j.version>1.2.16</log4j.version>
  <slf4j-api.version>1.4.3.jar</slf4j-api.version>
  <slf4j-log4j.version>1.4.3</slf4j-log4j.version>
-  <protobuf-java.version>2.4.0</protobuf-java.version>
+  <protobuf-java.version>2.4.0a</protobuf-java.version>
  <commons-configuration.version>1.6</commons-configuration.version>
  <commons-io.version>2.1</commons-io.version>
  <commons-lang.version>2.5</commons-lang.version>
Change the target version only if you are building for Java 6; CDH4 is built for JRE 6.
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.0</version>
  <configuration>
-  <source>1.7</source>
-  <target>1.7</target>
+  <source>1.6</source>
+  <target>1.6</target>
  </configuration>
Building Phoenix

Once you have made the changes, you are set to build Phoenix. Our CDH4.7.0 cluster uses Hadoop 2, so make sure to activate the hadoop2 profile:
mvn package -DskipTests -Dhadoop.profile=2
If everything goes well, you should see the following result:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Phoenix .................................... SUCCESS [2.729s]
[INFO] Phoenix Hadoop Compatibility ...................... SUCCESS [0.882s]
[INFO] Phoenix Core ...................................... SUCCESS [24.040s]
[INFO] Phoenix - Flume ................................... SUCCESS [1.679s]
[INFO] Phoenix - Pig ..................................... SUCCESS [1.741s]
[INFO] Phoenix Hadoop2 Compatibility ..................... SUCCESS [0.200s]
[INFO] Phoenix Assembly .................................. SUCCESS [30.176s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:02.186s
[INFO] Finished at: Mon Dec 15 13:18:48 PET 2014
[INFO] Final Memory: 45M/1330M
[INFO] ------------------------------------------------------------------------
Phoenix server component deployment

Since Phoenix is a JDBC layer on top of HBase, a server component has to be deployed on every HBase node. The goal is to have the Phoenix server component added to the HBase classpath. You can achieve this either by copying the server component directly into HBase's lib directory, or by copying it to an alternative path and then modifying the HBase classpath definition. For the first approach, do:
cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/cloudera/parcels/CDH/lib/hbase/lib/
Note: in this case CDH is a symlink to the currently active CDH version. For the second approach, do:
cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/phoenix/
Then add the following line to /etc/hbase/conf/hbase-env.sh:
export HBASE_CLASSPATH_PREFIX=/opt/phoenix/phoenix-3.2.3-SNAPSHOT-server.jar
Whichever method you used, you have to restart HBase afterwards. If you are using Cloudera Manager, restart the HBase service. To validate that Phoenix is on the HBase classpath, do:
sudo -u hbase hbase classpath | tr ':' '\n' | grep phoenix
Phoenix server validation

Phoenix provides a set of client tools that you can use to validate that the server component is working. However, since we are supporting CDH4.7.0, we'll need to make a few changes to these utilities so they use the correct dependencies. phoenix/bin/sqlline.py: sqlline.py is a wrapper for the JDBC client; it provides a SQL console interface to HBase through Phoenix.
index f48e527..bf06148 100755
--- a/bin/sqlline.py
+++ b/bin/sqlline.py
@@ -53,7 +53,8 @@ colorSetting = "true"
 if os.name == 'nt':
  colorSetting = "false"
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
  '" -Dlog4j.configuration=file:' + \
  os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
  " sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver \
phoenix/bin/psql.py: psql.py is a wrapper tool that can be used to create and populate HBase tables.
index 34a95df..b61fde4 100755
--- a/bin/psql.py
+++ b/bin/psql.py
@@ -34,7 +34,8 @@ else:
 # HBase configuration folder path (where hbase-site.xml reside) for
 # HBase/Phoenix client side property override
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
  '" -Dlog4j.configuration=file:' + \
  os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
  " org.apache.phoenix.util.PhoenixRuntime " + args
After you have made these changes, you can test connectivity by issuing the following command:
./bin/sqlline.py zookeeper.local
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:zookeeper.local none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:zookeeper.local
14/12/16 19:26:10 WARN conf.Configuration: dfs.df.interval is deprecated. Instead, use fs.df.interval
14/12/16 19:26:10 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
14/12/16 19:26:10 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:10 WARN conf.Configuration: topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
14/12/16 19:26:10 WARN conf.Configuration: dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode
14/12/16 19:26:10 WARN conf.Configuration: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
14/12/16 19:26:11 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
Connected to: Phoenix (version 3.2)
Driver: PhoenixEmbeddedDriver (version 3.2)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
77/77 (100%) Done
Done
sqlline version 1.1.2
0: jdbc:phoenix:zookeeper.local>
Then you can issue either SQL commands or Phoenix commands.
0: jdbc:phoenix:zookeeper.local> !tables
+------------+--------------+---------------+---------------+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME    | TABLE_TYPE    |
+------------+--------------+---------------+---------------+
| null       | SYSTEM       | CATALOG       | SYSTEM TABLE  |
| null       | SYSTEM       | SEQUENCE      | SYSTEM TABLE  |
| null       | SYSTEM       | STATS         | SYSTEM TABLE  |
| null       | null         | STOCK_SYMBOL  | TABLE         |
| null       | null         | WEB_STAT      | TABLE         |
+------------+--------------+---------------+---------------+
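Beyond listing tables, a quick end-to-end smoke test is to create, populate and query a small table from the same prompt. A minimal sketch (the TEST_TABLE name is just an example; UPSERT is Phoenix's flavor of INSERT):
CREATE TABLE TEST_TABLE (ID BIGINT NOT NULL PRIMARY KEY, NAME VARCHAR);
UPSERT INTO TEST_TABLE VALUES (1, 'phoenix');
SELECT * FROM TEST_TABLE;
DROP TABLE TEST_TABLE;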

14 February 2014

Rudy Godoy: Getting the Movistar Peru ZTE MF193 to work in Debian

After so many attempts to have my shiny Movistar Peru (Internet Móvil) 3G ZTE MF193 modem work out of the box in Debian jessie (unstable) with NetworkManager, the word frustration was hitting on my head. Even trying to do the right thing led me to craziness, so I gave up on fanciness and decided to take the old-school route. Release wvdial and friends! Trying different combinations for wvdial.conf was no heaven for sure, but I found this wonderful guide from Christian Amsüss from Vienna, Austria, that really made a difference. Of course he's talking about the MF180 model, but you get the idea, so I'm sharing what was different for the MF193. Basically, I had already done the eject and disable-CD-ROM thing, but still no progress. I had also tried using wvdial to send AT commands to the evasive /dev/ttyUSBX device. Starting from scratch confirmed that I had indeed done such things properly. I was amused by the fact that I could use screen to talk to the modem (yo, all the time wasted trying to get minicom and friends to play nice); see the short session after the steps below. So, let's get to the point. After following this procedure, you should be able to use NetworkManager to connect to the Interwebs using the 3G data service from Movistar Peru.
  1. Step 1: follow the guide
  2. Step 2: here I had to use /dev/ttyUSB4
  3. Step 3: follow the guide
  4. Unplug your USB modem
  5. Plug your USB modem back in. This time you should see only /dev/ttyUSB0,1,2, and /dev/gsmmodem should be missing (not sure if this is a bug). Now /dev/ttyUSB2 is your guy.
  6. Step 4: use /dev/ttyUSB2
  7. Run wvdial from the CLI; it should connect successfully.
  8. Stop wvdial
  9. Select the Network icon in GNOME3 and click on the Mobile Broadband configuration you have; if there is none, create one.
  10. Voilà. Happy surfing!
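For reference, the screen session mentioned above is as simple as this (device node and speed from my setup; yours may differ):
$ screen /dev/ttyUSB4 115200
AT
OK
ATI
Manufacturer: ZTE CORPORATION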
I'm pasting my wvdial.conf, just in case.
[Dialer Defaults]
Modem = /dev/ttyUSB2
Username = movistar@datos
Password = movistar
APN = movistar.pe
Phone = *99#
Stupid Mode = 1
Init2 = AT+CGDCONT=4,"IP","movistar.pe"

30 January 2014

Rudy Godoy: What have we done?

A couple of weeks ago I was in the situation of having to set up a new laptop. I decided to go with wheezy's DVD installer. So far so good. I didn't expect (somewhere in the lies-I-tell-myself dept.) to have GNOME as the default Debian desktop. After the install, however, I figured out it was the new GNOME3 everybody was talking about. Before that I had only seen it on Ubuntu systems used by my classmates, and I thought: yeah, OK, GNOME3, I guess that's fine for them as a Linux desktop.

Turns out that I started using the Debian desktop, aka GNOME3, and noticed that it was not as bloated as the Ubuntu desktop I had seen before, so I stuck with it (for a while, I thought). Turns out that I did like this new so-called GNOME3: not a window-based but an application-based system (that is something that sticks in my head). I liked the way it makes sense as a desktop system, like when you look for applications or documents, connect to networks, use pluggable devices or just configure stuff, every time with less and less effort. Good practices and concepts learned from Mac OS X-like environments, and for sure taking advantage of the new features the Linux kernel and user-space environment have gained over the years. So, about one month later, I'm still with it and it makes sense for me to keep it. I haven't had a chance to try the latest XFCE or KDE, my default choices before this experience. Kudos, GNOME team, even after the flak you guys took over GNOME Shell, as I learned.

This whole situation got me pondering the past of the Linux user experience and how we in the community led people into joining. I remember that when a guy asked: how do I configure this new monitor/VGA card/network card/etc.?, the answer was along the lines of: what is the exact chipset model and specific whole product code number of your device? Putting myself in the shoes of such people, or of today's people, I'd say: what are you talking about? What is a chipset? It was so technical that only someone with more-than-average knowledge could grasp it. From a product perspective, this is similar to a car manufacturer telling a customer to look up the exact layout or design of their car's engine, so that they can tell whether it is the '82 model A or the '83 model C. Simplicity in naming and identification was not in the mindset of most of us.

This is funny, because as technology advances it also becomes more transparent to the customer. So, for instance, today's kids can become real power users of any new technology, as if they had, indeed, many pre-set chipsets in their brains. But when going into the details we had to grasp a few years ago, they have a hard time figuring out the complexity of the product behind that clean and simple interface. New times, interesting times. Always good to look back. No, I'm not getting old.

15 November 2013

Rudy Godoy: New GPG Key 65F80A4D

A bit late, but I've created a new 4096-bit RSA GPG key. I've published a transition document as well. I hope I can meet fellow Debian developers soon to get my new key signed. So, if you are in town (Arequipa, Peru), drop me an email!

10 August 2013

Rudy Godoy: Building non-service cartridges for RedHat Openshift

Cloud is passing the hype curve and we are seeing more stable developments and offerings on the market. Lately I've been playing with RedHat's Openshift. Openshift is a PaaS (Platform as a Service) offering that intends to be an alternative to vendors such as Heroku. The focus of such offerings is to give developers enough flexibility and tools to handle the application deployment and maintenance process in a way that is integrated with their existing development workflow and tools. I've been using Heroku for a while to deploy small to medium size projects, and I like the tools and developer-centered experience they offer. Openshift is quite new on the market and comes in two flavors: Openshift Online, which is a PaaS service, and Openshift Enterprise, which allows organizations to set up a PaaS within their own infrastructure, both of them powered by the Openshift Origin software. I won't compare Heroku vs. Openshift feature by feature, but from my experience I can tell that Openshift is far from mature and will need to give developers more features to increase adoption.

When developing applications for Openshift, developers are given a set of application stack components, similar to Heroku's buildpacks. They call them cartridges. You can think of them as operating system packages, since the idea is the same: have a particular application stack component ready to be used by application developers. Most of the cartridges offered by the service are base components such as application servers, programming language interpreters (Ruby, Python, etc.), web development frameworks and data storage, such as relational and non-relational databases. Cartridges are installed inside a gear, which is a sort of application container (I believe it uses LXC internally). Unsurprisingly, this Openshift component doesn't leverage existing packaging for RHEL 64-bit, the OS that powers the service; I'd expect such things from the RedHat ecosystem.

I had to develop a cartridge to make a particular BI engine available as an embedded component for application developers. After reading the docs and reference I figured this would be a piece of cake, since I have packaging experience. Wrong. Well, almost. The tricky part with Openshift Online is that it does not offer enough information on the cartridge install process, so you cannot see what's going wrong. To see more details on the process you'll need to set up an Openshift Origin server and use it as a testing facility. It turns out that getting an Origin server to operate correctly is also a challenging task, and it consumed a lot of my time. Over the recent weeks I've learned from Origin developers that such features are on the roadmap for upcoming releases. That's good news.

One of the challenges I had, and still have to figure out, is that unlike normal cartridges mine didn't require launching any service. Since it is a BI engine, I just needed to configure it and deploy it to an application server such as JBoss. The cartridge format requires a sort of service init.d script under bin, along with setup, install and configuration scripts that are run on install (see the sketch below). Although every day I become more familiar with Origin and Openshift internals, I still have work to do. The nice thing is that I was already familiar with LXC and Ruby-based applications, so I could figure out where things are placed and where to look for errors in Origin quite easily. The cartridges are on my github account if you care to take a look and offer advice.
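For reference, the layout I ended up working against looks roughly like this (a sketch of the cartridge format as I understood it; the mycart name is hypothetical):
mycart/
  bin/
    setup        # one-time preparation, run at install time
    install      # install-time configuration
    control      # the init.d-like script: start/stop/status hooks
  metadata/
    manifest.yml # cartridge name, version and published variables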

14 March 2013

Rudy Godoy: Subversion auth using SSH external key-pair

Normally when using Subversion's SSH authentication facility you'll use your own SSH-generated key-pair, which svn will read from the proper location, usually $HOME/.ssh. However, there could be situations when you need to use a different key-pair. In such cases you can use a nice trick to make svn+ssh authentication work smoothly. Let's say you have an external key-pair, the public key is already set up on the svn server, and you store the private key somewhere in your home directory. When doing a svn checkout you'll find that you would have to use SSH's -i parameter; however, svn gives you no direct way to pass it along the way you normally would when using ssh with an external key-pair. Subversion makes your life easier with the $SVN_SSH environment variable, where you can put the ssh command and modifiers that fit your needs. In the external key case you can do something like:
export SVN_SSH="ssh -i </path/to/external-key>"
The next time you use svn's ssh auth it will read $SVN_SSH and use it, so you can run svn commands such as checkout in the same fashion you normally do.
svn co svn+ssh://rudy@somesvnserver.at.somwhere.org/repo/for/software
Update: Jeff Epler offered great advice for using .ssh/config and key-pairs based on hostname. See the comments below.

12 January 2013

Russ Allbery: Review: Asimov's, April/May 2011

Review: Asimov's Science Fiction, April/May 2011
Editor: Sheila Williams
Issue: Volume 35, No. 4 & 5
ISSN: 1065-2698
Pages: 192
Williams's editorial this issue is about the tendency of SF to take a rose-colored view of the world, which on the surface seems odd given the tendency of recent SF towards dystopia. But she makes a good point that the portrayal of the past is rose-colored, linking that into the current steampunk trend. She doesn't take the argument quite as far as I'd like, but I'm glad to see editorials raising points like this. I'm inclined to think that a lot of the rose-colored frame of the past is because few of us want to read about real historic conditions at any length, even for edification, because the stench and discomfort isn't fun to read about. Silverberg's column is another discussion of programmatic plot generators, which mostly makes the point that plot ideas are the easy part of writing. James Gunn contributes an extended biography of Isaac Asimov that probably won't be new to long-time genre readers but may fill in some details (although it politely sticks to mostly flattering material). Spinrad's book review column is one of his better ones; it looks at two novels by China Miéville and two by Ian McDonald and explores differences in world-building. Spinrad predictably makes the case in favor of science fiction with rules and against the New Weird, but the discussion along the way was worth reading.

"The Day the Wires Came Down" by Alexander Jablokov: Speaking of steampunk, here's an example. There is even an airship, although the primary technological focus is suspended street cars. Jablokov postulates a city-wide transportation network of suspended carriages called telpher cars, along with a city built around the telpher cables: stores on roofs, windows displaying merchandise to passing cars, and even a history of heated competition and dirty tricks between competing telpher networks. The story is set, as the title would indicate, on the last day of the network. It's being shut down for cost, with some hints that progress is destroying something precious. There is a plot here, revolving around some mysteries of the history of the telpher network and the roles of several people in that history. But the story is primarily a celebration of old technology. It's a rail fan's story recast with a steampunk technology, featuring the same mix of fascination with mechanics and a sense that the intricate details are falling out of common knowledge (and perhaps usefulness). As a story, it's a bit slow-moving, but I enjoyed the elegiac tone. (7)

"An Empty House with Many Doors" by Michael Swanwick: This is a very short story, more of an emotional profile, involving a man's reaction to the death of his wife. Oh, and parallel universes. It's sort of the inverse of Niven's classic "All the Myriad Ways." Similar to Niven's story, I found the idea vaguely interesting but the conclusion and emotional reaction unbelievable and alien. (5)

"The Homecoming" by Mike Resnick: Resnick tends to yank on the heart-strings rather sharply in his stories, so I knew roughly what to expect when a father comes home to find his son is visiting. A son who, rather against his father's wishes, has been significantly altered to be able to live with aliens. Throw in a mother with serious dementia, and you can probably predict what Resnick does with this. Still, most of the story is a two-sided conversation, and I thought he succeeded in doing justice to both sides, even though one of them was destined to lose. (6)

"North Shore Friday" by Nick Mamatas: Illegal Greek immigrants, a family-run system for getting them married off before the INS catch them, government psi probes and eavesdropping on thoughts, joint projects between computer and religion departments, secret government experiments, and even ghosts... this story is a complex mess, with numerous thoughts stuck into small boxes and scattered through the surface story. It's one of those stories where figuring out what's going on, and even how to read the story in a sensible way, is much of the fun. If you find that fun, that is; if not, it will probably be frustrating. I wished there was a bit more plot, but there's something delightful about how much stuff Mamatas packs into it. (6)

"Clockworks" by William Preston: This is a prequel to Preston's earlier "Helping Them Take the Old Man Down". Like that story, it's primarily a pulp adventure, but layered with another level of analysis and thoughtfulness that tries to embed the pulp adventure in our understanding of human behavior and the nature of the world, although this one stays a bit more pulp than its predecessor. As with Preston's other story, we don't get directly in the head of the Old Man (here, just called the man, but identifiable from clues in both stories as Doc Savage); instead, the protagonist is a former villain named Simon Lukic who the man hopes to have fixed by operating on his brain. The undercurrent that lies beneath a more typical pulp adventure is the question of whether Lukic is actually healed. I think there was a bit too much derring-do and human perfection, but it's a perfectly serviceable pulp story with some depth. (6)

"The Fnoor Hen" by Rudy Rucker: If you've read any of Rucker's work before, you probably know what to expect: a mind-boggling blizzard of mathematically-inspired technobabble that turns into vaguely coherent surrealism. (You can probably tell that I'm not much of a fan, although the clear good humor in these stories makes it hard to dislike them too much.) There's a mutated chicken and some sort of alternate mathematical space and then something that seems like magic... I'd be lying if I said that I followed this story. If you like Rucker, this seems like the sort of thing that you'd like. (4)

"Smoke City" by Christopher Barzak: At the start of this story, I thought it was going to be an emotional parable about immigration. The protagonist lives two lives: one in our world, and one in the Smoke City of industry, a world of hard labor, pollution, and little reward, with families in both. But nearly all of the story is set within Smoke City, and the parable turns out to be a caustic indictment of industry and its exploitation of labor. I kind of wish Barzak hadn't used rape as a metaphor, but when the captains of industry show up, I can't argue with how deeply and accurately the story shoves in the knife. There isn't much subtlety here, but it's still one of the better stories in this issue. (7)

"A Response from EST17" by Tom Purdom: I'm very happy to see Purdom's writing appearing regularly. His stories are always quiet and matter-of-fact, and at first seem to miss emotional zest, but they almost always grow on me. He lets the reader fill in their own emotional reactions to events, and there's always a lot going on. This story is a first-contact story, except that the "humans" here are not human at all. They're automated probes sent by two separate human civilizations, with different programming and different governance algorithms, and they quickly start competing negotiations. The aliens they've discovered similarly have factions, who start talking to the different probes in an elaborate dance of gathering information without giving too much away. The twist is that this pattern has replayed itself many times in the past, and information itself can be a weapon. I enjoyed this one from start to finish. (7)

"The One That Got Away" by Esther M. Friesner: Friesner is best known, at least to me, for humorous fantasy, and this story is advertised as such from early on. The first-person protagonist is a prostitute in a seaside town. She's bemused to finally be invited over by a sailor who's been eyeing her all evening, but that sailor has something else in mind than normal business. For much of this story, the fantasy element is unclear; when it finally comes, it was an amusing twist. (7)

"The Flow and Dream" by Jack Skillingstead: This is a mildly interesting variation on the old SF story of hibernating humans (on a generation ship or elsewhere) waking up to a transformed world. Here, it's not a ship, it's a planet, and the hibernation was to wait for terraforming rather than for transit. The twist comes from an excessively literal computer and the fun of putting together the pieces. Sadly, the story trails off at the end without much new to say. (5)

"Becalmed" by Kristine Kathryn Rusch: "Becalmed" takes place immediately before "Becoming One with the Ghosts" and explains the incident that created the situation explored in that story. The first-person protagonist of "Becalmed" is a linguist, an expert in learning alien languages so that the Fleet can understand the civilizations that it runs across. But something went horribly wrong at their last stop, something that she's largely suppressed, and now she's confined to quarters and possibly in deep trouble. As is the ship; they're in foldspace, and they have been for days. "Becalmed" is structured like a mystery, centered around recovering the protagonist's memories. It's also a bit of a legal procedural; the ship is trying to determine what to do with her and to what degree she's responsible. But the heart of the story is a linguistic and cultural puzzle. This is another great SF story from Rusch, whose name on a cover will make me eager to start reading a new magazine. I love both angles on the universe she's built, but I think I like the Fleet even better than the divers. The Fleet captures some of the magic of the original Star Trek, but with much more mature characters, more believable situations, and a more sensible and nuanced version of the Prime Directive. Rusch writes substantial, interesting plots that hold my interest. I'd love to see more like this. (8)

Rating: 7 out of 10

26 December 2012

Rudy Godoy: s3tools: Simple Storage Service for non-Amazon providers

One of the nicest developments in the cloud arena is the increasing adoption of standards, which of course will impact the maturity of, and market confidence in, such technologies. Amazon, as one of the pioneers, made a good choice in their offering design by making their API implementation public. Now vendors such as Eucalyptus, with its private/hybrid cloud offering, and many other providers can build upon the specs to offer compatible services, without the hassle for their customers of learning a new technology or tool.

I have bare-metal servers sitting in Connectria's data center. A couple of months ago I learned about their new cloud storage offering, and since I'm working a lot on cloud lately I checked the service out. It was nice to learn that they are not re-inventing the wheel but instead implementing Amazon's Simple Storage Service (S3), the de-facto standard for cloud storage. Currently there are many S3-compatible tools available, both FLOSS and freeware/closed source (Connectria provides a nice list of them). I've been using s3tools, which is already available in the Debian archive, to interact with S3-compatible services.

Usage is pretty straightforward. For my use case I intend to store copies of certain files on my provider's S3 service. I'll need to create buckets to be able to store files; for those not very familiar with S3 terminology, buckets can be seen as containers or folders (in the desktop paradigm). The first thing to do is configure your keys and credentials for accessing your provider's S3. I recommend using the --configure option to create the $HOME/.s3cfg file, because it will write out all the available options for a standard S3 service, leaving you just the work of tweaking them to your needs. You can create the file all by yourself if you prefer, of course.
$ sudo aptitude install s3cmd
$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3
Access Key:
...
You'll be required to enter the access key and the secret key. You'll also be asked for an encryption password (use it only if you plan to use this feature). Finally, the software will test the configuration against Amazon's service; since that is not our case, it will fail. Tell it not to retry the configuration and answer Y to save the configuration. Now edit the $HOME/.s3cfg file and set the address of your private/third-party S3 provider. This is done here:
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
Change s3.amazonaws.com to your provider's address, and adjust the host_bucket setting likewise. In my case I had to use:
host_base = rs1.connectria.com
host_bucket = %(bucket)s.rs1.connectria.com
Now, save the file and test the service by listing the available buckets (of course there are none yet).
$ s3cmd ls
If you don't get an error, the tool is properly configured. Now you can create buckets, put files, list them, etc.
$ s3cmd mb s3://testbucket
Bucket 's3://testbucket/' created
$ s3cmd put testfile.txt s3://testbucket
testfile.txt -> s3://testbucket/testfile.txt [1 of 1]
8331 of 8331 100% in 1s 7.48 kB/s done
$ s3cmd ls s3://testbucket
2012-12-26 22:09 8331 s3://testbucket/testfile.txt

11 December 2012

Rudy Godoy: Puppet weird SSL error: SSL_read:: pkcs1 padding too short

While setting up a Puppet agent to talk to my puppetmaster I got this weird SSL error:

Error: SSL_read:: pkcs1 padding too short
Debugging on both the agent and master sides didn't offer much information. On the master:

puppet master --no-daemonize --debug
On the agent:

puppet agent --test --debug
Although my master was running 3.0.1 and the tested agent 2.7, the problem didn't look related to that. People at #puppet also hadn't seen this error before. I figured the problem reduced to an issue with OpenSSL, so I checked versions, and there I got it! The agent's openssl was version 1.0.0j-1.43.amzn1 while the master's was openssl-0.9.8b-10.el5_2.1. I upgraded the master's openssl to openssl.i686 0:0.9.8e-22.el5_8.4 and voilà, the problem was gone. I learned that there has been a patch in OpenSSL's VCS that is apparently related to the issue. Hope this helps if you get into the described situation.
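Since the culprit turned out to be OpenSSL version skew, a quick check worth running on both the agent and the master (both of my boxes were RPM-based) is:
openssl version
rpm -q openssl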

14 November 2012

Rudy Godoy: Function testing using function pointers in C++

I do like the way programmers think in terms of DRY, among other forms of optimization. A couple of days ago I wanted to test different implementations of the same algorithm in C++. The usual approach would be to implement those algorithms, call them in main() and eyeball the results. A more interesting approach would be to write a validator for each function and print whether the result is correct or not. Since both are boring and break the DRY rule, I decided to go complicated but productive: I implemented a function testing method using function pointers that allows me to write a single function and use it to test the algorithms with different inputs and expected results. Let's say I have three algorithms that implement modular exponentiation.
uint64 modexp1new(const uint64 &a, const uint64 &_p, const uint64 &n)
...


uint64 modexp2(const uint64 &_a, const uint64 &_p, const uint64 &_n)
...


uint64 modexp3(const uint64 &a, const uint64 &p, const uint64 &n)
...

If you are not familiar with the syntax, const means that the parameter is constant, so it will not be modified inside the function. The & means that the parameter is passed by reference, read: the memory address of the calling variable, so we don't overload RAM with new variables. Now, the testing part. For my very own purposes I just want to test those particular functions and know whether the result they output conforms to what I expect. Some people call this unit testing.
I also want to test them in different scenarios, meaning inputs and outputs. So I'm creating a function that takes the three parameters required to call the functions, plus a fourth parameter that is the value I expect to be the correct answer. Since I don't want to repeat myself writing a call for each case, I'm going to create an array of function pointers holding the functions' addresses as its values. That way I can call them in a loop, and voilà! we are done. Finally, after calling each function I check the result against the expected value and print an OK or FAIL. The tricky part could be understanding the function pointer. Two things to consider: the return type of the referenced functions has to be the same for all, and the parameter list of each function has to be identical too. This is important because function pointers are just pointers, and the compiler needs the full signature to generate a correct call through them. For this sample code, uint64 is a typedef for long long, of course. Full code is below.
void testAlgos(const uint64 &a, const uint64 &e, const uint64 &n, const uint64 &r)
{
    uint64 (*fP[3])(const uint64&, const uint64&, const uint64&) = { &modexp1new, &modexp2, &modexp3 };
    uint64 t;
    for (int i = 0; i < 3; i++) {
        t = fP[i](a, e, n);
        std::cout << "F(" << i << ") " << t << "\t";
        if (t == r)
            std::cout << "OK";
        else
            std::cout << "FAIL";
        std::cout << std::endl;
    }
}

Now that I have this, I can use it in main this way.
int main()
{
    uint64 a = 3740332;
    uint64 e = 44383;
    uint64 n = 3130164467;
    uint64 r = 1976425102;
    testAlgos(a, e, n, r);

    a = 404137263;
    r = 2520752541;
    testAlgos(a, e, n, r);

    a = 21;
    e = 3;
    n = 1003;
    r = 234;
    testAlgos(a, e, n, r);

    return 0;
}
Resulting in:
F(0) 657519405 FAIL
F(1) 1976425102 OK
F(2) 657519405 FAIL
F(0) -2752082808 FAIL
F(1) 2520752541 OK
F(2) -2752082808 FAIL
F(0) 234 OK
F(1) 234 OK
F(2) 234 OK
I'm happy about not having to write tests like this for all use cases:
std::cout << "f1: " << modexp1new(a, e, n) << std::endl;
std::cout << "f2: " << modexp2(a, e, n) << std::endl;
std::cout << "f3: " << modexp3(a, e, n) << std::endl;
Now, happiness could be even greater if I used templates, so I could test any function independently of the return and input data types. You know, more abstraction. Homework if time allows (a first sketch below)! Have fun!
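As a first stab at that homework, here is a minimal sketch assuming C++11; testAlgos here generalizes the version above, and the modexp functions referenced in the usage comment are the ones defined earlier in the post.
#include <iostream>
#include <vector>

// Works for any value type T: the return type, the three inputs and
// the expected result just have to agree.
template <typename T>
void testAlgos(const std::vector<T (*)(const T&, const T&, const T&)> &fns,
               const T &a, const T &e, const T &n, const T &expected)
{
    for (size_t i = 0; i < fns.size(); i++) {
        T t = fns[i](a, e, n);
        std::cout << "F(" << i << ") " << t << "\t"
                  << (t == expected ? "OK" : "FAIL") << std::endl;
    }
}

// Usage with the uint64 functions from the post:
//   std::vector<uint64 (*)(const uint64&, const uint64&, const uint64&)>
//       fns = { &modexp1new, &modexp2, &modexp3 };
//   testAlgos(fns, a, e, n, r);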

8 October 2012

Rudy Godoy: MySQL data encoding conversion: latin1 to utf8

I was about to post a different story but things turned out differently, for good. Now I'll post tips that may save the day in the event of encoding issues when dealing with MySQL table data. Hopefully you find them useful.

Say you have a MySQL table whose collation is set to latin1 but that has utf8-encoded data stored in it. This can happen when importing a dump, or because of Murphy's law. When querying the data from your application or from mysql's tools you get awful results for non-ASCII characters such as tildes. Changing the table's collation definition will only work for new columns and records, and altering the collation of the affected column would make things worse. So what to do? Fortunately MySQL offers a nice way to save the day: use a transitional conversion and then set the column encoding back to utf8, so it pairs correctly with the encoding of the data it stores.

ALTER TABLE table_name CHANGE column_name column_name BLOB;

Issuing the previous command changes the column's type to BLOB. Why? Because by doing so the stored data is kept untouched, and this is important: if you use the CONVERT or MODIFY commands the data will be converted by MySQL, and we don't want that since the data is already in the encoding we want. Note that column_name is repeated twice; this is because the CHANGE command copies the data and we want the resulting column to keep the same name.

Now we have to put the column back to the encoding we want, so everything is under control again. This is done by issuing the following command:

ALTER TABLE table_name CHANGE column_name column_name COLUMN_TYPE CHARACTER SET utf8;

COLUMN_TYPE depends on the column's content type. For instance, for a TEXT column it would be:

ALTER TABLE table_name CHANGE column_name column_name TEXT CHARACTER SET utf8;

It works for VARCHAR, LONGTEXT and other data types that are used to store characters.
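As a concrete example, assume a hypothetical comments table whose body column is declared latin1 but already holds utf8 bytes; the full round trip, plus a check of the result, would look like this:

-- Step 1: go through BLOB so MySQL keeps the stored bytes untouched
ALTER TABLE comments CHANGE body body BLOB;
-- Step 2: declare the encoding the bytes were really in all along
ALTER TABLE comments CHANGE body body TEXT CHARACTER SET utf8;
-- Verify the column's character set afterwards
SHOW FULL COLUMNS FROM comments;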

14 July 2012

Rudy Godoy: svn changelist

More than a few times I've come upon the situation where I'm working on a Subversion working copy locally and want to commit the bits that are already baked and ready to go. If you're lucky there's only one file not belonging to such a changeset, but mostly there are many files scattered around. This situation, I learned, can be managed with a lot more flexibility and fun.
svn changelist is a nice tool for such situations. It allows you to create a list of files "I want to do something with", meaning that you can use the changelist in your usual svn operations, read: commit, remove, etc. Let's suppose you start with a project myapp, then you create three files A, B and C.
dragon% touch A B C
dragon% ls -lh
total 0
-rw-r--r-- 1 rudy rudy 0 Jul 13 20:16 A
-rw-r--r-- 1 rudy rudy 0 Jul 13 20:16 B
-rw-r--r-- 1 rudy rudy 0 Jul 13 20:16 C
dragon% svn add A B C
A A
A B
A C
dragon% svn ci -m "first commit of A, B and C" .
Adding A
Adding B
Adding C
Transmitting file data ...
Committed revision 1.
Now we want to edit A and add a few pieces of text, then edit B also.
dragon% echo "This file looks empty, so let's put some text inside it" > A
dragon%
dragon% echo "I'm afraid this one also is empty. Just in case" >> B
Now I need to work on this C file, and that implies touching B also. I don't want to commit the changes I've already made, since I haven't made up my mind about them.
dragon% echo "C is a nice name for a source file, but this one doesn't look like one does it?" > C
dragon% echo "I'm going to add few lines to B too" >> B

dragon% svn st
M A
M B
M C
So, let's suppose I'm ready to hit the key and commit the 2nd revision. But I'm not done with A, so I want to commit just B and C the way they are now. Here svn changelist enters the arena. I can create a changelist for the two files and do several tasks related to them. Let's see.
dragon% svn changelist mylist B C
Path 'B' is now a member of changelist 'mylist'.
Path 'C' is now a member of changelist 'mylist'.
Now I have a changelist named mylist that I can do operations with. For instance:
dragon% svn st
M A
--- Changelist 'mylist':
M B
M C

dragon% svn diff --changelist mylist
Index: B
===================================================================
--- B (revision 1)
+++ B (working copy)
@@ -0,0 +1,2 @@
+I'm afraid this one also is empty. Just in case
+I'm going to add few lines to B too
Index: C
===================================================================
--- C (revision 1)
+++ C (working copy)
@@ -0,0 +1 @@
+C is a nice name for a source file, but this one doesn't look like one does it?
Now let's commit the list.
dragon% svn ci -m "Comitting files from the mylist changelist" --changelist mylist
Sending B
Sending C
Transmitting file data ..
Committed revision 2.

dragon% svn st
M A
Now that I've committed the files, the changelist mylist vanishes from our working copy; I cannot do any new operations with it. A few things to consider when using svn changelist:

Changelists are created as a one-time list; you cannot add another file to an existing list. If I type svn changelist mylist A, it will define the changelist mylist to have just one member, A. It will not add A to the existing member files, B and C. You are, however, free to remove any file from an existing changelist using svn changelist mylist --remove B (for our previous example), which will make the changelist shrink to just one element, C. You can also have as many changelists as you want, and member files can be associated with each of them; a use case for that could be a diff over specific files.

For this test project it makes almost no difference to use this feature, but when you have lots of files sitting modified in your working copy, it will keep your hair in its place, and maybe save the day. svn changelist is available since Subversion 1.5.

3 May 2012

Rudy Godoy: FLISOL Puno

Past Saturday, April 28, I attended FLISOL 2012 Puno, hosted at the Casa de la Cultura in downtown Puno. It was a great experience to deliver a talk after a while, the second since I enrolled in UCSP's Computer Science and Engineering Faculty to get my Computer Science BS degree. Puno is really a nice city with a great landscape for those like me who enjoy the sun, lakes and wonderful mountains, sitting on the Andean plateau 3860m above sea level. Pretty high and very cold at night indeed, but a lovely place. FLISOL is short for the Latin American Free Software Installation Festival, which turned 12 this year.

In my talk I wanted to describe the motivations behind the Free Software movement's origins. I especially remarked on the fact that the movement doesn't primarily seek technical excellence, even though that is something we strive for, but rather the preservation of a common set of guidelines for protecting assets we value, such as software and freedom. Later I discussed how Debian has become a key actor in the free software ecosystem: primarily as a huge software library, second as a building block for specialized distributions (such as Ubuntu) and third, but not least, as a facilitator of new developments and contributions to the ecosystem, and I mentioned here the Debian GNU/Hurd project. Through such topics I explained what we do, why we do it and why they may want to contribute. There was a good reception from the audience and there were many questions on the subjects I elaborated on. I was impressed to see many Debian users, some of them "power users", and I even tried to help one configure the obscure Xorg settings on his laptop (we didn't succeed because of lack of connectivity; I will send him testing CDs eventually).

I'd like to thank the organizers, who did a great job. The team, nearly 30 students from Universidad Nacional del Altiplano plus local activists, started working in February collecting funds and managed to gather more than 330 attendees during the almost 10 hours of the event. Also, special thanks for the coffee machine that was available!

1 March 2012

Rudy Godoy: DebianPeru

Hello, this is just a quick post to announce some good news. Back in 2004 we formed a local Debian group in Peru. We were active and spread the word country-wide; those were good times. As time passed and interests changed, things slowed down. In the past two weeks we have been rebooting it. We'll be working with a focus on mentoring people to contribute upstream and to Debian. We'll also host meetings for teaching new users, either concrete problem-solving or "the basics". Finally, we will be glad to host Debian Developers, as we have done in the past with people who landed in Peru, and organize either social or technical meetings. We've set up a Wiki page, which will be our official site and has information regarding our group. Our mailing list is also getting traction. We'll be hosting a social meeting during the Creative Commons Movie Festival this Thursday, March 1st. That's it!

14 October 2011

Rudy Godoy: Dennis M. Ritchie aka dmr

Last weekend Dennis M. Ritchie, creator of the C programming language, the UNIX operating system, the Plan 9 operating system and many other key contributions to computing, passed away after an extended illness. I learned about it yesterday through Rob Pike, dmr's former colleague and friend. I'm writing this post to honor him and his legacy. While I've seen people mourn his departure, I've also noticed most technical people don't really understand why his work is important for today's computing, so I'm here to give a refresher. Since I'm good with visual tools (mindmaps), I've made one to describe the extent of dmr's contributions to the world, so you can get a glimpse of why this is important. The main topics are his key contributions; the sub-topics are the current technologies that were built upon dmr's work. Click on the image to see a larger resolution version.

The C Programming Language

You could call this the mother of UNIX, UNIX-like systems and many programming languages that people use today. The focus on simplicity, by giving just a small set of tools, is where its power resides. Most operating systems available, and in development, today are written in C. Most of the services that make the Internet work have been programmed in C. C also influenced other languages such as C++, Objective-C, even Python and Google's Go.
The UNIX operating system
Today's Internet relies on UNIX-like operating systems running crucial services such as DNS, web servers, email servers, etc. None of this would be possible without an operating system built with the simplicity and design that UNIX has. The vision/philosophy behind it is where its power resides: "write programs that do one thing and do it well". The GNU Project, Linux and the *BSDs were conceived on the idea of replicating such an invention. Even Windows has some UNIX bits inside.
IPC
This is probably the least known of Dennis Ritchie's contributions, yet today it is crucial. Web 2.0 would not be able to be AJAXian if there were no IPC. IPC means Inter-Process Communication; simply put, making two system processes exchange messages. This foundation was key for concepts such as threads, RPC and others. Today's AJAX relies on the RPC concept, for instance.
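To make that concrete, here is a minimal sketch in C (fittingly) of two processes exchanging a message through a pipe, one of the classic UNIX IPC primitives:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32] = {0};

    if (pipe(fd) == -1)    /* one shared channel: fd[0] read end, fd[1] write end */
        return 1;

    if (fork() == 0) {     /* child process: write a message and exit */
        close(fd[0]);
        write(fd[1], "hello from the child", 21);
        return 0;
    }

    close(fd[1]);          /* parent process: read what the child sent */
    read(fd[0], buf, sizeof(buf) - 1);
    printf("parent got: %s\n", buf);
    return 0;
}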
So, whether you are programming the next hot Web 2.0 app or just writing a C program to play with sockets, remember there were giants on whose shoulders you now stand. Thank you Dennis, I have the book.
UNIX is very simple, it just needs a genius to understand its simplicity. -Dennis Ritchie

23 August 2011

Rudy Godoy: Compute clusters for Debian development and building final report

Summary
The goal of this project was to have the Eucalyptus cloud support ARM images, so that Debian developers and users can use such a facility for tasks like package building, software development (e.g. Android) under a pre-set Debian image, software testing and many others. What was expected at the end is a modified version of Eucalyptus Cloud that supports ARM images in the first place. To date this goal has been reached but is not complete, read: production-ready; extensive testing still needs to be done. Besides that, we have another goal, which is to get the Debian community to use this newly extended tool.

Project overview
Eucalyptus is a hybrid cloud computing platform which has an Open Source (FLOSS) version. It is targeted at PaaS (Platform as a Service), IaaS (Infrastructure as a Service) and other distribution models, and it can also be used for cloud management. Given that it implements the EC2 API for computing and the S3 API for storage, it is compatible with existing public clouds such as Amazon EC2. Currently it supports running i386 and amd64 images, or NCs (Node Controllers) in Eucalyptus naming.

Eucalyptus is a complex piece of software. Its architecture is modular, composed of five components: Cloud Controller (CLC), Walrus (W), Storage Controller (SC), Cluster Controller (CC) and Node Controller (NC). The first three are written in Java and the remaining two in C. Our project's modifications were targeted at the NC component, although there is a remaining task of hacking the UI to allow setting the arch of an uploaded NC image instance.

The Node Controller is in charge of setting the proper information for Eucalyptus to be able to run virtualized images. Eucalyptus uses the XEN and KVM hypervisors to handle the internal virtualization subsystem, and libvirt, an abstraction over the existing virtualization libraries, for program interfacing.

Having that in mind, we are back to our project. For the project to be successful we had various tasks to do: beginning with understanding such a complex piece of software, followed by hacking the required bits, and later integrating the work so that it results in a useful tool. Our approach to the project was in line with that description.

The project's history
I started my participation in GSoC by approaching Steffen to discuss
what was expected or, more accurately, what was in his mind. We
exchanged emails prior to my application, and that resulted in my
application being submitted. From the beginning, I should say, I wasn't
very clear on the final product of the project; however, we refined
ideas and goals during the weeks following the official beginning of
the project. What was clear to me, after all, was that for this to see
the light and gain adoption, the Eucalyptus code needed to be dealt
with.

Our first task was to review ARM image creation from scratch, using the
work done by Aurelien and Dominique in previous GSoCs. I managed to get
an updated ARM image running under qemu-system-arm. The issues seen in
the past were almost nonexistent now that the versatile kernel is
official. You can see this in my first report.

After that was done, and sometimes in parallel, my main goal was to
understand the internals of the Eucalyptus software in order to figure
out the project's feasibility, where we needed to extend it, and how big
the task was; we didn't know upfront. From the beginning of the project
Steffen was kind enough to introduce me to the Eucalyptus developers,
and that has resulted in a good outcome for Debian, IMHO, to date.

Understanding the Eucalyptus internals was quite a fun task, to say the
least. As you can see in the first part of the report, Eucalyptus is
modularized, and the component I was expected to work with was the NC
(Node Controller). Given that, I isolated my work focus and started
learning the internals of that component.

The Node Controller, as described, is in charge of talking to the
virtualization hypervisors in order to arrange things to run guest
image instances. The module has basically one main component, handler.c,
in charge of talking to the appropriate hypervisor, and there are
extensions (think of OOP polymorphism, but acknowledge it's plain C)
that interact with KVM or Xen.

I figured that if qemu-kvm is run in hypervisor mode we can manage to
run an ARM image with qemu-system-arm. Given that Eucalyptus has
interaction with the KVM hypervisor in place, this answered the question
of the project's feasibility. First green light on.
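To picture that "polymorphism in plain C" remark, here is a minimal,
self-contained sketch of the function-pointer-table pattern; all names
are illustrative, not Eucalyptus's actual symbols:

    /* Each hypervisor backend fills a table of function pointers; the
       generic handler code dispatches through the table without caring
       which hypervisor sits behind it. Sketch only, not Eucalyptus code. */
    #include <stdio.h>

    typedef struct hypervisor_ops {
        const char *name;
        int (*run_instance)(const char *instance_id, const char *arch);
    } hypervisor_ops;

    static int kvm_run_instance(const char *instance_id, const char *arch)
    {
        /* real code would generate the libvirt XML and call libvirt here */
        printf("kvm: starting %s (%s)\n", instance_id, arch);
        return 0;
    }

    static int xen_run_instance(const char *instance_id, const char *arch)
    {
        printf("xen: starting %s (%s)\n", instance_id, arch);
        return 0;
    }

    static const hypervisor_ops kvm_ops = { "kvm", kvm_run_instance };
    static const hypervisor_ops xen_ops = { "xen", xen_run_instance };

    int main(void)
    {
        /* in the NC the table would be chosen by configuration */
        const hypervisor_ops *ops = (1) ? &kvm_ops : &xen_ops;
        return ops->run_instance("i-12345", "arm");
    }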
From this point the scope was reduced to interacting with the KVM
handler (node/handlers_kvm.c) and extending it to support running an
image that is not amd64 or i386.

The NC makes use of libvirt to abstract the interaction with
hypervisors. So, the next phase was learning about its API and figuring
out what needed to be modified next. Libvirt uses an XML file for
setting Domain (in libvirt's naming) definitions, and Eucalyptus
provides a Perl wrapper to generate this file at runtime and allow the
NC to invoke libvirt's API to run the instance. The next task, then,
was to adapt that script to support the ARM arch; the current script is
tailored for amd64 and i386. I worked on that front and managed to get a
script prototype that can later be improved to support more arches.

Generating an adequate libvirt XML Domain definition file for ARM can
be a heroic task. There are many things to keep in mind given the
diversity of ARM processors and vendors. I focused on the versatile
flavour, given that I was going to test the image I had built in the
first place, and it ran fine under qemu-system-arm.
The Perl script wrapper was then adapted for this configuration, and it
can be tested independently by issuing the following command:

    $ tools/gen_kvm_libvirt_xml --arch arm

The arch parameter is what I implemented, and it is intended to be
passed from the KVM handler on instance creation. With extensibility in
mind, I created a hash to associate each arch with its corresponding
emulator:

    our %arches = (
        'amd64' => '/usr/bin/kvm',
        'arm'   => '/usr/bin/qemu-system-arm',
    );

I added the arch parameter to the GetOptions() function and used
conditionals to tell whether the user is asking for a particular arch,
arm in our case. The most important parts of the generated definition to
consider are the OS type and emulator entries (the hvm value and the
$local_kvm emulator path); the kernel boot parameters (e.g. root=0800)
also need to be tailored for ARM.

As output the script generates an XML template that can be adapted to
your needs, and it can be used and tested with tools like libvirt's
virsh, so it is useful independently of Eucalyptus. I managed to get to
this point before the midterm evaluation.

Now that we had the XML wrapper almost done, the next task was to make
the handler call pass the arch as an argument to the script, so the
image is loaded with the proper settings and we are able to run it.

Eucalyptus doesn't have an arch field for NC instances. So, after
approaching the Eucalyptus developers, with whom we had already been
interacting, I settled on my proposal of extending the ncInstance struct
with an arch field (util/data.h). The ncInstance_t struct stores the
metadata for the instance being created. It is used for storing runtime
data as well as network configuration and more. It was indeed the right
place to add a new field. I did so by creating the archId field.
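To make the intended flow concrete, here is a minimal, self-contained
sketch of the idea: the handler reads the instance's arch field and
passes it down to the XML generator through the new --arch option.
Everything below (the cut-down struct, the helper, the paths) is
illustrative, not the actual Eucalyptus code:

    /* Illustrative sketch of the NC's KVM handler passing the
       instance's arch to the gen_kvm_libvirt_xml wrapper.
       Not Eucalyptus's actual code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHAR_BUFFER_SIZE 512

    typedef struct {
        char instanceId[CHAR_BUFFER_SIZE];
        char archId[CHAR_BUFFER_SIZE];    /* the new field */
    } fake_instance;

    /* builds the wrapper invocation; --arch is appended only when set */
    static int gen_libvirt_xml(const fake_instance *inst, const char *out)
    {
        char cmd[1024];
        if (inst->archId[0] != '\0')
            snprintf(cmd, sizeof cmd,
                     "tools/gen_kvm_libvirt_xml --arch %s > %s",
                     inst->archId, out);
        else
            snprintf(cmd, sizeof cmd,
                     "tools/gen_kvm_libvirt_xml > %s", out);
        return system(cmd);
    }

    int main(void)
    {
        fake_instance inst;
        memset(&inst, 0, sizeof inst);
        strncpy(inst.instanceId, "i-12345", CHAR_BUFFER_SIZE - 1);
        strncpy(inst.archId, "arm", CHAR_BUFFER_SIZE - 1);
        return gen_libvirt_xml(&inst, "/tmp/instance.xml");
    }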
Now I needed to make sure the arch information is stored and later used
in the libvirt call.

    typedef struct ncInstance_t {
        char instanceId[CHAR_BUFFER_SIZE];
        char imageId[CHAR_BUFFER_SIZE];
        char imageURL[CHAR_BUFFER_SIZE];
        char kernelId[CHAR_BUFFER_SIZE];
        char kernelURL[CHAR_BUFFER_SIZE];
        char ramdiskId[CHAR_BUFFER_SIZE];
        char ramdiskURL[CHAR_BUFFER_SIZE];
        char reservationId[CHAR_BUFFER_SIZE];
        char userId[CHAR_BUFFER_SIZE];
        char archId[CHAR_BUFFER_SIZE];    /* new: target architecture */
        int retries;

        /* state as reported to CC & CLC */
        char stateName[CHAR_BUFFER_SIZE]; /* as string */
        int stateCode;                    /* as int */

        /* state as NC thinks of it */
        instance_states state;

        char keyName[CHAR_BUFFER_SIZE*4];
        char privateDnsName[CHAR_BUFFER_SIZE];
        char dnsName[CHAR_BUFFER_SIZE];
        int launchTime;      // timestamp of RunInstances request arrival
        int bootTime;        // timestamp of STAGING->BOOTING transition
        int terminationTime; // timestamp of when resources are released (->TEARDOWN transition)

        virtualMachine params;
        netConfig ncnet;
        pthread_t tcb;

        /* passed into NC via runInstances for safekeeping */
        char userData[CHAR_BUFFER_SIZE*10];
        char launchIndex[CHAR_BUFFER_SIZE];
        char groupNames[EUCA_MAX_GROUPS][CHAR_BUFFER_SIZE];
        int groupNamesSize;

        /* updated by NC upon Attach/DetachVolume */
        ncVolume volumes[EUCA_MAX_VOLUMES];
        int volumesSize;
    } ncInstance;

With that in place, what was left was modifying the functions that
store/update the ncInstance data, and also the libvirt function that
calls the gen_kvm_libvirt_xml script. I modified the following files:

- util/data.c
- node/handlers_kvm.c
- node/handlers.{c,h}
- node/test.c
- node/client-marshall-adb.c

Most important is the allocate_instance() function, which is in charge
of setting up the instance metadata and preparing it to be passed to the
handler, and then to the hypervisor through libvirt's API. The function
now has a new archId parameter as well, to keep coherence with the field
name. It also handles whether the field is set or not. We (the
Eucalyptus developers and I) haven't settled on whether to initialize
this field with a default value or not. I stepped up and initialized it
with a NULL value for this string-type var.

    if (archId != NULL)
        strncpy(inst->archId, archId, CHAR_BUFFER_SIZE);
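One side note on the snippet above: strncpy() does not NUL-terminate
the destination when the source fills the whole buffer, so a defensive
variant of the same guarded copy reserves a byte for the terminator. A
minimal sketch, not the actual Eucalyptus code:

    #include <string.h>

    #define CHAR_BUFFER_SIZE 512

    /* guarded copy as above, but forcing a terminating NUL */
    static void set_arch_id(char dst[CHAR_BUFFER_SIZE], const char *archId)
    {
        if (archId != NULL) {
            strncpy(dst, archId, CHAR_BUFFER_SIZE - 1);
            dst[CHAR_BUFFER_SIZE - 1] = '\0';
        }
    }

    int main(void)
    {
        char buf[CHAR_BUFFER_SIZE] = "";
        set_arch_id(buf, "arm");
        return 0;
    }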
In any case, the value is stored only if it is passed to the function.
This is essential so as not to break existing functionality and to keep
consistency for later releases. With this we almost finished the part of
extending Eucalyptus to support ARM images. It took quite long to get to
that point. Next step: test.

As I mentioned in my previous report, the current Eucalyptus packaging
has issues. From the beginning I approached the pkg-eucalyptus team,
who were very helpful, and I set myself up and joined the pkg team with
commit access to the repo. Even though GSoC is not intended for
packaging labour, I needed to clear things up on that side, because what
we want is adoption, so that anyone, say a DSA or a user, could set up a
cloud that has support for running arm instances under the amd64 arch.

Over the weeks between the mid-term evaluation and last week I worked on
that front. Results were good, as you can see on the pkg-eucalyptus
mailing list and in the SVN repo.
Eucalyptus has a couple of issues that, because of its nature and
complexity, triggered build errors. Those issues came from the Java side
of the project, and a few from the C side. The C-side part was a problem
with the AXIS2 JAR used to generate stubs for later inclusion at build
time. There is no definitive solution to date, because one of the
Makefiles, gatherlog/Makefile, issues a subshell call that doesn't catch
the AXIS_HOME environment variable. I worked around this by defining it
as a session env var from my .zshrc. The other problem is related to
Java library versioning and most likely (we have to investigate further)
to Groovy versioning.

As I explained, Eucalyptus has three components written in Java. The
Java code uses
the AXIS2 library extensively for implementing web-service calls, the
Groovy language for the JVM, and many other Java libraries. The
Eucalyptus open source version ships in both online and offline
variants; the difference is basically that the offline variant ships the
Java libraries' JAR files inside the source code, so you can build with
all the required deps.

Our former packaging used a file to list which Java libraries the
package depends on, and it symlinked them using system-wide installed
packages. This, rather than helping out, triggered a lot of problems
that I explain in more detail on the pkg-eucalyptus ML. I worked around
this by telling debian/rules not to use that file to symlink and instead
use the existing JARs from the source. Last weekend I managed to get to
that point and built the software successfully under Debian. I did that
in order to have a complete build cycle for my changes, and then test.
In fact, today I spotted and fixed a couple of bugs there.

What's available now
--------------------
To date we have a Eucalyptus branch where my patches are sent. The level
of extension is nearly complete; more testing is needed. One missing bit
is to change the UI so that the user selects which arch her image is,
and that value is passed to the corresponding functions in the Node
Controller component.

We also have an ARM image that can be tested and later automated. I
recently learned about David Went's VMBuilder repo that extends Ubuntu's
vmbuilder to support image creation. I might talk to him about adapting
the tool to support creation of arm images and using it to upload to a
Eucalyptus cloud.

I have also created a wiki page with the different bits I was working
on. I still need to craft it to be more educational; I guess I am going
to use this report as a source :)

We also had stronger cooperation with the Eucalyptus project and
involved their developers in the packaging and in this project, which
is something I consider a great outcome for this project and for the
GSoC goals: to attract more people to contribute to FLOSS. I have also
been contacted by people from the Panda Project, and we might cooperate
as well.

Future work
-----------
I plan to keep working on the project regardless of the GSoC ending.
Our goal as Debian could be integrating this into the Eucalyptus
upstream branch; we have good relations with them, so expect news from
that side. I also plan to keep working with the Eucalyptus team and
related software in Debian, given that I am familiar with the tools and
projects.

I am also planning to advocate SoC for Debian at my faculty. To date we
have already had 4 students, IIRC, that have participated in the past.

Project challenges and lessons
------------------------------
During the project I faced many challenges, both on the personal side
and on the technical one. This part is a bit personal, but I expect we
can learn from it.

The first challenge was the "where do I sit" problem. My project was
particular, indeed quite different from the others in its nature. I had
to work on software that is not Debian's and then come and say "Hey, we
have this nice tool for you, come test it!". So, indeed, my focus was on
understanding that tool, Eucalyptus, and not much on looking back at
Debian, because I didn't feel I had something to advertise yet, and
indeed I got the most valuable and useful feedback from Eucalyptus
developers rather than Debian's, which in this case is OK, IMHO. I had
great mentoring. This situation probably looked like a bit of isolation
on my side, from Debian's POV, but it was not intended; it's just how it
needed to be, IMHO. I felt that I wasn't contributing directly to the
project. I spoke about this with Steffen a couple of times. I'd like to
say that it was also my concern, but I probably failed at communicating
it: the second challenge.

The third challenge was more technical and related to the first one.
Since I
was not writing any code for a Debian-native package/program, but
instead for another project, I faced the question of where to publish my
contributions. Eucalyptus had some issues managing their FLOSS repo, and
the current development one is outdated compared to the -src-offline
2.0.3 release. The project didn't fit the pkg-eucalyptus repo either; we
even thought of branching the existing trunk and shipping my contribs as
quilt patches, but it involved a lot more work and burn in reworking
something that has issues now. Later on I branched the bzr
eucalyptus-devel Launchpad repo and synced it with the src-offline code.
I was slow to react on this, I can say.

The fourth challenge was also technical, and I am still dealing with it.
I was looking to
set up a machine to set up a cloud and test my branch. The Dean of my
faculty kindly offered a machine, but I never managed to get things
arranged with the technical team. Steffen and I discussed this and
evaluated options; in the end we settled on pushing adoption rather than
showcasing it. Since for adoption we need a use case, I have spent some
SoC money on hardware for doing this. I should have news this week on
that side :)

Finally, I'd like to thank everyone on the Debian SoC team for this
opportunity to participate. I'd like to thank Steffen for his effort in
arranging things for me; Graziano from Eucalyptus for his advice on
working with the software's code; the different people from Eucalyptus I
interacted with who showed interest in the project's success; my
classmates and teachers from the Computer Science School at UCSP in
Arequipa, Peru; friends from the computing community in Peru (SPC); and
finally my family. Thank you, I learned a lot, especially on the
personal side. As Randy Pausch put it: "The brick walls are there to
give us a chance to show how badly we want something."

Resources
---------
Please see the Wiki page[1] for any reference to the project's resources.

1- http://wiki.debian.org/SummerOfCode2011/BuildWithEucalyptus/ProjectLog

Best regards,
Rudy

5 August 2011

Rudy Godoy: Compute Clusters Integration for Debian Development and Building Report 4

Hello, this is the fourth report for the project. First, I'd like to
apologize for being late; I took a short trip over the weekend and
missed the deadline. This time I have good news for my project. As
expected, I have made the necessary changes to have Eucalyptus support
ARM images. The approach was extending and exposing an arch field that
will be used by libvirt's XML domain file. I am currently working on
having the involved functions work with the new extension, and I expect
to have this done by the middle of next week.

What was done so far
--------------------
- Extended the ncInstance_t struct in util/data.h, adding the archId
  field. This struct keeps the information regarding a node (NC),
  say RAM, kernel, etc. Currently it has no way to set an arch to be
  used later.

    typedef struct ncInstance_t {
        ...
        char reservationId[CHAR_BUFFER_SIZE];
        char userId[CHAR_BUFFER_SIZE];
        char archId[CHAR_BUFFER_SIZE]; // arch field added
        int retries;
        ...

  The Eucalyptus team was OK with this approach, which I proposed a few
  weeks ago. For all my changes the approach is to maintain
  compatibility with existing features. We still have to define whether
  we set a default value, say amd64, or keep it unset until the user
  sets one. For now we will be supporting amd64 and arm as valid values.
- Modified the allocate_instance() function to support the archId
  field. This involved adding a new parameter, detecting whether it is
  present, and setting the value accordingly. This function is later
  used by the doRunInstance() function in node/handlers_kvm.c.
- Modified the involved functions that are called by the KVM handlers
  and result in passing parameters to the XML file generator used by
  libvirt and KVM, to have the image running under Eucalyptus. The new
  parameters and field are not required and are only used if they are
  set with a value. The archId parameter has been added as the last one,
  so existing function calls keep working. This could be rearranged
  later, but that will involve a more extensive revision with the
  Eucalyptus developers.
- Set up a local git repo in order to keep track of the changes to
  Eucalyptus. Since this part is a sort of Eucalyptus branch, I took
  this approach. Steffen and I are discussing whether to release this as
  a pkg-eucalyptus branch (as dpatch, using the existing packaging).
  Either way, I am putting the diff files from my git repo on the wiki
  and will put them on my site too. I plan to use git-svn to send them
  later to the pkg-eucalyptus repo once we are settled on this.
- Updated the project's wiki[1] page with information regarding all
  that's involved in my work, so it can be replayed or resumed later.

1- http://wiki.debian.org/SummerOfCode2011/BuildWithEucalyptus/ProjectLog

What I plan to do over the next weeks
-------------------------------------
- Finish the modifications to the relevant functions. Expected deadline:
  next week.
- Do integration tests:
  - Write a test program; there's an example under node/ that could be
    helpful.
  - Generate the XML domain definition from the Eucalyptus functions.
    The full cycle: handler_kvm -> allocate_instance ->
    get_instance_xml -> have the XML file properly generated ->
    qemu-kvm runs the instance using the proper emulator (arm in our
    case).
- Test running the image I worked on at the project's beginning.
- Fix bits and cooperate with the pkg-eucalyptus team on the packaging.
- Talk to Eucalyptus so they adopt our patches.
- Rock and roll :)

What would be nice to have after the project ends
-------------------------------------------------
- The Debian packaging *needs* work. I have already discussed this on
  the pkg-eucalyptus mailing list. I have also committed myself to help
  on that side even after the SoC ends.
- Talk with upstream and Ubuntu to coordinate the packaging efforts and
  modifications to the software.
- Get adopters to use our work :)

By now, a few weeks from finishing the SoC, I can say that it's been a
good experience and, as I stated, I am planning to keep maintaining and
working on the software after that. I have already joined the
pkg-eucalyptus team and plan to contribute actively. I am sad I didn't
manage to get to DebConf this year, time was a constraint again;
hopefully we can meet in 2012! I guess that's all for now!

Best regards,
Rudy

5 July 2011

Rudy Godoy: Compute Clusters Integration for Debian Development and Testing Report 3

Hello, this is the third report regarding my project. As usual, a quick
summary first. Since my last report I have progressed much less than I
would like but, as also stated previously, I am finishing the semester,
and between exams and projects it takes almost all my time. However, I
am free of academic duties next week and I will double the speed on the
project.

What was done so far
--------------------
- Started working on the modifications to the Eucalyptus code, as
  expected. I have almost completed the code for the libvirt XML Domain
  definition. I abstracted the wrapper in order to support a broader
  number of arches; previously it was tailored to supporting x86/amd64
  based machines. The idea is to have a list of supported arches and
  generate the XML file according to that. For ARM, some parameters and
  elements are different from the others. This would allow people to
  support other architectures, provided they are well supported by
  KVM-QEMU. I also started working on changes to util/data.c to make
  sure the correct parameters are being passed to the XML generator and
  to the libvirt KVM calls.
- Defined and started to set up the testing / demo environment. This
  basically involves having one machine running KVM as the hypervisor
  so the Eucalyptus handler uses it to run ARM kvm-qemu images.
- Over this weekend I will be publishing the code changes on my
  website. Steffen suggested creating some docs regarding the progress
  too. I will do that once I feel I am on track again with my current
  pending work, that is, coding.

What I'll be working on over the next two weeks
-----------------------------------------------
- My primary goal is to catch up on the work pace and finish the
  necessary code for Eucalyptus to support ARM images. This means
  finishing the modifications to:
  - node/handlers_kvm.c - handles NC creation. Few changes are needed
    here from what I have identified already.
  - util/data.{c,h} - handles instance creation. This is the part where
    most of the work shall be done. I have started to work here already.
- Test the code I have completed already and send the patches to the
  teams involved. To date I am still working in a local environment,
  because no usable patches have been generated yet, just little pieces
  here and there.
- Since this is a sort of gluing project, that is, code here and there
  made to work together, I am in the process of finding a machine to be
  able to test parts of it once they are complete. I am still contacting
  people to get such a machine; in any case I guess I can manage to get
  one myself. I can't use my current machine, since that would involve
  reinstalling and working at the OS level.
- For the project's demo I'll use both machines, one as the front-end
  and the KVM one for NCs (nodes). That's the goal.

What is required after that
---------------------------
- I'd like to release a package that includes my changes, so it can be
  installed either from official packages or from my own packages. This
  involves:
  - Updating the packaging bits in the pkg-eucalyptus repo.
  - Talking to upstream about it.
- Write the reference Steffen suggested.

I believe that I can finish the first part in one week, because of the
work I have done in the past weeks to understand the Eucalyptus code. I
will also be talking more to the Eucalyptus team and to the maintainers
about these changes. I guess that's all for now.

20 June 2011

Rudy Godoy: Compute Clusters Integration for Debian Development and Building Report 2

Hello, this is the 2nd report regarding my project. I'll offer a brief
summary of how we are doing and then the details, so you don't have to
read everything.
I have to start by saying that over the last two weeks I made a lot of
progress on the project. The most important part: Debian ARM image
integration with Eucalyptus is now clear. The work to be done over the
next two weeks is pretty much all coding to achieve what we've committed
to: supporting Debian ARM images on Eucalyptus.
What was done so far
--------------------
- I have identified and mapped what needs to be modified in the
  eucalyptus source to support ARM. I'm putting the details in the
  next section. I think this is the most crucial part, and it is now
  successfully achieved.
- I started a Project Log[1] on the wiki, so anyone can check and
  eventually use the information I've already gathered. The log is being
  updated constantly. You can also check what I've done and what I'm
  working on.
- We got feedback[2] from the Eucalyptus developers and packaging
  team regarding the ultimate source for the debian/ packaging bits.
  This was quite important because in the past I had managed to build
  Eucalyptus from source using the debian/ included in the upstream
  tarball. However, the people in charge told us that they were using a
  different one, which indeed was different and resulted in different
  output than I expected. I'm still working on fixing the bits to get a
  clean build.
- I've joined the pkg-eucalyptus team and I've been given commit rights
  to the repo[2].
1- http://wiki.debian.org/SummerOfCode2011/BuildWithEucalyptus/ProjectLog
2- http://lists.alioth.debian.org/pipermail/pkg-eucalyptus-maintainers/2011-June/000289.html
What I'll be working over the next two weeks
--------------------------------------------
- Heavily code the different bits that I've identified as needing to be
  changed. They are:
  C code:
  - node/handlers_kvm.{c,h} - handles NC creation. NCs are Node
  Controllers in Eucalyptus; the code is in charge of node creation. It
  prepares the "machine" that is able to run a compatible image.
  - util/data.{c,h} - handles instance creation. The code over there
  implements the functions node/handlers_* use, like instance creation
  and libvirt's API parameter passing. Since libvirt's parameters for
  ARM vary from x86's, we need to support that.
  Perl code:
  - tools/gen_kvm_libvirt_xml - wrapper tool for creating the KVM/QEMU
  libvirt XML Domain definition. This needs to be heavily
  changed/updated - redesigned - to support the ARM architecture.
- Depending on the previous task's progress: modify the Debian packaging
  bits for Eucalyptus and push them to the pkg-eucalyptus team's
  repository. This is not crucial for my project's success, but I'd like
  to work with official Debian builds. I've yet to commit the
  Eucalyptus team's version of the debian/ dir. I expect to have fixed
  the bits to get a successful and clean build, then commit.
Other plans
-----------
- I expect a slowdown in the work pace over the second week due to
  end-of-semester exams in the week from July 3 to July 9. After that
  week I'm on vacation and will be fully dedicated to the project.
- Code changes to the Eucalyptus source will be pushed upstream. I would
  not like to keep a separate repo for this. I'll talk about this with
  Steffen. Changes to the Debian bits will be pushed to the appropriate
  repos.
Other bits
----------
When the project started I noted it was too vague for some people and
its results/output were unclear. This concerned me. However, once the
project started and we got a lot of feedback, things got much clearer,
and I'd like to take the opportunity to explain it a bit.
Currently, developers targeting ARM devices are struggling[1] to have
a development environment. There are missing bits that sometimes are not
in place and result in a hard time for them. Also, due to the growth of
mobile devices and vendors using Linux-based[2] operating systems, it's
a nice opportunity for Debian to keep its well-known fame for supporting
every arch under the sun. How does this project fit in? By offering
developers the ability to run ARM-based Debian images on x86/x86_64
CPUs, it would let them be more productive and help Debian be more
widely adopted as a development environment.
I can say that I'm very enthusiastic about this opportunity and I'm
certain that the outcome will result in more adoption. Even the Ubuntu
people are now pushing more and more effort toward official ARM support,
due to the facts I've stated.
1- http://lists.debian.org/debian-arm/2011/06/msg00020.html
2- http://events.linuxfoundation.org/events/android-builders-summit/back
