Search Results: "James Bromberger"

6 November 2017

James Bromberger: Web Security 2017

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be Webmaster (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/JDV.com at a time when security became important as commerce boomed online.

At the dawn of the web era, backwards compatibility with older web clients (browsers) was deemed important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring cardholder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that's also the maximum version typically available today (TLS 1.3 is in draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can speak the TLS 1.2 protocol (and the encryption ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. For the majority of people, Chrome is their choice, and the majority of those auto-update on every release. Many are pushing to be compliant with the 2018 PCI DSS 3.2 as early as possible; your logging of negotiated protocols and ciphers will show if your client base is ready as well. I've already worked with one government agency to demonstrate they were ready, and have already helped disable TLS 1.0 and 1.1 on their public-facing web sites (and previously SSL v3). We've removed RC4 ciphers, 3DES ciphers, and enabled ephemeral-key ciphers to provide forward secrecy.

Web developers (writing JavaScript and using various frameworks) can rejoice: the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn it on, but for some reason it is off by default). This makes JavaScript code smaller, as it doesn't need conditional code to work around the quirks of those older clients. And as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.

There are two tools I am currently using to help in this battle to improve web security. One is SSLLabs.com, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), the chain of trust (certificate), and a new addition checking DNS for CAA records (I and others piled onto a feature request for AWS Route53 to support these). The second tool is Scott Helme's SecurityHeaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.

There's a really important reason why these tools are good: they are maintained. As new recommendations on ciphers, protocols, signature algorithms or other actions emerge, these tools are updated. And they are produced by very small but agile teams, often one-person teams, without the bureaucracy (and lag) associated with large enterprise tools. But they shouldn't be used blindly. These services make suggestions, and you should research them yourselves.
For some, not all the recommendations may meet your personal risk profile. Personally, I'm uncomfortable with Public-Key-Pins, so that can wait for a while; indeed, Chrome has now signalled they will drop this. So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we're getting a side-effect: a concrete reason for drawing a line under how far back our backward compatibility must stretch, and the ability to have the web client assist in ensuring the security of content.
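As a concrete but hedged illustration (my sketch, not the post's actual configuration), the pieces described above map onto an Apache httpd setup with mod_ssl and mod_headers roughly as follows; the file path, log format name and header values are placeholder assumptions:

# e.g. /etc/apache2/conf-available/hardening.conf -- a sketch only
# Permit only TLS 1.2; drop SSLv3, TLS 1.0 and TLS 1.1
SSLProtocol -all +TLSv1.2
# Prefer ephemeral-key (forward secrecy) suites; exclude RC4 and 3DES
SSLCipherSuite ECDHE+AESGCM:ECDHE+AES:!RC4:!3DES:!aNULL
SSLHonorCipherOrder on
# Log the negotiated protocol and cipher, to see when your client base is ready
LogFormat "%h %l %u %t \"%r\" %>s %b %{SSL_PROTOCOL}x %{SSL_CIPHER}x" ssl_combined
CustomLog ${APACHE_LOG_DIR}/ssl_access.log ssl_combined
# Ask modern clients to enforce security on their side (example values only)
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
Header always set Content-Security-Policy "default-src 'self'"
Header always set X-Content-Type-Options "nosniff"
Header always set X-Frame-Options "DENY"

Re-testing with SSLLabs.com and SecurityHeaders.io after a change like this is the quickest way to confirm it did what you intended.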

31 October 2016

James Bromberger: The Debian Cloud Sprint 2016

I'm at an airport, about to board the first of three flights across the world, from timezone +8 to timezone -8. I'll be in transit 27 hours to get to Seattle, Washington state. I'm leaving my wife and two young children behind. My work has given me a day's worth of leave under the Corporate Social Responsibility program, and I'm taking three days annual leave, to do this. 27 hours each way in transit, for 3 days on the ground. Why?

Backstory

I started playing in technology as a kid in the 1980s; my first PC was a clone (as they were called) 286 running MS-DOS. It was clunky, and the most I could do to extend it was to write batch scripts. As a child I had no funds for commercial compilers, no network connections (this was pre-Internet in Australia), no access to documentation, and no idea where to start programming properly. It was a closed world. I hit university in the summer of 1994 to study Computer Science and French. I'd heard of Linux, and soon found myself installing the Linux distributions of the day. The freedom of the licensing, the encouragement to use, modify and share, was in stark contrast to the world of consumer PCs of the late 1980s. It was there at the UCC at UWA that I discovered Debian. Some of the kind network/system admins at the University maintained a Debian mirror on the campus LAN, updated regularly and always online. It was fast, and more importantly, free for me to access. Back in the 1990s, bandwidth in Australia was incredibly expensive. The vast distances of the country meant that bandwidth was scarce. Telcos were in races to put fiber between Perth and the Eastern States, and without that in place, IP connectivity was constrained, and thus costly. Over many long days and nights I huddled down, learning window managers, protocols, programming and scripting languages. I became a system/network administrator, web developer, dev ops engineer, etc. My official degree workload, algorithmic complexity, protocol stacks, was interesting, but fiddling with Linux-based implementations was practical.

Volunteer

After years of consuming the output of Debian and running many services with it, I decided to put my hand up and volunteer as a Debian Developer: it was time to give back. I had benefited from Debian, and I saw others benefit from it as well. As the 2000s started, I had my PGP key in the Debian keyring. I had adopted a package and was maintaining it: load-balancing Apache web servers. The web was yet to expand to the traffic levels you see today; most web sites were served from one physical web server. Site Reliability Engineering was a term not yet dreamed of. What became more apparent was the applicability of Linux, Open Source, and in my line of sight Debian, to a wider community beyond myself and my university peers. Debian was being used to revive recycled computers that were being donated to charities; in some cases, commercial software licenses could not be transferred with the hardware that was no longer required by organisations that had upgraded. It appeared that Debian was being used as a baseline above which society in general had access to fundamental computing and network services. The removal of subscriptions and registrations, and the encouragement of distribution, meant this occurred at rates that could never be tracked, and more importantly, the consensus was that it should not be automatically tracked. The privacy of the user is paramount: more important than some statistics for the Developer to ponder.
When the Bosnia-Herzegovina war ended in 1995, I recall an email from academics there, having found some connectivity, writing to ask if they would be able to use Debian as part of their re-deployment of services for the tertiary institutions in the region. This was an unnecessary request, as Debian GNU/Linux is freely available, but it was a reminder that for the country to have tried to procure commercial solutions at that time would have been difficult. Instead, those that could do the task just got on with it. There have been many similar projects where grass-roots organisations (non-profits, NGOs, and even just loose collectives of individuals) have turned to Linux, Open Source, and sometimes Debian to solve their problems. Many fine projects have been established to make technology accessible to all, regardless of race, gender, nationality, class, or any other label society has used to divide humans. Big hat tip to Humanitarian Open Street Map and the Serval Project. I've always loved Debian's position as the Universal operating system. Its vast range of packages and the wide range of computing architectures it supports mean that quite often a litmus test of "is project X a good project?" was met with "is it packaged for Debian?". That wide range of architectures has meant that administrators of systems had fewer surprises and a faster adoption cycle when changing platforms, such as the switch from x86 32-bit to x86 64-bit.

Enter the Cloud

I first laid eyes on the AWS Cloud in 2008. It was nothing like the rich environment you see today. The first thing I looked for was my favourite operating system, so that what I already knew and was familiar with was available in this environment, to minimise the learning curve. However there were no official images, which was disconcerting. In 2012 I joined AWS as an employee. Living in Australia, I was hired into the field sales team as a Solution Architect: a sort of pre-sales tech with a customer-focused depth in security. It was a wonderful opportunity, and I learnt a great deal. It also made sense (to me, at least) to do something about getting Debian's images blessed. It turned out that I almost had to define what that meant: images endorsed by a Debian Developer, handed to the AWS Marketplace team. And so since 2013 I have done so, keeping track of Debian's releases across the AWS regions, collaborating with other Debian folk on other cloud platforms to attempt a unified approach to generating and maintaining these images. This included (for a stint) generating them into the AWS GovCloud Region, and still into the AWS China (Beijing) Region, the other side of the so-called Great Firewall of China.

So why the trip?

We've had focus groups at DebConf (the Debian conference) around the world, but it's often difficult to get the right group of people in the same rooms at the same time. So the proposal was to hold a focused Debian Cloud Sprint. Google was good enough to host this, for all the volunteers across all the cloud providers. Furthermore, donated funds were found to secure the travel for a set of people to attend who otherwise could not. I was lucky enough to be given a flight. So here I am, in the terminal in Australia: my kids are tucked up in bed, dreaming of the candy they just collected for Halloween. It will be a draining week I am sure, but if it helps set and improve the state of Debian then it's worth it.

12 June 2015

James Bromberger: Logical Volume Management with Debian on Amazon EC2

The recent AWS introduction of the Elastic File System gives you an automatic grow-and-shrink capability as an NFS mount, an exciting option that takes away the previous overhead in creating shared block file systems for EC2 instances. However it should be noted that the same auto-management of capacity is not true of the EC2 instance's Elastic Block Store (EBS) block storage disks; sizing (and resizing) is left to the customer. With current 2015 EBS, one cannot simply increase the size of an EBS volume as the storage becomes full; (as at June 2015) an EBS volume, once created, has a fixed size. For many applications, that lack of a resize function on its local EBS disks is not a problem; many server instances come into existence for a brief period, process some data and then get Terminated, so long-term management is not needed. However for a long-term data store on an instance (instead of S3, which I would recommend looking at closely for durability and pricing fit), where I want to harness the capacity to grow (or shrink) the disk for my data, I will need to leverage some slightly more advanced disk management. And just to make life interesting, I wish to do all this while the data is live and in use, if possible.

Enter: Logical Volume Management, or LVM. It's been around for a long, long time: LVM 2 made its debut around 2002-2003 (2.00.09 was Mar 2004), and LVM 1 was many years before that, so it's pretty mature now. It's a powerful layer that sits between your raw storage block devices (as seen by the operating system) and the partitions and file systems you would normally put on them. In this post, I'll walk through the process of getting set up with LVM on Debian in the AWS EC2 environment, and how you'd do some basic maintenance to add and remove (where possible) storage with minimal interruption.

Getting Started

First, a little prep work for a new Debian instance with LVM. As I'd like to give the instance its own ability to manage its storage, I'll want to provision an IAM Role for EC2 Instances for this host. In the AWS console, visit IAM, Roles, and create a new Role named EC2-MyServer (or similar); at this point I'll skip giving it any actual privileges (later we'll update this). As at this date, we can only associate an instance role/profile at instance launch time. Now I launch a base-image Debian EC2 instance with this IAM Role/Profile; the root file system is an EBS Volume. I am going to put the data that I'll be managing on a separate disk from the root file system. First, I need to get the LVM utilities installed. It's a simple package to install: the lvm2 package. From my EC2 instance I need to get root privileges (sudo -i) and run:
apt update && apt install lvm2
After a few moments, the package is installed. I'll choose a location that I want my data to live in, such as /opt/. I want a separate disk for this task for a number of reasons:
  1. Root EBS volumes cannot currently be encrypted using Amazon's Encrypted EBS Volumes. If I want to use the AWS encryption option, it'll have to be on a non-root disk. Note that instance-size restrictions also exist for EBS Encrypted Volumes.
  2. It's possibly not worth making a snapshot of the Operating System at the same time as the user content data I am saving. The OS install (except the /etc/ folder) can almost entirely be recreated from a fresh install, so why snapshot that as well (unless that's your strategy for preserving /etc, /home, etc.)?
  3. The type of EBS volume that you require may be different for different data: today (Apr 2015) there is a choice of Magnetic, General Purpose 2 (GP2) SSD, and Provisioned IO/s (PIOPS) SSD, each with different costs; and depending on our needs, we may want to select one type for our root volume (operating system), and something else for our data storage.
  4. I may want to use EBS snapshots to clone the disk to another host, without the base OS bundled in with the data I am cloning.
I will create this extra volume in the AWS console and present it to this host. I'll start by using a web browser (we'll use the CLI later) with the EC2 console. The first piece of information we need to know is where my EC2 instance is running: specifically, the AWS Region and Availability Zone (AZ). EBS Volumes only exist within their one designated AZ. If I accidentally make the volume(s) in the wrong AZ, then I won't be able to attach them to my instance. It's not a huge issue, as I would just delete the volume and try again. I navigate to the Instances panel of the EC2 Console, and find my instance in the list:
A (redacted) list of instances from the EC2 console.
Here I can see I have located an instance and it's running in US-East-1A: that's AZ A in Region US-East-1. I can also grab this with a wget from my running Debian instance by asking the MetaData server:
wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone
The returned text is simply "us-east-1a". Time to navigate to "Elastic Block Store", choose "Volumes" and click "Create":
Creating a volume in AWS EC2: ensure the AZ is the same as your instance.
You'll see I selected that I wanted AWS to encrypt this; as noted above, at this time that doesn't include the t2 family. However, you have the option of using encryption with LVM, where the customer looks after the encryption key: see LUKS. What's nice is that I can do both: have AWS Encrypted Volumes, and then use encryption on top of this; but I have to manage my own keys with LUKS, and should I lose them, then I can keep all the ciphertext! I deselected this for my example (with a t2.micro), and continued; I could see the new volume in the list as "creating", and then shortly afterwards as "available". Time to attach it: select the disk, and either right-click and choose "Attach", or from the menu at the top of the list, choose Actions -> Attach (both do the same thing).
Attaching a volume to an instance: you'll be prompted with the compatible instances in the same AZ.
At this point your EC2 instance will notice a new disk; you can confirm this with dmesg | tail, and you'll see something like:
[1994151.231815]  xvdg: unknown partition table
(Note the time-stamp in square brackets will be different.) Previously at this juncture you would format the entire disk with your favourite file system, mount it in the desired location, and be done. But we're adding in LVM here, between this raw device and the filesystem we are yet to make.

Marking the block device for LVM

Our first operation with LVM is to put a marker on the volume to indicate it's being used for LVM, so that when we scan the block device, we know what it's for. It's a really simple command:
pvcreate /dev/xvdg
The device name above (/dev/xvdg) should correspond to the one we saw in the dmesg output above. The output of the command is rather straightforward:
  Physical volume "/dev/xvdg" successfully created
Checking our EBS Volume

We can check on the EBS volume, which LVM sees as a Physical Volume, using the pvs command.
# pvs
  PV  VG  Fmt  Attr PSize PFree
  /dev/xvdg  lvm2 ---  5.00g 5.00g
Here we see the entire disk is currently unused.

Creating our First Volume Group

Next, we need to make an initial LVM Volume Group which will use our Physical Volume (xvdg). The Volume Group will then contain one (or more) Logical Volumes that we'll format and use. Again, a simple command creates the volume group by giving it the first physical device it will use:
# vgcreate  OptVG /dev/xvdg
  Volume group "OptVG" successfully created
And likewise we can check our set of Volume Groups with vgs :
# vgs
  VG  #PV #LV #SN Attr  VSize VFree
  OptVG  1  0  0 wz--n- 5.00g 5.00g
The attribute flags here indicate this is writable, resizable, and allocating extents in normal mode. Let's proceed to make our (first) Logical Volume in this Volume Group:
# lvcreate -n OptLV -L 4.9G OptVG
  Rounding up size to full physical extent 4.90 GiB
  Logical volume "OptLV" created
You'll note that I have created our Logical Volume at almost the same size as the entire Volume Group (which is currently one disk), but I left some space unused: the reason for this comes down to keeping some space available for any jobs that LVM may want to run on the disk, and this will be used later when we want to move data between raw disk devices. If I wanted to use LVM for snapshots, then I'd want to leave more space free (unallocated). We can check on our Logical Volume:
# lvs
  LV  VG  Attr  LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  OptLV OptVG -wi-a----- 4.90g
The attributes indicate that the Logical Volume is writeable, is allocating its data to the disk in inherit mode (i.e., as the Volume Group is doing), and that it is active. At this stage you may also discover we have a device /dev/OptVG/OptLV, and this is what we're going to format and mount. But before we do, we should review what file system we'll use.
Filesystems
Popular Linux file systems
Name   Shrink        Grow  Journal  Max File Sz  Max Vol Sz
btrfs  Y             Y     N        16 EB        16 EB
ext3   Y (off-line)  Y     Y        2 TB         32 TB
ext4   Y (off-line)  Y     Y        16 TB        1 EB
xfs    N             Y     Y        8 EB         8 EB
zfs*   N             Y     Y        16 EB        256 ZB
For more details see the Wikipedia comparison. Note that ZFS requires a 3rd-party kernel module or a FUSE layer, so I'll discount that here. Btrfs only went stable with Linux kernel 3.10, so with Debian Jessie that's a possibility; but for tried and trusted, I'll use ext4. The selection of ext4 also means that I'll only be able to shrink this file system off-line (unmounted). I'll make the filesystem:
# mkfs.ext4 /dev/OptVG/OptLV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1285120 4k blocks and 321280 inodes
Filesystem UUID: 4f831d17-2b80-495f-8113-580bd74389dd
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
And now mount this volume and check it out:
# mount /dev/OptVG/OptLV /opt/
# df -HT /opt
Filesystem  Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  5.1G  11M  4.8G  1% /opt
Lastly, we want this to be mounted next time we reboot, so edit /etc/fstab and add the line:
/dev/OptVG/OptLV /opt ext4 noatime,nodiratime 0 0
With this in place, we can now start using this disk. I selected here not to update the filesystem every time I access a file or folder; updates get logged as normal, but access time is just ignored.

Time to expand

After some time, our 5 GB /opt/ disk is rather full, and we need to make it bigger, but we wish to do so without any downtime. Amazon EBS doesn't support resizing volumes, so our strategy is to add a new, larger volume and remove the older one that no longer suits us; LVM and ext4's online resize ability will allow us to do this transparently. For this example, we'll decide that we want a 10 GB volume. It can be a different type of EBS volume to our original, since we're going to online-migrate all our data from one to the other. As when we created the original 5 GB EBS volume above, create a new one in the same AZ and attach it to the host (perhaps as /dev/xvdh this time). We can check the new volume is visible with dmesg again:
[1999786.341602]  xvdh: unknown partition table
And now we initialise this as a Physical Volume for LVM:
# pvcreate /dev/xvdh
  Physical volume "/dev/xvdh" successfully created
And then add this disk to our existing OptVG Volume Group:
# vgextend OptVG /dev/xvdh
  Volume group "OptVG" successfully extended
We can now review our Volume group with vgs, and see our physical volumes with pvs:
# vgs
  VG  #PV #LV #SN Attr  VSize  VFree
  OptVG  2  1  0 wz--n- 14.99g 10.09g
# pvs
  PV  VG  Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 a--  5.00g 96.00m
  /dev/xvdh  OptVG lvm2 a--  10.00g 10.00g
There are now 2 Physical Volumes; we have a 4.9 GB filesystem taking up space, leaving 10.09 GB of unallocated space in the VG. Now it's time to stop using the /dev/xvdg volume for any new requests:
# pvchange -x n /dev/xvdg
  Physical volume "/dev/xvdg" changed
  1 physical volume changed / 0 physical volumes not changed
At this time, our existing data is on the old disk, and our new data is on the new one. It's now that I'd recommend running GNU screen (or similar) so you can detach from this shell session and reconnect, as the process of migrating the existing data can take some time (hours for large volumes):
# pvmove /dev/xvdg /dev/xvdh
  /dev/xvdg: Moved: 0.1%
  /dev/xvdg: Moved: 8.6%
  /dev/xvdg: Moved: 17.1%
  /dev/xvdg: Moved: 25.7%
  /dev/xvdg: Moved: 34.2%
  /dev/xvdg: Moved: 42.5%
  /dev/xvdg: Moved: 51.2%
  /dev/xvdg: Moved: 59.7%
  /dev/xvdg: Moved: 68.0%
  /dev/xvdg: Moved: 76.4%
  /dev/xvdg: Moved: 84.7%
  /dev/xvdg: Moved: 93.3%
  /dev/xvdg: Moved: 100.0%
During the move, checking the Monitoring tab in the AWS EC2 Console for the two volumes should show one with a large data Read metric, and one with a large data Write metric: clearly data should be flowing off the old disk, and on to the new.

A note on disk throughput

The above move was of a pretty small, and empty, volume. Larger disks will take longer, naturally, so getting some speed out of the process may be key. There are a few things we can do to tweak this.

Back to the move

Upon completion I can see that the disk in use is the new disk and not the old one, using pvs again:
# pvs
  PV  VG  Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 ---  5.00g 5.00g
  /dev/xvdh  OptVG lvm2 a--  10.00g 5.09g
So all 5 GB is now unused (compare to above, where only 96 MB was PFree). With that disk not containing data, I can tell LVM to remove the disk from the Volume Group:
# vgreduce OptVG /dev/xvdg
  Removed "/dev/xvdg" from volume group "OptVG"
Then I cleanly wipe the labels from the volume:
# pvremove /dev/xvdg
  Labels on physical volume "/dev/xvdg" successfully wiped
If I really want to clean the disk, I could choose to use shred(1) on the disk to overwrite it with random data; this can take a long time. Now that the disk is completely unused and disassociated from the VG, I can return to the AWS EC2 Console, and detach the disk:
Detaching an EBS volume from an EC2 instance.
Wait for a few seconds, and the disk is then shown as "available"; I then chose to delete the disk in the EC2 console (and stop paying for it). Back to the Logical Volume: it's still 4.9 GB, so I add 4.5 GB to it:
# lvresize -L +4.5G /dev/OptVG/OptLV
  Size of logical volume OptVG/OptLV changed from 4.90 GiB (1255 extents) to 9.40 GiB (2407 extents).
  Logical volume OptLV successfully resized
We now have 0.6 GB of free space on the Physical Volume (pvs confirms this). Finally, it's time to expand our ext4 file system:
# resize2fs /dev/OptVG/OptLV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/OptVG/OptLV is mounted on /opt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/OptVG/OptLV is now 2464768 (4k) blocks long.
And with df we can now see:
# df -HT /opt/
Filesystem  Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  9.9G  12M  9.4G  1% /opt
Automating this

The IAM Role I made at the beginning of this post is now going to be useful. I'll start by adding an IAM Policy to the Role to permit me to List Volumes, Create Volumes, Attach Volumes and Detach Volumes to my instance-id. Let's start with creating a volume, with a policy like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateNewVolumes",
      "Action": "ec2:CreateVolume",
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:AvailabilityZone": "us-east-1a",
          "ec2:VolumeType": "gp2"
        },
        "NumericLessThanEquals": {
          "ec2:VolumeSize": "250"
        }
      }
    }
  ]
}
This policy puts some restrictions on the volumes that this instance can create: only within the given Availability Zone (matching our instance), only GP2 SSD (no PIOPS volumes), and size no more than 250 GB. I'll add another policy to permit this instance role to tag volumes in this AZ that don't yet have a tag called InstanceId:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TagUntaggedVolumeWithInstanceId",
      "Action": [
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:1234567890:volume/*",
      "Condition": {
        "Null": {
          "ec2:ResourceTag/InstanceId": "true"
        }
      }
    }
  ]
}
Now that I can create (and then tag) volumes, it becomes a simple question of what else I can do to this volume. Deleting it and creating snapshots of it are two obvious options, with the corresponding policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateDeleteSnapshots-DeleteVolume-DescribeModifyVolume",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DeleteVolume",
        "ec2:DescribeSnapshotAttribute",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:ModifyVolumeAttribute"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceId": "i-123456"
        }
      }
    }
  ]
}
Of course it would be lovely if I could use a variable inside the policy condition instead of the literal string of the instance ID, but that's not currently possible. Clearly some of the more important actions I want to take are to attach and detach a volume to my instance:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1434114682836",
      "Action": [
        "ec2:AttachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:volume/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceId": "i-123456"
        }
      }
    },
    {
      "Sid": "Stmt1434114745717",
      "Action": [
        "ec2:AttachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:instance/i-123456"
    }
  ]
}
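As an aside (my sketch, not part of the original post), each of these policy documents can be attached to the EC2-MyServer role from the CLI with put-role-policy; the policy names and file names here are illustrative assumptions:

aws iam put-role-policy --role-name EC2-MyServer --policy-name CreateNewVolumes --policy-document file://create-volume.json
aws iam put-role-policy --role-name EC2-MyServer --policy-name AttachVolume --policy-document file://attach-volume.json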
Now with this in place, we can start to fire up the AWS CLI we spoke of. We'll let the CLI inherit its credentials from the IAM Instance Role and the policies we just defined.
AZ=$(wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone)
Region=$(wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone | rev | cut -c 2- | rev)
InstanceId=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
VolumeId=$(aws ec2 --region ${Region} create-volume --availability-zone ${AZ} --volume-type gp2 --size 1 --query "VolumeId" --output text)
aws ec2 --region ${Region} create-tags --resource ${VolumeId} --tags Key=InstanceId,Value=${InstanceId}
aws ec2 --region ${Region} attach-volume --volume-id ${VolumeId} --instance-id ${InstanceId} --device /dev/xvdg
and at this stage, the above manipulation of the raw block device with LVM can begin. Likewise you can then use the CLI to detach and destroy any unwanted volumes if you are migrating off old block devices.
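For completeness, here's a hedged sketch of that clean-up, reusing the variables from the snippet above; only run it after vgreduce and pvremove have released the old disk:

aws ec2 --region ${Region} detach-volume --volume-id ${VolumeId}
aws ec2 --region ${Region} wait volume-available --volume-ids ${VolumeId}
aws ec2 --region ${Region} delete-volume --volume-id ${VolumeId}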

18 January 2014

James Bromberger: Linux.conf.au 2014: LCA TV

The radio silence here on my blog has been not from lack of activity, but the inverse. Linux.conf.au chewed up the few remaining spare cycles I have had recently (after family and work), but not from organising the conference (been there, got the T-shirt and the bag). So, let's do a run-through of what has happened. LCA2014 Perth has come and gone in pretty smooth fashion. A remarkable effort from the likes of the Perth crew of Luke, Paul, Euan, Leon, Jason, Michael, and a slew of volunteers who stepped up, not to mention our interstate friends Steve and Erin, Matthew, James I, Tim the Streaming guy and others, and our pro organisers at Manhattan Events. It was a reasonably smooth ride: the UWA campus was beautiful, the lecture theatres were workable, and the Octagon Theatre was at its best when filled with just shy of 500 like-minded people and an accomplished person gracing the stage. What was impressive (to me, at least) was the effort of the AV team (which I was on the extreme edges of); videos of keynotes hit the Linux Australia mirror within hours of the event. Recording and live streaming of all keynotes and sessions happened almost flawlessly. Leon had built a reasonably robust video capture management system (eventstreamer on GitHub) to ensure that people fresh to DVswitch had nothing break so badly it didn't automatically fix itself, and all of this was monitored from the Operations Room (called the TAVNOC, which would have been the AV NOC, but somehow a loose reference to the UWA Tavern, the Tav, crept in there). Some 167 videos were made and uploaded; most of this was also mirrored on campus before the end of the conference, so attendees could load up their laptops with plenty of content for the return trip home. Euan's quick Blender work meant there was a nice intro and outro graphic, and Leon's scripting ensured that Zookeepr, the LCA conference management software, was the source of truth in getting all videos processed and tagged correctly. I was scheduled (and did give) a presentation at LCA 2014 about Debian on Amazon Web Services (on Thursday), and attended as many of the sessions as possible, but my good friend Michael Davies (LCA 2004 chair, and chair of the LCA Papers Committee for a good many years) had another role for this year. We wanted to capture some of the hallway track of Linux.conf.au that is missed in all the videos of presentations. And thus was born LCA TV. LCA TV consisted of the video equipment for an additional stream: mixer host, cameras, cables and switches, hooking into the same streaming framework as the rest of the sessions. We took over a corner of the registration room (UWA Undercroft), brought in a few stage lights, a couch, a coffee table, a seat, some extra mics, and aimed to fill the session gaps with informal chats with some of the people at Linux.conf.au: speakers, attendees and volunteers alike. And come they did. One or two interviews didn't succeed (this was an experiment), but in the end, we've got over 20 interviews with some interesting people. These streamed out live to the people watching LCA from afar, those unable to make it to Perth in early January; but they were recorded too, and we can start to watch them (see below). I was also lucky enough to mix the video for the three keynotes as well as the opening and closing, with a very capable crew around the Octagon Theatre.
As the curtain came down, and the 2014 crew took to the stage to be congratulated by the attendees, I couldn't help but feel a little bit proud and a touch nostalgic: memories from 11 years earlier when LCA 2003 came to a close in the very same venue. So, before we head into the viewing season for LCA TV, let me thank all the volunteers who organised, the AV volunteers, the Registration volunteers, and the UWA team who helped with the Octagon, networking, and the awesome CB radios hooked up to the UWA repeater that worked all the way to the airport. Thanks to the speakers who submitted proposals; the speakers who were accepted, made the journey and took to the stage; the people who attended; and the sponsors who help make this happen. All of the above helps share the knowledge, and ultimately, move the community forward. But my thanks to Luke and Paul for agreeing to stand there in the middle of all this madness and hive of semi-structured activity that just worked. Please remember this was experimental; the noise was the buzz of the conference going on around us. There was pretty much only one person on the AV kit: my thanks to Andrew Cooks, who I'll dub our sound editor, vision director, floor manager, and anything else. So who did we interview? One or two talks did not record properly, so apologies to those that are missing. Here's the playlist to start you off! Enjoy.

1 February 2013

James Bromberger: LCA 2013

LCA Past Organisers

Previous core organisers of Linux.conf.au, taken at Mt Stromlo Observatory during LCA 2013 (pic by Josh Stewart); except one of these people organised CALU, and another hasn't organised one at all!

Thanks to all the people at LCA2013 in Canberra; it was a blast! So good to see old friends and chat freely on what's hot and happening. Radia (known for STP, TRILL), Sir Tim (the web) and old friend Bdale (Debian, SPI, Freedom Box) were inspiring. As was Robert Llewellyn (Kryten, Red Dwarf), who was a complete pleasure: he wandered back and talked for a while with the volunteer video crew. Hats off to Pia for organising the TBL tour, to Mary Gardner for being awarded the Rusty Wrench, and to the team from PLUG (Euan, Jason, Leon, Luke) who stepped up to help with the video team, and to Paul who graciously accepted the help. Next up: LCA2014 Perth! Y'all come back now... it's been a decade.

6 December 2012

James Bromberger: Official Debian Images on Amazon Web Services EC2

Please Note: this article is written from my personal perspective as a Debian Developer, and is not the opinion or expression of my employer.
Amazon Web Services' EC2 offers customers a number of Operating Systems to run. There are many Linux distributions available, however in all this time there has never been an official Debian image, or Amazon Machine Image (AMI), created by Debian. For some Debian users this has not been an issue, as there are several solutions for creating your own personal AMI. However for the AWS users who wanted to run a recognised image, it has been a little confusing at times; several Debian AMIs have been made available by other customers, but the source of those images has not been "Debian". In October 2012 the AWS Marketplace engaged in discussions with the Debian Project Leader, Stefano Zacchiroli. A group of Debian Developers and the wider community formed to generate a set of AMIs using Anders Ingemann's ec2debian-build-ami script. These AMIs are published in the AWS Marketplace, and you can find the listing here: No fees are collected for Debian for the use of these images via the AWS Marketplace; they are listed here for your convenience. This is the same AMI that you may generate yourself, but this one has been put together by Debian Developers. If you plan to use this AMI, I suggest you read http://wiki.debian.org/Cloud/AmazonEC2Image, and more explicitly: SSH in as the user "admin", and then sudo -i to root.

Additional details

Anders Ingemann and others maintain a GitHub project called ec2debian-build-ami which generates a Debian AMI. This script supports several desired features, and was also updated to add in some new requirements. This means the generated image supports: Debian Stable (Squeeze; 6.0.6 at this point in time) does not contain the cloud-init package, and neither does Debian Testing (Wheezy). A fresh AWS account (ID 379101102735) was used for the initial generation of this image. Any Debian Developer who would like access is welcome to contact me. Minimal charges for the resource utilisation of this account (storage, some EC2 instances for testing) are being absorbed by Amazon. Co-ordination of this effort is held on the debian-cloud mailing list. The current Debian stable is 6.0.6 "Squeeze", and we're in deep freeze for the Wheezy release. Squeeze has a Xen kernel that works on the paravirtual (PVM) EC2 instances, and hence this is what we support on EC2. (HVM images are a next phase, being headed up by Yasuhiro Akarki <ar@d.o>.)

Marketplace Listing

The process of listing in the AWS Marketplace was conducted as follows: This image went out on the 19th of November 2012. Additional documentation was put into the wiki at http://wiki.debian.org/Cloud/AmazonEC2Image/Squeeze. A CloudFormation template may help you launch a Debian instance by containing a mapping to the relevant AMI in the region you're using: see the wiki link above.

What's Next

The goal is to continue stable releases as they come out. Further work is happening to support generation of Wheezy images, and HVM (which may all collapse into one effort with a Linux 3.x kernel in Wheezy). If you're a Debian Developer and would like a login to the AWS account we've been using, then please drop me a line. Further work to improve this process has come from Marcin Kulisz, who is starting to package ec2debian-build-ami for Debian: this will complete the circle of the entire stack being in main (one day)! Thanks go to Stefano, Anders, Charles, and everyone who contributed to this effort.

Resources

8 May 2012

James Bromberger: Courier IMAP and FAM

Last Friday, while I was tracking Debian Testing, the courier package was updated; authentication could be seen to be successful, but actually using IMAP seemed to fail. It turns out the FAM package was somehow to blame; installing fam and libfam0 was the solution. This uninstalled gamin for me. So if you're pulling your hair out with a similar courier/imap issue, then perhaps have a look at the courier-imap mailing list.
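For anyone hitting the same symptom, the fix described above amounts to the following (a sketch; package names as they were in Debian testing at the time, and apt removed gamin in the process, as noted):

apt-get install fam libfam0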

4 April 2012

James Bromberger: Goodbye Linux.2.6.x

It's taken some time, but now none of my personal Linux hosts (4 in total) are running the 2.6 kernel any more. From the start (January) my company web host on Amazon EC2 has been running a 3.x kernel. My little Acer Aspire Revo low-power home server, with an attached disk pack, that sits in my shed in a network cabinet, has run 3.x for the last 6 months or so. My Linux laptop (Dell Studio 1558), which only recently got installed (and, since removing Windows, hasn't overheated once!), went to 3.x immediately. And the last piece of the puzzle is a virtual machine I've had for many years with Bytemark.co.uk: they're now offering a 3.2 kernel in their menu of selectable kernels. Not that 3.x is that much different from 2.6.3x; but it's a line in the sand of feature and security that's easy to identify. And with nearly 15 years of looking at a 2.x kernel, it's about time we moved to 3.x!

1 March 2012

Raphaël Hertzog: My Debian Activities in February 2012

This is my monthly summary of my Debian-related activities. If you're among the people who made a donation to support my work (384.14 €, thanks everybody!), then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects.

Dpkg and multiarch

The month started with a decision of the technical committee which allowed me to proceed with an upload of a multiarch dpkg even if Guillem had not yet finished his review (and related changes). Given this decision, Guillem made the experimental upload himself. I announced the availability of this test version and invited people to test it. This led to new discussions on debian-devel. We learned in those discussions that Guillem changed his mind about the possibility of sharing (identical) files between multiple "Multi-Arch: same" packages, and that he dropped that feature. But if this point of the multiarch design had been reverted, it would mean that we had to update again all library packages which had already been updated for multi-arch. The discussions mostly stalled at this point, with a final note of Guillem explaining that there was a tension between convenience and doing the right thing every time that we discuss far-reaching changes. After a few weeks (and a helpful summary from Russ Allbery), Guillem said that he remained unconvinced but that he put back the feature. He also announced that he's close to having completed the work and that he would push the remaining parts of the multiarch branch to master this week (with the 1.16.2 upload planned next week). That's it for the summary. Obviously I participated in the discussions, but I didn't do much besides this: I have a mandate to upload a multiarch dpkg to sid, but I did not want to make use of it while those discussions remained pretty inconclusive. Also, Guillem made it pretty clear that the multiarch implementation was "buggy", "not right" and "not finished"; since he never shared that work in progress, I also had no way to help, even just by reviewing what he's doing. We also got a few multiarch bug reports, but I couldn't care to get them fixed since Guillem clearly held a lock on the codebase, having done many private changes. It's not quite like this that I expect to collaborate on a free software project, but life is full of surprises! I'll be relieved once this story is over. In the meantime, I have added one new thing to my TODO list, since I made a proposal to handle bin-nmu changelogs, and it's something that could also fix #440094.

Misc dpkg stuff

After a discussion with Guillem, we agreed that copyright notices should only appear in the sources and not in manual pages or --version output, both of which are translated and cause useless work for translators when updated. Guillem already had some code to do it for --version strings, and I took care of the changes for the manual pages. I merged some minor documentation updates, and fixed a bug with a missing manpage. Later I discovered that some recent changes led to the loss of all the translated manual pages. I suggested an improvement to dh_installman to fix this (and even prepared a patch). In the end, Guillem opted for another way of installing translated manual pages. Triggered by a discussion on debian-devel, I added a new entry to my TODO list: implementing dpkg-maintscript-helper rm_conffile_if_owner to deal with the case where a conffile is taken over by another package which might (or might not) be installed.
Misc packaging

At the start of the month, I packaged quilt 0.51. The number of Debian-specific patches is slowly getting down. With version 0.51, we dropped 5 patches and introduced a new one. Later in the month I submitted 4 supplementary patches upstream, which have been accepted for version 0.60. This new version (just released; I will package it soon) is an important milestone, since it's the first version without any C code (Debian had this for a long time, but we were carrying an intrusive patch for this). Upstream developer Jean Delvare worked on this and based his work on our patch, but he went further to make it much more efficient. Besides quilt, I also uploaded dh-linktree 0.2 (minor doc update), sql-ledger 2.8.36 (new upstream version), logidee-tools 1.2.12 (minor fixes) and publican 2.8-2 (to fix release-critical bug #660795).

Debian Consultants

The Debian Project Leader is working on federating "Debian Companies". As the owner of Freexian SARL, I was highly interested in it, since Freexian contributes to Debian, offers support for Debian and has a strategic interest in Debian. There's only one problem: you need to have at least 2 Debian developers on staff, but I have no employees (it's me only). I tried to argue that I have already worked with multiple Debian developers (as contractors) when projects were too big for me alone (or when I did not have enough time). Alas, this argument was not accepted. Instead, and since our fearless leader is never afraid to propose compromises, he suggested that I (and MJ Ray, who argued something similar to me) try to bring life to the Debian Consultants list, which (in his mind) would be more appropriate for one-man companies like mine. I accepted to help animate the list, and on his side, he's going to promote both the Debian Companies and the Debian Consultants lists. In any case, the list has seen some traffic lately and you're encouraged to join if you're a freelancer offering services around Debian. The most promising thing is that James Bromberger offered to implement a real database of consultants instead of the current static page.

Book update

We made quite some progress this month. There's only one chapter left to translate. I thus decided to start with proofreading. I made a call for volunteers and I submitted one (different) chapter to 5 proofreaders. The liberation campaign made a nice leap forward thanks to good coverage on barrapunto.com. We have reached 80%, while we were only at 72% at the start of the month (thanks to the 113 new supporters!). There is thus less than 5000 EUR to raise before the book gets published under a free license. Looking at the progression in the past months, this is unlikely to be completed in time for the release of the book in April. It would be nice though, so please share the news around you. Speaking of the book's release, I'm slowly preparing it. Translating DocBook files is not enough; I must be able to generate HTML, ePub and PDF versions of the book. I'm using Publican for most formats, but for the PDF version, Publican is moving away from fop, and the replacement (webkit-based) is far from satisfactory for generating a book ready for print. So I plan to use dblatex, and to get Publican to support dblatex as a backend. I have hired Benoît Guillon, the upstream author of dblatex, to fix some annoying bugs and to improve it to suit my needs for the book (some results are already in the upstream CVS repository). I'm also working with a professional book designer to get a nice design.
I have also started to look for a Python/Django developer to build the website that I will use to commercialize the book. The website will have a larger goal than just this, though ("helping to fund free software developers"), but in free software it's always good to start with your own case. :-) Hopefully everything will be ready in April. I'm working hard to meet that deadline (you might have noticed that my blog has been relatively quiet in the last month).

Thanks

See you next month for a new summary of my activities.


23 February 2012

James Bromberger: Hurricane Electric IPv6 tunnel MTU

I've been running an IPv6 tunnel for a long time, but occasionally I've been seeing traffic hang on it. It looks like it was the MTU, defaulting to 1500 bytes, causing issues when large amounts of data were being shuffled OUT from my Linux box, back to "the net". The fix is easy: /etc/network/interfaces should have an "up" line for the interface definition saying: up ip link set mtu 1280 dev henet, where henet is the name of your tunnel interface. Easy enough to skip this line if your tunnel appears to be working OK, but interesting to track down.
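In context, a complete ifupdown stanza for such a tunnel might look like the sketch below; the addresses are placeholders to replace with your own Hurricane Electric tunnel details, and only the final "up" line is the fix from this post:

auto henet
iface henet inet6 v4tunnel
    address 2001:db8:1234::2
    netmask 64
    endpoint 203.0.113.1
    ttl 255
    up ip link set mtu 1280 dev henet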

13 February 2012

James Bromberger: Debian Wheezy: US$19 Billion. Your price FREE!

As many would know, Debian GNU/Linux is one of the oldest, and the largest, Linux distributions available for free. Since it was first released in 1993, several people have analysed its size and produced cost estimates for the project. In 2001, Jesús M. González-Barahona et al produced an article entitled "Counting Potatoes", an analysis of Debian 2.2 (code named Potato). When Potato was released in August 2000, it contained 2,800 source packages of software, totalling around 55 million lines of source code. When using David A. Wheeler's sloccount tool to apply the COCOMO model of development, and an average developer salary of US$56,000, the projected development cost that González-Barahona calculated to start from scratch and build Debian 2.2 in 2003 was US$1.9 billion. In 2007 an analysis entitled "Macro-level software evolution: a case study of a large software compilation" by Jesús M. González-Barahona, Gregorio Robles, Martin Michlmayr, Juan José Amor and Daniel M. German was released. It found that Debian 4.0 (codename Etch, released April 2007) had just over 10,000 source packages of software and 288 million lines of source code. This analysis also delved into the dependencies of software packages, and the update flow between Debian releases (not all packages are updated with each release). Today (February 2012) the current development version of Debian, codenamed Wheezy, contains some 17,141 source packages of software, but as it's still in development this number may change over the coming months. I analysed the source code in Wheezy, looking at the content of the original software that Debian distributes from its upstream authors, without including the additional patches that Debian Developers apply to this software, or the package management scripts (used to install, configure and de-install packages). One might argue that these patches and configuration scripts are the added value of Debian; however, in my analysis I only examined the pristine upstream source code. By using David A. Wheeler's sloccount tool and an average wage of a developer of US$72,533 (using median estimates from Salary.com and PayScale.com for 2011), I summed the individual results to find a total of 419,776,604 source lines of code for the pristine upstream sources, in 31 programming languages, including 429 lines of Cobol and 1933 lines of Modula3! In my analysis the projected cost of producing Debian Wheezy in February 2012 is US$19,070,177,727 (AU$17.7B, EUR 14.4B, GBP 12.11B), making each package's upstream source code worth an average of US$1,112,547.56 (AU$837K) to produce. Impressively, this is all free (of cost).

Zooming in on the Linux Kernel

In 2004 David A. Wheeler did a cost analysis of the Linux Kernel project by itself. He found 4,000,000 source lines of code (SLOC), and a projected cost between US$175M and US$611M depending on the complexity rating of the software. Within my analysis above, I used the standard (default) complexity with the adjusted salary for 2011 (US$72K), and deduced that Kernel version 3.1.8, with almost 10,000,000 lines of source code, would be worth US$540M at standard complexity, or US$1,877M when rated as "complex". Another Kernel costing in 2011 put this figure at US$3 billion, so perhaps there's some more variance in here to play with.

Individual Projects

Other highlights by project included:
Project   Version  Thousands of SLOC  Projected cost at US$72,533/developer/year
Samba     3.6.1    2,000              US$101M (AU$93M)
Apache    2.2.9    693                US$33.5M (AU$31M)
MySQL     5.5.17   1,200              US$64.2M (AU$59.7M)
Perl      5.14.2   669                US$32.3M (AU$30M)
PHP       5.3.9    693                US$33.5M (AU$31.1M)
Bind      9.7.3    319                US$14.8M (AU$13.8M)
Moodle    1.9.9    396                US$18.6M (AU$17.3M)
Dasher    4.11     109                US$4.8M (AU$4.4M)
DVSwitch  0.8.3.6  6                  US$250K (AU$232K)
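As a hedged sketch of the method (the package and paths are illustrative, and the post analysed pristine upstream tarballs rather than Debian-patched source), a single project's figure can be reproduced with sloccount's COCOMO output and the salary used above:

apt-get source samba
sloccount --personcost 72533 samba-3.6.1/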
Debian Wheezy by Programming Language

The upstream code that Debian distributes is written in many different languages. ANSI C, with 168,536,758 lines, is the dominant language (40% of all lines), followed by C++ at 83,187,329 (20%) and Java with 34,698,990 (8%).
Chart: break-down of Wheezy by language.

If you are interested in finding the line count and cost projections for any of the 17,000+ projects, you will find them in the raw data CSV.

Other Tools and Comparisons

Ohcount is another source code cost analysis tool. In March 2011 Ohcount was run across Debian Sid; its results are here. In comparison, its results appear much lower than those of the sloccount tool. There's also the Ohloh.net Debian estimate, which only finds 55 million source lines of code and a projected cost of US$1B. However Ohloh uses Ohcount for its estimates, and seems to be missing around 370 million SLOC compared to my recent analysis.

Summary

Over the last 10 years the cost to develop Debian has increased ten-fold. It's interesting to know that US$19 billion of software is available to use, review, extend, and share, for the bargain price of $0. If we were to add in Debian patches and install scripts, then this projected figure would increase. If only more organisations would realise the potential they have before them. Need help with Linux (including Debian), Perl, or AWS? See www.jamesbromberger.com.