Search Results: "Dariusz Dwornikowski"

13 August 2016

Dariusz Dwornikowski: Automatic PostgreSQL config with Ansible

If for some reason you can't use a dedicated DBaaS for your PostgreSQL (like AWS RDS), then you need to run your database server on a cloud instance. In this kind of setup, when you scale your instance size up or down, you need to adjust the PostgreSQL parameters to the changing RAM size. Several parameters in PostgreSQL depend heavily on RAM size. An example is shared_buffers, for which a rule of thumb says it should be set to 0.25*RAM (e.g. 4 GB on a 16 GB instance). In a DBaaS, when you scale the DB instance up or down, the parameters are adjusted for you by the cloud provider; e.g. AWS RDS uses parameter groups for that purpose, where particular parameters are defined depending on the RAM size of the RDS instance. So what can you do when you do not have RDS or any other DBaaS? You can keep several configuration files on your instance, one for each memory size; you can rewrite your config every time you change the size of the instance; or you can use an Ansible role for that. Our Ansible role will be very simple: it will have two tasks. One will change the PostgreSQL config, the second will just restart the database server:
---
- name: Update PostgreSQL config
  template: src=postgresql.conf.j2 dest=/etc/postgresql/9.5/main/postgresql.conf
  register: pgconf
- name: Restart postgresql
  service: name=postgresql state=restarted
  when: pgconf.changed
Now we need the template, where the calculations take place. The RAM size will be taken from the Ansible fact called ansible_memtotal_mb. Since it returns the RAM size in MB, we will stick to MB. We will define the following parameters; you can adjust them to your needs. For max_connections we will define a default role variable of 100, but we will allow it to be overridden at runtime. The relevant parts of postgresql.conf.j2 are below:
max_connections = {{ max_connections }}
shared_buffers = {{ (((ansible_memtotal_mb/1024.0) | round | int) * 0.25) | int * 1024 }}MB
work_mem = {{ ((((ansible_memtotal_mb/1024.0) | round | int) * 0.25) / max_connections * 1024) | round | int }}MB
maintenance_work_mem = {{ ((ansible_memtotal_mb/1024.0) | round | int) * 64 }}MB
effective_cache_size = {{ (((ansible_memtotal_mb/1024.0) | round | int) * 0.75) | int * 1024 }}MB
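To sanity-check the template arithmetic, here is a small Python sketch (not part of the role) that mirrors the calculations for a hypothetical 4 GB instance; the comments show what the template would render:

# Mirrors the Jinja2 expressions above; assumes "| round | int" behaves
# like Python's round() followed by int().
ansible_memtotal_mb = 4096  # hypothetical 4 GB instance
max_connections = 100       # the role's default

ram_gb = int(round(ansible_memtotal_mb / 1024.0))
print("shared_buffers = %dMB" % (int(ram_gb * 0.25) * 1024))                   # 1024MB
print("work_mem = %dMB" % int(round(ram_gb * 0.25 / max_connections * 1024)))  # 10MB
print("maintenance_work_mem = %dMB" % (ram_gb * 64))                           # 256MB
print("effective_cache_size = %dMB" % (int(ram_gb * 0.75) * 1024))             # 3072MB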
You can now run the role every time you change the instance size, and the config will be adjusted according to the RAM size. You can extend the role, perhaps adding other constraints, and change max_connections to your specific needs. An example playbook could look like this:
---
- hosts: my_postgres
  roles:
    - postgres-config
  vars:
    max_connections: 300
And run it:
$ ansible-playbook playbook.yml
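Since max_connections is an ordinary role variable, you can also override it at runtime with --extra-vars instead of editing the playbook:

$ ansible-playbook playbook.yml -e max_connections=300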
The complete role can be found in my github repo.

19 April 2016

Dariusz Dwornikowski: HAProxy and 503 HTTP errors with AWS ELB as a backend

Although AWS provides a load balancer service in the form of Elastic Load Balancer (ELB), a common trick is to put HAProxy in the middle to provide SSL offloading, complex routing and better logging.
In this scenario, a public ELB is the entry point for all traffic, an HAProxy farm in the middle is managed by an Auto Scaling Group, and one (or more) internal backend ELBs sit in front of the Web farm. I think that HAProxy does not need any introduction here; it is a highly scalable and reliable piece of software. There is, however, a small caveat when you use it with domain names rather than IP addresses. To speed things up, HAProxy resolves all domain names during startup (during config file parsing, in fact). Hence, when the IP behind a domain changes, you end up with a lot of 503s (Service Unavailable). Why is this important? In AWS, an ELB's IP can change over time, so it is recommended to use the ELB's domain name. Now, when you use this domain name in an HAProxy backend, you can end up with 503s. ELB IPs do not change that often, but still you would not want any downtime. The solution is to configure runtime resolvers in HAProxy and use them in the backend (unfortunately this works only in HAProxy 1.6):
resolvers myresolver
    nameserver dns1 10.10.10.10:53
    resolve_retries       30
    timeout retry         1s
    hold valid            10s

backend mybackend
    server myelb myelb-internal.123456.eu-west-1.elb.amazonaws.com check resolvers myresolver
Now HAProxy will check the domain at runtime, no more 503s.
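To see how the resolved IPs drift over time, here is a minimal Python sketch (the hostname is the hypothetical ELB name from above) that polls DNS and reports changes; this is exactly the kind of change a parse-time-only resolver misses:

import socket
import time

def resolve(host):
    # Current set of A-record IPs behind the hostname
    return sorted({info[4][0] for info in socket.getaddrinfo(host, 80, 0, socket.SOCK_STREAM)})

host = "myelb-internal.123456.eu-west-1.elb.amazonaws.com"
last = resolve(host)
while True:
    time.sleep(30)
    current = resolve(host)
    if current != last:
        # A HAProxy that resolved only at startup would now send traffic to stale IPs
        print("IPs changed: %s -> %s" % (last, current))
        last = current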

18 April 2016

Dariusz Dwornikowski: XWiki and slashes in URI

XWiki is a great open source Atlassian Confluence replacement (some argue it is better; I leave that to your assessment). We use XWiki a lot at Tenesys to document internal projects and to create documentation of clients' platforms. We run XWiki in the Tomcat application server, behind an nginx proxy. We use a great XWiki plugin called FAQ, which can be used to create, well, FAQs. The problem we had was that sometimes people (me especially) created FAQ entries with a / in the name, which resulted in XWiki creating a slug with a / character, which is used to delimit page hierarchy in XWiki. Basically, you wanted to write How to install Debian/Ubuntu package and you ended up with two pages: How to install Debian and a subpage Ubuntu package. You can't easily delete the 'slashed' FAQ page, because by default only the last one is deleted. The solution to this problem is twofold. First of all, you need to tell Tomcat to allow passing an encoded slash (%2F) to XWiki: add -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true to CATALINA_OPTS. You can do it via catalina.sh or catalina.opts. Second of all, you need to make sure that your nginx proxy_pass directive is bare, i.e. does not contain a URI part; see the relevant Stack Overflow question here. Basically you want your proxy_pass to look like this:
location / {
    proxy_pass http://backend;
}
...and not like this:
location / {
    proxy_pass http://backend/xwiki;
}
I spent quite a lot of time before I discovered that nginx caveat. Hope it helps somebody too.

14 March 2016

Bits from Debian: New Debian Developers and Maintainers (January and February 2016)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

29 September 2015

Dariusz Dwornikowski: Delete until signature in vim

It has been bugging me for a while. When responding to an email, you often want to delete all the content (or part of the previous content) until the end of the email's body. However it would be nice to leave your signature in place. For that I came up with this nifty little vim trick:
nnoremap <silent> <leader>gr <Esc>d/--\_.*Dariusz<CR>:nohl<CR>O
Assuming that your signature starts with -- and the following line starts with your name (in my case Dariusz), this deletes all the content from the current line up to the signature, then clears the search highlighting, and finally opens a new line above the signature so you can start typing.

3 August 2015

Lunar: Reproducible builds: week 14 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

akira submitted a patch to make cdbs export SOURCE_DATE_EPOCH. She uploaded a package with the enhancement to the experimental reproducible repository.

Packages fixed

The following 16 packages became reproducible due to changes in their build dependencies: dracut, editorconfig-core, elasticsearch, fish, libftdi1, liblouisxml, mk-configure, nanoc, octave-bim, octave-data-smoothing, octave-financial, octave-ga, octave-missing-functions, octave-secs1d, octave-splines, valgrind.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them: In contrib, Dmitry Smirnov improved libdvd-pkg with 1.3.99-1-1.

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

Four armhf build hosts were provided by Vagrant Cascadian and have been configured to be used by jenkins.debian.net. Work on including armhf builds in the reproducible.debian.net web pages has begun. So far the repository comparison page just shows which armhf binary packages are currently missing from our repo. (h01ger)

The scheduler has been changed to re-schedule more packages from stretch than from sid, as the gcc5 transition has started. This mostly affects build log age. (h01ger)

A new depwait status has been introduced for packages which can't be built because of missing build dependencies. (Mattia Rizzolo)

debbindiff development

Finally, on July 31st, Lunar released debbindiff 27, containing a complete overhaul of the code for the comparison stage. The new architecture is more versatile and extensible while minimizing code duplication. libarchive is now used to handle cpio archives and iso9660 images through the newly packaged python-libarchive-c. This should also help support a couple of other archive formats in the future. Symlinks and devices are now properly compared. Text files are compared as Unicode after being decoded, and encoding differences are reported. Support for Sqlite3 and Mono/.NET executables has been added. Thanks to Valentin Lorentz, the test suite should now run on more systems. A small deficiency in unsquashfs has been identified in the process. A long-standing optimization is now performed on Debian packages: based on the content of the md5sums control file, we skip comparing files with matching hashes. This makes debbindiff usable on packages with many files. Fuzzy matching is now performed for files in the same container (like a tarball) to handle renames. Also, for Debian .changes files, listed files are now compared without looking at the embedded version number. This makes debbindiff a lot more useful when comparing different versions of the same package. Based on this rearchitecturing, work has been done to allow parallel processing. The branch now seems to work most of the time. More testing needs to be done before it can be merged. The current fuzzy-matching algorithm, ssdeep, has shown disappointing results. One important use case is being able to properly compare debug symbols. Their path is made using the Build ID. As this identifier is made with a checksum of the binary content, finding things like CPP macros is much easier when a diff of the debug symbols is available. The good news is that TLSH, another fuzzy-matching algorithm, has been tested with much better results. A package is waiting in NEW and the code is ready for it to become available.

A follow-up release, 28, was made on August 2nd, fixing the content label used for gzip, bzip2 and xz files and an error on text files only differing in their encoding. It also contains a small code improvement in how comments on Difference objects are handled. This is the last release named debbindiff. A new name has been chosen to better reflect that it is not a Debian-specific tool. Stay tuned!

Documentation update

Valentin Lorentz updated the patch submission template to suggest stating the kind of issue in the bug subject. Small progress has been made on the Reproducible Builds HOWTO while preparing the related CCCamp15 talk.

Package reviews

235 obsolete reviews have been removed, 47 added and 113 updated this week. 42 reports for packages failing to build from source have been made by Chris West (Faux). New issue added this week: haskell_devscripts_locale_substvars.

Misc.

Valentin Lorentz wrote a script to report packages tested as unreproducible that are installed on a system. We encourage everyone to run it on their systems and give feedback!

26 July 2015

Lunar: Reproducible builds: week 13 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

akira uploaded a new version of doxygen in the experimental reproducible repository, incorporating an upstream patch for SOURCE_DATE_EPOCH and now producing timezone-independent timestamps. Dhole updated Peter De Wachter's patch on ghostscript to use SOURCE_DATE_EPOCH and UTC as the timezone. A modified package is now being experimented with.

Packages fixed

The following 14 packages became reproducible due to changes in their build dependencies: bino, cfengine2, fwknop, gnome-software, jnr-constants, libextractor, libgtop2, maven-compiler-plugin, mk-configure, nanoc, octave-splines, octave-symbolic, riece, vdr-plugin-infosatepg.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

Packages identified as failing to build from source with no bugs filed and older than 10 days are scheduled more often now (except in experimental). (h01ger)

Package reviews

178 obsolete reviews have been removed, 59 added and 122 updated this week. New issue identified this week: random_order_in_ruby_rdoc_indices. 18 new bugs for packages failing to build from source have been reported by Chris West (Faux) and h01ger.

23 September 2014

Dariusz Dwornikowski: debrfstats software for RFS statistics

Last time I said I would release the software I used to make the RFS stats plots. You can find it in my GitHub repo - github.com/tdi/debrfstats. The software contains a small class to get the data needed to generate the plots, as well as code for doing some simple bug analysis. It also contains an R script that makes the plots from a CSV file. For now debrfstats uses the SOAP interface to Debbugs, but I am working on adding a UDD data source. The software is written in Python 2 (SOAPpy does not come in a Python 3 flavour); some usage examples are in the main.py file in the repository. If you have any questions or wishes for debrfstats, do not hesitate to contact me.

21 September 2014

Dariusz Dwornikowski: statistics of RFS bugs and sponsoring process

For some days I have been working on statistics of the sponsoring process in Debian. I find this to be one of the most important things Debian has to attract and enable new contributions. It is important to know how this process works, whether we need more sponsors, how effective the sponsoring is, and what the timings connected to it are.

How did I do this? I used the Debbugs SOAP interface to get all bugs that are filed against the sponsorship-requests pseudo-package. SOAP adds a bit of overhead, because it needs to download the complete list of bugs for the sponsorship-requests package and then process them according to the given date ranges. The same information can easily be extracted from the UDD database in the future, and it will be faster, because SQL is obviously better at working with date ranges than Python. The most problematic part was getting the "real done date" of a particular bug, and frankly I spent most of my time writing a rather dirty and complicated script that gets the log of a particular bug number and returns its "real done date". I published a proof of concept in a previous post.

What did I measure? RFS is a queue, and for every queue one is interested in the mean time to get processed. In this case I called the metric global MTTGS (mean time to get sponsored). This metric gives an overall insight into the performance of the RFS queue. Time to get sponsored (TTGS) for a bug is the number of days that passed between filing an RFS bug and closing it (the bug was sponsored). Mean time to get sponsored is calculated as the sum of the TTGS of all bugs divided by the number of bugs (in a given period of time). Global MTTGS is the MTTGS calculated for the period from 2012-01-01 until today. Besides MTTGS I also measured typical bug-related metrics.

Plots and graphs

Below is a plot of global MTTGS vs. time (click for a larger image).

[plot: global MTTGS over time]

As you can see, the trend is roughly exponential, and MTTGS tends to settle around 60 days at the end of 2013. This does not mean that your package will wait 60 days on average to get sponsored nowadays. Remember that this is a global MTTGS, so even if the MTTGS of the last month was very low, the global MTTGS would decrease only slightly. It does, however, give a good glance at the performance of the process. Even though more packages are filed for sponsoring now than at the beginning of the epoch (see the next graphs), the sponsoring rate is high enough to flatten the global MTTGS, and maybe decrease it with time.

The image below (click for a larger one) shows how many bugs reside in the queue with status open or closed (calculated for each day). For closed we have an almost linear function, so each day more or less the same number of bugs are closed, and they increase the pool of bugs with status closed. For bugs with status open, the interesting part begins around May 2012, when the system gets saturated or becomes popular. The plot can be read as how many bugs reside in the queue at a given time; the important part is that it is stable and does not show a clear increasing trend.

[plot: open vs. closed bugs in the RFS queue over time]

The last plot shows the arrival and departure rates of bugs in the RFS queue, i.e. how many bugs are opened and closed each day. The interesting parts here are the maxima, so let's look at them.

[plot: bugs opened and closed per day]

The maximal number of opened bugs (21) was on 2013-05-06. As it appears, it was a batch upload of RFSs for tryton-modules-*:
  706953  RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1 
  706954  RFS: tryton-modules-purchase-shipment-cost/2.8.0-1 
  706948  RFS: tryton-modules-production/2.8.0-1 
  706969  RFS: tryton-modules-account-fr/2.8.0-1 
  706946  RFS: tryton-modules-project-invoice/2.8.0-1 
  706950  RFS: tryton-modules-stock-supply-production/2.8.0-1 
  706942  RFS: tryton-modules-product-attribute/2.8.0-1 
  706957  RFS: tryton-modules-stock-lot/2.8.0-1 
  706958  RFS: tryton-modules-carrier-weight/2.8.0-1 
  706941  RFS: tryton-modules-stock-supply-forecast/2.8.0-1 
  706955  RFS: tryton-modules-product-measurements/2.8.0-1 
  706952  RFS: tryton-modules-carrier-percentage/2.8.0-1 
  706949  RFS: tryton-modules-account-asset/2.8.0-1 
  706904  RFS: chinese-checkers/0.4-1 
  706944  RFS: tryton-modules-stock-split/2.8.0-1 
  706981  RFS: distcc/3.1-6 
  706945  RFS: tryton-modules-sale-supply/2.8.0-1 
  706959  RFS: tryton-modules-carrier/2.8.0-1 
  706951  RFS: tryton-modules-sale-shipment-cost/2.8.0-1 
  706943  RFS: tryton-modules-account-stock-continental/2.8.0-1 
  706956  RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1
The maximum number of closed bugs (18) was on 2013-09-24, and as you probably guessed, tryton modules also had an impact on that:
  706953  RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1 
  706954  RFS: tryton-modules-purchase-shipment-cost/2.8.0-1 
  706948  RFS: tryton-modules-production/2.8.0-1 
  706969  RFS: tryton-modules-account-fr/2.8.0-1 
  706946  RFS: tryton-modules-project-invoice/2.8.0-1 
  706950  RFS: tryton-modules-stock-supply-production/2.8.0-1 
  706942  RFS: tryton-modules-product-attribute/2.8.0-1 
  706958  RFS: tryton-modules-carrier-weight/2.8.0-1 
  706941  RFS: tryton-modules-stock-supply-forecast/2.8.0-1 
  706955  RFS: tryton-modules-product-measurements/2.8.0-1 
  706952  RFS: tryton-modules-carrier-percentage/2.8.0-1 
  706949  RFS: tryton-modules-account-asset/2.8.0-1 
  706944  RFS: tryton-modules-stock-split/2.8.0-1 
  706959  RFS: tryton-modules-carrier/2.8.0-1 
  723991  RFS: mapserver/6.4.0-2 
  706951  RFS: tryton-modules-sale-shipment-cost/2.8.0-1 
  706943  RFS: tryton-modules-account-stock-continental/2.8.0-1 
  706956  RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1
The software

Most of the software was written in Python; the graphs were generated in R. After a code cleanup I will publish the complete solution on my GitHub account, free for everybody to use. If you would like to see other statistics, please let me know; I can create them if the data provides sufficient information.
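As an illustration of the MTTGS calculation described above, here is a minimal Python sketch (my reconstruction, not the actual tool) that computes TTGS and a global MTTGS from pairs of filing and closing dates; the sample data is made up:

from datetime import date

# Hypothetical sample: (date the RFS bug was filed, date it was closed/sponsored)
bugs = [
    (date(2013, 5, 6), date(2013, 9, 24)),
    (date(2013, 5, 6), date(2013, 5, 20)),
    (date(2013, 6, 1), date(2013, 6, 30)),
]

# TTGS for one bug: days between filing the RFS and closing it
ttgs = [(closed - opened).days for opened, closed in bugs]

# Global MTTGS: sum of all TTGS values divided by the number of bugs
mttgs = sum(ttgs) / float(len(ttgs))
print("MTTGS: %.1f days" % mttgs)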

19 September 2014

Dariusz Dwornikowski: getting real "done date" of a bug from Debian BTS

As I wrote in my last post, currently neither the SOAP interface nor the Ultimate Debian Database provide the date when a given bug was closed (the done date). It is quite hard to calculate statistics on a bug tracker when you do not know when a bug was closed! The done date of a bug can be found in its log. The log itself can be downloaded with the SOAP method get_bug_log, but processing it is quite complicated. The same goes for scraping the BTS's web interface. Fortunately, the web interface offers the possibility to download the log in mbox format. Below is a script that extracts the done date of a bug from its log in mbox format. It uses requests to download the mbox and caches the result in ~/.cache/rfs_bugs, which you need to create. It performs the following checks:
  1. Check for the existence of a header like Received: (at 657783-done) by bugs.debian.org; 29 Jan 2012 13:27:42 +0000
  2. Check for a CC: header addressed to NUMBER-close or NUMBER-done
  3. Check for a To: header addressed to NUMBER-close or NUMBER-done
  4. Check for Close: NUMBER in the body.
The code is below:
import requests
from datetime import datetime
import mailbox
import re
import os
import tempfile

def get_done_date(bug_num):
    CACHE_DIR = os.path.expanduser("~") + "/.cache/rfs_bugs/"

    def get_from_cache():
        # Return a previously extracted done date, if any.
        if os.path.exists("{0}{1}".format(CACHE_DIR, bug_num)):
            with open("{0}{1}".format(CACHE_DIR, bug_num)) as f:
                return datetime.strptime(f.readlines()[0].rstrip(), "%Y-%m-%d").date()
        return None

    def try_header(text):
        # Check 1: "Received: (at NNNNNN-close/-done) by bugs.debian.org; ..."
        reg = r"Received:\s\(at\s\d+-(close|done)\)\s+by.+"
        try:
            line = re.search(reg, text).group(0)
            result = re.search(r"\d{1,2}\s\w\w\w\s\d\d\d\d", line)
            return datetime.strptime(result.group(0), "%d %b %Y")
        except (AttributeError, TypeError):
            return None

    def mbox_from_text(text):
        # The BTS serves the log as an mbox; the mailbox module needs a file.
        handle, name = tempfile.mkstemp()
        with open(name, "w") as f:
            f.write(text.encode('latin-1'))
        return mailbox.mbox(name)

    # Date stamp inside a Received header, e.g. "; 29 Jan 2012 13:27:42 +0000"
    reg_received = r"\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"

    def try_cc(text):
        # Checks 2 and 3: a mail CC-ed or addressed to NUMBER-close/NUMBER-done.
        for _, msg in mbox_from_text(text).items():
            if ('CC' in msg and "done" in msg['CC']) or ('To' in msg and "done" in msg['To']):
                try:
                    result = re.search(reg_received, msg['Received'])
                    return datetime.strptime(result.group(1), "%d %b %Y")
                except (AttributeError, TypeError):
                    return None
        return None

    def try_body(text):
        # Check 4: "close"/"done" mentioned in a message body.
        for _, msg in mbox_from_text(text).items():
            if msg.is_multipart():
                parts = [str(m) for m in msg.get_payload()]
            else:
                parts = [msg.get_payload()]
            for part in parts:
                if "close" in part or "done" in part:
                    try:
                        result = re.search(reg_received, msg['Received'])
                        return datetime.strptime(result.group(1), "%d %b %Y")
                    except (AttributeError, TypeError):
                        return None
        return None

    done_date = get_from_cache()
    if done_date is not None:
        return done_date
    r = requests.get("https://bugs.debian.org/cgi-bin/bugreport.cgi"
                     "?mbox=yes;bug={0};mboxstatus=yes".format(bug_num))
    d = try_header(r.text)
    if d is None:
        d = try_cc(r.text)
    if d is None:
        d = try_body(r.text)
    if d is None:
        return None
    with open("{0}{1}".format(CACHE_DIR, bug_num), "w") as f:
        f.write("{0}".format(d.date()))
    return d.date()

if __name__ == "__main__":
    print get_done_date(752210)
PS: I hope the script will not be needed in the near future, as Don Armstrong plans a new BTS database; a DebConf14 video is here.

18 September 2014

Dariusz Dwornikowski: RFS health in Debian

I am working on a small project to create WNPP-like statistics for open RFS bugs. I think this could slightly improve the effectiveness of sponsoring new packages, by giving insight into bugs that are on their way to being starved (i.e. never sponsored, or left rotting in the queue). The script attached to this post is written in Python and uses the Debbugs SOAP interface to get the currently open RFS bugs and calculate their dust and age. The dust factor is calculated as the absolute value of the difference between the bug's age and its log_modified date. Later I would like to create full-blown stats for the RFS queue, taking into account its whole history (i.e. 2012-01-01 until now), to check its health and calculate the MTTGS (mean time to get sponsored). The list looks more or less like this:
Age  Dust Number  Title
37   0    757966  RFS: lutris/0.3.5-1 [ITP]
1    0    762015  RFS: s3fs-fuse/1.78-1 [ITP #601789] -- FUSE-based file system backed by Amazon S3
81   0    753110  RFS: mrrescue/1.02c-1 [ITP]
456  0    712787  RFS: distkeys/1.0-1 [ITP] -- distribute SSH keys
120  1    748878  RFS: mwc/1.7.2-1 [ITP] -- Powerful website-tracking tool
1    1    762012  RFS: fadecut/0.1.4-1
3    1    761687  RFS: abraca/0.8.0+dfsg-1 -- Simple and powerful graphical client for XMMS2
35   2    758163  RFS: kcm-ufw/0.4.3-1 ITP
3    2    761636  RFS: raceintospace/1.1+dfsg1-1 [ITP]
....
....
The script rfs_health.py can be found below; it uses SOAPpy (which supports only Python 2, unfortunately).
#!/usr/bin/python
import SOAPpy
import time
from datetime import date, timedelta, datetime
url = 'http://bugs.debian.org/cgi-bin/soap.cgi'
namespace = 'Debbugs/SOAP'
server = SOAPpy.SOAPProxy(url, namespace)
class RFS(object):
    def __init__(self, obj):
        self._obj = obj
        self._last_modified = date.fromtimestamp(obj.log_modified)
        self._date = date.fromtimestamp(obj.date)
        if self._obj.pending != 'done':
            self._pending = "pending"
            self._dust = abs(date.today() - self._last_modified).days
        else:
            self._pending = "done"
            self._dust = abs(self._date - self._last_modified).days
        today = date.today()
        self._age = abs(today - self._date).days
    @property
    def status(self):
        return self._pending
    @property
    def date(self):
        return self._date
    @property
    def last_modified(self):
        return self._last_modified
    @property
    def subject(self):
        return self._obj.subject
    @property
    def bug_number(self):
        return self._obj.bug_num
    @property
    def age(self):
        return self._age
    @property
    def dust(self):
        return self._dust
    def __str__(self):
        return "  subject:   age:  dust: ".format(self._obj.bug_num, self._obj.subject, self._age, self._dust)
if __name__ == "__main__":
    bugi = server.get_bugs("package", "sponsorship-requests", "status", "open")
    buglist = [RFS(b.value) for b in server.get_status(bugi).item]
    buglist_sorted_by_dust = sorted(buglist, key=lambda x: x.dust, reverse=False)
    print("Age  Dust Number  Title")
    for i in buglist_sorted_by_dust:
        print(" :<4   :<4   :<7   ".format(i.age, i.dust, i.bug_number, i.subject))

12 September 2014

Dariusz Dwornikowski: forwarding messages with attachments in mutt

This is a pain for every mutt user. I do not know why this solution is so hard to find. Just add these two lines to your .muttrc:
set mime_forward
set mime_forward_rest=yes
This will forward an email with all its attachments; no scripts needed, no fancy tagging or re-editing.

Dariusz Dwornikowski: profanity and libstrophe status in Debian

profanity is a great console-based XMPP client, written in ncurses and C by James Booth. The code is of great quality and upstream is super collaborative and willing, so packaging should have been pretty straightforward. This post will show that this was not the case here. The first obstacle was that profanity depended on libstrophe, an XMPP library which was not in Debian. As it turned out, libstrophe's upstream was not responsive, so the changes needed to prepare libstrophe for high-quality packaging could not be made:
  1. First of all, libstrophe's build system (automake and friends) built only a static library.
  2. Second, libstrophe did not tag releases on GitHub; tags were needed to make the Debian watch file work.
  3. A third, smaller problem was the presence of a debian/ directory in upstream's source. This can be neglected most of the time, since you can tell git-import-orig to delete it.
To solve those three problems I created a pull request fixing the build system to also build a shared library, deleting the debian/ directory, and politely asking for tagged releases. You can see my pull request here, dated April 26th. There was no answer from libstrophe's upstream, but I had some support from profanity's developers and other users who wanted those changes. Finally metajack (libstrophe's upstream) gave us rights to the repo, and we could merge the pull request on August 6th. The lesson learned: be patient and know autotools (a great tutorial is here). With profanity there were fewer changes to make. The most important one was that it linked against OpenSSL, which, due to the license incompatibility with the GPL, meant it could not go into Debian. Fortunately upstream added the OpenSSL exception, and profanity could finally be packaged. Now both profanity and libstrophe are in the NEW queue and will hopefully be accepted by the ftp masters. Once they are, there is plenty to do with them in the future: upstream has closed some bugs and new upstream versions are tagged.