Search Results: "chaica"

2 October 2017

Antoine Beaupré: My free software activities, September 2017

Debian Long Term Support (LTS) This is my monthly Debian LTS report. I mostly worked on the git, git-annex and ruby packages this month but didn't have time to completely use my allocated hours because I started too late in the month.

Ruby I was hoping someone would pick up the Ruby work I submitted in August, but it seems no one wanted to touch that mess, understandably. Since then, new issues came up: not only did I have to work on the rubygems and ruby1.9 packages, but the ruby1.8 package also needed security updates. Yes: it's bad enough that the rubygems code is duplicated in another package, but wheezy had the misfortune of supporting two Ruby versions. The Ruby 1.9 package also failed to build from source because of test suite issues, for which I haven't found a clean and easy fix, so I ended up making test suite failures non-fatal in 1.9, as they already were in 1.8. I did keep a close eye on changes in the test suite output to make sure the tests introduced by the security fixes would pass and that I wouldn't introduce new regressions. I published the following advisories:
  • ruby 1.8: DLA-1113-1, fixing CVE-2017-0898 and CVE-2017-10784. 1.8 doesn't seem affected by CVE-2017-14033, as the provided test does not fail (but it does fail in 1.9.1). The test suite was, before the patch:
    2199 tests, 1672513 assertions, 18 failures, 51 errors
    
    and after patch:
    2200 tests, 1672514 assertions, 18 failures, 51 errors
    
  • rubygems: uploaded the package prepared in August as-is in DLA-1112-1, fixing CVE-2017-0899, CVE-2017-0900 and CVE-2017-0901. Here the test suite passed normally.
  • ruby 1.9: here I used the 2.2.8 release tarball to generate a patch covering all issues and published DLA-1114-1, which fixes the CVEs of the two packages above. The test suite was, before the patches:
    10179 tests, 2232711 assertions, 26 failures, 23 errors, 51 skips
    
    and after patches:
    10184 tests, 2232771 assertions, 26 failures, 23 errors, 53 skips
    

Git I also quickly issued an advisory (DLA-1120-1) for CVE-2017-14867, an odd issue affecting git in wheezy. The backport was difficult because the patch wouldn't apply cleanly, and the git package uses a custom patching system that made it tricky to work on.

Git-annex I did a quick stint on git-annex as well: I was able to reproduce the issue and confirm an approach to fixing it in wheezy, although I didn't have time to complete the work before the end of the month.

Other free software work

New project: feed2exec I should probably make a separate blog post about this, but ironically, I don't want to spend too much time writing those reports, so this will be quick. I wrote a new program, called feed2exec. It's basically a combination of feed2imap, rss2email and feed2tweet: it allows you to fetch RSS feeds and deliver them to a mailbox, but what's special about it, compared to the programs above, is that it is more generic: you can basically make it do whatever you want with new feed items. I have, for example, replaced my feed2tweet instance with it, using this simple configuration:
[anarcat]
url = https://anarc.at/blog/index.rss
output = feed2exec.plugins.exec
args = tweet "%(title)0.70s %(link)0.70s"
The sample configuration file also has examples for talking with Mastodon, Pump.io and, why not, a torrent server to download torrent files announced over RSS feeds. A trivial configuration can also make it work as a crude podcast client. My main motivation for this work was that it was difficult to extend feed2imap to do what I needed (talk to transmission to download torrent files) and rss2email didn't support my workflow (delivering to feed-specific mail folders). Because both projects also seemed abandoned, it seemed like a good idea at the time to start a new one, although the rss2email community has since restarted the project and may produce interesting results. As an experiment, I tracked my time working on this project: it took about 45 hours to write the software. Considering feed2exec is about 1400 SLOC, that's 30 lines of code per hour. I don't know if that's slow or fast, but it's an interesting metric for future projects. It sure seems slow to me, but keep in mind those 30 lines of code per hour don't include documentation and repeated head banging on the keyboard. For example, I found two issues with the upstream feedparser package, which I use to parse feeds and which also seems unmaintained, unfortunately. Feed2exec is beta software at this point, but it's working well enough for me, and the design is much simpler than the other programs of its kind. The main issues people can expect from it at this point are formatting problems or parse errors on exotic feeds, and noisy error messages on network errors, all of which should be fairly easy to fix in the test suite. I hope it will be useful to the community and, as usual, I welcome contributions, help and suggestions on how to improve the software.
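As an illustration of the exec-plugin idea, here is a minimal, hypothetical sketch (not feed2exec's actual code) of fetching items from a raw RSS document and expanding a printf-style command template like the args line in the configuration above:

```python
import subprocess
import xml.etree.ElementTree as ET

def items(rss_text):
    """Yield {'title': ..., 'link': ...} for each <item> in a raw RSS 2.0 doc."""
    for item in ET.fromstring(rss_text).iter("item"):
        yield {"title": item.findtext("title", ""),
               "link": item.findtext("link", "")}

def format_command(template, entry):
    """Expand printf-style %(name)s placeholders, as in the config above."""
    return template % entry

def run_per_item(rss_text, template):
    """Run the expanded command once per feed item (the exec-plugin idea)."""
    for entry in items(rss_text):
        subprocess.run(format_command(template, entry), shell=True, check=True)
```

The %(title)0.70s placeholder truncates titles to 70 characters, which keeps the resulting tweet within length limits.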

More Python templates As part of the work on feed2exec, I cleaned up a few things in the ecdysis project, mostly to hook the tests up in CI, improve the advancedConfig logger and tidy up more stuff. While I was there, it turns out I built a pretty decent basic CI configuration for Python on GitLab. Whereas the previous templates only had a non-working Django example, you should now be able to choose a Python template when you configure CI on GitLab 10 and above, which should hook you up with normal Python setup procedures like setup.py install and setup.py test.
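For reference, a CI job of the kind described boils down to something like the following sketch (a hypothetical minimal .gitlab-ci.yml, not the template's exact contents):

```yaml
# Hypothetical minimal .gitlab-ci.yml for a setup.py-based project
image: python:3

test:
  script:
    - python setup.py install
    - python setup.py test
```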

Selfspy I mentioned working on a monitoring tool in my last post, because it was a feature from Workrave missing in SafeEyes. It turns out there is already such a tool, called selfspy. I did an extensive review of the software to make sure it wouldn't leak confidential information before using it, and it looks, well... kind of okay. It crashed on me at least once so far, which is too bad because it then loses track of the precious activity. I have used it at least once to figure out what the heck I worked on during the day, so it's pretty useful. I particularly used it to backtrack my work on feed2exec, as I didn't originally track my time on the project. Unfortunately, selfspy seems unmaintained. I have proposed a maintenance team and hopefully the project maintainer will respond and at least share access so we don't end up in a situation like linkchecker. I also sent a bunch of pull requests to fix some issues, like being secure by default and fixing the build. Apart from the crash, the main issue I have found with the software is that it doesn't detect idle time, which means certain apps are disproportionately represented in the statistics. There are also some weaknesses in the crypto that should be addressed for people who encrypt their database. The next step is to package selfspy in Debian, which should hopefully be simple enough...

Restic documentation security As part of a documentation patch for the Restic backup software, I improved on my previous Perl script to snoop on process commandline arguments. A common flaw in shell scripts and cron jobs is to pass secret material in the environment (usually safe) but often through commandline arguments (definitely not safe). The challenge, in this particular case, was the env binary; the last time I encountered such an issue was with the Drush commandline tool, which was passing database credentials in the clear to the mysql binary. Using my Perl sniffer, I could get to 60 checks per second (or 60Hz). After reimplementing it in Python, this number went up to 160Hz, which still wasn't enough to catch the elusive env command, which is much faster at hiding arguments than MySQL, in large part because it simply does an execve() once the environment is set up. Eventually, I just went crazy and rewrote the whole thing in C, which was able to reach 700-900Hz and did catch the env command about 10-20% of the time. I could probably have gotten better results by simply walking /proc myself (since this is what all those libraries do in the end), but by then my point was made. I was able to prove to the restic author the security issues that warranted the warning. It's too bad I need to repeat this again and again, but then my tools are getting better at proving the issue... I suspect it's not the last time I have to deal with this, and I am happy to think that I can come up with an even more efficient proof-of-concept tool the next time around.
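For the curious, the /proc-walking approach mentioned above can be sketched in a few lines (a hypothetical illustration, not the author's actual Perl/Python/C tools). Each /proc/&lt;pid&gt;/cmdline file stores the process arguments NUL-separated, so one polling pass looks like this:

```python
import os

def parse_cmdline(raw):
    """/proc/<pid>/cmdline stores argv NUL-separated; split it back out."""
    return [arg.decode(errors="replace") for arg in raw.split(b"\0") if arg]

def snapshot(proc="/proc"):
    """One polling pass over /proc, returning {pid: argv} for readable processes."""
    seen = {}
    for pid in filter(str.isdigit, os.listdir(proc)):
        try:
            with open(os.path.join(proc, pid, "cmdline"), "rb") as f:
                seen[pid] = parse_cmdline(f.read())
        except OSError:
            continue  # the process exited between listdir() and open()
    return seen
```

Calling snapshot() in a tight loop and diffing the results is exactly the kind of sniffer described: any secret passed as a commandline argument will eventually show up in one of the snapshots.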

Ansible 101 After working on documentation last month, I ended up writing my first Ansible playbook this month, converting my tasksel list into a working Ansible configuration. This was a useful exercise: it allowed me to find a bunch of packages which have been removed from Debian, and it provides much better usability than tasksel. For example, it provides a --diff argument that shows which packages are missing from a given setup. I am still unsure about Ansible. Manifests do seem really verbose and I still can't get used to the YAML DSL. I could probably have done the same thing with Puppet and just run puppet apply on the resulting config. But I must admit my bias towards Python is showing here: I can't help but think Puppet is going to be way less accessible with its rewrite in Clojure and C (!)... But then again, I really like Puppet's approach of having generic types like package or service rather than Ansible's clunky apt/yum/dnf/package/win_package types...
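A playbook of the kind described is, in essence, just a package list; a minimal sketch might look like this (the package names are placeholders, not the author's actual tasksel list):

```yaml
# Hypothetical minimal playbook: install a fixed package list via apt
- hosts: localhost
  become: true
  tasks:
    - name: install the desktop package list
      apt:
        name: [git, vim, kodi]
        state: present
```

Running it with `ansible-playbook --check --diff playbook.yml` then reports which of those packages are missing from the current setup, without changing anything.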

Pat and Ham radio After responding (too late) to a request for volunteers to help in Puerto Rico, I realized that my amateur radio skills were somewhat lacking in the "packet" (data transmission, in ham jargon) domain, as I wasn't used to operating a Winlink node. Such a node can receive and transmit actual emails over the airwaves, for free, without direct access to the internet, which is very useful in disaster relief efforts. Through cursory research, I stumbled upon the new and very promising Pat project, which provides one of the first user-friendly Linux-compatible Winlink programs. I contributed documentation improvements and raised some questions regarding compatibility issues, which are still pending. But my pet issue is the establishment of pat as a normal internet citizen that uses standard protocols for receiving and sending email. Not sure how that can be implemented, but we'll see. I am also hoping to upload an official Debian package and hopefully write more about this soon. Stay tuned!

Random stuff I ended up fixing my Kodi issue by starting it as a standalone systemd service instead of through gdm3, which is now completely disabled on the media box. I simply used the following /etc/systemd/system/kodi.service file:
[Unit]
Description=Kodi Media Center
After=systemd-user-sessions.service network.target sound.target
[Service]
User=xbmc
Group=video
Type=simple
TTYPath=/dev/tty7
StandardInput=tty
ExecStart=/usr/bin/xinit /usr/bin/dbus-launch --exit-with-session /usr/bin/kodi-standalone -- :1 -nolisten tcp vt7
Restart=on-abort
RestartSec=5
[Install]
WantedBy=multi-user.target
The downside of this is that it needs Xorg to run as root, whereas modern Xorg can now run rootless. Not sure how to fix this or where... But if I put needs_root_rights=no in Xwrapper.config, I get the following error in .local/share/xorg/Xorg.1.log:
[  2502.533] (EE) modeset(0): drmSetMaster failed: Permission denied
After fooling around with iPython, I ended up trying the xonsh shell, which is supposed to provide a bash-compatible Python shell environment. Unfortunately, I found it pretty unusable as a shell: it works fine for doing Python stuff, but all my environment and legacy bash configuration files were basically ignored, so I couldn't get up and running quickly. This is too bad, because the project looked very promising... Finally, one of my TLS hosts using a Let's Encrypt certificate wasn't renewing properly, and I figured out why: the ProxyPass directive was passing everything to the backend, including the /.well-known requests, which obviously broke ACME verification. The solution was simple enough: disable the proxy for that directory:
ProxyPass /.well-known/ !
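In context, the relevant part of the virtual host looks something like this (the backend address is a hypothetical example; only the exclusion line comes from the actual fix). The ! exclusion must appear before the catch-all ProxyPass, since Apache matches these directives in order:

```apache
# Serve ACME challenges locally instead of proxying them to the backend
ProxyPass /.well-known/ !
# Everything else still goes to the backend application
ProxyPass / http://127.0.0.1:8080/
```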

29 August 2017

Carl Chenet: Send scheduled messages to both Twitter and Mastodon with the Remindr bot

Do you need to send messages to both Twitter and Mastodon? Use the Remindr bot! Remindr is written in Python and released under the GPLv3 license. 1. How to install Remindr Install Remindr from PyPI:
# pip3 install remindr
2. How to configure Remindr First, start by writing a messages.txt file with the following content:
o en Send scheduled messages to both Twitter and Mastodon with the Remindr bot https://carlchenet.com/send-scheduled-messages-to-both-twitter-and-mastodon-with-the-remindr-bot #remindr #twitter #mastodon
x en Follow Carl Chenet for great news about Free Software! https://carlchenet.com #freesoftware
The first field indicates whether the line is the next one to be considered by Remindr: o marks the next line to be sent, x means it won't be. The second field is the two-letter language code of your content, en or fr in my examples. The rest of the line makes up the body of your messages to Mastodon and Twitter. You need to configure the Mastodon and Twitter credentials in order to allow Remindr to send the messages. First you need to generate the credentials. For Twitter, you need to manually create an app on apps.twitter.com. For Mastodon, just launch the following command:
$ register_remindr_app
Some information will be asked for by the command. At the end, two files are created, remindr_usercred.txt and remindr_clientcred.txt. You're going to need them for the Remindr configuration below. Here is a complete Remindr configuration:
[mastodon]
instance_url=https://mastodon.social
user_credentials=remindr_usercred.txt
client_credentials=remindr_clientcred.txt
[twitter]
consumer_key=a6lv2gZxkvk6UbQ30N4vFmlwP
consumer_secret=j4VxM2slv0Ud4rbgZeGbBzPG1zoauBGLiUMOX0MGF6nsjcyn4a
access_token=1234567897-Npq5fYybhacYxnTqb42Kbb3A0bKgmB3wm2hGczB
access_token_secret=HU1sjUif010DkcQ3SmUAdObAST14dZvZpuuWxGAV0xFnC
[image]
path_to_image=/home/chaica/blog-carl-chenet.png
[entrylist]
path_to_list=/etc/remindr/messages.txt
Your configuration is complete! Now we have to check if everything is fine.
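As an aside, the messages.txt format described above is easy to process; here is a minimal sketch (hypothetical code, not Remindr's actual implementation) of picking out the next message to send:

```python
def next_message(lines):
    """Return (language, body) for the first line flagged 'o', else None.

    Assumes the messages.txt format above: flag, language code, then body.
    """
    for line in lines:
        flag, lang, body = line.split(" ", 2)
        if flag == "o":
            return lang, body
    return None
```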

Read the full documentation on Readthedocs.

3. How to use Remindr Now let's try your configuration by launching Remindr by hand for the first time:
$ remindr -c /etc/remindr/remindr.ini
The messages should appear on both Twitter and Mastodon. 4. How to schedule the Remindr execution The easiest way is to use your user crontab. Just add the following line to your crontab file, editing it with crontab -e:
00 10 * * * remindr -c /etc/remindr/remindr.ini
From now on, your message will be sent every day at 10:00 AM. Finally, you can help me develop tools for Mastodon and other social networks by donating anything through Liberapay (also possible with cryptocurrencies). Any contribution will be appreciated. That's a big motivation factor.
You may also follow my account @carlchenet on Mastodon.

27 August 2017

Carl Chenet: The Importance of Choosing the Correct Mastodon Instance

Remember, Mastodon is a new decentralized social network based on free software, and it is rapidly gaining users (there are already more than 1.5 million accounts). Since creating my account in June, I quickly became an addict, and I've already created several tools for this network: Feed2toot, Remindr and Boost (mostly written in Python).

Now, with all this experience, I have to stress the importance of choosing the correct Mastodon instance.

Some technical reminders on how Mastodon works

First, let's quickly clarify something about the decentralized part. In Mastodon, decentralization is achieved through a federation of dedicated servers, called "instances", each with a completely independent administration. Your user account is created on one specific instance. You have two choices:

  • Create your own instance, which requires advanced technical knowledge.
  • Create your user account on a public instance, which is the easiest and fastest way to start using Mastodon.

You can move your user account from one instance to another, but you have to follow a special procedure which can be quite long, depending on your appetite for technical manipulation and the number of followers you'll have to warn about the change. Essentially, you have to create another account on the new instance and import three lists: your followers, the accounts you have blocked, and the accounts you have muted.

From this working process, several technical and human factors will interest us.

A good technical administration for instance

As a social network, Mastodon is truly decentralized, with more than 1.5 million users on more than 2350 existing instances. As such, the most common usage is to create an account on an open instance; creating your own instance is far too difficult for the average user. Yet using an open instance creates a strong dependence on the technical administrator of the chosen instance.

The technical administrator has to deal with several obligations to ensure service continuity, such as high-quality hardware and regular backups. All of these have a price, both in money and in time. Regarding the time factor, it is better to choose an administration team over an individual, as life events can quickly change anyone's priorities. For example, Framasoft, a French association dedicated to promoting the use of Free Software, offers its own Mastodon instance, named Framapiaf. The creator of the Mastodon project also offers a quite solid instance, Mastodon.social (see below).

Regarding the money factor, many administrators of instances with a large number of users are currently asking for donations via Patreon, as hosting or renting an instance server costs money.

Mastodon.social, the first instance of the Mastodon network

The Ideological Trend Of Your Instance

While anybody could have guessed the previous technical points after the recent registration explosion on the Mastodon social network, the following point took almost everyone by surprise. Little by little, different instances revealed their "culture", their protest actions, and their propaganda on this social network.

As the instance administrator has all the power over the instance, he or she can block the instance from interacting with certain other instances, or ban its users from any interaction with other instances' users.

With everyone keeping in mind the main advantages of federated instances, this partial independence of some instances from the federation came as a huge surprise. One of the most recent examples was when the Unixcorn.xyz instance administrator banned its users from reading Aeris' account, which was on its own instance. It was a cataclysm with several consequences, which I've named the #AerisGate, as it showed the different views on moderation and its reception by various Mastodon users.

If you don't manage your own instance, when choosing the one where you'll create your account, make sure that the content you plan to toot is within the rules and compatible with the ideology of that instance's administrator. Yes, I know, it may seem surprising, but as stated above, by joining a public instance you become dependent on someone else's infrastructure, and its owner may have an ideological way of conceiving their Mastodon hosting service. As such, if you're a nazi, for example, don't open your Mastodon account on a far-left LGBT instance. Your account wouldn't stay open for long.

The moderation rules are described in the about/more page of the instance, and may contain ideological elements.

To ease the process for newcomers, a great tool is now available to help select the instance best suited to host your account.

Remember that, as stated above, Mastodon is decentralized, and as such there is no central authority you can reach in case of a conflict with your instance administrator. And nobody can force said administrator to follow his or her own rules, or not to change them on the fly.

Think Twice Before Creating Your Account

If you want to create an account on an instance you don't control, you need to check two elements: the long-term availability of the instance hosting service, often linked to the administrator or the administration group of said instance, and the ideological orientation of your instance. With these two elements checked, you'll be able to let your Mastodon account grow peacefully, without fearing an outage of your instance, or simply finding your account blocked one morning because it doesn't align with your instance's ideological line.

In Conclusion

To help me get involved in free software and write articles for this blog, please consider a donation through my Liberapay page, even if it's only a few cents per week. My Bitcoin and Monero addresses are also available on this page. Follow me on Mastodon. Translated from French to English by Stéphanie Chaptal.

21 August 2017

Carl Chenet: Remind people about your great content using social networks with Remindr

Each time I remind people about one of my best blog posts, I get positive reviews and a peak of traffic on my blog. But as an IT guy, I hate (so much) manually (gosh!) posting reminders of my articles on both my Twitter account and my Mastodon account. So I wrote Remindr. Each time you launch it, it posts content to both Mastodon and Twitter. You can attach an image to each message, and content in different languages is supported. Under the hood, it's a self-hosted (my instance just runs on my workstation) Python 3 application released under the GPLv3 license.

Going further with Remindr

How exactly does Remindr work? Remindr iterates through a list of messages in a file you write, extracts one line per execution, adds a user-defined prefix and sends the result to both the Mastodon and Twitter social networks. Here is the format of the file:
o en Automatically Send Toots To The Mastodon Social Network https://carlchenet.com/automatically-send-toots-to-the-mastodon-social-network/ #Mastodon
x fr Sur Mastodon, cr er son compte de secours  ou tout perdre https://carlchenet.com/sur-mastodon-creer-son-compte-de-secours-ou-tout-perdre/ #Mastodon
x en Automatically boost cool toots on Mastodon with the Boost bot https://carlchenet.com/automatically-boost-cool-toots-on-mastodon-with-the-boost-bot/ #Mastodon
x en Cryptocurrencies On the New Social Network Mastodon #Mastodon #bitcoin #ethereum #monero
The first field indicates whether the line is the next one to be considered by Remindr: the o marks the next line to be sent. The second field is the 2-letter language code of your content, in my example en or fr. The rest of the line composes the body of your message to Mastodon and Twitter. So each time you launch Remindr, one of the lines of your file is sent to Mastodon and Twitter. How easy to use is that?

And finally

You can help me develop tools for Mastodon and other social networks by donating anything through Liberapay (also possible with cryptocurrencies). Any contribution will be appreciated. That's a big motivation factor.
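The line-picking loop described above could be sketched like this (a hypothetical illustration, not Remindr's actual code — the `pick_next` helper, the prefix value and the flag-rotation behaviour are my assumptions):

```python
def pick_next(lines, prefix="Remember: "):
    """Return the message to post, and the lines updated for the next run.
    Hypothetical sketch of the Remindr idea, not its actual code."""
    for i, line in enumerate(lines):
        flag, lang, body = line.split(" ", 2)
        if flag == "o":
            # mark this line as sent, then flag the following one (wrapping)
            lines[i] = " ".join(("x", lang, body))
            nxt = (i + 1) % len(lines)
            _, nlang, nbody = lines[nxt].split(" ", 2)
            lines[nxt] = " ".join(("o", nlang, nbody))
            return prefix + body, lines
    return None, lines

lines = [
    "o en Automatically Send Toots To The Mastodon Social Network #Mastodon",
    "x en Automatically boost cool toots on Mastodon #Mastodon",
]
message, lines = pick_next(lines)
print(message)
```

Each run picks the flagged line, builds the message with the prefix, and moves the flag forward so the next execution sends the next reminder.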
Donate! You may also follow my account @carlchenet on Mastodon.

7 February 2017

Carl Chenet: The Gitlab database incident and the Backup Checker project

The Gitlab.com database incident of 2017/01/31 and the resulting data loss reminded everyone (at least for the next few days) how easy it is to lose data, even when you think all your systems are safe. Being really interested in the process of backing up data, I read the report with interest (kudos to the Gitlab company for being so transparent about it) and I was soooo excited to find the following sentence:
Regular backups seem to also only be taken once per 24 hours, though team-member-1 has not yet been able to figure out where they are stored. According to team-member-2 these don't appear to be working, producing files only a few bytes in size.
Whoa, guys! I'm so sorry for you about the data loss, but from my point of view I was so excited to find a big FOSS company publicly admitting and communicating about a perfect use case for the Backup Checker project, a Free Software program I've been writing these last years.

Data loss: nobody cares before, everybody cries after

Usually people don't care about backups. It's a serious business for web hosters and the backup teams of big companies, but otherwise and in other places, nobody cares. Everybody usually agrees on how important backups are, but few people make them or install an automated system to create backups, and until the day of the incident, nobody verifies they are usable. The reason is obvious: it's totally boring, and in some cases, e.g. for large archives, difficult. Because verifying backups is boring for humans, I launched the Backup Checker project in order to automate this task. Backup Checker offers a wide range of features, checking lots of different archives (tar.gz, tar.bz2, tar.xz, zip, tree of files) and offering lots of different tests (hash sum, exact size, smaller/greater than, unix rights, ...). Have a look at the official documentation for an exhaustive list of features and possible tests.

Automate the controls of your backups with Backup Checker

Checking your backups means describing in a configuration file how a backup should look, e.g. a gzipped database dump. You usually know roughly what size the archive is going to be, and what the owner and the group owner should be. Even easier, with Backup Checker you can generate this list of criteria from an actual archive, and remove unneeded criteria to create a template you can re-use for different kinds of archives. Ok, 2 minutes of your time for a real world example: I use an existing database sql dump in a tar.gz archive to automatically create the list describing this backup:
$ backupchecker -G database-dump.tar.gz
$ cat database-dump.list
[archive]
mtime  1486480274.2923253
[files]
database.sql  =7854803 uid 1000 gid 1000 owner chaica group chaica mode 644 type f mtime 1486480253.0
Now, just remove the overly precise parameters from this list to get a backup template. Here is a possible result:
[files]
database.sql  >6m uid 1000 gid 1000 mode 644 type f
We define here a template for the archive, meaning that the database.sql file in the archive should have a size greater than 6 megabytes, be owned by the user with the uid 1000 and the group with the gid 1000, have the mode 644 and be a regular file. In order to use a template instead of the complete list, you also need to remove the sha512 from the .conf file. Pretty easy hmm? Ok, just for fun, let's replicate the part of the Gitlab.com database incident mentioned above and create an archive containing an empty sql dump:
$ touch /tmp/database.sql && \
tar zcvf /tmp/database-dump.tar.gz /tmp/database.sql && \
cp /tmp/database-dump.tar.gz .
Now we launch Backup Checker with the previously created template. If you didn't change the name of the database-dump.list file, the command should only be:
$ backupchecker -C database-dump.conf
$ cat a.out 
WARNING:root:1 file smaller than expected while checking /tmp/article-backup-checker/database-dump.tar.gz: 
WARNING:root:database.sql size is 0. Should have been bigger than 6291456.
The automated controls of Backup Checker trigger a warning in the log file: the empty sql dump has been identified inside the archive.

A step further

As you can read in this article, verifying some of your backups is not a time-consuming task, given that you have a FOSS project dedicated to this task, with an easy way to create a template of your backups and to use it. This article provided a really simple example of such a use case; Backup Checker has lots more features to offer when verifying your backups. Read the official documentation for more complete descriptions of the available possibilities. Data loss, especially for projects storing user data, is always a terrible event in the life of an organization. Let's try to learn from mistakes which could happen to anyone and build better backup systems. More information about the Backup Checker project
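The size test shown above can be pictured with a few lines of Python using the standard tarfile module (a toy illustration of one Backup Checker check, not its actual implementation; the function name and message wording are mine):

```python
import tarfile

def check_min_size(archive_path, member, min_bytes):
    """Warn if a file inside a tar.gz backup is smaller than expected
    (a toy version of one Backup Checker test, not its real code)."""
    with tarfile.open(archive_path, "r:gz") as tar:
        info = tar.getmember(member)
        if info.size < min_bytes:
            return ("WARNING: %s size is %d. Should have been bigger than %d."
                    % (member, info.size, min_bytes))
    return None
```

Run against the empty dump created above, such a check would emit the same kind of warning as the one shown in the log file.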

1 February 2017

Antoine Beaupr : My free software activities, January 2017

manpages.debian.org launched The debmans package I had so lovingly worked on last month is now officially abandoned. It turns out that another developer, Michael Stapelberg, wrote his own implementation from scratch, called debiman. Both programs share a similar design: they are both static site generators that parse an existing archive and call another tool to convert manpages into HTML. We even both settled on the same converter (mdoc). But while I wrote debmans in Python, debiman is written in Go. debiman also seems much faster, being written with concurrency in mind from the start. Finally, debiman is more feature complete: it properly deals with conflicting packages, localization and all sorts of redirections. Heck, it even has a pretty logo, how can I compete? While debmans was written first and was in the process of being deployed, I had to give it up. It was a frustrating experience because I felt I wasted a lot of time working on software that ended up being discarded, especially because I put so much work into it, creating extensive documentation, an almost complete test suite and even filing a detailed core infrastructure best practices report. In the end, I think that was the right choice: debiman seemed clearly superior and the best tool should win. Plus, it meant less work for me: Michael and Javier (the previous manpages.debian.org maintainer) did all the work of putting the site online. I also learned a lot about the CII best practices program, flask, click and, ultimately, the Go programming language itself, which I'll refer to as Golang for brevity. debiman definitely brought Golang into the spotlight for me. I had looked at Go before, but it seemed to be yet another language. But seeing Michael beat me to rebuilding the service really made me look at it again more seriously.
While I really appreciate Python and will probably still use it as my language of choice for GUI work and smaller scripts, for daemons, network programs and servers I will seriously consider Golang in the future. The site is now online at https://manpages.debian.org/. I even got credited in the about page, which makes up for the disappointment.

Wallabako downloads Wallabag articles on my Kobo e-reader This obviously brings me to the latest project I worked on, Wallabako, my first Golang program ever. Wallabako is basically a client for the Wallabag application, which is a free software "read it later" service, an alternative to the likes of Pocket, Pinboard or Evernote. Back in April, I had looked at downloading my "unread articles" onto my new ebook reader, going through convoluted ways like implementing OPDS support in Wallabag, which turned out to be too difficult. Instead, I used this as an opportunity to learn Golang. After reading the quite readable Golang specification over the weekend, I found the language to be quite elegant and simple, yet very powerful. Golang feels like C, but built with concurrency and memory (and to a certain extent, type) safety in mind, along with a novel approach to OO programming. The fact that everything can be compiled into one neat little static binary was also a key feature in selecting Golang for this project, as I do not have much control over the platform my e-reader is running: it is a Linux machine running on the ARM architecture, but beyond that, there isn't much available. I couldn't afford to ship a Python interpreter in there, and while there are solutions like pyinstaller, I felt that it might not be so easy to deploy on ARM. The borg team had trouble building an ARM binary, resorting to tricks like building on a Raspberry Pi or inside an emulator. In comparison, the native Go compiler supports cross-compilation out of the box through a simple environment variable. So far Wallabako works amazingly well: when I "bag" a new article in Wallabag, either from my phone or my web browser, it will show up on my ebook reader the next time I turn on the wifi. I still need to "tap" the screen to fake the insertion of the USB cable, but we're working on automating that.
I also need to make the installation of the software much easier and improve the documentation, because so far it's unlikely that someone unfamiliar with Kobo hardware hacking will be able to install it.

Other work According to Github, I filed a bunch of bugs all over the place (25 issues in 16 repositories), sent patches everywhere (13 pull requests in 6 repositories), and tried to fix everything (created 38 commits in 7 repositories). Note that this excludes most of my work, which happens on Gitlab. January was still a very busy month, especially considering I had an accident which kept me mostly offline for about a week. Here are some details on specific projects.

Stressant and a new computer I revived the stressant project and got a new computer. This will be covered in a separate article.

Linkchecker forked After much discussion, it was decided to fork the linkchecker project, which now lives in its own organization. I still have to write community guidelines and figure out the best way to maintain a stable branch, but I am hopeful that the community will pick up the project, as multiple people have volunteered to co-maintain it. There have already been pull requests and issues reported, so that's a good sign.

Feed2tweet refresh I re-rolled my pull requests to the feed2tweet project: last time they were closed before I had time to rebase them. The author was okay with me re-submitting them, but he hasn't commented, reviewed or merged the patches yet, so I am worried they will be dropped again. At that point, I would more likely rewrite this from scratch than try to collaborate with someone who is clearly not interested in doing so...

Debian uploads

Debian Long Term Support (LTS) This is my 10th month working on Debian LTS, started by Raphael Hertzog at Freexian. I took two months off last summer, which means it's actually been a year of work on the LTS project. This month I worked on a few issues, but they were big issues, so they took a lot of time. I have done a lot of work trying to backport the header sanitization patches for CVE-2016-8743. The full report explains all the gritty details, but I ran out of time and couldn't upload the final version either. The issue mostly affects Apache servers in proxy configurations, so it's not so severe as to warrant an immediate upload anyway. A lot of my time was spent battling the tiff package. The report mentions fixes for 15 CVEs and I uploaded the result in the DLA-795-1 advisory. I also worked on a small update to graphicsmagick for CVE-2016-9830 that is still pending because the issue is minor and we're waiting for more to pile up. See the full report for details. Finally, there was a small discussion surrounding the tools to use when building and testing updates to LTS packages. The resulting conversation was interesting, but it showed that we have a big documentation problem in the Debian project. There are a lot of tools, and the documentation is old and scattered everywhere. Every time I want to contribute something to the documentation, I never know where to start. This is why I wrote a separate Debian development guide instead of contributing to existing documentation...

4 January 2017

Carl Chenet: My Free Software activities in December 2016

My monthly report for December 2016 gives an extended list of my Free Software related activities during this month. Personal projects: That's all folks! See you next month!

14 December 2016

Carl Chenet: Feed2tweet 0.8, tool to post RSS feeds to Twitter, released

Feed2tweet 0.8, a self-hosted Python app to automatically post RSS feeds to the Twitter social network, was released this December 14th. With this release, Feed2tweet now smartly manages hashtags, adding as many as possible given the size of the tweet. Also, 2 new options are available: Feed2tweet 0.8 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website, LinuxJobs.fr, a French-speaking job board, and this very blog. What's the purpose of Feed2tweet? Some online services offer to convert your RSS entries into Twitter posts. These services are usually not reliable, slow and don't respect your privacy. Feed2tweet is a Python self-hosted app; the source code is easy to read and you can enjoy the official documentation online with lots of examples. Twitter Out Of The Browser Have a look at my Github account for my other Twitter automation tools: What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.
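The hashtag behaviour can be pictured with a short sketch (an illustration of the idea, not Feed2tweet's actual implementation; the function name is mine, and the 140-character limit reflects Twitter's limit at the time):

```python
def add_hashtags(text, hashtags, limit=140):
    """Append as many hashtags as fit within the tweet length limit
    (an illustrative sketch, not Feed2tweet's implementation)."""
    tweet = text
    for tag in hashtags:
        if len(tweet) + 1 + len(tag) <= limit:
            tweet += " " + tag
        else:
            break  # no room left: drop this tag and the following ones
    return tweet

print(add_hashtags("Feed2tweet 0.8 released!", ["#python", "#rss", "#twitter"]))
```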

30 November 2016

Carl Chenet: My Free Software activities in November 2016

My monthly report for November 2016 gives an extended list of my Free Software related activities during this month. Personal projects: Journal du hacker: The Journal du hacker is a French-speaking Hacker News-like website dedicated to the French-speaking Free and Open Source Software community. That's all folks! See you next month!

16 November 2016

Carl Chenet: Retweet 0.10: Automatically retweet now using regex

Retweet 0.10, a self-hosted Python app to automatically retweet and like tweets from another user-defined Twitter account, was released this November 17th. With this release, Retweet is now able to retweet only if a tweet matches a user-provided regular expression (regex) pattern. This feature was fully contributed by Vanekjar, lots of thanks to him! Retweet 0.10 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website, LinuxJobs.fr, a French-speaking job board, and this very blog. What's the purpose of Retweet? Let's face it, it's more and more difficult to communicate about our projects. Even writing an awesome app is not enough any more. If you don't appear on a regular basis on social networks, everybody thinks you quit or that the project is stalled. But what if you have already built an audience on Twitter for, let's say, your personal account? Now you want to automatically retweet and like all tweets from the account of your new project, to push it forward. Sure, you can do it manually, like in the good old 90's, or you can use Retweet! Twitter Out Of The Browser Have a look at my Github account for my other Twitter automation tools: What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.
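The idea behind the regex filter can be sketched in a couple of lines with Python's standard re module (an illustrative snippet, not Retweet's actual code; the function name is mine):

```python
import re

def should_retweet(tweet_text, pattern):
    """Return True if the tweet matches the user-provided regex
    (illustrative sketch of the feature, not Retweet's code)."""
    return re.search(pattern, tweet_text) is not None

print(should_retweet("New job offer: Python developer #python", r"#python"))  # True
print(should_retweet("Random chatter", r"#python"))  # False
```

Only tweets whose text matches the pattern would be passed on to the retweet and like steps.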

23 October 2016

Carl Chenet: PyMoneroWallet: the Python library for the Monero wallet

Do you know the Monero cryptocurrency? It's a cryptocurrency, like Bitcoin, focused on security, privacy and untraceability. It's a great project launched in 2014, today called XMR on all cryptocurrency exchange platforms (like Kraken or Poloniex). So what's new? In order to work with a Monero wallet from Python applications, I just wrote a Python library to use the Monero wallet: PyMoneroWallet. Using PyMoneroWallet is as easy as:
$ python3
>>> from monerowallet import MoneroWallet
>>> mw = MoneroWallet()
>>> mw.getbalance()
{'unlocked_balance': 2262265030000, 'balance': 2262265030000}
Lots of features are included; you should have a look at the documentation of the monerowallet module to know them all, but here are some of them: And so on. Have a look at the complete documentation for the extensive list of available functions. UPDATE: I'm trying to launch a crowdfunding for the PyMoneroWallet project. Feel free to comment in this thread of the official Monero forum to let them know you think that PyMoneroWallet is a great idea. Feel free to contribute to this starting project to help spread the use of Monero by using the PyMoneroWallet project with your Python applications!

4 September 2016

Carl Chenet: Retweet 0.9: Automatically retweet & like

Retweet 0.9, a self-hosted Python app to automatically retweet and like tweets from another user-defined Twitter account, was released this September 2nd. Retweet 0.9 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website, LinuxJobs.fr, a French-speaking job board, and this very blog. What's the purpose of Retweet? Let's face it, it's more and more difficult to communicate about our projects. Even writing an awesome app is not enough any more. If you don't appear on a regular basis on social networks, everybody thinks you quit or that the project is stalled. But what if you have already built an audience on Twitter for, let's say, your personal account? Now you want to automatically retweet and like all tweets from the account of your new project, to push it forward. Sure, you can do it manually, like in the good old 90's, or you can use Retweet! Twitter Out Of The Browser Have a look at my Github account for my other Twitter automation tools: What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.

6 June 2016

Carl Chenet: My Free Activities in May 2016

Follow me also on Diaspora* or Twitter. Trying to catch up with my blog posts about My Free Activities, this post will tell you about my free activities from January to May 2016. 1. Personal projects 2. Journal du hacker That's all folks! See you next month!

24 May 2016

Carl Chenet: Tweet your database with db2twitter

Follow me also on Diaspora* or Twitter. You have a database (MySQL, PostgreSQL, see supported database types), a tweet pattern, and want to automatically tweet on a regular basis? No need for RSS, fancy tricks or a 3rd party website to translate RSS to Twitter. Just use db2twitter. A quick example of a tweet generated by db2twitter: The new version 0.6 offers support for tweets with an image. How cool is that? db2twitter is developed by and run for LinuxJobs.fr, the job board of the French-speaking Free Software and Open Source community. db2twitter also has cool options like: db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.
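The core idea, filling a tweet pattern from database rows, can be sketched like this (an illustrative snippet, not db2twitter's actual code; the pattern, the row fields and the example URL are all hypothetical):

```python
# Illustrative sketch of the db2twitter idea: fill a user-defined tweet
# pattern with fields taken from each database row (here a plain dict;
# db2twitter itself reads the rows through SQLAlchemy).

def row_to_tweet(row, pattern="{title} at {company} {url} #job"):
    """Build the tweet text for one job offer row."""
    return pattern.format(**row)

row = {"title": "Python developer", "company": "ExampleCorp",
       "url": "https://example.com/job/1"}
print(row_to_tweet(row))
```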

3 May 2016

Carl Chenet: Feed2tweet, your RSS feed to Twitter Python self-hosted app

Feed2tweet is a self-hosted Python app to send your RSS feed to Twitter. Feed2tweet is in production for Le Journal du hacker, a French Hacker News-style FOSS website, and LinuxJobs.fr, the job board of the French-speaking FOSS community. Feed2tweet 0.3 now only runs with Python 3. It also fixes a nasty bug with RSS feeds modifying the RSS entry order. Have a look at the Feed2tweet 0.3 changelog: Using Feed2tweet? Send us bug reports/feature requests/pull requests/comments about it!

11 January 2016

Carl Chenet: Extend your Twitter network with Retweet

Retweet is a self-hosted app coded in Python 3 allowing you to retweet all the statuses from a given Twitter account to another one. Lots of filters can be used to retweet only tweets matching given criteria. Retweet 0.8 is available on the PyPI repository and is already in the official Debian unstable repository. Retweet is in production already for Le Journal du hacker, a French FOSS community website to share and relay news, and LinuxJobs.fr, a job board for the French-speaking FOSS community. The new features of the 0.8 release allow Retweet to manage tweets based on how old they are, retweeting only if : Retweet is extensively documented; have a look at the official documentation to understand how to install, configure and use it. What about you? Does Retweet help you develop your Twitter account? Leave your comments on this article.

10 January 2016

Carl Chenet: Feed2tweet 0.2: power of the command line sending your Feed RSS to Twitter

Feed2tweet is a self-hosted Python app to send your RSS feed to Twitter. A long description of why and how to use it is available in my last post about it. Feed2tweet is in production for Le Journal du hacker, a French Hacker News-style FOSS website. Feed2tweet 0.2 brings a lot of new command line options, contributed by Antoine Beaupré @theanarcat. Taking a short extract of the Feed2tweet 0.2 changelog: Lots of issues from the previous project were also fixed. Using Feed2tweet? Send us bug reports/feature requests/pull requests/comments about it!

Next.