Search Results: "Jan Wagner"

4 January 2021

Jan Wagner: Backing up Windows (the hard way)

Sometimes you need to do things you don't like and you don't know where you will end up.
In our household there exists one (production) system running Windows. Don't ask why, and please no recommendations on how to replace it. Some things are hard to (ex)change, for example your love partner. Looking into backup with rsync on Windows (WSL), I needed to start a privileged PowerShell, so I first started an unprivileged one:
powershell
Just to start a privileged one from it:
Start-Process powershell -Verb runAs
Now you can follow the instructions from Microsoft to install OpenSSH, or just install the OpenSSH server directly:
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Check if a firewall rule was created (maybe you want to adjust it):
Get-NetFirewallRule -Name *ssh*
Start the OpenSSH server:
Start-Service sshd
Run the OpenSSH server as a service (start it automatically):
Set-Service -Name sshd -StartupType 'Automatic'
You can create the .ssh directory with the correct permissions by connecting to localhost and creating the known_hosts file.
ssh user@127.0.0.1
When you intend to use public key authentication for users in the administrators group, have a look at How to Login Windows Using SSH Key Under Local Admin. Indeed, you can get rsync running via WSL, but why load tons of dependencies onto your system? For the rsync installation I cheated a bit and used Chocolatey by running choco install rsync, but there is also an issue requesting rsync support for the OpenSSH server which includes an archive with an rsync.exe and libraries that may also fit. You can place those files, for example, into C:\Windows\System32\OpenSSH so they are in the PATH. So here we are. Now I can solve all my other issues with BackupPC, the Windows firewall and the network challenges of getting access to the isolated dedicated network of the Windows system.
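Just as an illustration, pulling data from the Windows box over SSH from the Linux side might then look roughly like this; user, hostname and the /cygdrive-style path are assumptions that depend on which rsync build ends up on the Windows side:
rsync -av -e ssh backupuser@windows-host:/cygdrive/c/Users/ /srv/backup/windows-users/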

22 December 2020

Jan Wagner: Call for testing: monitoring-plugins 2.3 in experimental

As announced recently I prepared a monitoring-plugins 2.3 package for experimental. If there is enough positive feedback by 12th January 2021, I intend to upload it to unstable, targeted for Debian Bullseye.
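For testing, assuming the experimental suite is already in your APT sources, installing the package boils down to:
apt-get update
apt-get install -t experimental monitoring-plugins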

Happy testing.

13 December 2020

Jan Wagner: Monitoring Plugins 2.3 released

While our last release has matured for quite a while, demands for a new release were raised within our community. Development settled down this fall and @sni had already been using master in production for a while, so we thought about releasing. And with the Debian freeze coming anyway: let's cut a new upstream release! The last question was: who should cut the release? The last releases were done by @JeremyHolger, but these days everybody is short on time. Fortunately Holger has documented the whole release process very well, so I jumped on the bandwagon and slipped into the release wizard role.
Surprisingly, it seems I was able to fix all the mistakes I made during the release process (besides using the 2.2 link in the release announcement mail) and got the monitoring-plugins 2.3 release out successfully. Grab the new Monitoring Plugins release while it's hot (and give the new check_curl a try)!
Hopefully monitoring-plugins 2.3 will hit Debian experimental soon.

18 January 2017

Jan Wagner: Migrating Gitlab non-packaged PostgreSQL into omnibus-packaged

With the release of Gitlab 8.15 it was announced that PostgreSQL needs to be upgraded. As I had migrated from a source installation, I used to have an external PostgreSQL database instead of using the one shipped with the omnibus package.
So I decided to do the data migration into the omnibus PostgreSQL database now, which I had skipped before. Let's have a look at the databases:
$ sudo -u postgres psql -d template1
psql (9.2.18)  
Type "help" for help.
gitlabhq_production=# \l  
                              List of databases
        Name         | Owner | Encoding | Collate |  Ctype  | Access privileges
---------------------+-------+----------+---------+---------+-------------------
 gitlabhq_production | git   | UTF8     | C.UTF-8 | C.UTF-8 |
 gitlab_mattermost   | git   | UTF8     | C.UTF-8 | C.UTF-8 |
gitlabhq_production=# \q  
Dump the databases and stop PostgreSQL. You may need to adjust database names and users to your setup.
$ su postgres -c "pg_dump gitlabhq_production -f /tmp/gitlabhq_production.sql" && \
su postgres -c "pg_dump gitlab_mattermost -f /tmp/gitlab_mattermost.sql" && \  
/etc/init.d/postgresql stop
Activate PostgreSQL shipped with Gitlab Omnibus
$ sed -i "s/^postgresql\['enable'\] = false/#postgresql\['enable'\] = false/g" /etc/gitlab/gitlab.rb && \
sed -i "s/^#mattermost\['enable'\] = true/mattermost\['enable'\] = true/" /etc/gitlab/gitlab.rb && \  
gitlab-ctl reconfigure  
Testing if the connection to the databases works
$ su - git -c "psql --username=gitlab  --dbname=gitlabhq_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)  
Type "help" for help.
gitlabhq_production=# \q  
$ su - git -c "psql --username=gitlab  --dbname=mattermost_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)  
Type "help" for help.
mattermost_production=# \q  
Ensure pg_trgm extension is enabled
$ sudo gitlab-psql -d gitlabhq_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
$ sudo gitlab-psql -d mattermost_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
Adjust ownership in the database dumps. Please verify whether users and databases need to be adjusted for your setup too.
$ sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/gitlabhq_production.sql && \
sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlabhq_production.sql  
$ sed -i "s/OWNER TO git;/OWNER TO gitlab_mattermost;/" /tmp/gitlab_mattermost.sql && \
sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlab_mattermost.sql  
(Re)import the data
$ sudo gitlab-psql -d gitlabhq_production -f /tmp/gitlabhq_production.sql
$ sudo gitlab-psql -d gitlabhq_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
sudo gitlab-psql -d gitlabhq_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'  
$ sudo gitlab-psql -d mattermost_production -f /tmp/gitlab_mattermost.sql
$ sudo gitlab-psql -d mattermost_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
sudo gitlab-psql -d mattermost_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'  
Make use of the shipped PostgreSQL
$ sed -i "s/^gitlab_rails\['db_/#gitlab_rails\['db_/" /etc/gitlab/gitlab.rb && \
sed -i "s/^mattermost\['sql_/#mattermost\['sql_/" /etc/gitlab/gitlab.rb && \  
gitlab-ctl reconfigure  
Now you should be able to connect to all the Gitlab services again. Optionally remove the external database
apt-get remove postgresql postgresql-client postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-common  
Maybe you also want to purge the old database content
apt-get purge postgresql-9.4  

3 November 2016

Jan Wagner: Container Orchestration Thoughts

For some time now everybody (read: developers) wants to run their new microservice stacks in containers. I can understand that building and testing an application is important for developers.
One of the benefits of containers is that developers can (in theory) put new versions of their applications into production on their own. This is the point where operations is affected, and operations needs to evaluate whether that can evolve into a better workflow. For yolo^WdevOps people there are some challenges that need to be solved, or at least mitigated, when things need to be done at large(r) scale.

Orchestration Engine Running Docker, currently the most popular container solution, on a single host with the docker command line client is something you can do, but it leaves the gap between dev and ops open.

UI For Docker For some time now, UI For Docker has been available for visualizing and managing containers on a single Docker node. It's pretty awesome, and the best feature so far is the Container Network view, which also shows the linked containers.

Portainer Portainer is pretty new and can be deployed as easily as UI For Docker. But its (first) great advantage: it can handle Docker Swarm. Besides that it has many other great features.

Rancher Rancher describes itself as a 'container management platform' that 'supports and manages all of your Kubernetes, Mesos, and Swarm clusters'. This is great, because these are currently all of the relevant Docker cluster orchestrations on the market. For the use cases we are facing, Kubernetes and Mesos both seem like bloated beasts. Usman Ismail has written a really good comparison of orchestration engine options which goes into the details.

Docker Swarm As there is currently no clear de facto standard/winner of the (container) orchestration wars, I would avoid being in a vendor lock-in situation (yet). Docker Swarm seems to be evolving and is gaining nice features other competitors don't provide.
Due to the native integration into the Docker framework and the great community, I believe Docker Swarm will be the Docker orchestration of choice in the long run. This should be supported by Rancher 1.2, which is not released yet.
From this point of view it looks very reasonable that Docker Swarm in combination with Rancher (1.2) might be a good strategy for maintaining your container farms in the future. If you are thinking of putting Docker Swarm into production in its current state, I recommend reading Docker swarm mode: What to know before going live on production by Panjamapong Sermsawatsri.

Persistent Storage While it is a best practice to use data volume containers these days, providing persistent storage across multiple hosts for shared volumes seems to be tricky. In theory you can mount a shared-storage volume as a data volume, and there are several volume plugins which support shared storage. For example you can use the convoy plugin, which gives you:
  • thin provisioned volumes
  • snapshots of volumes
  • backup of snapshots
  • restore volumes
As backend you can use:
  • Device Mapper
  • Virtual File System(VFS)/Network File System(NFS)
  • Amazon Elastic Block Store(EBS)
The good thing is that convoy is integrated into Rancher. For more information I suggest reading Setting Up Shared Volumes with Convoy-NFS, which also mentions some limitations. If you want to test the Persistent Storage Service, Rancher provides some documentation. I have not actually evaluated shared-storage volumes yet, but I don't see a solution I would love to use in production (at least on-premise) without strong downsides. Maybe things will move on and there will be a great solution for these caveats in the future.
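Just for illustration, assuming the convoy plugin and its daemon are already set up, using such a volume could look roughly like this (the volume, container and image names are made up):
docker volume create --driver convoy --name shared-data
docker run -d --name web -v shared-data:/var/lib/app/data nginx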

Keeping base images up-to-date For some time now there have been many projects that try to detect security problems in your container images in several ways.
Besides general security considerations, you somehow need to deal with issues in the base images you build your applications on. Of course, even if you know you have a security issue in your application image, you still need to fix it, and how to do that depends on the way your application image is built.

Ways to base your application image
  • You can build your application image entirely from scratch, which leaves all the work to your development team; I wouldn't recommend doing it that way.
  • You can also create one (or more) intermediate image(s) that will be used by your development team.
  • The development team might base their work on images from publicly available or private registries (for example the one bundled with your GitLab CI/CD solution).

What's the struggle with the base image? If you are using images that are not (well) maintained by other people, you have to wait for them to fix your base image. Using external images might also lead to trust problems (can you trust those people in general?).
In an ideal world, your developers always have fresh base images with security issues fixed. This can probably be achieved by rebuilding every intermediate image periodically or whenever the base image changes, for example along the lines of the sketch below.
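A rough, cron-friendly rebuild script; the registry and image names are placeholders, not taken from this article:
#!/bin/bash
set -e
# Rebuild the intermediate image, pulling the latest upstream base image first.
docker build --pull -t registry.example.org/team/app-base:latest \
  /srv/docker/images/app-base   # directory with a Dockerfile FROM the upstream base
docker push registry.example.org/team/app-base:latest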

Paradigm change Anyway, if you have a new application image available (with no known security issues), you need to deploy it to production. This is summarized by Jason McKay in his article Docker Security: How to Monitor and Patch Containers in the Cloud:
To implement a patch, update the base image and then rebuild the application image. This will require systems and development teams to work closely together.
So patching security issues in the container world changes the workflow significantly. In the old world, operations teams mostly rolled out security fixes for the base systems independently of the development teams.
Now that containers are hitting the production area, this might change things significantly.

Bringing updated images to production Imagine your development team doesn't work steadily on a project because the product owner considers it feature complete. The base image is provided (in some way) consistently without security issues. The application image is built on top of it automatically on every update of the base image.
How do you push the security fixes to production in such a scenario? From my point of view you have two choices:
  • Require the development team to test the resulting application image and then put it into production
  • Push the new application image without review by the development team into production
The first scenario might lead to a significant delay until the fixes hit production, caused by the probably infrequent work of the development team. The latter brings your security fixes to production early, at the notably higher risk of breaking your application. This risk can be reduced by the development team implementing massive tests in the CI/CD pipelines. Rolling updates provided by Docker Swarm might also reduce the risk of ending up with a broken application. When you are implementing an update process for your (application) images in production, you should consider Watchtower, which provides automatic updates for Docker containers.
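For example, with Docker Swarm mode a rolling update of a service might look roughly like this (the service and image names are placeholders):
docker service update \
  --image registry.example.org/team/app:2.0.1 \
  --update-parallelism 1 \
  --update-delay 10s \
  app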

Conclusion Not being the product owner or the operations part of an application whose widely adopted usage would compensate for the tradeoffs we are still facing, I tend not to move large-scale production projects into a container environment.
This does not mean it might be a bad idea for others, but I'd like to sort out some of the caveats first. I'm still interested in putting smaller projects into production, not being scared of reimplementing or moving them to a new stack.
For smaller projects with a small number of hosts, Portainer doesn't look bad, nor does Rancher with the Cattle orchestration engine if you just want to manage a couple of nodes. Things are going to get interesting if Rancher 1.2 supports Docker Swarm clusters out of the box. Let's see what the future brings to the container world and how to make a great stack out of it.

Update I suggest reading Docker in Production: A History of Failure and the answer Docker in Production: A retort to understand the actual challenges of running Docker in larger-scale production environments.

29 January 2016

Jan Wagner: Oxidized - silly attempt at (Really Awesome New Cisco confIg Differ)

For ages I have wanted to replace this freaking backup solution for our network equipment, based on some hacky shell scripts and expect, which uploads the configs to a TFTP server. Years ago I stumbled upon RANCID (Really Awesome New Cisco confIg Differ) but had no time to implement it. Now I have returned to my idea of getting rid of all our old crap.
I don't remember where, I think it was at DENOG2, but I saw RANCID coupled with a VCS, where the NOC was notified about configuration (and inventory) changes by mailing the configuration diff, and the history was kept in the VCS.
Good old RANCID does not seem to support writing into a VCS out of the box. But to the rescue comes rancid-git, a fork that promises git extensions and support for colorized emails. So far so good. While I was searching for a VCS-capable RANCID, somewhere under a stone I found Oxidized, a 'silly attempt at rancid'. Looking at it, it seems more sophisticated, so I thought this might be the right attempt. Unfortunately there is no Debian package available, but I found an ITP created by Jonas. Anyway, just for looking into it, I thought the Docker path for a testbed might be a good idea, as no Debian package is available (yet). Oxidized only needs a config file, and as a node source a RANCID-compatible router.db file can be used (besides SQLite and an HTTP backend). A migration into a production environment seems pretty easy. So I gave it a go. I assume Docker is installed already. There seems to be a Docker image on Docker Hub that looks official, but it does not seem to be maintained (currently). An issue is open for automated building of the image.

Creating Oxidized container image The official documentation describes the procedure. I used a slightly different approach.
docking-station:~# mkdir -p /srv/docker/oxidized/  
docking-station:~# git clone https://github.com/ytti/oxidized \  
 /srv/docker/oxidized/oxidized.git
docking-station:~# docker build -q -t oxidized/oxidized:latest \  
 /srv/docker/oxidized/oxidized.git
I thought it might be a good idea to also tag the image with the current version of the gem.
docking-station:~# docker tag oxidized/oxidized:latest \  
 oxidized/oxidized:0.11.0
docking-station:~# docker images | grep oxidized
oxidized/oxidized   latest    35a325792078  15 seconds ago  496.1 MB  
oxidized/oxidized   0.11.0    35a325792078  15 seconds ago  496.1 MB  
Create the initial default configuration as described in the documentation.
docking-station:~# mkdir -p /srv/docker/oxidized/.config/
docking-station:~# docker run -e CONFIG_RELOAD_INTERVAL=600 \  
 -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
 -p 8888:8888/tcp -t oxidized/oxidized:latest oxidized

Adjusting configuration After this I adjusted the default configuration to write a log, store the node configs in a bare git repository, keep node secrets in router.db, and add some hooks for debugging; a rough sketch is shown below.
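The paths assume the container setup above and the keys follow the oxidized documentation, so this is only an approximation, not my actual file:
docking-station:~# cat > /srv/docker/oxidized/.config/config << EOF
log: /root/.config/oxidized/log/oxidized.log
output:
  default: git
  git:
    user: Oxidized
    email: oxidized@example.org
    repo: /root/.config/oxidized/oxidized.git
source:
  default: csv
  csv:
    file: /root/.config/oxidized/router.db
    delimiter: !ruby/regexp /:/
    map:
      name: 0
      model: 1
      username: 2
      password: 3
    vars_map:
      enable: 4
model_map:
  cisco: ios   # map the rancid-style 'cisco' type to the oxidized 'ios' model
EOF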

Creating node configuration
docking-station:~# echo "7204vxr.lab.cyconet.org:cisco:admin:password:enable" >> \  
 /srv/docker/oxidized/.config/router.db
docking-station:~# echo "ccr1036.lab.cyconet.org:routeros:admin:password" >> \  
 /srv/docker/oxidized/.config/router.db

Starting the oxidized beast
docking-station:~# docker run -e CONFIG_RELOAD_INTERVAL=600 \  
 -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
 -p 8888:8888/tcp -t oxidized/oxidized:latest oxidized
Puma 2.16.0 starting...  
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://127.0.0.1:8888
If you want the container to be started automatically with the Docker daemon, you can start it with --restart always and Docker will take care of it. If I wanted to make it permanent, I would use a systemd unit file, roughly along the lines of the sketch below.
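An untested sketch, with names and options taken from the docker run command above:
docking-station:~# cat > /etc/systemd/system/oxidized.service << EOF
[Unit]
Description=Oxidized Service
After=docker.service
Requires=docker.service
[Service]
ExecStartPre=-/usr/bin/docker rm -f oxidized
ExecStart=/usr/bin/docker run --name oxidized -e CONFIG_RELOAD_INTERVAL=600 \
 -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
 -p 8888:8888/tcp oxidized/oxidized:latest oxidized
ExecStop=/usr/bin/docker stop oxidized
[Install]
WantedBy=multi-user.target
EOF
docking-station:~# systemctl enable oxidized && systemctl daemon-reload && systemctl start oxidized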

Reload configuration immediately If you don't want to wait for the automatic reload of the configuration, you can trigger it manually.
docking-station:~# curl -s http://localhost:8888/reload?format=json \
 -o /dev/null
docking-station:~# tail -2 /srv/docker/oxidized/.config/log/oxidized.log  
I, [2016-01-29T16:50:46.971904 #1]  INFO -- : Oxidized starting, running as pid 1  
I, [2016-01-29T16:50:47.073307 #1]  INFO -- : Loaded 2 nodes  

Writing nodes configuration
docking-station:/srv/docker/oxidized/.config/oxidized.git# git shortlog  
Oxidizied (2):  
      update 7204vxr.lab.cyconet.org
      update ccr1036.lab.cyconet.org
Writing the node configurations into a local bare git repository is neat but far from perfect. It would be cool to have all the stuff in a central VCS. So I'm pushing it to one every 5 minutes with a cron job.
docking-station:~# cat /etc/cron.d/doxidized  
# m h dom mon dow user  command                                                 
*/5 * * * *    root    $(/srv/docker/oxidized/bin/oxidized_cron_git_push.sh)
docking-station:~# cat /srv/docker/oxidized/bin/oxidized_cron_git_push.sh  
#!/bin/bash
DOCKER_OXIDIZED_BASE="/srv/docker/oxidized/"  
OXIDIZED_GIT_DIR=".config/oxidized.git"
cd ${DOCKER_OXIDIZED_BASE}/${OXIDIZED_GIT_DIR}
git push origin master --quiet  
Now, having all the node configurations in a source code hosting system, we can browse the configurations, changes and history, and even set up notifications for changes. Mission accomplished! Now I can test the coverage of our equipment. The last thing that would make me super happy: an oxidized Debian package!

21 January 2016

Jan Wagner: Using nginx as reverse proxy (for containered Ghost)

In some cases it might be a good idea to use a reverse proxy in front of a web application. Nginx is a very common solution for this scenario these days. As I started with containers for some of my playgrounds, I decided to go this route.

Container security When looking around at how to put nginx in front of a Docker web application, you find that in most cases nginx itself is also run as a Docker container.
In my eyes Docker containers have a huge disadvantage. To get updated software (at least security-related) into production, you have to hope that your container image is well maintained, or you have to take care of it yourself. If this is not the case, you should worry.
As long as you don't have container solutions deployed at large scale (and make use of automatic rebuilding and deploying of your container images), I would recommend keeping the footprint of your containerized applications as small as possible from a security point of view. So I decided to run my nginx on the same system where the Docker web applications are living, but you could also place it on a system in front of your container hosts. Updates are supplied via the usual distribution security updates.

Installing nginx
# aptitude install nginx
I won't advise you on the usual steps of setting up nginx, but will focus on the things required to proxy into your container web application.

Configuration of nginx As our Docker container for Ghost exposes port 2368, we need to define our upstream server. I've done that in conf.d/docker-ghost.conf.
upstream docker-ghost {
  server localhost:2368;
}
The vHost configuration could go into /etc/nginx/nginx.conf, but I would recommend using a config file in /etc/nginx/sites-available/ instead.
server {
  listen 80;
  server_name log.cyconet.org;
  include /etc/nginx/snippets/ghost_vhost.conf;
  location / {
    proxy_pass                          http://docker-ghost;
    proxy_set_header  Host              $http_host;   # required for docker client's sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;
  }
}
Let's enable the configuration and reload nginx:
# ln -s ../sites-available/ghost.conf /etc/nginx/sites-enabled/ghost.conf && \
 service nginx configtest && service nginx reload

Going further This is a very basic configuration. You might think about delivering static content (like images) directly from your Docker data volume, caching and maybe encryption.

19 January 2016

Jan Wagner: Trying icinga2 and icingaweb2 with Docker

In case you ever wanted to look at Icinga2, even at its distributed features, without messing around with installing whole server setups, this might be interesting for you. First, you need to have Docker running on your system. For more information, have a look at my previous post!

Initiating Docker images
$ git clone https://github.com/joshuacox/docker-icinga2.git && \
  cd docker-icinga2
$ make temp
[...]
$ make grab
[...]
$ make prod
[...]

Setting IcingaWeb2 password (Or using the default one)
$ make enter
docker exec -i -t $(cat cid) /bin/bash
root@ce705e592611:/# openssl passwd -1 f00b4r  
$1$jgAqBcIm$aQxyTPIniE1hx4VtIsWvt/
root@ce705e592611:/# mysql -h mysql icingaweb2 -p -e \  
  "UPDATE icingaweb_user SET password_hash='$1$jgAqBcIm$aQxyTPIniE1hx4VtIsWvt/' WHERE name='icingaadmin';"
Enter password:  
root@ce705e592611:/# exit  

Setting Icinga Classic UI password
$ make enter
docker exec -i -t $(cat cid) /bin/bash
root@ce705e592611:/# htpasswd /etc/icinga2-classicui/htpasswd.users icingaadmin  
New password:  
Re-type new password:  
Adding password for user icingaadmin  
root@ce705e592611:/# exit  

Cleaning things up and making permanent
$ docker stop icinga2 && docker stop icinga2-mysql
icinga2  
icinga2-mysql  
$ cp -a /tmp/datadir ~/docker-icinga2.datadir
$ echo "~/docker-icinga2.datadir" > ./DATADIR
$ docker start icinga2-mysql && rm cid && docker rm icinga2 && \
  make runprod
icinga2-mysql  
icinga2  
chmod 777 /tmp/tmp.08c34zjRMpDOCKERTMP  
d34d56258d50957492560f481093525795d547a1c8fc985e178b2a29b313d47a  
Now you should be able to access the IcingaWeb2 web interface on http://localhost:4080/icingaweb2 and the Icinga Classic UI web interface at http://localhost:4080/icinga2-classicui. For further information about this Docker setup please consult the documentation written by Joshua Cox who has worked on this project. For information about Icinga2 itself, please have a look into the Icinga2 Documentation.

14 January 2016

Jan Wagner: Running Ghost blogging platform via Docker

When I was thinking about using Ghost, I read the installation guide and then just closed the browser window.
I didn't want to install npm, yet another package manager, and hack together init scripts, not to speak of updating Ghost itself. Some weeks later I thought about using Ghost again. It has a nice Markdown editor and some other nice features. Since everybody is jumping on the Docker bandwagon at the moment and I had already used it for some tests, I thought trying the Ghost Docker image might be a good idea. If you are interested in how I did that, read on. I assume you have installed a stock Debian Jessie.

Installing Docker
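If Docker is not installed yet: on a stock Debian Jessie it can probably be pulled from jessie-backports, assuming backports is enabled in your sources.list (otherwise follow the upstream installation documentation):
# apt-get install -t jessie-backports docker.io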

Pulling the Docker image Just in case you didn't already, you need to (re)start Docker to work with it: service docker restart
# docker pull ghost

Making Ghost (container image) run forever I did not like systemd in the first place, for many reasons, but in some circumstances it makes sense. In the case of handling a Docker container, using a systemd unit file makes life much easier.
# mkdir -p /srv/docker/ghost/
# cat > /etc/systemd/system/ghost.service << EOF
[Unit]
Description=GHost Service  
After=docker.service  
Requires=docker.service
[Service]
ExecStartPre=-/usr/bin/docker kill ghost  
ExecStartPre=-/usr/bin/docker rm ghost  
ExecStartPre=-/usr/bin/docker pull ghost  
ExecStart=/usr/bin/docker run  --name ghost --publish 2368:2368 --env 'NODE_ENV=production' --volume /srv/docker/ghost/:/var/lib/ghost ghost  
ExecStop=/usr/bin/docker stop ghost
[Install]
WantedBy=multi-user.target  
EOF  
# systemctl enable ghost && systemctl daemon-reload && systemctl start ghost 
This will start your container on boot, and it even looks for a new Docker image and fetches it if needed. If you don't like this behavior, just comment out the line in the unit file and reread it with systemctl daemon-reload. Now you should have something listening on port 2368:
# netstat -tapn | grep 2368
tcp6       0      0 :::2368                 :::*                    LISTEN      7061/docker-proxy  
Update: Joël Dinel sent me a mail saying that starting your Docker container with --restart always will take care that it is brought up again if Docker or (even) the whole system gets restarted. Indeed I had used that before, and it might be a lightweight solution, but I liked the systemd unit file solution a lot more.

Persistent Data Thanks to the Docker mount option you can find all your data in /srv/docker/ghost/. So your blog will still have its content, even if the ghost Docker image is updated:
# ls /srv/docker/ghost/
apps  config.js  data  images  themes  

Accessing the container To kick your Ghost into production, it might be useful to make it available at least on port 80. This can be done, for example, by changing your Docker publish configuration or adding a DNAT rule to your firewall. But I would recommend using a proxy in front of your Docker container. This might be part of one of my next articles.

9 January 2016

Jan Wagner: New blogging engine

Exactly 3 years after I moved from Wordpress to Octopress, I thought it's time for something new. Some of you might have noticed that I haven't blogged much in the past. A new Octopress version was promised a year ago. While I liked writing in Markdown, the deployment workflow was horribly broken and keeping Octopress up to date was impossible. I blogged so seldom that recently I needed to consult the documentation every time. After looking into several projects, Ghost seems most promising. And the good news: it has a split-screen Markdown editor with integrated live preview. There are several migration scripts out there, but I only found one which was also able to export tags. The import into Ghost worked like a charm.

25 August 2015

Lunar: Reproducible builds: week 17 in Stretch cycle

A good amount of the Debian reproducible builds team had the chance to enjoy face-to-face interactions during DebConf15.
Names in red and blue were all present at DebConf15
Picture of the 'reproducible builds' talk during DebConf15
Hugging people with whom one has been working tirelessly for months gives a lot of warm-fuzzy feelings. Several recorded and hallway discussions paved the way to solve the remaining issues to get reproducible builds to be part of Debian proper. Both talks from the Debian Project Leader and the release team mentioned the effort as important for the future of Debian. A forty-five minute talk presented the state of the reproducible builds effort. It was then followed by an hour-long roundtable to discuss current blockers regarding dpkg, .buildinfo and their integration in the archive.
Picture of the 'reproducible builds' roundtable during DebConf15

Toolchain fixes Reiner Herrmann submitted a patch to make rdfind sort the processed files before doing any operation. Chris Lamb proposed a new patch for wheel implementing support for SOURCE_DATE_EPOCH instead of the custom WHEEL_FORCE_TIMESTAMP. akira sent one making man2html SOURCE_DATE_EPOCH aware. Stéphane Glondu reported that dpkg-source would not respect tarball permissions when unpacking under a umask of 002. After hours of iterative testing during the DebConf workshop, Sandro Knauß created a test case showing how pdflatex output can be non-deterministic with some PNG files.

Packages fixed The following 65 packages became reproducible due to changes in their build dependencies: alacarte, arbtt, bullet, ccfits, commons-daemon, crack-attack, d-conf, ejabberd-contrib, erlang-bear, erlang-cherly, erlang-cowlib, erlang-folsom, erlang-goldrush, erlang-ibrowse, erlang-jiffy, erlang-lager, erlang-lhttpc, erlang-meck, erlang-p1-cache-tab, erlang-p1-iconv, erlang-p1-logger, erlang-p1-mysql, erlang-p1-pam, erlang-p1-pgsql, erlang-p1-sip, erlang-p1-stringprep, erlang-p1-stun, erlang-p1-tls, erlang-p1-utils, erlang-p1-xml, erlang-p1-yaml, erlang-p1-zlib, erlang-ranch, erlang-redis-client, erlang-uuid, freecontact, givaro, glade, gnome-shell, gupnp, gvfs, htseq, jags, jana, knot, libconfig, libkolab, libmatio, libvsqlitepp, mpmath, octave-zenity, openigtlink, paman, pisa, pynifti, qof, ruby-blankslate, ruby-xml-simple, timingframework, trace-cmd, tsung, wings3d, xdg-user-dirs, xz-utils, zpspell.
The following packages became reproducible after getting fixed:
Uploads that might have fixed reproducibility issues:
Some uploads fixed some reproducibility issues but not all of them:
Patches submitted which have not made their way to the archive yet: Stéphane Glondu reported two issues regarding the embedded build date in omake and cduce. Aurélien Jarno submitted a fix for the breakage of the make-dfsg test suite. As binutils now creates deterministic libraries by default, Aurélien's patch makes use of a wrapper to give the U flag to ar. Reiner Herrmann reported an issue with pound which embeds random dhparams in its code during the build. Better solutions are yet to be found.

reproducible.debian.net Package pages on reproducible.debian.net now have a new layout improving readability, designed by Mattia Rizzolo, h01ger, and Ulrike. The navigation is now on the left, as vertical space is more valuable nowadays. armhf is now enabled on all pages except the dashboard. Actual tests on armhf are expected to start shortly. (Mattia Rizzolo, h01ger) The limit on how many packages people can schedule using the reschedule script on Alioth has been bumped to 200. (h01ger) mod_rewrite is now used instead of JavaScript for the form in the dashboard. (h01ger) Following the rename of the software, debbindiff has mostly been replaced by either diffoscope or differences in generated HTML and IRC notification output. Connections to UDD have been made more robust. (Mattia Rizzolo)

diffoscope development diffoscope version 31 was released on August 21st. This version improves fuzzy-matching by using the tlsh algorithm instead of ssdeep. New command line options are available: --max-diff-input-lines and --max-diff-block-lines to override limits on diff input and output (Reiner Herrmann), --debugger to dump the user into pdb in case of crashes (Mattia Rizzolo). jar archives should now be detected properly (Reiner Herrmann). Several general code cleanups were also done by Chris Lamb.

strip-nondeterminism development Andrew Ayer released strip-nondeterminism version 0.010-1. Java properties files in jar archives should now be detected more accurately. A missing dependency spotted by Stéphane Glondu has been added.

Testing directory ordering issues: disorderfs During the reproducible builds workshop at DebConf, participants identified that we were still short of a good way to test variations in filesystem behavior (e.g. file ordering or disk usage). Andrew Ayer took a couple of hours to create disorderfs. Based on FUSE, disorderfs is an overlay filesystem that will mount the content of a directory at another location. For this first version, it makes the order in which files appear in a directory random.

Documentation update Dhole documented how to implement support for SOURCE_DATE_EPOCH in Python, bash, Makefiles, CMake, and C. Chris Lamb started to convert the wiki page describing SOURCE_DATE_EPOCH into a Freedesktop-like specification in the hope that it will convince more upstreams to adopt it.

Package reviews 44 reviews have been removed, 192 added and 77 updated this week. New issues identified this week: locale_dependent_order_in_devlibs_depends, randomness_in_ocaml_startup_files, randomness_in_ocaml_packed_libraries, randomness_in_ocaml_custom_executables, undeterministic_symlinking_by_rdfind, random_build_path_by_golang_compiler, and images_in_pdf_generated_by_latex. 117 new FTBFS bugs have been reported by Chris Lamb, Chris West (Faux), and Niko Tyni.

Misc. Some reproducibility issues might face us very late. Chris Lamb noticed that the test suite for python-pykmip was now failing because its test certificates have expired. Let's hope no packages are hiding a certificate valid for 10 years somewhere in their source! Pictures courtesy and copyright of Debian's own paparazzi: Aigars Mahinovs.
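As a generic illustration of the SOURCE_DATE_EPOCH convention mentioned above (a sketch, not taken from Dhole's examples), a build script can derive any embedded timestamp from it instead of the current time:
# Prefer SOURCE_DATE_EPOCH so rebuilding the same source yields the same date.
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%d %H:%M:%S')
echo "#define BUILD_DATE \"${BUILD_DATE}\"" > build_date.h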

23 March 2015

Jan Wagner: Wordpress dictionary attack

Early this morning my monitoring system notified me about unusually high outgoing traffic on my hosting platform. I traced the problem down to the webserver which is also hosting this abandoned website. Looking into it with iptraf revealed that this traffic was coming from only one IP. At first I thought somebody might be grabbing my Debian packages from ftp.cyconet.org. But no, it was targeting my highly sophisticated blogging platform.
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | tail -2
46.235.43.146 - - [23/Mar/2015:08:20:12 +0100] "POST /wp-login.php HTTP/1.0" 404 22106 "-" "-"
46.235.43.146 - - [23/Mar/2015:08:20:12 +0100] "POST /wp-login.php HTTP/1.0" 404 22106 "-" "-"
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | wc -l
83676
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | wc -l
83782
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | grep -v wp-login.php | wc -l
0
It makes me really sad to see that dictionary attacks are hammering away with such power these days, without even evaluating the 404 response.

9 October 2014

Jan Wagner: Updated Monitoring Plugins Version is coming soon

Three months ago version 2.0 of the Monitoring Plugins was released. Since then many changes have been integrated. You can find a quick overview in the upstream NEWS. Now it's time to move forward and a new release is expected soon. It would be very welcome if you could give the latest source snapshot a try. You can also give the Debian packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/. Right after the stable release, the new packages will be uploaded into Debian unstable. All the packaging changes can be observed in the changelog. Feedback is much appreciated via the issue tracker or the Monitoring Plugins development mailing list. Update: The official call for testing is available.

8 October 2014

Jan Wagner: Updated Monitoring Plugins Version is coming soon

Three months ago version 2.0 of the Monitoring Plugins was released. Since then many changes have been integrated. You can find a quick overview in the upstream NEWS. Now it's time to move forward and a new release is expected soon. It would be very welcome if you could give the latest source snapshot a try. You can also give the Debian packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/. Right after the stable release, the new packages will be uploaded into Debian unstable. All the packaging changes can be observed in the changelog. Feedback is much appreciated via the issue tracker or the Monitoring Plugins development mailing list.

25 September 2014

Jan Wagner: Redis HA with Redis Sentinel and VIP

For a current project we decided to use Redis for several reasons. As availability is a critical part, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave. Setting up the Redis replication was straightforward, and so was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass). The more interesting part is how to migrate the clients over to the new master in case of a failover. While Redis Sentinel could also be used as a configuration provider, we decided not to use this feature, as the application would need to request the current master node from Redis Sentinel very often, which may have a performance impact.
The first idea was to use some kind of VRRP, as implemented in keepalived or something like that. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':
# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
# 
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
# 
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.
This looks pretty good, and since a <role> is provided, I thought it would be a good idea to just call a script which evaluates this value and, based on it, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this was not working, as <role> did not consistently return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying and I was close to giving up.
Fortunately I stumbled upon a (maybe) Chinese post about Redis Sentinel that did the same thing I was doing. On second look I recognized that the decision was made on $6, which is <to-ip>, nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well; a rough sketch of the idea is shown below. Some notes about convergence: currently it takes roughly 6-7 seconds for the VIP to be migrated over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we expect this not to happen often, we need to design the application using our Redis setup to handle this (hopefully) rare scenario.
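Sketched roughly, with the VIP, interface and error handling as placeholders rather than the actual script:
#!/bin/bash
# Called by Redis Sentinel as:
#   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
VIP="192.0.2.100/24"   # assumed virtual IP shared between the Redis nodes
IFACE="eth0"           # assumed interface carrying the VIP
TO_IP="$6"             # IP of the newly elected master

if ip -o -4 addr show dev "$IFACE" | grep -q " ${TO_IP}/"; then
    # the new master is this host: bring the VIP up here
    ip addr add "$VIP" dev "$IFACE" 2>/dev/null || true
else
    # another host became master: make sure the VIP is gone locally
    ip addr del "$VIP" dev "$IFACE" 2>/dev/null || true
fi
exit 0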

8 August 2014

Jan Wagner: Monitoring Plugins Debian packages

You may wonder why the good old nagios-plugins are not up to date in Debian unstable and testing. Since the people behind and maintaining the plugins <= 1.5 were forced to rename the software project to Monitoring Plugins, some work behind the scenes and much QA work was necessary to release the software in a proper state. This happened 4 weeks ago with the release of version 2.0 of the Monitoring Plugins. With one day of delay the package was uploaded into unstable, but it hit the Debian NEW queue due to the changed package name(s). Now we (and maybe you) are waiting for them to be reviewed by the ftp-masters. This will hopefully happen before the jessie freeze. Until then, you may grab packages for wheezy from the 'wheezy-backports' suite at ftp.cyconet.org/debian/ or the 'debmon-wheezy' suite at debmon.org. Feedback is much appreciated.

7 July 2014

Jan Wagner: Monitoring Plugins release ahead

It seems to be a great time for monitoring solutions. Some of you may have noticed that Icinga has released the first stable version of the completely redeveloped Icinga 2. After several changes in the recent past, where the team maintaining the plugins used by several monitoring solutions was busy moving everything to new infrastructure, they are now back on track. The recent development milestone has been reached and a call for testing was also sent out. In the meantime I prepared the packaging for this bigger move. The packages have now moved to the source package monitoring-plugins; all the packaging changes can be observed in the changelog. With this new release we also have some NEWS which might be useful to check; the same goes for the upstream NEWS. You can give the packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/debian/. Right after the stable release, the packages will be uploaded into Debian unstable, but might get delayed in the NEW queue due to the new package names.

19 March 2014

Jan Dittberner: CLT 2014 was great again

As in previous years we had a Debian booth at the Chemnitzer Linux-Tage. It was as well organized as in the years before, and I enjoyed meeting a lot of great people from the Debian and free software communities as well as CAcert again. At our booth we presented the awesome work of the Debian Installer translators in a BabelBox surrounded by xpenguins, which attracted young as well as older passers-by. We got thanks for our work, which I want to forward to the whole Debian community. A Debian user told us that he is able to use some PC hardware from the late 1990s that does not work with other desktop OSes anymore. We fed 3 kg of strategic jelly bear reserves as well as some packs of savoury snacks to our visitors. Alexander Wirt brought some T-shirts, stickers and hoodies that we sold almost completely. We did some keysigning at the booth to help get better keys into the Debian keyring and helped a prospective new Debian Developer get a strong key signed for his FD approval. I also attended the keysigning party organized by Jens Kubieziel. Thanks to all the people who helped at the booth:
  • Alexander Mundt
  • Alexander Wirt
  • Florian Baumann
  • Jan H rsch
  • Jan Wagner
  • Jonas Genannt
  • Rene Engelhard
  • Rhalina
  • Y Plentyn
Thanks to TMT for sponsoring the booth hardware.

14 March 2014

Jan Wagner: Chemnitzer Linuxtage 2014 ahead

As Jan has previously announced, the Debian project is running a booth at Chemnitzer Linux-Tage 2014, which is also organized by him. This year we will have merchandise at the booth, provided by Alexander Wirt, and of course a demo system with the Debian wheezy BabelBox as in past years. I'll drop it off tomorrow, as I have a conflicting appointment on Saturday; maybe I can attend later on Sunday. In case you have spare time during the weekend ahead, it may be worth spending it on great lectures and meeting nice people over there.
