As promised, in this post I'm going to explain how I've configured this blog
using hugo, asciidoctor and the PaperMod theme, how I publish it using
nginx, how I've integrated the remark42 comment system and how I've
automated its publication using gitea and json2file-go.
It is a long post, but I hope that at least parts of it are interesting to
some; feel free to skip it if that is not your case.
Hugo Configuration
Theme settings
The site uses the PaperMod theme and, as I'm
using asciidoctor to publish my content, I've adjusted
the settings to improve how things are rendered with it.
The current config.yml file is the one shown below (some of the
settings are probably not required or used right now, but I'm including the
current file, so this post will always have the latest version of it):
config.yml
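A minimal sketch of the settings discussed below might look like the following; the key names and nesting are assumed from the Hugo and PaperMod documentation, not copied from my actual file:

```yaml
# Hypothetical fragment, not the complete config.yml
params:
  disableHLJS: true
  assets:
    disableHLJS: true
  ShowToc: true
  TocOpen: false
  profileMode:
    enabled: false
markup:
  asciidocExt:
    backend: html5s
    extensions:
      - asciidoctor-html5s
      - asciidoctor-diagram
    workingFolderCurrent: true
```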
disableHLJS and assets.disableHLJS are set to true; we plan to use
rouge on adoc files, and the inclusion of the hljs assets adds styles that
collide with the ones used by rouge.
ShowToc is set to true and the TocOpen setting is set to false to
make the ToC appear collapsed initially. My plan was to use the asciidoctor
ToC, but after trying it I believe that the theme's one looks nice and I don't
need to adjust styles, although it has some issues with the html5s
processor (the admonition titles use <h6> and they are shown on the ToC,
which is weird); to fix it I've copied the layouts/partials/toc.html to my
site repository and changed the range of headings to end at 5 instead of
6 (in fact 5 still seems a lot, but as I don't think I'll use that heading
level on the posts it doesn't really matter).
params.profileMode values are adjusted, but for now I've left it disabled
by setting params.profileMode.enabled to false, and I've set the
homeInfoParams to show more or less the same content with the latest posts
under it (I've added some styles to my custom.css style sheet to center the
text and image of the first post to match the look and feel of the profile).
In the asciidocExt section I've set the backend to html5s,
added the asciidoctor-html5s and asciidoctor-diagram extensions to
asciidoctor, and set workingFolderCurrent to true to make
asciidoctor-diagram work right (I haven't tested it yet).
Theme customisations
To write in asciidoctor using the html5s processor I've added some files to
the assets/css/extended directory:
As said before, I've added the file assets/css/extended/custom.css to
make the homeInfoParams look like the profile page, and I've also changed
some theme styles a little to make things look better with the html5s
output:
custom.css
/* Fix first entry alignment to make it look like the profile */
.first-entry {
  text-align: center;
}
.first-entry img {
  display: inline;
}
/**
 * Remove margin for .post-content code and reduce padding to make it look
 * better with the asciidoctor html5s output.
 **/
.post-content code {
  margin: auto 0;
  padding: 4px;
}
I've also added the file assets/css/extended/adoc.css with some styles
taken from the asciidoctor-default.css (see this
blog
post about the original file); mine is the same after formatting it with
css-beautify and editing it to use variables for
the colors to support light and dark themes:
adoc.css
The previous file uses variables from a partial copy of the theme-vars.css
file that changes the highlighted code background color and adds the color
definitions used by the admonitions:
theme-vars.css
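A hypothetical sketch of the kind of variables such a file defines follows; the variable names and color values here are assumptions for illustration, not the actual content of my file:

```css
:root {
  --hljs-bg: #f5f5f5;            /* highlighted code background, light theme */
  --adoc-note-color: #19407c;    /* admonition colors used by adoc.css */
  --adoc-warning-color: #bf6900;
}
.dark {
  --hljs-bg: #282c34;            /* highlighted code background, dark theme */
}
```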
The previous styles use font-awesome, so I've downloaded its resources for
version 4.7.0 (the one used by asciidoctor), storing the
font-awesome.css into the assets/css/extended dir (that way it is
merged with the rest of the .css files) and copying the fonts to the
static/assets/fonts/ dir (they will be served directly):
FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
curl "$FA_BASE_URL/css/font-awesome.css" \
  > assets/css/extended/font-awesome.css
for f in FontAwesome.otf fontawesome-webfont.eot \
         fontawesome-webfont.svg fontawesome-webfont.ttf \
         fontawesome-webfont.woff fontawesome-webfont.woff2; do
  curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
done
As already said, the default highlighter is disabled (it provided a css that
collided with the one used by rouge), so we need a css to do the highlight
styling; as rouge provides a way to export its themes, I've created the
assets/css/extended/rouge.css file with the thankful_eyes theme:
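The export can be done with the rougify command line tool that ships with the rouge gem; assuming the gem is installed, something like the following should generate the file (a sketch, not necessarily the exact command used):

```
rougify style thankful_eyes > assets/css/extended/rouge.css
```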
To support the use of the html5s backend with admonitions I've added a
variation of the example found on this
blog
post to assets/js/adoc-admonitions.js:
adoc-admonitions.js
// replace the default admonitions block with a table that uses a format
// similar to the standard asciidoctor one; as we are using fa-icons here there
// is no need to add the icons: font entry on the document.
window.addEventListener('load', function () {
  const admonitions = document.getElementsByClassName('admonition-block')
  for (let i = admonitions.length - 1; i >= 0; i--) {
    const elm = admonitions[i]
    const type = elm.classList[1]
    const title = elm.getElementsByClassName('block-title')[0];
    const label = title.getElementsByClassName('title-label')[0]
      .innerHTML.slice(0, -1);
    elm.removeChild(elm.getElementsByClassName('block-title')[0]);
    const text = elm.innerHTML
    const parent = elm.parentNode
    const tempDiv = document.createElement('div')
    tempDiv.innerHTML = `<div class="admonitionblock ${type}">
    <table>
      <tbody>
        <tr>
          <td class="icon">
            <i class="fa icon-${type}" title="${label}"></i>
          </td>
          <td class="content">
            ${text}
          </td>
        </tr>
      </tbody>
    </table>
    </div>`
    const input = tempDiv.childNodes[0]
    parent.replaceChild(input, elm)
  }
})
and enabled its minified use in the layouts/partials/extend_footer.html file
by adding the following lines to it:
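Those lines might look something like the following, using Hugo's asset pipeline; the exact pipeline steps are an assumption on my part, only the resource path comes from the text above:

```
{{- $adoc := resources.Get "js/adoc-admonitions.js" | minify | fingerprint }}
<script defer src="{{ $adoc.RelPermalink }}"
  integrity="{{ $adoc.Data.Integrity }}"></script>
```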
Remark42 configuration
To integrate Remark42 with the PaperMod theme I've
created the file layouts/partials/comments.html with the following content,
based on the remark42
documentation, including extra code to sync the dark/light setting with the
one set on the site:
comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: '{{ .Site.Params.remark42Url }}',
    site_id: '{{ .Site.Params.remark42SiteID }}',
    url: '{{ .Permalink }}',
    locale: '{{ .Site.Language.Lang }}'
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for (var i = 0; i < c.length; i++) {
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] + '.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>
In development I use it with anonymous comments enabled, but to avoid SPAM
the production site uses social logins (for now I've only enabled GitHub
& Google; if someone requests additional services I'll check them, but those
were the easy ones for me initially).
To support theme switching with remark42 I've also added the following inside
the layouts/partials/extend_footer.html file:
{{- if (not site.Params.disableThemeToggle) }}
<script>
  /* Function to change theme when the toggle button is pressed */
  document.getElementById("theme-toggle").addEventListener("click", () => {
    if (typeof window.REMARK42 != "undefined") {
      if (document.body.className.includes('dark')) {
        window.REMARK42.changeTheme('light');
      } else {
        window.REMARK42.changeTheme('dark');
      }
    }
  });
</script>
{{- end }}
With this code, if the theme-toggle button is pressed we change the remark42
theme before the PaperMod one (that is needed here only; on page loads the
remark42 theme is synced with the main one using the code from the
layouts/partials/comments.html file shown earlier).
Development setup
To preview the site on my laptop I'm using docker-compose with the following
configuration:
docker-compose.yaml
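A hypothetical sketch of that file follows, based on the services described in this post (the hugo image built from the Dockerfile below, plus nginx and remark42 for testing); the ports, volume paths and build contexts are assumptions:

```yaml
# Sketch only; service names from the text, details assumed
services:
  hugo:
    build: ./docker/hugo-adoc
    image: sto/hugo-adoc
    environment:
      - HUGO_ENV=development
    volumes:
      - .:/documents
    command: server --bind 0.0.0.0 -D -F
    ports:
      - 1313:1313
  nginx:
    image: nginx:latest
    ports:
      - 8080:80
  remark42:
    build: ./docker/remark42
    env_file:
      - ./.env
      - ./remark42/env.dev
```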
To run it properly we have to create the .env file with the current user ID
and GID in the variables APP_UID and APP_GID (if we don't, the files
can end up owned by a user that is not the same as the one running the
services):
$ printf 'APP_UID=%s\nAPP_GID=%s\n' "$(id -u)" "$(id -g)" > .env
The Dockerfile used to generate the sto/hugo-adoc image is:
Dockerfile
FROM asciidoctor/docker-asciidoctor:latest
RUN gem install --no-document asciidoctor-html5s &&\
 apk update && apk add --no-cache curl libc6-compat &&\
 repo_path="gohugoio/hugo" &&\
 api_url="https://api.github.com/repos/$repo_path/releases/latest" &&\
 download_url="$(\
   curl -sL "$api_url" |\
   sed -n "s/^.*download_url\": \"\\(.*.extended.*Linux-64bit.tar.gz\)\"/\1/p"\
 )" &&\
 curl -sL "$download_url" -o /tmp/hugo.tgz &&\
 tar xf /tmp/hugo.tgz hugo &&\
 install hugo /usr/bin/ &&\
 rm -f hugo /tmp/hugo.tgz &&\
 /usr/bin/hugo version &&\
 apk del curl && rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]
If you review it you will see that I'm using the
docker-asciidoctor image as
the base; the idea is that this image has all I need to work with asciidoctor,
and to use hugo I only need to download the binary from their latest release
at github (as we are using an
image based on alpine we also need to install the
libc6-compat package, but once that is done things have been working fine for
me so far).
The image does not launch the server by default because I don't want it to; in
fact I use the same docker-compose.yml file to publish the site in production,
simply calling the container without the arguments passed on the
docker-compose.yml file (see later).
When running the containers with docker-compose up (or docker compose up if
you have the docker-compose-plugin package installed) we also launch an nginx
container and the remark42 service so we can test everything together.
The Dockerfile for the remark42 image is the original one with an updated
version of the init.sh script:
Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh
The updated init.sh is similar to the original, but allows us to use an
APP_GID variable and updates the /etc/group file of the container so the
files get the right user and group (with the original script the group is
always 1001):
init.sh
#!/sbin/dinit /bin/sh
uid="$(id -u)"
if [ "$uid" -eq "0" ]; then
  echo "init container"
  # set container's time zone
  cp "/usr/share/zoneinfo/$TIME_ZONE" /etc/localtime
  echo "$TIME_ZONE" >/etc/timezone
  echo "set timezone $TIME_ZONE ($(date))"
  # set UID & GID for the app
  if [ "$APP_UID" ] || [ "$APP_GID" ]; then
    [ "$APP_UID" ] || APP_UID="1001"
    [ "$APP_GID" ] || APP_GID="$APP_UID"
    echo "set custom APP_UID=$APP_UID & APP_GID=$APP_GID"
    sed -i "s/^app:x:1001:1001:/app:x:$APP_UID:$APP_GID:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:$APP_GID:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi
echo "prepare environment"
# replace {% REMARK_URL %} by content of REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|{% REMARK_URL %}|$REMARK_URL|g" {} \;
if [ -n "$SITE_ID" ]; then
  # replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'$SITE_ID'|g" /srv/web/*.html
fi
echo "execute \"$*\""
if [ "$uid" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi
The environment file used with remark42 for development is quite minimal:
env.dev
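A sketch of what such a minimal file could contain follows; the variable names are the standard remark42 ones, but the values here are placeholders, not my actual settings:

```
# Hypothetical minimal development settings for remark42
REMARK_URL=http://localhost:8080
SITE=blogops
SECRET=some-random-secret
AUTH_ANON=true
TIME_ZONE=Europe/Madrid
```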
Production setup
The VM where I'm publishing the blog runs Debian GNU/Linux
and uses binaries from local packages and applications packaged inside
containers.
To run the containers I'm using
docker-ce (I could have used
podman instead, but I already had it installed on the
machine, so I stayed with it).
The binaries used on this project are included on the following packages from
the main Debian repository:
git to clone & pull the repository,
jq to parse json files from shell scripts,
json2file-go to save the webhook messages to files,
inotify-tools to detect when new files are stored by json2file-go and
launch scripts to process them,
nginx to publish the site using HTTPS and work as proxy for
json2file-go and remark42 (I run it using a container),
task-spool to queue the scripts that update the deployment.
And I'm using docker and docker compose from the Debian packages on the
docker repository:
docker-ce to run the containers,
docker-compose-plugin to run docker compose (it is a plugin, so no - in
the name).
Repository checkout
To manage the git repository I've created a deploy key, added it to gitea
and cloned the project on the /srv/blogops PATH (that route is owned by a
regular user that has permissions to run docker, as I said before).
Compiling the site with hugo
To compile the site we are using the docker-compose.yml file seen before; to
be able to run it, first we build the container images and once we have them we
launch hugo using docker compose run:
$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --
The compilation leaves the static HTML on /srv/blogops/public (we remove the
directory first because hugo does not clean the destination folder as
jekyll does).
The deploy script re-generates the site as described and moves the public
directory to its final place for publishing.
Running remark42 with docker
On the /srv/blogops/remark42 folder I have the following docker-compose.yml:
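A hypothetical sketch follows, based on the description in the next paragraph (the two env files loaded, the custom image built from the Dockerfile shown earlier); the port mapping and volume path are assumptions:

```yaml
# Sketch only; env files from the text, other details assumed
services:
  remark42:
    build: .
    image: sto/remark42
    env_file:
      - ../.env
      - ./env.prod
    ports:
      - 127.0.0.1:8080:8080
    volumes:
      - ./var.remark42:/srv/var
```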
The ../.env file is loaded to get the APP_UID and APP_GID variables that
are used by my version of the init.sh script to adjust file permissions, and
the env.prod file contains the rest of the settings for remark42, including
the social network tokens (see the
remark42 documentation for
the available parameters; I don't include my configuration here because some of
them are secrets).
Nginx configuration
The nginx configuration for the blogops.mixinet.net site is as simple as:
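A sketch of that configuration could look like the following; the certbot certificate paths and the redirect block are assumptions, only the server name and root directory come from the text:

```nginx
server {
  listen 443 ssl;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  root /srv/blogops/nginx/public_html;
}
server {
  listen 80;
  server_name blogops.mixinet.net;
  return 301 https://$host$request_uri;
}
```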
In this configuration the certificates are managed by
certbot and the server root directory is
/srv/blogops/nginx/public_html and not /srv/blogops/public; the reason
for that is that I want to be able to compile without affecting the running
site: the deployment script generates the site on /srv/blogops/public and if
all works well we rename folders to do the switch, making the change feel almost
atomic.
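The rename-based switch can be sketched in isolation with plain POSIX shell; the directory names below are placeholders inside a scratch directory, not the real paths:

```shell
#!/bin/sh
# Build into a scratch dir, then swap it into place with two renames;
# each rename is a single mv, so the site is never half-updated.
set -e
base="$(mktemp -d)"
mkdir "$base/public"                  # freshly compiled site
echo "new" > "$base/public/index.html"
mkdir "$base/public_html"             # currently served site
echo "old" > "$base/public_html/index.html"
ts="$(date +%Y%m%d-%H%M%S)"
mv "$base/public_html" "$base/public_html-$ts"   # keep the old version
mv "$base/public" "$base/public_html"            # publish the new one
```

If anything fails before the second mv, the old copy is still available under its timestamped name and can be moved back.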
json2file-go configuration
As I have a working WireGuard VPN between the
machine running gitea at my home and the VM where the blog is served, I'm
going to configure json2file-go to listen for connections on a high port,
using a self-signed certificate and listening on IP addresses only reachable
through the VPN.
To do it we create a systemd socket to run json2file-go and adjust its
configuration to listen on a private IP (we use the FreeBind option on its
definition to be able to launch the service even when the IP is not available,
that is, when the VPN is down).
The following script can be used to set up the json2file-go configuration:
setup-json2file.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"
J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"
J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"
# ----
# MAIN
# ----
# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if [ -z "$(type mkcert)" ]; then
  sudo apt install -y mkcert
fi
sudo apt clean
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"
# Configure json2file
[ -d "$J2F_DIR" ] || mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ] || mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ] || [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST" | tr ';' '\n')
EOF
# Service override
[ -d "$J2F_SERVICE_DIR" ] || sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF
# Socket override
[ -d "$J2F_SOCKET_DIR" ] || sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning: The script uses mkcert to create the temporary certificates; to
install the package on bullseye the backports repository must be available.
Gitea configuration
To make gitea use our json2file-go server we go to the project and enter
the hooks/gitea/new page; once there we create a new webhook of type gitea,
set the target URL to https://172.31.31.1:4443/blogops and on the secret
field we put the token generated with uuid by the setup script:
sed -n -e 's/blogops://p' /etc/json2file-go/dirlist
The rest of the settings can be left as they are:
Trigger on: Push events
Branch filter: *
Warning: We are using an internal IP and a self-signed certificate; that means
we have to review that the webhook section of the app.ini of our gitea
server allows us to call the IP and skip the TLS verification (you can see the
available options on the
gitea
documentation).
The [webhook] section of my server looks like this:
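Based on the gitea documentation, the relevant keys would be something like the following; the values are adjusted to this setup, but the exact section contents are an assumption:

```ini
[webhook]
; allow delivering webhooks to the VPN address
ALLOWED_HOST_LIST = 172.31.31.1
; the certificate is self signed, so skip verification
SKIP_TLS_VERIFY = true
```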
Once we have the webhook configured we can try it and if it works our
json2file server will store the file on the
/srv/blogops/webhook/json2file/blogops/ folder.
The json2file spooler script
With the previous configuration our system is ready to receive webhook calls
from gitea and store the messages on files, but we have to do something to
process those files once they are saved on our machine.
An option could be to use a cronjob to look for new files, but we can do
better on Linux using inotify: we will use the inotifywait command from
inotify-tools to watch the json2file output directory and execute a script
each time a new file is moved inside it or closed after writing
(IN_CLOSE_WRITE and IN_MOVED_TO events).
To avoid concurrency problems we are going to use task-spooler to launch the
scripts that process the webhooks using a queue of length 1, so they are
executed one by one, in FIFO order.
The spooler script is this:
blogops-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"
WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"
# ---------
# FUNCTIONS
# ---------
queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}
# ----
# MAIN
# ----
INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi
[ -d "$TSP_DIR" ] || mkdir "$TSP_DIR"
echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f | sort | while read -r _filename; do
  queue_job "$_filename"
done
# Use inotifywait to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done
# ----
# vim: ts=2:sw=2:et:ai:sts=2
To run it as a daemon we install it as a systemd service using the following
script:
setup-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"
SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
# ----
# MAIN
# ----
# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean
# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME" || true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
The gitea webhook processor
Finally, the script that processes the JSON files does the following:
First, it checks if the repository and branch are right,
Then, it fetches and checks out the commit referenced on the JSON file,
Once the files are updated, it compiles the site using hugo with docker
compose,
If the compilation succeeds the script renames directories to swap the old
version of the site for the new one.
If there is a failure the script aborts, but before doing it, or if the swap
succeeded, the system sends an email to the configured address and/or the user
that pushed updates to the repository, with a log of what happened.
The current script is this one:
blogops-webhook.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://gitea.mixinet.net/mixinet/blogops.git"
MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# gitea's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"
# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"
# Query to get variables from a gitea webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(. | @sh "gt_ref=\(.ref);"),' \
    '(. | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher | @sh "gt_pusher_email=\(.email);")'
)"
# ---------
# Functions
# ---------
webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}
webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}
webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}
webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}
webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}
print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the gitea 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"$NO_REPLY_ADDRESS"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = true ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
# ----
# MAIN
# ----
# Check directories
webhook_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi
# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"
# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_repo_clone_url'"
fi
# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi
# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"
# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
fi
# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi
# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# go back to the main branch
git switch main && git pull
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
fi
# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find $NGINX_BASE_DIR -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2
Hello
One of my last blog posts introduced the new cme update dpkg-copyright command and explained how to use this command to create a debian/copyright file from scratch.
We will see in this post how the cme update dpkg-copyright command can also be used when a debian/copyright file already exists.
During the lifetime of a package, creating a new debian/copyright file should only happen once. When updating a package to a new upstream version, the content of the copyright file should be modified to reflect upstream changes by:
adding or removing file entries
updating years or owners
updating licenses
cme update dpkg-copyright can be run with an existing debian/copyright file: the information extracted by licensecheck is merged into debian/copyright. Unfortunately, this merge may bring back the errors coming from licensecheck that you've cleaned up manually last time.
This is not optimal, as the manual clean-up must be repeated after each run of cme update dpkg-copyright.
Here's a way to avoid this problem: cme update dpkg-copyright supports a way to modify the content of debian/copyright after the merge: the optional file debian/fix.scanned.copyright contains instructions to tweak the content of debian/copyright. (See the Config::Model::Dpkg::Copyright man page for more details.)
You may remember that Config::Model handles all configuration information as a tree. The information contained in the copyright file is also stored in a tree structure. debian/fix.scanned.copyright contains instructions to navigate this tree and modify the content of the tree.
Imagine that there's a robot (or a monkey) in that configuration tree. Each instruction tells the robot either to move in the tree or to perform an operation.
The most useful instructions for debian/fix.scanned.copyright are:
! to move to the root of the tree
= to assign a value, e.g. short_name=MIT or Copyright="2014, FooBar" (use double quotes when the value contains white space or other fancy characters).
.= to append to a value, e.g. Copyright.=" Appended info."
=~ to apply a substitution with a Perl regexp, e.g. Copyright=~"/ \s*//g"
: to move a Files element, e.g. Files:"foo/baz.c"
:~ to remove a Files element, e.g. Files:~"bogus.c"
Otherwise:
Comments starting with # or // must begin at the start of a line
Instructions are separated by spaces or newlines
Let's see a commented example:
# go to the root of the tree (with '!')
# and remove a bogus Files entry (with ":~")
! Files:~"foo/bogus.c"
// go to a Files entry and append (operator '.=') some info to owner
! Files:"foo/bar.h"
Copyright.=" Appended info."
# go back to tree root and assign info
! Files:"foo/baz.c" Copyright="2014, FooBar"
# add a global license and a file entry missed by licensecheck
# global license must be set before using it in Files entry
# note: MIT text license is provided by cme
! License=MIT
! Files:"3rdparty/missed.c" Copyright="2005 FooBar" License short_name=MIT
Other possible instructions for this file are described in Config::Model::Loader doc.
The number of instructions depends on the accuracy of the licensecheck command. Hopefully, this number can be reduced by improving licensecheck.
Last but not least, the following packages are required:
You may want to install also libconfig-model-tkui-perl to update the copyright file with cme GUI.
In summary, the cme update dpkg-copyright command, associated with instructions stored in debian/fix.scanned.copyright, enables Debian packagers to update the debian/copyright file with less effort when updating a package.
All the best
Hello
In my opinion, creating and maintaining the Debian copyright file is the most boring task required to create a Debian package. Unfortunately, this file is also one of the most important of a package: it specifies some legal aspects regarding the use of the software.
The Debian copyright file is scrutinized by the ftp-masters gatekeepers when accepting a new package into the Debian project: this file must accurately describe the copyright and licenses of all files of a source package, preferably using a specific syntax. (Kudos to the ftp-masters team: reading copyright files must be even more boring than writing them.)
The content of the copyright file must accurately reflect the license of all files. This license is often specified in the comments of the source files. The licensecheck command is able to scan source files and report the copyright and licenses declared there. But it does not summarize this information: a copyright line is generated for each file of a package.
licensecheck2dep5 (provided by cdbs package as /usr/lib/cdbs/licensecheck2dep5) does better: the output of licensecheck is consolidated and translated in Debian copyright format. The result is better, but must be heavily edited to be reviewable by ftp-masters team.
The new update subcommand of cme (available with libconfig-model-dpkg-perl 2.061 currently in experimental) goes further than licensecheck2deb:
copyright are coalesced when possible (i.e. 2001,2002,2003-2005 is changed to 2001-2005)
file entries same copyright owner and license are grouped, group of files may be represented with a wild card ( * )
license text is filled with actual text for the most popular licenses
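The first two consolidation steps above (coalescing copyright years and grouping file entries that share an owner and license) can be sketched roughly as follows. This is only an illustration of the technique in Python; it is not cme's actual (Perl) implementation, and the function names are made up for this example.

```python
# Sketch of the consolidation ideas described above (NOT cme's real code).

def coalesce_years(years):
    """Merge single years and (start, end) ranges into minimal ranges.

    E.g. [2001, 2002, (2003, 2005)] becomes "2001-2005".
    """
    # Expand ranges into a set of individual years
    expanded = set()
    for y in years:
        if isinstance(y, tuple):
            expanded.update(range(y[0], y[1] + 1))
        else:
            expanded.add(y)
    if not expanded:
        return ""
    # Walk the sorted years, collecting consecutive runs
    runs = []
    run_start = run_end = None
    for y in sorted(expanded):
        if run_start is None:
            run_start = run_end = y
        elif y == run_end + 1:
            run_end = y
        else:
            runs.append((run_start, run_end))
            run_start = run_end = y
    runs.append((run_start, run_end))
    return ", ".join(f"{a}-{b}" if a != b else str(a) for a, b in runs)


def group_by_license(entries):
    """Group file paths sharing the same (copyright owner, license) pair."""
    groups = {}
    for path, owner, lic in entries:
        groups.setdefault((owner, lic), []).append(path)
    return groups


print(coalesce_years([2001, 2002, (2003, 2005)]))  # prints 2001-2005
```

Grouped entries whose paths share a common directory can then be emitted as a single Files stanza, possibly with a wildcard, which is what makes the generated debian/copyright reviewable.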
For instance, here s the (slightly edited) output of cme run for pan package starting without debian/copyright file:
$ cme update dpkg-copyright -quiet
Adding dummy license text for license public-domain
for path pan/general/sorted-vector.h
Adding dummy license text for license BSD-2-clause
for path pan/usenet-utils/MersenneTwister.h
$ cat debian/copyright
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Files: *
Copyright: 1994-2001, Frank Pilhofer. The author may
License: GPL-2+
Files: pan/*
Copyright: 2002-2007, Charles Kerr
License: GPL-2
Files: pan/data/cert-store.cc
pan/data/cert-store.h
Copyright: 2011, Heinrich Müller
2002-2006, Charles Kerr
License: GPL-2
Files: pan/general/e-util.cc
pan/general/e-util.h
Copyright: 2000, 2001, Ximian, Inc
License: LGPL-2
Files: pan/general/locking.h
pan/general/worker-pool.cc
pan/general/worker-pool.h
Copyright: 2007, Calin Culianu
2002-2007, Charles Kerr
License: LGPL-2+
Files: pan/general/sorted-vector.h
Copyright: 2002, Martin Holzherr (holzherr@infobrain.com).
License: public-domain
Please fill license public-domain from header
of pan/general/sorted-vector.h
[ about 100 more lines including license text for Zlib and several
GPL licenses ]
This is a good start, but some modifications must be applied to get a correct copyright file:
add the missing upstream information (Upstream-Name, Upstream-Contact and Source items)
remove irrelevant text from some copyright owners (e.g. remove “The author may” from the Files: * entry).
add some missing license text (e.g. include the text from the sorted-vector.h comments to specify the upstream author's version of public-domain)
These modifications can be done:
directly after the update, by running cme update dpkg-copyright --edit
after the update, by running cme edit dpkg-copyright
with your favorite editor
This post has covered the creation of a Debian copyright file, but does not address the problem of updating an existing copyright file when packaging a new version of a piece of software. That will be the subject of a future post.
I hope this new feature of cme will save hours of work for Debian packagers. As usual, comments and suggestions are welcome.
All the best
I was at a client today when I remembered that David was due to arrive in Caracas circa 1430, so I called him on his brand-new-Venezuelan-GSM-cellphone and I found out that he was in trouble. Big, dark, awful trouble.
As I told in previous posts, he went to San Cristóbal (near the border with Colombia) to visit his girlfriend and get to know the city. He departed from Caracas in a bus on Friday night and arrived on Saturday morning. He spent the whole weekend there, and on Monday he was due to leave San Cristóbal through the Mayor Buenaventura Vivas Military Airbase, on a civilian airplane.
He arrived just in time for his flight, rushing to take the plane on time, and he was stopped by two so-called Immigration Officials who demanded to see his passport. He had recently gotten a new one from the Passport Office in Mexico City, so he had a brand new passport with the Venezuelan entry stamp (Mexicans don’t need a visa to enter Venezuela).
ONIDEX, the National Office for Immigration. Praised by lots of people for fighting against passport corruption, kidnaps people in San Cristóbal.
The Officials told him that his passport was a fake, and that, from that moment on, he was under arrest. They made him leave the Airport quickly and took him in a civilian vehicle with no identification. They started the trip to San Cristóbal City (an hour away from the Airport on a very scary road) and they kept telling him that his passport was fake (they claimed that the passport didn’t have the watermarks, and so on).
Soon their position changed and a common practice among Venezuelan officials started: extortion. They told him that there were “several ways to fix the problem”, and asked him what was in his backpack (which had a laptop, but he denied it): they were demanding money, and David wasn’t willing to pay.
The National Guard. Doesn’t have a /proc/clue about anything.
They drove David to the Immigration Office in San Cristóbal and told him that they had “made” some tests on the passport and determined that it was fake. They told David that their “Commander” had told them that he should be deported to Colombia, since he was a “very important member of a Cartel that they were willing to capture”.
DISIP: the Venezuelan Political Police. They were helpful in this episode, but they’re not angels.
Finally, hours after that charade of Officials telling him funny laws, stupid suppositions and wanting him to give them money, they left David at the Bus Terminal in San Cristóbal, where he was able to talk to Ana, from Debian Venezuela, who picked him up and went back to the Airport with him. The two “Officials” weren’t there anymore, and he had one last problem with the National Guard, which had stupid suppositions about his trip and so on (this is common, but at least it’s not illegal), before he was able to get on the plane.
Being in Caracas, I was really worried. Between Ana, my mother and me, we were able to move several people to check on David’s status at the Airport (my mother called four members of the Venezuelan FAA, I called the Venezuelan CIA and Ana called the Vice-chancellor) and this probably saved David’s life and money, since the “Officials” might have been scared off by the movement of people caring about David.
He’s safe now, back in Caracas, at home. He has all his stuff and his health is OK (though he was scared when he arrived, which I fully understand).
Why am I writing this post? I’m really concerned about what happened to David, and I want to make a public statement on this. Somehow, we Venezuelans got used to this kind of practice in our Country. Probably 40 years of pseudo-democracy with messy governors helped; maybe it’s a matter of education and culture. I don’t really care about that, but I do care about what they’re doing to the people in my Country.
Today, President Chavez declared that a dance against corruption was going to start. I think it’s the fourth time he’s doing something about corruption, and he’s failing. There’s a gang of people in San Cristóbal, Mr. President, who try to rob and scare visitors. They should be in jail. They arrested somebody illegally. It’s forbidden by our Constitution to arrest somebody without proof (we have one of the best Constitutions in the World, experts say, but we don’t have people who obey it). If you’re going to arrest an alien, you need to have a Public Prosecutor in place, writing down a legal document and guaranteeing the safety of the detainee. These people should be in jail.
Stop Corruption!
Though, as my mother later said to David, two awful Venezuelan people kidnapped David, but probably twenty Venezuelan people quickly acted to find out what was happening. We might be a Third World Country, but we usually manage to get out of our problems on our own. And, hell, it’s rewarding.
I want to apologize in the name of all Venezuelans to David, and to make a clear statement against corruption: we Venezuelans won’t tolerate more corruption. We’ve grown up amid all that, buying places in line to get our National ID Card, paying one million bolivares to get a passport appointment. We’re sick of this. This is not the great Venezuela we want to live in. Please, losers, shoot yourselves. Or eat rat poison, whichever makes fewer stains.
Last October I filed an RFP for GyachE, a feature-rich Yahoo! Messenger client that supports, among other things, webcams. I requested it because Eric wanted to package this, and at the time I didn’t have any time to allot to packaging.
Now, Eric has asked me to take over the packaging. And indeed, this is a difficult package, since upstream (aka Ms. Erica Andrews) seems to have a rather snobbish attitude with regard to providing support for GyachE (well, it is not required anyway; remember the non-warranty clause). She can’t even provide a decent autotooling of the software, and what’s worse, the source itself seems to be a mixture of two different applications, one involving GTK2 and the other Python. Hell, I couldn’t even get the source to compile, and when I tried fixing the autotools, it was even more hell.
I almost gave up on it, until I came across gyach-improved via Google. Mick Howe, thank you for being such a nice guy! /me proceeds to rename the ITP…