Search Results: "apw"

3 November 2022

Arturo Borrero González: New OpenPGP key and new email

I'm trying to replace my old OpenPGP key with a new one. The old key wasn't compromised or lost or anything bad. It is still valid, but I plan to get rid of it soon. It was created in 2013. The new key's fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4. I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also, the new key includes an identity with a newer personal email address that I plan to use soon: arturo.bg@arturo.bg. The new key has been uploaded to some public keyservers. If you would like to sign the new key, please follow the steps in the Debian wiki.
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGNjvX4BEADE4w5x0SQmxWLAI1R17RCC98ngTkD/FMyos0GF5xmv0VJeLYhw
x6oJRmiNGHY8+gjq7SyVCWmlwbLKBEPFNI1k5WcrTB+ClgGkWB5KBnbLKm6CSP4N
ccSbrUQrZW+zxk3Q5h3CJljZpmflB2dvRfnDMSSaw8zOc37EtszW3AVVKNYAu3wj
mXpfwI72/OSELhSvhkr51L+ZlEYUMCITeO+jpiWsnU+sA8oKKPjW4+X8cjrN4eFa
1PAPILDf+Omst5SKM2aV5LGZ8rBzb5wNJF6yDexDw2XmfbFWLOfYzFRY6GTXJz/p
8Fh6O1wkHM9RnwmesCXTtkaGQsVFiVsoqGFyzrkIdWPUruB3RG5EzOkapWi/cnbD
1sy7yrUgy99Ew5yzmLaZ40hmRyq/gBBw4yRkdQaddbkErx+9hT+2tJELa5wrmWkb
FtaVZ38xC6gacOZqRjp0Xqtr0jobI0vED8vzIyY0zJwWM0Hu6qqq4hkLWZHjCy8a
T5Oe/Cb78Kqwa2mzJfncDahPxcgxpnbkYdvKokRtNBDftLVEz+Do8Dczw7Me4BoK
HmU8wLyeGeDTmeoBXpxKH90T+rQokgsiiD13bWZ+nBxILun1tjOTVVONG6SHdP3f
unolq8SU3K+m67lLa+pWjyYcNRS2OTWGOz/1zsH2R39ZOyfGD09/10aAKwARAQAB
tC1BcnR1cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJnQGFydHVyby5iZz6J
AlQEEwEKAD4WIQSqZigNTvC/zGv8IQTaXssjHI8ExAUCY2O9fgIbAwUJA8JnAAUL
CQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRDaXssjHI8ExCZdD/9Z3vR4sV7vBED4
+mCjdNWWf/mw5YlkZo+XQiMVVss4HfQLdt7VxXgGdcOz5Hond9ax3+qeCEo4DdXq
TC0ACpSCu/TPil6vzbE/kO6i6a4oZjFyteAbbcMXP35stbtDM0U5EZH0adIKknfF
msIPTIdJ/dpkcshtBJIoPqjuuTEBa7bF3OYCajHVqwP4Wsgjy4TvDOwl3hy7bhrQ
ZZHqbh7kW40+alQYaJ8jDvbDh/jhN1/pEiZS9ETu0JfBAF3PYPRLW6XedvwZiPWd
jTXwJd0E+vN5LE1Go8OaYvZb9iitZ21UaYOUnFuhw7SEOSQGfEUBs39+41gBj6vW
05HKCEA6kda9NpfptMbUoSSU+hwRfNA5TdnlxtcRv4NqUigzqa1LoXLdxTsyus+K
BL7dRpKXc72JCrEA3vClisD2FgsxLLRCCSDVM8UM/it/YW7tv42XuhQkTW+okQX4
c5laMzTL+ZV8UOoshseTDOsQsdXhskdnWbnuSwAez2/Dd1gHczuN/+lPiiEnyaTF
XgH17K/F25+92MmwPQcFRVPQcYcbyx1VylA6aCgK6gOEqHCejlZv5XLouzbQh1j1
k6MjUR1ncz8vPV5xSuOMAISqozJ9GxUZT2O3o9Vc9pNg5UEzqTvyURgLOdie8yM4
T93S3nKuHVZ++ZVxEOlPnfEfbFP+xbQrQXJ0dXJvIEJvcnJlcm8gR29uemFsZXog
PGFydHVyb0BkZWJpYW4ub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY73LAhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEMKQQAIe18Np+jdhwxHEFZNppBQ69BtyrnPQg4K5VngZ0NUZdVi+/FU7q
Tc9Z1qNydnXgmav3dafL2/l5zDX9wz7mQD2F0a6luOxZwl1PE6iP5f3cUD7uC9zb
148i1bZGEJbO4iNZKTlJKlbNR9m1PG47pv964CHZnNGp6lsnEspxe2G8DJD48Pje
gbhYukgOtIhQ1CaB1fc8aVwZvXZVSbNBLAqp7pAGhTFJqzHE8/U0sn1/V/wPzFAd
TZtWzKfYAkIIFJI5Rr6LVApIwIe7nWymTdgH4crCd2GZkGR+d6ihPKVSxUAUfoAx
EJQUSJY8rYi39gSDhPuEoK8BYXS1nWFGJiNV1o8xaljQo8rNT9myCaeZuQBLX41/
LRzK4XrxYPvjZpKNucc7fSK+UFriQGzdcAaWtW45Kp/8GmAoLVyCD0DPZNWNJdxp
IORhB33aWakhvDKgaLQa16MJ8fSc3ytn/1lxWzDXA1j05i81y/AOKPtCwBKzQWPF
biuZs3kJgZagLq6L6VOQDHlKqf+jqfl1fWeo04iDg98e0TYKABUfiTz8/MdQcV/X
8VkCgtuZ8BcPPyYzBjvuXWZTvdu0n2pikqAPL4u2cbWfD8JIP2AVCJp9HMGKvENo
XcJgY4h6T3rrC/9EidxECfXlsDbUJxLq0WfJLik84+LRtde3kZiReaIRtC5BcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvQG5ldGZpbHRlci5vcmc+iQJUBBMB
CgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvd8CGwMFCQPCZwAFCwkIBwMF
FQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMSP/g/+MHmxCAi/X+NMHodg9Qou
wEG4Vf1uluAE6c+c1QECCdtSsRjBs1dZoJzGsA23t4LWqluyaptuLDWJQEz+EVKR
mG0bvvropNaoOEShnY069pg7lUHuO/GLeDRhfEH3KT45sIVbLly8QkoGaINSCDLe
RBNaHC6feIC8NfQzQEt72nbi4SgdSQUg0F3lj4WxxECVhXsw/YCqh1d3QYqwRVEE
lCGQ4EbavjtRhO8U7dcL1VwHemKHNq3XvM3PJf1OoPgxWqFW5rHbAdlXdN3WAI6u
DAy7kY+qihz3w6rIDTFq6I3YBTrZ44J+5mN21ZC2iDXAsa/C3Uam0vFsjs/pizuq
WgGI9Vmsyap+bOOjuRSX4hemZoOT4a2GC723fS1dFresYWo3MmwfA3sjgV5tK3ZN
XIpxYIvi6HAHLOAarDaE8Sha1GHvrmPwfZ+cEgTL0mqW3efSF3AFmGHduMB+agzK
rM9sksrRQhbY2fHnBLo1t06SQx3rmhlz5mD1ljQEIzna9D6QKleRu4hgImRLHnCB
CN3o+mZa1MHhaIFzViaD2i3Fv2+bYgT7vnS4QAneLW8O/ZgpAc2MUxMoci5JNyfJ
mWdae7Kbs4Z8rrt/mH2gYyioSB0po4VtVwKWEUW9cLtZusA6mFnMviFpfjakb9TX
MimBAv9hAYpxd+HdfHinmqS0MEFydHVybyBCb3JyZXJvIEdvbnphbGV6IDxhYm9y
cmVyb0B3aWtpbWVkaWEub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY735AhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEGooP/20PR5N34m7CNtyaO96H5W0ULuAuSNuoXaKWDo5LGU6zzDriXbIu
ryYtR66vWF5suf7fHZYX8Ufq4PEsG1UNYEGA9hnjPg3oVwGzBJI7f6Rl2P5Pc8wJ
Eq2kN/xKmfUKIrvgh1f5xgFqC4hzcLDkVlLsPowZWfep8dLY4mtVrsrCD1URhelw
zRDGZ3rTVHWXmfXbSHWR2bgZIIrCtVF8BHStg5b6HuAWpj4Oa0eMfBde0N2RZkLE
ye/r2y/lraHfpT7MXnRMcEmltrv8fic7yvj/Nh4ESWr7UmfbV+GiSw9dc/AlVMXM
ihaW0eXv4F5uMtLJOiqI7bv3UfWSvoqwf2a8EPnzOeBBHhQOOJN7O4UzKBK5GAO8
C3k0I1AV3cTmrXrqT/5yoYAHSekDFCIPES//6Y/pO0ITtCbXkA5e8vaulJbtyXpE
g0Z7I7M1kikL6reZ2PuzsR0psEb/x81bWXODIegyOJolPXMRAY7n9J0xpCnSW9yr
CN4j6YT3Oame04JslwX5Xg1cyheuiusotETYNSKRaGaYBCxYffOWoTLNIBa+RCGc
SVOzJq5pd8fVRM1h2ZZFnfpPJBUb62qPsbk6VwmesGoGevB70zcNQYEI+c35kRfM
IOuJWRIN3Wxx0rpxb5E3i/3TASHM86Dix1VW9vsC/atGU/cgaoTOiNVztDdBcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJvcnJlcm8uZ2xlekBnbWFpbC5j
b20+iQJUBBMBCgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvg8CGwMFCQPC
ZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMS7NA/9F7OL/j7a
xnTDjxAHEiyrCzrBQc/DEAM/yim8E+0UBeTJSZR/bShtbvLbSukeL43tKksPhN/X
skjRF8sJ8KWUnpmSWjv1DQTh7AtkJqACnq7+VtQZq3yuKUCNRNpM8lSFxtmYDUqE
XXD4eMXKoJfdphQ+qpViba+RGXg6sd69Dq739zT/OFMuKZ33z8h7hVNXmoWGcBz6
txvN3cWVJhTLdiBvtn38/0dX7IupQLypLOtP0oZdjoUjkRxTo5biOxt3hUGnxS4x
97PPeRGc4j7lv5ADwFV8bo+g54ZMGRjOcyZmA7dlWFN51JrTx3udW2jgXkYqm7UM
xP4lNwDs9TmT3jan6wR08uwlDakOXfDm3gCQEviN+350sJs2tY+JKBN4QR7NpqeU
2aDFOo0G/0ggf0QbFsMkaTSozerVHRGXMdAi+pbYA6pPWPu8lHIkvvdoj4xUu+Ko
cHX0DCRxmL9mylTbZEanrp5gSpne79McrkbQX2/Yc8lWykCtL5/jHVTD4iNiO5Rf
IJYPAVmC2nlj2URfzwGjjoL5apTStZfng4H2Ccq+3cmhwOXI7pb+PsGeI5PND00A
qHFxe590HFhPxLHoftMIlspstoCvHYGcWQxHNbXW6ccmhHdNYT8Pn4ecKgfr6pCt
0ysilOD2ppPJ88hffKA4nTdtX2Tz2ZwOYwG5Ag0EY2O9fgEQALrapVuv1IcLDit8
9gejdA/Dtlufb2/baImVaQD+dTx2QdMxxEiNKl00a5OhMzXDj9tFrB1Lv4z0t8cY
iDJ+NuydDGgz3MlJgWW0GlpAz8yiul2iqTnkWl3cWeiI+VaX8wzL+acmmkPvlrN8
hM7I55BPr8uBWVIQ7VDmI+ts8gi73xE+Etzzrh13GSSnnYnezfGUQrNfYFcip7D0
hB3bpUIGiPdQ45vSZqXUQx/B6FlabiIGRau8Rt4vaEBGXGFZ9rIR+rMJWx6GqYX4
uY1KM2JZ3SKHk++MWGYdzHdM2oaP6xckZq+u/WiwutkYLLO2hnr03lcAu1IDT1C1
YNPrbTKfqUt+3r0oUK5BrG1Cjdc1mZqcXzYcexOLp79FJLb0t5wPdfgU8dT10kjE
uQxeSYiS4oSpikVQkKoFk++/U95d/z/y/81A6v+cfRus6mW+wRSFSwks7Q5ct7zW
UyKELLC4i4EDgnJXmavVcBD0TWzhH/rZpz9FsO4Mb18IYwbV1/144019/RjiPk5Z
MMNdsjorjV2MtrCIoeAGRgZhbFP2P7CcZOp6ZWzjj40ENlElbLp3VCfkYcTiPHJv
2iaiDz2Mhfmhb1Q/5d/a9tYTYINPmv2QVo+m5Zf+1/U29d2HZMRhD4aqDsivvgtd
GpAnKeus6ePSMqpwjO6v2bmQhjpbABEBAAGJAjwEGAEKACYWIQSqZigNTvC/zGv8
IQTaXssjHI8ExAUCY2O9fgIbDAUJA8JnAAAKCRDaXssjHI8ExA5AD/9VWS1/jHM9
aE3HKCDL4CpiXQPc4ds+3/ft6LXwuCMA/tkt8I4svKZGCCi/X5NfiQetVD+cSzVO
nmloctMt/24yjnGNNSFsDozkn/RqzZIhLJBI69gX4JWR4wpeh4kXMItNM5ZlYw3H
DmuLrf/ey8E2NzbFdzj1VQNoENuwtL2pIJrvK92AcS7acvP0FpiS8riLc5a933SW
oPgelQ1j/04WAH8cyKXB/pruq3OhtK0/b8ylIeI0f7a57dxQj5wysyBVKl+EJd/n
UhypVqMDRWL7N0FttGb9gZ6OVvQnt7iwbtS3tYqAK479+GZwi/Wh/RB2dCDyz8jk
zE0j6y7huP4XzpbBbPVntLDdVAYmpW6iIaTWYxlu79FEUw4JmZdY7hJoEDpHuDIz
ylo0YQgjnRfRfWSdnGCosFrY5UgThPVTaQAILCPtdVyWY4/6s1UaeNs3H0PRA5mz
UT4vDKxGq9gXHnE+qg3dfwMcLR3cDPPWUFVeTfNitZ3Y9eV7SdbQXt5NeOXzFadz
DBc9ZzNx3rBEyUUooU0MEmbltyUFM7R/hVcdpFxs12SgHrvgh13tuxVVVNBXTwwo
pSxmap42vHJERQ8ZJQ4lrvnxNZcuwLHSZK7xVzb0b/1wMooNnhw18vlStMWQJwKl
DiXs/L/ifab2amg9jshULAPgVSw7QeP2OQ==
=UABf
-----END PGP PUBLIC KEY BLOCK-----
If you are curious about what that long code block contains, check this: https://cirw.in/gpg-decoder/ For the record, the old key's fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8. Cheers!

15 September 2022

Joachim Breitner: rec-def: Dominators case study

More ICFP-inspired experiments using the rec-def library: in Norman Ramsey's very nice talk about his Functional Pearl "Beyond Relooper: Recursive Translation of Unstructured Control Flow to Structured Control Flow", he had the following slide showing the equation for the dominators of a node in a graph:
[Slide: Norman Ramsey shows the formula dom(n) = {n} ∪ ⋂ { dom(p) | p ∈ preds(n) }]
He said "it's ICFP and I wanted to say the dominance relation has a beautiful set of equations you can read all these algorithms on how to compute this, but the concept is simple". This made me wonder: if the concept is simple and this formula is beautiful, shouldn't this be sufficient for the Haskell programmer to obtain the dominator relation, without reading all those algorithms?
Before we start, we have to clarify the formula a bit: if a node is an entry node (no predecessors), then the big intersection is over the empty set, and that is not a well-defined concept. For these nodes, we need the big intersection to return the empty set, as entry nodes are not dominated by any other node. (Let's assume that the entry nodes are exactly those with no predecessors.)
Let's try, first using plain Haskell data structures. We begin by implementing the big intersection operator on Data.Set, and also a function to find the predecessors of a node in a graph; with those, Norman's formula can be written down quite elegantly (see the sketch after the list below, which reconstructs both versions). Does this work? It seems it does on acyclic graphs, but, not surprising if you have read my previous blog posts, it falls over once we have recursion.
So let us reimplement it with Data.Recursive.Set. The hope is that we can simply replace the operations, and that it can now suddenly handle cyclic graphs as well. And it does return a result, but it looks strange: clearly nodes 3 and 4 are also dominated by 1, but the result does not reflect that. Still, the result is a solution to Norman's equation. Was the equation wrong? No, but we failed to notice that the desired solution is the largest one, not the smallest, and Data.Recursive.Set calculates, as documented, the least fixed point. What now? Until the library has code for RDualSet a, we can work around this by using the dual formula to calculate the non-dominators. To do this, we
  • use union instead of intersection
  • delete instead of insert,
  • instead of S.empty, use the set of all nodes (which requires some extra plumbing)
  • subtract the result from the set of all nodes to get the dominators
and thus the code turns into:
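The code blocks themselves did not survive aggregation; below is a minimal reconstruction of both the plain version and the dual version, under stated assumptions: the helpers preds, allNodesOf, intersections and rsFromSet are my own plumbing, and the names RS.empty, RS.insert, RS.delete and RS.unions are modelled on the Data.Recursive.Set API as the post describes it.

import qualified Data.Set as S
import qualified Data.Map as M
import qualified Data.Recursive.Set as RS

type Graph = [(Int, Int)]

preds :: Graph -> Int -> [Int]
preds g n = [p | (p, c) <- g, c == n]

allNodesOf :: Graph -> S.Set Int
allNodesOf g = S.fromList [n | (a, b) <- g, n <- [a, b]]

-- big intersection; the empty case returns the empty set, as clarified above
intersections :: Ord a => [S.Set a] -> S.Set a
intersections [] = S.empty
intersections xs = foldl1 S.intersection xs

-- the direct formula: works on acyclic graphs, loops forever on cyclic ones
dominators1 :: Graph -> M.Map Int [Int]
dominators1 g = M.map S.toList doms
  where
    doms = M.fromSet dom (allNodesOf g)
    dom n = S.insert n (intersections [doms M.! p | p <- preds g n])

-- the dual version: compute the NON-dominators as a least fixed point,
-- then subtract the result from the set of all nodes
dominators3 :: Graph -> M.Map Int [Int]
dominators3 g = M.map (S.toList . (allNodes `S.difference`) . RS.get) nonDoms
  where
    allNodes = allNodesOf g
    rsFromSet = foldr RS.insert RS.empty . S.toList  -- the extra plumbing
    nonDoms = M.fromSet nonDom allNodes
    nonDom n = RS.delete n $ case preds g n of
      [] -> rsFromSet allNodes  -- dual of the empty intersection
      ps -> RS.unions [nonDoms M.! p | p <- ps]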
And with this, now we do get the correct result:
ghci> dominators3 [(1,2),(1,3),(2,4),(3,4),(4,3)]
fromList [(1,[1]),(2,[1,2]),(3,[1,3]),(4,[1,4])]
We worked a little bit on how to express the beautiful formula to Haskell, but at no point did we have to think about how to solve it. To me, this is the essence of declarative programming.

3 September 2022

Joachim Breitner: More recursive definitions

Haskell is a pure and lazy programming language, and the laziness allows us to write some algorithms very elegantly, by recursively referring to already calculated values. A typical example is the following definition of the Fibonacci numbers, as an infinite stream:
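The code sample itself was lost in aggregation; the canonical definition, which is presumably what the post showed:

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- ghci> take 10 fibs
-- [0,1,1,2,3,5,8,13,21,34]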

Elegant graph traversals A perhaps more practical example is the following calculation of the transitive closure of a graph. We represent graphs as maps from each vertex to its successor vertices, and define the resulting map, sets, recursively: the set of vertices reachable from a vertex v is v itself, plus those reachable from its successors vs, for which we query sets. And, despite this apparent self-referential recursion, it works!
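The code did not survive aggregation; a plausible reconstruction of transitive1, following the description above (the name transitive1 is taken from the prose below):

import qualified Data.Map as M
import qualified Data.Set as S

transitive1 :: M.Map Int [Int] -> M.Map Int [Int]
transitive1 g = M.map S.toList sets
  where
    -- the set for v is v itself plus the sets of its successors vs,
    -- queried recursively from the very map we are defining
    sets = M.mapWithKey (\v vs -> S.insert v (S.unions [sets M.! v' | v' <- vs])) g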

Cyclic graphs ruin it all These tricks can be very impressive until someone tries to use them on a cyclic graph and the program just hangs until we abort it. At this point we are thrown back to implementing a more pedestrian graph traversal, typically keeping explicit track of the vertices we have seen already. I have written that seen/todo recursion idiom so often in the past that I can almost write it blindly; and indeed, this code handles cyclic graphs just fine, as shown below.
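A minimal sketch of that pedestrian version (reusing the imports above; the seen/todo structure follows the idiom the post describes):

transitive2 :: M.Map Int [Int] -> M.Map Int [Int]
transitive2 g = M.mapWithKey (\v _ -> S.toList (go S.empty [v])) g
  where
    -- classic worklist: 'seen' accumulates visited vertices, 'todo' holds the rest
    go seen []       = seen
    go seen (v:todo)
      | v `S.member` seen = go seen todo
      | otherwise         = go (S.insert v seen) ((g M.! v) ++ todo)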
ghci> transitive2 $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,[1,2,3]),(2,[1,2,3]),(3,[3])]
But this is a bit anticlimactic: Haskell is supposed to be a declarative language, and transitive1 declares my intent just fine!

We can have it all It seems there actually is a way to write essentially the code in transitive1 and still get the right result in all cases, and I have just published a possible implementation as rec-def. In the module Data.Recursive.Set we find an API that resembles that of Set, with a type RSet a; in addition to conversion functions from and to sets, we find the two operations that we needed in transitive1. Let's try that, and indeed it works. Magic!
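The code was again lost in aggregation; a sketch of the rec-def version (RS.insert and RS.unions are the two operations meant above; I reuse the name transitive2 because the transcript below shows it under that name):

import qualified Data.Recursive.Set as RS

transitive2 :: M.Map Int [Int] -> M.Map Int [Int]
transitive2 g = M.map (S.toList . RS.get) sets
  where
    -- the same equations as transitive1, with Set operations swapped for RSet ones
    sets = M.mapWithKey (\v vs -> RS.insert v (RS.unions [sets M.! v' | v' <- vs])) g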
ghci> transitive2 $ M.fromList [(1,[3]),(2,[1,3]),(3,[])]
fromList [(1,[1,3]),(2,[1,2,3]),(3,[3])]
ghci> transitive2 $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,[1,2,3]),(2,[1,2,3]),(3,[3])]
To show off some more, here are small examples:
ghci> let s = RS.insert 42 s in RS.get s
fromList [42]
ghci> :{
  let s1 = RS.insert 23 s2
      s2 = RS.insert 42 s1
  in RS.get s1
:}
fromList [23,42]

How is that possible? Is it still Haskell? The internal workings of the RSet a type will be the topic of a future blog post; let me just briefly mention that it uses unsafe features under the hood, and just keeps applying the equations you gave until a fixed point is reached. Because it starts with the empty set and all operations provided by Data.Recursive.Set are monotone (e.g. no difference), it will eventually find the least fixed point. Despite the unsafe machinery under the hood, I claim that Data.Recursive.Set is itself nicely safe, and does not destroy Haskell's nice properties like purity, referential transparency and equational reasoning. If you disagree, I'd like to hear about it (here, on Twitter, Reddit or Discourse)! There is a brief discussion at the end of the tutorial in Data.Recursive.Example.

More than sets The library also provides Data.Recursive.Bool for recursive equations with booleans (preferring False) and Data.Recursive.DualBool (preferring True), and some operations like member :: Ord a => a -> RSet a -> RBool can actually connect the different types. I plan to add other data types (natural numbers, maps, Maybe, with suitable orders) as demand arises and as I come across nice small example use cases for the documentation (e.g. finding shortest paths in a graph). I believe this idiom is practically useful in a wide range of applications (which of course all have some underlying graph structure, but then almost everything in Computer Science is a graph). My original motivation was a program analysis. Imagine you want to find out from where in your program you can run into a division by zero. As long as your program does not have recursion, you can simply keep track of a boolean flag while you traverse the program, keeping a mapping from function names to whether they can divide by zero; all nice and elegant. But once you allow mutually recursive functions, things become tricky. Unless you use RBool! Simply use laziness, pass the analysis result down when analyzing the functions' right-hand sides, and it just works!
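As an illustration of that last point, here is a toy sketch of such an analysis; it is my own construction, and the Data.Recursive.Bool names (RB.false, RB.true, RB.||, RB.get) are assumptions modelled on the RSet API:

import qualified Data.Map as M
import qualified Data.Recursive.Bool as RB

-- a toy expression language with (possibly mutually recursive) function calls
data Expr = Lit Int | Add Expr Expr | Div Expr Expr | Call String

canDivByZero :: M.Map String Expr -> M.Map String Bool
canDivByZero prog = M.map RB.get flags
  where
    flags = M.map analyze prog
    analyze (Lit _)   = RB.false
    analyze (Add a b) = analyze a RB.|| analyze b
    analyze (Div _ _) = RB.true      -- conservatively: any division may fail
    analyze (Call f)  = flags M.! f  -- mutual recursion is fine: least fixed point

-- ghci> canDivByZero (M.fromList [("f", Call "g"), ("g", Add (Lit 1) (Call "f")), ("h", Div (Lit 1) (Lit 0))])
-- fromList [("f",False),("g",False),("h",True)]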

9 February 2017

Sven Hoexter: Limit host access based on LDAP groupOfUniqueNames with sssd

For CentOS 4 to CentOS 6 we used pam_ldap to restrict host access to machines, based on groupOfUniqueNames entries listed in an OpenLDAP directory. With RHEL/CentOS 6, Red Hat already deprecated pam_ldap and highly recommended using sssd instead, and with RHEL/CentOS 7 they finally removed pam_ldap from the distribution. Since pam_ldap supported groupOfUniqueNames to restrict logins, a bigger collection of groupOfUniqueNames had been created to restrict access to all kinds of groups/projects and so on. But sssd is in general only able to filter based on an "ldap_access_filter" or to use the host attribute via "ldap_user_authorized_host". That does not allow the use of "groupOfUniqueNames". So to allow a smooth migration I had to configure sssd in some way to still support groupOfUniqueNames. The configuration I ended up with looks like this:
[domain/hostacl]
autofs_provider = none 
ldap_schema = rfc2307bis
# to work properly we've to keep the search_base at the highest level
ldap_search_base = ou=foo,ou=people,o=myorg
ldap_default_bind_dn = cn=ro,ou=ldapaccounts,ou=foo,ou=people,o=myorg
ldap_default_authtok = foobar
id_provider = ldap
auth_provider = ldap
chpass_provider = none
ldap_uri = ldaps://ldapserver:636
ldap_id_use_start_tls = false
cache_credentials = false
ldap_tls_cacertdir = /etc/pki/tls/certs
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
ldap_tls_reqcert = allow
ldap_group_object_class = groupOfUniqueNames
ldap_group_member = uniqueMember
access_provider = simple
simple_allow_groups = fraappmgmtt
[sssd]
domains = hostacl
services = nss, pam
config_file_version = 2
Important side note: with current sssd versions you're more or less forced to use ldaps with a validating CA chain, though hostnames are not required to match the CN/SAN so far. Relevant are: In practice, what we do is match the members of the groupOfUniqueNames to the sssd-internal group representation. The best explanation of the several possible object classes in LDAP for group representation that I've found so far is unfortunately in a German blog post. Another explanation is in the LDAP wiki. In short: within a groupOfUniqueNames you'll find a full DN, while in a posixGroup you usually find login names. A different kind of object class requires different handling. The next step would be to move the auth and nss functionality to sssd as well.
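To illustrate the difference, here are two hedged example entries (the DNs and names are invented for illustration):

# groupOfUniqueNames: members are full DNs
dn: cn=fraappmgmtt,ou=groups,ou=foo,ou=people,o=myorg
objectClass: groupOfUniqueNames
cn: fraappmgmtt
uniqueMember: uid=jdoe,ou=foo,ou=people,o=myorg

# posixGroup: members are plain login names
dn: cn=unixusers,ou=groups,ou=foo,ou=people,o=myorg
objectClass: posixGroup
cn: unixusers
gidNumber: 10000
memberUid: jdoe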

15 December 2015

Vasudev Kamath: Persisting resource control options for systemd-nspawn containers

In my previous post on systemd-nspawn I mentioned that I was unclear on how to persist resource control options for a container. Today I accidentally discovered how the properties can be persisted across boots without modifying the service file or writing a custom service file for the container. It is done using systemctl set-property. To set CPUAccounting and CPUShares for a container we need to run the following command:
systemctl set-property systemd-nspawn@container.service CPUAccounting=1 CPUShares=200
This actually persists these settings under the /etc/systemd/systemd-nspawn@container.service.d/ folder. So in our case two files will be created under the above location, named 50-CPUAccounting.conf and 50-CPUShares.conf, with the following contents:
# 50-CPUAccounting.conf
[Service]
CPUAccounting=yes
# 50-CPUShares.conf
[Service]
CPUShares=200
Today when I discovered this folder and saw the file contents, I became curious and started to wonder what created these files. A look at the systemctl man page showed me this:
set-property NAME ASSIGNMENT...

Set the specified unit properties at runtime where this is supported. This allows changing configuration parameter properties such as resource control settings at runtime. Not all properties may be changed at runtime, but many resource control settings (primarily those in systemd.resource-control(5)) may. The changes are applied instantly, and stored on disk for future boots, unless --runtime is passed, in which case the settings only apply until the next reboot. The syntax of the property assignment follows closely the syntax of assignments in unit files. Example: systemctl set-property foobar.service CPUShares=777

Note that this command allows changing multiple properties at the same time, which is preferable over setting them individually. Like unit file configuration settings, assigning the empty list to list parameters will reset the list.

I did remember doing this for my container, and hence it became clear that these files are actually written by systemctl set-property. In case you don't want to persist the properties across boots, you can simply pass the --runtime switch. This is not just for containers: resource control can thus be applied to any running service on the system. This is actually cool.
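For example, the non-persistent variant (per the man page excerpt above, nothing is then stored for future boots):

systemctl set-property --runtime systemd-nspawn@container.service CPUShares=200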

8 December 2015

Vincent Sanders: I said it was wired like a Christmas tree

I have recently acquired a 27U high 19 inch rack in which I hope to consolidate all the computing systems in my home that do not interact well with humans.

My main issue is that modern systems are just plain noisy, often with multiple small fans whining away. I have worked to reduce this noise by using quieter components as replacements but in the end it is simply better to be able to put these systems in a box out of the way.

The rack was generously given to me by Andy Simpkins and aside from being a little dirty having been stored for some time was in excellent condition. While the proverbs "never look a gift horse in the mouth" and "beggars cannot be choosers" are very firmly at the front of my mind there were a few minor obstacles to overcome to make it fit in its new role with a very small budget.

The new home for the rack was to be a space under the stairs where, after careful measurement, I determined it would just fit. After an hour or two attempting to manoeuvre a very heavy chunk of steel into place I determined it was simply not possible while it was assembled. So I ended up disassembling and rebuilding the whole rack in a confined space.

The rack is 800mm wide IMRAK 1400 rather than the more common 600mm width which means it employs "cable reducing channels" to allow the mounting of standard width rack units. Most racks these days come with four posts in the corners to allow for longer kit to be supported front and back. This particular rack was not fitted with the rear posts and a brief call to the supplier indicated that any spares from them would be eyewateringly expensive (almost twice the cost of purchasing a new rack from a different supplier) so I had to get creative.

Shelves that did not require the rear rails were relatively straightforward and I bought two 500mm deep cantilever type from Orion (I have no affiliation with them beyond being a satisfied customer).

I took a trip to the local hardware store and purchased some angle brackets and 16mm steel square tube. From this I made support rails which means the racked kit has support to its rear rather than relying solely on being supported by its rack ears.

The next problem was the huge hole in the bottom of the rack where I was hoping to put the UPS and power switching. This hole is intended for use with raised flooring where cables enter from below, when not required it is filled in with a "bottom gland plate". Once again the correct spares for the unit were not within my budget.

Around a year ago I built several systems for open source projects from parts generously donated by Mythic Beasts (yes I did recycle servers used to build a fort). I still had some leftover casework from one of those servers so ten minutes with an angle grinder and a drill and I made myself a suitable plate.

The final problem I faced is that it is pretty dark under the stairs and while putting kit in the rack I could not see what I was doing. After some brief Googling I decided that all real rack lighting solutions were pretty expensive and not terribly effective.

At this point I was interrupted by my youngest son trying to assemble the Christmas tree and the traditional "none of the lights work", so we went off to the local supermarket to buy some bulbs. Instead we bought a 240-LED string for 10 GBP (15 USD) in the vague hope that next year they will not be broken.

I immediately had a light bulb moment and thought how a large number of efficient LED bulbs at a low price would be ideal for lighting a rack. So my rack is indeed both wired like and as a Christmas tree!

Now I just have to finish putting all the systems in there and I will be able to call the project a success.

9 November 2015

Vasudev Kamath: Taming systemd-nspawn for running containers

I've been trying to run containers using systemd-nspawn for quite some time, but I was always bumping into one or another dead end. This is not systemd-nspawn's fault, rather my impatience stopping me from reading the manual pages properly, and the lack of a good tutorial-like article available online. Compared to this, LXC has quite a lot of good tutorials and howtos available online. This article is my effort to create notes putting all the required information in one place.
Creating a Debian Base Install The first step is to have a minimal Debian system somewhere on your hard disk. This can be easily done using debootstrap. I wrote a custom script to avoid reading the manual every time I want to run debootstrap. Parts of this script (mostly the package list and the root password generation) are stolen from the lxc-debian template provided by the lxc package.
#!/bin/sh
set -e
set -x
usage () {
    echo "${0##*/} [options] <suite> <target> [<mirror>]"
    echo "Bootstrap rootfs for Debian"
    echo
    cat <<EOF
    --arch         set the architecture to install
    --root-passwd  set the root password for bootstrapped rootfs
EOF
}
# copied from the lxc-debian template
packages=ifupdown,\
locales,\
libui-dialog-perl,\
dialog,\
isc-dhcp-client,\
netbase,\
net-tools,\
iproute,\
openssh-server,\
dbus
if [ $(id -u) -ne 0 ]; then
    echo "You must be root to execute this command"
    exit 2
fi
if [ $# -lt 2 ]; then
   usage $0
fi
while true; do
    case "$1" in
        --root-passwd|--root-passwd=?*)
            if [ "$1" = "--root-passwd" -a -n "$2" ]; then
                ROOT_PASSWD="$2"
                shift 2
            elif [ "$1" != "$ 1#--root-passwd= " ]; then
                ROOT_PASSWD="$ 1#--root-passwd= "
                shift 1
            else
                # copied from lxc-debian template
                ROOT_PASSWD="$(dd if=/dev/urandom bs=6 count=1 2>/dev/null base64)"
                ECHO_PASSWD="yes"
            fi
            ;;
        --arch|--arch=?*)
            if [ "$1" = "--arch" -a -n "$2" ]; then
                ARCHITECTURE="$2"
                shift 2
            elif [ "$1" != "$ 1#--arch= " ]; then
                ARCHITECTURE="$ 1#--arch= "
                shift 1
            else
                ARCHITECTURE="$(dpkg-architecture -q DEB_HOST_ARCH)"
            fi
            ;;
        *)
            break
            ;;
    esac
done
release="$1"
target="$2"
if [ -z "$1" ]   [ -z "$2" ]; then
    echo "You must specify suite and target"
    exit 1
fi
if [ -n "$3" ]; then
    MIRROR="$3"
fi
MIRROR=${MIRROR:-http://httpredir.debian.org/debian}
echo "Downloading Debian $release ..."
debootstrap --verbose --variant=minbase --arch=$ARCHITECTURE \
             --include=$packages \
             "$release" "$target" "$MIRROR"
if [ -n "$ROOT_PASSWD" ]; then
    echo "root:$ROOT_PASSWD"   chroot "$target" chpasswd
    echo "Root password is '$ROOT_PASSWRD', please change!"
fi
It just gets my needs done; if you don't like it, feel free to modify it or use debootstrap directly. NB: please install the dbus package in the minimal base install, otherwise you will not be able to control the container using machinectl.
Manually Running Container and then Persisting It Next we need to run the container manually. This is done using the following command:
systemd-nspawn -bD /path/to/container --network-veth \
     --network-bridge=natbr0 --machine=Machinename
The --machine option is not mandatory; if not specified, systemd-nspawn will take the directory name as the machine name, and if you have characters like - in the directory name it is translated to the hex code \x2d and controlling the container by name becomes difficult. --network-veth tells systemd-nspawn to enable virtual-ethernet-based networking, and --network-bridge names the bridge interface on the host system to be used by systemd-nspawn. Together these options constitute private networking for the container. If they are not specified, the container can use the host system's interfaces, thereby removing the network isolation of the container. Once you run this command the container comes up, and you can then run machinectl to control it. The container can be persisted using the following command:
machinectl enable container-name
This will create a symbolic link from /lib/systemd/system/systemd-nspawn@.service into /etc/systemd/system/machine.target.wants/. This allows you to start or stop the container using the machinectl or systemctl commands. The only catch here is that your base install should be in /var/lib/machines/. What I do in my case is create a symbolic link from my base container to /var/lib/machines/container-name. NB: the symbolic link name under /var/lib/machines should be the same as the container name you gave using the --machine switch, or the directory name if you didn't specify --machine.
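For example (the paths here are hypothetical):

# base install lives elsewhere, so link it into place and persist it
ln -s /srv/containers/mycontainer /var/lib/machines/mycontainer
machinectl enable mycontainer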
Persisting Container Networking We did persist the container in the above step, but this doesn't persist the networking options we provided on the command line. systemd-nspawn@.service uses the following command line to invoke the container:
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth --settings=override --machine=%I
To persist the bridge networking configuration we used on the command line, we need the help of systemd-networkd. So first we need to enable systemd-networkd.service on both the container and the host system:
systemctl enable systemd-networkd.service
Now inside the container, interfaces will be named hostN, where N increments with the number of interfaces. In our example case we had a single interface, hence it will be named host0. By default network interfaces will be down inside the container, hence systemd-networkd is needed to bring them up. We put the following in the /etc/systemd/network/host0.network file inside the container:
[Match]
Name=host0
[Network]
Description=Container wired interface host0
DHCP=yes
And on the host system we just configure the bridge interface using systemd-networkd. I put the following in natbr0.netdev in /etc/systemd/network/:
[NetDev]
Description=Bridge natbr0
Name=natbr0
Kind=bridge
In my case I had already configured the bridge using the /etc/network/interfaces file for lxc, so I think it's not really needed to use systemd-networkd here: since systemd-networkd doesn't do anything if the network/virtual device is already present, I safely put in the above configuration and enabled systemd-networkd. Just for the notes, here is my natbr0 configuration in the interfaces file:
auto natbr0
iface natbr0 inet static
   address 172.16.10.1
   netmask 255.255.255.0
   pre-up brctl addbr natbr0
   post-down brctl delbr natbr0
   post-down sysctl net.ipv4.ip_forward=0
   post-down sysctl net.ipv6.conf.all.forwarding=0
   post-up sysctl net.ipv4.ip_forward=1
   post-up sysctl net.ipv6.conf.all.forwarding=1
   post-up iptables -A POSTROUTING -t mangle -p udp --dport bootpc -s 172.16.0.0/16 -j CHECKSUM --checksum-fill
   pre-down iptables -D POSTROUTING -t mangle -p udp --dport bootpc -s 172.16.0.0/16 -j CHECKSUM --checksum-fill
Once this is done, just reload systemd-networkd and make sure you have dnsmasq or another DHCP server running on your system. The last part is to tell systemd-nspawn to use the bridge interface we have defined. This is done using a container-name.nspawn file, placed under the /etc/systemd/nspawn folder:
[Exec]
Boot=on
[Files]
Bind=/home/vasudev/Documents/Python/upstream/apt-offline/
[Network]
VirtualEthernet=yes
Bridge=natbr0
Here you can specify the networking and file-mounting sections for the container. For the full list please refer to the systemd.nspawn manual page. Once all this is done you can happily do:
machinectl start container-name
#or
systemctl start systemd-nspawn@container-name
Resource Control With all things said and done, one last part remains: what is the point if we can't control how many resources the container uses? At least it is important for me, because I use an old and somewhat low-powered laptop. Systemd provides a way to control resources through its resource-control interfaces; to see all the settings exposed by systemd, please refer to the systemd.resource-control manual page. The way to control resources is using systemctl. Once the container starts running we can run the following command:
systemctl set-property container-name CPUShares=200 CPUQuota=30% MemoryLimit=500M
The manual page does say that these settings can be put under the [Slice] section of unit files. I don't have a clear idea whether they can be put in .nspawn files or not. For the sake of persisting the container I manually wrote a service file for the container by copying systemd-nspawn@.service and adding a [Slice] section, but I don't know how to find out whether this had any effect or not. If someone knows about this, please share your suggestions and I will update this section with the provided information.
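For reference, a drop-in equivalent to what systemctl set-property writes (as seen in the 15 December 2015 post above) would be a sketch like this; whether a [Slice] section in a .nspawn file works remains the open question:

# /etc/systemd/system/systemd-nspawn@container-name.service.d/50-limits.conf (hypothetical)
[Service]
CPUShares=200
CPUQuota=30%
MemoryLimit=500M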
Conclusion All in all, I like systemd-nspawn a lot. I use it to run containers for the development of apt-offline. I previously used lxc, where everything can be controlled using a single config file, but I feel systemd-nspawn is more tightly integrated with the system than lxc. There is definitely more in systemd-nspawn than I've currently figured out. The only thing is that it's not as popular as the alternatives and definitely lacks good howto documentation. For now the only way out is to dig into the manual pages, scratch your head, pull your hair out and figure out new possibilities in systemd-nspawn. ;-)

25 July 2015

Michal Čihař: Migrating phpMyAdmin from SourceForge.net

Some time ago we decided to move phpMyAdmin out of SourceForge.net services. This was mostly motivated by issues with sf.net bundling crapware with installers (though we were not affected), but we also missed some features that we would like to have and that were not possible there. The project relied on SourceForge.net for several services, the biggest ones being website and downloads hosting, issue tracking and mailing lists. We chose a different approach for each of these. First, we moved away the website and downloads. Thanks to the generous offer of CDN77.com, everything went quite smoothly, and we now have an HTTPS-secured website and downloads; see our announcement. Oh, and on the way we started to PGP-sign the releases as well, so you can verify the downloads. Shortly after this, SourceForge.net was hit by major problems with its infrastructure. Unfortunately we were not yet completely ready with the rest of the migration, but this definitely pushed us to make progress faster. During the outage, we opened up an issue tracker on GitHub, to be able to receive bug reports from our users. In the background I worked on the issue migration. The good news is that as of now almost all issues are migrated. There are a few missing ones, but these will hopefully be handled in the upcoming days as well. Last but not least, we had mailing lists on SourceForge.net. We briefly discussed the available options and decided to run our own mail server for these. It will allow us greater flexibility while still using well-known software in the background. Initial attempts with Mailman 3 failed, so we went back to Mailman 2, which is stable and easy to configure. See also our news posts for the official announcement. Thanks to SourceForge.net, it has been a great home for us, but now we have better places to live.


28 May 2015

Richard Hartmann: On SourceForge

You either die a hero or you live long enough to see yourself become the villain. And yes, we all know that SF decided to wrap crapware around Windows installers ages ago and then made it opt-in after the backlash. Doing so for stale accounts makes sense from their PoV, which makes it all the worse. And no, I don't know how stale that account actually was, but that's irrelevant in this context either way.

2 January 2015

Vincent Fourmond: Release 0.11 of ctioga2

The new year is starting with a new release of ctioga2, with a lot of new features, such as:
The new release is of course available using rubygems:
~ gem update ctioga2
It can also be downloaded from SourceForge. The possibilities of the new styling system are particularly interesting, and I'm working on ways to make it more powerful, and on providing series of default style files that anyone could use as they want. Among other future changes, I want to improve the position of ticks, especially when using non-linear axes, and add functions to draw vector fields (though this still needs some thinking). Enjoy, and a happy new year to everyone!

9 January 2014

Gerfried Fuchs: Clawfinger

It's almost a month since I last blogged something, and one of my new year's resolutions is to change that, a bit. Let's see how it goes. I've been listening a lot to this great band from Sweden recently again, and put all their songs onto my mobile phone. It might sound weird because they have a rather aggressive style, both soundwise and lyricswise, but it helps me to get things off my chest and stay relaxed in the rest of my life. The band I want to present to you was, I noticed, already mentioned twice in other articles of my blog, but this is the proper post about them: I'm talking about Clawfinger. They came up in the nineties during the crossover phase and blended in pretty well, but it's mostly the direct and political statements they carry in their lyrics that let them stand out. One warning though: the direct language they use might be considered blunt and maybe even offensive by some. The message behind it, though, should rather get you thinking about your own doings if you consider it offensive. Here are the songs: Like always, enjoy! And maybe also think about it a bit. :)


26 April 2013

Vincent Sanders: When you make something, cleaning it out of structural debris is one of the most vital things you do.

Collabora recently had a problem with a project's ARM build farm. In a nice change of pace it was not that the kernel was crashing, nor indeed any of the software or hardware.
The Problem
Instead our problem was that our build farm could best be described as "a pile of stuff", and we wanted to add more systems to it and have switched power control for automated testing.

Which is kinda where the Christopher Alexander quote comes into this. I suggested that I might be able to come up with a better, or at least cleaner, solution.
The Idea
A subrack with sub modules
Previous experience had exposed me to the idea of using 19 inch subracks for mounting circuits inside submodules.

I originally envisaged the dev boards individually mounted inside these boxes. However preliminary investigation revealed that the enclosures were both expensive and used a lot of space which would greatly increase the rack space required to house these systems.

imx53 QSB eurocard carrier
I decided to instead look at eurocard type subracks with carriers for the systems. Using my 3D printer I came up with a carrier design for the imx53 QSB and printed it. I used the basic eurocard size of 100mm x 160mm which would allow the cards to be used within a 3U subrack.

Once assembled it became apparent that each carrier would be able to share resources like power supply, ethernet port and serial console via USB just as the existing setup did and that these would need to be housed within the subrack.
The Prototype
The carrier prototype was enough to get enough interest to allow me to move on to the next phase of the project. I purchased a Schroff 24563-194 subrack kit and three packs of guide rails from Farnell and assembled it.

Initially I had envisaged acquiring additional horizontal rails from Schroff which would enable constructing an area suitable for mounting the shared components behind the card area.

Rear profile for Schroff subrack
Unfortunately Schroff have no suitable horizontal profiles in their catalog and are another of those companies who seem to not want to actually sell products to end users but rather deal with wholesalers who do not stock their entire product range!

Printed rear profile for Schroff subrack
Undaunted by this I created my own horizontal rail profile and 3D printed some lengths. The profile is designed to allow a 3mm thick rear cover sheet attached with M2.5 mounting bolts and fit rack sides in the same way the other profiles do.

At this point I should introduce some information on how these subracks are dimensioned. A standard 19 inch rack (as defined in IEC 60297) has a width of 17.75 inches(450.85mm) between the vertical posts. The height is measured in U (1.75 inches)

A subrack must obviously fit in the horizontal gap while providing as much internal space as possible. A subrack is generally either 3 or 6 U high. The width within a subrack is defined in units called HP (Horizontal Pitch) which are 0.2 inches(5.08 mm) and subracks like the Schroff generally list 84 usable HP.

However we must be careful (or actually just learn from me stuffing this up ;-) as the usable HP is not the same thing as the actual length of the horizontal rails! The enclosures actually leave an additional 0.1 inch at either end, giving a total internal width of 85HP (17 inches, 431.8 mm), which leaves 0.75 inches for the subrack sides and some clearance.

The Schroff subrack allows eurocards to be slotted into rails where the card centre line is on HP boundaries, hence we describe the width of a card in the slot in terms of HP

I cannot manufacture aluminium extrusions (I know it is a personal failing) nor produce more than 100 mm long length of the plastic profile on my printer.

Even if full lengths are purchased from a commercial service (120 euros for a pair including tax and shipping) the plastic does not have sufficient mechanical strength.

The solution I came up with was somewhat innovative: as an alternative to an M5 bolt into a thread in the aluminium extrusion, I used a 444mm long length of 4mm threaded rod with nuts at either end. This arrangement puts the extrusion under compression and gives it a great deal of additional mechanical strength, as the steel threaded rod is very strong.

Additionally to avoid having to print enough extrusion for the entire length I used some 6mm aluminium tube as a spacer between 6HP(30.48mm) wide sections of the printed extrusion.

It was intended to use a standard modular PC power supply which is 150mm wide which is pretty close to 30HP (6 inches) so it was decided to have a 6HP section of rail at that point to allow a rear mounting plate for the PSU to be attached.

This gives 6HP of profile, 21HP (106.68mm) of tube spacer, 6HP of profile, 46HP (233.68mm) of tube spacer and a final 6HP of profile, summing to our total of 85HP. Of course this would be unnecessary if a full continuous 85HP rail had been purchased, but six 6HP lengths of profile are only 51 euro, a saving of 70 euro.

To provide a flat area on which to mount the power switching, Ethernet switch and USB hubs I ordered a 170 x 431 mm sheet of 3mm thick aluminium from inspiredsteel who, while being an ebay company, were fast, cheap and the cutting was accurate.

Do be sure to mention you would prefer it if any error made the sheet smaller rather than larger or it might not fit, for me though they were accurate to the tenth of a mm! If you would prefer the rear section of the rack to be enclosed when you are finished, buy a second sheet for the top. For my prototype I only purchased a 170 x 280mm sheet as I was unsure if I wanted a surface under the PSU (you do, buy the longer sheet)

PC power supply mounted to back plate
Mounting the PSU was a simple case of constructing a 3 mm thick plate with the correct cutouts and mounting holes for an ATX supply. Although the images show the PSU mounted on the left hand side of the rack, this was later reversed to improve cable management.

The subrack needed to provide Ethernet switch ports to all the systems. A TP-Link TL-SF1016DS 16-Port 10/100Mbps Switch was acquired and the switch board removed from its enclosure. The switch selected has an easily removed board and is powered by a single 3.3V input which is readily available from the ATX PSU.

Attention now returned to the eurocard carriers for the systems, the boards to be housed were iMX53 QSB and iMX6 SABRE Lite and a Raspberry Pi control system to act as USB serial console etc.

The carriers for both main boards needed to be 8HP wide, comprised of:
Although only 38 mm, this is 7.5HP, and fractions of an HP are not possible with the selected subrack.

With 8HP wide modules this would allow for ten slots, within the 84 usable HP, and an eleventh 4HP wide in which the Raspberry Pi system fits.

iMX6 SABRE Lite eurocard carrier
Carrier designs for both the i.MX53 QSB and the i.MX6 SABRE Lite boards were created and fabricated at a professional 3D print shop, which gave a high quality finished product and removed the perceived risk of relying on a personal 3D printer for a quantity of parts.

This resulted in changes in the design to remove as much material as possible as commercial 3D services charge by the cubic cm. This Design For Manufacture (DFM) step removed almost 50% from the price of the initial design.

i.MX53 QSB carriers with wiring loom
The i.MX6 design underwent a second iteration to allow for the heatsink to be mounted without mechanically interfering with the hard drive (although the prototype carrier has been used successfully for a system that does not require a hard drive). The lesson learned here is to be aware that a design iteration or two is likely, and that it is not without cost.

The initial installation was to have six i.MX53 and two i.MX6; this later changed to a five/four split. However, the carrier solution allows for almost any combination. The only caveat (discovered later) is that the imx53 carriers should be on the right hand side, with the small 4HP gap at that end, as they have a JTAG connector underneath the board which would otherwise foul the hard drive of the next carrier.

Racked cards showing unwanted cable tails
A wiring loom was constructed for each board, giving them a connector tail long enough to allow them to be removed. This was the wrong approach! If you implement this design (or when I do it again), the connector tails on the wiring loom should present all the connections to the rear at the same depth as the Ethernet connection.

The rack cables themselves should be long enough to allow the slides to be removed but importantly it is not desirable to have the trailing cable on the cards. I guess the original eurocard designers figured this out as they designed the cards around the standard fixed DIN connectors at the back of the card slots.

USB relay board with wiring loom attached
We will now briefly examine a misjudgement that caused the initially deployed solution to be reimplemented. As the design was going to use USB serial converters to access the serial consoles, a USB-connected relay board was selected to switch the power to each slot. I had previously used serial-controlled relay boards with a USB serial convertor; however, these were no longer available.

Initial deployment with USB controlled relay board
All the available USB relay boards were HID controlled, this did not initially seem to be an issue and Linux software was written to provide a reasonable interface. However it soon became apparent that the firmware on the purchased board was very buggy and crashed the host computer's USB stack multiple times.

Deployed solution
Once it became apparent that the USB-controlled power board was not viable, a new design was conceived. As the Ethernet switch had ports available, Ethernet-controlled relay boards were acquired.

Evolution of 3mm PCB pillars
It did prove necessary to design and print some PCB support posts with M3 nut traps to allow the relay boards to be easily mounted using double sided adhesive pads.

By stacking the relay boards face to face and the Ethernet switch on top separated using nylon spacers it was possible to reduce the cable clutter and provide adequate cable routing space.

A busbar for Ground (black) and unswitched 12V (yellow) was constructed from two lengths of 5A chock block.

An issue with power supply stability was noted so a load resistor was added to the 12V supply and an adhesive thermal pad used to attach it to the aluminium base plate.

Completed redesign
It was most fortunate that the ethernet switch mounting holes lined up very well with the relay board mounting holes allowing for a neat stack.

This second edition is the one currently in use, it has proved reliable in operation and has been successfully updated with additional carriers.

The outstanding issues are mainly centered around the Raspberry Pi control board:
  • Needs its carrier fitting. It is currently just stuck to the subrack end plate.
  • Needs its Ethernet cable replacing. The existing one has developed a fault post installation.
  • Needs the USB hub supply separating from the device cable. The current arrangement lets the hub power the Pi which means you cannot power cycle it.
  • Connect its switched supply separately to the USB hub/devices.
Shopping list
The final bill of materials (excluding labour and workshop costs), which might be useful to anyone hoping to build their own version.

Prices are in GBP currency converted where appropriate and include tax at 20% and delivery to Cambridge UK and were correct as of April 2013.

The purchasing was not optimised and for example around 20GBP could be saved just by ordering all the shapeways parts in one order.
Base subrack
Item | Supplier | Quantity | Line Price
Schroff 24563-194 subrack kit | Farnell | 1 | 41.28
Schroff 24560-351 guide rails | Farnell | 3 | 13.65
Schroff rack rear horizontal rail | Shapeways | 2 | 100.00
1000mm length of 4mm threaded rod | B and Q | 1 | 1.48
170mm x 431mm x 3mm Aluminium sheet | inspired steel | 2 | 40.00
PSU mounting plate | Shapeways | 1 | 35.42
PCB standoff | Shapeways | 4 | 22.30
160mm Deep Modular PC supply | CCL | 1 | 55.76
TP-Link TL-SF1016DS 16-Port 10/100Mbps Switch | CCL | 1 | 23.77
8 Channel 16A Relay Board Controlled Via Ethernet | Rapid | 2 | 126.00
Raspberry Pi | Farnell | 1 | 26.48
USB Serial converters | CCL | 10 | 37.40
10 port strip style USB HUB | Ebay | 1 | 7.00
Parts for custom Ethernet cables | RS | 13 | 26.00
Parts for custom molex power cables (salvaged from scrap ATX PSU) | Workshop | 11 | 11.00
33R 10W wirewound resistor for dummy load | RS | 1 | 1.26
24pin ATX female connector pre-wired | Maplin | 1 | 2.99
Akasa double sided thermal pad | Maplin | 1 | 5.00
Small cable tie bases | Maplin | 1 | 6.49
Miscellaneous cable, connectors, nylon standoffs, solder, heatshrink, zip ties, nuts, washers etc. | Workshop | 1 | 20.00
Total for subrack | | | 603.28

The carriers are similarly not optimally priced as over five GBP each can be saved by combining shipping on orders alone. Also the SSD drive selection was made some time ago and a newer model may be more suitable.
i.MX53 QSB carrier
Item | Supplier | Quantity | Line Price
i.MX53 QSB | Farnell | 1 | 105.52
Intel 320 SSD 80G | CCL | 1 | 111.83
Carrier board | Shapeways | 1 | 30.00
Combined SATA data and power cable (15 to 20cm version) | EBay | 1 | 5.00
Low profile right angle 5.5mm x 2.1mm barrel jack | EBay | 1 | 0.25
Parts for 9pin serial cable extension | RS | 1 | 5.00
Miscellaneous solder, heatshrink, nylon nuts, bolts and washers | Workshop | 1 | 5.00
Total for carrier | | | 262.60

i.MX6 SABRE Lite carrier
Item | Supplier | Quantity | Line Price
i.MX6 SABRE Lite | Farnell | 1 | 128.06
Intel 320 SSD 80G | CCL | 1 | 111.83
Carrier board | Shapeways | 1 | 35.00
Combined SATA data and power cable (15 to 20cm version) | EBay | 1 | 5.00
Low profile right angle 5.5mm x 2.1mm barrel jack | EBay | 1 | 0.25
Parts for 9pin serial cable modification | RS | 1 | 2.00
Miscellaneous solder, heatshrink, nylon nuts, bolts and washers | Workshop | 1 | 5.00
Total for carrier | | | 287.14
Conclusion
The solution works: in a 3U high, 355mm deep subrack, ten ARM development boards can be racked complete with local Ethernet switching, power control and serial consoles.

Deployed system in situ configured as a build and test farm
The solution is neat and provides flexibility, density and reproducibility the "pile of stuff" solution failed to do.

For the current prototype with nine filled slots, the total cost was around 3000GBP, or around 330GBP per slot, which indicates a 100GBP-per-slot overhead over the "pile of stuff" solution. These figures omit the costs of engineer and workshop time, which are estimated at an additional 1500GBP. Therefore a completed rack, fully filled with i.MX6 carriers, costs around 5000GBP.

Density could be increased if boards with lower height requirements were used; however, above twelve units, issues with Ethernet switch, power switch and USB port availability become a factor. For example, the 16-port Ethernet switch requires a port for the uplink, one for each relay board and one for the console server, which leaves only 12 ports for systems.

Addressing the outstanding issues would result in a much more user friendly solution. As the existing unit is in full time use and downtime is not easily scheduled for all ten systems, the issues are not likely to be fixed on the prototype and would have to be solved on a new build.

The solution is probably not suitable for turning into a product but that was not really the original aim. A commercial ARM blade server using this format would almost certainly use standard DIN connectors and a custom PCB design rather than adapting existing boards.

20 February 2013

Keith Packard: DRI3000

DRI3000 Even Better Direct Rendering This all started with the presentation that Eric Anholt and I did at the 2012 X developers conference, which I subsequently wrote about in my DRI-Next posting. That discussion sketched out the goals of changing the existing DRI2-based direct rendering infrastructure. Last month, I gave a more detailed presentation at Linux.conf.au 2013 (the best free software conference in the world). That presentation was recorded, so you can watch it online. Or, you can read Nathan Willis' summary at lwn.net. That presentation contained a lot more detail about the specific techniques that will be used to implement the new system; in particular it included some initial indications of what kind of performance benefits the overall system might be able to produce. I sat down today and wrote down an initial protocol definition for two new extensions (because two extensions are always better than one). Together, these are designed to provide complete support for direct rendering APIs like OpenGL and offer a better alternative to DRI2. The DRI3 extension Dave Airlie and Eric Anholt refused to let me call either actual extension DRI3000, so the new direct rendering extension is called DRI3. It uses POSIX file descriptor passing to share kernel objects between the X server and the application. DRI3 is a very small extension with three requests:
  1. Open. Returns a file descriptor for a direct rendering device along with the name of the driver for a particular API (OpenGL, Video, etc).
  2. PixmapFromBuffer. Takes a kernel buffer object (Linux uses DMA-BUF) and creates a pixmap that references it. Any place a Pixmap can be used in the X protocol, you can now talk about a DMA-BUF object. This allows an application to do direct rendering, and then pass a reference to those results directly to the X server.
  3. BufferFromPixmap. This takes an existing pixmap and returns a file descriptor for the underlying kernel buffer object. This is needed for the GL Texture from Pixmap extension.
For OpenGL, the plan is to create all of the buffer objects on the client side, then pass the back buffer to the X server for display on the screen. By creating pixmaps, we avoid needing new object types in the X server and can use existing X APIs that take pixmaps for these objects. The Swap extension Once you've got direct-rendered content in a Pixmap, you'll want to display it on the screen. You could simply use CopyArea from the pixmap to a window, but that isn't synchronized to the vertical retrace signal. And the semantics of the CopyArea operation preclude us from swapping the underlying buffers around, making it more expensive than strictly necessary. The Swap extension fills those needs. Because the DRI3 extension provides an X pixmap reference to the direct-rendered content, the Swap extension doesn't need any new object types for its operation. Instead, it talks strictly about core X objects, using X pixmaps as the source of the new data and X drawables as the destination. The core of the Swap extension is one request, SwapRegion. This request moves pixels from a pixmap to a drawable. It uses an XFixes Region object to specify the area of the destination being painted, and an offset within the source pixmap to align the two areas. A bunch of data is included in the reply from the SwapRegion request. First, you get a 64-bit sequence number identifying the swap itself. Then, you get a suggested geometry for the next source pixmap; using the suggested geometry may result in performance improvements from the techniques described in the LCA talk above. The last bit of data included in the SwapRegion reply is a list of pixmaps which were used as source operands to earlier SwapRegion requests to the same drawable. Each pixmap is listed along with the 64-bit sequence number associated with the earlier SwapRegion operation which resulted in the contents the pixmap now contains. Ok, so that sounds really confusing. Some examples are probably necessary. I'm hoping you'll be able to tell that in both cases, the idle swap count tries to name the swap sequence at which time the destination drawable contained the contents currently in the pixmap. Note that even if the SwapRegion is implemented as a Copy operation, the provided source pixmap may not be included in the idle list, as the copy may be delayed to meet the synchronization requirements specified by the client. Finally, if you want to throttle rendering based upon when frames appear on the screen, Swap offers an event that can be delivered to the drawable after the operation actually takes place. Because the Swap extension needs to supply all of the OpenGL SwapBuffers semantics (including a multiplicity of OpenGL extensions related to that), I've stolen a handful of DRI2 requests to provide the necessary bits for that:
  1. SwapGetMSC
  2. SwapWaitMSC
  3. SwapWaitSBC
These work just like the DRI2 requests of the same name.

Current State of the Extensions

Both of these extensions have an initial protocol specification written down and stored in git:
  1. DRI3 protocol
  2. Swap protocol
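To illustrate the idle-pixmap machinery described above, here is the promised sketch of a redraw loop built on SwapRegion. As before, this is Python-style pseudocode; the swap binding and the allocate_back_buffer/render_scene helpers are hypothetical illustrations, not a real API.
idle_pixmaps = []   # pixmaps the server has handed back as reusable

def draw_frame(conn, window, region):
    # Recycle an idle buffer from an earlier swap when one is
    # available, rather than allocating a fresh one every frame.
    if idle_pixmaps:
        src = idle_pixmaps.pop()
    else:
        src = allocate_back_buffer(conn, window)   # hypothetical helper

    render_scene(src)                              # hypothetical helper

    # Move the pixels to the window, synchronized to vertical retrace.
    reply = swap.swap_region(conn, src, window, region)

    # The reply names pixmaps used as sources of earlier SwapRegion
    # requests to this drawable, each tagged with the 64-bit sequence
    # number of the swap whose results it now contains; all of them
    # are free for reuse.
    for pixmap, swap_count in reply.idle:
        idle_pixmaps.append(pixmap)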

4 July 2012

Stefano Zacchiroli: bits from the DPL for June 2012

Monthly DPL bits, fresh from the oven^W^W^W hot from DebConf12, and just posted to d-d-a.
Howdy from DebConf12. It's hot, but it's also time to bother you again with a (not so) brief DPL activity report, this time for June 2012.

Time-based freeze: DONE, short freeze: TODO
Two highlights for this month. First, you've probably noticed Wheezy is now frozen, YAY. This is a huge achievement for the release, but also for the project. It's the first time we do a time-based freeze, and it took some quite heated discussion at the beginning of the release cycle to decide to do this. And we did it properly: respecting the planned month and narrowing down the period later. This exercise has hopefully helped both DDs in their package planning and our upstreams in targeting Wheezy with stable releases of their software. Kudos to the release team for their coordination work! Now we have the second part still TODO: releasing Wheezy, without RC bugs, with a freeze period as short as possible. See the beginning of my last "bits from the DPL" mail for my usual song and dance :-P on how to deliver that, together.

DebConf12
A lot of us will attend DebConf12. Enjoy it! ... and take the chance to both have fun and make great plans for Debian's future. But remember that "if it didn't happen on a mailing list, it didn't happen". Not all of us will be lucky enough to attend DebConf (in person or remotely). Make sure that those who don't can take part in your team decisions and get informed of what is going to happen here.

Politics
Zack's spring tour: I spent a significant part of June doing Debian talks in some sort of "spring tour" between Italy and France. In particular: Many thanks to the organizers of these events for inviting and sponsoring me (as well as other Debian people, in the ESRF case) and for their interest in Debian.

Sprints, assets, discussions
Some relevant discussions for project evolution have been going on in June and I took part in them. You might want to have a look at them:

Misc
Cheers.
PS the boring day-to-day activity log for June is available at master:/srv/leader/news/bits-from-the-DPL.txt.201206

22 March 2012

Julien Danjou: xpyb 1.3 released

It took a while to get it out, but finally, 3 years after the latest release (1.2), version 1.3 of xpyb (the XCB Python bindings) is out. This version has a lot of improvements and major bug fixes (a memory corruption and a memory leak were tracked down and fixed). One amazing feature now shipped with this release is my code to export the xpyb API to other Python modules, which allows drawing with Pycairo in Python on top of XCB. Here is an example of a Python program that draws a spiral in a window using xpyb and Pycairo. You need xpyb >= 1.3 and Pycairo >= 1.10 to make this work.
import cairo
import xcb
from xcb.xproto import *
WIDTH, HEIGHT = 600, 600
def draw_spiral(ctx, width, height):
    """Draw a spiral with lines!"""
    wd = .02 * width
    hd = .02 * height
    width -= 2
    height -= 2
    ctx.move_to (width + 1, 1-hd)
    for i in range(9):
        ctx.rel_line_to (0, height - hd * (2 * i - 1))
        ctx.rel_line_to (- (width - wd * (2 * i)), 0)
        ctx.rel_line_to (0, - (height - hd * (2 * i)))
        ctx.rel_line_to (width - wd * (2 * i + 1), 0)
    ctx.set_source_rgb (0, 0, 1)
    ctx.stroke()
# Connect to the X server
conn = xcb.connect()
# Get the X server setup
setup = conn.get_setup()
# Generate X ID for our X "objects"
window = conn.generate_id()
pixmap = conn.generate_id()
gc = conn.generate_id()
# Create a new window
conn.core.CreateWindow(setup.roots[0].root_depth, window,
                       # Parent is the root window
                       setup.roots[0].root,
                       0, 0, WIDTH, HEIGHT, 0, WindowClass.InputOutput,
                       setup.roots[0].root_visual,
                       CW.BackPixel | CW.EventMask,
                       [ setup.roots[0].white_pixel, EventMask.ButtonPress | EventMask.EnterWindow | EventMask.LeaveWindow | EventMask.Exposure ])
# Create a pixmap: it will be used to draw with cairo
conn.core.CreatePixmap(setup.roots[0].root_depth, pixmap, setup.roots[0].root,
                       WIDTH, HEIGHT)
# We just need a GC to copy later the pixmap on the window, so create one
# very simple
conn.core.CreateGC(gc, setup.roots[0].root, GC.Foreground | GC.Background,
                   [ setup.roots[0].black_pixel, setup.roots[0].white_pixel ])
# Create a cairo surface
surface = cairo.XCBSurface (conn, pixmap,
                            setup.roots[0].allowed_depths[0].visuals[0], WIDTH, HEIGHT)
# Create a cairo context with that surface
ctx = cairo.Context(surface)
# Paint everything in white
ctx.set_source_rgb (1, 1, 1)
ctx.set_operator (cairo.OPERATOR_SOURCE)
ctx.paint()
# Draw our spiral
draw_spiral (ctx, WIDTH, HEIGHT)
# Map the window on the screen so it gets visible
conn.core.MapWindow(window)
# Flush all X requests to the X server
conn.flush()
while True:
    try:
        event = conn.wait_for_event()
    except xcb.ProtocolException, error:
        print "Protocol error %s received!" % error.__class__.__name__
        break
    except:
        break
    # ExposeEvent are received when we need to refresh the content of the
    # window, so we copy the content of the pixmap (where cairo drew) in the
    # window
    if isinstance(event, ExposeEvent):
        conn.core.CopyArea(pixmap, window, gc, 0, 0, 0, 0, WIDTH, HEIGHT)
    # You click, I quit.
    elif isinstance(event, ButtonPressEvent):
        break
    conn.flush()
Seeing how complex it is to draw something simple with this technology, I somehow understand why nobody bothered to release or use this code during the last 3 years. But hey, now that it's out, you can build the next Python-based desktop environment with bleeding edge technologies. :-)

15 July 2010

Enrico Zini: On python, frameworks and TOOWTDI

The Python world is riddled with frameworks, microframeworks, metaframeworks and their likes. They are often very clever things, but more often than not they are a tool of despair. A very peculiar thing about Python web frameworkish things is that there are so many of them. There's cherrypy (in its various API redesigns), fapws, gunicorn, bottle and flask, paste, werkzeug and flup, tornado, pylons, turbogears 1 and 2, django, repoze who, what and whatnot, all the myriad of rendering engines and buffet as a metathing on top of them, diesel, twisted, and I apologise if I don't spend my day listing and hyperlinking them all; I hope I made my point.

Frameworks are supposed to standardise some aspects of programming; the nice thing about standards is that there are so many of them to choose from, and they all suck, so I'll make my own. But wasn't Python supposed to be the world of TIOOWTDI? Ok, everybody knows it isn't. Just in the standard library there are 2 implementations of pickle and 2 urllibs. But people like the TIOOWTDI idea. I believe the reason people like the TIOOWTDI idea is because it creates a framework. It standardises some aspects of programming, and defines building blocks that guarantee that people doing similar jobs will be using similar sets of components.

Let's take for example the datetime module in the standard library. It is an embarrassing example of a badly designed module, so embarrassing that the standard library documentation continuously fails to document its fundamental design flaws and common work-arounds, hoping that no one notices them; as a consequence, each poor soul starting to use it for nontrivial things has to google for hours in despair to rediscover in how many ways it's broken. But still, datetime works as a structure to hold those values that make a date, time, or full UTC timestamp. For that job it's become the standard, and as such it's an important component of the Python TIOOWTDI framework: one can use it to exchange datetimes among different libraries. For example, ORMs are using it instead of rolling their own, which makes database programming so much easier when date/time is involved. Even if the implementation is far from perfect, once we apply TIOOWTDI to dates and timestamps, Python code from different authors can exchange dates without worries (a small sketch of this follows at the end of the post). This is much better than having 3 different superior datetime libraries and having to convert date objects from one to another when passing values from a web form to an ORM.

There is an often overlooked Python framework. The Python framework. It's called TIOOWTDI. All the micro-mini-midi-maxi-meta-frameworks that people scatter around are, or should be, just experiments, proofs of concept, competing ideas waiting to be distilled into The Only One Way, bringing the Python experience one step forward. What is unfortunate is that this last distillation thing happens so rarely that people get used to the idea of having to use proof-of-concept code to get things done.

Update: this post apparently wasn't very clear, so here is some clarification:
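(Here is the small sketch promised above: datetime as the shared currency between libraries, using nothing beyond the standard library. It also shows the module's classic trap: "naive" values carry no time zone, and naive and aware values do not mix.)
from datetime import datetime

# A naive datetime: no tzinfo attached.
deadline = datetime(2010, 7, 15, 12, 0)

# Any library that speaks datetime can consume this object directly
# (an ORM column, a web-form parser, plain arithmetic), because it is
# the one agreed-upon structure for date/time values.
elapsed = datetime.now() - deadline    # fine: both operands are naive
print elapsed.days

# The flip side: mixing a naive datetime with an aware one in
# arithmetic or comparisons raises TypeError, one of the warts each
# new user rediscovers.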

23 April 2010

Joachim Breitner: Making dictionary passing explicit in Haskell

Haskell provides type classes to support polymorphism. A type class defines a few methods, which can then be implemented for a concrete type in the type class instance. This is a powerful system, but it also has its drawbacks. Most notably, each type can have at most one implementation of the type class. But sometimes you need to use a different implementation. If, for example, you used the Binary class to store data on disk and have since changed your data type and the Binary instance, you cannot read the old data any more. One solution is to rename your type using newtype and implement another instance for that. Often, this is enough. But still, instances are not first-class citizens: you cannot pass them around or modify them, as you can pass around and modify data and functions. Under the hood of the compiler, things look different. GHC puts the methods of the instance in a dictionary and passes that implicitly to any function having a (Class a) constraint. (Other implementations exist, though.) If one could make that behavior explicit, one could easily modify the instance before passing it to the function. Unfortunately, this is not possible. But it is possible to pass an explicit dictionary along with the data. I use the Monoid class as an example, and define a representation of the dictionary to be passed, as well as the dictionary of the default instance:
data MonoidDict a = MonoidDict
  { ed_mempty :: a
  , ed_mappend :: a -> a -> a
  }
monoidDict :: Monoid a => MonoidDict a
monoidDict = MonoidDict mempty mappend
(For conciseness, I ignore the mconcat method.) My first idea was to pass this instance along with the data: (MonoidDict a, a). But this does not work, because there are methods, such as mempty, which need the dictionary without being passed a value to use. Therefore, I need to put the dictionary in both the covariant and the contravariant position:
newtype WithMonoidDict a = WithMonoidDict (MonoidDict a -> (MonoidDict a, a))
We need functions to clamp a dictionary to a value, and to extract it again:
wrapWithCustomMonoidDict :: MonoidDict a -> a -> WithMonoidDict a
wrapWithCustomMonoidDict dict val = WithMonoidDict $ const (dict, val)
extractFromCustomMonoidDict :: MonoidDict a -> WithMonoidDict a -> a
extractFromCustomMonoidDict dict (WithMonoidDict f) = snd (f dict)
Note that both expect the dictionary, so that it can be fed into WithMonoidDict from both sides. For convenience, we can define variants that use the standard instance:
wrapWithMonoidDict :: Monoid a => a -> WithMonoidDict a
wrapWithMonoidDict = wrapWithCustomMonoidDict monoidDict
extractFromMonoidDict :: Monoid a => WithMonoidDict a -> a
extractFromMonoidDict = extractFromCustomMonoidDict monoidDict
We want to be able to pass the wrapped values as any other value with a Monoid instance, so we need to declare that:
instance Monoid (WithMonoidDict a) where
  mempty = WithMonoidDict (\d -> (d, ed_mempty d))
  mappend (WithMonoidDict f1) (WithMonoidDict f2) = WithMonoidDict $ \d ->
    let (d1,v1) = f1 d
        (d2,v2) = f2 d
    in  (d1, ed_mappend d1 v1 v2)
Note that mappend has the choice between three dictionaries. This is not a good sign, but let's hope that they are all the same. Does it work? Let's see:
listInstance :: MonoidDict [a]
listInstance = monoidDict

reverseInstance :: MonoidDict [a]
reverseInstance = monoidDict { ed_mappend = \l1 l2 -> l2 ++ l1 }
examples = do
  let l1 = [1,2,3]
  let l2 = [4,5,6]
  putStrLn $ "Example lists: " ++ show l1 ++ " " ++ show l2
  putStrLn $ "l1 ++ l2: " ++ show (l1 ++ l2)
  putStrLn $ "l1 `mappend` l2: " ++ show (l1 `mappend` l2)
  putStrLn $ "Wrapped with default instance:"
  putStrLn $ "l1 `mappend` l2: " ++ show (
    extractFromMonoidDict $ wrapWithMonoidDict l1 `mappend` wrapWithMonoidDict l2)
  putStrLn $ "Same with reversed monoid instance:"
  putStrLn $ "l1 `mappend` l2: " ++ show (
    extractFromCustomMonoidDict reverseInstance $
      wrapWithCustomMonoidDict reverseInstance l1 `mappend`
      wrapWithCustomMonoidDict reverseInstance l2)
Running examples gives this output:
Example lists: [1,2,3] [4,5,6]
l1 ++ l2: [1,2,3,4,5,6]
l1 `mappend` l2: [1,2,3,4,5,6]
Wrapped with default instance:
l1 `mappend` l2: [1,2,3,4,5,6]
Same with reversed monoid instance:
l1 `mappend` l2: [4,5,6,1,2,3]
Indeed it works. Unfortunately, this approach is not sufficient for all cases. It is perfectly valid to have a function with signature (Monoid a => Maybe a -> Maybe a), whose behavior depends on the instance of a, even when being passed Nothing and returning Nothing. Such a function would have a problem here, because the dictionary would not be passed to it. I wonder if it would be possible to extend the Haskell language somehow to be able to properly pass an alternative dictionary to such functions. But given that not all compilers use dictionary passing, my hopes are low.
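For contrast, here is a rough rendering of the same idea in Python, where the "dictionary" is an ordinary first-class value and passing an alternative instance needs no wrapper type at all. All names below are illustrative, not taken from any library.
class MonoidDict(object):
    # An explicit "instance": the two methods bundled as a plain value.
    def __init__(self, mempty, mappend):
        self.mempty = mempty
        self.mappend = mappend

list_monoid = MonoidDict([], lambda a, b: a + b)
reverse_monoid = MonoidDict([], lambda a, b: b + a)

def mconcat(dict_, values):
    # The dictionary is an ordinary parameter, so supplying an
    # alternative implementation for the same type is trivial.
    result = dict_.mempty
    for v in values:
        result = dict_.mappend(result, v)
    return result

print mconcat(list_monoid, [[1, 2], [3], [4]])      # [1, 2, 3, 4]
print mconcat(reverse_monoid, [[1, 2], [3], [4]])   # [4, 3, 1, 2]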

11 November 2009

Yves-Alexis Perez: Key transition (this is _not_ a meme)

Ok, so following the trend, I created some time ago a new GPG key, to which I'm now transitioning. I've set up a transition document, available at http://molly.corsac.net/~corsac/key-transition.txt. It's signed by both the old and the new keys and is reproduced below:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160,SHA512
Wed, 11 Nov 2009 13:44:05 +0100
I've recently set up a new RSA-based GPG key, and will be transitioning away
from my old DSA-based one.  The old key will be revoked soon, so I prefer all
future correspondence to use the new one.  I would also like to ensure that
this new key is well-integrated into the web of trust.  This message is signed
by both keys to certify the transition.
The old DSA key was:
pub   1024D/C5C05BAE 2004-11-11
      Key fingerprint = DE26 2FC4 7097 FFC6 DE2C  D8C0 4D44 C020 C5C0 5BAE
The new RSA key is:
pub   4096R/71EF0BA8 2009-05-06
      Key fingerprint = 4510 DCB5 7ED4 7040 60C6  6476 3055 0F78 71EF 0BA8
If you already know my old key, you can verify that the new key is
signed by the old one:
  gpg --check-sigs 71EF0BA8
If you don't already know my old key, or if you're extra-paranoid, you
can check the fingerprint against the one given above:
  gpg --fingerprint 71EF0BA8
If you have previously signed my old DSA key, and if you're satisfied
that you've got the correct new RSA key, then I'd appreciate it if you
would sign my new key as well:
  caff 71EF0BA8
The caff program is in the signing-party package in Debian.  Please be careful
to generate signatures that don't rely on the weakening SHA-1 hash algorithm,
which requires some careful configuration even if you've already configured
gpg correctly.  See http://www.gag.com/bdale/blog/posts/Strong_Keys.html for
the gory details.
Thanks,
- --
Yves-Alexis Perez
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
iEYEAREDAAYFAkr6sqQACgkQTUTAIMXAW64HiACeIyabQueDHAeiAX8EkIeApiDj
++UAn2z7YkjHx0lQh0+s5WdhikG0YztiiQIcBAEBCgAGBQJK+rKkAAoJEDBVD3hx
7wuodUcQAKMbG9Rehxz+uZ6fST99cHt5Fjnv9TorY4hQaQK+85ZgiwPaHMHfYM1G
5hcrXI+JFUpz8j40deZuaWuspOdHBHwnHNQril8MqT0CJgtB6HFTo+w/7Lmmui5M
DDMMed39UJl7bF73hV9ywGecxPpeh+dtoVnh0VT16uK2xTvW6ICEZgaPw1xfPUHS
+jxQ7I05X1OWQkPpmhxXJqGclDyO+qx4CJZsOxUAvt2LphHxhZxB3QE5OUdudGKQ
AH6KhC4rpNQdJVMX20SG8PybL/AipN3Y8N/63VkoqVC2heRlaQ69HjsuqIAkIyan
hHnqmJH8Q+TDTbdKZvOQv6jcd4o3VSibz0T9MwnOfqQ0uRYyTpaXC0vLUH6lXaC4
eK+VVWbY8vCAFHR3h80Q61i2me2HU5ly7a/W22dz19zzDNNC5q9MO78uIYkUK78N
Z0wzJrmOxRyhvs5DOSOpNVlXZhffNQM1f42xxG8cUDaIf7pR5jK+xqHV7tIBQE1D
CrD0mt+YQCnngK0i4wQTO7VT/vjypf4A9W+VSsoJJpRhBbngU4pHu9JWqO84/7AA
j5FN8ug15MWysaS+FQ/EqzHmT7BGBWaTPv3yGlHKUjx0w4bPEpbH7y3fwHAcmOFf
xFRzvZFQ03zeer06yAqTVNuwr77HZgrCzgyQVgIkegAg6iUPiZcs
=CBT+
-----END PGP SIGNATURE-----

10 November 2007

Simon Richter: C and multithreading

Ian Lance Taylor started a discussion that has in the meantime reached Planet Debian via Miry and Giacomo. The single-threaded memory model is the only sane choice for languages that are as close to the "metal" as C and C++ are. For a multi-threaded model to work, the compiler needs to have intrinsic knowledge of "locking", and it needs to emit the appropriate code around all accesses to variables that cannot be accessed atomically (which also means that signal handling does not work; the signal might arrive in a critical section).

Java uses such a model. In the language, there is the "synchronized" keyword, which can be used either as a modifier on a method definition (locking the object the method is called on), or before a block (locking an explicitly specified object). The lock is recursive, so calling another "synchronized" method on the same object works fine; leaving the last one finally unlocks the object. Memory consistency is achieved by flushing the object before unlocking (so accessing an object that another thread may use without locking it is still undefined behaviour, but class authors can avoid that by allowing access only through "synchronized" methods).

I doubt we want the same in C/C++. For one, it adds a recursive lock to each instance of a class or struct, whether needed or not, since the compiler cannot rule out that some code might want to use this object as a lock. Also, the object inclusion and inheritance rules are quite different, as Java does not have inclusion, and only allows simple inheritance. Basically, if I were to emulate Java's rules, I'd need a "virtual" base class to provide the lock, which is a lot of extra overhead since every derived class now needs to keep the offset to the most derived class. The other two big showstoppers are the "intrinsic" knowledge of locking that the compiler must have (it needs to know how to embed a recursive lock into an object, which is an operating-system dependent structure, as it stores some thread identifier for the current holder of the lock) and that it becomes a lot more difficult to write signal handlers, as you need to avoid any code that might implicitly lock some object (which is true already, but currently there is no implicit locking). Java can do that, as they have a virtual machine that provides locking primitives and object lookup (so they can keep the lock separate).

There is a change to the memory model I'd like to see, though: the concept of an I/O buffer that behaves like regular memory (so load/store coalescing/reordering is allowed), but will be flushed before volatile accesses and re-read afterwards (optionally, with finer grained control). This would work for both hardware accesses (where the volatile access triggers some hardware) and for "hosted" environments (where the compiler assumes that system calls perform volatile accesses), and still allow most optimisations, like writing in the hardware's natural bus width.

If you want to write reliable multi-threaded code with current compilers, there are two simple rules:
  1. any data structure shared by multiple threads needs to be marked volatile
  2. any access to a shared data structure needs to be guarded by a lock
The accesses to the shared data structure are ordered with regard to the locking functions and cannot be optimized out, which is exactly what you want. The appropriate mental model is that anything that is not declared volatile (or handled by a platform locking primitive) cannot be seen by the other threads.
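For what it's worth, the second rule translates directly to higher-level languages. Here is a small sketch in Python (the language used for examples elsewhere on this page); C's volatile from rule 1 has no Python analogue, so only the locking discipline is shown: every access to the shared structure, reads included, happens with the lock held.
import threading

counter_lock = threading.Lock()
counter = 0                       # the shared data structure

def worker(iterations):
    global counter
    for _ in range(iterations):
        with counter_lock:        # guard every access, writes and reads
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
with counter_lock:
    print counter                 # deterministically prints 40000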
