If you've perused the ActivityPub feed of certificates whose keys are known to be compromised, and clicked on the "Show More" button to see the name of the certificate issuer, you may have noticed that some issuers seem to come up again and again.
This might make sense; after all, if a CA is issuing a large volume of certificates, they'll be seen more often in a list of compromised certificates.
To see if there is anything we can learn from this data, though, I did a bit of digging, and came up with some illuminating results.
The Procedure
I started off by finding all the unexpired certificates logged in Certificate Transparency (CT) logs whose key is listed in the pwnedkeys database as having been publicly disclosed.
From this list of certificates, I removed duplicates by matching up issuer/serial number tuples, and then reduced the set by counting the number of unique certificates by their issuer.
This gave me a list of the issuers of these certificates, which looks a bit like this:
/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Domain Validation Secure Server CA
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Organization Validation Secure Server CA
/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure Certificate Authority - G2
/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA
/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020
Rather than try to work with raw issuers (because, as Andrew Ayer says, "The SSL Certificate Issuer Field is a Lie"), I mapped these issuers to the organisations that manage them, and summed the counts for those grouped issuers together.
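A minimal Python sketch of that dedup-and-group step, using hypothetical certificate data and an illustrative issuer-to-organisation mapping (the real mapping was derived from CCADB data, as described in the appendix), might look like this:

```python
from collections import Counter

# Hypothetical sample of (issuer DN, serial) tuples from CT log entries;
# a precertificate and its final certificate share an issuer/serial, so
# the set() below collapses them into one logical certificate.
compromised_certs = [
    ("/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4", "0x01"),
    ("/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4", "0x01"),  # duplicate
    ("/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020", "0x02"),
    ("/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA", "0x03"),
]

# Illustrative issuer-DN -> organisation mapping (hypothetical values;
# the real mapping comes from CCADB ownership data).
org_for_issuer = {
    "/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4": "GlobalSign",
    "/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020": "GlobalSign",
    "/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA": "ZeroSSL",
}

# Deduplicate on (issuer, serial), then count unique certs per organisation.
unique_certs = set(compromised_certs)
counts = Counter(org_for_issuer[issuer] for issuer, _serial in unique_certs)
print(counts)  # → Counter({'GlobalSign': 2, 'ZeroSSL': 1})
```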
The Data
The end result of this work is the following table, sorted by the count of certificates which have been compromised by exposing their private key:
Issuer                 Compromised Count
Sectigo                              170
ISRG (Let's Encrypt)                 161
GoDaddy                              141
DigiCert                              81
GlobalSign                            46
Entrust                                3
SSL.com                                1
If you're familiar with the CA ecosystem, you'll probably recognise that the organisations with large numbers of compromised certificates are also those who issue a lot of certificates.
So far, nothing particularly surprising, then.
Let's look more closely at the relationships, though, to see if we can get more useful insights.
Volume Control
Using the issuance volume report from crt.sh, we can compare issuance volumes to compromise counts, to come up with a "compromise rate".
I'm using the "Unexpired Precertificates" column from the issuance volume report, as I feel that's the number that best matches the certificate population I'm examining to find compromised certificates.
To maintain parity with the previous table, this one is still sorted by the count of certificates that have been compromised.
Issuer                 Issuance Volume  Compromised Count  Compromise Rate
Sectigo                     88,323,068                170  1 in 519,547
ISRG (Let's Encrypt)       315,476,402                161  1 in 1,959,480
GoDaddy                     56,121,429                141  1 in 398,024
DigiCert                   144,713,475                 81  1 in 1,786,586
GlobalSign                   1,438,485                 46  1 in 31,271
Entrust                         23,166                  3  1 in 7,722
SSL.com                        171,816                  1  1 in 171,816
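The "1 in N" compromise rates can be reproduced with integer division, which matches the rounding-down used in the table. A small sketch, using figures from the table:

```python
def compromise_rate(volume: int, compromised: int) -> str:
    """Express a compromise rate as a '1 in N' string, rounding down."""
    return f"1 in {volume // compromised:,}"

# Sectigo: 170 compromised keys out of 88,323,068 unexpired precertificates
print(compromise_rate(88_323_068, 170))  # → 1 in 519,547
```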
If we now sort this table by compromise rate, we can see which organisations have the most (and least) leakiness going on from their customers:
Issuer                 Issuance Volume  Compromised Count  Compromise Rate
Entrust                         23,166                  3  1 in 7,722
GlobalSign                   1,438,485                 46  1 in 31,271
SSL.com                        171,816                  1  1 in 171,816
GoDaddy                     56,121,429                141  1 in 398,024
Sectigo                     88,323,068                170  1 in 519,547
DigiCert                   144,713,475                 81  1 in 1,786,586
ISRG (Let's Encrypt)       315,476,402                161  1 in 1,959,480
By grouping by order-of-magnitude in the compromise rate, we can identify three "bands":
The Super Leakers: Customers of Entrust and GlobalSign seem to love to lose control of their private keys.
For Entrust, at least, though, the small volumes involved make the numbers somewhat untrustworthy.
The three compromised certificates could very well belong to just one customer, for instance.
I'm not aware of anything that GlobalSign does that would make them such an outlier, either, so I'm inclined to think they just got unlucky with one or two customers, but as CAs don't include customer IDs in the certificates they issue, it's not possible to say whether that's the actual cause or not.
The Regular Leakers: Customers of SSL.com, GoDaddy, and Sectigo all have compromise rates in the 1-in-hundreds-of-thousands range.
Again, the low volumes of SSL.com make the numbers somewhat unreliable, but the other two organisations in this group have large enough numbers that we can rely on that data fairly well, I think.
The Low Leakers: Customers of DigiCert and Let's Encrypt are at least three times less likely than customers of the regular leakers to lose control of their private keys.
Good for them!
Now we have some useful insights we can think about.
Why Is It So?
All of the organisations on the list, with the exception of Let's Encrypt, are what one might term "traditional" CAs.
To a first approximation, it s reasonable to assume that the vast majority of the customers of these traditional CAs probably manage their certificates the same way they have for the past two decades or more.
That is, they generate a key and CSR, upload the CSR to the CA to get a certificate, then copy the cert and key somewhere.
Since humans are handling the keys, there's a higher risk of them either using risky practices or making a mistake, and so exposing the private key to the world.
Let's Encrypt, on the other hand, issues all of its certificates using the ACME (Automatic Certificate Management Environment) protocol, and all of the Let's Encrypt documentation encourages the use of software tools to generate keys, issue certificates, and install them for use.
Given that Let's Encrypt has 161 compromised certificates currently in the wild, it's clear that the automation in use is far from perfect, but the significantly lower compromise rate suggests to me that lifecycle automation at least reduces the rate of key compromise, even though it doesn't eliminate it completely.
Sidebar: ACME Does Not Currently Rule The World
It is true that all of the organisations in this analysis also provide ACME issuance workflows, should customers desire it.
However, the traditional CA companies have been around a lot longer than ACME has, and so they probably acquired many of their customers before ACME existed.
Given that it's incredibly hard to get humans to change the way they do things, once they have a way that "works", it seems reasonable to assume that most of the certificates issued by these CAs are handled in the same human-centric, error-prone manner they always have been.
If organisations would like to refute this assumption, though, by sharing their data on ACME vs legacy issuance rates, I'm sure we'd all be extremely interested.
Explaining the Outlier
The difference in presumed issuance practices would seem to explain the significant difference in compromise rates between Let's Encrypt and the other organisations, if it weren't for one outlier.
This is a largely traditional CA, with the manual-handling issues that implies, but with a compromise rate close to that of Let s Encrypt.
We are, of course, talking about DigiCert.
The thing about DigiCert that doesn't show up in the raw numbers from crt.sh is that DigiCert manages the issuance of certificates for several of the biggest hosted TLS providers, such as CloudFlare and AWS.
When these services obtain a certificate from DigiCert on their customer's behalf, the private key is kept locked away, and no human can (we hope) get access to the private key.
This is supported by the fact that no certificates identifiably issued to either CloudFlare or AWS appear in the set of certificates with compromised keys.
When we ask for "all certificates issued by DigiCert", we get both the certificates issued to these big providers, which are very good at keeping their keys under control, and the certificates issued to everyone else, whose key handling practices may not be quite so stringent.
It's possible, though not trivial, to account for certificates issued to these hosted TLS providers, because the certificates they use are issued from intermediates branded to those companies.
With the crt.sh psql interface we can run this query to get the total number of unexpired precertificates issued to these managed services:
SELECT SUM(sub.NUM_ISSUED[2] - sub.NUM_EXPIRED[2])
FROM (
SELECT ca.name, max(coalesce(coalesce(nullif(trim(cc.SUBORDINATE_CA_OWNER), ''), nullif(trim(cc.CA_OWNER), '')), cc.INCLUDED_CERTIFICATE_OWNER)) as OWNER,
ca.NUM_ISSUED, ca.NUM_EXPIRED
FROM ccadb_certificate cc, ca_certificate cac, ca
WHERE cc.CERTIFICATE_ID = cac.CERTIFICATE_ID
AND cac.CA_ID = ca.ID
GROUP BY ca.ID
) sub
-- parenthesised so the owner filter applies to both name matches
WHERE (sub.name ILIKE '%Amazon%' OR sub.name ILIKE '%CloudFlare%') AND sub.owner = 'DigiCert';
The number I get from running that query is 104,316,112, which should be subtracted from DigiCert's total issuance figures to get a more accurate view of what DigiCert's regular customers do with their private keys.
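As a quick arithmetic check on that adjustment, using the figures quoted above:

```python
all_digicert = 144_713_475  # unexpired DigiCert precertificates (crt.sh)
managed      = 104_316_112  # issued via CloudFlare/AWS-branded intermediates
compromised  = 81

# Subtract the managed-service volume to get "regular" DigiCert issuance,
# then recompute the compromise rate (rounding down, as in the tables).
regular = all_digicert - managed
print(f"{regular:,}")                      # → 40,397,363
print(f"1 in {regular // compromised:,}")  # → 1 in 498,732
```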
When I do this, the compromise rates table, sorted by the compromise rate, looks like this:
Issuer                 Issuance Volume  Compromised Count  Compromise Rate
Entrust                         23,166                  3  1 in 7,722
GlobalSign                   1,438,485                 46  1 in 31,271
SSL.com                        171,816                  1  1 in 171,816
GoDaddy                     56,121,429                141  1 in 398,024
"Regular" DigiCert          40,397,363                 81  1 in 498,732
Sectigo                     88,323,068                170  1 in 519,547
All DigiCert               144,713,475                 81  1 in 1,786,586
ISRG (Let's Encrypt)       315,476,402                161  1 in 1,959,480
In short, it appears that DigiCert's "regular" customers are just as likely as GoDaddy or Sectigo customers to expose their private keys.
What Does It All Mean?
The takeaway from all this is fairly straightforward, and not overly surprising, I believe.
The less humans have to do with certificate issuance, the less likely they are to compromise that certificate by exposing the private key.
While it may not be surprising, it is nice to have some empirical evidence to back up the common wisdom.
Fully-managed TLS providers, such as CloudFlare, AWS Certificate Manager, and whatever Azure's thing is called, are the platonic ideal of this principle: never give humans any opportunity to expose a private key.
I'm not saying you should use one of these providers, but the security approach they have adopted appears to be the optimal one, and should be emulated universally.
The ACME protocol is the next best, in that there are a variety of standardised tools widely available that allow humans to take themselves out of the loop, but it's still possible for humans to handle (and mistakenly expose) key material if they try hard enough.
Legacy issuance methods, which either cannot be automated, or require custom, per-provider automation to be developed, appear to be at least four times less helpful to the goal of avoiding compromise of the private key associated with a certificate.
Humans Are, Of Course, The Problem
This observation, that if you don't let humans near keys, they don't get leaked, is further supported by considering the biggest issuers by volume who have not issued any certificates whose keys have been compromised: Google Trust Services (fourth largest issuer overall, with 57,084,529 unexpired precertificates), and Microsoft Corporation (sixth largest issuer overall, with 22,852,468 unexpired precertificates).
It appears that somewhere between "most" and "basically all" of the certificates these organisations issue are to customers of their public clouds, and my understanding is that the keys for these certificates are managed in the same manner as at CloudFlare and AWS: the keys are locked away where humans can't get to them.
It should, of course, go without saying that if a human can never have access to a private key, it makes it rather difficult for a human to expose it.
More broadly, if you are building something that handles sensitive or secret data, the more you can do to keep humans out of the loop, the better everything will be.
Your Support is Appreciated
If you'd like to see more analysis of how key compromise happens, and the lessons we can learn from examining billions of certificates, please show your support by buying me a refreshing beverage.
Trawling CT logs is thirsty work.
Appendix: Methodology Limitations
In the interests of clarity, I feel it's important to describe ways in which my research might be flawed.
Here are the things I know of that may have impacted the accuracy, that I couldn t feasibly account for.
Time Periods: Because time never stops, there are likely to be slight mismatches in the numbers obtained from the various data sources, because they weren't collected at exactly the same moment.
Issuer-to-Organisation Mapping: It's possible that the way I mapped issuers to organisations doesn't match exactly with how crt.sh does it, meaning that counts might be skewed.
I tried to minimise that by using the same data sources (the CCADB AllCertificates report) that I believe that crt.sh uses for its mapping, but I cannot be certain of a perfect match.
Unwarranted Grouping: I've drawn some conclusions about the practices of the various organisations based on their general approach to certificate issuance.
If a particular subordinate CA that I've grouped into the parent organisation is managed in some unusual way, that might cause my conclusions to be erroneous.
I was able to fairly easily separate out CloudFlare, AWS, and Azure, but there are almost certainly others that I didn't spot, because, hoo boy, there are a lot of intermediate CAs out there.
I have been a bearded subject since I was 18, back in 1994. Yes,
during 1999-2000 I shaved for my military service, and I briefly tried the
goatee look in 2008. Few people nowadays can imagine my face without
a forest of hair.
But sometimes, life happens. And, unlike my good friend
Bdale, I didn't get Linus to do the
honors. But, all in all, here I am:
Turns out, I have been suffering from quite bad skin infections for a
couple of years already. Last Friday, I checked in to the hospital
with an ugly, swollen face (I won't put you through that), and the
hospital staff decided it was in my best interests to trim my
beard. And then some more. And then shave me. I sat in the hospital
for four days, getting soaked (medical term) with antibiotics and
other stuff, got my prescriptions for the next few days, and, well, I
really hope that's the end of the infections. We shall see!
So, this is the result of the loving and caring work of three
different nurses. Yes, not clean-shaven (I should not trim it
further, as shaving blades are a risk of reinfection).
Anyway, I guess the bits of hair you see all over the place will not
take too long to become a beard again, and even get somewhat
respectable. But I thought some of you would like to see the real me.
PS- Thanks to all who have reached out with good wishes. All is fine!
The goal behind reproducible builds is to ensure that no deliberate flaws have been introduced during compilation processes, via promising or mandating that identical results are always generated from a given source. This allows multiple third parties to come to an agreement on whether a build was compromised or not via a system of distributed consensus.
In these reports we outline the most important things that have been happening in the world of reproducible builds in the past month:
First mentioned in our March 2021 report, Martin Heinz published two blog posts on sigstore, a project that endeavours to offer software signing as "a public good, [the] software-signing equivalent to Let's Encrypt". The first post, entitled "Sigstore: A Solution to Software Supply Chain Security", outlines more about the project and justifies its existence:
Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult, especially for non-security folks, and the UX of existing tools such as PGP leaves much to be desired. That's why we need something like sigstore - an easy-to-use software/toolset for signing software artifacts.
Some time ago I checked Signal's reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn't match either.
BitcoinBinary.org was announced this month, which aims to be a "repository of Reproducible Build Proofs for Bitcoin Projects":
Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people can tell everyone they were able to reproduce the project's build, others at least have a secondary source of validation.
Related to this, there was continuing discussion on how to embed/encode the build metadata for the Debian live images which were being worked on by Roland Clobus.
Ariadne Conill published another detailed blog post related to various security initiatives within the Alpine Linux distribution. After summarising some conventional security work being done (e.g. with sudo and the release of OpenSSH version 3.0), Ariadne included another section on reproducible builds: "The main blocker [was] determining what to do about storing the build metadata so that a build environment can be recreated precisely".
Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report.
Community news
On our website this month, Bernhard M. Wiedemann fixed some broken links [] and Holger Levsen made a number of changes to the Who is Involved? page [][][]. On our mailing list, Magnus Ihse Bursie started a thread with the subject Reproducible builds on Java, which begins as follows:
I'm working for Oracle in the Build Group for OpenJDK, which is primarily responsible for creating a built artifact of the OpenJDK source code. [ ] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself reproducible. We've come far, but there are still issues I'd like to address. []
diffoscope
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 183, 184 and 185, as well as performing significant triaging of merge requests and other issues, in addition to making the following changes:
New features:
Support a newer format version of the R language's .rds files. []
Don't call close_archive when garbage collecting Archive instances, unless open_archive definitely returned successfully. This prevents, for example, an AttributeError where PGPContainer's cleanup routines were rightfully assuming that its temporary directory had actually been created. []
Fix (and test) the comparison of the R language's .rdb files after refactoring temporary directory handling. []
Ensure that RPM archives exists in the Debian package description, regardless of whether python3-rpm is installed or not at build time. []
Codebase improvements:
Use our assert_diff routine in tests/comparators/test_rdata.py. []
Move diffoscope.versions to diffoscope.tests.utils.versions. []
Reformat a number of modules with Black. [][]
However, the following changes were also made:
Mattia Rizzolo:
Fix an autopkgtest caused by the androguard module not being in the (expected) python3-androguard Debian package. []
Appease a shellcheck warning in debian/tests/control.sh. []
Ignore a warning from h5py in our tests that doesn't concern us. []
Drop a trailing .1 from the Standards-Version field as it's not required. []
Zbigniew Jędrzejewski-Szmek:
Stop using the deprecated distutils.spawn.find_executable utility. [][][][][]
Adjust an LLVM-related test for LLVM version 13. []
Update invocations of llvm-objdump. []
Adjust a test with a one-byte text file for file version 5.40. []
And, finally, Benjamin Peterson added a --diff-context option to control unified diff context size [] and Jean-Romain Garnier fixed the Macho comparator for architectures other than x86-64 [].
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
#990246 filed against vlc (forwarded upstream [][])
Testing framework
The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
Holger Levsen:
Drop my package rebuilder prototype as it's not useful anymore. []
Schedule old packages in Debian bookworm. []
Stop scheduling packages for Debian buster. [][]
Don t include PostgreSQL debug output in package lists. []
Detect Python library mismatches during build in the node health check. []
Update a note on updating the FreeBSD system. []
Mattia Rizzolo:
Silence a warning from Git. []
Update a setting to reflect that Debian bookworm is the new testing. []
Upgrade the PostgreSQL database to version 13. []
Roland Clobus (Debian live image generation):
Workaround non-reproducible config files in the libxml-sax-perl package. []
Use the new DNS for the snapshot service. []
Vagrant Cascadian:
Note that the armhf architecture also systematically varies by the kernel. []
Contributing
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
I've been watching the show Riverdale on Netflix recently. It's an interesting modern take on the Archie comics. Having watched Josie and the Pussycats in Outer Space when I was younger, I was anticipating something aimed towards a similar audience. As solving mysteries and crimes was apparently a major theme of the show, I anticipated something along similar lines to Scooby Doo: some suspense and some spooky things, but then a happy ending where criminals get arrested and no-one gets hurt or killed, while the vast majority of people are nice. Instead the first episode has a teen being murdered and Ms Grundy being obsessed with 15yo boys and sleeping with Archie (who's supposed to be 15 but played by a 20yo actor).
Everyone in the show has some dark secret. The filming has a dark theme; the sky is usually overcast and it's generally gloomy. This is a significant contrast to Veronica Mars, which has some similarities in having a young cast, a sassy female sleuth, and some similar plot elements. Veronica Mars has a bright theme and a significant comedy element in spite of dealing with some dark issues (murder, rape, child sex abuse, and more). But Riverdale is just dark. Anyone who watches this with their kids expecting something like Scooby Doo is in for a big surprise.
There are lots of interesting stylistic elements in the show. Lots of clothing and uniform designs seem to date from the 1940s. It seems like some alternate universe where kids have smartphones and laptops while dressing in the style of the 1940s. One thing that annoyed me was construction workers using tools like sledge-hammers instead of excavators. A society that has smartphones but no earth-moving equipment isn't plausible.
On the upside, there is a racial mix in the show that more accurately reflects American society than the original Archie comics, and homophobia is much less common than in most parts of our society. The show treats both race issues and gay/lesbian issues in an accurate way (portraying some bigotry) while the main characters aren't racist or homophobic.
I think it's generally an OK show and recommend it to people who want a dark show. It's a good show to watch while doing something on a laptop, so you can check Wikipedia for the references to 1940s stuff (like when bikinis were invented). I'm halfway through season 3, which isn't as good as the first two; I don't know if it will get better later in the season or whether I should have stopped after season 2.
I don't usually review fiction, but the interesting aesthetics of the show made it deserve a review.
I joined the
Debian project in late 1994, well before the first
stable release was issued, and have been involved in various ways continuously
ever since. Over the years, I adopted a number of packages that are, or at
least were at one time, fundamental to the distribution.
But, not surprisingly, my interests have shifted over time. In the more than
quarter century I've contributed to Debian, I've adopted existing packages
that needed attention, packaged new software I wanted to use that wasn't yet
in Debian, offered packages up for others to adopt, and even sometimes
requested the removal of packages that became obsolete or replaced by
something better. That all felt completely healthy.
But over the last couple weeks, I realized I'm still "responsible" for some
packages I'd had for a very long time, that generally work well but over time
have accumulated bugs in functionality I just don't use, and frankly haven't
been able to find the motivation to chase down. As one example, I just
noticed that I first uploaded the gzip package 25 years ago today, on
2 December 1995. And while the package works fine for me and most other
folks, there are 30 outstanding bugs and 3 forwarded bugs that I just can't
muster up any energy to address.
So, I just added gzip to a short list of packages I've offered up for
adoption recently. I'm pleased that tar already has a new maintainer, and
hope that both sudo and gzip will get more attention soon.
It's not that I'm less interested in Debian. I've just been busy recently
packaging up more software I use or want to use in designing high
power model
rockets and the solid propellant motors I fly in them, and would rather spend
the time I have available for Debian maintaining those packages and all their
various build dependencies than continuing to be responsible for core
packages in the distribution that "work fine for me" but could use attention.
I'm writing about this partly to mark the passing of more than a quarter
century as a package maintainer for Debian, partly to encourage other Debian
package maintainers with the right skills and motivation to consider adopting
some of the packages I'm giving up, and finally to encourage other long-time
participants in Debian to spend a little time evaluating their own package
lists in a similar way.
To ensure Dronefly always remains free, the Dronefly project has been relicensed under two copyleft licenses. Read the license change and learn more about copyleft at these links.
I was prompted to make this change after a recent incident in the Red DiscordBot development community that made me reconsider my prior position that the liberal MIT license was best for our project. While on the face of it, making your license as liberal as possible might seem like the most generous and hassle-free way to license any project, I was shocked into the realization that its liberality was also its fatal flaw: all is well and good so long as everyone is being cooperative, but it does not afford any protection to developers or users should things suddenly go sideways in how a project is run. A copyleft license is the best way to avoid such issues.
In this incident (a sad story of conflict between developers I respect on both sides of the rift, and owe a debt to for what they've taught me), three cogs we had come to depend on suddenly stopped being viable for us to use due to changes to the license & the code. Effectively, those cogs became unsupported and unsupportable. To avoid any such future disaster with the Dronefly project, I started shopping for a new license that would protect developers and users alike from similarly losing support, or losing control of their contributions. I owe thanks to Dale Floer, a team member who early on advised me the AGPL might be a better fit, and later was helpful in selecting the doc license and encouraging me to follow through. We ran the new licenses by each contributor and arrived at this consensus: the AGPL is best suited for our server-based code, and CC-BY-SA is best suited for our documentation. The relicensing was made official this morning.
On Discord platform alternatives
You might well question what I, a Debian developer steeped in free software culture and otherwise in agreement with its principles, am doing encouraging a community to grow on the proprietary Discord platform! I have no satisfying answer to that. I explained some of the backstory when I introduced my project here, but that's more of an account of its beginnings than justification for it to continue on this platform. Honestly, all I can offer is a rather dissatisfying "it seemed like the right thing to do at the time".
Time will tell whether we could successfully move off of it to a freedom-respecting and privacy-respecting alternative chat platform that is both socially and technically viable to migrate to. That platform would ideally:
not be under the control of a single, central commercial entity running proprietary code, so their privacy is safeguarded, and they are protected from disaster, should it become unattractive to remain on the platform;
have a vibrant and supportive open source third party extension development community;
support our image-rich content and effortless sharing of URLs with previews automatically provided from the page's content (e.g. via OpenGraph tags);
be effortless to join regardless of what platform/device each user uses;
keep a history of messages so that future members joining the community can benefit from past conversations, and existing members can catch up on conversations they missed;
but above all else: be acceptable and compelling to the existing community to move over onto it.
I'm intrigued by Matrix and wonder if it provides some or all of the above in its current form. Are you a developer writing bots for this platform? If so, I especially want to hear from you in the comments about your experience. Or in any case, if you've been there before, if you've faced the same issue with your community and have a success story to share, I would love to hear from you.
DebConf4
This t-shirt is 16 years old and from DebConf4. Again, I should probably wash it at 60 Celsius for once...
DebConf4 was my 2nd DebConf and took place in Porto Alegre, Brazil.
Like many DebConfs, it was a great opportunity to meet people:
I remember sitting in the lobby of the venue and some guy asked me what I did in Debian and I told
him about my little involvements and then asked him what he was doing, and he told me he wanted to
become involved in Debian again, after getting distracted away. His name was Ian Murdock...
DebConf4 also had a very cool history session in the hallway track (IIRC, but see below) with
Bdale Garbee, Ian Jackson and Ian Murdock and with a young student named Biella Coleman busy writing notes.
That same hallway also saw the kickoff meeting of the Debian Women project, though sadly
today http://tinc.debian.net ("there's no cabal") only shows an apache placeholder page and not
a picture of that meeting.
DebConf4 was also the first time I got a bit involved in preparing DebConf: together with
Jonas Smedegaard I set up some computers there, using FAI. I had no idea that this was
the start of me contributing to DebConfs for the next ten years.
And of course I also saw some talks, including one which I really liked, which then in turn made
me notice there were no people doing video recordings, which then lead to something...
I missed the group picture of this one. I guess it's important to me to mention it because
I've met very wonderful people at this DebConf... (some mentioned in this post, some not. You know
who you are!)
Afterwards some people stayed in Porto Alegre for FISL,
where we saw Lawrence Lessig present Creative Commons to the world for the first time. On the flight
back I sat next to a very friendly guy from Poland and we talked almost the whole flight and then
we never saw each other again, until 15 years later in Asia...
Oh, and then, after DebConf4, I used IRC for the first time. And stayed in the #debconf4 IRC channel
for quite some years.
Finally, DebConf4 and, more importantly, FISL, which was really big (5000 people?), and after that
the Wizards of OS conference in Berlin
(which had a very nice talk about Linux in different places in the world, illustrating the
different stages of 'first they ignore you, then they laugh at you, then they fight you, then you win'),
made me quit my job at a company supporting Windows and Linux setups, as I realized I'd better
start freelancing with Linux-only jobs. So, once again, my life would have been different
if I had not attended these events!
Note: yesterday's post about DebConf3 was thankfully corrected twice. This might well happen
to this post too!
As nationwide protests over the deaths of George Floyd and Breonna Taylor are met with police brutality, John Oliver discusses how the histories of policing ...
The death of Stefano Cucchi occurred in Rome on 22 October 2009 while the young man was being held in pre-trial custody. The causes of his death, and the responsibility for it, are the subject of judicial proceedings that have involved, on one side, the doctors of the Pertini hospital,[1][2][3][4] and on the other continue to involve, in various capacities, several officers of the Arma dei Carabinieri[5][6]. The case drew the attention of public opinion following the publication of the autopsy photographs, later picked up by Italian press agencies, newspapers, and television news[7]. The affair has also inspired documentaries and feature films.[8][9][10]
The death of Giuseppe Uva occurred on 14 June 2008 after, in the night between 13 and 14 June, he had been stopped while drunk by two Carabinieri, who took him to the barracks. From there he was transferred, for compulsory medical treatment, to the hospital in Varese, where he died the following morning of cardiac arrest. According to the prosecution, his death was caused by the physical restraint suffered during the arrest and by the subsequent violence and torture he endured in the barracks. The trial of the two Carabinieri who carried out the arrest and of six other police officers ended with the defendants acquitted of the charges of unintended homicide and kidnapping[1][2][3][4]. The documentary Viva la sposa by Ascanio Celestini is dedicated to the affair[1][5].
The Aldrovandi case is the judicial affair arising from the killing of Federico Aldrovandi, a student from Ferrara, on 25 September 2005 following a police check.[1][2][3] On 6 July 2009 the courts sentenced four police officers to 3 years and 6 months of imprisonment for "negligent excess in the legitimate use of weapons";[1][4] on 21 June 2012 the Court of Cassation upheld the sentence.[1] The inquiry into the causes of his death was followed by others into alleged cover-ups and into the complaints filed between the parties involved.[1] The case received a great deal of media attention and inspired a documentary, È stato morto un ragazzo.[1][5]
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In October, about 197 work hours have been dispatched among 13 paid contributors. Their reports are available:
Antoine Beaupré did 21h (out of 16h allocated + 8.75h remaining, thus keeping 3.75h for November).
Ben Hutchings did 20 hours (out of 15h allocated + 9 extra hours, thus keeping 4 extra hours for November).
Lucas Kanashiro did 2 hours (out of 5h allocated, thus keeping 3 hours for November).
Markus Koschany did 19 hours (out of 20.75h allocated, thus keeping 1.75 extra hours for November).
Ola Lundqvist did 7.5h (out of 7h allocated + 0.5 extra hours).
Raphaël Hertzog did 13.5 hours (out of 12h allocated + 1.5 extra hours).
Roberto C. Sanchez did 11 hours (out of 20.75 hours allocated + 14.75 hours remaining, thus keeping 24.50 extra hours for November, he will give back remaining hours at the end of the month).
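The carry-over arithmetic in the lines above is simple: hours kept for next month = hours allocated + hours carried over - hours actually worked. A minimal sketch (my own illustration, not part of the reporting tooling), using the figures from the Antoine Beaupré entry above:

```shell
#!/bin/sh
# Carry-over bookkeeping used in these reports:
#   kept for next month = allocated + carried over - actually worked
# The figures reproduce the first entry above (21h done out of
# 16h allocated + 8.75h remaining).
allocated=16
carried=8.75
worked=21
echo "$allocated $carried $worked" \
  | awk '{ printf "kept for next month: %.2fh\n", $1 + $2 - $3 }'
# prints: kept for next month: 3.75h
```

The same formula checks out against the other entries; a negative result would mean hours were given back to the pool.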
Evolution of the situation
The number of sponsored hours increased slightly to 183 hours per month. With the increasing number of security issues to deal with, and with the number of open issues not really going down, I decided to bump the funding target to what amounts to 1.5 full-time positions.
The security tracker currently lists 50 packages with a known CVE and the dla-needed.txt file 36 (we're a bit behind in CVE triaging apparently).
Thanks to our sponsors
New sponsors are in bold.
The tenth (!!) annual R/Finance conference will take place in Chicago on the UIC campus on June 1 and 2, 2018. Please see the call for papers below (or at the website) and consider submitting a paper.
We are once again very excited about our conference, thrilled about who we hope may agree to be our anniversary keynotes, and hope that many R/Finance users will not only join us in Chicago in June but also submit an exciting proposal.
So read on below, and see you in Chicago in June!
Call for Papers
R/Finance 2018: Applied Finance with R
June 1 and 2, 2018
University of Illinois at Chicago, IL, USA
The tenth annual R/Finance conference for applied finance using R will be held June 1 and 2, 2018 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.
Over the past nine years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2018.
We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.
All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.
Please submit proposals online at http://go.uic.edu/rfinsubmit. Submissions will be reviewed and accepted on a rolling basis with a final submission deadline of February 2, 2018. Submitters will be notified via email by March 2, 2018 of acceptance, presentation length, and financial assistance (if requested).
Financial assistance for travel and accommodation may be available to presenters. Requests for financial assistance do not affect acceptance decisions. Requests should be made at the time of submission. Requests made after submission are much less likely to be fulfilled. Assistance will be granted at the discretion of the conference committee.
Additional details will be announced via the conference website at http://www.RinFinance.com/ as they become available. Information on previous years' presenters and their presentations is also available at the conference website. We will make a separate announcement when registration opens.
For the program committee:
Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson,
Dale Rosenthal, Jeffrey Ryan, Joshua Ulrich
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In August, about 170 work hours have been dispatched among 13 paid contributors. Their reports are available:
Antoine Beaupré did 7h (out of 15.75h allocated, thus keeping 8.75h for October).
Ben Hutchings did 12 hours (out of 15h allocated + 6 extra hours, thus keeping 9 extra hours for October).
Evolution of the situation
The number of sponsored hours is the same as last month. But we have a new sponsor in the pipe.
The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file 49. The number of packages with open issues decreased slightly compared to last month but we're not yet back to the usual situation.
Thanks to our sponsors
New sponsors are in bold.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In August, about 189 work hours have been dispatched among 12 paid contributors. Their reports are available:
Evolution of the situation
The number of sponsored hours is the same as last month.
The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file 60. The number of packages with open issues decreased slightly compared to last month but we're not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the increased usage of fuzzers.
Thanks to our sponsors
New sponsors are in bold.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In July, about 181 work hours have been dispatched among 11 paid contributors. Their reports are available:
Antoine Beaupré did 20h (out of 16h allocated + 4 extra hours).
Ben Hutchings did 14 hours (out of 15h allocated, thus keeping 1 extra hour for August).
Evolution of the situation
The number of sponsored hours increased slightly with two new sponsors: Leibniz Rechenzentrum (silver sponsor) and Catalyst IT Ltd (bronze sponsor).
The security tracker currently lists 74 packages with a known CVE and the dla-needed.txt file 64. The number of packages with open issues increased by almost 50% compared to last month. Hopefully this backlog will get cleared up when the unused hours are actually worked. In any case, this evolution is worth watching.
Thanks to our sponsors
New sponsors are in bold.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In May, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:
Antoine Beaupré did 12h (out of 16h allocated, thus keeping 4 extra hours for July).
Ben Hutchings did 20 hours (out of 15h allocated + 5 extra hours).
Evolution of the situation
The number of sponsored hours increased slightly with one new bronze sponsor and another silver sponsor is in the process of joining.
The security tracker currently lists 49 packages with a known CVE and the dla-needed.txt file 54. The number of open issues is close to last month.
Thanks to our sponsors
New sponsors are in bold.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In May, about 182 work hours have been dispatched among 11 paid contributors. Their reports are available:
Ben Hutchings did 13 hours (out of 15h allocated + 3 extra hours, thus keeping 5 extra hours for June).
Evolution of the situation
The number of sponsored hours did not change and we are thus still a little behind our objective.
The security tracker currently lists 44 packages with a known CVE and the dla-needed.txt file 42. The number of open issues is close to last month.
Thanks to our sponsors
New sponsors are in bold (none this month unfortunately).
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In April, about 190 work hours have been dispatched among 13 paid contributors. Their reports are available:
Antoine Beaupré did 19.5 hours (out of 16h allocated + 5.5 remaining hours, thus keeping 2 extra hours for May).
Ben Hutchings did 12 hours (out of 15h allocated, thus keeping 3 extra hours for May).
Evolution of the situation
The number of sponsored hours decreased slightly and we're now again a little behind our objective.
The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file 37. The number of open issues is comparable to last month.
Thanks to our sponsors
New sponsors are in bold.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In March, about 190 work hours have been dispatched among 14 paid contributors. Their reports are available:
Antoine Beaupré did 19 hours (out of 14.75h allocated + 10 remaining hours, thus keeping 5.75 extra hours for April).
Balint Reczey did nothing (out of 14.75 hours allocated + 2.5 hours remaining) and gave back all his unused hours. He took on a new job and will stop his work as LTS paid contributor.
Evolution of the situation
The number of sponsored hours has been unchanged but will likely decrease slightly next month as one sponsor will not renew his support (because they have switched to CentOS).
The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file 40. The number of open issues continued its slight increase; not worrisome yet, but we need to keep an eye on this situation.
Thanks to our sponsors
New sponsors are in bold.
Work has been hellishly busy lately, so that's pretty much all I've been
doing. The major project I'm working on should be basically done in the
next couple of weeks, though (fingers crossed), so maybe I'll be able to
surface a bit more after that.
In the meantime, I'm still acquiring books I don't have time to read,
since that's my life. In this case, two great Humble Book Bundles were
too good of a bargain to pass up. There are a bunch of books in here that
I already own in paperback (and hence showed up in previous haul posts),
but I'm running low on shelf room, so some of those paper copies may go to
the used bookstore to make more space.
Kelley Armstrong Lost Souls (sff)
Clive Barker Tortured Souls (horror)
Jim Butcher Working for Bigfoot (sff collection)
Octavia E. Butler Parable of the Sower (sff)
Octavia E. Butler Parable of the Talents (sff)
Octavia E. Butler Unexpected Stories (sff collection)
Octavia E. Butler Wild Seed (sff)
Jacqueline Carey One Hundred Ablutions (sff)
Richard Chizmar A Long December (sff collection)
Jo Clayton Skeen's Leap (sff)
Kate Elliott Jaran (sff)
Harlan Ellison Can & Can'tankerous (sff collection)
Diana Pharaoh Francis Path of Fate (sff)
Mira Grant Final Girls (sff)
Elizabeth Hand Black Light (sff)
Elizabeth Hand Saffron & Brimstone (sff collection)
Elizabeth Hand Wylding Hall (sff)
Kevin Hearne The Purloined Poodle (sff)
Nalo Hopkinson Skin Folk (sff)
Katherine Kurtz Camber of Culdi (sff)
Katherine Kurtz Lammas Night (sff)
Joe R. Lansdale Fender Lizards (mainstream)
Robert McCammon The Border (sff)
Robin McKinley Beauty (sff)
Robin McKinley The Hero and the Crown (sff)
Robin McKinley Sunshine (sff)
Tim Powers Down and Out in Purgatory (sff)
Cherie Priest Jacaranda (sff)
Alastair Reynolds Deep Navigation (sff collection)
Pamela Sargent The Shore of Women (sff)
John Scalzi Miniatures (sff collection)
Lewis Shiner Glimpses (sff)
Angie Thomas The Hate U Give (mainstream)
Catherynne M. Valente The Bread We Eat in Dreams (sff collection)
Connie Willis The Winds of Marble Arch (sff collection)
M.K. Wren Sword of the Lamb (sff)
M.K. Wren Shadow of the Swan (sff)
M.K. Wren House of the Wolf (sff)
Jane Yolen Sister Light, Sister Dark (sff)
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In January, about 154 work hours have been dispatched among 13 paid contributors. Their reports are available:
Antoine Beaupré did 3 hours (out of 13h allocated, thus keeping 10 extra hours for March).
Balint Reczey did 13 hours (out of 13 hours allocated + 1.25 hours remaining, thus keeping 1.25 hours for March).
Ben Hutchings did 19 hours (out of 13 hours allocated + 15.25 hours remaining, he gave back the remaining hours to the pool).
Evolution of the situation
The number of sponsored hours increased slightly thanks to Bearstech and LiHAS joining us.
The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file 39. The number of open issues continued its slight increase; this time it can be explained by the fact that many contributors did not spend all the hours allocated (for various reasons). There's nothing worrisome at this point.
Thanks to our sponsors
New sponsors are in bold.
A few days ago I ordered a small batch of
the ChaosKey, a small
USB dongle for generating entropy, created by Bdale Garbee and Keith
Packard. Yesterday it arrived, and I am very happy to report that it
works great! According to its designers, to get it to work out of the
box, you need Linux kernel version 4.1 or later. I tested it on a
Debian Stretch machine (kernel version 4.9), and there it worked just
fine, increasing the available entropy very quickly. I wrote a small
one-liner to test it. It first prints the current entropy level,
drains /dev/random, and then prints the entropy level once per second for five seconds.
Here is the situation without the ChaosKey inserted:
% cat /proc/sys/kernel/random/entropy_avail; \
dd bs=1M if=/dev/random of=/dev/null count=1; \
for n in $(seq 1 5); do \
cat /proc/sys/kernel/random/entropy_avail; \
sleep 1; \
done
300
0+1 records in
0+1 records out
28 bytes copied, 0.000264565 s, 106 kB/s
4
8
12
17
21
%
The entropy level increases by 3-4 every second. In such a case, any
application requiring random bits (like an HTTPS-enabled web server)
will halt and wait for more entropy. And here is the situation with
the ChaosKey inserted:
% cat /proc/sys/kernel/random/entropy_avail; \
dd bs=1M if=/dev/random of=/dev/null count=1; \
for n in $(seq 1 5); do \
cat /proc/sys/kernel/random/entropy_avail; \
sleep 1; \
done
1079
0+1 records in
0+1 records out
104 bytes copied, 0.000487647 s, 213 kB/s
433
1028
1031
1035
1038
%
Quite the difference. :) I bought a few more than I need, in case
someone wants to buy one here in Norway. :)
Update: The dongle was presented at DebConf last year. You might
find the talk
recording illuminating. It explains exactly what the source of
randomness is, if you are unable to spot it from the schematic
available from the ChaosKey web site linked at the start of this blog
post.
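If you want to script this kind of check instead of eyeballing it, here is a minimal sketch (my own, not from the ChaosKey project) that reports how much the kernel's entropy counter changes over a short interval. The ENTROPY_FILE and WAIT variables are knobs I introduced so the script can be pointed at a different file or interval; by default it reads the same kernel counter used above.

```shell
#!/bin/sh
# Report the change in available entropy over a short interval.
# ENTROPY_FILE and WAIT are overridable; the defaults match the
# kernel counter and the five-second window used in the post above.
ENTROPY_FILE=${ENTROPY_FILE:-/proc/sys/kernel/random/entropy_avail}
WAIT=${WAIT:-5}

start=$(cat "$ENTROPY_FILE")
sleep "$WAIT"
end=$(cat "$ENTROPY_FILE")

echo "entropy change over ${WAIT}s: $((end - start)) bits"
```

Without a hardware RNG the change should be a small positive number, as in the first run above; with a ChaosKey inserted the pool refills so quickly that the counter may already sit near its maximum.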