
22 July 2024

Martin-Éric Racine: dhcpcd replacing dhclient for Trixie... or perhaps networkd?

My work on overhauling dhcpcd as the prime replacement for ISC's discontinued DHCP client is done. The package has achieved stability, both upstream and at Debian. The only remaining points are bug #1038882 to swap the Priorities of isc-dhcp-client and dhcpcd-base in the repository's override, and swapping ifupdown's search order to put dhcpcd first. Meanwhile, ifupdown's de-facto maintainer prompted me to solicit opinions on which of the 4 ifupdown implementations should ship with a minimal installation for Trixie. This, in turn, re-opened the debate of what should be Debian's default network configuration framework (see the thread starting with this post).

networkd

Given how most Debian ports (except for Hurd) ship with systemd, which includes a disabled networkd by default, many people in the thread feel that this should become the default network configuration tool for minimal installations. As it happens, most of my hosts fit that scenario, so I figured that I would give another go at testing networkd on one host. I used the following minimalistic /etc/systemd/network/dhcp.network:
[Match]
Name=en* wl*
[Network]
DHCP=yes
This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update /etc/resolv.conf without installing resolvconf or systemd-resolved. However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):
  1. Temporary addresses are not enabled by default. Worse, the setting is ignored if it was enabled by sysctl during bootup. This is a major privacy issue. Adding IPv6PrivacyExtensions=yes to the above exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.
  2. Networkd uses EUI64 addresses by default. This is another major privacy issue, since EUI64 addresses are forensically traceable to the interface's MAC address. Worse, the setting is ignored if stable-privacy was enabled by sysctl during bootup. To top it all off, networkd does stable-privacy using systemd's time-proven brain-dead approach of reinventing the wheel: instead of merely setting the kernel's address generation mode to 3 and letting it configure the secret address, it expects the secret address to be spelled out in the systemd unit (see the sketch below).
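For illustration, the settings under discussion would look roughly like this in the .network file (a sketch based on the option names documented in systemd.network(5); the placeholder secret is obviously not a real value, and how these options interact with pre-existing sysctl settings is exactly what Bug #1076432 is about):
[Match]
Name=en* wl*
[Network]
DHCP=yes
# Enable RFC 4941 temporary addresses (not networkd's default).
IPv6PrivacyExtensions=yes
# Avoid EUI64-derived addresses; networkd expects the stable-privacy
# secret to be spelled out here, instead of setting the kernel's
# addr_gen_mode to 3 and letting the kernel generate the secret itself.
IPv6LinkLocalAddressGenerationMode=stable-privacy
IPv6StableSecretAddress=<secret-ipv6-address>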
Conclusion: networkd works well enough for someone configuring an IPv4-only network from 20 years ago, but is utterly inadequate for IPv6 or dual-stack installations, doubly so on a software distribution that claims to care about privacy and network security.

21 July 2024

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Issues extending Polis and adjusting our Goals

Here comes the 3rd article of the 5-episode blog post series on Polis, written by Guido Berhörster, a member of staff at my company Fre(i)e Software GmbH. Enjoy also this read on Guido's work on Polis,
Mike
Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals (this article)
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - Issues extending Polis and adjusting our Goals

After the initial implementation of limited branding support, user feedback and the involvement of a UX designer led to the conclusion that we needed more far-reaching changes to the user interface in order to reduce visual clutter, rearrange and improve UI elements, and provide better integration with the websites in which conversations are embedded.

Challenges when visualizing Data in Polis

Polis visualizes groups using a spatial projection of users based on similarities in voting behavior and places them in two to five groups using a clustering algorithm. During our testing and evaluation, users were rarely able to interpret the visualization and often intuitively made incorrect assumptions, e.g. by associating the filled area of a group with its significance or size. After consultation with a member of the Multi-Agent Systems (MAS) Group at the University of Groningen we chose to temporarily replace the visualization offered by Polis with simple bar charts representing agreement or disagreement with statements of a group or the majority. We intend to revisit this and explore different forms of visualization at a later point in time. The different factors playing into the weight attached to statements, which determine the pseudo-random order in which they are presented for voting ("comment routing"), proved difficult to explain to stakeholders and users, and the admission of the ad-hoc and heuristic nature of the used algorithm[1] by the Polis authors led to the decision to temporarily remove this feature. Instead, statements should be placed into three groups, namely
  1. metadata questions,
  2. seed statements,
  3. and participant statements
Statements should then be sorted by group, but in a fully randomized order within each group, so that metadata questions would be presented before seed statements, which would in turn be presented before participant statements (a toy sketch of this ordering follows below). This simpler method was deemed sufficient for the scale of our pilot projects; however, we intend to revisit this decision and explore different methods of comment routing in cooperation with our scientific partners at a later point in time. An evaluation of the requirements for implementing mandatory authentication and adding support for additional authentication methods to Polis showed that significant changes to both the administration and participation frontend were needed due to a lack of an abstraction layer or extension mechanism and the current authentication providers being hardcoded in many parts of the code base.

A New Frontend is born: Particiapp

Based on the implementation details of the participation frontend, the invasive nature of the changes required, and the overhead of keeping up with active upstream development it became clear that a different, more flexible approach to development was needed. This ultimately led to the creation of Particiapp, a new Open Source project providing the building blocks and necessary abstraction layers for rapid prototyping and experimentation with different frontends which are compatible with but independent from Polis.
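Returning to the statement ordering described above, here is a toy sketch in Python (illustrative only, not Particiapp code) of the scheme: statements are grouped by kind, fully shuffled within each group, and the groups are then emitted in a fixed order:
import random

GROUP_ORDER = ("metadata", "seed", "participant")

def order_statements(statements):
    """statements: iterable of (kind, text) pairs."""
    groups = {kind: [] for kind in GROUP_ORDER}
    for kind, text in statements:
        groups[kind].append(text)
    ordered = []
    for kind in GROUP_ORDER:
        random.shuffle(groups[kind])  # fully randomized within the group
        ordered.extend(groups[kind])
    return ordered

print(order_statements([
    ("participant", "Bike lanes should be expanded."),
    ("metadata", "Do you live in the city centre?"),
    ("seed", "The park should stay open at night."),
]))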
  1. Small, Christopher T., Bjorkegren, Michael, Erkkilä, Timo, Shaw, Lynette and Megill, Colin (2021). Polis: Scaling deliberation by mapping high dimensional opinion spaces. Recerca. Revista de Pensament i Anàlisi, 26(2), pp. 1-26.

Russell Coker: SE Linux Policy for Dell Management

The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I've pasted in the policy. It probably needs some changes for recent software; it was last tested on a PowerEdge T320 and prior to that was used on a PowerEdge R710, both of which are old hardware and use different management software to the recent hardware. One would hope that the recent software would be much better, but usually such hope is in vain. I deliberately haven't submitted this for inclusion in the reference policy because it's for proprietary software and also it permits many operations that we would prefer not to permit. The policy is after the break because it's larger than you want on a Planet feed. But first I'll give a few selected lines that are bad in a noteworthy way:
  1. sys_admin means the ability to break everything
  2. dac_override means break Unix permissions
  3. mknod means a daemon creates devices due to a lack of udev configuration
  4. sys_rawio means someone didn't feel like writing a device driver; maintaining a device driver for DKMS is hard and getting a driver accepted upstream requires writing quality code. In any case, this is a bad sign.
  5. self:lockdown is being phased out, but used to mean bypassing some integrity protections, which would usually be related to sys_rawio or similar.
  6. dev_rx_raw_memory is bad: reading raw memory allows access to pretty much everything, and execute of raw memory is something I can't imagine a good use for; the Reference Policy doesn't use this anywhere!
  7. dev_rw_generic_chr_files usually means a lack of udev configuration as udev should do that.
  8. storage_raw_write_fixed_disk shouldn't be needed for this sort of thing; it doesn't do anything that involves managing partitions.
Now, without network access or other obvious means of remote control, this level of access, while excessive, isn't necessarily going to allow bad things to happen due to outside attack. But if there are bugs in the software, there's nothing to stop it from giving the worst results.
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.
Here is dellsrvadmin.te:
policy_module(dellsrvadmin,1.0.0)
require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}
type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)
type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_t)
type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)
domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)
modutils_domtrans(dellsrvadmin_t)
allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;
allow dell_datamgrd_t event_device_t:chr_file { read write };
allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;
modutils_domtrans(dell_datamgrd_t)
can_exec(dell_datamgrd_t, dmidecode_exec_t)
allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;
kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)
kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)
# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)
corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
files_search_tmp(dell_datamgrd_t)
files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)
files_read_usr_files(dell_datamgrd_t)
logging_search_logs(dell_datamgrd_t)
miscfiles_read_localization(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
can_exec(mon_local_test_t, dellsrvadmin_exec_t)
allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };
allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;
allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;
kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)
kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)
corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)
corecmd_shell_entry_type(dellsrvadmin_t)
files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)
logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)
Here is dellsrvadmin.fc:
/opt/dell/srvadmin/sbin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd        --        gen_context(system_u:object_r:dell_datamgrd_t,s0)
/opt/dell/srvadmin/bin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?                        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
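For anyone wanting to try this policy: with the selinux-policy-dev package installed, a module like this is typically compiled and loaded along the following lines (a generic sketch, not from the original post; adjust paths for your system):
make -f /usr/share/selinux/devel/Makefile dellsrvadmin.pp
semodule -i dellsrvadmin.pp
restorecon -R /opt/dell/srvadmin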

17 July 2024

Dirk Eddelbuettel: Rcpp 1.0.13 on CRAN: Some Updates

The Rcpp Core Team is once again pleased to announce a new release (now at 1.0.13) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds for different Linux distributions, and of course r2u should catch up tomorrow too. The release was uploaded last week, but not only does Rcpp always get flagged because of the grandfathered .Call(symbol), but CRAN also found two packages regressing, which then meant it took five days for them to get back to us. One issue was known; another did not reproduce under our tests against over 2800 reverse dependencies, leading to the eventual release today. Yay. Checks are good and appreciated, and it does take time for humans to review them. This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo as well as the r-universe page and repo, and strongly encourage their use and testing; I run my systems with these versions, which tend to work just as well, and are also fully tested against all reverse-dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2867 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 256 in BioConductor. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 59.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 86.3 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1848 (JSS, 2011) and 324 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 641. This release is incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R leads to a few changes; Kevin did most of the PRs for this. Andrew Johnson also provided a very nice PR to update internals taking advantage of variadic templates. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.13 (2024-07-11)
  • Changes in Rcpp API:
    • Set R_NO_REMAP if not already defined (Dirk in #1296)
    • Add variadic templates to be used instead of generated code (Andrew Johnson in #1303)
    • Count variables were switched to size_t to avoid warnings about conversion-narrowing (Dirk in #1307)
    • Rcpp now avoids the usage of the (non-API) DATAPTR function when accessing the contents of Rcpp Vector objects where possible. (Kevin in #1310)
    • Rcpp now emits an R warning on out-of-bounds Vector accesses. This may become an error in a future Rcpp release. (Kevin in #1310; see the sketch after this list)
    • Switch VECTOR_PTR and STRING_PTR to new API-compliant RO variants (Kevin in #1317 fixing #1316)
  • Changes in Rcpp Deployment:
    • Small updates to the CI test containers have been made (#1304)
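As a small illustration of the new out-of-bounds behaviour noted above (my own sketch based on the changelog entry, not an official example; the function name is made up), an unchecked read past the end of a vector should now draw an R warning when called from R, e.g. via Rcpp::sourceCpp():
#include <Rcpp.h>

// [[Rcpp::export]]
double read_past_end(Rcpp::NumericVector x) {
  // One element past the end: with Rcpp >= 1.0.13 this should warn,
  // and may become an error in a future Rcpp release.
  return x[x.size()];
}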

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues). If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gunnar Wolf: Scholarly spam Wulfenia

I just got one of those utterly funny spam messages. And yes, I recognize everybody likes building a name for themselves. But some spammers are downright silly. I just got the following mail:
From: Hermine Wolf <hwolf850@gmail.com>
To: me, obviously  
Date: Mon, 15 Jul 2024 22:18:58 -0700
Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>
This message has visual elements included. If they don't display, please   
update your email preferences.
*Dear Esteemed Author,*
Upon careful examination of your recent research articles available online,
we are excited to invite you to submit your latest work to our esteemed    
journal, '*WULFENIA*'. Renowned for upholding high standards of excellence 
in peer-reviewed academic research spanning various fields, our journal is 
committed to promoting innovative ideas and driving advancements in        
theoretical and applied sciences, engineering, natural sciences, and social
sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of 
*1.000* and is highly respected in prestigious databases including the     
Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,   
Elsevier BIOBASE, and BIOSIS Previews.                                     
                                                                           
*Wulfenia submission page:*                                                
[image: research--check.png][image: scrutiny-table-chat.png][image:        
exchange-check.png][image: interaction.png]                                
.                                                                          
Please don't reply to this email                                           
                                                                           
We sincerely value your consideration of 'WULFENIA' as a platform to       
present your scholarly work. We eagerly anticipate receiving your valuable 
contributions.                                                             
*Best regards,*                                                            
Professor Dr. Vienna S. Franz                                              
Scholarly spam

Who cares what Wulfenia is about? It's about you, my stupid Wolf cousin!

13 July 2024

Ravi Dwivedi: Yellow Fever Vaccine

Recently, I got vaccinated with the yellow fever vaccine, as I am planning to travel to Kenya, a high-risk country for yellow fever, in the near future. The vaccine takes 10 days to produce the required antibodies, so it should be taken at least 10 days before the date of departure to the at-risk country. In order to get vaccinated, I searched for vaccination centers in Delhi for yellow fever. I found this page by the Indian government which lists vaccination centers for yellow fever all over India. From that list, I made a phone call to the Airport Health Organization, a vaccination center near the Delhi Airport. They asked me to write an email stating that I need yellow fever vaccination. After sending the email, they requested a scanned copy of my passport. Subsequently, they emailed me my appointment date, asking me to pay 300 INR in advance along with other instructions. You have to reach the vaccination center at any time between 10 AM and 12 noon. I got there at around 11 AM on my appointment date and got vaccinated in around 40 minutes, followed by obtaining a vaccine certificate in half an hour. One dose of this vaccine gives lifelong immunity against yellow fever, so I can travel to any country at risk of yellow fever, although some countries may require proof of vaccination within some time frame and some people might need a booster dose to maintain immunity.

12 July 2024

Reproducible Builds: Reproducible Builds in June 2024

Welcome to the June 2024 report from the Reproducible Builds project! In our reports, we outline what we've been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Next Reproducible Builds Summit dates announced
  2. GNU Guix patch review session for reproducibility
  3. New reproducibility-related academic papers
  4. Misc development news
  5. Website updates
  6. Reproducibility testing framework


Next Reproducible Builds Summit dates announced

We are very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 17th to 19th 2024 in Hamburg, Germany. We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.

GNU Guix patch review session for reproducibility

Vagrant Cascadian will be holding a Reproducible Builds session as part of the monthly Guix patch review series on July 11th at 17:00 UTC. These online events are intended to encourage everyone to become a patch reviewer; the goal of reviewing patches is to help the Guix project accept contributions while maintaining quality standards, and to learn how to do patch reviews together in a friendly hacking session.

Development news

In Debian this month, 4 reviews of Debian packages were added, 11 were updated and 14 were removed, adding to our knowledge about identified issues. Only one issue type was updated, though, explaining that we don't vary the build path anymore.
On our mailing list this month, Bernhard M. Wiedemann wrote that whilst he had previously collected issues that introduce non-determinism, he has now moved on to discussing "mitigations", in the sense of "how can we avoid whole categories of problem without patching an infinite number of individual packages". In addition, Janneke Nieuwenhuizen announced the release of two versions of GNU Mes.
In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.
In NixOS, with the 24.05 release out, it was again validated that our minimal ISO is reproducible by building it on a virtual machine with no access to the binary cache.
What's more, we continued to write patches in order to fix specific reproducibility issues, including Bernhard M. Wiedemann writing three patches (for qutebrowser, samba and systemd), Chris Lamb filing Debian bug #1074214 against the fastfetch package and Arnout Engelen proposing fixes to refind and for the Scala compiler.
Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded two versions (270 and 271) to Debian, and made the following changes as well:
  • Drop Build-Depends on liblz4-tool in order to fix Debian bug #1072575.
  • Update tests to support zipdetails version 4.004 that is shipped with Perl 5.40.

Website updates

There were a number of improvements made to our website this month, including Akihiro Suda very helpfully making the <h4> elements more distinguishable from the <h3> level, as well as adding a guide for Dockerfile reproducibility. In addition, Fay Stegerman added two tools, apksigcopier and reproducible-apk-tools, to our Tools page.

Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:
  • Marking the virt(32|64)c-armhf nodes as down.
  • Granting a developer access to the osuosl4 node in order to debug a regression on the ppc64el architecture.
  • Granting a developer access to the osuosl4 node.
In addition, Mattia Rizzolo re-aligned the /etc/default/jenkins file with changes performed upstream and changed how configuration files are handled on the rb-mail1 host, whilst Vagrant Cascadian documented the failure of the virt32c and virt64c nodes after initial investigation.

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Russ Allbery: Review: The Splinter in the Sky

Review: The Splinter in the Sky, by Kemi Ashing-Giwa
Publisher: Saga Press
Copyright: July 2023
ISBN: 1-6680-0849-1
Format: Kindle
Pages: 372
The Splinter in the Sky is a stand-alone science fiction political thriller. It is Kemi Ashing-Giwa's first novel.

Enitan is from Koriko, a vegetation-heavy moon colonized by the Vaalbaran empire. She lives in the Ijebu community with her sibling Xiang and has an on-again, off-again relationship with Ajana, the Vaalbaran-appointed governor. Xiang is studying to be an architect, which requires passing stringent entrance exams to be allowed to attend an ancillary imperial school intended for "primitives." Enitan works as a scribe and translator, one of the few Korikese allowed to use the sacred Orin language of Vaalbara. In her free time, she grows and processes tea.

When Xiang mysteriously disappears while she's at work, Enitan goes to Ajana for help. Then Ajana dies, supposedly from suicide. The Vaalbaran government demands a local hostage while the death is investigated, someone who will be held as a diplomatic "guest" on the home world and executed if there is any local unrest. This hostage is supposed to be the child of the local headwoman, a concept that the Korikese do not have. Seeing a chance to search for Xiang, Enitan volunteers, heading into the heart of imperial power with nothing but desperate determination and a tea set. The empire doesn't stand a chance.

Admittedly, a lot of the reason why the empire doesn't stand a chance is because the author is thoroughly on Enitan's side. Before she even arrives on Gondwana, Vaalbara's home world, Enitan is recruited as a spy by the other Gondwana power and Vaalbara's long-standing enemy. Her arrival in the Splinter, the floating arcology that serves as the center of Vaalbaran government, is followed by a startlingly meteoric rise in access. Some of this is explained by being a cultural curiosity for bored nobles, and some is explained by political factors Enitan is not yet aware of, but one can see the author's thumb resting on the scales.

This was the sort of book that was great fun to read, but whose political implausibility provoked "wait, that didn't make sense" thoughts afterwards. I think one has to assume that the total population of Vaalbara is much less than first comes to mind when considering an interplanetary empire, which would help explain the odd lack of bureaucracy. Enitan is also living in, effectively, the palace complex, for reasonably well-explained political reasons, and that could grant her a surprising amount of access. But there are other things that are harder to explain away: the lack of surveillance, the relative lack of guards, and the odd political structure that's required for the plot to work.

It's tricky to talk about this without spoilers, but the plot rests heavily on a conspiratorial view of how government power is wielded that I think strains plausibility. I'm not naive enough to think that the true power structure of a society matches the formal power structure, but I don't think they diverge as much as people think they do. It's one thing to say that the true power brokers of society can be largely unknown to the general population. In a repressive society with a weak media, that's believable. It's quite another matter for the people inside the palace to be in the dark about who is running what.

I thought that was the biggest problem with this book. Its greatest feature is the characters, and particularly the character relationships. Enitan is an excellent protagonist: fascinating, sympathetic, determined, and daring in ways that make her success more believable.
Early in the book, she forms an uneasy partnership that becomes the heart of the book, and I loved everything about that relationship. The politics of her situation might be a bit too simple, but the emotions were extremely well-done.

This is a book about colonialism. Specifically, it's a book about cultural looting, appropriation, and racist superiority. The Vaalbarans consider Enitan barely better than an animal, and in her home they're merciless and repressive. Taken out of that context into their imperial capital, they see her as a harmless curiosity and novelty. Enitan exploits this in ways that are entirely believable. She is also driven to incandescent fury in ways that are entirely believable, and which she only rarely can allow herself to act on. Ashing-Giwa drives home the sheer uselessness of even the more sympathetic Vaalbarans more forthrightly than science fiction is usually willing to be. It's not a subtle point, but it is an accurate one.

The first two thirds of this book had me thoroughly engrossed and unable to put it down. The last third unfortunately turns into a Pokémon hunt of antagonists, which I found less satisfying and somewhat less believable. I wish there had been more need for Enitan to build political alliances and go deeper into the social maneuverings of the first part of the book, rather than gaining some deus ex machina allies who trivially solve some otherwise-tricky plot problems. The setup is amazing; the resolution felt a bit like escaping a maze by blasting through the walls, which I don't think played to the strengths of the characters and relationships that Ashing-Giwa had constructed. The advantage of that approach is that we do get a satisfying resolution and a standalone novel.

The central relationship of the book is unfortunately too much of a spoiler to talk about in a review, but I thought it was the best part of the story. This is a political thriller on the surface, but I think its heart is an unexpected political alliance with a fascinatingly tricky balance of power. I was delighted that Ashing-Giwa never allows the tension in that relationship to collapse into one of the stock patterns it so easily could have become.

The Splinter in the Sky reminded me a little of Arkady Martine's A Memory Called Empire. It's not as assured or as adroitly balanced as that book, and the characters are not quite as memorable, but that's a very high bar. The political point is even sharper, and it has some of the same appeal. I had so much fun reading this book. You may need to suspend your disbelief about some of the politics, and I wish the conclusion had been a bit less brute-force, but this is great stuff. Recommended when you're in the mood for a character story in the trappings of a political thriller. Rating: 8 out of 10

Freexian Collaborators: Monthly report about Debian Long Term Support, June 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In June, 18 contributors have been paid to work on Debian LTS. Their reports are available:
  • Adrian Bunk did 47.0h (out of 74.25h assigned and 11.75h from previous period), thus carrying over 39.0h to the next month.
  • Arturo Borrero Gonzalez did 6.0h (out of 6.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 15.5h (out of 16.0h assigned and 8.0h from previous period), thus carrying over 8.5h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 4.0h (out of 8.0h assigned and 2.0h from previous period), thus carrying over 6.0h to the next month.
  • Emilio Pozuelo Monfort did 23.25h (out of 49.5h assigned and 10.5h from previous period), thus carrying over 36.75h to the next month.
  • Guilhem Moulin did 4.5h (out of 13.0h assigned and 7.0h from previous period), thus carrying over 15.5h to the next month.
  • Lee Garrett did 17.0h (out of 25.0h assigned and 35.0h from previous period), thus carrying over 43.0h to the next month.
  • Lucas Kanashiro did 5.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 15.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 10.0h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 14.0h to the next month.
  • Roberto C. Sánchez did 5.25h (out of 7.75h assigned and 4.25h from previous period), thus carrying over 6.75h to the next month.
  • Santiago Ruano Rincón did 22.5h (out of 14.5h assigned and 8.0h from previous period).
  • Sean Whitton did 6.5h (out of 6.0h assigned and 0.5h from previous period).
  • Stefano Rivera did 0.5h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.5h to the next month.
  • Sylvain Beucler did 9.0h (out of 24.5h assigned and 35.5h from previous period), thus carrying over 51.0h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).

Evolution of the situation

In June, we have released 31 DLAs. Notable security updates in June included:
  • git: multiple vulnerabilities, which may result in privilege escalation, denial of service, and arbitrary code execution
  • sendmail: SMTP smuggling allowed remote attackers to bypass SPF protection checks
  • cups: arbitrary remote code execution
Looking further afield to the broader Debian ecosystem, LTS contributor Bastien Roucariès also patched sendmail in Debian 12 (bookworm) and 11 (bullseye) in order to fix the previously mentioned SMTP smuggling vulnerability. Furthermore, LTS contributor Thorsten Alteholz provided fixes for the cups packages in Debian 12 (bookworm) and 11 (bullseye) in order to fix the aforementioned arbitrary remote code execution vulnerability. Additionally, LTS contributor Ben Hutchings has commenced work on an updated backport of Linux kernel 6.1 to Debian 11 (bullseye), in preparation for bullseye transitioning to the responsibility of the LTS (and the associated closure of the bullseye-backports repository). LTS contributor Lucas Kanashiro also began the preparatory work of backporting parts of the rust/cargo toolchain to Debian 11 (bullseye) in order to make future updates of the clamav virus scanner possible. June was the final month of LTS for Debian 10 (as announced on the debian-lts-announce mailing list). No additional Debian 10 security updates will be made available on security.debian.org. However, Freexian and its team of paid Debian contributors will continue to maintain Debian 10 going forward for the customers of the Extended LTS offer. Subscribe right away if you still have Debian 10 which must be kept secure (and which cannot yet be upgraded).

Thanks to our sponsors

Sponsors that joined recently are in bold.

10 July 2024

Freexian Collaborators: Debian Contributions: YubiHSM packaging, unschroot, live-patching, and more! (by Stefano Rivera)

Debian Contributions: 2024-06

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

YubiHSM packaging, by Colin Watson

Freexian is starting to use YubiHSM devices (hardware security modules) as part of some projects, and we wanted to have the supporting software directly in Debian rather than needing to use third-party repositories. Since Yubico publish everything we need under free software licences, Colin packaged yubihsm-connector, yubihsm-shell, and python-yubihsm from https://developers.yubico.com/, in some cases based partly on the upstream packaging, and got them all into Debian unstable. Backports to bookworm will be forthcoming once they've all reached testing.

unschroot, by Helmut Grohne

Following an in-person discussion at MiniDebConf Berlin, Helmut attempted splitting the containment functionality of sbuild --chroot-mode=unshare into a dedicated tool interfacing with sbuild as a variant of --chroot-mode=schroot providing a sufficiently compatible interface. While this seemed technically promising initially, a discussion on debian-devel indicated a desire to rely on an existing container runtime such as podman instead of using another Debian-specific tool with unclear long term maintenance. None of the existing container runtimes meet the specific needs of sbuild, so further advancing this matter implies a compromise one way or another.

Linux live-patching, by Santiago Ruano Rincón

In collaboration with Emmanuel Arias, Santiago is working on the development of Linux live-patching for Debian. For the moment, this is in an exploratory phase that includes how to handle the different patches that will need to be provided. kpatch could help significantly in this regard. However, kpatch was removed from unstable because there are some RC bugs affecting the version that was present in Debian unstable. Santiago packaged the most recent upstream version (0.9.9) and filed an Intent to Salvage bug. Santiago is waiting for an ACK by the maintainer, and will upload to unstable after July 10th, following the package salvaging rules. While kpatch 0.9.9 fixes the main issues, it still needs some work to properly support Debian and the Linux kernel versions packaged in our distribution. More on this in the report next month.

Salsa CI, by Santiago Ruano Rincón

The work by Santiago in Salsa CI this month includes a merge request to ease testing how the production images are built from the changes introduced by future merge requests. By default, the pipelines triggered by a merge request build a subset of the images built for production, to reduce the use of resources, and because most of the time the subset of staging images is enough to test the proposed modifications. However, sometimes it is needed to test how the full set of production images is built, and the above mentioned MR helps to do that. The changes include documentation, so hopefully this will make it easier to test future contributions. Also, to be able to include support for RISC-V, Salsa CI needs to replace kaniko as the tool used to build the images. Santiago tested buildah, but there are some issues when pushing built images for non-default platform architectures (i386, armhf, armel) to the container registry. Santiago will continue to work on this to find a solution.

Miscellaneous contributions
  • Stefano Rivera prepared updates for a number of Python modules.
  • Stefano uploaded the latest point release of Python 3.12 and the latest Python 3.13 beta. Both uncovered upstream regressions that had to be addressed.
  • Stefano worked on preparations for DebConf 24.
  • Stefano helped SPI to reconcile their financial records for DebConf 23.
  • Colin did his usual routine work on the Python team, upgrading 36 packages to new upstream versions (including fixes for four CVEs in python-aiohttp), fixing RC bugs in ipykernel, ipywidgets, khard, and python-repoze.sphinx.autointerface, and packaging zope.deferredimport which was needed for a new upstream version of python-persistent.
  • Colin removed the user_readenv option from OpenSSH s PAM configuration (#1018260), and prepared a release note.
  • Thorsten Alteholz uploaded a new upstream version of cups.
  • Nicholas Skaggs updated xmacro to support reproducible builds (#1014428), DEP-3 and DEP-5 compatibility, along with utilizing hardening build flags. Helmut supported and uploaded the package.
  • As a result of login having become non-essential, Helmut uploaded debvm to unstable and stable and fixed a crossqa.debian.net worker.
  • Santiago worked on the Content Team activities for DebConf24. Together with other DebConf25 team members, Santiago wrote a document for the head of the venue to describe the project of the conference.

9 July 2024

Simon Josefsson: Towards Idempotent Rebuilds?

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-builds.org's effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-builds.org's build and you will see that the SHA256 checksum does not match, but still they declare it as a reproducible package. As far as I can tell of the situation, the purpose of their rebuilds is not to say anything about the official binary build; instead the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match. I have felt that something is lacking, and months have passed and I haven't found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born and today I'm happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command. Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn't want to waste resources building the rest of the archive until I understand if the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%. So what causes my rebuilds to be different from the official builds? Some are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl's header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don't make sense, likely due to bugs in my build scripts, especially for Ubuntu which appears to strip translations and do other build variations that I don't do. In general, the classes of reproducibility problems are the expected ones.
Some are assembler differences for GnuPG's gpgv-static, likely triggered by upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same version of build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences. The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an "idempotent rebuild". This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions. Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same version of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of the build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it begs the question: can we rebuild that earlier version of the build dependency? This circles back to really old versions and bootstrappable builds eventually. While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or the source code is no longer publicly distributable. I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status. One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from reproducible builds (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.
Our notion of rebuildability appears thus to be complementary to reproducible-builds.org's definition and bootstrappable.org's definition. Each to their own devices, and Happy Hacking!

Addendum about terminology: With "idempotent rebuild" I am talking about a rebuild of the entire operating system, applied to itself. Compare how you build the latest version of the GNU C Compiler: it first builds itself using whatever system compiler is available (often an earlier version of gcc), which we call step 1. Then step 2 is to build a copy of itself using the compiler built in step 1. The final step 3 is to build another copy of itself using the compiler from step 2. Debian, Ubuntu etc are at step 1 in this process right now. The output of step 2 and step 3 ought to be bit-by-bit identical, or something is wrong. The comparison between step 2 and 3 is what I refer to with an idempotent rebuild. Of course, most packages aren't a compiler that can compile itself. However, entire operating systems such as Trisquel, PureOS, Ubuntu or Debian are (hopefully) a self-contained system that ought to be able to rebuild itself to an identical copy. Or something is amiss. The reproducible build and bootstrappable build projects are about improving the quality of step 1. The property I am interested in is the identical rebuild and comparison in steps 2 and 3. I feel the word "idempotent" describes the property I'm interested in well, but I realize there may be better ways to describe this. Ideas welcome!
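For reference, the rebuild recipe mentioned above ("plain apt-get build-dep followed by dpkg-buildpackage in a fresh container") boils down to something like the following sketch; the package name and paths are illustrative only, and this is not the actual debdistrebuild pipeline:
# inside a fresh container with deb-src entries enabled
apt-get source coreutils
apt-get build-dep -y coreutils
cd coreutils-*/
dpkg-buildpackage -us -uc -b
# then compare the rebuilt .deb against the official one, e.g. with diffoscope
diffoscope ../coreutils_*_amd64.deb /path/to/official/coreutils_*_amd64.deb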

8 July 2024

Dirk Eddelbuettel: RcppArmadillo 14.0.0-1 on CRAN: New Upstream

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1158 other packages on CRAN, downloaded 35.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 587 times according to Google Scholar. Conrad released a new major upstream version 14.0.0 a couple of days ago. We had been testing this new version extensively over several rounds of reverse-dependency checks across all 1100+ packages. This revealed nine packages requiring truly minor adjustments which eight maintainers made in a matter of days; all this was coordinated in issue #443. Following the upload, CRAN noticed one more issue (see issue #446) but this turned out to be local to the package. There are also renewed deprecation warnings with some Armadillo changes which we will need to address one-by-one. Last but not least, with this release we also changed the package versioning scheme to follow upstream Armadillo more closely. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 14.0.0-1 (2024-07-05)
  • Upgraded to Armadillo release 14.0.0 (Stochastic Parrot)
    • C++14 is now the minimum recommended C++ standard
    • Faster handling of compound expressions by as_scalar(), accu(), dot()
    • Faster interactions between sparse and dense matrices
    • Expanded stddev() to handle sparse matrices
    • Expanded relational operators to handle expressions between sparse matrices and scalars
    • Added .as_dense() to obtain dense vector/matrix representation of any sparse matrix expression (see the sketch after this list)
    • Updated physical constants to NIST 2022 CODATA values
  • New package version numbering scheme following upstream versions
  • Re-enabling ARMA_IGNORE_DEPRECATED_MARKER for silent CRAN builds
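As a quick illustration of the new .as_dense() member noted above (my own sketch against plain Armadillo 14.0.0 rather than the R package, compiled with e.g. g++ demo.cpp -larmadillo; not an official example):
#include <armadillo>

int main() {
  arma::sp_mat S(3, 3);   // sparse 3x3 matrix
  S(0, 0) = 1.0;
  S(2, 1) = 4.0;
  // .as_dense() yields a dense representation of a sparse expression
  arma::mat D = (2 * S).as_dense();
  D.print("dense form:");
  return 0;
}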

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

7 July 2024

Niels Thykier: Improving packaging file detection in Debian

Debian packaging consists of a directory (debian/) containing a number of "hard-coded" filenames such as debian/control, debian/changelog, debian/copyright. In addition to these, many packages will also use a number of optional files that are named via a pattern such as debian/PACKAGE.install. At a high level the patterns look deceptively simple. However, if you start working on trying to automatically classify files in debian/ (which could be helpful to tell the user they have a typo in the filename), you will quickly realize these patterns are not machine friendly at all for this purpose.
The patterns deconstructed

To appreciate the problem fully, here is a primer on the pattern and all its issues. If you are already well-versed in these, you might want to skip the section. The most common patterns are debian/package.stem or debian/stem and usually the go-to example is the install stem (a concrete example being debian/debhelper.install). However, the full pattern consists of 4 parts where 3 of them are optional.
  • The package name followed by a period. Optional, but must be the first if present.
  • The name segment followed by a period. Optional, but must appear between the package name (if present) and the stem. If the package name is not present, then the name segment must be first.
  • The stem. Mandatory.
  • An architecture restriction prefixed by a period. Optional, must appear after the stem if present.
To visualize it with [foo] to mark optional parts, it looks like debian/[PACKAGE.][NAME.]STEM[.ARCH]. Detecting whether a given file is in fact a packaging file now boils down to reverse engineering its name against this pattern. Again, so far, it might still look manageable. One major complication is that every part (except ARCH) can contain periods. So a trivial "split by period" is not going to cut it. As an example:
debian/g++-3.0.user.service
This example is deliberately crafted to be ambiguous and show this problem in its full glory. This file name can be parsed in multiple ways:
  • Is the stem service or user.service? (both are known stems from dh_installsystemd and dh_installsystemduser respectively). In fact, it can be both at the same time with "clever" usage of --name=user passed to dh_installsystemd.
  • The g++-3.0 can be a package prefix or part of the name segment. Even if there is a g++-3.0 package in debian/control, then debhelper (until compat 15) will still happily match this file for the main package if you pass --name=g++-3.0 to the helper. Side bar: Woe is you if there is a g++-3 and a g++-3.0 package in debian/control, then we have multiple options for the package prefix! Though, I do not think that happens in practice.
Therefore, there are a lot of possible ways to split this filename that all matches the pattern but with vastly different meaning and consequences.
Making detection practical

To make this detection practical, let's look at the first problems that we need to solve.
  1. We need the possible stems up front to have a chance at all. When multiple stems are an option, go for the longest match (that is, the one with the most periods), since --name is rare and "code golfing" is even rarer; see the sketch after this list.
  2. We can make the package prefix mandatory for files with the name segment. This way, the moment there is something before the stem, we know the package prefix will be part of it and can cut it. It does not solve the ambiguity if one package name is a prefix of another package name (from the same source), but it is still a lot better. This made its way into debhelper compat 15, and now it is "just" a slow, long way to a better future.
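As a minimal illustration of the longest-match rule from the first point (a sketch, not debputy's code):

def pick_stem(matching_stems):
    # "user.service" beats "service": more periods means a longer match.
    return max(matching_stems, key=lambda s: s.count("."))

print(pick_stem({"service", "user.service"}))  # -> user.service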
A simple solution to the first problem could be to have a static list of known stems. That will get you started, but the debhelper eco-system thrives on decentralization, so this feels like a mismatch. There is also a second problem with the static list. Namely, a given stem is only "valid" if the command in question is actually in use. Which means you now need to dumpster dive into the mess that is the Turing-complete debhelper configuration file known as debian/rules to fully solve that. Thanks to the Turing-completeness, we will never get a perfect solution from static analysis. Instead, it is time to back out and apply some simplifications. Here is a sample flow:
  1. Check whether the dh sequencer is used. If so, use some heuristics to figure out which addons are used.
  2. Delegate to dh_assistant to figure out which commands will be used and which debhelper config file stems it knows about. Here we need to know which sequences are in use from step one (if relevant). Combine this with any other sources for stems you have.
  3. Deconstruct all files in debian/ against the stems and known package names from debian/control. In theory, dumpster diving after --name options would be helpful here, but personally I skipped that part as I want to keep my debian/rules parsing to an absolute minimum.
With this logic, you can now:
  • Provide typo detection of the stem (debian/foo.intsall -> debian/foo.install), provided you have adequate handling of the corner cases (such as debian/*.conf not needing correction into debian/*.config)
  • Detect a possibly invalid package prefix (debian/foo.install without foo being a package). Note this has to be a weak warning unless the package is using debhelper compat 15 or you dumpster dived to validate that dh_install was not passed --name foo. Agreed, no one should do that, but they can, and false positives are the worst kind of positives for a linting tool.
  • With some limitations, detect files used without the relevant command being active. As an example, some integration modes of debputy remove dh_install, so a debian/foo.install would not be used.
  • Associate a given file with a given command to assist users with the documentation look up. Like debian/foo.user.service is related to dh_installsystemduser, so man dh_installsystemduser is a natural start for documentation.
I have added the logic for all these features in debputy, though the documentation association is currently not in a user-facing command. All the others are now diagnostics emitted by debputy in its editor support mode (debputy lsp server) or via debputy lint. In the editor mode, the diagnostics are currently associated with the package name in debian/control due to technical limitations of how the editor integration works. Some of these features will require the latest version of debhelper (a moving target at times). Check with debputy lsp features for the Extra dh support feature, which will be enabled if you have all you need. Note: The detection currently (mostly) ignores files with architecture restrictions. That might be lifted in the future. However, architecture-restricted config files tend to be rare, so they were not a priority at this point. Additionally, debputy for technical reasons ignores stem typos with multiple matches. That sadly means that typos of debian/docs will often go unreported due to its proximity to debian/dirs, and vice versa.
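For the stem typo detection itself, an edit-distance approach along the following lines is plausible; this is only a sketch with an assumed threshold, not debputy's implementation (though the demo below does pull in python3-levenshtein):

from Levenshtein import distance  # Debian package: python3-levenshtein

KNOWN_STEMS = {"install", "docs", "dirs", "links"}

def suggest_stem(stem, max_distance=2):
    hits = [s for s in KNOWN_STEMS if 0 < distance(stem, s) <= max_distance]
    # Ambiguous matches are skipped, mirroring the debian/docs vs.
    # debian/dirs caveat mentioned above.
    return hits[0] if len(hits) == 1 else None

print(suggest_stem("intsall"))  # -> install
print(suggest_stem("dcos"))     # -> None (two edits from both docs and dirs)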
Diving a bit deeper on getting the stems To get the stems, debputy has 3 primary sources:
  1. Its own plugins can provide packager provided files. These are only relevant if the package is using debputy.
  2. It is also possible to provide a debputy plugin that identifies packaging files (either static or named ones). Though in practice, we probably do not want people to roll their own debputy plugin for this purpose, since the detection only works if the plugin is installed. I have used this mechanism to have debhelper provide a debhelper-documentation plugin to enrich the auto-detected data, and we can assume most people interested in this feature would have debhelper installed.
  3. It asks dh_assistant list-guessed-dh-config-files for config files, which is covered below.
The dh_assistant command uses the same logic as dh to identify the active add-ons and loads them. From there, it scans all commands mentioned in the sequence for the PROMISE: DH NOOP WITHOUT ... hint and a new INTROSPECTABLE: CONFIG-FILES ... hint. When these hints reference a packaging file (as an example, via pkgfile(foo)), dh_assistant records that as a known packaging file for that helper. Additionally, debhelper now also tracks commands that were removed from the sequence. Several of the dh_assistant subcommands now use this to enrich their (JSON) output with notes about these commands being known but not active.
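A tool that wants to consume this could shell out to dh_assistant and read the JSON, roughly like this (a sketch; the exact output schema is not shown here and should be checked against the dh_assistant documentation):

import json
import subprocess

# Run from the root of an unpacked source package.
result = subprocess.run(
    ["dh_assistant", "list-guessed-dh-config-files"],
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)  # assumption: a JSON document on stdout
print(json.dumps(data, indent=2))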
The end result With all of this work, you now get:
$ apt satisfy 'dh-debputy (>= 0.1.43~), debhelper (>= 13.16~), python3-lsprotocol, python3-levenshtein'
# For demo purposes, pull two known repos (feel free to use your own packages here)
$ git clone https://salsa.debian.org/debian/debhelper.git -b debian/13.16
$ git clone https://salsa.debian.org/debian/debputy.git -b debian/0.1.43
$ cd debhelper
$ mv debian/debhelper.install debian/debhelper.intsall
$ debputy lint
warning: File: debian/debhelper.intsall:1:0:1:0: The file "debian/debhelper.intsall" is likely a typo of "debian/debhelper.install"
    File-level diagnostic
$ mv debian/debhelper.intsall debian/debhleper.install
$ debputy lint
warning: File: debian/debhleper.install:1:0:1:0: Possible typo in "debian/debhleper.install". Consider renaming the file to "debian/debhelper.debhleper.install" or "debian/debhelper.install" if it is intended for debhelper
    File-level diagnostic
$ cd ../debputy
$ touch debian/install
$ debputy lint --no-warn-about-check-manifest
warning: File: debian/install:1:0:1:0: The file debian/install is related to a command that is not active in the dh sequence with the current addons
    File-level diagnostic
As mentioned, you also get these diagnostics in the editor via the debputy lsp server feature. Here the diagnostics appear in debian/control over the package name for technical reasons. The editor side still needs a bit more work. Notably, changes to the filename are not picked up automatically and will first be caught on the next change to debian/control. Likewise, changes to debian/rules to add --with to dh might also have some limitations depending on the editor. Saving both files and then triggering an edit of debian/control seems to work reliably, but ideally it should not be that involved. The debhelper side could also do with some work to remove the unnecessary support for the name segment from the many file stems that do not need it, and announce that to debputy. Anyhow, it is still a vast improvement over the status quo that was "Why is my file silently ignored!?".

4 July 2024

Arturo Borrero González: Wikimedia Toolforge: migrating Kubernetes from PodSecurityPolicy to Kyverno

Valère Castle and the Haut de Cry in July 2022. Christian David, CC BY-SA 4.0, via Wikimedia Commons

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez. Summary: this article shares the experience and learnings of migrating away from Kubernetes PodSecurityPolicy to Kyverno in the Wikimedia Toolforge platform.

Wikimedia Toolforge is a Platform-as-a-Service, built with Kubernetes, and maintained by the Wikimedia Cloud Services team (WMCS). It is completely free and open, and we welcome anyone to use it to build and host tools (bots, webservices, scheduled jobs, etc) in support of Wikimedia projects. We provide a set of platform-specific services, command line interfaces, and shortcuts to help in the task of setting up webservices, jobs, and stuff like building container images, or using databases. Using these interfaces makes the underlying Kubernetes system pretty much invisible to users. We also allow direct access to the Kubernetes API, and some advanced users do directly interact with it. Each account has a Kubernetes namespace where they can freely deploy their workloads. We have a number of controls in place to ensure performance, stability, and fairness of the system, including quotas, RBAC permissions, and, up until recently, PodSecurityPolicies (PSP). At the time of this writing, we had around 3,500 Toolforge tool accounts in the system.

We adopted PSP early, in 2019, as a way to make sure Pods had the correct runtime configuration. We needed Pods to stay within the safe boundaries of a set of pre-defined parameters. Back when we adopted PSP there was already the option to use 3rd party agents, like OpenPolicyAgent Gatekeeper, but we decided not to invest in them and went with a native, built-in mechanism instead. In 2021 it was announced that the PSP mechanism would be deprecated and removed in Kubernetes 1.25. Even though we had been warned years in advance, we did not prioritize the migration away from PSP until we were on Kubernetes 1.24 and blocked, unable to upgrade forward without taking action. The WMCS team explored different alternatives for this migration, but eventually we decided to go with Kyverno as a replacement for PSP. And so, with that decision, began the journey described in this blog post.

First, we needed a source code refactor of one of the key components of our Toolforge Kubernetes: maintain-kubeusers. This custom piece of software, built in-house, contains the logic to fetch accounts from LDAP and do the necessary instrumentation on Kubernetes to accommodate each one: create namespace, RBAC, quota, a kubeconfig file, etc. With the refactor, we introduced a proper reconciliation loop, so that the software would have a notion of what needs to be done for each account, what is missing, and what to delete or upgrade. This would allow us to easily deploy new resources for each account, or iterate on their definitions. The initial version of the refactor had a number of problems, though. For one, the new version of maintain-kubeusers was doing more filesystem interaction than the previous version, resulting in a slow reconciliation loop over all the accounts. We used NFS as the underlying storage system for Toolforge, and it could be very slow for reasons beyond the scope of this blog post. This was corrected in the next few days after the initial refactor rollout. A side note with an implementation detail: we stored a configmap in each account namespace with the state of each resource. Storing more state in this configmap was our solution to avoid additional NFS latency. I initially estimated this refactor would take me a week to complete, but unfortunately it took around three weeks instead. Previous to the refactor, several manual steps and cleanups were required when updating the definition of a resource. The process is now automated, more robust, performant, efficient, and clean. So in my opinion it was worth it, even if it took more time than expected.

Then, we worked on the Kyverno policies themselves. Because we had a very particular PSP setup, in order to ease the transition we tried to replicate its semantics on a 1:1 basis as much as possible. This involved things like transparent mutation of Pod resources, then validation. Additionally, we had one different PSP definition for each account, so we decided to create one namespaced Kyverno policy resource for each account namespace (remember, we had 3.5k accounts). We created a Kyverno policy template that we would then render and inject for each account; a rough sketch of this follows the list below. For developing and testing all this, the maintain-kubeusers and Kyverno bits, we had a project called lima-kilo, a local Kubernetes setup replicating production Toolforge. This was used by each engineer on their laptop as a common development environment. We had planned the migration from PSP to Kyverno policies in stages, like this:
  1. update our internal template generators to make Pod security settings explicit
  2. introduce Kyverno policies in Audit mode
  3. see how the cluster would behave with them, and if we had any offending resources reported by the new policies, and correct them
  4. modify Kyverno policies and set them in Enforce mode
  5. drop PSP
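As a rough sketch of the per-account policy rendering mentioned above (the policy body here is illustrative, not the actual Toolforge policy):

from string import Template

# One namespaced Kyverno Policy per account namespace; validationFailureAction
# starts as Audit (stage 2) and is later flipped to Enforce (stage 4).
POLICY = Template("""\
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: pod-security
  namespace: $namespace
spec:
  validationFailureAction: Audit
  rules:
    - name: require-non-root
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: Pods must not run as root.
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
""")

for account in ("tool-example1", "tool-example2"):  # ~3.5k in production
    print(POLICY.substitute(namespace=account))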
In stage 1, we updated things like the toolforge-jobs-framework and tools-webservice. In stage 2, when we deployed the 3.5k Kyverno policy resources, our production cluster died almost immediately. Surprise. All the monitoring went red, the Kubernetes apiserver became unresponsive, and we were unable to perform any administrative actions in the Kubernetes control plane, or even on the underlying virtual machines. All Toolforge users were impacted. This was a full-scale outage that required the energy of the whole WMCS team to recover from. We temporarily disabled Kyverno until we could learn what had occurred. This incident happened despite having tested beforehand in lima-kilo and in another pre-production cluster we had, called Toolsbeta. But we had not tested with that many policy resources. Clearly, this was something scale-related. After the incident, I went on and created 3.5k Kyverno policy resources on lima-kilo, and indeed I was able to reproduce the outage. We took a number of measures, corrected a few errors in our infrastructure, reached out to the Kyverno upstream developers for advice, and in the end made a number of changes to accommodate the setup to our needs.

I have to admit, I was briefly tempted to drop Kyverno, and even to stop pursuing an external policy agent entirely and write our own custom admission controller, out of concerns over the performance of this architecture. However, after applying all the measures listed above, the system became very stable, so we decided to move forward. The second attempt at deploying it all went through just fine. No outage this time.

When we were in stage 4, we detected another bug. We had been following the Kubernetes upstream documentation for setting securityContext to the right values. In particular, we were enforcing procMount to be set to the default value, which per the docs was "DefaultProcMount". However, that string is the name of the internal variable in the source code, whereas the actual default value is the string "Default". This caused pods to be rightfully rejected by Kyverno while we figured out the problem. I sent a patch upstream to fix this problem. We finally had everything in place, reached stage 5, and were able to disable PSP. We unloaded the PSP controller from the Kubernetes apiserver and deleted every individual PSP definition. Everything was very smooth in this last step of the migration.

This whole PSP project, including the maintain-kubeusers refactor, the outage, and all the different migration stages, took roughly three months to complete. For me, there are a number of valuable lessons to learn from this project. For one, scale is something to consider, and test, when evaluating a new architecture or software component. Not doing so can lead to service outages or unexpectedly poor performance. This is in the first chapter of the SRE handbook, but we got a reminder the hard way.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Samuel Henrique: Debian's curl now supports HTTP3

tl;dr Starting with curl 8.8.0-2, you can now use HTTP3.
curl --http3-only https://example.com
Or, if you would like to try it out in a container:
podman run debian:unstable sh -c 'apt install --update -y curl && curl --http3-only https://example.com'
(in case you haven't noticed, apt now has the --update option for the upgrade and install commands, although it is not available on stable yet)

Availability
  • Debian unstable - Since 2024-07-02
  • Debian testing - Since 2024-07-18
  • Debian 12/bookworm backports - Expected by the end of August 2024.
  • Debian 12/bookworm - Due to the mechanisms we have in place to make sure Debian stable is in fact stable, we will never be able to ship this in the regular repository. Users can make use of the backports repositories instead.
  • Debian derivatives - Rolling releases will get it by the time it's on Debian testing (e.g.: Kali Linux). Stable derivatives only in their next major release.

The challenge HTTP3 is brand new... well, not really, but at least fresh enough that I'm not aware of any other Linux distribution supporting it in curl. The reason is likely two-fold:
  1. OpenSSL is not there yet OpenSSL still doesn't have proper HTTP3 support, and given that OpenSSL is so widely used, almost every curl distributor/packager will build curl with it, and thus changing the TLS backend to something else is risky. Unfortunately, proper support for the OpenSSL libcurl is unlikely to come anytime before the end of this year; the OpenSSL performance is not good enough yet as of version 3.3. Daniel Stenberg has written about the state of this multiple times, most recently at HTTP/3 in curl mid 2024; if you're interested, I suggest reading through his other posts as well. Some might have noticed that nginx does support HTTP3 through OpenSSL, although when you look closely, it's not exactly perfect:
    An SSL library that provides QUIC support is recommended to build nginx, such as BoringSSL, LibreSSL, or QuicTLS. Otherwise, the OpenSSL compatibility layer will be used that does not support early data.
    As you can see, they don't recommend using OpenSSL, and when doing so, you don't get complete support.

  2. HTTP3 support for GnuTLS/nghttp3/ngtcp2 is recent The non-experimental support arrived back in October 2023, and so that's when I started seriously planning for this. curl has been working on HTTP3 support for years, and so it did support other TLS backends before that, but out of them, the one most feasible for a distribution to ship would be GnuTLS, which gets HTTP3 support through ngtcp2 and nghttp3.

How it was done The Debian curl package has historically shipped at least two variants of libcurl: an OpenSSL one and a GnuTLS one. The OpenSSL libcurl can't support HTTP3 for the reasons explained above, but the GnuTLS libcurl can (with ngtcp2 and nghttp3). Debian packages can choose which version of libcurl to link against (without having to modify any upstream source code), Debian's "git" package being a famous example of a package that links against the GnuTLS libcurl. Enabling HTTP3 on curl was done in three steps:
  1. Make sure all required dependencies fulfill the minimum requirements.
  2. Enable HTTP3 for GnuTLS libcurl.
  3. Change the libcurl used by the curl CLI, from OpenSSL to GnuTLS.
curl's HTTP3 support requires a somewhat recent version of nghttp3, and updating that required a transition (due to the SONAME bump), while we had also had months of freeze on transitions due to the time_t transition. After the dependencies were in place, enabling HTTP3 for the GnuTLS libcurl was straightforward. Then, for the last part, we had to switch the TLS backend used by the curl CLI. Doing the swap is quite easy at the packaging level, but we had to consider the chances of this change breaking our users' environments.
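If you want to verify from code which features your libcurl build offers, the pycurl bindings (Debian's python3-pycurl, which links against the GnuTLS libcurl) expose libcurl's version information; the fallback bit value below follows curl.h but is an assumption about what your pycurl version exposes:

import pycurl

print(pycurl.version)  # shows the libcurl version and the TLS backend in use
features = pycurl.version_info()[4]
HTTP3 = getattr(pycurl, "VERSION_HTTP3", 1 << 25)  # CURL_VERSION_HTTP3 in curl.h
print("HTTP3 supported:", bool(features & HTTP3))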

Ensuring there are no breakages The first thing to consider regarding breakages is that this change is not going to be pushed directly to the current Debian stable releases; it will be present in the next stable release (13/trixie), but the current ones will stick to the version that's already shipped. Secondly, we have to consider the risk of losing the ability to use certain parameters of the curl CLI which could be limited to the OpenSSL backend. During curl-up 2024, the curl developers pointed out the existence of a page that lists the TLS-related options and the backends they work with. Analysing that page, and ignoring all of the options that are suffixed with "BLOB" (only pertinent to the library, not the CLI), the only one left which is attention-worthy is CURLOPT_ECH.
This experimental feature requires a special build of OpenSSL, as ECH is not yet supported in OpenSSL releases. In contrast ECH is supported by the latest BoringSSL and wolfSSL releases.
As it turns out, Encrypted Client Hello is experimental and not supported by vanilla OpenSSL. This was enough of an investigation for me to go ahead with the change, noting that even in the worst-case scenario (we find a horrible regression), we can roll back without having affected a single stable release. Now that the package is in Debian unstable, the CI tests (autopkgtest) of every package that depends on curl are currently running; the results are compared against the migration-reference (in this case, the curl CLI with OpenSSL, before the change). If everything goes right, curl with HTTP3 support will migrate to Debian testing in around 5 days. If we spot any issues, we'll have to solve them first, and it's going to be hard to predict how long that will take, although it's fair to expect less than a month.

Feedback Feel free to join the Matrix room for the Debian curl maintainers:
https://matrix.to/#/#debian-curl-maintainers:matrix.org

Acknowledgements It took us a bit longer than expected to be able to enable HTTP3; nonetheless, it's still early enough to be excited about. A lot of people were crucial to make this happen. I should recognize, in the first place, obviously, the curl developers and the developers of the supporting libraries: GnuTLS, nghttp3, ngtcp2. Participating in the curl-up 2024 conference helped me get motivated to push this through, besides making me aware of the right documentation to research for impact. On the Debian side, Sakirnth Nagarasa <sakirnth> was responsible for updating and taking care of the transition for nghttp3 and ngtcp2. Also on the Debian side, I've got loads of help and support from the co-maintainers of the curl package: Sergio Durigan Junior <sergiodj> and Carlos Henrique Lima Melara <charles>.

Changes since publication

2024-07-18
  • Update date of availability for Debian testing and expected date for bookworm backports.
  • We have historically spoken Portuguese in the room but we'll switch to English in case anyone joins.

3 July 2024

Mike Gabriel: Polis - a FLOSS Tool for Civic Participation -- Initial Evaluation and Adaptation (episode 2/5)

Here comes the 2nd article of the 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Enjoy also this read on Guido's work on Polis,
Mike
Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation (this article)
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis
  5. Current status and roadmap
Polis - Initial evaluation and adaptation The Polis code base consists of a number of components: the administration and participation interfaces, a common web backend, and a statistics processing server. Both frontends and the backend are written in a mixture of JavaScript and TypeScript; only the statistics processing server is written in Clojure. In the case of self-hosting, the preferred method of deployment is via Docker containers, using Docker Compose or any other orchestrator. The participation frontend for conversations can either be used as a standalone web page or be embedded via an iframe. For our planned use case we initially defined a set of goals. After a preliminary evaluation of our own, and after consulting with Policy Lab UK, who were also evaluating and testing Polis and had already made a range of improvements related to self-hosting as well as bug fixes and modernization changes, we decided to take their work as a base for our adaptations, with the intent of submitting generally useful changes back to the Polis project. Subsequently, a number of changes were implemented, including the removal of hardcoded domain names, the elimination of unnecessary cookies and third-party requests, support for an alternative email sending service, and the option of disabling Facebook and X integration.

For the branding, our approach was to add an option allowing websites which embed conversations in an iframe to load an alternative stylesheet overriding the native Polis branding. For this to be practical, we intended to use CSS custom properties for defining branding-related styles such as colors and fonts. That approach turned out to be problematic: although the Polis participation frontend stylesheet is generated via SCSS and some of the colors are parameterized, they are not used consistently throughout the SCSS stylesheets. In addition, the frontend templates contain a large amount of hardcoded style attributes. While we succeeded in implementing user-defined stylesheets, parameterizing all used colors and fonts via CSS custom properties took a disproportionate amount of development resources, aggravated by the fact that the SCSS and template files are huge and contain many unused rules and code.

Sahil Dhiman: RTI to NPL Regarding Their NTP Infrastructure

I became interested in the Network Time Protocol (NTP) last year after learning how fundamental this protocol is to the functioning of the global Internet. NTP helps synchronize clocks on devices over the Internet, which is essential for secure browsing, timestamping, keeping everyone in sync, or just checking what time it is. Computers usually have a hardware real-time clock (RTC), but it drifts over time, so an occasional sync over NTP is required to keep the time accurate. Many network and IoT devices don't have a hardware RTC, so they rely on NTP even more. Accurate timekeeping starts with reference clocks like atomic clocks, GPS, etc. Multiple government standards agencies host these reference clocks, which are regarded as Stratum 0. Stratum 1 servers, known as primary servers, connect directly to Stratum 0 clocks for time. Stratum 1 servers then distribute time to Stratum 2 and further down the hierarchy. Computers typically connect to one or more Stratum 1/2/3 servers to get their time. Someone has to host these public Stratum 1, 2, and 3 NTP servers. That's what the NTP Pool, a global effort by volunteers, does: it provides NTP servers for the public to use. As of today, there are 4700+ servers in the pool, free for anyone to use.

Now let's come to the reason for writing this post. The Indian Computer Emergency Response Team (CERT-In) in April 2022 released a set of cybersecurity directions which set the alarm bells ringing. The Internet Society (and almost everyone else) wrote about it. And then there was this specific section about NTP:
All service providers, intermediaries, data centres, body corporate and Government organisations shall connect to the Network Time Protocol (NTP) Server of National Informatics Centre (NIC) or National Physical Laboratory (NPL) or with NTP servers traceable to these NTP servers, for synchronisation of all their ICT systems clocks. Entities having ICT infrastructure spanning multiple geographies may also use accurate and standard time source other than NPL and NIC, however it is to be ensured that their time source shall not deviate from NPL and NIC.
CSIR-National Physical Laboratory (NPL) is the official timekeeper for India and hosts the only public Stratum 1 clock in India, according to the NTP Pool website. So I was naturally curious to know what kind of infrastructure they're running for NTP. India has a Right to Information (RTI) Act which, like the Freedom of Information Act (FOIA) in the United States, gives citizens the right to request information from government entities, which have to respond within 30 days. So last year, I filed two RTI requests (the second after the first reply came) inquiring about NPL's public NTP server setup. The first RTI had some generic questions:
First RTI.
This gave a vague idea about the setup, so I sat down and came up with some specific questions for the next RTI:
Second RTI.
Feel free to draw your own conclusions from it. Bear in mind these were filed last year, so things might have changed. Do let me know if you have more information about it. Update (07/07/2024): Found an article from Medianama about Indian government time servers, with information sourced through RTI.
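As an aside, if you want to check a pool server's stratum yourself, as discussed at the start of this post, a quick query is easy with the third-party ntplib Python package; the hostname below is just an example:

import ntplib
from time import ctime

client = ntplib.NTPClient()
resp = client.request("in.pool.ntp.org", version=3)
print("stratum:", resp.stratum)    # 1 means directly attached to a reference clock
print("offset (s):", resp.offset)  # estimated local-vs-server clock difference
print("server time:", ctime(resp.tx_time))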

2 July 2024

Bits from Debian: Bits from the DPL

Dear Debian community,

Statement on Daniel Pocock The Debian project has successfully taken action to secure its trademarks and interests worldwide, as detailed in our press statement. I would like to personally thank everyone in the community who was involved in this process. I would have loved for you all to have spent your volunteer time on more fruitful things.

Debian Boot team might need help I think I've identified the issue that finally motivated me to contact our teams: for a long time, I have had the impression that Debian is driven by several "one-person teams" (to varying extents of individual influence and susceptibility to burnout). As DPL, I see it as my task to find ways to address this issue and provide support. I received private responses from Debian Boot team members, which motivated me to kindly invite volunteers to some prominent and highly visible fields of work that you might find personally challenging. I recommend subscribing to the Debian Boot mailing list to see where you might be able to provide assistance.

/usrmerge Helmut Grohne confirmed that the last remaining packages shipping aliased files inside the package set relevant to debootstrap were uploaded. Thanks a lot to Helmut and all the contributors who helped implement DEP17.

Contacting more teams I'd like to repeat that I've registered a BoF for DebConf24 in Busan with the following description: This BoF is an attempt to gather as many teams inside Debian as possible to exchange experiences, discuss workflows inside teams, share their ways to attract newcomers, etc. Each participating team should prepare a short description of their work and what team roles ("openings") they have for new contributors. Even for delegated teams (where membership is less fluid), it would be good to present the team, explain what it takes to be a team member, and what steps people usually go through to end up being invited to participate. Some other teams can easily absorb contributions from salsa MRs, and at some point people get commit access. Anyway, the point is that we work on the idea that the pathway to becoming a team member becomes clearer from an outsider's point of view. I'm lagging a bit behind my team-contacting schedule and will not manage to contact every team before DebConf. As a (short) summary, I can draw some positive conclusions about my efforts to reach out to teams. I was able to identify some issues that were new to me and which I am now working on. Examples include limitations in Salsa and Salsa CI. I consider both essential parts of our infrastructure and will support both teams in enhancing their services. Some teams confirmed that they are basically using some common infrastructure (Salsa team space, mailing lists, IRC channels), but the individual members of the team work on their own problems without sharing any common work. I have also not read about convincing strategies to attract newcomers to the team, as we have established, for instance, in the Debian Med team.

DebConf attendance The amount of money needed to fly people to South Korea was higher than usual, so the DebConf bursary team had to make some difficult decisions about who could be reimbursed for travel expenses. I extended the budget for diversity and newcomers, which enabled us to invite some additional contributors.
We hope that those who were not able to come this year can make it next year to Brest, or to MiniDebConf Cambridge or Toulouse.

tag2upload On June 12, Sean Whitton requested comments on the debian-vote list regarding a General Resolution (GR) about tag2upload. The discussion began with technical details but unfortunately, as often happens in long threads, it drifted into abrasive language, prompting the community team to address the behavior of an opponent of the GR supporters. After 560 emails covering technical details, including a detailed security review by Russ Allbery, Sean finally proposed the GR on June 27, 2024 (two weeks after requesting comments). Firstly, I would like to thank the drivers of this GR and acknowledge the technical work behind it, including the security review. I am positively convinced that Debian can benefit from modernizing its infrastructure, particularly through stronger integration of Git into packaging workflows. Sam Hartman provided some historical context [1], [2], [3], [4], noting that this discussion originally took place five years ago, with no results from several similarly lengthy threads. My favorite summary of the entire thread was given by Gregor Herrmann; it reflects the same gut feeling I have and highlights a structural problem within Debian that hinders technical changes. Addressing this issue is definitely a matter for the Debian Project Leader, and I will try to address it during my term. At the time of writing these bits, a proposal from ftpmaster, which is being continuously discussed, might lead to a solution. I was also asked to extend the GR discussion period, which I will do in a separate mail.

Talk: Debian GNU/Linux for Scientific Research I was invited to give a talk in the Systems-Facing Track of the University of British Columbia (which is sponsoring rack space for several Debian servers). I admit it felt a bit strange, after working for more than 20 years on establishing Debian in scientific environments, to be invited to such a talk "because I'm DPL".

Kind regards, Andreas.

Ben Hutchings: FOSS activity in June 2024

1 July 2024

Ben Hutchings: FOSS activity in April 2024
