Andy Simpkins: The state of the art

comload. It's written in Python because Python is easy to deploy (I can just throw it in a .deb and upload it to my internal repository), and because I did not find a Go library to handle Action API pagination for me. The basic usage is like this:
$ comload --subcats "Taavi Väänänen"
comload is available from PyPI and from my Git server directly, and is licensed under the GPLv3.
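For illustration, here is a hedged sketch (not comload's actual code) of the Action API continuation protocol such a library has to handle: each response may carry a `continue` object whose keys must be merged into the next request's parameters until it stops appearing.

```python
# Sketch of MediaWiki Action API pagination. The `fetch` callable stands in
# for a real HTTP request; everything besides the `continue` protocol
# itself is illustrative.
def paginate(fetch, params):
    params = dict(params)
    while True:
        page = fetch(params)
        yield page
        if "continue" not in page:
            return
        params.update(page["continue"])

# A fake two-page response sequence standing in for the live API:
pages = [
    {"items": [1, 2], "continue": {"cmcontinue": "page|2"}},
    {"items": [3]},
]
def fake_fetch(params):
    return pages[1] if "cmcontinue" in params else pages[0]

got = [item for p in paginate(fake_fetch, {"action": "query"}) for item in p["items"]]
print(got)  # [1, 2, 3]
```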
libglib2.0-dev such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also made people unhappy attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic, as it tends to not be ready at the time bootstrapping is needed.
As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0.
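As a purely illustrative sketch of how a bootstrap-relevant package could select the reduced dependency, Debian build profiles allow alternatives in Build-Depends (the profile name pkg.example.bootstrap here is hypothetical; only the package names come from the text):

```
Build-Depends: libglib2.0-dev <!pkg.example.bootstrap>,
               libgio-2.0-dev <pkg.example.bootstrap>
```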
Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests, later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program built for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection.
This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.
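To make the mechanism concrete, here is a hedged sketch, in Python rather than the package's actual implementation, of the decision such a wrapper encodes: read the ELF header's e_machine field and prefix qemu-user only when the binary cannot run natively (assuming an amd64 host, which can also execute i386 binaries directly).

```python
import os, struct, tempfile

# ELF e_machine values (from the ELF specification)
EM_386, EM_X86_64, EM_ARM, EM_AARCH64 = 3, 62, 40, 183
QEMU_NAMES = {EM_386: "i386", EM_ARM: "arm", EM_AARCH64: "aarch64"}

def elf_machine(path):
    """Return the e_machine field of a (little-endian) ELF file."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return struct.unpack_from("<H", header, 18)[0]

def command_for(path):
    """argv to run `path` on an amd64 host: native when possible (amd64
    executes i386 directly), otherwise prefixed with qemu-user."""
    m = elf_machine(path)
    if m in (EM_X86_64, EM_386):
        return [path]
    return ["qemu-" + QEMU_NAMES.get(m, "unknown"), path]

# Demonstrate with a minimal fake ELF header (e_machine = aarch64):
fd, path = tempfile.mkstemp()
os.write(fd, b"\x7fELF\x01\x01\x01\x00" + b"\x00" * 8
             + struct.pack("<HH", 2, EM_AARCH64))
os.close(fd)
cmd = command_for(path)
os.unlink(path)
print(cmd[0])  # qemu-aarch64
```

A dependency-driven wrapper additionally skips installing qemu entirely when the target matches the host, which is the property that matters for bootstrap.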
dpkg, the question arises how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an exploratory analysis. Some results include:
/usr-merge is not the only cause of aliasing problems in Debian.
dpkg can enforce.
setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python.
During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were expecting it to still be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake).
A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are, by mistake, not being discovered. We are ignoring this failure for now, as we wouldn't want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this.
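The behaviour change is easy to demonstrate (a sketch; the specific exit status used, 5 in current releases, is an implementation detail): running unittest discovery over a directory containing no tests exits non-zero on Python 3.12 and later.

```python
import subprocess, sys, tempfile

# Run unittest discovery in an empty directory: no tests are collected.
with tempfile.TemporaryDirectory() as empty:
    proc = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", empty],
        capture_output=True,
    )

nonzero = proc.returncode != 0
print("non-zero exit when no tests ran:", nonzero)
```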
As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests. So transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.
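The staged-rebuild idea can be sketched with a toy dependency graph (package names and edges here are purely illustrative): packages are rebuilt in rounds, each round containing only packages whose build-dependencies were rebuilt in earlier rounds.

```python
from graphlib import TopologicalSorter

# Illustrative build-dependency graph: node -> set of build-dependencies.
deps = {
    "python3-defaults": set(),
    "setuptools": {"python3-defaults"},
    "pytest": {"setuptools"},
    "requests": {"setuptools", "pytest"},
}

ts = TopologicalSorter(deps)
ts.prepare()
rounds = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # everything buildable this round
    rounds.append(ready)
    ts.done(*ready)

print(rounds)
# [['python3-defaults'], ['setuptools'], ['pytest'], ['requests']]
```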
setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
sbuild, reviewing and improving an MR for refactoring the unshare backend.
gcc-defaults.
/usr-move. With more and more key packages such as libvirt or fuse3 fixed, we're moving into the boring long tail of the transition.
glib2.0 above, rebootstrap moves a lot further, but still fails for every architecture.
libcupsfilter to fix the autopkgtest and a dependency problem of this package. After the splix package was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
binsider tool to analyse ELF binaries. From the README page:
Binsider can perform static and dynamic analysis, inspect strings, examine linked libraries, and perform hexdumps, all within a user-friendly terminal user interface! More information about Binsider's features and how it works can be found within Binsider's documentation pages.
95% fixed by [merge request] !12680 when -fobject-determinism is enabled. [ ]
The linked merge request has since been merged, and Rodrigo goes on to say that:
After that patch is merged, there are some rarer bugs in both interface file determinism (e.g. #25170) and in object determinism (e.g. #25269) that need to be taken care of, but the great majority of the work needed to get there should have been merged already. When merged, I think we should close this one in favour of the more specific determinism issues like the two linked above.
zlib/deflate compression in .zip and .apk files and later followed up with the results of her subsequent investigation.
CONFIG_MODULE_SIG flag. [ ]
zlib to zlib-ng, as reproducibility requires identical compressed data streams. [ ]
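The underlying issue is that deflate guarantees only what the data decompresses to, not the compressed byte stream itself; two conforming implementations (or even two settings of one implementation, as sketched here with Python's zlib) can emit different streams for identical input:

```python
import zlib

data = b"reproducible builds " * 200

# Same input, two valid deflate streams: "stored" (level 0) versus best
# compression (level 9). The streams differ, yet decompress identically --
# which is why swapping zlib for zlib-ng can change .zip/.apk bytes.
stored = zlib.compress(data, 0)
best = zlib.compress(data, 9)

assert stored != best
assert zlib.decompress(stored) == zlib.decompress(best) == data
```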
maven-lockfile that is designed to aid building Maven projects with integrity. [ ]
This is a report of Part 1 of my journey: building 100% bit-reproducible packages for every package that makes up [openSUSE's] minimalVM
image. This target was chosen as the smallest useful result/artifact. The larger package-sets get, the more disk-space and build-power is required to build/verify all of them.
This work was sponsored by NLnet's NGI Zero fund.
A hermetic build system manages its own build dependencies, isolated from the host file system, thereby securing the build process. Although, in recent years, new artifact-based build technologies like Bazel offer build hermeticity as a core functionality, no empirical study has evaluated how effectively these new build technologies achieve build hermeticity. This paper studies 2,439 non-hermetic build dependency packages of 70 Bazel-using open-source projects by analyzing 150 million Linux system file calls collected in their build processes. We found that none of the studied projects has a completely hermetic build process, largely due to the use of non-hermetic top-level toolchains. [ ]
debrebuild component of the devscripts suite of tools. In particular:
#1081047: Fails to download .dsc file.
#1081048: Does not work with a proxy.
#1081050: Fails to create a debrebuild.tar.
#1081839: Fails with E: mmdebstrap failed to run error.
build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org. This month, this issue was closed by Santiago R. R., nicely explaining that build path variation is no longer the default, and, if desired, how developers may enable it again.
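As a concrete sketch of what build-path variation catches: byte-compiling the same Python source from two different directories produces different .pyc files, because the absolute source path is recorded in the code object; pinning a logical path restores determinism.

```python
import os, py_compile, tempfile

def build(prefix, dfile=None):
    """Byte-compile a one-line module under a fresh directory and return
    the raw .pyc bytes. `dfile` overrides the recorded source path."""
    d = tempfile.mkdtemp(prefix=prefix)
    src = os.path.join(d, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    os.utime(src, (1, 1))  # pin the source mtime stored in the .pyc header
    out = py_compile.compile(src, cfile=src + "c", dfile=dfile)
    with open(out, "rb") as f:
        return f.read()

assert build("build-a") != build("build-b")        # the build path leaks in
assert build("build-a", "mod.py") == build("build-b", "mod.py")  # pinned path
```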
278 to Debian:
python3-setuptools dependency. (#1080825)
Standards-Version to 4.7.0. [ ]
0.5.11-4 was uploaded to Debian unstable by Holger Levsen making the following changes:
pkg-config package with one on pkgconf, following a Lintian check. [ ]
Standards-Version field to 4.7.0, with no related changes needed. [ ]
0.7.28 was uploaded to Debian unstable by Holger Levsen including a change by Jelle van der Waa to move away from the pipes Python module to shlex, as the former will be removed in Python version 3.13. [ ]
classes.dex file (and thus a different .apk) depending on the number of cores available during the build, thereby breaking Reproducible Builds:
We've rebuilt [tag v3.6.1] multiple times (each time in a fresh container): with 2, 4, 6, 8, and 16 cores available, respectively:
- With 2 and 4 cores we always get an unsigned APK with SHA-256 14763d682c9286ef.
- With 6, 8, and 16 cores we get an unsigned APK with SHA-256 35324ba4c492760 instead.
reproducibility settings [being] applied to some of Gradle's built-in tasks that should really be the default. Compatible with Java 8 and Gradle 8.3 or later.
ext4, erofs and FAT filesystems can now be made reproducible. [ ]
agama-integration-tests (random)
contrast (FTBFS-nocheck)
cpython (FTBFS-2038)
crash (parallelism, race)
ghostscript (toolchain date)
glycin-loaders (FTBFS -j1)
gstreamer-plugins-rs (date, other)
kernel-doc/Sphinx (toolchain bug, parallelism/race)
kernel (parallelism in BTF)
libcamera (random key)
libgtop (uname -r)
libsamplerate (random temporary directory)
lua-luarepl (FTBFS)
meson (toolchain)
netty (modification time in .a)
nvidia-persistenced (date)
nvidia-xconfig (date-related issue)
obs-build (build-tooling corruption)
perl (Perl records kernel version)
pinentry (make efl droppable)
python-PyGithub (FTBFS 2024-11-25)
python-Sphinx (parallelism/race)
python-chroma-hnswlib (CPU)
python-libcst
python-pygraphviz (random timing)
python312 (.pyc embeds modification time)
python312 (drop .pyc from documentation time)
scap-security-guide (date)
seahorse (parallelism)
subversion (minor Java .jar modification times)
xen/acpica (date-related issue in toolchain)
xmvn (random)
magic-wormhole-transit-relay.
python-sphobjinv.
lomiri-content-hub.
python-mt-940.
tree-puzzle.
muon-meson.
osuosl4 node to Debian trixie in anticipation of running debrebuild and rebuilderd there. [ ][ ][ ]
osuosl4 node as offline due to ongoing xfs_repair filesystem maintenance. [ ][ ]
riscv64 architecture to the multiarch version skew tests for Debian trixie and sid. [ ][ ][ ]
virt32b and virt64b nodes as down. [ ]
virt32b and virt64b nodes [ ], performed some maintenance of the cbxi4a node [ ][ ] and marked most armhf architecture systems as being back online.
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
setup.py test command. This caused some fallout in Debian,
some of which was quite non-obvious as packaging helpers sometimes fell back
to different ways of running test suites that didn't quite work. I fixed
django-guardian,
manuel,
python-autopage,
python-flask-seeder,
python-pgpdump,
python-potr,
python-precis-i18n,
python-stopit,
serpent,
straight.plugin,
supervisor, and
zope.i18nmessageid.
As usual for new language versions, the addition of Python 3.13 caused some
problems. I fixed psycopg2,
python-time-machine, and
python-traits.
I fixed build/autopkgtest failures in
keymapper,
python-django-test-migrations,
python-rosettasciio,
routes,
transmissionrpc, and
twisted.
buildbot was in a bit of a mess due to
being incompatible with SQLAlchemy 2.0. Fortunately, by the time I got to it
upstream had committed a workable set of patches, and the main difficulty
was figuring out what to cherry-pick since they haven't made a new upstream
release with all of that yet. I figured this out and got us up to 4.0.3.
Adrian Bunk asked whether python-zipp
should be removed from trixie. I spent some time investigating this and
concluded that the answer was no, but looking into it was an interesting
exercise anyway.
On the other hand, I looked into flask-appbuilder, concluded that it should
be removed, and filed a removal request.
I upgraded some embedded CSS files in
nbconvert.
I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings,
pylint (fixing a test failure),
python-aiohttp-session, python-apptools, python-asyncssh,
python-django-celery-beat, python-django-rules, python-limits,
python-multidict, python-persistent, python-pkginfo, python-rt, python-spur,
python-zipp, stravalib, transmissionrpc, vulture, zodbpickle,
zope.exceptions (adopting it),
zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.
debmirror
The experimental and *-proposed-updates suites used to not have Contents-*
files, and a long time ago debmirror was changed to just skip those files in
those suites. They were added to the Debian archive some time ago, but
debmirror carried on skipping them anyway. Once I realized what was going
on, I removed these unnecessary special cases
(#819925, #1080168).
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets etc. at the GitHub repository. If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in version 0.1.4 (2024-09-28)
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
ebuild
out of Gentoo
and when they looked for someone to
help out they reached out to me. We recognized the Linux kernel was pretty much
the weakest link in the Chrome OS security posture and I joined them to help
solve that. Their userspace was pretty well handled but the kernel had a lot
of weaknesses, so focusing on hardening was the next place to go. When I
compared notes with other users of the Linux kernel within Google there were a
number of common concerns and desires. Chrome OS already had an upstream
first requirement, so I tried to consolidate the concerns and solve them
upstream. It was challenging to land anything in other kernel team repos at
Google, as they (correctly) wanted to minimize their delta from upstream, so I
needed to work on any major improvements entirely in upstream and had a lot of
support from Google to do that. As such, my focus shifted further from working
directly on Chrome OS into being entirely upstream and being more of a
consultant to internal teams, helping with integration or sometimes
backporting. Since the volume of needed work was so gigantic I needed to find
ways to inspire other developers (both inside and outside of Google) to help.
Once I had a budget I tried to get folks paid (or hired) to work on these areas
when it wasn't already their job.
switch statements. The language would just fall
through between adjacent cases if a break (or other code flow directive)
wasn't present. But this is ambiguous: is the code meant to fall through, or
did the author just forget a break statement? By defining the [[fallthrough]]
statement, and requiring its use in Linux, all switch statements now have
explicit code flow, and the entire class of bugs disappeared. During our
refactoring we actually found that 1 in 10 added [[fallthrough]] statements
were actually missing break statements. This was an extraordinarily common bug!
So getting rid of that ambiguity is where we have been. Another area I've been
spending a bit of time on lately is looking at how defensive security work has
challenges associated with metrics. How do you measure your defensive security
impact? You can't say "because we installed locks on the doors, 20% fewer
break-ins have happened." Much of our signal is always secondary or
retrospective, which is frustrating: this class of flaw was used X much over
the last decade or so, and if we have eliminated that class of flaw and will
never see it again, what is the impact? Is the impact infinity? Attackers will
just move to the next easiest thing. But it means that exploitation gets
incrementally more difficult. As attack surfaces are reduced, the expense of
exploitation goes up.
IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware.
Upstream work related to this session:
hw_done callback to timestamp when the hardware programming of the last atomic commit is complete. Also an API to pre-program color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but might be useful in different ways.
rw
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown
systemctl enable --now kvmd-fan
ro

The default webadmin username/password is admin/admin. To change the passwords run the following commands:
rw
kvmd-htpasswd set admin
passwd root
ro

It is configured to have the root filesystem mounted read-only, which is something I thought had gone out of fashion decades ago. I don't think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot.

By default it uses a self-signed SSL certificate, so with a Chrome based browser you get an error when you connect, where you have to select "advanced" and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get a SSL certificate to use on an internal view of your DNS to make it work normally with SSL.

The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely, lsusb on the machine being managed only reports a single USB device entry for it which covers both keyboard and mouse.

Managing Computers

For a tower PC, disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the OTG port on the KVM, then it should all just work.

For a laptop, connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the OTG port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation; Linux will then follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn't appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can't be simulated externally.
So for using this to work on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings. It is possible to configure the Linux kernel to mirror display to external HDMI and an internal laptop screen, but this doesn't seem useful to me as the use cases for this device don't require that. If you are using it for a server that doesn't have iDRAC/ILO or other management hardware there will be no other monitor and all the output will go through the only connected HDMI device.

My main use for it in the near future will be for supporting remote laptops when Linux has a problem on boot, as an easier option than talking someone through Linux commands; for such use it will be a temporary thing and not something that is desired all the time.

For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings. This configuration option is decent for the case where a fixed set of monitors are used, but not so great if your requirement is "display a login screen on anything that's available". Is there an xdm type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?

Conclusion

The PiKVM is a well engineered and designed product that does what's expected at a low price. There are lots of minor issues with using it which aren't the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.

The feature for connecting to an ATX PSU was unexpected and could be really handy for some people. It's not something I have an immediate use for, but is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package, giving the user choices about how they use it; many vendors would make it an optional extra that costs another $100.
This gives the PiKVM more functionality than many devices that are much more expensive. The web UI wasn't as user friendly as it might have been, but it's a lot better than iDRAC, so I don't have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try to emulate the Fn options and keys for volume control on systems that support it.
Note
This post is a continuation of my previous article on enabling the Unified Kernel Image (UKI) on Debian.
sudo ls /efi/loader/keys/chamunda
db.auth KEK.auth PK.auth
sbsign --key <path-to db.key> --cert <path-to db.crt> \
/usr/lib/systemd/boot/efi/systemd-bootx64.efi
Note
If you encounter warnings about mount options, update your fstab with the umask=0077 option for the EFI partition.
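For reference, a minimal sketch of such an fstab entry (the UUID and mount point are placeholders; adjust to your partition):

```
# /etc/fstab — EFI system partition mounted with restrictive permissions
UUID=ABCD-1234  /efi  vfat  umask=0077  0  2
```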
SecureBootPrivateKey=/path/to/db.key
SecureBootCertificate=/path/to/db.crt
sudo dpkg-reconfigure linux-image-$(uname -r)
# Repeat for other kernel versions if necessary
/etc/kernel/postinst.d/dracut:
dracut: Generating /boot/initrd.img-6.10.9-amd64
Updating kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukicc7vcxhy --output /tmp/kernel-install.staging.QLeGLn/uki.efi
Wrote signed /tmp/kernel-install.staging.QLeGLn/uki.efi
/etc/kernel/postinst.d/zz-systemd-boot:
Installing kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukit7r1hzep --output /tmp/kernel-install.staging.dWVt5s/uki.efi
Wrote signed /tmp/kernel-install.staging.dWVt5s/uki.efi
systemctl reboot --boot-loader-menu=0
Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc. should go to the issue tickets system at the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in Rblpapi version 0.3.15 (2024-09-18)
- A warning is now issued if more than 1000 results are returned (John in #377 addressing #375)
- A few typos in the rblpapi-intro vignette were corrected (Michael Streatfield in #378)
- The continuous integration setup was updated (Dirk in #388)
- Deprecation warnings over char* where C++ className is now preferred have been addressed (Dirk in #391)
- Several package files have been updated (Dirk in #392)
- The request formation has been corrected, and an example was added (Dirk and John in #394 and #396)
- The Bloomberg API has been upgraded to release 3.24.6.1 (Dirk in #397)
Series: | Library Trilogy #2 |
Publisher: | Ace |
Copyright: | 2024 |
ISBN: | 0-593-43796-9 |
Format: | Kindle |
Pages: | 366 |
Publisher: | Tachyon |
Copyright: | 2024 |
ISBN: | 1-61696-415-4 |
Format: | Kindle |
Pages: | 394 |
Sep 11 05:08:03 Warning: mysqldump: Error 2013: Lost connection to server during query when dumping table 1C4Uonkwhe_options at row: 1402
Sep 11 05:08:03 Warning: Failed to dump mysql databases ic_wp
Sep 11 13:50:11 mysql007 mariadbd[580]: 2024-09-11 13:50:11 69577 [Warning] Aborted connection 69577 to db: 'ic_wp' user: 'root' host: 'localhost' (Got an error writing communication packets)
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select option_id,option_name,option_value,autoload from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
+-----------+
option_id
+-----------+
16296351
+-----------+
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select option_value from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-----------+----------------------+--------------+----------+
option_id option_name option_value autoload
+-----------+----------------------+--------------+----------+
16296351 z_taxonomy_image8905 yes
+-----------+----------------------+--------------+----------+
root@mysql007:~#
option_value, right?
root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+---------------------------+
CHAR_LENGTH(option_value)
+---------------------------+
0
+---------------------------+
root@mysql007:~# mysql --protocol=socket -e 'select HEX(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-------------------+
HEX(option_value)
+-------------------+
+-------------------+
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'update 1C4Uonkwhe_options set option_value = "" where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'delete from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table 1C4Uonkwhe_options at row: 1401
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_options VALUES(16296351,"z_taxonomy_image8905","","yes");' ic_wp
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table 1C4Uonkwhe_options at row: 1402
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'create table 1C4Uonkwhe_new_options like 1C4Uonkwhe_options;' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 1402 offset 0;' ic_wp
--- There are only 33 more records; not sure how to specify an unlimited limit, but 100 does the trick.
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 100 offset 1403;' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_new_options' ic_wp >/dev/null;
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options where option_id not in (select option_id from 1C4Uonkwhe_new_options) ;' ic_wp
+-----------+
option_id
+-----------+
18405297
+-----------+
root@mysql007:~#
ORDER BY or you won't get consistent results.
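The lesson generalizes: paging with LIMIT/OFFSET is only stable under an ORDER BY, and keyset pagination (resuming from the last seen key) avoids the pitfall entirely. A sketch of the chunked table-copy approach, using sqlite3 as a stand-in for MariaDB:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE options (option_id INTEGER PRIMARY KEY, option_value TEXT)")
con.executemany("INSERT INTO options VALUES (?, ?)",
                [(i, f"value-{i}") for i in range(1, 101)])
con.execute("CREATE TABLE new_options (option_id INTEGER PRIMARY KEY, option_value TEXT)")

# Copy in batches, resuming from the last copied key instead of OFFSET:
last, batch = 0, 25
while True:
    rows = con.execute(
        "SELECT option_id, option_value FROM options "
        "WHERE option_id > ? ORDER BY option_id LIMIT ?", (last, batch)).fetchall()
    if not rows:
        break
    con.executemany("INSERT INTO new_options VALUES (?, ?)", rows)
    last = rows[-1][0]

copied = con.execute("SELECT COUNT(*) FROM new_options").fetchone()[0]
assert copied == 100
```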
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options order by option_id limit 1 offset 1402' ic_wp ;
+-----------+
option_id
+-----------+
18405297
+-----------+
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
+---------------------------+
CHAR_LENGTH(option_value)
+---------------------------+
50814767
+---------------------------+
root@mysql007:~#
LONGTEXT field, so it should be able to handle it. But now I have a better idea of what
could be going wrong. The name of the option is rewrite_rules, so it seems
like something is going wrong with the generation of that option.
I imagine there is some tweak I can make to allow MariaDB to cough up the value
(read_buffer_size? tmp_table_size?). But I'll start with checking in with
the database owner, because I don't think 35,000 pages of rewrite rules is
appropriate for any site.
SPDLOG_LEVEL.
The NEWS entry for this release follows.
Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in RcppSpdlog version 0.0.18 (2024-09-10)
- Upgraded to upstream release spdlog 1.14.1
- Minor packaging upgrades
- Allow logging levels to be set via environment variable SPDLOG_LEVEL
python3 setup.py test in their Debian packaging. Stefano did a partial archive-rebuild using debusine.debian.net to find the regressions and file bugs.
Debusine will be a powerful tool to do QA work like this for Debian in the future, but it doesn't have all the features needed to coordinate rebuild-testing yet. They are planned to be fleshed out in the next year. In the meantime, Debusine has the building blocks to work through a queue of package building tasks and store the results; it just needs to be driven from outside the system.
So, Stefano started working on a set of tools using the Debusine client API to perform archive rebuilds, found and tagged existing bugs, and filed many more.
/usr, so what we're left with is the long tail of the transition. Rather than fix all of them, Helmut started a discussion on removing packages from unstable and filed a first batch. As libvirt is being restructured in experimental, we're handling the fallout in collaboration with its maintainer Andrea Bolognani. Since base-files validates the aliasing symlinks before upgrading, it was discovered that systemd has its own ideas, with no solution as of yet. Helmut also proposed that dash check for ineffective diversions of /bin/sh and that lintian warn about aliased files.
unstable. We had a number of fairly intrusive changes this year already. August included a little more fallout from the earlier gcc-for-host work, where the C++ include search path would end up being wrong in the generated cross toolchain. A number of packages such as util-linux (twice), libxml2, libcap-ng or systemd had their stage profiles broken. e2fsprogs gained a cycle with libarchive-dev due to having gained support for creating an ext4 filesystem from a tar archive. The restructuring of glib2.0 remains an unsolved problem for now, but libxt and cdebconf should be buildable without glib2.0.
build-riscv64 job makes it possible to test that a package successfully builds on the riscv64 architecture. The RISC-V runner (salsaci riscv64 runner 01) runs on a couple of machines generously provided by lab.rvperf.org. Debian Developers interested in running this job in their projects should enable the runner (salsaci riscv64 runner 01) in Settings / CI / Runners, and follow the instructions available at https://salsa.debian.org/salsa-ci-team/pipeline/#build-job-on-risc-v.
Santiago also took part in discussions about how to optimize the build jobs and reviewed !537 by Andrea Pappacoda, which makes the build-source job only satisfy the Build-Depends and Build-Conflicts fields. Thanks a lot to him!
/etc/hosts configurations.
cross-exe-wrapper proposed by Simon McVittie for use with glib2.0.
ExtUtils::PkgConfig suitable for cross building.
debvm in all situations and implemented a test case using expect.