Search Results: "frx"

16 November 2023

Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint


Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement many optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that install neither linux-firmware nor the extra kernel modules, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, trading a larger download for the disk space and initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive metered connections, the best saving is likely to receive air-gapped updates or to skip updates altogether.

This was achieved by precompressing kernel modules and firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd from split cpio archives - uncompressed for the pre-compressed files, while compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze - while leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
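As a rough illustration of the split-cpio idea (a minimal sketch, assuming hypothetical $EARLY and $USER staging directories; this is not the actual Ubuntu tooling): the kernel unpacks concatenated cpio segments, so the already-compressed modules and firmware can go into an uncompressed archive while only the userspace part gets compressed.

# $EARLY holds the pre-compressed .ko.zst modules and firmware,
# $USER holds the regular userspace initramfs content (both hypothetical paths).
(cd "$EARLY" && find . | cpio -o -H newc) > early.cpio               # left uncompressed
(cd "$USER"  && find . | cpio -o -H newc | zstd -19) > user.cpio.zst # compress userspace only
cat early.cpio user.cpio.zst > initrd.img                            # kernel unpacks both segments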

The bugs discovered in the kernel module loading code likely affect systems that use the LoadPin LSM together with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers will pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they get fixes and features like these first.

The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring the optimal solution is implemented, that everything lands on time, and even implementing portions of the final solution.

Hi, it's me. I am a Staff Engineer at Canonical and we are hiring: https://canonical.com/careers.

Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, plus bikeshedding of all the things, follow below:

For questions and comments, please post to the Kernel section on Ubuntu Discourse.



9 December 2015

Vincent Fourmond: Announcing QSoas, a powerful y=f(x) data analysis software

Why a new data analysis software?
I'm a researcher at the interface between physics, chemistry and biology, and in our team, we pride ourselves on making the most of the data we acquire, especially through quantitative analysis and modelling. In fact, we spend a lot of time fitting simple formulas or complex differential equations to our data. As we were not really satisfied with the data fitting capacities of the software available, we have had our own custom data processing/fitting tool, SOAS, for ages. However, that tool was hard to maintain (Fortran + Fortran libraries interfacing with X11, with ABIs changing every once in a while without notice), impossible to port to non-X11 platforms, not very user-friendly, and not easy to extend at all. So, when I got my permanent position, I wrote a completely new version from scratch, called QSoas, using C++, Qt, Ruby and the GNU Scientific Library. The result is incomparably more powerful, easier to maintain, more user-friendly, and more portable (I build it for Linux, Mac and Windows).
What does it do?
The main features of QSoas are:
What has it done already?
We've relied heavily on QSoas's functionality for the past 3-4 years, and a great part of the team's publications just wouldn't be there without QSoas. More precisely, on selected examples:
I hope it will also help you get more than you previously could from your data (and faster, too!).
I want it, where can I get it?
You can download QSoas version 1.0 from its website. The source code is fully available under the GNU General Public License. For those not too compilation-savvy, we sell pre-built binaries for Windows and Mac, in collaboration with Satt Sud-Est and eValorix. Compilation under Linux is very simple, but I'm willing to come up with a Debian package, should some of you want that. You should definitely have a look at the tutorial and the command reference.

15 August 2013

Daniel Leidert: HP N54L Microserver - energy efficiency and power management

I recently worked on activating power management functions and reducing the energy consumption and noise of my little HP N54L "toy". During this process I tried to avoid the usage of /etc/rc.local and set things up via udev, hdparm and friends. Below are my results.

Actual results
With the following steps my system (N54L + 3x WD20EFRX HDD + 1x WD5003AZEX HDD + LCD mod + case fan mod + Debian Wheezy) uses 27W in idle mode. The USB W-LAN card uses another 10W. In active mode, e.g. compiling source code, the system runs (and boots) with around 57W. The highest power consumption observed is during the startup phase, at 88W.

First things first
For the following steps it might be necessary to have some packages installed that are not mentioned in this post. If I missed something, I'd appreciate a hint. Further, the following steps might produce even better results with a custom kernel. I'm using the stock linux-image-3.2.0-4-amd64 kernel image at the time of writing and I have these packages installed: amd64-microcode, firmware-linux, firmware-linux-free, firmware-linux-nonfree and firmware-atheros (the latter for my WLAN card).

ASPM and ACPI
First I enabled PCIe ASPM in my (non-modded) BIOS and forced it together with ACPI via grub by changing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub so it looks like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force pcie_aspm=force nmi_watchdog=0"
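After editing /etc/default/grub, the boot configuration still has to be regenerated and the machine rebooted for the new parameters to take effect (a small sketch of those two steps):

$ update-grub   # regenerates /boot/grub/grub.cfg from /etc/default/grub (run as root)
$ reboot        # the new kernel command line is only picked up on the next boot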
ASPM is now enabled, as lspci proves:

00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0) (prog-if 00 [Normal decode])
[..]
LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s L1, Latency L0 <64ns, L1 <1us
ClockPM- Surprise- LLActRep+ BwNot+
LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
[..]
00:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780 PCI to PCI bridge (PCIE port 2) (prog-if 00 [Normal decode])
[..]
LnkCap: Port #3, Speed 5GT/s, Width x1, ASPM L0s L1, Latency L0 <64ns, L1 <1us
ClockPM- Surprise- LLActRep+ BwNot+
LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
02:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
[..]
LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Latency L0 <4us, L1 unlimited
ClockPM+ Surprise- LLActRep- BwNot-
LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
[..]
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet PCIe (rev 10)
[..]
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <1us, L1 <64us
ClockPM+ Surprise- LLActRep- BwNot-
LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
[..]
Even so, /sys/module/pcie_aspm/parameters/policy will still show as below:

[default] performance powersave
I'll show how to set the powersave value in /sys/module/pcie_aspm/parameters/policy in the next section. JFTR: these are my installed ACPI related packages: acpi, acpid, acpi-support and acpi-support-base.

Enable powersaving via udev
The following rules file, /etc/udev/rules.d/90-local-n54l.rules, has been inspired by a blog post. It enables powersaving modes for all PCI, SCSI and USB devices as well as ASPM. Further, the internal Radeon card's power profile is set to the low value; there is usually no monitor connected. The file contains these rules:

SUBSYSTEM=="module", KERNEL=="pcie_aspm", ACTION=="add", TEST=="parameters/policy", ATTR parameters/policy ="powersave"

SUBSYSTEM=="i2c", ACTION=="add", TEST=="power/control", ATTR power/control ="auto"
SUBSYSTEM=="pci", ACTION=="add", TEST=="power/control", ATTR power/control ="auto"
SUBSYSTEM=="usb", ACTION=="add", TEST=="power/control", ATTR power/control ="auto"
SUBSYSTEM=="usb", ACTION=="add", TEST=="power/autosuspend", ATTR power/autosuspend ="2"
SUBSYSTEM=="scsi", ACTION=="add", TEST=="power/control", ATTR power/control ="auto"
SUBSYSTEM=="spi", ACTION=="add", TEST=="power/control", ATTR power/control ="auto"

SUBSYSTEM=="drm", KERNEL=="card*", ACTION=="add", DRIVERS=="radeon", TEST=="power/control", TEST=="device/power_method", ATTR device/power_method ="profile", ATTR device/power_profile ="low"

SUBSYSTEM=="scsi_host", KERNEL=="host*", ACTION=="add", TEST=="link_power_management_policy", ATTR link_power_management_policy ="min_power"
Set hard drive spindown timeouts
I decided to send my system drive to standby after 20 minutes and the RAID drives after 15 minutes. This is usually OK, because the RAID isn't always used. hdparm is the right tool to realize this. Many people use the /dev/disk/by-uuid/... syntax here, to avoid having to touch the configuration file if the system configuration changes. Because I'm running a RAID, I couldn't use this syntax, although it might be possible to use /dev/disk/by-id/... instead. For the moment I'm staying with the configuration below. The relevant part of /etc/hdparm.conf is:

[..]

# system harddrive
/dev/sda {
    spindown_time = 240
}

# below are the WD20EFRX drives
/dev/sdb {
    spindown_time = 180
}

/dev/sdc {
    spindown_time = 180
}

/dev/sdd {
    spindown_time = 180
}
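For values in this range, spindown_time is counted in units of 5 seconds, so 240 means 20 minutes and 180 means 15 minutes. The same settings can also be applied right away, without a reboot, by calling hdparm directly (a quick sketch using the devices from above, run as root):

$ hdparm -S 240 /dev/sda                    # 240 * 5 s = 20 minutes
$ hdparm -S 180 /dev/sdb /dev/sdc /dev/sdd  # 180 * 5 s = 15 minutes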

Idle mode
When there is nothing for the system to do, all I hear is the (still a bit noisy) fan of the power supply, which I might replace in the future too, either by testing a different fan or by replacing the whole power supply unit with the fanless FORTRON FSP150-50TNF or (even better) a picoPSU. The system currently shows a power consumption of around 37W in idle mode, whereas the USB W-LAN card itself needs around 10W. There is a possibility to enable power saving mode for this card too. I could add this entry to /etc/udev/rules.d/90-local-n54l.rules:

SUBSYSTEM=="net", ACTION=="add", KERNEL=="wlan*" RUN+="/usr/bin/iw dev %k set power_save on"
But it turned out that the connection became a bit unstable afterwards, so I don't use this rule.

More on the road
There are a lot more options one can easily find via $search_engine. The N54L could be put to sleep and woken up over LAN via Wake-on-LAN (WOL); this is a feature I don't use. I've also read rumors about enabling different sleep/suspend states of the system, which seems to require installing a modded BIOS. Well, I'll post news and changes if they happen to come ;)

24 July 2013

Daniel Leidert: Moving from (software) RAID 1 to RAID 5 and testing performance

Finally I finished migrating my new N54L from a 2-disk software RAID 1 (2x 2TB WD20EFRX) to a 3-disk RAID 5 (3x 2TB WD20EFRX) using mdadm. I found an excellent mini HOWTO which I followed. Resyncing and reshaping the RAID took several days ... top speed was 110,000 K/s ... but left my data untouched. After finally growing the RAID, I now have two partitions: one with 0.5TB (md0) and the other with 3.4TB (md1), both hosting an EXT4 file system; the first one is encrypted via LUKS and the second one is not.

RAID device and file system configuration
Below is a summary of the setup and the values I found working well for my system.
3-disk RAID 5 device / file system

device      chunk (K)   stripe_cache_size   read_ahead_kb   type   encryption   stride   stripe-width
/dev/md0    64          4096                32768           ext4   luks         16       32
/dev/md1    512         16384               32768           ext4   no           128      256
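For reference, the RAID 1 to RAID 5 reshape itself boils down to adding the third disk as a spare and letting mdadm change the level and device count (a hedged sketch, not the exact commands from the HOWTO; device names are illustrative and the backup file guards the critical section):

$ mdadm /dev/md1 --add /dev/sdd2                  # the new partition joins as a spare
$ mdadm --grow /dev/md1 --level=5 --raid-devices=3 --backup-file=/root/md1-grow.backup
$ cat /proc/mdstat                                # reshape progress; this can take days
$ resize2fs /dev/md1                              # finally grow the filesystem to the new size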
I've played around with the system a bit. Changing the chunk size on the fly, however, took a long time. /dev/md0 will contain some backups, so there will probably be a mixture of small and large files. So I chose to only test the values 64K, 128K and 512K (default) for this device. I left the other one untouched, as it will mainly contain large files.

Performance measurement
Below are the results using hdparm to measure performance. First let's take a look at the drives ...

$ hdparm -tT /dev/sd[bcd]
/dev/sdb:
Timing cached reads: 3268 MB in 2.00 seconds = 1634.54 MB/sec
Timing buffered disk reads: 438 MB in 3.00 seconds = 145.87 MB/sec

/dev/sdc:
Timing cached reads: 3292 MB in 2.00 seconds = 1646.32 MB/sec
Timing buffered disk reads: 392 MB in 3.01 seconds = 130.22 MB/sec

/dev/sdd:
Timing cached reads: 3306 MB in 2.00 seconds = 1653.18 MB/sec
Timing buffered disk reads: 436 MB in 3.00 seconds = 145.26 MB/sec

$ hdparm --direct -tT /dev/sd[bcd]
/dev/sdb:
Timing O_DIRECT cached reads: 468 MB in 2.01 seconds = 233.17 MB/sec
Timing O_DIRECT disk reads: 442 MB in 3.00 seconds = 147.15 MB/sec

/dev/sdc:
Timing O_DIRECT cached reads: 468 MB in 2.00 seconds = 233.69 MB/sec
Timing O_DIRECT disk reads: 392 MB in 3.01 seconds = 130.36 MB/sec

/dev/sdd:
Timing O_DIRECT cached reads: 468 MB in 2.00 seconds = 233.94 MB/sec
Timing O_DIRECT disk reads: 442 MB in 3.01 seconds = 146.93 MB/sec
... and now at the RAID devices ...

$ hdparm -tT /dev/md?
/dev/md0:
Timing cached reads: 3320 MB in 2.00 seconds = 1660.37 MB/sec
Timing buffered disk reads: 770 MB in 3.01 seconds = 256.05 MB/sec

/dev/md1:
Timing cached reads: 3336 MB in 2.00 seconds = 1668.07 MB/sec
Timing buffered disk reads: 742 MB in 3.01 seconds = 246.89 MB/sec

$ hdparm --direct -tT /dev/md?
/dev/md0:
Timing O_DIRECT cached reads: 974 MB in 2.00 seconds = 487.08 MB/sec
Timing O_DIRECT disk reads: 770 MB in 3.01 seconds = 256.17 MB/sec

/dev/md1:
Timing O_DIRECT cached reads: 784 MB in 2.00 seconds = 391.18 MB/sec
Timing O_DIRECT disk reads: 742 MB in 3.01 seconds = 246.42 MB/sec
... and now let's see what actual speed we reach using dd. First let's check the encrypted device:

RAID-5 /dev/md0 (LUKS encrypted EXT4): chunk=64K, stripe_cache_size=4096,
readahead(blockdev)=65536, stride=16, stripe-width=32 ...

$ dd if=/dev/zero of=/mnt/md0/10g.img bs=1k count=10000000
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB) copied, 64.1227 s, 160 MB/s

$ dd if=/mnt/md0/10g.img of=/dev/null bs=1k count=10000000
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB) copied, 85.768 s, 119 MB/s
Well, read speed is consistently lower than write speed for the encrypted file system. Let's take a look at the non-encrypted device:

RAID-5 /dev/md1 (EXT4): chunk=512K, stripe_cache_size=16384,
readahead(blockdev)=65536, stride=128, stripe-width=256 ...

$ dd if=/dev/zero of=/mnt/md1/10g.img bs=1k count=10000000
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB) copied, 37.0016 s, 277 MB/s

$ dd if=/mnt/md1/10g.img of=/dev/null bs=1k count=10000000
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB) copied, 33.5901 s, 305 MB/s
Looks nice to me.

How to set these values
I use a mixture of udev, util-linux and e2fsprogs to set the values. First I checked which values for stripe_cache_size and read_ahead_kb work best for me. For the LUKS encrypted EXT4 I got varying results, with the best performance at values of 4096, 8192 and 16384 for stripe_cache_size. I settled on the first value, because it showed the best performance more often than the others.

$ less /etc/udev/rules.d/90-local-n54l.rules | grep stripe_cache
SUBSYSTEM=="block", KERNEL=="md0", ACTION=="add", TEST=="md/stripe_cache_size", TEST=="queue/read_ahead_kb", ATTR{md/stripe_cache_size}="4096", ATTR{queue/read_ahead_kb}="32768", ATTR{bdi/read_ahead_kb}="32768"
SUBSYSTEM=="block", KERNEL=="md1", ACTION=="add", TEST=="md/stripe_cache_size", TEST=="queue/read_ahead_kb", ATTR{md/stripe_cache_size}="16384", ATTR{queue/read_ahead_kb}="32768", ATTR{bdi/read_ahead_kb}="32768"
The read_ahead_kb value can also be set using blockdev. Note that this command expects a value in 512-byte sectors, whereas read_ahead_kb is a size in KiB; hence the different values (65536 sectors * 512 bytes = 32768 KiB):

$ blockdev --setra 65536 /dev/md[01]
Tuning the EXT4 file system performance with calculated values using tune2fs:

$ tune2fs -E stride=16,stripe-width=32 -O dir_index /dev/mapper/_dev_md0
$ tune2fs -E stride=128,stripe-width=256 -O dir_index /dev/md1
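The stride and stripe-width values above follow directly from the chunk size and the number of data-bearing disks (a back-of-the-envelope check, assuming the usual 4K ext4 block size; a 3-disk RAID 5 stripes data over 2 disks):

$ echo "md0: stride=$((64/4)) stripe-width=$((64/4*2))"
md0: stride=16 stripe-width=32
$ echo "md1: stride=$((512/4)) stripe-width=$((512/4*2))"
md1: stride=128 stripe-width=256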
Disabling NCQ reduced the speed a lot for me, so I left the values as-is and did not fiddle with it any further:

$ cat /sys/block/sd[bcd]/device/queue_depth
31
31
31
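For completeness, NCQ could be disabled per drive by writing a queue depth of 1 to the same sysfs files (a sketch only, run as root); as noted above it cost a lot of throughput here, so the default of 31 stays:

$ echo 1 > /sys/block/sdb/device/queue_depth    # disable NCQ on sdb
$ echo 31 > /sys/block/sdb/device/queue_depth   # restore the default depth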

5 July 2013

Daniel Leidert: Dear Lazyweb ... why does lshw (still) identify my Linux raid autodetect partition(s) as NTFS volumes?

In a very first attempt, my disk:2 was partitioned and initialized as follows:
/dev/sdc1    1.5TB    NTFS
/dev/sdc2    0.5TB    EXT4
This was later changed to what you can see below and what fdisk correctly reports. These partitions all use the EXT4 file system.
[..]
Device Boot Start End Blocks Id System
/dev/sdb1 2048 524290047 262144000 fd Linux raid autodetect
/dev/sdb2 524290048 3907029167 1691369560 fd Linux raid autodetect
[..]
Device Boot Start End Blocks Id System
/dev/sdc1 2048 524290047 262144000 fd Linux raid autodetect
/dev/sdc2 524290048 3907029167 1691369560 fd Linux raid autodetect
[..]
I'm wondering why lshw and parted show some of the partitions as still being NTFS volumes. Check out the output below. How can this be fixed? What is missing? Do I need to erase some header data?

Model: ATA WDC WD20EFRX-68A (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary raid
2 268GB 2000GB 1732GB primary raid


Model: ATA WDC WD20EFRX-68A (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary ntfs raid
2 268GB 2000GB 1732GB primary raid
[..]
*-disk:1
description: ATA Disk
product: WDC WD20EFRX-68A
vendor: Western Digital
physical id: 1
bus info: scsi@3:0.0.0
logical name: /dev/sdb
version: 80.0
serial: WD-WCC300354221
size: 1863GiB (2TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 sectorsize=4096
*-volume:0
description: Linux raid autodetect partition
physical id: 1
bus info: scsi@3:0.0.0,1
logical name: /dev/sdb1
capacity: 250GiB
capabilities: primary multi
*-volume:1
description: Linux raid autodetect partition
physical id: 2
bus info: scsi@3:0.0.0,2
logical name: /dev/sdb2
capacity: 1613GiB
capabilities: primary multi
*-disk:2
description: ATA Disk
product: WDC WD20EFRX-68A
vendor: Western Digital
physical id: 2
bus info: scsi@4:0.0.0
logical name: /dev/sdc
version: 80.0
serial: WD-WCC1T0567095
size: 1863GiB (2TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 sectorsize=4096 signature=000a4d07
*-volume:0
description: Windows NTFS volume
physical id: 1
bus info: scsi@4:0.0.0,1
logical name: /dev/sdc1
version: 3.1
serial: 013e-8473
size: 1396GiB
capabilities: primary multi ntfs initialized
configuration: clustersize=4096 created=2013-06-18 06:24:11 filesystem=ntfs label=MEDIA state=clean
*-volume:1
description: Linux raid autodetect partition
physical id: 2
bus info: scsi@4:0.0.0,2
logical name: /dev/sdc2
capacity: 1613GiB
capabilities: primary multi
[..]
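One guess at an answer (my assumption, not something confirmed in the post): lshw and parted probe for filesystem signatures, and the boot-sector signature of the old 1.5TB NTFS /dev/sdc1 may simply still be present inside the newly created, smaller partition. If that is the case, listing and erasing just that stale signature with wipefs (from util-linux) would be one way to clean it up; double-check the reported offsets first so the mdraid signature is left alone:

$ wipefs /dev/sdc1             # list all signatures found on the partition, with their offsets
$ wipefs -o 0x3 /dev/sdc1      # example only: erase the single signature reported at that offset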

28 June 2013

Daniel Leidert: Idea: A new toy (ein neues Spielzeug) ... HP Microserver N54L

I regularly make backups of my systems. They are stored on the system drive of my notebook and duplicated to mobile storage via rsync. For this I use a USB hard drive, which also contains media files and is regularly connected to the TV. In principle I therefore consider my data safe. But recently I hit the limits of its capacity. I had been looking for an alternative for a while, not least because much larger hard drives are available today and my laptop has an eSATA port, which is faster than USB 2.0. My preferred option was a FANTEC DB-ALU3e enclosure with a WD Red WD20EFRX 2TB (5400 RPM) drive, which is certified for 24/7 operation (and also has an excellent reputation). The combination ran very well and fast, looks classy, but needs an external power supply. I can absolutely recommend it as a storage solution. However, at that point I had further requirements that could not be satisfied with the solution above. I had been toying with the idea of a RAID-1 NAS for quite some time. In addition, the strain that building Debian packages puts on my notebook's hard drive shows up in its S.M.A.R.T. status. So I wanted to hand that work over to a robust local buildd machine and thought about buying a cheap computer. A NAS, however, consumes considerably less power than a desktop machine. So how can a buildd and an energy-saving NAS be combined? By chance I came across the HP ProLiant MicroServer N40L at a local dealer. The offer sounded great, so I decided to buy my new toy: an HP ProLiant MicroServer N54L, which is going to handle the following tasks:
Data backup
The data backup runs cron-driven onto the RAID array, into a separate (encrypted) partition. The S.M.A.R.T. status of the drives is monitored via smartd. Should a disk fail, there is a good chance of rescuing the data. A future option would also be a RAID-6 array.
NAS / File-Server
The device has up to 6 SATA ports, four of which are occupied by default via the drive cage. The included 250GB hard drive will hold the operating system for now, and on the three remaining slots three WD Red WD20EFRX 2TB (5400 RPM) drives will provide the necessary space as a RAID-5 array. Without an add-on card, the latter can only be realized via software RAID and mdadm.
buildd
The operating system will be Debian GNU/Linux. The main memory will be upgraded to at least 8GB of ECC RAM.
HTPC (XBMC)
The MicroServer cannot be attached to a TV as mass storage. Therefore, presumably XBMC, in combination with a USB 3.0 BR/DVD player, will turn the server into an entertainment device.
The whole thing should consume as little power as possible and be quiet. For the connection to the local network I decided on WLAN, since no Gigabit Ethernet is available. I need the following parts for "my" server:
Server
HP ProLiant N54L MicroServer with Turion II Neo 2.2 GHz, 2GB RAM/250GB HDD - approx. 200 EUR (locally)
Cooling / noise
Scythe Slip Stream 120mm case fan, 800 RPM, 11 dB - approx. 9 EUR (SY1225SL12L)
Scythe Slip Stream 120mm case fan, 500 RPM, 7.5 dB - approx. 8 EUR (SY1225SL12SL)
Network
TP-Link TL-WN722N(C) 150Mbps USB adapter - approx. 15 EUR (TL-WN722N(C))
File server
3x WD Red WD20EFRX 2TB 5400 RPM SATA600 for 24/7 NAS use - approx. 95 EUR each (WD20EFRX)
buildd
8GB (2x4GB) Kingston ValueRAM DDR3-1333 CL9 ECC module RAM kit - approx. 85 EUR (KVR1333D3E9SK2/8G)
16GB (2x8GB) Kingston ValueRAM DDR3-1333 CL9 ECC module RAM kit - approx. 145 EUR (KVR1333D3E9SK2/16G)
HTPC
Sapphire Radeon HD 5450/6450/6570/6670/7750 PCIe 16x low-profile, passive/active - approx. 25..100 EUR (11166-45-20G, 11190-09-20G, 11191-27-20G, 11191-02-20G, 11192-18-20G, 11202-10-20G)
SILVERSTONE PCIe 1x USB3.0 2x int. 2x ext. - approx. 21 EUR (SST-EC04-P)
Logitech K400 or Keysonic ACK-540RF - approx. 40 EUR (920-003100 or ACK-540 RF)
BR/DVD player or burner with USB 3.0 connection - 50..100 EUR
LCD mod
LCD display module, at least 4x20 - approx. 10 EUR
Also interesting is the option of a real RAID card. I came across the IBM ServeRAID M1015 (46M0831) and this hint. If you instead buy the "key" to unlock its full feature set, you pay (locally) an additional approx. 150 EUR! But that's just an aside. Useful links:

29 November 2007

Ondřej Čertík: Debian meeting in Merida, Spain

Right now, some Debian Developers (and also not-yet-Developers, like me :) are at the work sessions in Extremadura; I am at the QA and release teams meeting.

We started in the morning with presentations (see also the schedule). Any comments and suggestions are welcome; please add them below the post.

Lucas Nussbaum presenting:


Most of us:



And in detail, names from left to right. Cyril Brulebois, Gonéri Le Bouder:


Luk Claes, Marc 'HE' Brockschmidt, Jörg Jaspert, Lars Wirzenius:


Fabio Tranchitella, Bernd Zeimetz, Mario Iseli, Luk Claes:


Filippo Giunchedi, Stefano Zacchiroli, Tzafrir Cohen, Simon Richter, Faidon Liambotis:


And again, so that Faidon is visible:

23 February 2007

MJ Ray: Debian: Links for 2007-02-23

While we're fixing some network problems, here is a musical interlude^W^Wset of links...
Explaining Debian at parties
'Do not try to convince people to try it out or you will become a support hotline. Instead, tell them about LUGs and mailing lists (and how cool it can be) and put the challenge on them: "if you try it, those are your support channels. Are you up for it?"'
Index of /~joerg/templates
Explanations of some common reasons for getting REJECT from ftp-master.
gravityboy: Politics
"If we're going to make interactions between developers more civil and friendly and less personally hostile then we need a different kind of stick. Maybe a carrot of sorts is in order too?"
Consensus and community review in open source and open standards Decentralized Information Group (DIG) Breadcrumbs
'this is quite a stretch with respect to the normal dictionary meaning of "consensus"'
Social Drag
How many Debian Developers are dressed in Social Drag?
debian Gender Research
More research to add to the surveys page
#354950 - ITP: gauche-readline -- A readline-like library for the Gauche Scheme implementation - Debian Bug report logs
Contemplating sponsoring again.
The joys of English cuisine are abomination in God's sight Ted Walther's Private Diary
Blast from debian past: Ted Walther's gullibility for stereotyping is an abomination in God's sight and those foods are hardly typical English. For instance, the only one on his PS list that I've eaten recently is flapjack (not flapjacks, which is something else and not English AFAIK) and I can only remember eating five of the list in the last year. Also, the linked site seems to concentrate on processed pap from multinationals and supermarkets, rather than real trad English food. Example: "Yorkshire puddings are basically a bread type product. You can use them in place of dinner rolls." Why not make 'em a bit harder, cook 'em a bit longer and you can use them for cobbling the street instead? Yorkshire pudding should be a baked batter with a soft middle and a crisp puffy edge. Nothing like bread. And don't get me started about the pasties. Honestly, if you learn about food from such sources, no wonder you'd think it an abomination.
Re: Position Statement to the Dunc-Tanc "experiment"
"had somebody wanted to kill (or inflict maximum damage) to the project, he couldn't have done any better than the current DPL"
Releas-o-meter
Interesting idea - why no cron-run copy on the web?
EUPL v 1.0 revokable ?
One that may appear in a future 'bits from debian-legal' - it seems the EU are popularising a type of termination clause. Yay.
Future of Debian Weekly News
As I hoped, Joey is explicitly calling for sponsorship for DWN production.
DPL 2007, Current Candidates
still don't have a title is keeping a list of DPL candidates, so I won't put one up this year.
#411041 - request for debian-l10n-persian mailing list - Debian Bug report logs
Tell all your persian-speaking friends to show their support!
Interprete I cite, you cite, I rant
Lack of references is a problem on lists like debian-legal where explaining everything again can drive you insane and still not convince some people.
Lucas Nussbaum's Blog State of software for Suspend to RAM/Disk ?
More tips on the "Will I Work Or Not?" suspend problems.
only on debian-devel - she.geek.nz
> Am I mistaken or this is a flame on how to flame ?