Search Results: "blendi"

20 December 2023

Melissa Wen: The Rainbow Treasure Map Talk: Advanced color management on Linux with AMD/Steam Deck.

Last week marked a major milestone for me: the AMD driver-specific color management properties reached upstream linux-next! And to celebrate, I'm happy to share the slides and notes from my 2023 XDC talk, The Rainbow Treasure Map, along with the individual recording that just dropped last week on YouTube. Talk about happy coincidences!

Steam Deck Rainbow: Treasure Map & Magic Frogs While I may be bubbly and chatty in everyday life, the stage isn't exactly my comfort zone (hallway talks are more my speed). But the journey of developing the AMD color management properties was so full of discoveries that I simply had to share the experience. Witnessing the fantastic work of Jeremy and Joshua bringing it all to life on the Steam Deck OLED was like uncovering magical ingredients and whipping up something truly enchanting. For XDC 2023, we split our Rainbow journey into two talks. My focus, The Rainbow Treasure Map, explored the new color features we added to the Linux kernel driver, diving deep into the hardware capabilities of AMD/Steam Deck. Joshua then followed with The Rainbow Frogs and showed the breathtaking color magic released on Gamescope thanks to the power unlocked by the kernel driver's Steam Deck color properties.

Packing a Rainbow into 15 Minutes I had so much to tell, but a half-slot talk meant crafting a concise presentation. To squeeze everything into 15 minutes (and calm my pre-talk jitters a bit!), I drafted and practiced those slides and notes countless times. So grab your map, and let's embark on the Rainbow journey together!

Slide 1: The Rainbow Treasure Map - Advanced Color Management on Linux with AMD/SteamDeck Intro: Hi, I'm Melissa from Igalia and welcome to the Rainbow Treasure Map, a talk about advanced color management on Linux with AMD/SteamDeck.

Slide 2: List useful links for this technical talk Useful links: First of all, if you are not used to the topic, you may find these links useful.
  1. XDC 2022 - I'm not an AMD expert, but... - Melissa Wen
  2. XDC 2022 - Is HDR Harder? - Harry Wentland
  3. XDC 2022 Lightning - HDR Workshop Summary - Harry Wentland
  4. Color management and HDR documentation for FOSS graphics - Pekka Paalanen et al.
  5. Cinematic Color - 2012 SIGGRAPH course notes - Jeremy Selan
  6. AMD Driver-specific Properties for Color Management on Linux (Part 1) - Melissa Wen
Slide 3: Why do we need advanced color management on Linux? Context: When we talk about colors in the graphics chain, we should keep in mind that we have a wide variety of source content colorimetry, a variety of output display devices, and also the internal processing. Users expect consistent color reproduction across all these devices. The userspace can use GPU-accelerated color management to get it. But this also requires an interface with display kernel drivers that is currently missing from the DRM/KMS framework.

Slide 4: Describe our work on AMD driver-specific color properties Since April, I've been bothering the DRM community by sending patchsets from the work of me and Joshua to add driver-specific color properties to the AMD display driver. In parallel, discussions on defining a generic color management interface are still ongoing in the community. Moreover, we are still not clear about the diversity of color capabilities among hardware vendors. To bridge this gap, we defined a color pipeline for Gamescope that fits the latest versions of AMD hardware. It delivers advanced color management features for gamut mapping, HDR rendering, SDR on HDR, and HDR on SDR.

Slide 5: Describe the AMD/SteamDeck - our hardware AMD/Steam Deck hardware: AMD frequently releases new GPU and APU generations. Each generation comes with a DCN version with display hardware improvements. Therefore, keep in mind that this work uses the AMD Steam Deck hardware and its kernel driver. The Steam Deck is an APU with a DCN3.01 display driver, from the DCN3 family. It's important to have this information since newer AMD DCN drivers inherit implementations from previous families, but also each generation of AMD hardware may introduce new color capabilities. Therefore, I recommend familiarizing yourself with the hardware you are working on.

Slide 6: Diagram with the three layers of the AMD display driver on Linux The AMD display driver in the kernel space: It consists of three layers: (1) the DRM/KMS framework, (2) the AMD Display Manager, and (3) the AMD Display Core. We extended the color interface exposed to userspace by leveraging existing DRM resources and connecting them using driver-specific functions for color property management.

Slide 7: Three-layers diagram highlighting AMD Display Manager, DM - the layer that connects DC and DRM Bridging DC color capabilities and the DRM API required significant changes in the color management of the AMD Display Manager - the Linux-dependent part that connects the AMD DC interface to the DRM/KMS framework.

Slide 8: Three-layers diagram highlighting AMD Display Core, DC - the shared code The AMD DC is the OS-agnostic layer. Its code is shared between platforms and DCN versions. Examining this part helps us understand the AMD color pipeline and hardware capabilities, since the machinery for hardware settings and resource management is already there.

Slide 9: Diagram of the AMD Display Core Next architecture with main elements and data flow The newest architecture for AMD display hardware is the AMD Display Core Next.

Slide 10: Diagram of the AMD Display Core Next where only DPP and MPC blocks are highlighted In this architecture, two blocks have the capability to manage colors:
  • Display Pipe and Plane (DPP) - for pre-blending adjustments;
  • Multiple Pipe/Plane Combined (MPC) - for post-blending color transformations.
Let's see what we have in the DRM API for pre-blending color management.

Slide 11: Blank slide with no content, only a title 'Pre-blending: DRM plane' DRM plane color properties: This is the DRM color management API before blending. Nothing! Except two basic DRM plane properties: color_encoding and color_range for the input colorspace conversion, which is not covered by this work.

Slide 12: Diagram with color capabilities and structures in the AMD DC layer without any DRM plane color interface (before blending), only the DRM CRTC color interface for post-blending In case you're not familiar with AMD shared code, what we need to do is basically draw a map and navigate there! We have some DRM color properties after blending, but nothing before blending yet. But much of the hardware programming was already implemented in the AMD DC layer, thanks to the shared code.

Slide 13: Previous diagram with a rectangle to highlight the empty space in the DRM plane interface that will be filled by AMD plane properties Still, both the DRM interface and its connection to the shared code were missing. That's when the search begins!

Slide 14: Color Pipeline Diagram with the plane color interface filled by AMD plane properties but without connections to AMD DC resources AMD driver-specific color pipeline: Looking at the color capabilities of the hardware, we arrive at this initial set of properties. The path wasn't exactly like that. We had many iterations and discoveries until we reached this pipeline.

Slide 15: Color Pipeline Diagram connecting AMD plane degamma properties, LUT and TF, to AMD DC resources The Plane Degamma is our first driver-specific property before blending. It's used to linearize the color space from encoded values to light-linear values.

Slide 16: Describe plane degamma properties and hardware capabilities We can use a pre-defined transfer function or a user lookup table (in short, LUT) to linearize the color space. Pre-defined transfer functions for plane degamma are hardcoded curves that go to a specific hardware block called DPP Degamma ROM. It supports the following transfer functions: sRGB EOTF, BT.709 inverse OETF, PQ EOTF, and the pure power curves Gamma 2.2, Gamma 2.4 and Gamma 2.6. We also have a one-dimensional LUT. This 1D LUT has four thousand ninety-six (4096) entries, the usual 1D LUT size in the DRM/KMS. It's an array of drm_color_lut that goes to the DPP Gamma Correction block.

Slide 17: Color Pipeline Diagram connecting AMD plane CTM property to AMD DC resources We also now have a color transformation matrix (CTM) for color space conversion.

Slide 18: Describe plane CTM property and hardware capabilities It's a 3x4 matrix of fixed points that goes to the DPP Gamut Remap block. Both pre- and post-blending matrices previously went to the same color block. We worked on detaching them to clear both paths. Now each CTM goes its own way.

Slide 19: Color Pipeline Diagram connecting AMD plane HDR multiplier property to AMD DC resources Next, the HDR Multiplier. The HDR Multiplier is a factor applied to the color values of an image to increase their overall brightness.

Slide 20: Describe plane HDR mult property and hardware capabilities This is useful for converting images from a standard dynamic range (SDR) to a high dynamic range (HDR). As it can range beyond [0.0, 1.0], subsequent transforms need to use the PQ (HDR) transfer functions.

Slide 21: Color Pipeline Diagram connecting AMD plane shaper properties, LUT and TF, to AMD DC resources And we need a 3D LUT.
But a 3D LUT has a limited number of entries in each dimension, so we want to use it in a colorspace that is optimized for human vision. That means a non-linear space. To deliver it, userspace may need one 1D LUT before the 3D LUT to delinearize content and another one after, to linearize content again for blending.

Slide 22: Describe plane shaper properties and hardware capabilities The pre-3D-LUT curve is called the Shaper curve. Unlike the Degamma TF, there are no hardcoded curves for the shaper TF, but we can use the AMD color module in the driver to build shaper curves from pre-defined coefficients. The color module combines the TF and the user LUT values into the LUT that goes to the DPP Shaper RAM block.

Slide 23: Color Pipeline Diagram connecting AMD plane 3D LUT property to AMD DC resources Finally, our rockstar, the 3D LUT. The 3D LUT is perfect for complex color transformations and adjustments between color channels.

Slide 24: Describe plane 3D LUT property and hardware capabilities The 3D LUT is also more complex to manage and requires more computational resources; as a consequence, its number of entries is usually limited. To overcome this restriction, the array contains samples from the approximated function and values between samples are estimated by tetrahedral interpolation. AMD supports 17 and 9 as the size of a single dimension. Blue is the outermost dimension, red the innermost.

Slide 25: Color Pipeline Diagram connecting AMD plane blend properties, LUT and TF, to AMD DC resources As mentioned, we need a post-3D-LUT curve to linearize the color space before blending. This is done by the Blend TF and LUT.

Slide 26: Describe plane blend properties and hardware capabilities Similar to the shaper TF, there are no hardcoded curves for the Blend TF. The pre-defined curves are the same as in the Degamma block, but calculated by the color module. The resulting LUT goes to the DPP Blend RAM block.

Slide 27: Color Pipeline Diagram with all AMD plane color properties connected to AMD DC resources and links showing the conflict between plane and CRTC degamma Now we have everything connected before blending. As a conflict between plane and CRTC Degamma was inevitable, our approach doesn't accept both being set at the same time.

Slide 28: Color Pipeline Diagram connecting AMD CRTC gamma TF property to AMD DC resources We also optimized the conversion of the framebuffer to wire encoding by adding support for a pre-defined CRTC Gamma TF.

Slide 29: Describe CRTC gamma TF property and hardware capabilities Again, there are no hardcoded curves, and TF and LUT are combined by the AMD color module. The same types of shaper curves are supported. The resulting LUT goes to the MPC Gamma RAM block.

Slide 30: Color Pipeline Diagram with all AMD driver-specific color properties connected to AMD DC resources Finally, we arrive at the final version of the DRM/AMD driver-specific color management pipeline. With this knowledge, you're ready to better enjoy the rainbow treasure of AMD display hardware and the world of graphics computing.

Slide 31: SteamDeck/Gamescope Color Pipeline Diagram with rectangles labeling each block of the pipeline with the related AMD color property With this work, Gamescope/Steam Deck embraces the color capabilities of the AMD GPU. We highlight here how we map the Gamescope color pipeline to each AMD color block.

Slide 32: Final slide. Thank you! Future works: The search for the rainbow treasure is not over! The Linux DRM subsystem contains many hidden treasures from different vendors.
We want more complex color transformations and adjustments available on Linux. We also want to expose all GPU color capabilities from all hardware vendors to the Linux userspace. Thanks to Joshua and Harry for this joint work, and to the Linux DRI community for all the feedback and reviews. The amazing part of this work comes in the next talk, with Joshua and The Rainbow Frogs! Any questions?
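To make the pipeline less abstract, here is a rough userspace sketch of driving two of these properties through the libdrm atomic API. The property names (AMD_PLANE_DEGAMMA_TF, AMD_PLANE_LUT3D) follow the patch series, but treat them, the helper find_plane_prop(), and the enum value passed as degamma_tf as illustrative assumptions rather than a verified recipe:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical helper: look up a property ID by name on a DRM plane. */
static uint32_t find_plane_prop(int fd, uint32_t plane_id, const char *name)
{
	drmModeObjectProperties *props =
		drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
	uint32_t id = 0;

	for (uint32_t i = 0; props && i < props->count_props; i++) {
		drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);

		if (p && !strcmp(p->name, name))
			id = p->prop_id;
		drmModeFreeProperty(p);
	}
	drmModeFreeObjectProperties(props);
	return id;
}

/* Sketch: select a pre-defined degamma TF and attach a 17x17x17 3D LUT
 * blob (an array of struct drm_color_lut, blue outermost) in a single
 * atomic commit. Error handling is elided. */
static int set_plane_color(int fd, uint32_t plane_id,
			   const struct drm_color_lut *lut3d,
			   uint64_t degamma_tf)
{
	uint32_t lut3d_blob = 0;
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	drmModeCreatePropertyBlob(fd, lut3d,
				  17 * 17 * 17 * sizeof(*lut3d), &lut3d_blob);
	drmModeAtomicAddProperty(req, plane_id,
				 find_plane_prop(fd, plane_id, "AMD_PLANE_DEGAMMA_TF"),
				 degamma_tf);
	drmModeAtomicAddProperty(req, plane_id,
				 find_plane_prop(fd, plane_id, "AMD_PLANE_LUT3D"),
				 lut3d_blob);
	ret = drmModeAtomicCommit(fd, req, 0, NULL);
	drmModeAtomicFree(req);
	return ret;
}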
References:
  1. Slides of the talk The Rainbow Treasure Map.
  2. Youtube video of the talk The Rainbow Treasure Map.
  3. Patch series for AMD driver-specific color management properties (upstream Linux v6.8).
  4. SteamDeck/Gamescope color management pipeline.
  5. XDC 2023 website.
  6. Igalia website.

13 December 2023

Melissa Wen: 15 Tips for Debugging Issues in the AMD Display Kernel Driver

A self-help guide for examining and debugging the AMD display driver within the Linux kernel/DRM subsystem. It's based on my experience as an external developer working on the driver, and is shared with the goal of helping others navigate the driver code. Acknowledgments: These tips were gathered thanks to the countless help received from AMD developers during the driver development process. The list below was obtained by examining open source code, reviewing public documentation, playing with tools, asking in public forums, and also with the help of my former GSoC mentor, Rodrigo Siqueira.

Pre-Debugging Steps: Before diving into an issue, it's crucial to perform two essential steps: 1) Check the latest changes: Ensure you're working with the latest AMD driver modifications located in the amd-staging-drm-next branch maintained by Alex Deucher. You may also find bug fixes for newer kernel versions on branches that follow the name pattern drm-fixes-<date>. 2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.

Understanding the issue: Do you really need to change this? Where should you start looking for changes? 3) Is the issue in the AMD kernel driver or in the userspace?: Identifying the source of the issue is essential regardless of the GPU vendor. Sometimes this can be challenging, so here are some helpful tips:
  • Record the screen: Capture the screen using a recording app while experiencing the issue. If the bug appears in the capture, it's likely a userspace issue, not the kernel display driver.
  • Analyze the dmesg log: Look for error messages related to the display driver in the dmesg log. If the error message appears before the message [drm] Display Core v..., it's not likely a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here.
4) AMD Display Manager vs. AMD Display Core: The AMD display driver consists of two components:
  • Display Manager (DM): This component interacts directly with the Linux DRM infrastructure. Occasionally, issues can arise from misinterpretations of DRM properties or features. If the issue doesn't occur on other platforms with the same AMD hardware - for example, it only happens on Linux but not on Windows - it's more likely related to the AMD DM code.
  • Display Core (DC): This is the platform-agnostic part responsible for setting and programming hardware features. Modifications to the DC usually require validation on other platforms, like Windows, to avoid regressions.
5) Identify the DC HW family: Each AMD GPU has variations in its hardware architecture. Features and helpers differ between families, so determining the relevant code for your specific hardware is crucial.
  • Find GPU product information in Linux/AMD GPU documentation
  • Check the dmesg log for the Display Core version (since this commit in Linux kernel v6.3). For example:
    • [drm] Display Core v3.2.241 initialized on DCN 2.1
    • [drm] Display Core v3.2.237 initialized on DCN 3.0.1

Investigating the relevant driver code: Keep unrelated driver code from affecting your investigation. 6) Narrow the code inspection down to one DC HW family: the relevant code resides in a directory named after the DC number. For example, the DCN 3.0.1 driver code is located at drivers/gpu/drm/amd/display/dc/dcn301. AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue.

7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20 and dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes to investigate the right portion. You can leverage ftrace for supplemental validation. To give an example, it was useful when I was updating DCN3 color mapping to correctly use the new post-blending color capabilities. Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed and whether the implementation from a previous family doesn't fit the new HW resources or design well. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems. This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect on DCN3 hardware, but working correctly for the DCN2 family. I solved the issue in two steps, thanks to community feedback and validation.

8) Check the hardware capability screening in the driver: You can currently find a list of display hardware capabilities in the drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function. Using DCN301 for illustration, here is the list of its hardware caps:
	/*************************************************
	 *  Resource + asic cap harcoding                *
	 *************************************************/
	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
	pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
	pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
	dc->caps.max_downscale_ratio = 600;
	dc->caps.i2c_speed_in_khz = 100;
	dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
	dc->caps.max_cursor_size = 256;
	dc->caps.min_horizontal_blanking_period = 80;
	dc->caps.dmdata_alloc_size = 2048;
	dc->caps.max_slave_planes = 2;
	dc->caps.max_slave_yuv_planes = 2;
	dc->caps.max_slave_rgb_planes = 2;
	dc->caps.is_apu = true;
	dc->caps.post_blend_color_processing = true;
	dc->caps.force_dp_tps4_for_cp2520 = true;
	dc->caps.extended_aux_timeout_support = true;
	dc->caps.dmcub_support = true;
	/* Color pipeline capabilities */
	dc->caps.color.dpp.dcn_arch = 1;
	dc->caps.color.dpp.input_lut_shared = 0;
	dc->caps.color.dpp.icsc = 1;
	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
	dc->caps.color.dpp.post_csc = 1;
	dc->caps.color.dpp.gamma_corr = 1;
	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
	dc->caps.color.dpp.hw_3d_lut = 1;
	dc->caps.color.dpp.ogam_ram = 1;
	// no OGAM ROM on DCN301
	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
	dc->caps.color.dpp.ocsc = 0;
	dc->caps.color.mpc.gamut_remap = 1;
	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
	dc->caps.color.mpc.ogam_ram = 1;
	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
	dc->caps.color.mpc.ocsc = 1;
	dc->caps.dp_hdmi21_pcon_support = true;
	/* read VBIOS LTTPR caps */
	if (ctx->dc_bios->funcs->get_lttpr_caps) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_lttpr_enable = 0;

		bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
		dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
	}

	if (ctx->dc_bios->funcs->get_lttpr_interop) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_interop_enabled = 0;

		bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
		dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
	}
Keep in mind that the documentation of color capabilities is available in the Linux kernel Documentation.

Understanding the development history: What has brought us to the current state? 9) Pinpoint relevant commits: Use git log and git blame to identify commits targeting the code section you're interested in. 10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format drm/amd/display: 3.2.221 that determines a display release. It's useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can assume each DC_VER bump takes around one week. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as the ones with Tested-by: Daniel Wheeler.

Reducing the inspection area: Focus on what really matters. 11) Identify involved HW blocks: This helps isolate the issue. You can find more information about DCN HW blocks in the DCN Overview documentation. In summary:
  • Plane issues are closer to HUBP and DPP.
  • Blending/Stream issues are closer to MPC, OPP and OPTC. They are related to DRM CRTC subjects.
This information was useful when debugging a hardware rotation issue where the cursor plane got clipped off in the middle of the screen; the issue was finally addressed by two patches. 12) Issues around bandwidth (glitches) and clocks: These may be affected by calculations done in these HW blocks and by HW-specific values. The recalculation equations are found in the DML folder. DML stands for Display Mode Library. It's in charge of all required configuration parameters supported by the hardware for multiple scenarios. See more in the AMD DC Overview kernel docs. It's a math library that optimally configures hardware to find the best balance between power efficiency and performance in a given scenario. Finding some clk variables that affect device behavior may be a sign of it. It's hard for an external developer to debug this part, since it involves information from HW specs and firmware programming that we don't have access to. The best option is to provide all the relevant debugging information you have and ask AMD developers to check the values you suspect.
  • Do a trick: If you suspect the power setup is degrading performance, try setting the amount of power supplied to the GPU to the maximum and see if it affects the system behavior with this command: sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
I learned it when debugging glitches with hardware cursor rotation on the Steam Deck. My first attempt was changing the clock calculation. In the end, Rodrigo Siqueira proposed the right solution, targeting bandwidth, in two steps.

Checking implicit programming and hardware limitations: Bring implicit programming to the level of consciousness and recognize hardware limitations. 13) Implicit update types: Check whether the selected type for the atomic update may affect your issue. The update type depends on the mode settings, since programming some modes demands more time for hardware processing. More details in the source code:
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth
 * changes. This update can be done at ISR, but we want to minimize how
 * often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything
 * front end related. Any time viewport dimensions, recout dimensions,
 * scaling ratios or gamma need to be adjusted or pipe needs to be turned
 * on (or disconnected) we do a full update. This cannot be done at ISR
 * level and should be a rare event. Unless someone is stress testing mpo
 * enter/exit, playing with colour or adjusting underscan we don't expect
 * to see this call at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED,  /* ISR safe, most of programming needed, no bw/clk change*/
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};

Using tools: Observe the current state, validate your findings, continue improvements. 14) Use AMD tools to check hardware state and driver programming: they help you understand your driver settings and check the behavior when changing those settings.
  • DC Visual confirmation: Check multiple planes and pipe split policy.
  • DTN logs: Check display hardware state, including rotation, size, format, underflow, blocks in use, color block values, etc.
  • UMR: Check ASIC info, register values, KMS state - links and elements (framebuffers, planes, CRTCs, connectors). Source: UMR project documentation
15) Use generic DRM/KMS tools:
  • IGT test tools: Use generic KMS tests or develop your own to isolate the issue in the kernel space. Compare results across different GPU vendors to understand their implementations and find potential solutions. AMD also has specific IGT tests for its GPUs that are expected to work without failures on any AMD GPU. You can check results of HW-specific tests using different display hardware families, or you can compare expected differences between the generic workflow and the AMD workflow.
  • drm_info: This tool summarizes the current state of a display driver (capabilities, properties and formats) per element of the DRM/KMS workflow. Output can be helpful when reporting bugs.

Don't give up! Debugging issues in the AMD display driver can be challenging, but by following these tips and leveraging available resources, you can significantly improve your chances of success. Worth mentioning: This blog post builds upon my talk, I'm not an AMD expert, but..., presented at the 2022 XDC. It shares guidelines that helped me debug AMD display issues as an external developer of the driver. Open Source Display Driver: The Linux kernel/AMD display driver is open source, allowing you to actively contribute by addressing issues listed in the official tracker. Tackling existing issues or resolving your own can be a rewarding learning experience and a valuable contribution to the community. Additionally, the tracker serves as a valuable resource for finding similar bugs, troubleshooting tips, and suggestions from AMD developers. Finally, it's a platform for seeking help when needed. Remember, contributing to the open source community through issue resolution and collaboration is mutually beneficial for everyone involved.

7 November 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)

TL;DR: This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre- and post-blending adjustments, and how these differences are reflected in the available driver-specific properties. Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline. Get a closer look at each hardware block's capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, do color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content, and apply color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent from the Linux/DRM KMS API. In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I'll guide you through the color features of the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we'll talk less about the color tools and more about where to find them in the hardware. We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks expose features like predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional lookup tables (3D LUT). Here, we will understand how these color features are strategically placed into color blocks both before and after blending in the Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks. That said, welcome back to the second part of our thrilling journey through AMD's color management realm!

AMD Display Driver in the Linux/DRM Subsystem: The Journey In my 2022 XDC talk I'm not an AMD expert, but..., I briefly explained the organizational structure of the Linux/AMD display driver, where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD's color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver's software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1]. The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties. On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer's task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream. The primary challenge involved identifying and understanding the relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of userspace data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.

Exploring Color Capabilities of the AMD display hardware Now, let's dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools await to make your colors look extraordinary across diverse devices. First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.

Predefined Transfer Functions (Named Fixed Curves): Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d'Eon. ITU-R BT.2100 introduces three main types of transfer functions:
  • OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
  • EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
  • OOTF: opto-optical transfer function, which has the role of applying the "rendering intent".
AMD s display driver supports the following pre-defined transfer functions (aka named fixed curves):
  • Linear/Unity: linear/identity relationship between pixel value and luminance value;
  • Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
  • sRGB: the piece-wise transfer function from IEC 61966-2-1:1999, with a linear segment near black and a 2.4-power segment for the rest of the range;
  • BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
  • PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD s color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
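As a concrete illustration of the difference between a piece-wise curve and a pure power function, here is a minimal C sketch of the sRGB EOTF from IEC 61966-2-1 next to a pure power EOTF; the function names are mine:

#include <math.h>

/* sRGB EOTF: a linear toe below 0.04045 and a 2.4-power segment above
 * it (IEC 61966-2-1). Input and output are normalized to [0.0, 1.0]. */
static double srgb_eotf(double encoded)
{
	return encoded <= 0.04045 ? encoded / 12.92
				  : pow((encoded + 0.055) / 1.055, 2.4);
}

/* Pure power curve, e.g. the Gamma 2.2/2.4/2.6 pre-defined TFs: no
 * linear toe near black. */
static double power_eotf(double encoded, double gamma)
{
	return pow(encoded, gamma);
}

Near black the two families differ noticeably, which is why confusing one for the other visibly crushes or lifts shadows.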

1D LUTs (1-dimensional Lookup Table): A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It's very well explained by Jeremy Selan in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations. It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.
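For intuition, a small C sketch of how one channel of such a 1D LUT can be sampled with linear interpolation between entries; the entry layout mirrors struct drm_color_lut, and this shows the general technique rather than the exact hardware behavior:

#include <stdint.h>

/* One drm_color_lut-style entry: 16-bit unsigned per channel, where
 * 0x0000 maps to 0.0 and 0xffff maps to 1.0. */
struct lut_entry { uint16_t red, green, blue, reserved; };

/* Sample the red channel of a 1D LUT (e.g. 4096 entries) with linear
 * interpolation; `in` is a normalized input in [0.0, 1.0]. */
static double lut1d_sample_red(const struct lut_entry *lut, int size, double in)
{
	double pos = in * (size - 1);
	int i = (int)pos;
	double frac = pos - i;
	double lo = lut[i].red / 65535.0;
	double hi = lut[i + (i < size - 1)].red / 65535.0;

	return lo + (hi - lo) * frac;
}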

3D LUTs (3-dimensional Lookup Table): These tables work in three dimensions: red, green, and blue. They're perfect for complex color transformations and adjustments between color channels. They're also more complex to manage and require more computational resources. Jeremy also explains 3D LUTs in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations.

CTM (Color Transformation Matrices): Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.

HDR Multiplier: HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.
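A hedged sketch of the idea: scale linear light by the multiplier and re-encode with the PQ inverse EOTF (SMPTE ST 2084), since a gamma-style [0.0, 1.0] encoding cannot represent the extended range while PQ covers 0 to 10,000 nits. The ST 2084 constants are standard; treating 1.0 as 10,000 nits in the working range is my assumption for illustration:

#include <math.h>

/* PQ inverse EOTF (SMPTE ST 2084): encodes linear light, normalized so
 * that 1.0 = 10,000 nits, into a [0.0, 1.0] signal. */
static double pq_inv_eotf(double linear)
{
	const double m1 = 2610.0 / 16384.0;
	const double m2 = 2523.0 / 4096.0 * 128.0;
	const double c1 = 3424.0 / 4096.0;
	const double c2 = 2413.0 / 4096.0 * 32.0;
	const double c3 = 2392.0 / 4096.0 * 32.0;
	double y = pow(linear, m1);

	return pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}

/* Sketch: scale SDR linear values into the PQ range with the HDR
 * multiplier (e.g. placing SDR white at a chosen nit level), then
 * re-encode with the PQ curve. */
static double hdr_scale_and_encode(double linear, double hdr_mult)
{
	return pq_inv_eotf(linear * hdr_mult);
}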

AMD Color Capabilities in the Hardware Pipeline First, let's take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for the AMDGPU driver - Display Core Next. In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties. Note: here's the catch: there are some DRM CRTC color transformations that don't have a corresponding AMD MPC color block, and vice versa. It's like a puzzle, and we're here to solve it!

AMD Color Blocks and Capabilities We can finally talk about the color capabilities of each AMD color block. As they vary based on the generation of hardware, let's take the DCN3+ family as reference. What's possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps. The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the SteamDeck/DCN301 driver as an example and look at the color pipeline capabilities described in the file drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c:
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for the degamma curve (hardcoded and LUT). "Gamma Correction" is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New "Gamma Correction" block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve. 
dc->caps.color.dpp.ogam_ram = 1; // "Blend Gamma" block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
I included some inline comments on each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps. Now, using this guideline, we go through the color capabilities of the DPP and MPC blocks and talk more about mapping driver-specific properties to the corresponding color blocks.

DPP Color Pipeline: Before Blending (Per Plane) Let's explore the capabilities of DPP blocks and what you can achieve with each color block. The very first thing to pay attention to is the display architecture of the display hardware: previously, AMD used a display architecture called DCE - Display and Compositing Engine; newer hardware follows DCN - Display Core Next.
The architecture is described by: dc->caps.color.dpp.dcn_arch

AMD Plane Degamma: TF and 1D LUT Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and the 1D LUT into two blocks: the Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma Correction block (dc->caps.color.dpp.gamma_corr), respectively. Pre-defined transfer functions:
  • they are hardcoded curves (read-only memory - ROM);
  • supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass. References:

AMD Plane 3x4 CTM (Color Transformation Matrix) AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass. References:
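As an aside, here is a small sketch of how userspace might pack floating-point coefficients into the S31.32 sign-magnitude fixed-point format used by DRM CTM blobs (the same convention as the generic struct drm_color_ctm); the helper names are mine:

#include <math.h>
#include <stdint.h>

/* Pack a double into S31.32 sign-magnitude fixed point: bit 63 holds
 * the sign, the remaining bits hold the magnitude with 32 fractional
 * bits. This is the convention DRM documents for CTM matrices. */
static uint64_t ctm_s31_32(double v)
{
	uint64_t mag = (uint64_t)(fabs(v) * (double)(1ULL << 32));

	return (v < 0.0 ? 1ULL << 63 : 0) | mag;
}

/* Example: an identity 3x4 matrix, row-major, assuming the last column
 * acts as a per-channel offset. The diagonal lands at indices 0, 5, 10. */
static void ctm_3x4_identity(uint64_t matrix[12])
{
	for (int i = 0; i < 12; i++)
		matrix[i] = ctm_s31_32(i % 5 == 0 ? 1.0 : 0.0);
}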

AMD Plane Shaper: TF + 1D LUT Described by: dc->caps.color.dpp.hw_3d_lut The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying a 3D LUT, so this entry in the DPP color caps, dc->caps.color.dpp.hw_3d_lut, means support for both the shaper 1D LUT and the 3D LUT. A pre-defined transfer function enables delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don't need to normalize values, we can set an Identity TF for the shaper, which works like a bypass and is also the default TF value. Pre-defined transfer functions:
  • there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting the Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL works as bypass. References:

AMD Plane 3D LUT Described by: dc->caps.color.dpp.hw_3d_lut The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported. The AMD driver-specific interface advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function, and values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension, red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+	for (nib = 0; nib < 17; nib++) {
+		for (nig = 0; nig < 17; nig++) {
+			for (nir = 0; nir < 17; nir++) {
+				ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+				rgb_area[ind].red = rgb_lib[ind_lut + 0];
+				rgb_area[ind].green = rgb_lib[ind_lut + 1];
+				rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+				ind++;
+			}
+		}
+	}
In our driver-specific approach, we opted to advertise its behavior to the userspace instead of implicitly dealing with it in the kernel driver. AMD's hardware supports 3D LUTs with size 17 or 9 (4913 and 729 entries, respectively), and you can choose between 10-bit and 12-bit. In the current driver-specific work we focus on enabling only the 17-size, 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+		/* Stride and bit depth are not programmable by API yet.
+		 * Therefore, only supports 17x17x17 3D LUT (12-bit).
+		 */
+		lut->lut_3d.use_tetrahedral_9 = false;
+		lut->lut_3d.use_12bits = true;
+		lut->state.bits.initialized = 1;
+		__drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+					lut->lut_3d.use_tetrahedral_9,
+					MAX_COLOR_3DLUT_BITDEPTH);
A refined control of 3D LUT parameters should go through a follow-up version or a generic API. Setting the 3D LUT to NULL means bypass. References:
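For intuition on how values between samples are estimated, below is a self-contained C sketch of tetrahedral interpolation over one cube of the LUT, following the indexing order above (blue outermost, red innermost). This is the textbook technique, not the driver's implementation, and the float-triplet layout is only for illustration:

#define LUT_DIM 17 /* one of the sizes AMD supports (17 or 9) */

struct rgb { float r, g, b; };

/* Fetch one sample: blue is the outermost dimension, red the innermost,
 * matching the blob layout described above. */
static struct rgb lut3d_at(const struct rgb *lut, int ir, int ig, int ib)
{
	return lut[(ib * LUT_DIM + ig) * LUT_DIM + ir];
}

/* Weighted sum of the four tetrahedron vertices; weights sum to 1. */
static struct rgb blend4(struct rgb c000, struct rgb mid1, struct rgb mid2,
			 struct rgb c111, float f1, float f2, float f3)
{
	struct rgb out;

	out.r = (1 - f1) * c000.r + (f1 - f2) * mid1.r + (f2 - f3) * mid2.r + f3 * c111.r;
	out.g = (1 - f1) * c000.g + (f1 - f2) * mid1.g + (f2 - f3) * mid2.g + f3 * c111.g;
	out.b = (1 - f1) * c000.b + (f1 - f2) * mid1.b + (f2 - f3) * mid2.b + f3 * c111.b;
	return out;
}

/* Tetrahedral interpolation: the cube around the input point is split
 * into six tetrahedra by the ordering of the fractional parts; the four
 * vertices of the containing tetrahedron are blended. Inputs in [0, 1]. */
static struct rgb lut3d_tetrahedral(const struct rgb *lut, float r, float g, float b)
{
	float fr = r * (LUT_DIM - 1), fg = g * (LUT_DIM - 1), fb = b * (LUT_DIM - 1);
	int ir = (int)fr, ig = (int)fg, ib = (int)fb;

	if (ir > LUT_DIM - 2) ir = LUT_DIM - 2; /* keep ir+1 etc. in range */
	if (ig > LUT_DIM - 2) ig = LUT_DIM - 2;
	if (ib > LUT_DIM - 2) ib = LUT_DIM - 2;
	fr -= ir; fg -= ig; fb -= ib;

	struct rgb c000 = lut3d_at(lut, ir, ig, ib);
	struct rgb c111 = lut3d_at(lut, ir + 1, ig + 1, ib + 1);

	if (fr > fg) {
		if (fg > fb)       /* fr >= fg >= fb */
			return blend4(c000, lut3d_at(lut, ir + 1, ig, ib),
				      lut3d_at(lut, ir + 1, ig + 1, ib), c111, fr, fg, fb);
		else if (fr > fb)  /* fr >= fb >= fg */
			return blend4(c000, lut3d_at(lut, ir + 1, ig, ib),
				      lut3d_at(lut, ir + 1, ig, ib + 1), c111, fr, fb, fg);
		else               /* fb >= fr >= fg */
			return blend4(c000, lut3d_at(lut, ir, ig, ib + 1),
				      lut3d_at(lut, ir + 1, ig, ib + 1), c111, fb, fr, fg);
	} else {
		if (fb > fg)       /* fb >= fg >= fr */
			return blend4(c000, lut3d_at(lut, ir, ig, ib + 1),
				      lut3d_at(lut, ir, ig + 1, ib + 1), c111, fb, fg, fr);
		else if (fb > fr)  /* fg >= fb >= fr */
			return blend4(c000, lut3d_at(lut, ir, ig + 1, ib),
				      lut3d_at(lut, ir, ig + 1, ib + 1), c111, fg, fb, fr);
		else               /* fg >= fr >= fb */
			return blend4(c000, lut3d_at(lut, ir, ig + 1, ib),
				      lut3d_at(lut, ir + 1, ig + 1, ib), c111, fg, fr, fb);
	}
}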

AMD Plane Blend/Out Gamma: TF + 1D LUT Described by: dc->caps.color.dpp.ogam_ram The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before the blending. It supports both 1D LUT and pre-defined TF. We can see the Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want to only use the Degamma block to linearize and skip Shaper, 3D LUT and Blend. Pre-defined transfer function:
  • there is no DPP Blend ROM. Curves are calculated by AMD color modules;
  • supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, the AMD color module will combine the user LUT values with the pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

MPC Color Pipeline: After Blending (Per CRTC)

DRM CRTC Degamma 1D LUT The degamma lookup table (LUT) for converting framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass. Not really supported. The driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) to support the DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.

DRM CRTC 3x3 CTM Described by: dc->caps.color.mpc.gamut_remap It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.

DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF Described by: dc->caps.color.mpc.ogam_ram After all that, you might still want to convert the content to wire encoding. No worries: in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC Gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function. Pre-defined transfer functions:
  • there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting the CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

Others

AMD CRTC Shaper and 3D LUT We previously worked on exposing a CRTC shaper and a CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (in the MPC block). The difference here is that setting (not bypassing) the Shaper and Gamma blocks together is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.

Input and Output Color Space Conversion There are two other color capabilities of AMD display hardware that were integrated into DRM by previous works and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, we have the DC Output CSC (OCSC), which sets pre-defined coefficients from the DRM connector colorspace properties. It is used for color space conversion of the composed image to the one supported by the sink. References:

The search for rainbow treasures is not over yet If you want to understand a little more about this work, be sure to watch the two talks Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope. In the time between the first and second part of this blog post, Uma Shankar and Chaitanya Kumar Borah published the plane color pipeline for Intel, and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared the workshop results in the 2023 XDC lightning talk session. The search for rainbow treasures is not over yet! We plan to meet again next year at the 2024 Display Hackfest in Coruña, Spain (Igalia's HQ) to keep up the pace and continue advancing today's display needs on Linux. Finally, a HUGE thank you to everyone who worked with me on exploring AMD's color capabilities and making them available in userspace.

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any device in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color space and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all the explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, there were some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching the hardware capabilities of all vendors well. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows the Linux/DRM color interface without driver-specific color properties [*]. Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of a pre-blending Color Transformation Matrix (plane CTM) and the differentiation of the Pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux? Blending is the process of combining multiple planes (the framebuffer abstraction) according to their mode settings. Before blending, we can manage the colors of the various planes separately; after blending, we have combined those planes into only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side, but they cannot handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, support a wide color gamut (WCG), and convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate the input space per plane. Note that here I'm not considering some workarounds in the AMD display manager that map the DRM CRTC de-gamma and DRM CRTC CTM properties to the pre-blending DC de-gamma and gamut remap blocks, respectively. So, in more detail, it only exposes three post-blending features (a minimal usage sketch follows the list):
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.
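As a concrete illustration of that small API surface, here is a hedged sketch of programming the three properties above through the DRM atomic interface. DEGAMMA_LUT, CTM and GAMMA_LUT are the standard generic property names; everything else (the fixed LUT size, the tiny name-lookup helper, the missing error handling) is a simplification for this post. A real client would query the advertised GAMMA_LUT_SIZE/DEGAMMA_LUT_SIZE and must have enabled DRM_CLIENT_CAP_ATOMIC on the file descriptor first.

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define LUT_SIZE 256 /* simplification: real sizes come from (DE)GAMMA_LUT_SIZE */

/* Resolve a property name to its id on a given DRM object. */
static uint32_t get_prop_id(int fd, uint32_t obj_id, uint32_t obj_type,
                            const char *name)
{
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, obj_id, obj_type);
    uint32_t id = 0;
    for (uint32_t i = 0; props && i < props->count_props; i++) {
        drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
        if (p && !strcmp(p->name, name))
            id = p->prop_id;
        drmModeFreeProperty(p);
    }
    drmModeFreeObjectProperties(props);
    return id;
}

/* Set identity de-gamma, CTM and gamma on one CRTC via an atomic commit. */
int set_crtc_color_identity(int fd, uint32_t crtc_id)
{
    struct drm_color_lut lut[LUT_SIZE];
    struct drm_color_ctm ctm = {
        /* identity 3x3 matrix in S31.32 fixed point */
        .matrix = { 1ULL << 32, 0, 0, 0, 1ULL << 32, 0, 0, 0, 1ULL << 32 }
    };
    uint32_t degamma_blob, ctm_blob, gamma_blob;

    memset(lut, 0, sizeof(lut));
    for (int i = 0; i < LUT_SIZE; i++) { /* linear ramp = identity 1D LUT */
        uint16_t v = (uint16_t)((i * 0xffff) / (LUT_SIZE - 1));
        lut[i].red = lut[i].green = lut[i].blue = v;
    }
    drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &degamma_blob);
    drmModeCreatePropertyBlob(fd, &ctm, sizeof(ctm), &ctm_blob);
    drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &gamma_blob);

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, crtc_id,
        get_prop_id(fd, crtc_id, DRM_MODE_OBJECT_CRTC, "DEGAMMA_LUT"), degamma_blob);
    drmModeAtomicAddProperty(req, crtc_id,
        get_prop_id(fd, crtc_id, DRM_MODE_OBJECT_CRTC, "CTM"), ctm_blob);
    drmModeAtomicAddProperty(req, crtc_id,
        get_prop_id(fd, crtc_id, DRM_MODE_OBJECT_CRTC, "GAMMA_LUT"), gamma_blob);
    int ret = drmModeAtomicCommit(fd, req, 0, NULL);
    drmModeAtomicFree(req);
    return ret;
}

Note that everything here applies after blending: every plane on that CRTC gets the same treatment, which is exactly the limitation the rest of this post is about.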

AMD driver-specific color management interface We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again: Mixing AMD driver-specific color properties with DRM generic color properties, we get a broader Linux color management system with the following features exposed by properties in the plane and CRTC interfaces, as summarized by this updated diagram: The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between the API and AMD driver components, implemented by us to connect the Linux/DRM interface to the AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver:
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 entries with 12-bit depth; a three-dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize the color space back after the 3D LUT, for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions;
Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021 documentation series. In the next post, I'll revisit this topic, explaining display and color blocks in detail.
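To show how the pre-blending side is driven, here is a hedged sketch (mine, reusing the get_prop_id() helper and the identity-LUT idea from the sketches above) that programs a de-gamma LUT on a single plane through the driver-specific interface. The property name AMD_PLANE_DEGAMMA_LUT follows this patch series, but treat it as an assumption and check the kernel you run; since the interface is driver-specific, the lookup simply fails where it is not exposed.

#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Linearize one plane before blending via the AMD driver-specific
 * property (name per this patch series; absent on other drivers). */
int set_amd_plane_degamma(int fd, uint32_t plane_id,
                          const struct drm_color_lut *lut, size_t lut_bytes)
{
    uint32_t prop = get_prop_id(fd, plane_id, DRM_MODE_OBJECT_PLANE,
                                "AMD_PLANE_DEGAMMA_LUT");
    if (!prop)
        return -ENOTSUP;        /* driver-specific API not available here */

    uint32_t blob;
    drmModeCreatePropertyBlob(fd, lut, lut_bytes, &blob);

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, plane_id, prop, blob);
    /* Do not set the DRM CRTC DEGAMMA_LUT in the same state: as the
     * list above notes, the driver rejects combining both de-gammas. */
    int ret = drmModeAtomicCommit(fd, req, 0, NULL);
    drmModeAtomicFree(req);
    return ret;
}

The same lookup-then-set pattern applies to the other plane properties in the list (CTM, shaper, 3D LUT, blend gamma); only the blob layouts differ.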

How did we get a large set of color features from AMD display hardware? Looking at the AMD hardware color capabilities in the first diagram, we can see there is no post-blending (MPC) de-gamma block in any hardware family. We can also see that the AMD display driver maps the CRTC/post-blending CTM to the pre-blending (DPP) gamut_remap block, but there is a post-blending (MPC) gamut_remap (DRM CTM) in newer hardware versions, including the Steam Deck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information.

I needed to rework these two mappings to provide pre-blending/plane de-gamma and CTM for the Steam Deck. I changed the DC mapping to detach the stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited the plane CTM property to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashing with DRM CRTC CTM.

Unfortunately, I couldn't prevent a conflict between AMD plane de-gamma and DRM CRTC de-gamma, since post-blending de-gamma isn't available in any AMD hardware version so far. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously (an illustrative sketch of this check closes this post).

Finally, we had no other clashes when enabling the other AMD driver-specific color properties for our use case, Gamescope/Steam Deck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - the AMD color block capabilities (and limitations) are the topic of the next blog post. Besides that, we fixed some driver bugs along the way, since this was the first Linux use case for most of the new color properties, and some behaviors only show up when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline [**] and see how Gamescope uses the new API to manage color space conversions and calibration.

In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property in the AMD display driver.

* Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities.
** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline.
*** Thanks to the Linux Graphics community - especially Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.
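As a closing illustration of the de-gamma conflict handling described above: this is only the shape of such an atomic-check rule, not the actual amdgpu code. The helper is a stub standing in for the real driver-specific plane state inspection, and the real check lives in the amdgpu Display Manager's atomic-check path.

#include <linux/errno.h>
#include <linux/types.h>
#include <drm/drm_atomic.h>

/* Hypothetical helper: in the real driver this would inspect the
 * driver-specific plane state; stubbed here to keep the sketch whole. */
static bool amd_plane_state_has_degamma(const struct drm_plane_state *state)
{
    (void)state;
    return false; /* placeholder */
}

/* With no post-blending de-gamma block in the hardware, plane de-gamma
 * and CRTC de-gamma would compete for the same pre-blending resources,
 * so a state setting both must be refused. */
static int check_degamma_clash(const struct drm_plane_state *plane_state,
                               const struct drm_crtc_state *crtc_state)
{
    if (amd_plane_state_has_degamma(plane_state) && crtc_state->degamma_lut)
        return -EINVAL; /* reject the atomic commit */
    return 0;
}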

29 July 2017

Dirk Eddelbuettel: Updated overbought/oversold plot function

A good six years ago I blogged about plotOBOS() which charts a moving average (from one of several available variants) along with shaded standard deviation bands. That post has a bit more background on the why/how and motivation, but as a teaser here is the resulting chart of the SP500 index (with ticker ^GSPC): [Chart: overbought/oversold levels from the plotOBOS() function] The code uses a few standard finance packages for R (with most of them maintained by Joshua Ulrich given that Jeff Ryan, who co-wrote chunks of these, is effectively retired from public life). Among these, xts had a recent release reflecting changes which occurred during the four (!!) years since the previous release, and covering at least two GSoC projects. With that came subtle API changes: something we all generally try to avoid but which is at times the only way forward. In this case, the shading code I used (via polygon() from base R) no longer cooperated with the beefed-up functionality of plot.xts(). Luckily, Ross Bennett incorporated that same functionality into a new function addPolygon --- which even credits this same post of mine. With that, the updated code becomes
## plotOBOS -- displaying overbought/oversold as e.g. in Bespoke's plots
##
## Copyright (C) 2010 - 2017  Dirk Eddelbuettel
##
## This is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 2 of the License, or
## (at your option) any later version.
suppressMessages(library(quantmod))     # for getSymbols(), brings in xts too
suppressMessages(library(TTR))          # for various moving averages
plotOBOS <- function(symbol, n=50, type=c("sma", "ema", "zlema"),
                     years=1, blue=TRUE, current=TRUE, title=symbol,
                     ticks=TRUE, axes=TRUE) {

    today <- Sys.Date()
    if (inherits(symbol, "character")) {
        X <- getSymbols(symbol, from=format(today-365*years-2*n), auto.assign=FALSE)
        x <- X[,6]                          # use Adjusted
    } else if (inherits(symbol, "zoo")) {
        x <- X <- as.xts(symbol)
        current <- FALSE                    # don't expand the supplied data
    }

    n <- min(nrow(x)/3, 50)                 # as we may not have 50 days
    sub <- ""
    if (current) {
        xx <- getQuote(symbol)
        xt <- xts(xx$Last, order.by=as.Date(xx$`Trade Time`))
        colnames(xt) <- paste(symbol, "Adjusted", sep=".")
        x <- rbind(x, xt)
        sub <- paste("Last price: ", xx$Last, " at ",
                     format(as.POSIXct(xx$`Trade Time`), "%H:%M"), sep="")
    }

    type <- match.arg(type)
    xd <- switch(type,                  # compute xd as the central location via selected MA smoother
                 sma = SMA(x,n),
                 ema = EMA(x,n),
                 zlema = ZLEMA(x,n))
    xv <- runSD(x, n)                   # compute xv as the rolling volatility
    strt <- paste(format(today-365*years), "::", sep="")
    x  <- x[strt]                       # subset plotting range using xts' nice functionality
    xd <- xd[strt]
    xv <- xv[strt]
    xyd <- xy.coords(.index(xd),xd[,1]) # xy coordinates for direct plot commands
    xyv <- xy.coords(.index(xv),xv[,1])
    n <- length(xyd$x)
    xx <- xyd$x[c(1,1:n,n:1)]           # for polygon(): from first point to last and back
    if (blue) {
        blues5 <- c("#EFF3FF", "#BDD7E7", "#6BAED6", "#3182BD", "#08519C") # cf brewer.pal(5, "Blues")
        fairlylight <<- rgb(189/255, 215/255, 231/255, alpha=0.625) # aka blues5[2]
        verylight <<- rgb(239/255, 243/255, 255/255, alpha=0.625)   # aka blues5[1]
        dark <<- rgb(8/255, 81/255, 156/255, alpha=0.625)           # aka blues5[5]
        ## buglet in xts 0.10-0 requires the <<- here
    } else {
        fairlylight <<- rgb(204/255, 204/255, 204/255, alpha=0.5)   # two suitable grays, alpha-blending at 50%
        verylight <<- rgb(242/255, 242/255, 242/255, alpha=0.5)
        dark <<- 'black'
    }

    plot(x, ylim=range(range(x, xd+2*xv, xd-2*xv, na.rm=TRUE)), main=title, sub=sub,
         major.ticks=ticks, minor.ticks=ticks, axes=axes) # basic xts plot setup
    addPolygon(xts(cbind(xyd$y+xyv$y, xyd$y+2*xyv$y), order.by=index(x)), on=1, col=fairlylight)  # upper band
    addPolygon(xts(cbind(xyd$y-xyv$y, xyd$y+1*xyv$y), order.by=index(x)), on=1, col=verylight)    # center band
    addPolygon(xts(cbind(xyd$y-xyv$y, xyd$y-2*xyv$y), order.by=index(x)), on=1, col=fairlylight)  # lower band
    lines(xd, lwd=2, col=fairlylight)   # central smoothed location
    lines(x, lwd=3, col=dark)           # actual price, thicker
}
and the main change is the three calls to addPolygon. To illustrate, we call plotOBOS("SPY", years=2) with an updated plot of the ETF representing the SP500 over the last two years: [Chart: updated overbought/oversold levels from the plotOBOS() function] Comments and further enhancements welcome!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 August 2016

Zlatan Todorić: When you wake up with a feeling

I woke up at 5am. Somehow made myself go back to sleep again. Woke up at 6am. Such is the life of jet-lag. Or I am just getting too old for it. But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things. Also, at the same time, very emotional things. AND at the exact same time, things that inspire me. On paper, I am technical leader of Purism. In reality, I have insanely good relations with my CEO for such a short time. So good that for months I was not only leading the technical shift, but I also took over operations (getting orders and delivering them while working with our assembly line to automate most of the tasks in this field). I was also acting as the first line of technical support (forums, IRC and email). Actually, I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolving (hopefully) Tor and Cloudflare issues for it, the annoying caching system for the forums, stopping forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all the mails from) people who were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the speediest person for all our users' needs, but I hope they understand and forgive me. I was doing all that while researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) our kernel builds failing. I worked on pushing touchpad patches upstream (not so good, but we are still working on it, and they ended up being upstreamed). While seeing repos being down because of our host. Repos being down because of broken sync with Debian. Repos being down because of our key mismanagement. Metadata not working well. PureBrowser getting broken all the time. Tor Browser out of date. No real ISO updates. Wrong sources.list entries and so on. And the hardest part of the work was that I was doing all this with very limited scope and even more limited resources. So what kept me going, what is pushing me forward, and what am I doing? One philosophy - Free software. Let me not explain it as a technical debt. Let me explain it as a social movement. In an age where people are "bombed" by media, by all-time lying politicians (who use fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations are selling your freedom so you can gain temporary convenience, the term Free software is like Giordano Bruno in the age of the Inquisition. Free software does not only preserve your Freedom of software source usage; it preserves your Freedom to think, to think out of the box, without being punished for that. It preserves the Freedom to live - to choose what and when to do, without having a negative impact on your life or other people's lives. The Freedom to be transparent and to share. Because not only do ideas grow with sharing; we, as human beings, grow as we share. The Freedom to say "NO". NO. I somehow learnt, and personally think, that the Freedom to say NO is the most important Freedom in our lives. No, I will not obey some artificially created master who thinks they can plan and choose my life decisions.
No, I will not negotiate my Freedom for your convenience (such Freedom is not real anyway, and it is a matter of time before such an illusion is blown away). No, I will not accept your credit, because it has STRINGS attached to it which you either don't present or you blur in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have a social account on the media where the majority of people are. No, I will not have a pacemaker that is a blackbox with proprietary (buggy) software and that harvests my data without me being able to look at it. Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think like that). I will try to preserve everyone's Freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not to ignore facts and actions done by me and other people. Yes, I have the right to be imperfect and to make mistakes, as long as I acknowledge and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is improvement and an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes actually means getting rid of other/bad features). This all blends with my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of Freedom. We are trying to unify many things - we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones and also out of conventional and sometimes even my standard ways of thinking. I have seen that the non-existing infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the time when I would be able to say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why. I came to a very small and not that efficient team. Now, we have employed a dedicated and hard-working person for operations (Goran), whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infra. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free software and unify it in a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the currently non-existent PureOS infra. We have a GNOME Board of Directors member (Jeff) who is trying to light up our image in the world (working with James to try to bring some light into our shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security, whose members I don't want to name yet as we are preparing to announce it soon (and trust me, we have good people in here).
But the most important thing here is not that they are all capable or cool people. It is the core value in all of them - they care about Freedom, and I trust them on their paths. Trust is always important, but in Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines just because everyone else thinks something is important (and rarely is something more important than our time freedom). So the trust is built out of knowledge, and the knowledge I have about them and their work exists because we freely share with no strings attached. Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, some people in the community are very rude, impolite and don't respect our work - but even with disagreement, everyone in Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my and my team's work aren't as efficient as all the lovely words of the people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work and showing an understanding of the underlying amount of work and issues. While we are limited in resources, we have had the occasional outcry from the community to help us. Now I want to help them to help me (you see the Freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on the ideas so we can see what the community wants to see, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via email (our personal addresses or [hr, pr, feedback]@puri.sm). To finish this writing (as it is 8am here and I still want to rest a bit, because I will have meetings for 6 hours straight today), I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles and the people who tried to make our time even harder (and it is already hard through all the limitations which come naturally today with our kind of work), we still create products, we still ship them, we still improve step by step, we still hired and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal - to integrate and cooperate with most of the FLOSS ecosystem. P.S. Yes, I also (finally!) became an official Debian Developer - I still didn't have time to sit and properly think and cry (as every good man does) about it.

1 April 2014

Russ Allbery: Review: Asimov's, September 2011

Review: Asimov's Science Fiction, September 2011
Editor: Sheila Williams
Issue: Volume 35, No. 9
ISSN: 1065-2698
Pages: 112
Due to various other life priorities, it's been quite a while since I read this magazine. Let's see if I can remember the contents well enough to review it properly. The editorial this issue was about the Readers' Awards. Vaguely interesting, but Williams didn't have much to add beyond announcing the winners. I'm very happy to see Rusch's "Becoming One with the Ghosts" win best novella, though. The Silverberg column was more interesting: some musings and pop history about the Japanese convention of a retired emperor and how that fit into national politics. Di Filippo's book review column is all about short story collections, continuing the trend of Di Filippo mostly being interested in things I don't care about. "The Observation Post" by Allen M. Steele: A bit of alternate history set during the Cuban Missile Crisis, but with airships. The protagonist was a radioman aboard a blimp that was patrolling the ocean for Russian vessels sailing to Cuba. A storm forces them down on an island, resulting in an encounter with some claimed tourists who may be Russian spies. The SFnal twist is unlikely to come as much surprise to an experienced reader, and the barb at the end of the story suffers from the same problem. I appreciate the ethical dilemma, but I've also seen it in lots of stories and have a hard time getting fully invested in another version of it. But the story is otherwise competently written. (6) "D.O.C.S." by Neal Barrett, Jr.: Everyone has an author or two that they just don't get. Barrett is one of mine, although this story is a bit less surreal than most of his. I'm fairly sure it's an odd twist on the "death panel" conspiracy theory given a fantastic twist, but it's not entirely forthright about what's going on. Possibly of more interest to those who like Barrett better. (5) "Danilo" by Carol Emshwiller: Emshwiller's stories are always distinctive and not quite like anyone else's, involving odd outsiders and their attempts to make sense of their world. This one involves, as is common, an out-of-the-way village. Lewella claims that she's going to be married to a stranger from the north. No one believes her, although they give her bridal gifts anyway, and then one day she takes her gifts and leaves. The protagonist follows her, to look after her. The rest of the story walks the boundary that Emshwiller often walks, leaving the reader unsure whether the characters are in touch with some deeper reality or insane and suffering, but the ending is even more ambiguous than normal and, at least for me, entirely unsatisfying. (4) "Shadow Angel" by Erick Melton: This is another retread of an old SF idea. This time, it's that piloting through hyperspace involves alternate modes of consciousness and has profound effects on the pilot. The risk of this sort of story is that it turns hallucinatory and a bit incoherent, and I think that happened here. I like the world-building; the glimmers of future politics and trade and the way he weaves alternate timelines into the story caught my interest. But the story wasn't quite coherent enough (although part of this may be reviewing it quite some time after I originally read it). Promising, but not clear, and without quite enough agency for the protagonist. (6) "The Odor of Sanctity" by Ian Creasey: I found this story more memorable. The conceit is that a future society has developed technology that allows the capture and replay of scents, which has created a huge market for special scent experiences and the triggering of memories. 
The story is set in the Philippines and revolves around a Catholic priest who takes the mission to the poor seriously. He's dying, and several people wonder if it is possible to capture the mythical odor of sanctity: the sweet scent said to follow the death of a saint rather than the normal odor of human death. Creasey handles this idea well, blending postulated future technology, the practical and cynical world of the poor streets, and a balance between mystical belief and practical skepticism. Nothing in the story is that surprising, but I was happy with the eventual resolution. (7) "Grandma Said" by R. Neube: This story's protagonist is a cleanser on a frontier planet made extremely dangerous by a virulent alien fungus. It is almost always fatal and very difficult to eradicate. Vic's job is to completely sanitize anything that had been in contact with a victim and maintain the other rules of strict quarantine required to keep the fungal infection from spreading uncontrolled. Neube weaves world-building together with Vic's background and adds a twist in the form of deeply unhealthy responses to the constant stress of living near death. Well told, if a bit disturbing. (7) "Stalker" by Robert Reed: Reed has a knack for fascinating and disturbing stories, and this is an excellent example of the type. The protagonist is a manufactured companion who is completely devoted to its owner. Their commercial name is Adorers, but everyone calls them Stalkers. In this case, the protagonist's owner is a serial rapist and murderer; given that, and given how good Reed is at writing these sorts of stories, you can probably imagine how chilling it is. As usual, there is a sharp barb in the ending, and not the one I was expecting. Good if you can handle the graphic violence and disturbing subject material. (7) "Burning Bibles" by Alan Wall: This is an interesting twist on the spy thriller. A three-letter agency in charge of investigating possible terrorist plots becomes suspicious after a warehouse of Bibles burns in mysterious circumstances. The agent they send in is a deaf-mute with special powers of intuition. This prompted some eye-rolling, and there's a lot of magic disability powers here to annoy, but it's played mostly straight after that introduction. The rest is a fairly conventional spy story, despite special empathic powers, but it's one I enjoyed and thought was fairly well-written. (7) Rating: 6 out of 10

1 October 2013

Gunnar Wolf: Piracy and culture circulation: #encirc13

This week's lesson in the Arte y cultura en circulación: crear y compartir en tiempos digitales course talks about piracy and the circulation of culture, a topic that has been debated over and over throughout time. And a topic, yes, that can always lead to interesting discussions. This time, we are requested to choose one among ten ideas from the media groups' discourse on what piracy is and what it means for the "cultural industry". There are tons of material written already on several of those ten lines (e.g. piracy disincentivizes creativity, or two that can be seen as two faces of the same argument: if a consumer can have free access to cultural products, he will stop spending his money on them, and every time a consumer has access to an illegal copy, the industry loses a sale), and some are quite obvious (e.g. piracy makes jobs be lost... Just look at the amount of people the unauthorized distribution industry feeds! Or possibly, piracy is a prosperous industry that gives money to people distributing illegal products. Of course, that is true. The problem is: what causes said products to be illegal to begin with? Should they be so?). Some other ideas talk about harsher penalties and ways to punish illegal copying in order to drive actors out of that sector (and into the... void?). So, I chose item #4: Cultural products have a high cost because their production is complex (and a tag could be made, linking complex with expensive). I think this item can lead to a long discussion as to what this complexity and cost mean. Some cultural products do require quite a bit of investment, yes. Others don't. How do content producers make the jump to producing expensive works? If I am a new programmer/artist/writer/screenwriter/whatever, most likely my products will not be very complex or expensive. I will start small. And if I excel at my work, somebody will look at me and, in some way, become my patron, my sponsor. Having a sponsor might mean that, based on the results of my good work, I could get hired as a software developer at a large company, or an editorial company would buy the patrimonial rights to my book/music (be it for a fixed fee or for a percentage of sales), or whatever. But the leap is not made in one quantum jump. A newcomer to the cultural scene will at first, most likely, have a hard time selling his products. At first, it takes convincing just to get people to take a shot at looking at your work ("Hey, please take a look at my program and tell me what you think about it!", "Would you be interested in listening to my latest song?" And those two are by far ahead of the first attempts, where the interactions would more likely be "Turn off that $#^#!^ computer, it's well past bedtime" or "Stop murdering that guitar, I'm having a headache"). Maybe the toughest part is to get people to agree to read/hear your work. And from there, you start into a continuum: selling your CDs while performing on the street, then getting to play in a bar, then getting somebody to want to produce (maybe even "discover"!) you. Publishing some short stories in your school magazine, then in a "From our audience" section in a larger magazine, then a collective book, your self-published book, yet-unwritten books by contract... The same story over and over again, in each different field. OK, yes, but... This logical succession still leaves space for the Important Producers with the Mighty Big Pockets for the most wanted/largest productions, right?
And were unauthorized distribution (piracy) to be the norm (as it currently is, dare I say), wouldn't they stop producing an important portion of cultural works? I'd be tempted to say so. However, a different actor comes into play. When Mighty Big Pockets come into play, they no longer worry only about getting money from each cultural creation, but from all derived uses of it. And the cultural creation industry (when seen as an industry) goes very much hand in hand with the advertising and marketing industries. They end up blending with each other. So, the biggest best sellers will most likely take a hit from illegal copiers. Books are still a great business, but hey, an even better business is (usually) movie making. And when you make a movie out of a great story, you will surely link some advertising into it (or at the very least, push advertising/product-pushing campaigns to go after it). And there, illegal distribution actually helps the money cycle grow stronger. In the early 1990s, the link between dinosaurs and carbonated drinks was a top seller (because Pepsi was a Jurassic Park sponsor). Although I have always loathed the madness around the World Cups (and basically anything that involves football of any kind), I can perfectly remember several of the theme songs for most of the World Cups played during my lifetime. So, in short... No. Illegal distribution does marginally little harm to the income of the cultural business, at any level. And where it does cause some direct harm, it increases the money flux through the auxiliary channels.

16 September 2013

Debian Med

DebConf 13 report (by Andreas Tille)

General impression

I'm beginning my DebConf report in an unofficial "Scenic Hacklab" right at the edge of the lake in Yverdon. This is the right place to reminisce about the last days. When I started cycling from this place to Le Camp 12 days ago I was full of great expectations, and what should I say - the reality has beaten even these. When it comes to comparing DebConfs, even if it is an unfair comparison due to all the differences, my secret long-term favourite was Helsinki, very closely followed by Argentina, and also very closely followed by all the other great DebConfs I joined (and I joined all of those in Europe). Would Le Camp be able to beat it? The short answer is: yes, it is now my favourite DebConf, while I think I do not suffer from the last-DebConf-was-the-best-DebConf syndrome (and I realised there are others thinking the same). As you might probably know, I'm a bit addicted to swimming. While Helsinki admittedly had the better conditions, I was at least able to fix the distance issue using my bicycle. (Hey, those Le Camp photographers did a great job in hiding the fact that you cannot actually touch the lake right from the meadow of Le Camp.) Being able to have my bicycle at DebConf scored some extra points. However, the really great view of the lake and the inspiring "Scenic Hacklab", which was my favourite place, bumped DebConf13 to first place in my personal ranking. So it comes quite naturally to say: "Kudos to the great organisation team!" They did Swiss-like precise work and perfectly succeeded in hiding any problems (I assume there were some, as always) from the attendees, so everything went smooth, nice and shiny for the attendees. The local team was even precise in setting up great weather conditions for DebConf. [Photo: sunrise over the lake] While saying thanks to the local team, I would also like to explicitly thank Luca Capello, who had quite a share in making this DebConf possible at all (while I have to decrease my DebConf score one point because he was not really there - Luca, too bad that you were not able to come full time!). Also thanks to Gunnar and Gannef, who helped remotely (another score down because I was missing them this year as well). Even if it was my favourite DebConf, I was not able to fully work down my todo list (which was not only uploading one package per day, which I at least statistically fulfilled). But that's probably a general feature of todo lists anyway. One item was definitely done: doing my daily swimming BoF. I actually did the other parts of the triathlon, which Christian skipped, and have done in summary about 150km of cycling with 3500m of elevation and an estimated 7-8km of swimming (0m elevation ;-)). Considering the great view of the sunrise over the lake, I was not hating my "senile bed escape" disease too much (I was waking up every day at sunrise) - it was simply a great experience. I will never forget seeing water drops glimmering like gold in the morning sun with the Alps panorama in the distance. I hope I was able to help all interested swimmers with the DebConf Beach Map, which was just a by-product of my activities in DebCamp. Speaking about OSM: I was astonished that the area was covered way less than I expected. Thanks to several DebConf attendees the situation became better, and the map does not only show random trees in the wild but also the tracks leading to them. (Remark: it was no DebConf attendee who is responsible for plastering the map with single trees.)
While I had my mapping focus basically close to the edge of the lake, I was even able to map my very own street. :-) I clearly remember one specific mapping tour when I was invited by the DPL: he convinced me to join him on a bicycle tour, and since I was afraid of getting fired, I joined him instead of keeping on hacking. Also Sorina was brave enough to join us on the tour, and she did quite well. (Sorina, do you remember the agreement about your work on the installer? ;-)) Lucas described the tour as: going uphill on only asphalted roads. Sorina and I witnessed the mighty DPL powers when we left the wood around Le Camp to reach the described road: the asphalt had just been put onto the road - no doubt it was done on the immediate demand of the mighty DPL. :-) DebCamp time flew by, and a lot of known (and unknown) faces arrived at Le Camp. What I really liked a lot this year was that several really young children pulled down the average age of DebConf attendees. I clearly remember all the discussion one year ago about what to do about children. As always, the issue was solved in a typical Debian way: just do it and bring your children - they obviously had a great time as well. I think the youngest child was 2 months and the oldest "child" above 20. ;-) Actually, Baptiste Perrier did great in making the C&W party a success and obviously had a nice time. (I wish my son had been able to come as well, but he needs to write his bachelor's thesis in physics. :-() It was nice to see the kids using all the playing facilities and communicating with geeks. Also, I would like to point out that even the very young attendees had their share in the success of DebConf: just think of the three "bell ringing assistants" who helped me ring the bells for lunch and dinner. I got this cool job from Didier at the beginning of DebCamp. I must say having some real bells ringing is by far nicer than just the "lunch / dinner starts in 10 minutes" from the IRC bot. The only thing I did not understand was that people did not consider ringing the bells at 8:00 for breakfast a good idea. Regarding the food in general, I would also like to send kudos to the kitchen: it was tasty, freshly prepared, regional food with a good rate of change. I really liked this. Extra points for having the chance to sit outside when eating.

Talks

But let's have a look at the conference programme. I'd really recommend watching the videos of the talks Bits from the DPL (video) and Debian Cosmology (video). I considered both talks entertaining and interesting. I also really hope that the effort Enrico Zini started in Debian Contributors (video) will be successful. I had some talks and BoFs myself, starting with Why running a Blend (video), and I admit that (as usual) the number of attendees was quite low, even if I think there is some proof (see below) that it is interesting for way more people, who should consider working more "blendish" in their teams. Do you know how to recruit one developer per year and relax the manpower problem in your team? Feel free to watch the video. We have confirmation that ten DDs of our team considered joining Debian only because Debian Med exists. Admittedly, biology and medicine are really leaf topics inside the Debian universe. So if even this topic, which has a very tiny share of the Debian users, is able to attract this level of attention - how many more people could we win for multimedia, games, GIS and others?
So if you feel you are quite overworked with your packaging and you have no time, this is most probably wrong. The amount of time is basically a matter of the priorities you set for your tasks. Try to put a higher priority on using the existing Blends tools I explained in my talk to attract more users and developers to your team, and by doing so spread the workload over more people. It works; the proof was given in my main talk. So before you start working on a specific package, you should wonder who else could have an even stronger interest in getting this work done, and provide them with some additional motivation and help to reach the common goal. The interesting thing is that my BoF about How to attract new developers for your team (video) - which was a simple report about some by-product of the Blends work - made it into the main talk room and got way more attention. For me this is proof that the Blends concept itself is probably badly perceived, as something like "a few outsiders are doing damn specific stuff which is not really interesting for anybody else", instead of what it really is: smoothing the way from specific upstream applications to the end user via Debian. Once you see the video of this BoF, you can observe how my friend Asheesh Laroia became more and more excited about the Blends concept and admitted what I said above: we should have more Blends for different fields. Funnily enough, Asheesh in his excitement asked me to talk more about Blends. This would have been a really good suggestion ten years ago. At DebConf 3 in Oslo I had my very first talk about Blends (at that time under the name "Debian Internal Projects"). I continuously kept on talking about this (MiniDebConf Peking 2005, DebConf 5, Helsinki (video), DebConf 7, Edinburgh (video), DebConf 8, Mar del Plata (video), DebConf 9, Cáceres (video), MiniDebConf Berlin 2010 (video in German), MiniDebConf Paris 2010 (not video recorded), DebConf 11, Banja Luka (video) ... and these are only (Mini)DebConfs - my talks page is full of this topic), and every new year I try different ways to communicate the idea to my fellow Debianistas. I'm wondering how I could invent a title + abstract avoiding the term Blends, putting "Git", "release" and "systemd versus upstart" in, while still being able to inform about Blends reasonably without the abstract becoming too off-topic. I also registered the Debian Science round table. I admit we were lacking some input from remote attendees via IRC, which used to be quite helpful in the past. The attendees agreed upon the handling of citations in debian/upstream files, which was invented by the Debian Med team to create even stronger bonds to our upstream developers by giving their work extra reward and providing users with even better documentation (see my summary in the Wiki). As usual, I suggested creating some Debian Science offspring like "Debian Astronomy", "Debian Electronics", "Debian Mathematics", "Debian Physics" etc., which could perfectly leave the Debian Science umbrella to get a more fine-grained structure and more focused teams to enhance the contact with our users. Unfortunately, there is nobody who volunteers to take over the lead for such Blends. I have given a short summary about this BoF on the Debian Science mailing list. In the Debian Med meeting I gave a status report. No other long-term team members were attending DebConf, so I gave some kind of introduction for newcomers and interested people.
I also touched on the DebiChem topic, which maintains some packages that are frequently used by biologists, so we have a good connection to this team. Finally, I had registered three BoFs for Blends I'm actually not (or not yet) an active part of. My motivation was to turn the ideas I explained in my main talk into specific applications inside these teams and to help them implement the Blends framework. In the first BoF, about Debian GIS, I showed the usual team metrics graphs to demonstrate that the packaging team Pkg-OSM is in danger of becoming MIA. There are only three persons doing actual uploads. Two of them were at DebConf but did not join the BoF, because they do not consider their contribution to Pkg-OSM a major part of their general Debian work. I will contact the main contributor, David Paleino, about his opinion on moving the packages step by step into the maintenance of the Debian GIS packaging team, to try to overcome the split between two teams that share a good amount of interest. At least if I might become an Uploader for one of the packages currently maintained by Pkg-OSM, I will move it to pkg-grass-devel (which is the name of the packaging team of Debian GIS, for historical reasons). The attendees of the BoF considered this plan sensible. Moreover, I talked about my experiences with OSGeo Live - an Ubuntu derivative that tries to provide a full tool chain to work on GIS and OSM problems ... basically the same goal as Debian GIS has, just provided by the OSGeo project. I'm lurking on the OSGeo mailing list; when I asked explicitly, I got the answer that they are working together with Debian GIS and are using a common repository (which is IMHO the optimal way of cooperation). However, it seems that several protagonists of OSGeo Live are underestimating the resources provided by Debian. For instance, there was a question about Java packaging issues, but people were not aware of the existence of the debian-java mailing list. I was able to give an example of how the Debian Med team managed to strengthen its ties to BioLinux, which is also an Ubuntu derivative, for biologists. At our first Debian Med sprint in 2011 we invited developers from BioLinux and reached a state where they are using the very same VCS on Alioth where we are maintaining our packages. At DebConf I was able to upload two packages where BioLinux developers made certain changes to enhance the user experience. My "work" was just bumping the version number in the changelog, and so we profited from the work of the BioLinux developers just as they profit from our work. I plan to dive a bit more into Debian GIS and try to strengthen the connection to OSGeo Live a bit. The next BoF was the Debian Multimedia meeting. It was nice that the current leader of Ubuntu Studio, Kaj Ailomaa, joined the meeting. When I was explaining my ideas about cooperation with derivatives, I repeated my detailed explanation about the relation with BioLinux. It seems every topic you could cover inside Debian has its related derivative. So to me it seems quite natural to work together with the developers of the derivative to join forces. I actually consider a Blend a derivative done the right way = inside Debian. The final work that might be left for the derivers is doing some shiny customising of backgrounds or something like that - but all the hard work could and should be done in common with the relevant Debian team. My dream is to raise such relevant teams inside Debian ... the Blends.
Finally, the last BoF of this series was the Debian Games meeting. As always, I presented the team metrics graphs, and the Debian Games team members who attended the BoF were quite interested. It seems to be a little-known fact that team metrics are done for several teams inside Debian, so I repeat the link for those who are not yet aware of it. As a result of the BoF, Debian Games team members agreed to put some more effort into maintaining their Blends tasks. Moreover, Miriam Ruiz wants to put some effort into reviving Debian Jr. Regarding Debian Jr., there was an interesting talk about DouDouLinux - in case you might want to watch the video, I'd recommend skipping the first 30min and rather watching the nice live demo. There was also an ad hoc BoF about Debian Jr scheduled to bring together all people interested in this cute project, and Per Anderson volunteered to take over the lead. I have given a summary about this specific BoF on the Debian Jr list. As for some other talks that I'd regard as remarkable for various reasons: I'd regard the talk "Debian-LAN" by Andreas Mundt as a hidden pearl, because it did not get a lot of attention, but after having seen the video I was quite impressed - specifically because it is also relevant for the Blends topic. I also liked "Paths into Debian" by Moray Allan (and I was only able to enjoy the latter talks thanks to the great work of the video team!) because it also touched the same topic I was concerned about in my mentoring talk. Related to this was, in my opinion, also "Women in Debian 2013", where we tried to find out reasons for the lack of women compared to other projects and how to overcome this issue.

Memories

Besides the talks, I will probably never forget two specific moments that make DebConf so special. One of these moments is recorded on an image that clearly needs no words - just see Geert hovering over the grass. [Photo: Geert hovering over the grass] Another strong moment in my personal record was in the DebConf newbies BoF "First time at DebConf", which unfortunately was not recorded - but at least for this statement it would have been very great to have some reference better than personal memory. Aarsh Shah, a GSoC student from India, suddenly stood up and said: "Four months ago I was not even aware that Free Software exists. Now I'm here with so many people who are totally equal. If I tell my mother at home that I was standing in the same queue where the Debian Project Leader was queuing up for food, she will never believe me." He was totally excited about things we regard as normal. IMHO we should memorise moments like this, which might be part of the key to success in cultures where Debian is widely unknown and very rarely in use. Amongst these unscheduled great moments, the scheduled day trip was also a great thing. I had a really hard time deciding which tour to join, but ended up in the "long distance walking (or should I say running) group". Inspired by the "running Bubulle", who was flashing between the walking groups, we went uphill at 5.4km/h, which was a nice exercise. Our destination, the large cliff, was an exciting landscape, and I guess we all enjoyed the dinner organised by the "Trout cabal". ;-) [Photo: saying goodbye to friends] So I had a hard time leaving Le Camp and tried hard to make sure my memories will remain as long as possible. Keeping some signs attached to my bicycle and conserving the "Scenic Hacklab" sign for my private "scenic hacklab @ home" was one part.
I have also cut some branches of the Buxus sempervirens in Le Camp and put them in my garden at home (where I am creating a hedgerow from places where I spent some great time). These will probably form a great part of the hedgerow ... Thanks for reading this longish report. Looking forward to seeing you all in Germany in 2015 (or earlier), Andreas. [Photo: Scenic Hacklab @ home]

25 January 2013

Russ Allbery: The "Why?" of Work

(This is going to be long and rambling. Hopefully at some point I'll be able to distill it into something shorter.) In preparation for a tech leads retreat tomorrow, several of us at work were asked to watch Simon Sinek's TED talk, "How great leaders inspire action". I'll be honest with you: I hated this talk. Sinek lost me right at the start by portraying his idea as the point of commonality among great leaders (don't get me started on survivorship bias; it's a pet peeve) and then compounded the presentation problem with some dubious biology about brain structure. So, after watching it, I ranted a bit about how much I disliked it (to, as it turns out, people who had gotten a lot out of it). (Don't do this, btw. It's nearly always worthwhile to suppress negativity about something someone else enjoyed. I say this to increase the pool of people who can remind me of what I said the next time I forget. Which, if my normal pattern holds, will be about five minutes from now.) Thankfully, I work with tolerant and forgiving people who kindly pointed out the things they saw in the video that I missed, and we ended up having a really good hour and a half discussion, which convinced me that there's an idea under here that's worth talking about. It also helped clarify for me just how much I hate the conventional construction of both leadership and success. This talk is framed around getting other people to do things, which is one of the reasons why I had such a negative reaction to it. It's right there in the title: leaders inspiring action. This feeds into what at least in the United States is an endemic belief that the world consists of leaders and followers, and that the key to success in the world (in business, in politics, in everything else) is to become one of the leaders and accumulate followers (most frequently as customers, since we have a capitalist obsession). This is then defined as success. I think this idea of success is bullshit. Now, that statement requires an immediate qualification. Saying that the societal definition of success is bullshit is a statement from privilege. I have the luxury of saying that because I am successful; I'm in a position where I have a lot of control over my own job, I'm not struggling to make ends meet, and I can spend my time pondering existential questions like how to define success. If I were a little less lucky, success would be whatever put food on the table and kept a roof over my head. I'm making an argument from the top of Maslow's hierarchy. But that is, in a roundabout way, my point: why is defining and constructing success still so hard, and why do we do such a bad job at it, even when we're at the top of the pyramid and able to focus on self-actualization? The context of this talk for my group is pre-work for a discussion about, in Sinek's construction, the "why?" of our group. Why are we here, what is our purpose, and what do we care about? By the mere fact that we are able to ask questions like that, you can correctly surmise that we're already successful. The question, therefore, is what should we do with that success? I normally hear one or more of the following answers, all of which I find unsatisfying or problematic. So, what should I do with success? Or, put another way, since I have the luxury of figuring out a "why?", what's my "why?" This question comes at a good time. As I've mentioned the last couple of days here, I've just come off of two days of the most fun I've had at work in the last several years. 
I spent about 25 hours total building a log parsing infrastructure that I'm quite fond of, and which may even be useful to other people. And I did that in response to a rather prosaic request: produce a report of user agents by authenticated unique users, rather than by hits, so that we can get an idea of what percentage of our community uses different devices or browsers. This was a problem that I probably could have solved adequately enough for the original request in four hours, maybe less, and then moved on to something else. I spent six times that long on it. That's something I can do because I'm successful: that's the sort of luxury you get when you can define how you want to do your job. So, apparently I have an answer to my question staring me in my face: what I do with success, when I have it, is use that leeway to produce elegant and comprehensive solutions to problems in a way that fully engages me, makes the problem more interesting, and constructs infrastructure that I can reuse for other problems. Huh. That sounds like a "why?" response that's quite common among hackers and software developers. Nothing earth-shattering there... except why is that so rare in a business context? Why isn't it common to answer questions like "what is our group mission statement" with answers like that? This is what I missed in the TED talk, and what the subsequent discussion with my co-workers brought to light for me. I think Sinek was getting at this, but I think he buried the lede. The "why?" should be something that excites you. Something that you're passionate about. Something that you believe in. He says that's because other people will then believe in it too and will buy it from you. I personally don't care about (or, to be honest, have active antipathy towards) that particular outcome, but that's fine; that's not the point. The point is that a "why?" comes from the heart, from something that actually matters, and it creates a motivating and limiting principle. It defines both what you want to do and what you don't want to do. That gives me a personal answer. My "why?" is that I want to build elegant solutions to problems and do work that I find engaging and can be proud of afterwards. I automate common tasks not because I particularly care about being efficient, but because manually doing common tasks is mind-numbing and boring, and I don't like being bored. I write reliable systems not particularly because that helps clients, but primarily because reliable software is more elegant and beautiful and unreliable software offends me. (Being more usable and less frustrating for clients is also good; don't get me wrong. It's just not a motive. It's an outcome.) What does that mean for a group mission statement, a group "why?" Usually these exercises produce some sort of distillation of the collective job responsibilities of the people in the group. Our mission is to maintain core infrastructure to let people do their work and to support authentication and authorization services for the university, yadda yadda yadda... this is all true, in its way, but it's also boring. One can work oneself up to caring about things like that, but it requires a lot of effort. But we all have individual "why?" answers, and I bet they look more like my answer than they do like traditional mission statements.
If we're in a place where we have the luxury of worrying about self-actualization questions, what gets us up in the morning, what makes it exciting to go into work, is probably some variation on doing interesting and engaging work. But it's probably a different variation for everyone in the group. For example, as you can see from above, I like building things. My happiest moments are when someone gives me a clearly-defined problem statement that fills a real need and then goes away and leaves me in peace to solve it. One thing I've learned is that I'm not very good at coming up with the problem statements myself; I can do it, but usually I end up solving some problem that isn't very important to other people. I love it when my employer can hand me real problems that will make the world better for people, since often they're a lot more interesting (and meaningful) than the problems I come up with on my own. But that's all highly idiosyncratic and is not going to be shared by everyone in my group. I'm an introvert; the "leave me alone" part of that is important. Other people are extroverts; what gets them up in the morning is, in part, engaging with other people. Some people care passionately about UI design. (I also care passionately about UI design, but the UI designs that I'm passionate about are the ones that are natural for my people, who are apparently aliens from another galaxy, so I'm not the person you want doing UI design for things used by humans.) Others might be particularly interested in researching new technology, or coming up with those problem statements, or in smoothly-running production systems, or in metrics and reporting... I don't really know, but I do know that there's no one answer that fits everyone. Which means that none of our individual "why?" responses should become the group "why?". However, I think that leads to an answer, and it's the answer I'm going to advocate for in the meeting tomorrow. I believe the "why?" of our team should be to use the leeway, trust, and credibility that we have because we're successful to try to create an environment in which every individual member of the team can follow their own "why?" responses. In other words, I think the mission of our group should not be about any specific technology, or about any specific set of services, or outcomes. The way we should use our success is to let every member of our team work in a way that lights their fire. That makes them excited to come into work. That lets each of us have as much fun as I had in the past two days. We should have as our goal to create passionate and empowered employees. Nothing more, but nothing less. This is totally not how group mission statements are done. They're always about blending in to some larger overall technological purpose. But I think that's a mistake, and (despite disliking the presentation), I think that's what this TED talk does actually get at. The purpose is the what, or sometimes the how. It's not the why. And the why isn't static; technology is changing fast, and people are using technology in different ways. Any mission statement around technology today is going to be obsolete in short order, and is going to be too narrow. But I think the flip side is that good technological solutions to the problems of the larger organization are outcomes that fall out of having passionate and inspired employees. If people can work in ways that engage and excite them, they will end up solving problems. 
We're all adults; we know that we're paid to do a job and that job needs to involve solving real problems for the larger organization. All of that is obvious, and therefore none of that belongs in a mission statement. A mission statement should state the inobvious. And while some visionary people can come up with mission statements around technology or around how people use technology that can be a rallying point for a team or organization, I think that's much rarer than people like to think it is. If you stumble across one like that, great, but I think most teams, and certainly our team, would be better served by having the team mission statement be to enable every individual on the team to be passionate about their work. What should our group work on next? Figure out what the university's problems are, line those needs up with the passions of the members of the team, ask the people most excited about each problem how they want to solve that problem, and write down the answers. There's our roadmap and our strategy, all rolled into one.

22 January 2013

Russ Allbery: Review: Fantasy & Science Fiction, March/April 2011

Review: Fantasy & Science Fiction, March/April 2011
Editor: Gordon van Gelder
Issue: Volume 120, No. 3 & 4
ISSN: 1095-8258
Pages: 258
Charles de Lint's book review column sticks with the sorts of things he normally reviews: urban and contemporary fantasy and young adult. Predictably, I didn't find that much of interest. But I was happy to see that not all the reviews were positive, and he talked some about how a few books didn't work. I do prefer seeing a mix of positive and negative (or at least critical) reviews. James Sallis's review column focuses entirely on Henry Kuttner and C.L. Moore (by way of reviewing a collection). I'm always happy to see this sort of review. But between that and de Lint's normal subject matter, this issue of F&SF was left without any current science fiction reviews, which was disappointing. Lucius Shepard's movie review column features stunning amounts of whining, even by Shepard's standards. The topic du jour is how indie films aren't indie enough, mixed with large amounts of cane-shaking and decrying of all popular art. I find it entertaining that the F&SF film review column regularly contains exactly the sort of analysis that one expects from literary gatekeepers who are reviewing science fiction and fantasy. Perhaps David Langford should consider adding an "As We See Others" feature to Ansible cataloging the things genre fiction fans say about popular movies. "Scatter My Ashes" by Albert E. Cowdrey: The protagonist of this story is an itinerant author who has been contracted to write a family history (for $100K, which I suspect is a bit of tongue-in-cheek wish fulfillment) and has promptly tumbled into bed with his employer. But he is somewhat serious about the writing as well, and is poking around in family archives and asking relatives about past details. There is a murder (maybe) in the family history, not to mention some supernatural connections. Those familiar with Cowdrey's writing will recognize the mix of historical drama, investigation, and the supernatural. Puzzles are, of course, untangled, not without a bit of physical danger. Experienced fantasy readers will probably guess at some of the explanation long before the protagonist does. Like most Cowdrey, it's reliably entertaining, but I found it a bit thin. (6) "A Pocketful of Faces" by Paul Di Filippo: Here's a bit of science fiction, and another mystery, this time following the basic model of a police procedural. The police in this case are enforcing laws around acceptable use of "faces" in a future world where one can clone someone's appearance from their DNA and then mount it on a programmable android. As you might expect from that setup, the possibilities are lurid, occasionally disgusting, and inclined to give the police nightmares. After some scene-setting, the story kicks in with the discovery of the face of a dead woman who, at least on the surface, no one should have any motive to clone. There were a few elements of the story that were a bit too disgusting for me, but the basic mystery plot was satisfying. I thought the ending was a let-down, however. Di Filippo tries to complicate the story and, I thought, went just a little too far, leaving motives and intent more confusing than illuminating. (6) "The Paper Menagerie" by Ken Liu: Back to fantasy, this time using a small bit of magic to illustrate the emotional conflicts and difficulties of allegiance for second-generation immigrants. Jack is the son of an American father and a Chinese mother who was a mail-order bride.
He's young at the start of the story and struggling with the embarrassment and humiliation that he feels at his mother's history and the difficulties he has blending in with other kids, leading to the sort of breathtaking cruelty that comes so easily from teenagers who are too self-focused and haven't yet developed adult empathy. I found this very hard to read. The magic is beautiful, personal, and very badly damaged by the cruelty in ways that can never really be fixed. It's a sharp reminder of the importance of being open-hearted, but it's also a devastating reminder that the lesson is normally learned too late. Not the story to read if you're prone to worrying about how you might have hurt other people. (6) "The Evening and the Morning" by Sheila Finch: This long novella is about a third of the issue and is, for once, straight science fiction, a somewhat rare beast in F&SF these days. It's set in the far future, among humans who are members of the Guild of Xenolinguists and among aliens called the Venatixi, and it's about an expedition back to the long-abandoned planet of Earth. I had a lot of suspension of disbelief problems with the setup here. While Earth has mostly dropped out of memory, there's a startling lack of curiosity about its current condition among the humans. Finch plays some with transportation systems and leaves humanity largely dependent on other races to explain the failure to return to Earth, but I never quite bought it. It was necessary to set up the plot, which is an exploration story with touches of first contact set on an Earth that's become alien to the characters, but it seemed remarkably artificial to me. But, putting that aside, I did get pulled into the story. Its emotional focus is one of decline and senescence, a growing sense of futility, that's halted by exploration, mystery, and analysis. The question of what's happened on Earth is inherently interesting and engaging, and the slow movement of the story provides opportunities to build up to some eerie moments. The problem, continuing a theme for this issue, is the ending. Some of the reader's questions are answered, but most of the answers are old, well-worn paths in science fiction. The emotional arc of the story is decidedly unsatisfying, at least to me. I think I see what Finch was trying to do: there's an attempted undermining of the normal conclusion of this sort of investigation to make a broader point about how to stay engaged in the world. But it lacked triumph and catharsis for me, partly because the revelations that we get are too pedestrian for the build-up they received. It's still an interesting story, but I don't think it entirely worked. (6) "Night Gauntlet" by Walter C. DeBill, Jr., et al.: The full list of authors for this story (Walter C. DeBill, Jr., Richard Gavin, Robert M. Price, W.H. Pugmire, Jeffrey Thomas, and Don Webb) provides one with the first clue that it's gone off the rails. Collaborative storytelling, where each author tries to riff off the work of the previous author while spinning the story in a different direction, is something that I think works much better orally, particularly if you can watch facial expressions while the authors try to stump each other. In written form, it's a recipe for a poorly-told story. That's indeed what we get here. The setup is typical Cthulhu mythos stuff: a strange scientist obsessed with conspiracy theories goes insane, leaving behind an office with a map of linkages between apparently unrelated places.
The characters in the story also start going insane for similar reasons, leading up to a typical confrontation with things man was not meant to know, or at least pay attention to. If you like that sort of thing, you may like this story better than I did, but I thought it was shallow and predictable. (3) "Happy Ending 2.0" by James Patrick Kelly: More fantasy, this time of the time travel variety. (I call it fantasy since there's no scientific explanation for the time travel and it plays a pure fantasy role in the story.) That's about as much as I can say without giving away the plot entirely (it's rather short). I can see what Kelly was going for, and I think he was largely successful, but I'm not sure how to react to it. The story felt like it reinforced some rather uncomfortable stereotypes about romantic relationships, and the so-called happy ending struck me as the sort of situation that was going to turn very nasty and very uncomfortable about five or ten pages past where Kelly ended the story. (5) "The Second Kalandar's Tale" by Francis Marion Soty: The main question I have about this story is one that I can't answer without doing more research than I feel like doing right now: how much of this is original to Soty and how much of it is straight from Burton's translation of One Thousand and One Nights. Burton is credited for the story, so I suspect quite a lot of this is from the original. Whether one would be better off just reading the original, or if Soty's retelling adds anything substantial, are good questions that I don't have the background to answer. Taken as a stand-alone story, it's not a bad one. It's a twisting magical adventure involving a djinn, a captive woman, some rather predictable fighting over the woman, and then a subsequent adventure involving physical transformation and a magical battle reminiscent of T.H. White. (Although I have quite likely reversed the order of inspiration if as much of this is straight from Burton as I suspect.) Gender roles, however, are kind of appalling, despite the presence of a stunningly powerful princess, due to the amount of self-sacrifice expected from every woman in the story. Personally, I don't think any of the men in the story are worth anywhere near the amount of loyalty and bravery that the women show. Still, it was reasonably entertaining throughout, in exactly the way that I would expect a One Thousand and One Nights tale to be. Whether there's any point in reading it instead of the original is a question I'll leave to others. (7) "Bodyguard" by Karl Bunker: This is probably the best science fiction of the issue. The first-person protagonist is an explorer living with an alien race, partly in order to flee the post-singularity world of uploaded minds and apparent stagnation that Earth has become. It's a personal story that uses his analysis of alien mindsets (and his interaction with his assigned alien bodyguard) to flesh out his own desires, emotional background, and reactions to the world. There are some neat linguistic bits here that I quite enjoyed, although I wish they'd been developed at even more length. (The alien language is more realistic than it might sound; there are some human languages that construct sentences in a vaguely similar way.) It's a sad, elegiac story, but it grew on me.
(7) "Botanical Exercises for Curious Girls" by Kali Wallace: One has to count this story as science fiction as well, although for me it had a fantasy tone because the scientific world seems to play by fantasy rules from the perspective of the protagonist. Unpacking that perspective is part of the enjoyment of the story. At the start, she seems to be a disabled girl who is being cared for by a strange succession of nurses who vary by the time of day, but as the story progresses, it becomes clear that something much stranger is going on. There are moments that capture a sense of wonder, reinforced by the persistantly curious and happy narrative voice, but both are undercut by a pervasive sense of danger and dread. This is a light story written about rather dark actions. My biggest complaint with the story is that it doesn't so much end as wander off into the sunset. It set up conflict within a claustrophobic frame, so I can understand the thematic intent of breaking free of that frame, but in the process I felt like the story walked away from all of the questions and structure that it created and ended in a place that felt less alive with potential than formless and oddly pointless. I think I wanted it to stay involved and engaged with the environment it had created. (6) "Ping" by Dixon Wragg: I probably should just skip this, since despite the table of contents billing and the full title introduction, it's not a story. It's a two-line joke. But it's such a bad two-line joke that I had to complain about it. I have no idea why F&SF bothered to reprint it. (1) "The Ifs of Time" by James Stoddard: This certainly fits with the Arabian Nights story in this issue. The timekeeper of a vast and rambling castle (think Gormenghast taken to the extreme) wanders into a story-telling session in a distant part of the castle. The reader gets to listen to four (fairly good) short stories about time, knowledge, and memory, told in four very different genres. All of this does relate to why the timekeeper is there, and the frame story is resolved by the end, but the embedded stories are the best part; each of them is interesting in a different way, and none of them outlast their welcome. This was probably the strongest story of this issue. (7) Rating: 6 out of 10

16 January 2011

Dirk Eddelbuettel: Plotting overbought / oversold regions in R

The good folks at Bespoke Investment Group frequently show charts of so-called overbought or oversold levels; see e.g. here for the most recent global markets snapshot. Classifying markets as overbought or oversold is a popular heuristic. It starts from computing a rolling smoothed estimate of the prices, usually via an (exponential or standard) moving average over a suitable number of days (where Bespoke uses 50 days, see here). This is typically coupled with a (simple) rolling standard deviation. Overbought and oversold regions are then constructed by taking the smoothed mean plus/minus one and two standard deviations. Doing this in R is pretty easy thanks to the combination of R's rich base functions and its add-on packages from CRAN. Below is a simple function I wrote a couple of months ago---and I figured I might as well release it. It relies on the powerful packages quantmod and TTR by my pals Jeff Ryan and Josh Ulrich, respectively.
## plotOBOS -- displaying overbought/oversold as eg in Bespoke's plots
##
## Copyright (C) 2010 - 2011  Dirk Eddelbuettel
##
## This is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 2 of the License, or
## (at your option) any later version.
suppressMessages(library(quantmod))     # for getSymbols(), brings in xts too
suppressMessages(library(TTR))          # for various moving averages
plotOBOS <- function(symbol, n=50, type=c("sma", "ema", "zlema"), years=1, blue=TRUE) {
    today <- Sys.Date()
    X <- getSymbols(symbol, src="yahoo", from=format(today-365*years-2*n), auto.assign=FALSE)
    x <- X[,6]                          # use Adjusted close
    type <- match.arg(type)
    xd <- switch(type,                  # compute xd as the central location via selected MA smoother
                 sma = SMA(x,n),
                 ema = EMA(x,n),
                 zlema = ZLEMA(x,n))
    xv <- runSD(x, n)                   # compute xv as the rolling volatility
    strt <- paste(format(today-365*years), "::", sep="")
    x  <- x[strt]                       # subset plotting range using xts' nice functionality
    xd <- xd[strt]
    xv <- xv[strt]
    xyd <- xy.coords(.index(xd),xd[,1]) # xy coordinates for direct plot commands
    xyv <- xy.coords(.index(xv),xv[,1])
    n <- length(xyd$x)
    xx <- xyd$x[c(1,1:n,n:1)]           # for polygon(): from first point to last and back
    if (blue) {
        blues5 <- c("#EFF3FF", "#BDD7E7", "#6BAED6", "#3182BD", "#08519C") # cf brewer.pal(5, "Blues")
        fairlylight <- rgb(189/255, 215/255, 231/255, alpha=0.625) # aka blues5[2]
        verylight <- rgb(239/255, 243/255, 255/255, alpha=0.625) # aka blues5[1]
        dark <- rgb(8/255, 81/255, 156/255, alpha=0.625) # aka blues5[5]
    } else {
        fairlylight <- rgb(204/255, 204/255, 204/255, alpha=0.5)         # grays with alpha-blending at 50%
        verylight <- rgb(242/255, 242/255, 242/255, alpha=0.5)
        dark <- 'black'
    }
    plot(x, ylim=range(range(xd+2*xv, xd-2*xv, na.rm=TRUE)), main=symbol, col=fairlylight) 		# basic xts plot
    polygon(x=xx, y=c(xyd$y[1]+xyv$y[1], xyd$y+2*xyv$y, rev(xyd$y+xyv$y)), border=NA, col=fairlylight) 	# upper band between +1 and +2 sd
    polygon(x=xx, y=c(xyd$y[1]-1*xyv$y[1], xyd$y+1*xyv$y, rev(xyd$y-1*xyv$y)), border=NA, col=verylight)# center band between -1 and +1 sd
    polygon(x=xx, y=c(xyd$y[1]-xyv$y[1], xyd$y-2*xyv$y, rev(xyd$y-xyv$y)), border=NA, col=fairlylight) 	# lower band between -2 and -1 sd
    lines(xd, lwd=2, col=fairlylight)   # central smoothed location
    lines(x, lwd=3, col=dark)           # actual price, thicker
    invisible(NULL)
}
After downloading data and computing the rolling smoothed mean and standard deviation, it really is just a matter of plotting (appropriate) filled polygons. Here I used colors from the neat RColorBrewer package with some alpha blending. Colors can be turned off via an option to the function; ranges, data length and type of smoother can also be picked. To call this in R, simply source the file and then call, say, plotOBOS("^GSPC", years=2), which creates a two-year plot of the S&P 500 as shown here: [Example chart of overbought/oversold levels from the plotOBOS() function] This shows the market did indeed bounce off the oversold lows nicely on a few occasions in 2009 and 2010 --- but also continued to slide after hitting the condition. Nothing is foolproof, and certainly nothing as simple as this is, so buyer beware. But it may prove useful in conjunction with other tools. The code for the script is here and of course available under GPL 2 or later. I'd be happy to help incorporate it into some other finance package. Lastly, if you've read this far, also consider our R / Finance conference coming at the end of April.
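For completeness, a quick usage sketch (assuming the function above has been saved to a file named plotOBOS.R --- the file name is my assumption --- and that getSymbols() can still fetch data from Yahoo Finance):
source("plotOBOS.R")                              # load the plotOBOS() function defined above
plotOBOS("^GSPC", years=2)                        # two-year S&P 500 chart using the blue palette
plotOBOS("^GSPC", n=100, type="ema", blue=FALSE)  # 100-day EMA smoother, gray palette instead
All arguments used here (n, type, years, blue) are parameters of the function shown above.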

Edit: Corrected several typos with thanks to Josh.

22 October 2009

Ingo Juergensmann: Panorama photo of Akureyri

When I went on my business trip with AIDAaura during the second half of July, I took our Nikon D90 with me and took some pictures. In total I shot 24 GB of pictures (JPEG+RAW), and some of them were taken with the idea of later stitching a nice panorama picture out of them with Hugin. Usually I selected the necessary control points for blending by hand. Of course this is arduous work, especially once you've heard that it can be done automatically. But in Debian this function doesn't work out of the box, because the autopano-sift-c package, which contains the needed program, is not part of the distribution because of some US patents.

Luckily there is a package on http://www.debian-multimedia.org that solves this problem. Generating panorama views became easy with Hugin after I installed that package. And as a first result I made a panorama view of Akureyri, Iceland:

Panorama of Akureyri


The panorama consists of 10 photos of 4288 x 2848 px each, giving a total panorama view of 11517 x 2497 px (34 MB).

15 November 2008

Stefano Zacchiroli: from Vim to Emacs - part 2

One month with Emacs and counting - Part 2
Other posts in the from Vim to Emacs series: part 1
A while ago I blogged about my switch from Vim to Emacs, promising a blog post series; quite a mouthful :-) Nevertheless, it's time to continue the series. The first part was about why I think that nowadays Emacs is ready to be switched to. This second part is about flawed Vim design choices which substantially contributed to my choice. (Of course this whole post comes with an IMO/rant disclaimer, it can't be otherwise, so take it cum grano salis.) Flawed Vim design choices Vim started as a wonderful editor, following the path of Vi, which in turn got a good deal of design choices right. One is the often debated modality. In spite of being an HCI PITA in general, modality is not a problem at all for software you are using every single day (and the editor is the 2nd program I use the most, with the terminal being the 1st), because you learn it as a habit. In the case of Vim, modality is good because it lets you use powerful yet simple motions in command mode, which can be easily combined with operator-pending commands. This can be considered yet another incarnation of the UNIX philosophy, cast inside the editor itself. ... which brings us to the good old joke which describes Emacs as a great operating system, lacking only a decent editor. In fact, Vim is no longer the nice Unix philosophy player which it used to be, and Emacs plays much more nicely with external tools than Vim. As arguments, let's consider a few (not so) recent Vim evolutions: Well, I acknowledge that Emacs got this choice right, and that it's an important one: Emacs supports interaction with external processes, and plugin authors have been happy to exploit it to create really cool stuff (sticking to the UNIX philosophy). Please welcome: Flyspell, M-x grep (shameless smartass plug: same number of keystrokes as :grep), Grand Unified Debugger, a sampling of language-specific major modes with top-level support. (But I recommend having a look at least once at the coolness of other stuff built on top of external process support, like Flymake, mentioned by dancer not long ago.) Even in this respect, I acknowledge that Emacs got this right. It has chosen a single, now full-fledged, programming language (Elisp), which is offered both to the end user (fact: the average Emacs user knows much better how to program her own editor than the average Vim user) and to extension developers. The standard library of the language is organized with consistent naming (even though not always an intuitive one) and documented in an organized manner. You might not like the language, but at least it has been taken seriously and managed as such. Finally, since Elisp is a Lisp dialect, you may be able to reuse what you learn from it elsewhere. Vim script is too committing: it is useful only inside Vim. I believe that contributed significantly to its scarce adoption among Vim users. The fact that Elisp was born together with Emacs and that it is also used internally for the editor implementation is not relevant here: I do care about what is offered to me as a user and as an extension developer. What I see on the Emacs side in this respect is much better than what I see on the Vim side. Moreover, from the point of view of the users, is the difference between what is inside the editor and what is outside (i.e., the extensions) that relevant? I think it is not, and that brings me to the final Vim choice which I consider flawed and which has the potential to seriously limit its future evolution: Where to from here in this blog post series?
For sure some tips on how to migrate for hard-core Vim users; then we'll see ... If you enjoyed the read, please let me know by commenting on the discussion page (as you did for the last post, thanks!).

16 April 2007

Pablo S. Torralba: Photo tech

In the last two weeks I've discovered (or was told about) some programs quite useful for an outdoor technician (amateur photographer):


10 May 2006

Uwe Hermann: Debian packages release names - Reloaded

Upon popular request (my post was even featured on Debian Weekly News), I re-ran my previous query on the changelog files in Debian packages. This time, however, I didn't only retrieve 40 random package release names, but "all" of them, for unknown values of "all". I didn't analyze some of the files (missing permissions), and maybe I missed one or two because my query sucked, but I think I've got most of them. I ran a slightly more complicated query than last time, using the data from gluck:/org/lintian.debian.org/laboratory/. I have not the slightest idea how old the files in that archive are, but there are ca. 10,000 packages in there — more than enough, if you ask me. The results (78 KB) this time are in alphabetical order, and include the package names where the strings were found. There's a total of 1408 strings. Here are 20 randomly chosen strings, for some more fun (a small sketch of one way such strings can be extracted follows the list):
gdb: * The "Ahhhhhhhhhhhhhhhhhh!" Release.
glibc: * The "Fuck Me Harder" release.
abiword: * The "Foolin' Myself" release.
opensc: * The "RTFM" release.
directory-administrator: * The "On Train" release
xchat: * The "Merry Christmas, mine beloved Xchat users!" release.
apache: * The "Yes, we know there is a new upstream release" upload.
mmm-mode: * The "But I'm Not Dead Yet!" Release
mozilla-firefox: * The "becoming more and more an iceweasel" release.
nano: * The "Marbella, ciudad hermanada con Benidorm" release.
thy: * The `Empty Spaces' release.
glibc: * The "Chainsaw Psycho" release.
sam: * The `Minime' release.
xchat: * The "Binary only" release.
tellico: * The "pbuider and buildds are not the same" package release
pingus: * The "All you pingus are belong to blendi" release
xchat: * The "Ok, wrong patch, excuse me guys :)" release.
cappuccino: * The "It's time for the upload" release
abiword: * The "Got A Good Thing Goin'" release.
firefox: * The "what he taketh, he giveth back" release.
I also compiled some small statistics this time. Here are the top 20 packages (the ones with the most release names):
64 abiword
62 thy
41 xchat
35 glibc
31 shadow
31 abcde
28 menu
18 reportbug
18 firefox
17 fetchmail
15 ccze
14 tama
14 mozilla-firefox
12 nano
12 apache2
11 gaim
10 debconf
9 mailutils
9 lirc
9 geneweb
Feel free to grab the whole results file for more reading fun during boring hours of the day.
If you do any further processing or analysis of any kind with the data, please post a comment and let us all know ;-)