Search Results: "Ritesh Raj Sarraf"

22 April 2022

Ritesh Raj Sarraf: Systemd Service Hang

Finally, TIL what all can be the reasons for systemd services to hang indefinitely. The internet is flooded with numerous reports on this topic but no clear answers; just uselessly repeated workarounds like systemctl daemon-reload and systemctl daemon-reexec, which do not help in this scenario. The scene would be something along the lines of:
rrs         6467  0.0  0.0  23088 15852 pts/1    Ss   12:53   0:00          \_ /bin/bash
rrs        11512  0.0  0.0  14876  4608 pts/1    S+   13:18   0:00              \_ systemctl restart snapper-timeline.timer
rrs        11513  0.0  0.0  14984  3076 pts/1    S+   13:18   0:00                  \_ /bin/systemd-tty-ask-password-agent --watch
rrs        11514  0.0  0.0 234756  6752 pts/1    Sl+  13:18   0:00                  \_ /usr/bin/pkttyagent --notify-fd 5 --fallback
The snapper-timeline service is important to me and it not running for months is a complete failure. Disappointingly, commands like systemctl --failed do not report this oddity. The overall system status is reported to be fine, which is completely incorrect. Thankfully, a kind soul's comment gave the hint. The problem is that you could have certain services stuck in the activating state, which quietly blocks all other queued jobs. So much for the unnecessary fun. Looking further, in my case, it was:
rrs@priyasi:~$ systemctl list-jobs 
JOB  UNIT                           TYPE  STATE  
81   timers.target                  start waiting
85   man-db.timer                   start waiting
88   fstrim.timer                   start waiting
3832 snapper-timeline.service       start waiting
83   snapper-timeline.timer         start waiting
39   systemd-time-wait-sync.service start running
87   logrotate.timer                start waiting
84   debspawn-clear-caches.timer    start waiting
89   plocate-updatedb.timer         start waiting
91   dpkg-db-backup.timer           start waiting
93   e2scrub_all.timer              start waiting
40   time-sync.target               start waiting
86   apt-listbugs.timer             start waiting

13 jobs listed.
13:12                      
That was it. I knew the systemd-timesyncd service, in the past, had given me enough headaches. And so was it this time, just quietly doing it all again.
rrs@priyasi:~$ systemctl status systemd-time-wait-sync.service
  systemd-time-wait-sync.service - Wait Until Kernel Time Synchronized
     Loaded: loaded (/lib/systemd/system/systemd-time-wait-sync.service; enabled; vendor preset>
     Active: activating (start) since Fri 2022-04-22 13:14:25 IST; 1min 38s ago
       Docs: man:systemd-time-wait-sync.service(8)
   Main PID: 11090 (systemd-time-wa)
      Tasks: 1 (limit: 37051)
     Memory: 836.0K
        CPU: 7ms
     CGroup: /system.slice/systemd-time-wait-sync.service
              11090 /lib/systemd/systemd-time-wait-sync

Apr 22 13:14:25 priyasi systemd[1]: Starting Wait Until Kernel Time Synchronized...
Apr 22 13:14:25 priyasi systemd-time-wait-sync[11090]: adjtime state 5 status 40 time Fri 2022->
13:16                   => 3  
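If you run into the same class of problem, a quick way to spot the culprit (a hedged sketch; the unit name is just the one from my case) is to list units stuck in the activating state and, for time sync in particular, ask systemd whether it considers the clock synchronized:
# List everything currently stuck in 'activating'; a oneshot sitting here forever is a red flag.
systemctl list-units --state=activating
# For this particular unit, check whether systemd believes the clock has synchronized yet.
timedatectl show -p NTPSynchronized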
Dear LazyWeb, does anybody know why the systemd-time-wait-sync service would hang indefinitely? I've had identical setups on many machines, on the same network, where the others don't exhibit this problem.
rrs@priyasi:~$ systemctl cat systemd-time-wait-sync.service

...snipped...

[Service]
Type=oneshot
ExecStart=/lib/systemd/systemd-time-wait-sync
TimeoutStartSec=infinity
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target
TimeoutStartSec=infinity is definitely an attribute that shouldn't be shipped in any system service. There are use cases for it, but that should be left for local admins to decide explicitly. Hanging for infinity is not a desired behavior for a system service. In figuring all this out, today I learnt the handy systemctl list-jobs command, which gives the list of active running/blocked/waiting jobs.
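For local admins who want to cap this, a drop-in override is enough. The sketch below is only an illustration (the 300-second value is an arbitrary choice); it keeps the vendor unit untouched and merely bounds how long the job may stay in activating:
# Hedged sketch: create a local override that caps the start timeout.
sudo mkdir -p /etc/systemd/system/systemd-time-wait-sync.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/systemd-time-wait-sync.service.d/timeout.conf
[Service]
TimeoutStartSec=300
EOF
sudo systemctl daemon-reload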

20 April 2022

Ritesh Raj Sarraf: Btrfs Subvol Fix

There surely is a need for better tooling on the BTRFS file system side. While migrating my setup from one machine to another, this is one issue I became aware of only today, when my backup tool (btrbk) complained about it. Following the pointers, I see the below snippet in the btrfs-subvolume manual page.
       A snapshot that was created by send/receive will be read-only, with different last change generation, read-only and with set received_uuid which identifies the subvolume on the
       filesystem that produced the stream. The usecase relies on matching data on both sides. Changing the subvolume to read-write after it has been received requires to reset the
       received_uuid. As this is a notable change and could potentially break the incremental send use case, performing it by btrfs property set requires force if that is really desired by
       user.

           Note
           The safety checks have been implemented in 5.14.2, any subvolumes previously received (with a valid received_uuid) and read-write status may exist and could still lead to
           problems with send/receive. You can use btrfs subvolume show to identify them. Flipping the flags to read-only and back to read-write will reset the received_uuid manually. There
           may exist a convenience tool in the future.
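As the note says, btrfs subvolume show can be used to identify affected subvolumes. A rough shell sketch for scanning for them (assuming the filesystem is mounted at /; paths with spaces are not handled) could be:
# Hedged sketch: report subvolumes that still carry a received_uuid.
for sv in $(sudo btrfs subvolume list / | awk '{print $NF}'); do
    uuid=$(sudo btrfs subvolume show "/$sv" | awk '/Received UUID/ {print $3}')
    [ "$uuid" != "-" ] && echo "$sv has received_uuid $uuid"
done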
Fixing the Received UUID flag meant running the below:
rrs@priyasi:.../spool$ sudo btrfs sub show /
WARNING: the subvolume is read-write and has received_uuid set,
         don't use it for incremental send. Please see section
         'SUBVOLUME FLAGS' in manual page btrfs-subvolume for
         further information.
ROOTVOL
        Name:                   ROOTVOL
        UUID:                   122b0de1-e6f2-6845-aba0-6bf766c16526
        Parent UUID:            -
        Received UUID:          34772967-c709-5146-bf20-898f7dbc2c1f
        Creation time:          2021-12-02 19:59:29 +0530
        Subvolume ID:           256
        Generation:             138473
        Gen at creation:        7
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Send transid:           35245
        Send time:              2021-12-02 19:59:29 +0530
        Receive transid:        34
        Receive time:           2021-12-02 20:13:11 +0530
        Snapshot(s):
                                ROOTVOL/.snapshots/1/snapshot
                                ROOTVOL/.snapshots/2/snapshot
22:40                      


rrs@priyasi:.../spool$ sudo btrfs property set / ro true
WARNING: read-write subvolume with received_uuid, this is bad
22:40                      



rrs@priyasi:.../spool$ sudo btrfs property set -f / ro false
22:40                      



rrs@priyasi:.../spool$ sudo btrfs sub show /
ROOTVOL
        Name:                   ROOTVOL
        UUID:                   122b0de1-e6f2-6845-aba0-6bf766c16526
        Parent UUID:            -
        Received UUID:          -
        Creation time:          2021-12-02 19:59:29 +0530
        Subvolume ID:           256
        Generation:             138473
        Gen at creation:        7
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Send transid:           0
        Send time:              2021-12-02 19:59:29 +0530
        Receive transid:        138480
        Receive time:           2022-04-20 22:40:43 +0530
        Snapshot(s):
                                ROOTVOL/.snapshots/1/snapshot
                                ROOTVOL/.snapshots/2/snapshot
22:40                      
Hoping there won't be surprises in the coming months.

12 February 2022

Ritesh Raj Sarraf: apt-offline 1.8.4

apt-offline 1.8.4 apt-offline version 1.8.4 has been released. This release includes many bug fixes but the important ones are:
  • Better GPG signature handling
  • Support for verifying InRelease files
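For context, the tool's basic offline workflow has stayed the same across releases; a hedged sketch (file paths are just examples) looks like this:
# On the disconnected machine: describe what is needed.
apt-offline set /tmp/apt-offline.sig --update --upgrade
# On any machine with internet access: download it all into a bundle.
apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline-bundle.zip
# Back on the disconnected machine: feed the bundle to APT.
apt-offline install /tmp/apt-offline-bundle.zip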

Changelog
apt-offline (1.8.4-1) unstable; urgency=medium
  [ Debian Janitor ]
  * Update standards version to 4.5.0, no changes needed.
  [ Paul Wise ]
  * Clarify file type in unknown file message
  * Fix typos
  * Remove trailing whitespace
  * Update LICENSE file to match official GNU version
  * Complain when there are no valid keyrings instead of missing keyrings
  * Make all syncrhronised files world readable
  * Fix usage of indefinite articles
  * Only show the APT Offline GUI once in the menu
  * Update out of date URLs
  * Fix date and whitespace issues in the manual page
  * Replace stereotyping with an appropriate word
  * Switch more Python shebangs to Python 3
  * Correct usage of the /tmp/ directory
  * Fix YAML files
  * Fix usage of the log API
  * Make the copying of changelog lines less brittle
  * Do not split keyring paths on whitespace
  [ Ritesh Raj Sarraf ]
  * Drop the redundant import of the apt module.
    Thanks to github/dandelionred
  * Fix deprecation of get_bugs() in debianbts
  * Drop the unused IgnoredBugTypes
  * Set encoding for files when opening
  * Better error logging when apt fails
  * Don't mandate a default option
  * Demote metadata errors to verbose
  * Also log an error message for every failed .deb url
  * Check hard for the url type
  * Check for ascii armored signature files.
    Thanks to David Kalnischkies
  * Add MIME type for InRelease files
  * Drop patch 0001-Drop-the-redundant-import-of-the-apt-module.patch.
    Now part of the 1.8.4 release
  * Prepare release 1.8.3
  * Prepare release 1.8.4
  * debian packaging
    + Bump debhelper compatibility to 13
    + Update install files
  [ Dean Anderson ]
  * [#143] Added support for verifying InRelease files
 -- Ritesh Raj Sarraf <rrs@debian.org>  Sat, 12 Feb 2022 18:52:58 +0530

Resources
  • Tarball and Zip archive for apt-offline are available here
  • Packages should be available in Debian.
  • Development for apt-offline is currently hosted here

11 January 2022

Ritesh Raj Sarraf: ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions with the current generation ThinkPad T14 Gen2 AMD variant.
ThinkPad T14 Gen2 AMD
ThinkPad T14 Gen2 AMD

Lenovo It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I'm not sure if I should call it an on-time delivery. This was a CTO - Customise To Order; so it was nice to get rid of things I really didn't care for/use much. On the other side, it also meant I could save on some power. It also came comparatively cheaper overall.
  • No fingerprint reader
  • No Touch screen
There are still parts where Lenovo could improve, or at least frustrate a customer less. I don't understand why a company would provide a full customization option on their portal while, at the same time, not provide an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses not to show/specify which WiFi adapter one could choose. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 WiFi adapter.

AMD For the first time in my computing life, I'm now using AMD at the core. I was pretty frustrated with annoying Intel graphics bugs, so I decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver does have decent support. So far, on the graphics side of things, I'm glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs and I haven't had to tinker even once so far, in my 30 days of use. On overall system performance, I have not done any benchmarks, nor do I intend to. But on the whole, the system performance is smooth.

Power/Thermal This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn't just this machine, but also the previous T14 Gen1, which has similar problems. I'm not sure if this is a generic ThinkPad problem or an AMD-specific problem. But coming from the Dell XPS 13 9370 (Intel), this does draw a lot more power. So much so that I chose to use hibernation instead. Similarly, on the thermal side, this machine doesn't cool down as well as the Dell XPS Intel one. On an idle machine, its temperatures are comparatively higher. Looking at powertop reports, it shows an average consumption of 10 watts even while idle. I'm hoping these are Linux integration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.

Linux The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches, but with Linux 5.15+ things have considerably improved. And I hope the trend will continue with forthcoming Linux releases. My previous device driver experience with MediaTek wasn't good, but I took the plunge, considering that in the worst scenario I'd have the option to swap the card. There's a lot of marketing about Linux + Intel, but I took a chance with Linux + AMD. There are glitches, but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.

Migration Other than what's mentioned above, I haven't had any serious issues. I may have had some rare occasional hangs, but they've been so infrequent that I haven't spent time investigating them. Upon receiving the machine, my biggest requirement was how to switch my current workstation from the Dell XPS to the Lenovo ThinkPad. I've been using btrfs for some time now, and over the years I've built my own practice on how to structure it. Provisioning [sub]volumes based on use cases is one thing I do, like keeping separate subvols for cache/temporary data, copy-on-write data, swap etc. I wish these things could be simplified, either on the btrfs tooling side or with some different tool on top of it. Below is a filtered list of subvols, created over the years, that were worthy of moving to the new machine.
rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap
18:54                      
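Provisioning such a layout is mostly a couple of commands per subvolume; a hedged sketch (the paths are examples, not my exact layout) of the kind of steps involved:
# Keep cache data in its own subvolume so root snapshots skip it.
sudo btrfs subvolume create /var/cache
# For VM images and swap, disabling copy-on-write (applies to newly created files) avoids heavy fragmentation.
sudo btrfs subvolume create /var/lib/libvirt/images && sudo chattr +C /var/lib/libvirt/images
sudo btrfs subvolume create /swap && sudo chattr +C /swap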

btrfs send/receive This did come in handy, but I sorely missed some features. Maybe they aren't there, or they are and I didn't look closely enough. Over the years, different attributes were set on different subvols, and over time I forget which attribute was added where. But from a migration point of view, it'd be nice to be able to say, "take this volume and carry it with all its attributes". I didn't find that functionality in send/receive. There's get/set-property, which I noticed later, but by then it was too late. So some sort of tooling, ideally something like btrfs migrate or some such, would be nicer. In the file system world, we already have nice tools to take care of similar scenarios; with rsync, for example, I can request it to carry all file attributes. Also, IIRC, send/receive works only on ro volumes. So there's more work one needs to do:
  1. create ro vol
  2. send
  3. receive
  4. don t forget to set rw property
  5. And then somehow find out the other properties set on each individual subvol and [re]apply the same on the destination (a rough sketch of this follows the renaming step below)
I wish all of this were condensed into a sub-command. For my own sake, for this migration, the steps used were:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk); do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
Migrate/snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
Migrate/snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
Migrate/snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
Migrate/snapshot-home_rrs_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
Migrate/snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
Migrate/snapshot-var_lib_machines
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
Migrate/snapshot-var_lib_machines_DebianSidTemplate
..... snipped .....
And then, follow-up with:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9); do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ROOTVOL
ERROR: Could not open: No such file or directory
etc
snapshot_disk-tmp
snapshot-home_foo_.cache
snapshot-home_rrs
snapshot-var_spool_news
snapshot-var_lib_machines
snapshot-var_lib_machines_DebianSidTemplate
snapshot-var_lib_machines_DebSidArmhf
snapshot-var_lib_machines_DebianJessieTemplate
snapshot-var_tmp
snapshot-var_log
snapshot-var_cache
snapshot-disk-tmp
And then finally, renaming everything to match properly:
user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp
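And, as mentioned in the step list above, the remaining bit was re-applying per-subvolume properties on the destination. A rough sketch of how that could be scripted (the subvolume names and the property list here are just examples; run it before the renaming, or adjust the destination paths) might be:
# Hedged sketch: copy selected btrfs properties from the source snapshots to the received copies.
for sv in snapshot-var_log snapshot-var_cache; do
    for prop in ro compression; do
        val=$(sudo btrfs property get "/media/user/TOSHIBA/Migrate/$sv" $prop | cut -d= -f2)
        [ -n "$val" ] && sudo btrfs property set "/media/user/BTRFSROOT/$sv" $prop "$val"
    done
done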

snapper Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything got migrated, the only thing that fell apart was snapper. It just wouldn't start/run properly. The funny thing is that I just removed the snapper configs and reinitialized with the exact same config again, and voila, snapper was happy.

Conclusion That was pretty much it. With the above done, I then also migrated /boot and chrooted in to install the boot loader. At some point, I'd like to explore other boot options, but given that that is such a non-essential task, it is low on the list. The good part was that I booted into my new machine with my exact workstation setup as it was, all the way down to the user cache and the desktop session. So it was nice on that part. But I surely think there's room for a better migration experience here. If not directly as btrfs migrate, then maybe as an independent tool. The problem is that such a tool is going to be used once in years, so I didn't find the motivation to write one. But this surely would be a good use case for the distribution vendors.

9 October 2021

Ritesh Raj Sarraf: Lotus to Lily

The Lotus story so far My very first experience with water flowering plants was pretty good. I learnt a good deal of things: setting up the pond, germinating the lotus seeds, preparing the right soil, witnessing the growth of the lotus plant, and the fish ecosystem to take care of the pond. Overall, a lot of things learnt. But I couldn't succeed in getting the lotus flower, for a good many reasons. The granite container developed some leakage, which I had to fix by emptying it, which might have caused some shock to the lotus. But more than that, in my understanding, the reason for not being able to flower the lotus was the amount of sunlight. From what I have learned, these plants need a minimum of 6-8 hrs of sunlight to really give you the flowering result, whereas my pond was set up on the ground with hardly 3-4 hrs of sun. And that too, with all the plants growing, resulted in indirect sunlight.

Lotus to Lily For my new setup, I chose a large oval container. And this one I placed on my terrace, carefully choosing a spot where it'd get 6-8 hrs of very bright sun on usual days. Other than that, the rest of the setup is pretty similar to my previous setup in the garden: guppies, solar water fountain etc.
Initial lily pond setup
Initial lily pond setup
The good thing about the terrace is that the setup gets an ample amount of sun. You can see that in the picture above, with the amount of algae that has formed; something that is vital for the plant's ecosystem. I must thank my wonderful neighbor who kindly shared a sapling from their lily plant. They had already had success with flowering the lily. So I had high hopes to see the day come when I'd be happy to write down my experience in this blog post. A lot of patience is needed, though. I got the lily some time in January this year. And it blossomed now, in October. So, here's me sharing my happiness, in the particular order in which I documented the process.
Monday morning greeted with a blossomed lily
Monday morning greeted with a blossomed lily
Lily Blossom Closeup
Lily Blossom Closeup
Beautiful water reflection
Beautiful water reflection

Dawn to Dusk The other thing that I learned in this whole lily episode is that the flower goes back to sleep at dusk, and back to flowering again at dawn. There's so much to learn in our surroundings, if only you spare some time for the little things of mother nature.
Lily status at dusk
Lily status at dusk
Lily the next day
Lily the next day
Not sure how long this phenomenon will last, but overall, witnessing this whole process has been mesmerizing. This past week has been great.

3 October 2021

Ritesh Raj Sarraf: Human Society

In my past, I've had experiences that have had me thinking. My experiences have been mostly in the South Asian Indian subcontinent, so it may not be fair to generalize them.

7 July 2021

Ritesh Raj Sarraf: Insect Camouflage Plant

I was quite impressed by the ability of this insect, yet to be identified. The way it has camouflaged itself is mesmerizing. I'll let the video do the talking, as this one is going to be difficult to express in words.

29 June 2021

Ritesh Raj Sarraf: Plant Territorial Behavior

This blog post is about my observations of some of the plants in my home garden. While I'm still a n00b on the subject, these notes are my observations and experiences over days, weeks and months. Thankfully, with the ability to take frequent pictures, it has been easy to do an assessment and generate a report of some of these amazing behaviors of plants, in an easy timeline order; all thanks to the embedded EXIF data. This has very helpfully allowed me to record my otherwise minor observations in great detail, and to make some sense out of them by correlating the data over time. It is an emotional experience. You see, plants are amazing. When I sow a sapling, water it, feed it, watch it grow, prune it, medicate it, and what not, I build up affection towards it. Though, at the same time, to me it is a strict relationship, not too attached; as in, it doesn't hurt to uproot a plant if there is a good reason. But still, I find some sort of association with it. With plants around, it feels like I have a lot of lives around me. All prospering, communicating, sharing. And communicate they do. What is needed is just the right language to observe and absorb their signals and decipher what they are trying to say.

Devastation How in this world, when you are caring for your plants, can it transform:

From This
Healthy Mulberry Plant
Healthy Mulberry Plant
Healthy Mulberry Plant
Healthy Mulberry Plant
Healthy Bael Plant
Healthy Bael Plant

To This
Dead Mulberry Plant
Dead Mulberry Plant
Very Sick Bael Plant
Very Sick Bael Plant
With emotions involved, this can be an unpleasant experience. Bael is a dear plant to me. The plant as a whole has religious value (Shiva). As well, its fruits have lots of health benefits, especially for the intestines. Its leaves have a lot of medicinal properties. When I planted the Bael, there were a lot of emotions that went along. On the other hand, the Mulberry is something I put in with a lot of enthusiasm. Mulberries are now rare to find, especially in urban locations. For one, they have a very short shelf life; but more than that, with the way lifestyles are heading, I was always worried whether my children would ever have a day to see and taste these fruits. The mulberry that I planted yielded twice; once very soon after I had planted it, and a second time before it died. In fact, it died during its second yield phase. It was quite saddening to see that happen, and it made me wonder why it happened. I had been caring for the plants fairly well. Watering them on time, feeding them the right amount of nutrients. They were getting a good amount of sun. But still their health was deteriorating. And then the demise of the Mulberry. Many thoughts hit my mind. I consulted the claimed experts in the domain, the maali, the gardener. I got a very vague answer: there must be termites in the soil. It didn't make much sense to me. I mean, if there were termites, they'd hit on day one. They wouldn't sleep for months and just wake up one fine day and start attacking the roots of particular plants; not all. I wasn't convinced by the termite theory; but still, giving the expert the benefit of the doubt, I went with his word. When my mulberry was dead, I dug up its roots. Looking for proof, to see if there were any termites, I uprooted it. But I couldn't find any trace of termites. And the plant next to it was perfectly healthy and blossoming. So I was convinced that it wasn't the termites but something else. But what else? I still didn't have an answer.

Thinking The Corona pandemic had set in and there was a lot to worry about - or nothing at all, if you changed the perspective. With plants around in my home, and our close engagement with them, and the helplessness that I felt after seeking help from the experts, it was time again to build up some knowledge on the subject. But how? How do you go about a subject you have not much clue about? A subject which has always been around in the surroundings, but to which I have very seldom dedicated focused thought? To be honest, the initial thought of diving into the subject left me clueless. I had no idea where to begin. But, as has been my past history, I chose to take it up as a curiosity. I gathered some books and skimmed through a couple of pages. The majority of the books I got hold of were DIYs and How to do Home Gardening types. They were a decent introduction for a novice, but my topic of curiosity was different. Thankfully, with the Internet, and YouTube in particular, a lot of good stuff is available as documentary videos. While going through some, I came across a video which mentioned carnivore plants. Like, for example, this one.
Carnivore Plant
Carnivore Plant
This got me thinking that something similar could possibly have decided the fate of my Mulberry plant. But who did it? And how to dive further into this suspicion? And most of all, was that possibility actually the reality, or was I just hitting in the dark?

Beginning To put things in perspective, here's how it started. When we moved into our home, the gardener put in a couple of plants as stock, as part of the property handover. Now, I don't exactly recollect the name of the plants that came as stock, neither in English nor in Hindi; but at my neighbor's place, the plant is still there. Here are some pictures of this beauty. But don't just go by the looks, as looks can be deceiving.
Dominating Plant
Dominating Plant
Dominating Plant
Dominating Plant
Dominating Plant
Dominating Plant
We hadn't put any serious thought into the plants we were offered by the gardener. After all, we had never thought of any mishap either.

Plants we planted Apart from what was offered by the builder/gardener as part of the property handover, over the next 6 months after we moved in, I planted 3 tree-type plants.
  1. Mulberry
  2. Bael
  3. Rudraksha
The Mulberry, as I have described so far, died a tragic death. Bael, on the other hand, fought hard. But very little did we know that the plant was struggling in its fight. Our impression was that we must have been given a bad breed of the plant. Or maybe the termite theory had some truth. For the Rudraksha plant, the growth was slow. This was the very first time I had seen a Rudraksha plant, so I had no clue of what its growth rate could be and what to expect out of it. I wasn't sure if the local climate suited the plant. A quick search showed no objections to the plant in the local climate, but that was it. So my theory has been to put in the plant and observe. Here's what my Rudraksha plant looked like during the initial days/weeks of its settlement.
Rudraksha Plant
Rudraksha Plant

The Hint Days passed, and so on. Not much had progressed in gathering information. The plant's health was as usual, deteriorating at a slow pace. One day, thinking of the documentaries I had been watching, what hit my mind was plant behavior.
  • Plants can be Carnivores.
  • Plants can be Aggressive.
  • Plants can be Invaders.
  • Plants can be Territorial.
There are many plants whose aggression can be witnessed with bare human eyes. Like creepers. Some of them are good at spreading tentacles, grabbing onto other plants' stems and branches and spreading above them. This was my hint from the documentaries. That's one of the many ways plants establish their dominance. And that is what hit my mind: if plants are aggressive out in the open, then underneath the soil they should have similar behavior. I mean, what we see as humans is just a part of the actual plant. More than half of the actual plant is usually underneath the soil, in most plants. So there's a high chance of getting more information out, if you dig the soil and look at the roots.

The Digging As I mentioned earlier, I do establish bindings, emotions and attachments. But not much usually comes in the way of curiosity. To dig further into the theory that the problem was elsewhere, within the plants' ecosystem, we needed to pick another subject - a plant. And the plant we chose was the one planted in the initial offering to us, when we moved into our home. It was the same plant breed which was neighboring all our newly planted trees: Rudraksha, Mulberry and Bael. If you look closely into the pictures above of these plants, you'll notice the stem of another plant, the Territorial Dominator, close by to these 3 plants. That's because the gardener put in a good number of them to get his action item complete. So we chose to dig out and uproot one of those plants to start with. Now, while they may look gentle on the outside, with nice red colored tiny flowers, these plants were giants underneath. Their roots were huge. It took some sweat-shedding to remove them single-handedly.
Dominating Plant Uprooted
Dominating Plant Uprooted
Dominating Plant Uprooted
Dominating Plant Uprooted
Dominating Plant Uprooted
Dominating Plant Uprooted
Dominating Plant Uprooted
Dominating Plant Uprooted

Today is brighter

Bael I'll let the pictures do the initial talking today.
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
Healthy Bael
The above are pictures of the same Bael plant, which had struggled to live for almost 14 months. Back then, this plant was starved of its resources. It was dying a slow death out of starvation. After we uprooted the other dominant species, the Bael has recovered and regained its charm. In the pictures above of the Bael plant, you can clearly make out the difference in its stem. The dark colored part is from its months of struggle, while the bright green is from now, when it is well nourished and has regained its health.

Mulberry As for the Mulberry, I couldn't save it. But I later managed to get another one. It turns out I didn't take good, full-length pictures of the new mulberry when I planted it. The only picture I have is this:
Second Mulberry Plant
Second Mulberry Plant
I recollect that when I brought it home, it was around 1 - 1.5 feet in length. This is where I have it today: majestically standing, 12 feet and counting.
Second Mulberry Plant 7 feet tall
Second Mulberry Plant 7 feet tall

Rudraksha Then:
Rudraksha Plant
Rudraksha Plant
And now, I feel quite happy about it:
Rudraksha Plant
Rudraksha Plant
Rudraksha Plant
Rudraksha Plant
Rudraksha Plant
Rudraksha Plant
All these plants are on the very same soil with the very same caretaker. What has changed is my experience and learning.

Plant Co-Existence Plant co-existence is a difficult topic. My knowledge of plants is very limited in general, and co-existence is something tricky, unexplored, and at times invisible (when underneath the soil). So it is a difficult topic. So far, what I've learnt comes purely from observations, experiences and hints from the documentaries. There surely are many, many plants that co-exist very well. A good example is my Bael plant itself, which is healthily co-sharing its space with 2 other Croton plants. The same goes for the Rudraksha, which has close-by neighbors in an Adenium and an Allamanda. The plant world is mesmerizing. How they behave, communicate, and many, many more signs. There's so much to observe, learn, explore and document. I hope to have more such observations and experiences to share.

24 May 2021

Ritesh Raj Sarraf: Kget Goodness

Why is it so hard to have a proper download manager in today's day? We had them in the previous decade. Or do the tech giants self-proclaim that the world lives only in their cloud? At one point, there used to be great download managers for all major web browsers, either built-in or external. Then came the latest trend with Chrome and Firefox, where they make it difficult to have an external download manager work properly. On either one's extension store, I find it difficult to see a proper download manager. And I wonder what Google was consuming when making Google Takeout. They provide you with a specially crafted link for the downloads, and if for any reason your network connection is lost, you are forced to redownload your takeout again. And with the pathetic download manager in Chrome and Firefox, an average Joe user would run in loops. THANK YOU GOOG. I can understand the reason behind making the links time-bound and invalidating them as soon as the time expires. But enlighten your PMs that the world is not even: not everyone has the same internet bandwidth. And admit that the download manager in your browser is crap. And then devise a better workflow and product.

KGET Thankfully, there's Kget in the KDE Suite. While there's insanity elsewhere, luckily the kget tool can still do nice things. 'nuff said.
KGET to the rescue
KGET to the rescue

22 May 2021

Ritesh Raj Sarraf: Setting Up a Secure Webapp

As a person who prefers full access to data in the simplest format, while at the same time having it useful with the latest technologies, my quest for trying things out is an ongoing activity. Earlier, I blogged about my need to collate news feeds in a simple format, readily accessible offline, while still being useful and aligned with the modern paradigm. In today's age, the other common aspect of our life is the digitization of moments. With the advent of great technology and affordable economics, the world now has access to great devices to capture moments in digital form. Most people, these days, are equipped with smart devices, like mobile phones, that come with pretty good image capturing hardware. Our lives, our societies, how we interact; a lot of it is now built around the assumption of smart devices and digital services. A lot of good things have come of it. We are now able to send messages to people, securely, in a matter of seconds. We are now able to capture moments which we'd otherwise often miss; all thanks to devices like smart mobile phones that most of us carry along almost everywhere. In this regard, we are quite indebted to the technology revolution that has improved lifestyles all over. Of course, like every coin has a flip side, so too does too much obsession with digital life have its drawbacks. A simple reminder to self should always be about what a human life is designed for, and to try to live by its rules.

Moments Given the tools' general availability, we are all more used to capturing moments. Once upon a time, taking a photo meant going to a photo studio. Then came a generation where we'd have personal devices with roll films, with which we captured the moments, and then got them processed by a photo developing service provider. And that is when we'd know which moments came out correct and which ones were spoilt. In such cases, there was always the "wish I could have that moment again". Thankfully, now, in our current generation, we have the liberty to take pictures and validate them almost immediately. We also have the flexibility to take multiple shots of the moment and do the filtering later. There are also many intelligent and invasive services, mostly provided for free, to help you organize these moments; at the small cost of keeping an eye on your life activity. In summary, mankind now generates lots and lots of data. So much that even mammoths like Google are now forced to make a call on whether it is more profitable to sniff user activity while providing them a service for free, or to ask people to pay instead.

Privacy Many of us are wary of the amount of personal data we generate, which is our asset, and of how the big tech giants want a piece of it in the name of free services. And in such a quest, free aside, there's always a lookout for privacy-savvy tools that can help us draw a bold line between Public and Private.

Google Photos Google Photos has been a great tool. The way it organizes, siphons and presents your data is simply amazing. No good would the photos taken have been if they weren't easily searchable, organized, annotated and presented. But Google Photos is proprietary and invasive. The amount of information they have access to should be a concern to all people. For services that invade so much, the world needs Free Services.

PhotoPrism I only came across this project around a month ago, while the project has been around since 2019. So far, in my exploration, this is one of the best Free Software tools to manage and organize my digital photos. The other favorite tool that I regularly depend on is Digikam, and it still is a gem. In a gist, PhotoPrism aims to be the equivalent of Google Photos for privacy-savvy people. As of date, it has a decent list of features available. And for some of the missing ones, I've come up with a fairly okay workflow with other tools, which is one of the reasons for this blog post. PhotoPrism is a web app, written in the Go programming language. Its layout and workflow as a photo management application are similar to Google Photos. It is very performant compared to other applications. Since it is a Progressive Web App, access through a laptop, tablet or mobile is almost the same. It uses Google TensorFlow for some of its features, and thus, in some regards, it feels like using Google Photos. Some shortcomings and workarounds:
  • Facial Recognition: To date, facial recognition is a planned feature. But this was easy to tackle, given that Digikam has pretty solid facial recognition. So I use Digikam to detect, recognize and annotate faces, and then PhotoPrism is happy enough to use those annotations and present the relevant data.
  • HTTP Web App: The upstream project has done a good job of making use of Docker container technology in presenting this web app solution. A software solution that is the equivalent of Google Photos does need a heavyweight config. In free software, it is all about reusing available tools. PhotoPrism makes use of tools like MySQL, Vue.js, TensorFlow, GoLang and Go libraries, and strives to provide a single-package solution, all thanks to Docker containers.
In its current offering, PhotoPrism runs as an HTTP web app. I wanted to have an added layer of security on top of it, and thus run an nginx reverse proxy in front of it. Along with that, I run the proxy service over HTTPS, thus making all traffic from clients to the proxy encrypted. I also wanted an added layer of HTTP auth on top, so I explored some options and finally settled on http-auth-digest. Also, in its current implementation, PhotoPrism doesn't have a strong notion of normal and private photos in its data organization. I wanted normal photos available under a standard auth realm and private photos under a different realm, and along with the different realm, I also wanted some added security directives for it. So far, it looks like I've put together a decent solution with the help of nginx.
  • First of all, since the port from the host is forwarded to the docker instance, that needed to be controlled. Instead of the default of listening on all interfaces, I changed it to loopback only, because my primary and only interface is going to be the nginx reverse proxy (see the sketch after this list).
  • Setup nginx with a self-signed certificate to have all communication encrypted.
  • Setup nginx as a reverse proxy to talk to PhotoPrism.
    server {
       listen 80;
       listen [::]:80;
       server_name lenovo;
       return 302 https://$host$request_uri;
       rewrite ^ https://$http_host$request_uri? permanent;    # force redirect http to https
       server_tokens off;
    }

    server {
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;
        ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
        ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
        server_tokens off;
        server_name lenovo;
        client_max_body_size 500M;
        auth_digest_expires 900s;
        auth_digest_evasion_time 5s;
        auth_digest_replays 500;

        location /private {
            auth_digest 'abc';
            auth_digest_user_file /etc/nginx/htdigest;
            auth_digest_expires 900s;
            auth_digest_evasion_time 5s;
            auth_digest_replays 500;
            proxy_pass http://localhost:2342/private;
        }

        location /discover {
            auth_digest 'abc';
            auth_digest_user_file /etc/nginx/htdigest;
            auth_digest_expires 900s;
            auth_digest_evasion_time 5s;
            auth_digest_replays 500;
            proxy_pass http://localhost:2342/discover;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_buffering off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://localhost:2342;

            auth_basic "Priv";
            auth_basic_user_file /etc/nginx/htpasswd;
        }
    }
  • nginx packaging: I was so happy to see the simplicity of the nginx packaging in Debian. Because http-auth-digest is not upstream, it needs to be pulled in separately and compiled. It was just a matter of putting the module in the right modules location, which, as structured in the packaging, falls under the debian/ packaging sub-folder, so that any future upgrades will be quite easy to manage.
  • http-auth-digest: I'd love to see the http-auth-digest module become part of the upstream package. While I'm a web n00b, I felt this module was perfect for my use case. From what I've understood, set up and tested so far, this module fills in all my requirements; it is more or less session management.
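The supporting pieces around the nginx config above are small; here is a hedged sketch of them (the container invocation is stripped down to just the port binding, and the user name is only an example):
# Publish the PhotoPrism container port on loopback only, so nginx stays the sole entry point.
docker run -d --name photoprism -p 127.0.0.1:2342:2342 photoprism/photoprism
# The snakeoil certificate referenced in the config comes from Debian's ssl-cert package.
sudo apt install ssl-cert && sudo make-ssl-cert generate-default-snakeoil
# Credential files for the two realms (htdigest/htpasswd are shipped in apache2-utils).
sudo htdigest -c /etc/nginx/htdigest 'abc' rrs
sudo htpasswd -c /etc/nginx/htpasswd rrs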
With the combined setup in place, https://host is authenticated with one set of credentials, while https://host/discover and /private are covered by a different set of credentials and policies. While this will continue to be an ongoing effort, with audits of the services that I build, for now I feel it is in decent enough shape that I can use it as my daily driver. The end result is:
nginx auth prompt
nginx auth prompt
PhotoPrism Photo List
PhotoPrism Photo List
PhotoPrism Menu
PhotoPrism Menu

Updates
  • Things can really get frustrating at times. These days, it is Google that contributes to it. https://bugs.chromium.org/p/chromium/issues/detail?id=544244
    • It is now fair to say that Google is big enough to rewrite all the standards. Or at least break them. Or, best of all, be arrogant about it. This bug report is a fine example of the other ways it could have been dealt with.

19 April 2021

Ritesh Raj Sarraf: Catching Up Your Sources

I've mostly had the preference of controlling my data rather than depending on someone else. That's one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued/locked by a single entity. If I had the resources, I'd prefer all digital data to be broken down to its simplest form for storage, like the email format, and to empower the user with it, i.e. their data. Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information, which is shared in return for the free service, is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication So first, we had email and Usenet. As I mentioned above, email was designed with fine intentions. Intentions that make it stand even today, independently. But not everything, back then, was that great either. For example, instant messaging was very closed and centralised then too. Things like ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs. Not much has changed in the current day either. We now have domination by Facebook Messenger, Google Whatever-the-new-marketing-term-they-introduce, WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized. Over all this time, I'm yet to see a product come up with good (business) intentions, to really empower the end user. In this information age, the most invaluable data is user activity. That's the one piece of data everyone is after. If you decline to share that bit of free data in exchange for the free services, mind you, those free services like Facebook, Google, Instagram, WhatsApp, Truecaller, Twitter; none of them would come to you at all. Try it out. So the reality is that while you may not be valuing the data you offer in exchange correctly, there's a lot that is reaped from it. But still, I think each user has (and should have) the freedom to opt in with these tech giants and give them their personal bit in return for free services. That is a decent barter deal. And it is a choice that one is free to make.

Retaining my data I'm fond of keeping an archive folder in my mailbox. A folder that holds significant events, usually in the form of an email, if documented. Over the years, I chose to resort to the email format because I felt it was more reliable in the longer term than any other format. The next best would be plain text. In my lifetime, I have learnt a lot from the internet, so it is natural that my preference has been with it. Mailing lists, IRC, HOWTOs, guides, blog posts; all have helped. And over the years, I've come across hundreds of such pieces of content that I'd always like to preserve. Now there are multiple ways of preserving data. Like, for example, the big tech giants. In most usual cases, your data, for your lifetime, should be fine with a tech giant. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine if they host their data with Microsoft, Google etc., as long as they abide by their policies. There's also the catch of alignment. As the user, you should ensure you align (and transition) with the product offerings of your service provider. Otherwise, what may look constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google Feed service too. Maybe Google Photos in the coming decade, just like Google Picasa in the previous (or current) decade.

History what is On the topic of retaining information, let's take a small detour. I still admire our ancestors. I don't know what went on in their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc.; but one thing's for sure, they were able to leave behind a fine means of communication. They are all gone, but a large number of those events are evident through the creations that they left. Some of those events have been strong enough that later rulers/invaders have had a tough time trying to wipe them out of history. Remember, history is usually not the truth, but the statement to be believed, as told by the teller. And the teller is usually the survivor, or the winner, you may call it. But still, the information retention techniques were better. I haven't visited, but I admire whosoever built the Kailasa Temple, Ellora, without which we'd be made to believe who knows what by all the invaders and rulers of the region. The majestic standing of the temple is a fine example of the history and the events that have occurred in the past.
Ellora Temple - The majestic carving believed to be carved out of a single stone
Ellora Temple - The majestic carving believed to be carved out of a single stone
Dominance has the power to rewrite history, and unfortunately that's true and it has done its part. It is just that in a mere human's defined lifetime, it is not possible to witness the transition from current to history, and say that I was there then and I'm here now, and this is not the reality. And if not dominance, there's always the other bit, hearsay. With it, you can always put anything up for dispute, because there's no way one can go back in time and produce fine evidence. There's also a part about religion. Religion can be highly sentimental. And religion can be a solid way to get an agenda going. For example, in India - a country which today is constitutionally a secular country - there have been multiple attempts to discard the belief, claiming that never ever did a thing called the Ramayana exist. That the Rama Setu, nicely reworded as Adam's Bridge by whosoever, is a mere result of science. Now Rama, or Hanumana, or Ravana, or Valmiki, aren't going to come over and prove whether that is true or false. So such subjects serve as a solid base to get an agenda going. And probably we've even succeeded in proving and believing that there was never an event like the Ramayana or the Mahabharata. Nor was there ever any empire other than the Moghul or the British Empire. But yes, whosoever made the Ellora Temple, or the many, many more such creations, did a fine job of making a dent for the future, to know what the history could possibly also be.

Enough of the drift So, in my opinion, having events documented is important. It'd be nice to have skills documented too, so that they can be passed over generations, but that's a debatable topic. But events, I believe, should be documented. And documented in the best possible ways, so that their existence is not diminished. Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests, which can make excuses under the pretext of compliance and conformance. So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today's age. If I really had the resources, I'd not rely on the digital age. This age has no guarantee of retaining and recording information in any reliable manner. Instead, it is mostly just junk, which is manipulative and changeable, conditionally.

Email and RSS So for my communication, I prefer email over any other means. That doesn't mean I don't use the current trends; I do. But this blog is mostly about penning my desires, and the desire is to have communication in the email format. Such is the case that for information useful over the internet, I crave to have it formatted as email for archival. RSS feeds are my most common mode of keeping track of information I care about. Not all that I care for is available as RSS feeds, but hey, such is life. And adaptability is okay. But my preference is still RSS. So I consume RSS feeds through a fine piece of software called feed2imap. A software that fits my bill fairly well. feed2imap is:
  • An rss feed news aggregator
  • Pulls and extracts news feeds in the form of an email
  • Can push the converted email over pop/imap
  • Can convert all image content to email mime attachment
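For reference, a minimal feed2imap configuration is roughly of the following shape (a hedged sketch; the feed name, URL and maildir path are just examples, and include-images is the option I understand turns images into MIME attachments):
# ~/.feed2imaprc (YAML)
include-images: true
feeds:
  - name: planet-debian
    url: https://planet.debian.org/rss20.xml
    target: maildir:///home/rrs/Mail/Feeds/planet-debian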
In a gist, it makes the online content available to me offline in the most admired email format. In my mailbox, in today's day, my preferred email client is Evolution. It does a good job of dealing with such emails (RSS feed items). An example of accessing an RSS feed item through it is below.
RSS News Item through Evolution
RSS News Item through Evolution
The good part is that my actual data is always independent of such MUAs. Tomorrow, as technology, trends and economics evolve, something new will come as a replacement, but my data will still be mine.

26 February 2021

Ritesh Raj Sarraf: Wayland KDE X11

KDE Impressions These days, I often hear a lot about Wayland. And how much effort is being put into it; not just by the embedded world but also the usual desktop systems, namely KDE and GNOME. In the recent past, I switched back to KDE and have been (very) happy about the switch. Even though the KDE 4 (and initial KDE 5) debacle burnt many, coming back to a usable KDE desktop is always a delight. It makes me feel at home with the elegance and, at the same time, the flexibility it provides. It feels so nice to draft this blog article from Kwrite + VI Input Mode. Thanks to the great work of the Debian KDE Team, and Norbert Preining in particular, who has helped bring very up-to-date KDE packages into Debian. Right now, I'm on a Plasma 5.21.1 desktop, which is recent by all standards.

Wayland Almost all places in the Linux world these days are busy integrating Wayland as the primary display service. I'm not sure what the current status on the GNOME side is, but I definitely keep trying KDE + Wayland with every release. I keep trying with every release because it still is not ready for daily use. And it makes me get back to X11, no matter how dated some may call it. Fact is, X11 still shines for me as an end user. Glitches with Wayland still are (based on this week's test on Plasma 5.21.1):
  • Horrible performance compared to X11
  • Very crashy, especially when hotplugging a secondary display. Plasma would just crash. X11 is very resilient to such things; part of the reason, I think, is the age of the codebase.
  • Many, many applications still need to be fixed for Wayland. Or Wayland needs to accommodate them in some way. XWayland does not really feel like the answer.
And while KDE keeps insisting users switch to Wayland, as that's where all the new enhancements and fixes go in, someone like me still needs to stick to X11 for the time being. So to get my shiny new LG 27" 4K monitor (3840x2160 60.00*+) to work without too many glitches, I had to live with an alias:
$ alias | grep xrandr
alias rrs_xrandr_lg='xrandr --output DP-1 --mode 3840x2160 --scale .75x.75'
18:31                    

Plasma 5.21 On the brighter side, the Plasma 5.21.1 release brings some nice enhancements in other areas.
  • I'm now able to make use of tighter integration with systemd/cgroups, with better organization and management of processes overall (a note on enabling this follows the list below).
  • The new Plasma theme, Breeze Twilight, is a good blend of Light + Dark.
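If I recall correctly, the systemd-managed Plasma session is still opt-in at this point. Enabling it should be a one-liner along these lines; this is a sketch from memory of the Plasma 5.21 release notes, not something taken from this post:
$ kwriteconfig5 --file startkderc --group General --key systemdBoot true
$ kreadconfig5 --file startkderc --group General --key systemdBoot
true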
I also appreciate the work put in by Michail Vourlakos. The KDE project is lucky to have a developer/designer like him. His vision of, and work on, the KDE desktop go well beyond anything I could capture in writing.
$ usystemctl status plasma-plasmashell.service 
  plasma-plasmashell.service - KDE Plasma Workspace
     Loaded: loaded (/usr/lib/systemd/user/plasma-plasmashell.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-02-26 18:34:23 IST; 13s ago
   Main PID: 501806 (plasmashell)
      Tasks: 21 (limit: 18821)
     Memory: 759.8M
        CPU: 13.706s
     CGroup: /user.slice/user-1000.slice/user@1000.service/session.slice/plasma-plasmashell.service
              501806 /usr/bin/plasmashell --no-respawn
             
Feb 26 18:35:00 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:21 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:49 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:57 priyasi plasmashell[501806]: qml: recreating buttons
18:36                  

OBS - Open Build Service I should also thank the openSUSE folks for the OBS work. It has enabled the close equivalent (or better, in my experience) of PPAs for Debian. And that is what has enabled developers like Norbert to easily and quickly deliver the entire KDE suite.

OBS - Some detail Christian asked for some more details on the OBS side of things, from my point of view. I'm updating this article with it because the comment system may not always be reliable and I hate losing content. Having used OBS myself, and seeing others in the Debian community make use of it, I surely think we as a project should consider making use of OBS. Given that OBS is Free Software, it is a perfect fit for Debian. GitLab is another example of what we've made available in Debian. OBS is divided into multiple parts:
  • OBS Server
  • OBS DoD service
  • OBS Publisher
  • OBS Workers
  • OBS Warden
  • OBS Rep Server
For every Debian release I care about, I add an OBS project per release. So I have OBS projects for: Sid, Bullseye, Buster, Jessie. Now, say you have a package, 'foo'. You prep your package and enable all the releases that you want the package built for. The same package then gets built, in separate clean environments, for every release I mentioned above. You don't have to manually trigger the build for every release/architecture. You add the releases (as projects) in OBS, set their supported architectures, and then add those enabled release projects as build targets for your package. Every build involves:
  • Creating a new chroot for each build
  • Building the package
Builds can be scattered across multiple hosts, known as workers in OBS terminology. Your workers are independent machine entities, supporting different architectures. The machines can be bare-metal ones, VMs, even containers. So this allows for very nice scale-in and scale-out. There may be auto-scaling too, but that is something worth investigating. Think of things like cross-architecture builds. Let's assume the cloud vendors decide to donate resources to the Debian project. We could enable OBS worker instances on the respective clouds (different architectures) and plug them into the master OBS instance that Debian hosts. Fully distributed. Similarly, big hardware vendors willing to donate compute resources could house them in their premises, and Debian could just easily establish a connection to them. All of this is just a TCP connection away. So when I look at the features of OBS, from the point of view of Debian, I like it more. Extensibility won't be an issue. Supporting a new Debian release would just be a matter of bootstrapping that release as a project in OBS, and then all is done. Setting up the target release project is a one-time job, and then everyone can leverage it. The PPA was a long-craved feature missing in Debian, in my opinion. OBS not only fills that gap but also extends it in a very easy way. Andrew Lee put in a nice video presentation about the same at DebConf 20.
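From the packager's side, the day-to-day interaction happens through the osc command line client. A rough sketch of the workflow described above would be something like this (the project and file names are placeholders, not taken from my actual setup):
$ osc checkout home:rrs/foo && cd home:rrs/foo
$ osc add foo_1.0-1.dsc foo_1.0.orig.tar.gz foo_1.0-1.debian.tar.xz
$ osc commit -m "foo 1.0-1"   # OBS now builds it for every enabled release/architecture
$ osc results                 # per-repository, per-architecture build status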

3 October 2020

Ritesh Raj Sarraf: First Telescope

Curiosity I guess this would be common to most of us. While I grew up, right from childhood itself, the sky was always an intriguing view. The Stars, the Moon, the Eclipses; all were fascinating. As a child, in my region, religion and culture, the mythology also built up stories around them. Lunar Eclipses have a story of their own. During Solar Eclipses, parents still insist that we do not go out, and that we be done with eating before/after the eclipse. Then there's the Hindu Astrology part, which claims its own theories and drags in mythology along. For example, you'll still find Hindu Astrology making recommendations to follow certain practices regarding the planets, to get auspicious personal results. As far as I know, other religions too have similar beliefs about the planets. As children, we are told to address the Moon as an Uncle. There's also a rhyme around it, that many of us must have heard. And if you look at our god, Lord Mahadev, he's got a crescent on his head.
Lord Mahadev

Reality Fast-forward to today: as I grew, so did some of my understanding. It is fascinating how much understanding mankind has achieved of our surroundings. You could go through the documentaries on Mars Exploration, for example, to see how the rovers are providing invaluable data. As a mere individual, there's a limit to what one can achieve. But the questions flow in freely.
  • Is there life beyond us?
  • What's out there in the sky?
  • Why is all this the way it is?

Hobby The very first step, for me, for every such curiosity, has been to do the ground work with the resources I have; to study the subject. I have done this all my life. For example, I started into the Software domain as: A curiosity => A Hobby => A profession. The same was the case with some of the other hobbies, equally difficult as Astronomy, that I developed a liking for. I just did the ground work, studied those topics, and then applied the knowledge to further improve it and build up some experience. And star gazing came in no different. As a complete noob, I had to start with the A B C of the subject of Astronomy. Familiarize myself with the usual terms. And so on. PS: Do keep in mind that not all hobbies have a successful end. For example, I always craved to be good with graphic design, image processing and the like, where I've always failed. I never was able to keep myself motivated enough. Similar was my experience when trying to learn to play a musical instrument. It just didn't work out for me, then. There's also a phase in it where you fail, then learn from the failures and proceed further, and eventually succeed. But we all like to talk about the successes. :-)

Astronomy So far, my impression has been that this topic/domain will not suit most people. While the initial attraction may be strong, given the complexity and perseverance that Astronomy requires, most people would lose interest in it very soon. Then there's the realization factor. If one goes in with an expectation of quick results, they may get disappointed. It isn't like a point-and-shoot device that'd give you results on the spot. There's also the expectation side of things. If you are a person more accustomed to taking pretty selfies, which always come out right because the phone manufacturer does heavy processing on the images to ensure that you get to see the pretty fake self most of the time, then star gazing with telescopes could be a frustrating experience altogether. What you get to see in the images on the internet will be very different from what you'd be able to see with your eyes and your basic telescope. There's also the cost aspect. The more powerful (and expensive) your telescope, the better your view. And all things aside, you may still lose interest after you've done all the ground work and spent a good chunk of money on it, simply because the object you are gazing at is more of a still image, which can quickly get boring for many. On the other hand, if none of these things obstruct, then the domain of Astronomy can be quite fascinating. It is a continuous learning domain (reminds me of CI in our software field these days). It is just the beginning for us here, and we hope to have a lasting experience in it.

The Internet I have been indebted to the internet right from the beginning. The internet is what helped me be able to achieve all I wanted. It is one field with no boundaries. If there is a will, there is a way; and often times, the internet is the way.
  • I learnt computers over the internet.
  • Learnt more about gardening and plants over the internet
  • Learnt more about fish care-taking over the internet
And many, many more things. Some of the communities on the internet are a great avenue for participation. They bridge the age gap, the regional gap and many more. For my Astronomy needs, I was glad to see so many active communities, with great participants, on the internet.

Telescope While there are multiple options to start star gazing, I chose to start with a telescope. But as someone completely new to this domain, there was a long way to go. And to add to that, there's real life: work + family. I spent a good 12+ months reading up on the different types of telescopes: what they are, their differences, their costs, their practical availability etc. The good thing is that the market has offerings for everything, from a very basic binocular to a fully automatic Maksutov-Cassegrain scope. It all depends on your budget.

Automatic vs Manual To make it easy for users, the market has multiple options on offer. One could opt for a cheap, basic and manually operated telescope, which would require the user to do a lot of ground study. On the other hand, users also have the option of automatic telescopes, which do the hard work of locating and tracking the planetary objects. Either option aside, the end result of how much you'll be able to observe the sky still depends on many more factors: enthusiasm over time, light pollution, clear skies, timing etc. PS: The planetary objects move at a steady pace. Objects you lock into your view now will be gone out of the FOV in just a matter of minutes.

My Telescope After spending so much time reading up on types of telescopes, my conclusion was that a scope with a high aperture and focal length was the way to go. This shortened my list to Dobsonians. But Dobsonians aren't very cheap telescopes, whether manual or automatic. My final decision made me acquire a 6" Dobsonian Telescope. It is a Newtonian Reflecting Telescope with a 1200mm focal length and 150mm diameter. Another thing about this subject is that most of the stuff you do in Astronomy, right from telescope selection, to installation, to star gazing, is DIY, so your mileage may vary with the end result and experience. For me, installation wasn't very difficult. I was able to assemble the base Dobsonian mount and the scope in around 2 hours. But the installation manual I had been provided with was very brief. I ended up with one module in the mount wrongly fitted, which I was able to fix later with the help of online forums.
Dobsonian Mount
In this image you can see that the side facing out, where the handle will go, is wrong. If fitted this way, the handle will not withstand any weight at all.
Correct Panel Side
This is the right fit of the handle base board. In this image, the handle is on the other side, which I'm holding. Because the initial fit caused some damage to the engineered wood, I fixed it up by sealing it with some adhesive. With that, this is what my final telescope looks like.
Final Telescope

Clear Skies While the telescope was ready, the skies were not. For almost the next 10 days, we had no clear skies at all. All I could do was wait. Wait so much that I had forgotten to check on the skies. Luckily, my wife noticed clear skies this week, for a single day. Clear enough that we could try out our telescope for the very first time.
Me posing for a shot

Telescope As I said earlier, in my opinion, this subject takes a lot of patience and perseverance. And most of the things here are DIY. To start with, we targeted the Moon, because it is easy. I pointed the scope at the moon, then looked into the finder scope to center it, and then looked through the eyepiece. And blank. Nothing out there. Turns out the finder scope and the viewer's angle weren't aligned. This is common, and it is the first DIY step when you plan to use your telescope for viewing. Since our first attempt was unplanned and just random, because we luckily spotted that the skies were clear, we weren't prepared for this. Luckily, mapping the difference in the alignment, in my head, is not very difficult. After a couple of minutes, I could make out the point in the finder scope where the object, if projected, would show up properly in the viewer. With that done, it was just mesmerizing to see the Moon, in a bit more detail than what I've seen all these years of my life.
Moon
The images are not exactly what we saw with our eyes. The view was much more vivid than these pictures. But as a first timer, I really wanted to capture this first moment of a closer view of the Moon. In the whole process, that of ground work studying telescopes, installation of the telescope, astronomy basics and many other things, the most difficult part in this entire journey was to point my phone at the viewing eyepiece to get a shot of the object. This requirement just introduced me to astrophotography. And then, Dobsonians aren't the best model for astrophotography, from what I've learnt so far. Hopefully, I'll find ways to do some DIY astrophotography with the tools I have, or extend my arsenal over time. But overall, we've been very pleased with the subject of Astronomy. It is a different feel altogether, and we're glad to have forayed into it.

20 August 2020

Ritesh Raj Sarraf: LUKS Headless Laptop

As we grow old, so do our computing machines. And just like we don't decommission ourselves, so should be the case with the machines. They should be semi-retired, delegating major tasks to newer machines while they can still serve some less demanding work: File Servers, UPnP Servers et cetera. It is common on a Debian Installer based derivative, and otherwise too, to use block encryption on Linux. With machines from this decade, I think we've always had CPU extensions for encryption. So, as would be the usual case, all my laptops are block encrypted. But as they reach the phase of their life where they retire and serve as a headless box, it becomes cumbersome to keep feeding them a password, along with all the logistics involved in feeding it. As such, I wanted to get rid of feeding it the password. Then, there's also the case of bad/faulty hardware, much of which can temporarily regain its functionality when reset, which usually means rebooting the machine. I still recollect the words of my Linux Guru - Dhiren Raj Bhandari - that many of the unexplainable errors can be resolved by just rebooting the machine. This was more than 20 years ago, in the prime era of the Microsoft Windows OS, and the context back then was quite different, but yes, some bits of that saying still apply today. So I wanted my laptop, which had LUKS set up for 2 disks, to go password-less now. I stumbled across a slightly dated article where the author achieved similar results with keyscript. So the thing was doable. To my delight, Debian's cryptsetup has the best setup and documentation in place to do it by just adding keyfiles:
rrs@lenovo:~$ dd if=/dev/random of=sda7.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00540209 s, 94.8 kB/s
19:19            
rrs@lenovo:~$ dd if=/dev/random of=sdb1.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00536747 s, 95.4 kB/s
19:20            
rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
[sudo] password for rrs: 
Enter any existing passphrase: 
No key available with this passphrase.
19:20         => 2  
rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
Enter any existing passphrase: 
19:20            
rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sdb1 sdb1.key 
Enter any existing passphrase: 
19:21            
and the nice integration in crypttab ensures your keys propagate to the initramfs:
rrs@lenovo:~$ cat /etc/cryptsetup-initramfs/conf-hook 
#
# Configuration file for the cryptroot initramfs hook.
#
#
# KEYFILE_PATTERN: ...
#
# The value of this variable is interpreted as a shell pattern.
# Matching key files from the crypttab(5) are included in the initramfs
# image.  The associated devices can then be unlocked without manual
# intervention.  (For instance if /etc/crypttab lists two key files
# /etc/keys/ root,swap .key, you can set KEYFILE_PATTERN="/etc/keys/*.key"
# to add them to the initrd.)
#
# If KEYFILE_PATTERN if null or unset (default) then no key file is
# copied to the initramfs image.
#
# Note that the glob(7) is not expanded for crypttab(5) entries with a
# 'keyscript=' option.  In that case, the field is not treated as a file
# name but given as argument to the keyscript.
#
# WARNING: If the initramfs image is to include private key material,
# you'll want to create it with a restrictive umask in order to keep
# non-privileged users at bay.  For instance, set UMASK=0077 in
# /etc/initramfs-tools/initramfs.conf
#
KEYFILE_PATTERN="/etc/luks/sd*.key"
19:44            
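For completeness, the matching /etc/crypttab entries would look roughly like the following. This is only a sketch; the mapper names and UUIDs are placeholders, and a final update-initramfs -u is what actually copies the key files into the initramfs image.
# <target name>  <source device>                            <key file>          <options>
sda7_crypt       UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /etc/luks/sda7.key  luks
sdb1_crypt       UUID=ffffffff-0000-1111-2222-333333333333  /etc/luks/sdb1.key  luks
$ sudo update-initramfs -u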
The whole thing took me around 20-25 minutes, including drafting this post. From Retired Head and Password Prompt to Headless and Password-less. The beauty of Debian and FOSS.

18 July 2020

Ritesh Raj Sarraf: Laptop Mode Tools 1.74

Laptop Mode Tools 1.74 Laptop Mode Tools version 1.74 has been released. This release includes important bug fixes, some default settings updated to match current driver support in Linux, and support for devices with nouveau-based nVIDIA cards. A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

1.74 - Sat Jul 18 19:10:40 IST 2020
* With 4.15+ kernels, Linux Intel SATA has a better link power
  saving policy, med_power_with_dipm, which should be the recommended
  one to use
* Disable defaults for syslog logging
* Initialize LM_VERBOSE with default to disabled
* Merge pull request #157 from rickysarraf/nouveau
* Add power saving module for nouveau cards
* Disable ethernet module by default
* Add board-specific folder and documentation
* Add execute bit on module radeon-dpm
* Drop unlock because there is no lock acquired
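For context, the link power policy mentioned above is the standard SATA sysfs knob; checking or setting it by hand, independent of laptop-mode-tools, looks roughly like this (a sketch; host numbers vary per machine):
$ cat /sys/class/scsi_host/host0/link_power_management_policy
med_power_with_dipm
$ echo med_power_with_dipm | sudo tee /sys/class/scsi_host/host0/link_power_management_policy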

Resources

What is Laptop Mode Tools
Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

16 June 2020

Ritesh Raj Sarraf: Kodi PS3 BD Remote

Setting up a Sony PS3 Blu-Ray Disc Remote Controller with Kodi TLDR; Since most of the articles on the internet were either obsolete or broken, I've chosen to write these notes down in the form of a blog post, so that it helps me now and in the future, and hopefully others too.

Raspberry Pi All this time, I have been using the Raspberry Pi for my HTPC needs. The first RPi I acquired was in 2014, and I have been very, very happy with the amount of support in the community and the quality of the HTPC offering it has. I also appreciate the RPi's form factor and the power consumption limits. And then, to add more sugar to it, it uses a derivative of Debian, Raspbian, which was very familiar and felt good to me.

Raspberry Pi Issues So primarily, I use my RPi with Kodi. There are a bunch of other (daemon) services, but the primary use case is HTPC only. RPi + Kodi has a very annoying issue wherein it loses its audio pitch during video playback. The loss is so bad that the audio is barely audible. The workaround is to seek the video playback either way, and then it comes back to its actual audio level, just to fade again in a while. My suspicion was that it may be a problem with Kodi. Or at least, that Kodi would have a workaround in software. But unfortunately, I wasted a lot of time dealing with my suspicion with no fruitful result. This started becoming a PITA over time. And it seems the issue is with the hardware itself, because after I moved my setup to a regular laptop, the audio loss is gone.

Laptop with Kodi Since I had my old Lenovo Yoga 2 13 lying around powered on all the time, it made sense to make some more use of it, as the HTPC. This machine comes with a Micro-HDMI Out port, so it felt ideal for my High Definition video rendering needs. It comes stock with just Intel HD Video, with good driver support in Linux, so it was quite quick and easy getting Kodi charged up and running on it. And as I mentioned above, the sound issues are not seen on this setup. An added benefit is that I get to run stock Debian on this machine. And I must say a big THANK YOU to the Debian Multimedia Maintainers, who've done a pretty good job maintaining Kodi under Debian.

HDMI CEC Only after I decommissioned my RPi did I come to notice how convenient the HDMI CEC functionality is. Turns out, standard laptops do not ship with CEC functionality. Even my laptop, which has a Micro HDMI Out port, still has no CEC capabilities. As far as I know, the RPi came with the Pulse-Eight CEC module, so the obvious first thought was to opt for a compatible external module of the same; but it comes with a hefty price tag, which I was not willing to spend.

WiFi Remotes Kodi has a very well implemented network interface for almost all its features. One could take the Yatse or Music Pump Kodi Remote Android applications, which work very well with Kodi. But WiFi can be flaky at times. Especially, my experience with Realtek network devices hasn't been very good. The driver support in Linux is okay, but there are many firmware bugs to deal with. In my case, the machine will lose the WiFi signal/network every once in a while. And it turns out that, for this machine, with this network device type, I'm not the only one running into such problems. And to add to that, this is an UltraBook, which means it doesn't have an Ethernet port. So I've not had much choice other than to live with and deal with it. The WiFi chip also provides the Bluetooth module, which so far I had not used much. In my /etc/modprobe.d/blacklist-memstick.conf, all the relevant BT modules had been blacklisted all this time.
rrs@lenovo:~$ cat /etc/modprobe.d/blacklist-memstick.conf 
blacklist memstick
blacklist rtsx_usb_ms
# And bluetooth too
#blacklist btusb
#blacklist btrtl
#blacklist btbcm
#blacklist btintel
#blacklist bluetooth
21:21            
Also worth keeping in mind is that the driver for my card gives a very misleading kernel message, which is one of the many reasons for this blog post, so that I don't forget it a couple of months later. The missing firmware error message is okay to ignore, as per this upstream comment.
Jun 14 17:17:08 lenovo kernel: usbcore: registered new interface driver btusb
Jun 14 17:17:08 lenovo systemd[1]: Mounted /boot/efi.
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: examining hci_ver=06 hci_rev=000b lmp_ver=06 lmp_subver=8723
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: rom_version status=0 version=1
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_config.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: failed to load rtl_bt/rtl8723b_config.bin (-2)
Jun 14 17:17:08 lenovo kernel: firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: Direct firmware load for rtl_bt/rtl8723b_config.bin failed with error -2
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: cfg_sz -2, total sz 22496
This device's network + BT are on the same chip.
01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter
And then, when the btusb module is initialized (along with the misleading driver message), you'll get the following in your USB device listing:
Bus 002 Device 005: ID 0bda:b728 Realtek Semiconductor Corp. Bluetooth Radio

Sony PlayStation 3 BD Remote Almost 10 years ago, I bought the PS3 and many of its accessories. The remote has just been rotting on the shelf. It had rusted so badly that it is better described with these pics.
Rusted inside
Rusted inside and cover
Rusted spring
The rust was so bad that the battery-holding spring gave up. A little bit of scrubbing and cleaning has gotten it working. I hope it lasts for some time, before I find time to open it up and give it a full clean-up.

Pairing the BD Remote to the laptop Honestly, with the condition of the hardware and software on both ends, I did not have much hope of getting this to work. And in all the years of my computer usage, I hardly recollect many days when I've made use of BT. Probably because the full BT stack wasn't that well integrated in Linux earlier. And I mostly used to disable it in hardware and software to save on battery. All the results from the internet talked about tools/scripts that were either not working, pointing to broken links, etc. These days, bluez comes with a nice utility, bluetoothctl. It was a nice experience using it. First, start your bluetooth service and ensure that the device talks well with the kernel:
rrs@lenovo:~$ systemctl status bluetooth                                                                                                          
  bluetooth.service - Bluetooth service                                                                                                           
     Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)                                                      
     Active: active (running) since Mon 2020-06-15 12:54:58 IST; 3s ago                                                                           
       Docs: man:bluetoothd(8)                                                                                                                    
   Main PID: 310197 (bluetoothd)                                                                                                                  
     Status: "Running"                                                                                                                            
      Tasks: 1 (limit: 9424)                                                                                                                      
     Memory: 1.3M                                                                                                                                 
     CGroup: /system.slice/bluetooth.service                                                                                                      
              310197 /usr/lib/bluetooth/bluetoothd                                                                                               
                                                                                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Starting Bluetooth service...                                                                                  
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth daemon 5.50                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Started Bluetooth service.                                                                                     
Jun 15 12:54:58 lenovo bluetoothd[310197]: Starting SDP server                                                                                    
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth management interface 1.15 initialized                                                        
Jun 15 12:54:58 lenovo bluetoothd[310197]: Sap driver initialization failed.                                                                      
Jun 15 12:54:58 lenovo bluetoothd[310197]: sap-server: Operation not permitted (1)                                                                
12:55                                                                                                                                       
Next is to discover and connect to your device:
rrs@lenovo:~$ bluetoothctl 
Agent registered
[bluetooth]# devices
Device E6:3A:32:A4:31:8F MI Band 2
Device D4:B8:FF:43:AB:47 MI RC
Device 00:1E:3D:10:29:0F BD Remote Control
[CHG] Device 00:1E:3D:10:29:0F Connected: yes
[BD Remote Control]# info 00:1E:3D:10:29:0F
Device 00:1E:3D:10:29:0F (public)
        Name: BD Remote Control
        Alias: BD Remote Control
        Class: 0x0000250c
        Paired: no
        Trusted: yes
        Blocked: no
        Connected: yes
        LegacyPairing: no
        UUID: Human Interface Device... (00001124-0000-1000-8000-00805f9b34fb)
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v054Cp0306d0100
[bluetooth]# 
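Condensed down, the whole interaction is just a handful of bluetoothctl commands; a sketch, reusing the remote's address from the listing above:
[bluetooth]# power on
[bluetooth]# scan on
[bluetooth]# trust 00:1E:3D:10:29:0F
[bluetooth]# connect 00:1E:3D:10:29:0F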
In the case of the Sony BD Remote, there's no need to pair. In fact, trying to pair fails. It prompts for the PIN code, but neither 0000 nor 1234 is accepted. So the working steps, so far, are to Trust the device and then Connect the device. For the sake of future use, I also populated /etc/bluetooth/input.conf based on suggestions on the internet. Note: The advertised keymappings in this config file do not work. Note: I'm only using it for the power saving measure of instructing the BT connection to sleep after 3 minutes.
rrs@priyasi:/tmp$ cat input.conf 
# Configuration file for the input service
# This section contains options which are not specific to any
# particular interface
[General]
# Set idle timeout (in minutes) before the connection will
# be disconnect (defaults to 0 for no timeout)
IdleTimeout=3
# Enable HID protocol handling in userspace input profile
# Defaults to false (HIDP handled in HIDP kernel module)
#UserspaceHID=true
# Limit HID connections to bonded devices
# The HID Profile does not specify that devices must be bonded, however some
# platforms may want to make sure that input connections only come from bonded
# device connections. Several older mice have been known for not supporting
# pairing/encryption.
# Defaults to false to maximize device compatibility.
#ClassicBondedOnly=true
# LE upgrade security
# Enables upgrades of security automatically if required.
# Defaults to true to maximize device compatibility.
#LEAutoSecurity=true
#
#[00:1E:3D:10:29:0F]
[2c:33:7a:8e:d6:30]
[PS3 Remote Map]
# When the 'OverlayBuiltin' option is TRUE (the default), the keymap uses
# the built-in keymap as a starting point.  When FALSE, an empty keymap is
# the starting point.
#OverlayBuiltin = TRUE
#buttoncode = keypress    # Button label = action with default key mappings
#OverlayBuiltin = FALSE
0x16 = KEY_ESC            # EJECT = exit
0x64 = KEY_MINUS          # AUDIO = cycle audio tracks
0x65 = KEY_W              # ANGLE = cycle zoom mode
0x63 = KEY_T              # SUBTITLE = toggle subtitles
0x0f = KEY_DELETE         # CLEAR = delete key
0x28 = KEY_F8             # /TIME = toggle through sleep
0x00 = KEY_1              # NUM-1
0x01 = KEY_2              # NUM-2
0x02 = KEY_3              # NUM-3
0x03 = KEY_4              # NUM-4
0x04 = KEY_5              # NUM-5
0x05 = KEY_6              # NUM-6
0x06 = KEY_7              # NUM-7
0x07 = KEY_8              # NUM-8
0x08 = KEY_9              # NUM-9
0x09 = KEY_0              # NUM-0
0x81 = KEY_F2             # RED = red
0x82 = KEY_F3             # GREEN = green
0x80 = KEY_F4             # BLUE = blue
0x83 = KEY_F5             # YELLOW = yellow
0x70 = KEY_I              # DISPLAY = show information
0x1a = KEY_S              # TOP MENU = show guide
0x40 = KEY_M              # POP UP/MENU = menu
0x0e = KEY_ESC            # RETURN = back/escape/cancel
0x5c = KEY_R              # TRIANGLE/OPTIONS = cycle through recording options
0x5d = KEY_ESC            # CIRCLE/BACK = back/escape/cancel
0x5f = KEY_A              # SQUARE/VIEW = Adjust Playback timestretch
0x5e = KEY_ENTER          # CROSS = select
0x54 = KEY_UP             # UP = Up/Skip forward 10 minutes
0x56 = KEY_DOWN           # DOWN = Down/Skip back 10 minutes
0x57 = KEY_LEFT           # LEFT = Left/Skip back 5 seconds
0x55 = KEY_RIGHT          # RIGHT = Right/Skip forward 30 seconds
0x0b = KEY_ENTER          # ENTER = select
0x5a = KEY_F10            # L1 = volume down
0x58 = KEY_J              # L2 = decrease the play speed
0x51 = KEY_HOME           # L3 = commercial skip previous
0x5b = KEY_F11            # R1 = volume up
0x59 = KEY_U              # R2 = increase the play speed
0x52 = KEY_END            # R3 = commercial skip next
0x43 = KEY_F9             # PS button = mute
0x50 = KEY_M              # SELECT = menu (as per PS convention)
0x53 = KEY_ENTER          # START = select / Enter (matches terminology in mythwelcome)
0x30 = KEY_PAGEUP         # PREV = jump back (default 10 minutes)
0x76 = KEY_J              # INSTANT BACK (newer RCs only) = decrease the play speed
0x75 = KEY_U              # INSTANT FORWARD (newer RCs only) = increase the play speed
0x31 = KEY_PAGEDOWN       # NEXT = jump forward (default 10 minutes)
0x33 = KEY_COMMA          # SCAN BACK =  decrease scan forward speed / play
0x32 = KEY_P              # PLAY = play/pause
0x34 = KEY_DOT            # SCAN FORWARD decrease scan backard speed / increase playback speed; 3x, 5, 10, 20, 30, 60, 120, 180
0x60 = KEY_LEFT           # FRAMEBACK = Left/Skip back 5 seconds/rewind one frame
0x39 = KEY_P              # PAUSE = play/pause
0x38 = KEY_P              # STOP = play/pause
0x61 = KEY_RIGHT          # FRAMEFORWARD = Right/Skip forward 30 seconds/advance one frame
0xff = KEY_MAX
21:48                 
I have not spent much time finding out why not all the key presses work, especially given that most places on the internet mention these mappings. For me, some of the key scan codes aren't even reported. For keys like L1, L2, L3, R1, R2, R3, Next_Item, Prev_Item, they generate no codes in the kernel. If anyone has suggestions, ideas or fixes, I'd appreciate it if you could drop a comment or email me privately. But given my limited goal of getting a simple remote ready to be usable with Kodi, I was fine with only some of the keys working.

Mapping the keys in Kodi With the limited number of keys detected, mapping those keys to what Kodi could use was the next step. Kodi has a very nice and easy to use add-on, Keymap Editor. It is very simple to use and maps detected keys to the functionalities you want. With it, I was able to get a functioning remote to use with my Kodi HTPC setup.
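For reference, what Keymap Editor generates under the hood is a plain Kodi keymap XML; if I remember correctly, it lands somewhere like ~/.kodi/userdata/keymaps/gen.xml. A hand-written equivalent would be roughly the following sketch, where the key ids are placeholders for whatever codes your remote actually emits:
<keymap>
  <global>
    <keyboard>
      <key id="61453">Select</key>
      <key id="61467">Back</key>
      <key id="61513">PlayPause</key>
    </keyboard>
  </global>
</keymap>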

Update: Wed Jun 17 11:38:20 2020 One annoying problem that breaks the overall experience is the following bug on the driver side, which results in connections not being established instantly. Once the device goes into sleep mode, in random attempts, waking up and re-establishing a BT connection can be a multi-poll affair. This can last from a couple of seconds to well over a minute. Random suggestions on the internet mention disabling the autosuspend functionality for the device in the driver with btusb.enable_autosuspend=n, but that did not help in this case. Given that this device is enumerated over the USB bus, it probably needs this feature applied to the whole USB tree of the device's chain. Something to investigate over the weekend.
Jun 16 20:41:23 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 7
Jun 16 20:41:43 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 8
Jun 16 20:41:59 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 9
Jun 16 20:42:18 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:10/0005:054C:030>
Jun 16 20:42:18 lenovo kernel: sony 0005:054C:0306.0006: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 20:51:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:11/0005:054C:030>
Jun 16 20:51:59 lenovo kernel: sony 0005:054C:0306.0007: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 3 threads of 1 processes of 1 users.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Successfully made thread 32747 of process 1646 owned by '1000' RT at priority 5.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 4 threads of 1 processes of 1 users.
Jun 16 21:05:56 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 12
Jun 16 21:06:12 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 1
Jun 16 21:06:34 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 2
Jun 16 21:06:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:3/0005:054C:0306>
Jun 16 21:06:59 lenovo kernel: sony 0005:054C:0306.0008: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30

Others There's a package, kodi-eventclients-ps3, which can be used to talk to the BD Remote. Unfortunately, it isn't up-to-date. When trying to make use of it, I ran into a couple of problems. First, the easy one is:
rrs@lenovo:~/ps3pair$ kodi-ps3remote localhost 9777
usr/share/pixmaps/kodi//bluetooth.png
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 220, in <module>
  File "/usr/bin/kodi-ps3remote", line 208, in main
    xbmc.connect(host, port)
    packet = PacketHELO(self.name, self.icon_type, self.icon_file)
  File "/usr/lib/python3/dist-packages/kodi/xbmcclient.py", line 285, in __init__
    with open(icon_file, 'rb') as f:
11:16         => 1  
This one was simple, as it was just a broken path. The second issue with the tool is a leftover from the python2 to python3 conversion.
rrs@lenovo:/etc/bluetooth$ kodi-ps3remote localhost
/usr/share/pixmaps/kodi//bluetooth.png
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Redmi (E8:5A:8B:73:57:44) in range
Living Room TV (E4:DB:6D:24:23:E9) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Living Room TV (E4:DB:6D:24:23:E9) in range
Redmi (E8:5A:8B:73:57:44) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
BD Remote Control (00:1E:3D:10:29:0F) in range
Found BD Remote Control with address 00:1E:3D:10:29:0F
Attempting to pair with remote
Remote Paired.
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 221, in <module>
    main()
  File "/usr/bin/kodi-ps3remote", line 212, in main
    if process_keys(remote, xbmc):
  File "/usr/bin/kodi-ps3remote", line 164, in process_keys
    keycode = data.encode("hex")[10:12]
AttributeError: 'bytes' object has no attribute 'encode'
11:24         => 1  
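For what it's worth, the Python 3 equivalent of that line would presumably be something along the lines of keycode = data.hex()[10:12], since bytes objects no longer offer encode("hex") but do provide a .hex() method.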
Fixing that too did not give me the desired result of using the BD Remote the way I want. So eventually, I gave up and used Kodi's Keymap Editor instead.

Next Next in line, when I can manage to get some free time, is to improve the Kodi Video Scraper to have a fallback mode. Currently, for files where it cannot determine the content, it rejects the file, resulting in those files not showing up in your collection at all. A better approach would be to have a fallback mode: when the scraper cannot determine the content, it should fall back to using the filename scraper.

2 July 2017

Ritesh Raj Sarraf: apt-offline 1.8.1 released

apt-offline 1.8.1 released. This is a bug fix release fixing some python3 glitches related to module imports. Recommended for all users.

apt-offline (1.8.1) unstable; urgency=medium

  * Switch setuptools to invoke py3
  * No more argparse needed on py3
  * Fix genui.sh based on comments from pyqt mailing list
  * Bump version number to 1.8.1

 -- Ritesh Raj Sarraf <rrs@debian.org>  Sat, 01 Jul 2017 21:39:24 +0545
What is apt-offline
Description: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.
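As a quick refresher, the typical round trip looks roughly like this (a sketch; the file paths are placeholders):
# on the disconnected machine: generate the signature
$ sudo apt-offline set /tmp/apt-offline.sig --update --upgrade
# on any connected machine (even a non-Debian one): download everything listed
$ apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline-bundle.zip
# back on the disconnected machine: feed the data to APT
$ sudo apt-offline install /tmp/apt-offline-bundle.zip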


21 May 2017

Ritesh Raj Sarraf: apt-offline 1.8.0 released

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, there's also an important bug fixed, related to a memory leak when using the MIME library. And then there are some updates to the documentation (user examples) based on feedback from users. The release is available from GitHub and Alioth.

What is apt-offline ?

Description: offline APT package manager
apt-offline is an Offline APT Package Manager.
.
apt-offline can fully update and upgrade an APT based distribution without
connecting to the network, all of it transparent to APT.
.
apt-offline can be used to generate a signature on a machine (with no network).
This signature contains all download information required for the APT database
system. This signature file can be used on another machine connected to the
internet (which need not be a Debian box and can even be running windows) to
download the updates.
The downloaded data will contain all updates in a format understood by APT and
this data can be used by apt-offline to update the non-networked machine.
.
apt-offline can also fetch bug reports and make them available offline.


20 May 2017

Ritesh Raj Sarraf: Patanjali Research Foundation

PSA: Research in the domain of Ayurveda

http://www.patanjaliresearchfoundation.com/patanjali/
I am so glad to see this initiative taken by the Patanjali group. This is a great stepping stone in the health and wellness domain. So far, Allopathy has been blunt in discarding alternate medicine practices, without much solid justification. The only, repetitive, response I've heard is "lack of research". This initiative definitely is a great step in that regard. Ayurveda (Ancient Hindu art of healing) has a huge potential to touch lives. For the Indian sub-continent, this has the potential of a blessing. The Prime Minister of India himself inaugurated the research centre.


21 April 2017

Ritesh Raj Sarraf: Indian Economy

This has finally gotten me to ask the question.

All this time since my childhood, I grew up reading, hearing and watching that the core economy of India is Agriculture, and that it needs the highest bracket in the budgets of the country. It still applies today. Every budget has special waivers for the agriculture sector, typically in hundreds of thousands of crores of Indian Rupees. The most recent to mention is INR 27420 Crores waived off for just a single state (Uttar Pradesh), as was promised by the winning party during their campaign. Wow. A quick search shows that I am not alone in noticing this. In the past, whenever I talked about the economy of this country, I mostly sidelined myself, because I never studied here, and neither did I live here much during my childhood or teenage days. Only in the last decade have I realized how much tax I pay, and where my taxes go. I do see a justification for these loan waivers though. As a democracy, to remain in power, it is the people you need support from. And if a majority of your 1.3 billion people is in the agriculture sector, it is a very, very lucrative deal to attract them through such waivers, and expect their vote.

Here's another snippet from Wikipedia on the same topic:

Agricultural Debt Waiver and Debt Relief Scheme

On 29 February 2008, P. Chidambaram, at the time Finance Minister of India, announced a relief package for farmers which included the complete waiver of loans given to small and marginal farmers.[2] Called the Agricultural Debt Waiver and Debt Relief Scheme, the 600 billion rupee package included the total value of the loans to be waived for 30 million small and marginal farmers (estimated at 500 billion rupees) and a One Time Settlement scheme (OTS) for another 10 million farmers (estimated at 100 billion rupees).[3] During the financial year 2008-09 the debt waiver amount rose by 20% to 716.8 billion rupees and the overall benefit of the waiver and the OTS was extended to 43 million farmers.[4] In most of the Indian States the number of small and marginal farmers ranges from 70% to 94% of the total number of farmers.

And not to forget how many people pay taxes in India. To quote an unofficial statement from an Indian media house:

Only about 1 percent of India's population paid tax on their earnings in the year 2013, according to the country's income tax data, published for the first time in 16 years.

The report further states that a total of 28.7 million individuals filed income tax returns, of which 16.2 million did not pay any tax, leaving only about 12.5 million tax-paying individuals, which is just about 1 percent of the 1.23 billion population of India in the year 2013.

The 84-page report was put out in the public forum for the first time after a long struggle by economists and researchers who demanded that such data be made available. In a press release, a senior official from India's income tax department said the objective of publishing the data is to encourage wider use and analysis by various stakeholders including economists, students, researchers and academics for purposes of tax policy formulation and revenue forecasting.

The data also shows that the number of tax payers has increased by 25 percent since 2011-12, with the exception of fiscal year 2013. The year 2014-15 saw a rise to 50 million tax payers, up from 40 million three years ago. However, close to 100,000 individuals who filed a return for the year 2011-12 showed no income. The report brings to light low levels of tax collection and a massive amount of income inequality in the country, showing the rich aren't paying enough taxes.

Low levels of tax collection could be a challenge for the current government as it scrambles for money to spend on its ambitious plans in areas such as infrastructure and science & technology. Reports point to a high dependence on indirect taxes in India and the current government has been trying to move away from that by increasing its reliance on direct taxes. Official data show that the dependence has come down from 5.93 percent in 2008-09 to 5.47 percent in 2015-16.

I can't say if I am correct in my understanding of this chart, or my understanding of the economy of India; but if there's someone good on this topic, who has studied the Indian Economy well, I'd be really interested to know what their take is. Because, otherwise, from my own interpretation of the subject, I don't see the day far away when this economy will plummet. PS: Image source Wikipedia https://upload.wikimedia.org/wikipedia/commons/2/2e/1951_to_2013_Trend_C...
