Search Results: "ocsi"

12 November 2023

Petter Reinholdtsen: New and improved sqlcipher in Debian for accessing Signal database

For a while now I have wanted direct access to the Signal database of messages and channels in my desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy as well as raising the cost of the mass surveillance government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week and set out to update the package. In the process I orphaned it, to make it more obvious to the next person looking at it that the package needs proper maintenance. The version in Debian was around five years old, and quite a lot of upstream changes needed importing into the Debian maintenance git repository. After spending a few days importing the new upstream versions, and realising that upstream did not care much for SONAME versioning, as I saw library symbols being both added and removed with minor version number changes to the project, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Having dug deep into the hole of shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition. Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:
/usr/bin/jq -r '."key"' ~/.config/Signal/config.json
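In a script, that key can be captured into a shell variable for later use; a trivial sketch:
key=$(/usr/bin/jq -r '."key"' ~/.config/Signal/config.json)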
Assume the key value obtained is 'secretkey', a hexadecimal string representing the key used to encrypt the database. One can then connect to the database and inject the encryption key to gain SQL access to its contents. Here is an example dumping the database structure:
% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,
      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,
    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,
    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,
    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,
        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,
        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,
        PRIMARY KEY (payloadId, recipientServiceId, deviceId),
        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,
        messageId STRING NOT NULL,
        PRIMARY KEY (payloadId, messageId),
        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,
        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,
        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,
        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy) | uuid | groupId | roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct" | "Group"
        type TEXT NOT NULL, -- enum "Audio" | "Video" | "Group"
        direction TEXT NOT NULL, -- enum "Incoming" | "Outgoing"
        -- Direct: enum "Pending" | "Missed" | "Accepted" | "Deleted"
        -- Group: enum "GenericGroupCall" | "OutgoingRing" | "Ringing" | "Joined" | "Missed" | "Declined" | "Accepted" | "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>
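With the schema in hand, ordinary SQL queries work as expected. As a hypothetical example (column names taken from the schema above, otherwise untested), this lists the ten most active conversations, assuming sent_at holds millisecond timestamps as the expiresAt formula above suggests:
% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> SELECT conversationId, count(*) AS messages,
   ...>        datetime(max(sent_at)/1000, 'unixepoch') AS last_sent
   ...> FROM messages GROUP BY conversationId ORDER BY messages DESC LIMIT 10;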
Finally I have the tool I need to inspect and process Signal messages without using the vendor-provided client. Now on to transforming it to a more useful format. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

26 August 2022

Antoine Beaupré: How to nationalize the internet in Canada

Rogers had a catastrophic failure in July 2022. It affected emergency services (as in: people couldn't call 911, but also some 911 services themselves failed), hospitals (which couldn't access prescriptions), banks and payment systems (as payment terminals stopped working), and regular users as well. The outage lasted almost a full day, and Rogers took days to give any technical explanation, and even when they did, details were sparse. So far the only detailed account is from outside actors like Cloudflare, which seem to point at an internal BGP failure. Its impact on the economy has yet to be measured, but it probably cost millions of dollars in wasted time and possibly led to life-threatening situations. Apart from holding Rogers (criminally?) responsible for this, what should be done in the future to avoid such problems? It's not the first time something like this has happened: it happened to Bell Canada as well. The Rogers outage is also strangely similar to the Facebook outage last year, but, to its credit, Facebook did post a fairly detailed explanation only a day later. The internet is designed to be decentralised, and having large companies like Rogers hold so much power is a crucial mistake that should be reverted. The question is how. Some critics were quick to point out that we need more ISP diversity and competition, but I think that's missing the point. Others have suggested that the internet should be a public good or even outright nationalized. I believe the solution to the problem of large, private, centralised telcos and ISPs is to replace them with smaller, public, decentralised service providers. The only way to ensure that works is to make sure that public money ends up creating infrastructure controlled by the public, which means treating ISPs as a public utility. This has been implemented elsewhere: it works, it's cheaper, and it provides better service.

A modest proposal Global wireless services (like phone services) and home internet inevitably grow into monopolies. They are public utilities, just like water, power, railways, and roads. The question of how they should be managed is therefore inherently political, yet people don't seem to question the idea that only the market (i.e. "competition") can solve this problem. I disagree. 10 years ago (in French), I suggested we, in Québec, should nationalize large telcos and internet service providers. I no longer believe this is a realistic approach: most of those companies have crap copper-based networks (at least for the last mile), yet are worth billions of dollars. It would be prohibitive, and a waste, to buy them out. Back then, I called this idea "Réseau-Québec", a reference to the already nationalized power company, Hydro-Québec. (This idea, incidentally, made it into the plan of a political party.) Now, I think we should instead build our own, public internet. Start setting up municipal internet services, fiber to the home in all cities, progressively. Then interconnect cities with fiber, and build peering agreements with other providers. This also includes bidding on wireless spectrum to start competing with phone providers as well. And while that sounds really ambitious, I think it's possible to take this one step at a time.

Municipal broadband In many parts of the world, municipal broadband is an elegant solution to the problem, with solutions ranging from Stockholm's city-owned fiber network (dark fiber, layer 1) to Utah's UTOPIA network (fiber to the premises, layer 2) and municipal wireless networks like Guifi.net, which connects about 40,000 nodes in Catalonia. A good first step would be for cities to start providing broadband services to their residents, directly. Cities normally own sewage and water systems that interconnect most residences and therefore have direct physical access everywhere. In Montréal, in particular, there is an ongoing project to replace a lot of old lead-based plumbing which would give an opportunity to lay down a wired fiber network across the city. This is a wild guess, but I suspect this would be much less expensive than one would think. Some people agree with me and quote this as low as 1000$ per household. There are about 800,000 households in the city of Montréal, so we're talking about an 800 million dollar investment here, to connect every household in Montréal with fiber and, incidentally, a quarter of the province's population. And this is not an up-front cost: this can be built progressively, with expenses amortized over many years. (We should not, however, connect Montréal first: it's used as an example here because it's a large number of households to connect.) Such a network should be built with a redundant topology. I leave it as an open question whether we should adopt Stockholm's more minimalist approach or provide direct IP connectivity. I would tend to favor the latter, because then you can immediately start to offer the service to households and generate revenues to compensate for the capital expenditures. Given the ridiculous profit margins telcos currently have (8 billion $CAD net income for BCE in 2019, 2 billion $CAD for Rogers in 2020), I also believe this would actually turn into a profitable revenue stream for the city, the same way Hydro-Québec is more and more considered a revenue stream for the state. (I personally believe that's actually wrong and we should treat those resources as human rights and not cash cows, but I digress. The point is: this is not a cost point, it's a revenue.) The other major challenge here is that the city will need competent engineers to drive this project forward. But this is no different from the way other public utilities run: we have electrical engineers at Hydro, sewer and water engineers at the city; this is just another profession. If anything, the computer science sector might be more at fault than the city here in its failure to provide competent and accountable engineers to society... Right now, most of the network in Canada is copper: we are hitting the limits of that technology with DSL, and while cable has some life left to it (DOCSIS 4.0 does 4Gbps), that is nowhere near the capacity of fiber. Take the town of Chattanooga, Tennessee: in 2010, the city-owned ISP EPB finished deploying a fiber network to the entire town and provided gigabit internet to everyone. Now, 12 years later, they are using this same network to provide the mind-boggling speed of 25 gigabit to the home. To give you an idea, Chattanooga is roughly the size and density of Sherbrooke.

Provincial public internet As part of building a municipal network, the question of getting access to "the internet" will immediately come up. Naturally, this will first be solved by using already existing commercial providers to hook up residents to the rest of the global network. But eventually, networks should inter-connect: Montréal should connect with Laval, and then Trois-Rivières, then Québec City. This will require long haul fiber runs, but those links are not actually that expensive, and many of those already exist as a public resource at RISQ and CANARIE, which cross-connects universities and colleges across the province and the country. Those networks might not have the capacity to cover the needs of the entire province right now, but that is a router upgrade away, thanks to the amazing capacity of fiber. There are two crucial mistakes to avoid at this point. First, the network needs to remain decentralised. Long haul links should be IP links with BGP sessions, and each city (or MRC) should have its own independent network, to avoid Rogers-class catastrophic failures. Second, skill needs to remain in-house: RISQ has already made that mistake, to a certain extent, by selling its neutral datacenter. Tellingly, MetroOptic, probably the largest commercial dark fiber provider in the province, now operates the QIX, the second largest "public" internet exchange in Canada. Still, we have a lot of infrastructure we can leverage here. If RISQ or CANARIE cannot be up to the task, Hydro-Québec has power lines running into every house in the province, with high voltage power lines running hundreds of kilometers far north. The logistics of long distance maintenance are already solved by that institution. In fact, Hydro already has fiber all over the province, but it is a private network, separate from the internet for security reasons (and that should probably remain so). But this only shows they already have the expertise to lay down fiber: they would just need to lay down a parallel network to the existing one. In that architecture, Hydro would be a "dark fiber" provider.

International public internet None of the above solves the problem for the entire population of Québec, which is notoriously dispersed, with an area three times the size of France, but with only an eighth of its population (8 million vs 67). More specifically, Canada was originally a French colony, a land violently stolen from native people who have lived here for thousands of years. Some of those people now live in reservations, sometimes far from urban centers (but definitely not always). So the idea of leveraging the Hydro-Québec infrastructure doesn't always work to solve this, because while Hydro will happily flood a traditional hunting territory for an electric dam, they don't bother running power lines to the village they forcibly moved, powering it instead with noisy and polluting diesel generators. So before giving me fiber to the home, we should give power (and potable water, for that matter) to those communities first. So we need to discuss international connectivity. (How else could we consider those communities than peer nations anyways?) Québec has virtually zero international links. Even in Montréal, which likes to style itself a major player in gaming, AI, and technology, most peering goes through either Toronto or New York. That's a problem that we must fix, regardless of the other problems stated here. Looking at the submarine cable map, we see very few international links actually landing in Canada. There is the Greenland Connect which connects Newfoundland to Iceland through Greenland. There's the EXA which lands in Ireland, the UK and the US, and Google has the Topaz link on the west coast. That's about it, and none of those land anywhere near any major urban center in Québec. We should have a cable running from France up to Saint-Félicien. There should be a cable from Vancouver to China. Heck, there should be a fiber cable running all the way from the end of the great lakes through Québec, then up around the northern passage and back down to British Columbia. Those cables are expensive, and the idea might sound ludicrous, but Russia is actually planning such a project for 2026. The US has cables running all the way up (and around!) Alaska, neatly bypassing all of Canada in the process. We just look ridiculous on that map. (Addendum: I somehow forgot to talk about Teleglobe here, which was founded as a publicly owned company in 1950, growing international phone and (later) data links all over the world. It was privatized by the conservatives in 1984, along with rails and other "crown corporations". So that's one major risk to any effort to make public utilities work properly: some government might be elected and promptly sell it out to its friends for peanuts.)

Wireless networks I know most people will have rolled their eyes so far back their heads have exploded. But I'm not done yet. I want wireless too. And by wireless, I don't mean a bunch of geeks setting up OpenWRT routers on rooftops. I tried that, and while it was fun and educational, it didn't scale. A public networking utility wouldn't be complete without providing cellular phone service. This involves bidding for frequencies at the federal level, and deploying a rather large amount of infrastructure, but it could be a later phase, when the engineers and politicians have proven their worth. At least part of the Rogers fiasco would have been averted if such a decentralized network backend existed. One might even want to argue that a separate institution should be set up to provide phone services, independently from the regular wired networking, if only for reliability. Because remember here: the problem we're trying to solve is not just technical, it's about political boundaries, centralisation, and automation. If everything is run by this one organisation again, we will have failed. However, I must admit that phone service is where my ideas fall a little short. I can't help but think it's also an accessible goal (maybe starting with a virtual operator), but it seems slightly less so than the others, especially considering how closed the phone ecosystem is.

Counter points In debating these ideas while writing this article, the following objections came up.

I don't want the state to control my internet One legitimate concern I have about the idea of the state running the internet is the potential it would have to censor or control the content running over the wires. But I don't think there is necessarily a direct relationship between resource ownership and control of content. Sure, China has strong censorship in place, partly implemented through state-controlled businesses. But Russia also has strong censorship in place, based on regulatory tools: they force private service providers to install back-doors in their networks to control content and surveil their users. Besides, the USA have been doing warrantless wiretapping since at least 2003 (and yes, that's 10 years before the Snowden revelations) so a commercial internet is no assurance that we have a free internet. Quite the contrary in fact: if anything, the commercial internet goes hand in hand with the neo-colonial internet, just like businesses did in the "good old colonial days". Large media companies are the primary censors of content here. In Canada, the media cartel requested the first site-blocking order in 2018. The plaintiffs (including Québecor, Rogers, and Bell Canada) are both content providers and internet service providers, an obvious conflict of interest. Nevertheless, there are some strong arguments against having a centralised, state-owned monopoly on internet service providers. FDN makes a good point on this. But this is not what I am suggesting: at the provincial level, the network would be purely physical, and regional entities (which could include private companies) would peer over that physical network, ensuring decentralization. Delegating the management of that infrastructure to an independent non-profit or cooperative (but owned by the state) would also ensure some level of independence.

Isn't the government incompetent and corrupt? Also known as "private enterprise is better skilled at handling this, the state can't do anything right". I don't think this is a fait accompli. If anything, I have found publicly run utilities to be spectacularly reliable here. I rarely have trouble with sewage, water, or power, and keep in mind I live in a city where we receive about 2 meters of snow a year, which tends to create lots of trouble with power lines. Unless there's a major weather event, power just runs here. I think the same can happen with an internet service provider. But it would certainly need to have higher standards than what we're used to, because frankly the internet is kind of janky.

A single monopoly will be less reliable I actually agree with that, but that is not what I am proposing anyways. Current commercial or non-profit entities will be free to offer their services on top of the public network. And besides, the current "ha! diversity is great" approach is exactly what we have now, and it's not working. The pretense that we can have competition over a single network is what led the US into the ridiculous situation where they also pretend to have competition over the power utility market. This led to massive forest fires in California and major power outages in Texas. It doesn't work.

Wouldn't this create an isolated network? One theory is that this new network would be so hostile to incumbent telcos and ISPs that they would simply refuse to network with the public utility. And while it is true that the telcos currently do also act as a kind of "tier one" provider in some places, I strongly feel this is also a problem that needs to be solved, regardless of ownership of networking infrastructure. Right now, telcos often hold both ends of the stick: they are the gateway to users, the "last mile", but they also provide peering to the larger internet in some locations. In at least one datacenter in downtown Montréal, I've seen traffic go through Bell Canada that was not directly targeted at Bell customers. So in effect, they are in a position of charging twice for the same traffic, and that's not only ridiculous, it should just be plain illegal. And besides, this is not a big problem: there are other providers out there. As bad as the market is in Québec, there is still some diversity in tier one providers that could allow for some exits to the wider network (e.g. yes, Cogent is here too).

What about Google and Facebook? Nationalization of other service providers like Google and Facebook is out of scope of this discussion. That said, I am not sure the state should get into the business of organising the web or providing content services; I will point out, however, that it already does some of that through its own websites. It should probably keep itself to this, and also consider providing normal services for people who don't or can't access the internet. (And I would also be ready to argue that Google and Facebook already act as extensions of the state: certainly if Facebook didn't exist, the CIA or the NSA would like to create it at this point. And Google has lucrative business with the US department of defense.)

What does not work So we've seen one thing that could work. Maybe it's too expensive. Maybe the political will isn't there. Maybe it will fail. We don't know yet. But we know what does not work, and it's what we've been doing ever since the internet has gone commercial.

Subsidies The absurd price we pay for data does not actually mean everyone gets high speed internet at home. Large swathes of the Québec countryside don't get broadband at all, and it can be difficult or expensive, even in large urban centers like Montréal, to get high speed internet. That is despite a series of subsidies that all avoided investing in our own infrastructure. We had the "fonds de l'autoroute de l'information" ("information highway fund"; site dead since 2003, archive.org link) and "branchez les familles" ("connecting families"; site dead since 2003, archive.org link) which subsidized the development of a copper network. In 2014, more of the same: the federal government poured hundreds of millions of dollars into a program called Connecting Canadians to connect 280 000 households to "high speed internet". And now, the federal and provincial governments are proudly announcing that "everyone is now connected to high speed internet", after pouring more than 1.1 billion dollars to connect, guess what, another 380 000 homes, right in time for the provincial election. Of course, technically, the deadline won't actually be met until 2023. Québec is a big area to cover, and you can guess what happens next: the telcos threw up their hands and said some areas just can't be connected. (Or they connect their CEO but not the poor folks across the lake.) The story then takes the predictable twist of giving more money out to billionaires, now subsidizing Musk's Starlink system to connect those remote areas. To give a concrete example: a friend who lives about 1000km away from Montréal, 4km from a small village of 2,500 inhabitants, recently got symmetric 100 mbps fiber at home from Telus, thanks to those subsidies. But I can't get that service in Montréal at all, presumably because Telus and Bell colluded to split that market. Bell doesn't provide me with such a service either: they tell me they have "fiber to my neighborhood", and only offer me a 25/10 mbps ADSL service. (There is Vidéotron offering 400mbps, but that's copper cable, again a dead technology, and asymmetric.)

Conclusion Remember Chattanooga? Back in 2010, they funded the development of a fiber network, and now they have deployed a network roughly a thousand times faster than what we have just funded with a billion dollars. In 2010, I was paying Bell Canada 60$/mth for 20mbps and a 125GB cap, and now I'm still (indirectly) paying Bell for roughly the same speed (25mbps). Back then, Bell was throttling their competitors' networks, until they were forced by the CRTC to stop in 2009. Both Bell and Vidéotron still explicitly forbid you from running your own servers at home, and Vidéotron charges prohibitive prices which make it near impossible for resellers to sell uncapped services. Those companies are not spurring innovation: they are blocking it. We have spent all this money for the private sector to build us a private internet, over decades, without any assurance of quality, equity or reliability. And while in some locations ISPs did deploy fiber to the home, they certainly didn't upgrade their entire network to follow suit, and even less allowed resellers to compete on that network. In 10 years, when 100mbps will be laughable, I bet those service providers will again punt the ball into the public courtyard and tell us they don't have the money to upgrade everyone's equipment. We got screwed. It's time to try something new.

Updates There was a discussion about this article on Hacker News which was surprisingly productive. Trigger warning: Hacker News is kind of right-wing, in case you didn't know. Since this article was written, at least two more major acquisitions happened, just in Québec: In the latter case, vMedia was explicitly saying it couldn't grow because of "lack of access to capital". So basically, we have given those companies a billion dollars, and they are now using that very money to buy out their competition. At least we could have given that money to small players to even out the playing field. But this is not how that works at all. Also, in a bizarre twist, an "analyst" believes the acquisition is likely to help Rogers acquire Shaw. Also, since this article was written, the Washington Post published a review of a book bringing similar ideas: Internet for the People: The Fight for Our Digital Future, by Ben Tarnoff, at Verso books. It's short, but even more ambitious than what I am suggesting in this article, arguing that all big tech companies should be broken up and better regulated:
He pulls from Ethan Zuckerman's idea of a web that is "plural in purpose": that just as pool halls, libraries and churches each have different norms, purposes and designs, so too should different places on the internet. To achieve this, Tarnoff wants governments to pass laws that would make the big platforms unprofitable and, in their place, fund small-scale, local experiments in social media design. Instead of having platforms ruled by engagement-maximizing algorithms, Tarnoff imagines public platforms run by local librarians that include content from public media.
(Links mine: the Washington Post obviously prefers not to link to the real web, and instead doesn't link to Zuckerman's site at all and suggests Amazon for the book, in a cynical example.) And in another example of how the private sector has failed us, there was recently a fluke in the AMBER alert system where the entire province was warned about a loose shooter in Saint-Elzéar, except the people in the town, because they have spotty cell phone coverage. In other words, millions of people received a strongly worded, "life-threatening" alert for a city sometimes hours away, except the people most vulnerable to the alert. Not missing a beat, the CAQ party is promising more of the same medicine again and giving more money to telcos to fix the problem, suggesting to spend three billion dollars on private infrastructure.

15 October 2017

Norbert Preining: TeX Live Manager: JSON output

With the development of TLCockpit continuing, I found the need for an easy exchange format between the TeX Live Manager tlmgr and frontend programs like TLCockpit. Thus, I have implemented JSON output for the tlmgr info command. While the format is not 100% stable (I might change some things), I consider it pretty settled. The output of tlmgr info --data json is a JSON array with JSON objects for each package requested (default is to list all).
[ TLPackageObj, TLPackageObj, ... ]
The structure of the JSON object TLPackageObj reflects the internal Perl hash. The keys guaranteed to be present are name (String) and available (Boolean). In case the package is available, there are further keys, sorted by their type. Here is a rather long example showing the output for the package latex, formatted with json_pp and with the list of files and the long description shortened:
[
    {
      "installed" : true,
      "doccontainerchecksum" : "5bdfea6b85c431a0af2abc8f8df160b297ad73f6a324ca88df990f01f24611c9ae80d2f6d12c7b3767308fbe3de3fca3d11664b923ea4080fb13fd056a1d0c3d",
      "docfiles" : [
         "texmf-dist/doc/latex/base/README.txt",
         ....
         "texmf-dist/doc/latex/base/webcomp.pdf"
      ],
      "containersize" : 163892,
      "depends" : [
         "luatex",
         "pdftex",
         "latexconfig",
         "latex-fonts"
      ],
      "runsize" : 414,
      "relocated" : false,
      "doccontainersize" : 12812184,
      "srcsize" : 752,
      "revision" : 43813,
      "srcfiles" : [
         "texmf-dist/source/latex/base/alltt.dtx",
         ....
         "texmf-dist/source/latex/base/utf8ienc.dtx"
      ],
      "category" : "Package",
      "cataloguedata" : {
         "version" : "2017/01/01 PL1",
         "topics" : "format",
         "license" : "lppl1.3",
         "date" : "2017-01-25 23:33:57 +0100"
      },
      "srccontainerchecksum" : "1d145b567cf48d6ee71582a1f329fe5cf002d6259269a71d2e4a69e6e6bd65abeb92461d31d7137f3803503534282bc0c5546e5d2d1aa2604e896e607c53b041",
      "postactions" : [],
      "binsize" : {},
      "longdesc" : "LaTeX is a widely-used macro package for TeX, [...]",
      "srccontainersize" : 516036,
      "containerchecksum" : "af0ac85f89b7620eb7699c8bca6348f8913352c473af1056b7a90f28567d3f3e21d60be1f44e056107766b1dce8d87d367e7f8a82f777d565a2d4597feb24558",
      "executes" : [],
      "binfiles" : {},
      "name" : "latex",
      "catalogue" : null,
      "docsize" : 3799,
      "available" : true,
      "runfiles" : [
         "texmf-dist/makeindex/latex/gglo.ist",
         ...
         "texmf-dist/tex/latex/base/x2enc.dfu"
      ],
      "shortdesc" : "A TeX macro package that defines LaTeX"
    }
]
What is currently not available via tlmgr info and thus also not via the JSON output is access to virtual TeX Live databases with several member databases (multiple repositories). I am thinking about how to incorporate this information. These changes are currently available in the tlcritical repository, but will enter proper TeX Live repositories soon. Using this JSON output I will rewrite the current TLCockpit tlmgr interface to display more complete information.
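As a quick usage sketch (assuming jq is installed; the field names are the ones shown in the example above), a frontend or shell script can pick out individual values from the JSON:
$ tlmgr info --data json latex | jq -r '.[0] | "\(.name) r\(.revision): \(.shortdesc)"'
latex r43813: A TeX macro package that defines LaTeX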

6 August 2016

Robert Edmonds: Cable modems: Arris SB6190 vs. Netgear CM600

Recently I activated new cable ISP service at my home and needed to purchase a cable modem. There were only a few candidate devices that supported at least 24 downstream channels (preferably 32), and did not contain an integrated router or access point. The first modem I tried was the Arris SB6190, which supports 32 downstream channels. It is based on the Intel Puma 6 SoC, and looking at an older release of the SB6190 firmware source reveals that the device runs Linux. This device, running the latest 9.1.93N firmware, goes into a failure mode after several days of uptime which causes it to drop 1-2% of packets. Here is a SmokePing graph that measures latency to my ISP's recursive DNS server, showing the transition into the degraded mode: [SmokePing graph: Arris SB6190, firmware 9.1.93N] It didn't drop packets at random, though. Some traffic would be deterministically dropped, such as the parallel A/AAAA DNS lookups generated by the glibc DNS stub resolver. For instance, in the following tcpdump output:
[1] 17:31:46.989073 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[2] 17:31:46.989102 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[3] 17:31:47.020423 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., [ ]
[4] 17:31:51.993680 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[5] 17:31:52.025138 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., [ ]
[6] 17:31:52.025282 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[7] 17:31:52.056550 IP 75.75.75.75.53 > [My IP].50775: 14987 2/0/0 CNAME comcast6.g.comcast.net., [ ]
Packets [1] and [2] are the A and AAAA queries being initiated in parallel. Note that they both use the same 4-tuple of (Source IP, Destination IP, Source Port, Destination Port), but with different DNS IDs. Packet [3] is the response to packet [1]. The response to packet [2] never arrives, and five seconds later, the glibc stub resolver times out and retries in single-request mode, which performs the A and AAAA queries sequentially. Packets [4] and [5] are the type A query and response, and packets [6] and [7] are the AAAA query and response. The Arris SB6190 running firmware 9.1.93N would consistently interfere with these parallel DNS requests, but only when operating in its degraded mode. It also didn't matter whether glibc was configured to use an IPv4 or IPv6 nameserver, or which nameserver was being used. Power cycling the modem would fix the issue for a few days. My ISP offered to downgrade the firmware on the Arris SB6190 to version 9.1.93K. This firmware version doesn't go into a degraded mode after a few days, but it does exhibit higher latency and more jitter: [SmokePing graph: Arris SB6190, firmware 9.1.93K] It seemed unlikely that Arris would fix the firmware issues in the SB6190 before the end of my 30-day return window, so I returned the SB6190 and purchased a Netgear CM600. This modem appears to be based on the Broadcom BCM3384, and looking at an older release of the CM600 firmware source reveals that the device runs the open source eCos embedded operating system. The Netgear CM600 so far hasn't exhibited any of the issues I found with the Arris SB6190 modem. Here is a SmokePing graph for the CM600, which shows median latency about 1 ms lower than the Arris modem: [SmokePing graph: Netgear CM600] It's not clear which company is to blame for the problems in the Arris modem. Looking at the DOCSIS drivers in the SB6190 firmware source reveals copyright statements from ARRIS Group, Texas Instruments, and Intel. However, I would recommend avoiding cable modems produced by Arris in particular, and cable modems based on the Intel Puma SoC in general.
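One client-side workaround for this class of failure: glibc's stub resolver supports an options single-request line in /etc/resolv.conf, which makes it perform the A and AAAA queries sequentially from the start instead of in parallel. For example:
options single-request
This only masks the symptom, of course; the modem still drops packets.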

27 April 2016

Stig Sandbeck Mathisen: Using LLDP on Linux. What's on the other side?

On any given server, or workstation, knowing what is at the other end of the network cable is often very useful. There's a protocol for that: LLDP. This is a link layer protocol, so it is not routed. Each end transmits information about itself periodically. You can typically see the type of equipment, the server or switch name, and the network port name of the other end, although there are lots of other bits of information available, too. This is often used between switches and routers in a server centre, but it is useful to enable on server hardware as well. There are a few different packages available. I've looked at a few of them, available for the RedHat OS family (Red Hat Enterprise Linux, CentOS, …) as well as the Debian OS family (Debian, Ubuntu, …). (Updated 2016-04-29, added more recent information about lldpd, and gathered the switch output at the end.)

ladvd A simple daemon, with no configuration needed. This runs as a privilege-separated daemon, and has a command line control utility. You invoke it with a list of interfaces as command line arguments to restrict the interfaces it should use. ladvd is not available on RedHat, but is available on Debian. Install the ladvd package, and run ladvdc to query the daemon for information.
root@turbotape:~# ladvdc
Capability Codes:
    r - Repeater, B - Bridge, H - Host, R - Router, S - Switch,
    W - WLAN Access Point, C - DOCSIS Device, T - Telephone, O - Other
Device ID        Local Intf Proto Hold-time Capability Port ID
office1-switch23 eno1       LLDP  98        B          42
Even better, it has output that can be parsed for scripting:
root@turbotape:~# ladvdc -b eno1
INTERFACE_0='eno1'
HOSTNAME_0='office1-switch23'
PORTNAME_0='42'
PORTDESCR_0='42'
PROTOCOL_0='LLDP'
ADDR_INET4_0=''
ADDR_INET6_0=''
ADDR_802_0='00:11:22:33:44:55'
VLAN_ID_0=''
CAPABILITIES_0='B'
TTL_0='120'
HOLDTIME_0='103'
my new favourite :)
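Since the -b output is a list of shell variable assignments, a wrapper script can eval it directly. A minimal sketch (assuming the variable names shown above, and a trusted ladvd daemon):
#!/bin/sh
# Print "<switch> <port>" for the interface given as the first argument.
eval "$(ladvdc -b "$1")"
printf '%s %s\n' "$HOSTNAME_0" "$PORTNAME_0"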

lldpd Another package is lldpd, which is also simple to configure and use. lldpd is not available on RedHat, but it is present on Debian. It features a command line interface, lldpcli, which can show output with different levels of detail, and in different formats, as well as configure the running daemon.
root@turbotape:~# lldpcli show neighbors
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface:    eno1, via: LLDP, RID: 1, Time: 0 day, 00:00:59
  Chassis:
    ChassisID:    mac 00:11:22:33:44:55
    SysName:      office1-switch23
    SysDescr:     ProCurve J9280A Switch 2510G-48, revision Y.11.12, ROM N.10.02 (/sw/code/build/cod(cod11))
    Capability:   Bridge, on
  Port:
    PortID:       local 42
    PortDescr:    42
-------------------------------------------------------------------------------
Among the output formats is json, which is easy to re-use elsewhere.
root@turbotape:~# lldpcli -f json show neighbors
{
  "lldp": {
    "interface": {
      "eno1": {
        "chassis": {
          "office1-switch23": {
            "descr": "ProCurve J9280A Switch 2510G-48, revision Y.11.12, ROM N.10.02 (/sw/code/build/cod(cod11))",
            "id": {
              "type": "mac",
              "value": "00:11:22:33:44:55"
            },
            "capability": {
              "type": "Bridge",
              "enabled": true
            }
          }
        },
        "via": "LLDP",
        "rid": "1",
        "age": "0 day, 00:53:23",
        "port": {
          "descr": "42",
          "id": {
            "type": "local",
            "value": "42"
          }
        }
      }
    }
  }
}
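This pairs well with jq. A rough sketch (note that the neighbour's system name is the key of the chassis object, so it has to be pulled out of the keys):
root@turbotape:~# lldpcli -f json show neighbors | \
    jq -r '.lldp.interface | to_entries[] | "\(.key): \(.value.chassis | keys[0]) port \(.value.port.descr)"'
eno1: office1-switch23 port 42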

lldpad A much more featureful LLDP daemon, available for both the Debian and RedHat OS families. It has lots of features, but is less trivial to set up.

Configure lldp for each interface
#!/bin/sh
find /sys/class/net/ -maxdepth 1 -name 'en*' |
    while read device; do
        basename "$device"
    done |
    while read interface; do
        {
            lldptool set-lldp -i "$interface" adminStatus=rxtx
            for item in sysName portDesc sysDesc sysCap mngAddr; do
                lldptool set-tlv -i "$interface" -V "$item" enableTx=yes |
                    sed -e "s/^/$item /"
            done
        } |
            sed -e "s/^/$interface /"
    done

Show LLDP neighbor information
#!/bin/sh
find /sys/class/net/ -maxdepth 1 -name 'en*' |
    while read device; do
        basename "$device"
    done |
    while read interface; do
        printf "%s\n" "$interface"
        ethtool $interface | grep -q 'Link detected: yes' || {
            echo "  down"
            echo
            continue
        }
        lldptool get-tlv -n -i "$interface" | sed -e "s/^/  /"
        echo
    done
[...]
enp3s0f0
  Chassis ID TLV
    MAC: 01:23:45:67:89:ab
  Port ID TLV
    Local: 588
  Time to Live TLV
    120
  System Name TLV
    site3-row2-rack1
  System Description TLV
    Juniper Networks, Inc. ex2200-48t-4g , version 12.3R12.4 Build date: 2016-01-20 05:03:06 UTC
  System Capabilities TLV
    System capabilities:  Bridge, Router
    Enabled capabilities: Bridge, Router
  Management Address TLV
    IPv4: 10.21.0.40
    Ifindex: 36
    OID: $
  Port Description TLV
    some important server, port 4
  MAC/PHY Configuration Status TLV
    Auto-negotiation supported and enabled
    PMD auto-negotiation capabilities: 0x0001
    MAU type: Unknown [0x0000]
  Link Aggregation TLV
    Aggregation capable
    Currently aggregated
    Aggregated Port ID: 600
  Maximum Frame Size TLV
    9216
  Port VLAN ID TLV
    PVID: 2000
  VLAN Name TLV
    VID 2000: Name bumblebee
  VLAN Name TLV
    VID 2001: Name stumblebee
  VLAN Name TLV
    VID 2002: Name fumblebee
  LLDP-MED Capabilities TLV
    Device Type:  netcon
    Capabilities: LLDP-MED, Network Policy, Location Identification, Extended Power via MDI-PSE
  End of LLDPDU TLV
enp3s0f1
[...]

on the switch side On the switch, it is a bit easier to see what's connected to each interface:

office switch On the switch side, this system looks like:
office1-switch23# show lldp info remote-device
 LLDP Remote Devices Information
  LocalPort   ChassisId                 PortId PortDescr SysName
  --------- + ------------------------- ------ --------- ----------------------
  [...]
  42          22 33 44 55 66 77         eno1   Intel ... turbotape.example.com
  [...]
office1-switch23# show lldp info remote-device 42
 LLDP Remote Device Information Detail
  Local Port   : 42
  ChassisType  : mac-address
  ChassisId    : 00 11 22 33 33 55
  PortType     : interface-name
  PortId       : eno1
  SysName      : turbotape.example.com
  System Descr : Debian GNU/Linux testing (stretch) Linux 4.5.0-1-amd64 #1...
  PortDescr    : Intel Corporation Ethernet Connection I217-LM
  System Capabilities Supported  : bridge, router
  System Capabilities Enabled    : bridge, router
  Remote Management Address
     Type    : ipv4
     Address : 192.0.2.93
     Type    : ipv6
     Address : 20 01 0d b8 00 00 00 00 00 00 00 00 00 00 00 01
     Type    : all802
     Address : 22 33 44 55 66 77

datacenter switch
ssm@site3-row2-rack1> show lldp neighbors
Local Interface    Parent Interface    Chassis Id          Port info          System Name
[...]
ge-0/0/38.0        ae1.0               01:23:45:67:89:58   Interface   2 as enp3s0f0 server.example.com
ge-1/0/38.0        ae1.0               01:23:45:67:89:58   Interface   3 as enp3s0f1 server.example.com
[...]
ssm@site3-row2-rack1> show lldp neighbors interface ge-0/0/38
LLDP Neighbor Information:
Local Information:
Index: 157 Time to live: 120 Time mark: Fri Apr 29 13:00:19 2016 Age: 24 secs
Local Interface    : ge-0/0/38.0
Parent Interface   : ae1.0
Local Port ID      : 588
Ageout Count       : 0
Neighbour Information:
Chassis type       : Mac address
Chassis ID         : 01:23:45:67:89:58
Port type          : Mac address
Port ID            : 01:23:45:67:89:58
Port description   : Interface   2 as enp3s0f0
System name        : server.example.com
System Description : Linux server.example.com 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 4
System capabilities
        Supported  : Station Only
        Enabled    : Station Only
Management Info
        Type              : IPv6
        Address           : 2001:0db8:0000:0000:0000:dead:beef:cafe
        Port ID           : 2
        Subtype           : 2
        Interface Subtype : ifIndex(2)
        OID               : 1.3.6.1.2.1.31.1.1.1.1.2

27 October 2015

Lunar: Reproducible builds: week 26 in Stretch cycle

What happened in the reproducible builds effort this week: Toolchain fixes Mattia Rizzolo created a bug report to continue the discussion on storing cryptographic checksums of the installed .deb in dpkg database. This follows the discussion that happened in June and is a pre-requisite to add checksums to .buildinfo files. Niko Tyni identified why the Vala compiler would generate code in varying order. A better patch than his initial attempt still needs to be written. Packages fixed The following 15 packages became reproducible due to changes in their build dependencies: alt-ergo, approx, bin-prot, caml2html, coinst, dokujclient, libapreq2, mwparserfromhell, ocsigenserver, python-cryptography, python-watchdog, slurm-llnl, tyxml, unison2.40.102, yojson. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: reproducible.debian.net pbuilder has been updated to version 0.219~bpo8+1 on all eight build nodes. (Mattia Rizzolo, h01ger) Packages that FTBFS but for which no open bugs have been recorded are now tested again after 3 days. Likewise for depwait packages. (h01ger) Out of disk situations will not cause IRC notifications anymore. (h01ger) Documentation update Lunar continued to work on writing documentation for the future reproducible-builds.org website. Package reviews 44 reviews have been removed, 81 added and 48 updated this week. Chris West and Chris Lamb identified 70 fail to build from source issues. Misc. h01ger presented the project in Mexico City at the 3er Congreso de Seguridad de la Información, where it became clear that we lack academic papers related to reproducible builds. Bryan has been doing hard work to improve reproducibility for OpenWrt. He wrote a report linking to the patches and test results he published.

17 March 2013

Gregor Herrmann: RC bugs 2013/11

& another week has gone by where I tried to work on RC bugs. here's the overview:

28 June 2012

Sylvain Le Gall: OASIS 0.3.0 release

OASIS is a tool to integrate a configure, build and install system into your OCaml project. It helps to create standard entry points in your build system and allows external tools to analyse your project easily. It is the building brick of OASIS-DB, a CPAN for OCaml. This release took almost 18 months to complete. This is too long, and I will talk in another blog post about how I am trying to improve this right now (esp. using continuous integration). This new release fixes a small bug (1 line) that prevented setup.ml from running with OCaml 4.00. If you have projects that were generated with a former release, consider upgrading to OASIS 0.3.0.

Several big new features come with this release. It now supports the Pack: field for libraries, which allows you to pack your library using -for-pack and so on. We are also compiling .cmxs (dynlink objects for native code) by default, and we publish them in the META file. This feature was implemented in order to get more libraries to provide .cmxs and to help projects like Ocsigen take advantage of this. If you want to get rid of this at configure time, you can use ./configure --override native_dynlink false.

We introduce two new default flags: --enable-tests and --disable-docs for configure. These are implicit flags that define whether we will run Test sections or compile Document sections. They are especially useful to reduce the number of dependencies, because dependencies of Test sections will be excluded by default. We recommend setting Build$: flag(tests) on any Library or Executable sections that are only useful for tests. This allows you to really cut down the number of dependencies.

The last change I want to introduce is about the old setup-dev subcommand, which is now deprecated. It has been replaced by 2 different update schemes. I am pretty excited by this feature, which in fact comes from OASIS users (esp. the Lwt project). The former scheme was to have a big setup.ml that always called the command oasis to update. This was complex and not very useful. We now have 2 modes: dynamic and weak. dynamic allows you to have a small setup.ml and to keep your VCS history clean, but you need to install OASIS. weak needs a big setup.ml but only needs to call the command oasis if someone changes something in _oasis. This mode is targeted at projects that want to be able to check out an OCaml project from VCS without installing OASIS. The diffs generated by weak mode don't pollute the VCS history too much, because most of the time they make sense. For example, when you upgrade your package version number in _oasis, it produces a change of 6 lines where the version number changes in every META, setup.ml file and so on.

This version has been tested on: You can download it here or use your favorite package manager:
Debian (UPDATE: pending upload)
$ apt-get install -t experimental oasis
GODI
$ godi_console perform -build apps-oasis
odb.ml
$ ocaml odb.ml oasis
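To switch between the two new update schemes described above, the mode is chosen when regenerating setup.ml. A minimal sketch, assuming the -setup-update option as I recall the 0.3.0 command line (treat the exact option name as an assumption):
$ oasis setup -setup-update dynamic
$ oasis setup -setup-update weak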
Here is the complete changelog:

EXTREMELY IMPORTANT changes (read this): PACKAGES uploaded to oasis-db will be automatically "derived" before the OCaml 4.00 release (i.e. oUnit v1.1.1 will be regenerated with this new version as oUnit v1.1.1~oasis1). PACKAGES not uploaded to oasis-db need to be regenerated. In order not to break 3rd-party tools that consider a tarball constant, I recommend creating a new version. Thanks to the INRIA OCaml team for synchronizing with us on this point.

Major changes: You can now write the following in your _oasis file:
     ...
     Executable test_exec
       Install: false
       Build$: flag(tests)
       MainIs: testExec.ml
       BuildDepends: oUnit
     
     Test main
       Command: $test_exec
       TestTools: test_exec
     ...
The Run$: flag(tests) is implicit for the section Test main. The default value is false for tests. If all the executables for tests are flagged correctly (Build$: flag(tests)), you'll get rid of the dependency on oUnit. It works the same for documentation, although there the default is true. (Closes: #866)

In order to allow interdependent flags, we transform lazy values back into unit -> string functions. This allows changing a flag value on the command line and updating all the dependent values. (Closes: #827, #938)

It defines different ways to manage the auto-update of setup.ml: The choice between weak and dynamic depends on your needs with regard to VCS and to the presence of oasis. The weak mode allows you to check out the project from VCS and work on it without installing oasis, as long as you don't change the file _oasis. But it clutters your VCS history with changes to the build system each time you change something in _oasis. The dynamic mode gives you no VCS history pollution but makes it mandatory to have the oasis libraries installed.

We now avoid copying executables to their real name. This helps to call ocamlbuild a single time for the whole build, rather than calling it n times (n = number of Executable sections) and copying the resulting executables. This speeds up the build process because ocamlbuild doesn't have to compute/scan dependencies each time. The drawback is that you have to use $foo when you want to call Executable foo, because $foo will be '_build/.../main.byte'.

Options in fields like CCOpt: are now parsed taking quoting into account. For example:
CCOpt: -DEXTERNAL_EXP10 -L/sw/lib "-framework vecLib"
will be parsed correctly and output according to the target OS. In order to ease building oasis itself, we have minimized the number of dependencies: you only need to install ocamlmod, ocamlify and ocaml-data-notation for a standard build without tests. The dependencies on pcre, extlib and ocamlgraph have been dropped; the remaining dependencies are hidden behind the tests flag. OASIS now produces .cmxs files by default and adds them to META. A META file now looks like:
     ...
     archive(byte) = "oasis.cma"
     archive(byte, plugin) = "oasis.cma"
     archive(native) = "oasis.cmxa"
     archive(native, plugin) = "oasis.cmxs"
     ...
This will ultimately help to generate .cmxs automatically for all oasis-enabled projects. We hope that this new feature will improve the use of dynamic linking in OCaml (esp. for projects like Ocsigen); a small loading sketch follows at the end of this post.

Other changes: Thanks to Anil Madhavapeddy, Pierre Chambart, Christophe Troestler, Jeremie Dimino, Ronan Le Hy, Yaron Minsky and Till Varoquaux for their help with this release. Also thanks to all the testers of the numerous release candidates. This was a long work, and each time a tester downloaded oasis it helped me to know that I was working for someone.
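As an illustration of what the shipped .cmxs files enable, here is a minimal, hypothetical sketch of loading such a native plugin with the standard Dynlink library (the file name plugin.cmxs and the program itself are illustrations, not part of the release):
     (* load_plugin.ml -- build with:
        ocamlfind ocamlopt -package dynlink -linkpkg load_plugin.ml -o load_plugin *)
     let () =
       try
         (* load the native plugin; its top-level initializers run on load *)
         Dynlink.loadfile "plugin.cmxs"
       with Dynlink.Error e ->
         prerr_endline (Dynlink.error_message e)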

22 October 2010

Sylvain Le Gall: Compiling pcre-ocaml with Visual Studio 2008

One big change in the recent OASIS v0.2 release is the replacement of Str by Pcre. The big advantage of Pcre is that it can be used in a multi-threaded environment, whereas doing so with Str is not recommended. Since OASIS is used by the Ocsigen part of OASIS-DB, we need to make it work safely with Lwt with multiple users at the same time. Note that we could probably use Str directly with Lwt, because Lwt is not really multi-threaded, but we want to be safe on this point, and Pcre is a very powerful library. However, the OCaml Pcre library depends on an external C library (pcre). This is not a problem on Linux et al., where it is shipped with the OS by default, but on Windows you need to build it yourself. We want to do it using Microsoft Visual Studio 2008, mainly because OCaml was compiled with it -- and it seems the most natural way to build a C library on Windows. As usual, building open-source C libraries with MS Visual Studio is not the most common way to proceed; however, using CMake makes it simple. Here is how we can compile pcre and pcre-ocaml for Windows:

[Screenshot: CMake-GUI with pcre]
[Screenshot: Visual Studio 2008 with pcre]

The best way to test your newly created OCaml pcre library is to try to compile an example from the examples directory of pcre-ocaml itself.

Sylvain Le Gall is an OCaml consultant at OCamlCore SARL
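A rough sketch of the CMake step, assuming an unpacked pcre source tree (the generator name below is CMake's stock name for Visual Studio 2008; the path is a made-up example):
C:\src\pcre> cmake -G "Visual Studio 9 2008" .
Then open the generated solution in Visual Studio 2008, build pcre, and build pcre-ocaml against the resulting library.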

30 March 2009

Daniel Leidert: Network doesn't come up after update to udev 0.140

The recent update to udev 0.140 led to a system without network access for me. The error messages were:
Running 0dns-down to make sure resolv.conf is ok...done.
Setting up networking....
Configuring network interfaces...
ioctl[SIOCGIFFLAGS]: No such device
Could not get interface  ath0  flags
ioctl[SIOCSIWPMKSA]: No such device
ioctl[SIOCSIWMODE]: No such device
Could not configure driver to use managed mode
ioctl[SIOCGIWRANGE]: No such device
ioctl[SIOCGIFINDEX]: No such device
ioctl[SIOCSIWENCODEEXT]: No such device
ioctl[SIOCSIWENCODE]: No such device
ioctl[SIOCSIWENCODEEXT]: No such device
ioctl[SIOCSIWENCODE]: No such device
ioctl[SIOCSIWENCODEEXT]: No such device
ioctl[SIOCSIWENCODE]: No such device
ioctl[SIOCSIWENCODEEXT]: No such device
ioctl[SIOCSIWENCODE]: No such device
ioctl[SIOCSIWAP]: No such device
ioctl[SIOCGIFFLAGS]: No such device
wpa_supplicant: /sbin/wpa_supplicant daemon failed to start
/etc/network/if-pre-up.d/wpasupplicant exited with return code 1
SIOCSIFADDR: No such device
ath0: ERROR while getting interface flags: No such device
SIOCSIFNETMASK: No such device
SIOCSIFBRDADDR: No such device
ath0: ERROR while getting interface flags: No such device
ath0: ERROR while getting interface flags: No such device
Failed to bring up ath0.
done.
It seems the update added rules to /etc/udev/rules.d/70-persistent-net.rules by increasing the device number and applying the last applicable NAME directive instead of the first one. This led to the following file here:
SUBSYSTEM=="net", DRIVERS=="?*", ATTRS address =="XX:XX:XX:XX:XX:a7", ATTRS type =="1", NAME="ath0"
SUBSYSTEM=="net", DRIVERS=="?*", ATTRS address =="XX:XX:XX:XX:XX:18", NAME="eth0"
SUBSYSTEM=="net", DRIVERS=="?*", ATTRS address =="XX:XX:XX:XX:XX:XX", NAME="eth1"
# PCI device 0x10ec:0x8139 (8139too)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR address =="XX:XX:XX:XX:XX:18", ATTR type =="1", KERNEL=="eth*", NAME="eth2"
# PCI device 0x168c:0x0013 (ath_pci)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR address =="XX:XX:XX:XX:XX:a7", ATTR type =="1", KERNEL=="ath*", NAME="ath1"
So the devices created were eth2 and ath1, but /etc/network/interfaces contained entries for eth0 and ath0. So I fixed /etc/udev/rules.d/70-persistent-net.rules and rebooted (a sketch of the repaired rules follows below). <rant>Is it really necessary to break network access with an update???</rant> Update: I forgot this one: #521521.
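For illustration, a plausible repaired version of the two relevant rules (a sketch only: it keeps the masked MAC addresses from above, restores the expected names, and assumes the stale duplicate lines are removed; other entries stay unchanged):
# PCI device 0x10ec:0x8139 (8139too)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="XX:XX:XX:XX:XX:18", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x168c:0x0013 (ath_pci)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="XX:XX:XX:XX:XX:a7", ATTR{type}=="1", KERNEL=="ath*", NAME="ath0"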

15 February 2009

Michael Prokop: Debian GNU/Linux 5.0 codename Lenny - News for sysadmins

Alright, Debian GNU/Linux 5.0, AKA Lenny, has been released. Time for a Debian unstable unfreeze party! 8-) What does the new stable release bring for system administrators? I'll give an overview of what news you might expect when upgrading from Debian GNU/Linux 4.0, codename Etch (released on 8th April 2007), to the current version Debian GNU/Linux 5.0, codename Lenny (released on 14th February 2009). I try to avoid duplicated information, so make sure to read the release announcement and the official release notes for Lenny beforehand.

Noteworthy Changes: Virtualisation: Virtualisation-related new tools: Desktop-oriented packages like virtualbox and qemu are available as well, of course.

Noteworthy Updates: This is a (selective) list of some noteworthy updates:

New packages: Lenny ships over 7000 new packages. Lists of new/removed/replaced packages are available online. I'll name 238 sysadmin-related packages that might be worth a look. (Note: I don't list addon stuff like optional server modules, docs-only and kernel-source related packages. I plan to present some of the following packages in more detail in separate blog entries.)

Further Resources

30 January 2008

Sylvain Le Gall: OCamlMeeting in Paris -- Debian summary

On January 26th 2008, the OCaml community had its first meeting in Europe. There were 5 different talks, including one from me. The meeting was pretty constructive, and I think some great things should come out of it. In this post, I will focus on the Debian maintainer point of view. The meeting was organized by me, with the help of Gabriel Kerneis and Vincent Balat. I wish to thank Mme Turgis from ENST/ParisTech, who made all this possible.

Introduction: I announced the creation of OCamlCore, a SS2L (a French free-software services company). It will provide commercial support for OCaml and related products. It will also provide the community with some resources (including a GForge, a planet, some SCM repositories...). For now, I will open administration to Debian people who wish to help (for root rights) and to the whole community for other tasks (GForge and general administration). For now, I will choose alone who can administrate and who cannot... This is not a very open position, but this server is also for my company.

Xavier Leroy gave us a nice talk that let us see some future directions. But above all, Xavier talked about the fact that there is a lack of manpower at INRIA and invited the community to build tools and other things (including a CPAN-like distribution) to make the language more widespread. He also stated that a lot of things will stay in place in the name of backward compatibility.

Ocsigen: Vincent Balat gave us a nice talk about Ocsigen, a new framework for programming web applications. This program is already in Debian, and it is at the root of the Debian discussion about moving .cm[ao] files into libxxx-ocaml packages. This new framework allows programming web applications using continuations, which should ease many aspects. It also features static type checking, including static verification of the generated HTML pages.

GODI: I was impressed to see G. Stolpmann coming to the meeting. Unfortunately, I could not follow the whole talk, since I had some phone calls to make in order to book the restaurant. From what I have seen, it seems that GODI is growing in terms of manpower. They have some issues with OCaml 3.10, because of camlp4 and some packages. At the end of the talk, he was pushing for a more general use of GODIVA. In particular, it requires a very uniform way of building software. It should be the foundation of a build standard for OCaml software. In my humble opinion, I don't agree on this point. Basically, his proposal was based on configure/make all, which is too C-style a way of doing things. It is not bad, but it just excludes ocamlbuild-enabled software.

ocamlbuild: Nicolas Pouillard gave an introduction on how to use ocamlbuild. I discovered how it works, since I hadn't had time to work on this topic. It seems a good thing. I hope to be able to try it at some point. The conclusion was about modifying ocamlbuild to allow the use of multiple plugins. For now, the plugin is a separate file which is compiled first and then loaded, as a special case. Nicolas is looking for a fully OCaml way to handle multiple plugins.

OCaml in Debian: I spoke about different aspects of Debian packaging. After a short introduction and some history of OCaml in Debian, I came to some conclusions/problems: Talking with Xavier gave me some additional information: At the end of the presentation, there were some questions. I discovered that many people are unaware of the ABI problem with OCaml and its libraries. They seemed surprised that it requires regular rebuilds of our packages.

I also had a discussion about the fact that the Debian binary distribution doesn't fit developers' needs. It seems that developers prefer to have the source of the libraries they compile, and also the most recent release. But at the same time, people avoid using GODI, which also has the problem of needing to wait for a particular library to be released... After some discussion, I pointed out that some companies prefer to live in a frozen world, which Debian stable can provide. Living in a frozen world helps company developers to have the same environment from the beginning to the end of a project -- which can last years. It also helps to avoid build-environment de-synchronization between company developers. There was also a question about the interaction between installed OCaml packages and GODI software. I could not answer it, except with a "patches are welcome".

OCaml on a JVM using OCaml-Java: Xavier Clerc explained to us how he achieved running a full OCaml environment on a JVM. This work was pretty impressive: he was able to compile the OCaml toplevel and provide it as an applet. This is something amazing (and working). He also stated that this should allow bindings with Java graphical user interfaces from OCaml. I was pretty busy with a phone call (to a restaurant for the dinner), so I did not attend the whole talk. Anyway, the project is promising, but it needs to recompile every library using its compiler. I think this is not suited for Debian for now.

Workshop: The workshop was shorter than it should have been, due to accumulated schedule slip during the day. However, it was pretty interesting, because everybody could talk almost freely. The initial discussion was about the "Unicode situation". This was a question about Unicode in Unison, but there is no real solution to it. For the reader's information, Unison has problems with filenames containing UTF-8 characters, in particular when syncing two directories which sit on two file systems with a different encoding on each. Even if the file can be represented on both file systems, the comparison (between UTF-8 and ISO8859-15) fails, because the comparison is byte-by-byte (a small sketch of this follows at the end of this post). We discussed further the idea of OSR (OCaml Standard Recommendation), which was briefly discussed in the morning. I think David Teller (who also did a great part of the IRC transcript) was very keen on this idea. I invited Nicolas Pouillard to discuss the three following subjects. About camlp4 documentation, Nicolas told people about what he had set up. About ocamlfind and ocamlbuild, he went back to the multiple-plugin problems. About ocamlbuild and GODI, Gerd, Nicolas and Xavier tried to build a list of what metadata can be shared. This includes the list of files to be installed (PLIST in GODI). Concerning the "feedback from the Gallium", Xavier explained that he had done this in the morning (which is true). I think Xavier implicitly agreed that if this kind of event happens again, he can do another talk about the future of OCaml. Concerning the Summer of Code, I just reminded people that they can write descriptions of Google Summer of Code projects in the cocan wiki. There was another reminder about the Jane St Capital summer of code. Points concerning common interfaces raised the discussion about OSR again. The organization of next year's events didn't raise particular interest. I think people would be interested to have one next year, if something comes out of this meeting.
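As a small illustration of the Unicode point above (a sketch; the two literals are just the letter é in two encodings, nothing from Unison itself):
     (* the same é, once as UTF-8 bytes, once as ISO8859-15 *)
     let utf8 = "\xc3\xa9"
     let latin = "\xe9"
     let () =
       (* a byte-by-byte comparison sees two different strings *)
       Printf.printf "equal: %b\n" (String.equal utf8 latin)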
I also have a discussion about the fact that Debian binary distribution doesn't fit developers need. It seems that developers prefer to have the source of libraries they compile and also the most recent release. But at the same time, people avoid using GODI, which also have the problem of needing to wait for a particular library to be released... After some discussion, I point that some companies prefer to live in a frozen world, which Debian stable can provide. Living in a frozen world helps company developers to have the same environment from the beginning to the end of a project - which can last years. It also help to avoid build environment de-synchronization between company developers. There was also a question about interaction between installed OCaml package and GODI software. I cannot answer it, expect with a "patches are welcome". OCaml on a JVM using OCaml-Java Xavier Clerc explained us how he has achieved running a full OCaml environment on a JVM. This work was pretty impressive, because he was able to compile OCaml top level and provide it as an applet. This is something amazing (and working). He stated also that this should allow to do some binding with java graphical user interface from OCaml. I was pretty busy with a phone call (to a restaurant for the dinner), so i don't have attend the whole conference. Anyway, the project is promising, but it needs to recompile every library using its compiler. I think this is not suited for Debian for now. Workshop The workshop was shorter than it should have been. This is due to accumulated time shift during the day. However, it was pretty interesting, because everybody can talked almost freely. The initial discussion was about the "Unicode situation". This was a question about Unicode in Unison, but there is no real solution about it. For reader information, unison has problem with filename containing UTF-8 char. In particular when syncing two directories which are on two file system with a different encoding on each. Even if the file can be represented on the two file systems, the comparison (between UTF-8 and ISO8859-15) fails. This is because the comparison is a byte to byte comparison. We discuss further the idea of OSR (OCaml standard recommendation) which was briefly discussed in the morning. I think David Teller (who have also done a great part of the IRC retranscription) was very keen on this idea. I invited Nicolas Pouillard to discuss the three following subjects. About camlp4 documentation, Nicolas told people about the he set up. About ocamlfind and ocamlbuild, he went back to the multiple plugin problems. About ocamlbuild and GODI, Gerd, Nicolas and Xavier tried to build a list of what metadata can be shared. This includes list of files to be installed (PLIST in GODI). Concerning the "feedback from the Gallium", Xavier explains that he has done this in the morning (which is true). I think that Xavier implicitly agreed on the fact that if this kind of event happens again, he can do another talk about future of OCaml. Concerning Summer of Code, I just remind that people can write description of Google Summer of Code project in the cocan wiki. Another one remind about Jane St Capital summer of code. Points concerning common interfaces raise the discussion about OSR. The organization of next year events doesn't raise particular interest. I think people would be interested to have one next year, if something comes out of this meeting.