sreview-web. It's possible to install the four packages on different machines, but let's not go into too much detail there, yet.
sreview-config --action=dump. This will show you the current configuration of SReview. If you want to change something, either change it in /etc/sreview/config.pm, or just run
sreview-config --set=variable=value --action=update.
sreview-user -d --action=create -u <your email>. This will create an administrator user in the sreview database.
http://localhost:8080/, and test whether you can log on.
state_actions configuration parameter (e.g., by way of
sreview-config --action=update --set=state_actions=... or by editing /etc/sreview/config.pm).
state_actions entry for notification so that it sends out a notification (e.g., through an IRC bot or an email address, or something along those lines). Alternatively, enable the "anonreviews" option, so that the overview page has links to your talk.
parse_re configuration parameters of SReview. The first should contain a filesystem glob that will find your raw assets; the second should parse the filename into room, year, month, day, hour, minute, and second components. Look at the defaults of those options for examples (or just use those, and store your files as
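The filename-parsing idea above can be sketched in a few lines of Ruby. This is purely illustrative: the regex, the named captures, and the directory layout below are assumptions for the example, not SReview's actual defaults.

```ruby
# Hypothetical parse_re-style regex: pull room and date/time fields out of
# a raw-asset filename with named captures. The filename layout shown here
# is an example, not SReview's actual default.
PARSE_RE = %r{(?<room>[^/]+)/(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/(?<hour>\d{2})-(?<minute>\d{2})-(?<second>\d{2})}

m = PARSE_RE.match("room1/2017-10-31/16-30-00.ts")
puts "room=#{m[:room]} date=#{m[:year]}-#{m[:month]}-#{m[:day]}"
# prints: room=room1 date=2017-10-31
```

Any scheme works as long as each captured field can be mapped to the room and the start time of the recording.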
tests/comparators/dtb: compatibility with version 1.4.5. (Closes: #880279)
binwalk: improve names in output of "internal" members. #877525
ps: ps2ascii > 9.21 now varies on timezone, so skip this test for now.
dtb: only parse the version number, not any "-dirty" suffix.
debian/watch: Use HTTPS URI.
utils/file: Diff container metadata centrally. This fixes a last remaining bug in fuzzy-matching across containers. (Closes: #797759)
Standards-Version to 4.1.1, no changes needed.
wpa_supplicant release. I first published a patchset based on the patches shipped after the embargo for the oldstable/jessie release. After feedback from the list, I also built packages for i386 and ARM. I have also reviewed the WPA protocol to make sure I understood the implications of the changes required to backport the patches. For example, I removed the patches touching the WNM sleep mode code as that was introduced only in the 2.0 release. Chunks of code regarding state tracking were also not backported as they are part of the state tracking code introduced later, in 3ff3323. Finally, I still have concerns about the nonce setup in patch #5. In the last chunk, you'll notice
peer->tk is reset, to force the negotiation of a new
TK. The other approach I considered was to backport 1380fcbd9f ("TDLS: Do not modify RNonce for an TPK M1 frame with same INonce") but I figured I would play it safe and not introduce further variations. I should note that I share Matthew Green's observations regarding the opacity of the protocol. Normally, network protocols are freely available and security researchers like me can easily review them. In this case, I would have needed to read the opaque 802.11i-2004 pdf which is behind a TOS wall at the IEEE. I ended up reading up on the IEEE_802.11i-2004 Wikipedia article which gives a simpler view of the protocol. But it's a real problem to see such critical protocols developed behind closed doors like this. At Guido's suggestion, I sent the final patch upstream explaining the concerns I had with the patch. I have not, at the time of writing, received any response from upstream about this, unfortunately. I uploaded the fixed packages as DLA 1150-1 on October 31st.
dch --lts flag in Debian bug #762715 which is currently pending review
We build our computers the way we build our cities--over time, without a plan, on top of ruins. - Ellen Ullman
wayback plugin that saves pages on the Wayback machine for eternal archival. The
archive plugin can also similarly save pages to the local filesystem. I also added bash completion, expanded unit tests and documentation, fixed default file paths and a bunch of bugs, and refactored the code. Finally, I also started using two external Python libraries instead of rolling my own code: the pyxdg and requests-file libraries, the latter of which I packaged in Debian (and fixed a bug in their test suite). The program is working pretty well for me. The only thing I feel is really missing now is a retry/fail mechanism. Right now, it's a little brittle: any network hiccup will yield an error email, which is readable to me but could be confusing to a new user. Strangely enough, I am particularly having trouble with (local!) DNS resolution that I need to look into, but that is probably unrelated to the software itself. Thankfully, the user can disable those with
WARNINGs. Furthermore, some plugins still have some rough edges. For example, the Transmission integration would probably work better as a distinct plugin instead of a simple
exec call, because when it adds new torrents, the output is totally cryptic. That plugin could also leverage more feed parameters to save different files in different locations depending on the feed titles, something that would be hard to do safely with the
exec plugin now. I am keeping a steady flow of releases. I wish there was a way to see how effective I am at reaching out with this project, but unfortunately GitLab doesn't provide usage statistics... And I have received only a few comments on IRC about the project, so maybe I need to reach out more, like it says in the fine manual. It always feels strange to have to promote your project like it's some new bubbly soap... Next steps for the project are a final review of the API and releasing a production-ready 1.0.0. I am also thinking of making a small screencast to show the basic capabilities of the software, maybe with asciinema's upcoming audio support?
-- remove needless blockquote wrapper around some tables
--
-- haskell newbie tips:
--
-- @ is the "at-pattern", allows us to define both a name for the
-- construct and inspect the contents at once
--
-- {} is the "empty record pattern": it basically means "match the
-- constructor but ignore the args"
cleanBlock (BlockQuote t@[Table {}]) = t
<blockquote> elements needlessly wrapping a
<table>. I can't specify the
Table type on its own, because then I couldn't address the table as a whole, only its parts. I could reconstruct the whole table bit by bit, but it wasn't as clean. The other pattern was how to, at last, address multiple string elements, which was difficult because Pandoc treats spaces specially:
cleanBlock (Plain (Strong (Str "Notifications":Space:Str "for":Space:Str "all":Space:Str "responses":_):_)) = []
-- the "GAByline" div has a date, use it to generate the ikiwiki dates
--
-- this is distinct from cleanBlock because we do not want to have to
-- deal with time there: it is only here we need it, and we need to
-- pass it in here because we do not want to mess with IO (time is I/O
-- in haskell) all across the function hierarchy
cleanDates :: ZonedTime -> Block -> [Block]
-- this mouthful is just the way the data comes in from
-- LWN/Pandoc. there could be a cleaner way to represent this,
-- possibly with a record, but this is complicated and obscure enough.
cleanDates time (Div (_, [cls], _)
                 [Para [Str month, Space, Str day, Space, Str year], Para _])
  | cls == "GAByline" =
      ikiwikiRawInline (ikiwikiMetaField "date"
        (iso8601Format (parseTimeOrError True defaultTimeLocale "%Y-%B-%e,"
          (year ++ "-" ++ month ++ "-" ++ day) :: ZonedTime)))
      ++ ikiwikiRawInline (ikiwikiMetaField "updated" (iso8601Format time))
      ++ [Para []]
-- other elements just pass through
cleanDates time x = [x]
IO in this case. Turns out that getting the current time is
IO in Haskell: indeed, it's not a "pure" function that will always return the same thing. But this means that I would have had to change the signature of all the functions that touched time to include
IO. I eventually moved the time initialization up into
main so that I had only one
IO function, and passed that timestamp downwards as a simple argument. That way I could keep the rest of the code clean, which seems to be an acceptable pattern. I would of course be happy to get feedback from my Haskell readers (if any) to see how to improve that code. I am always eager to learn.
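The pattern described (do the impure lookup once at the top level, then pass the value down as a plain argument) translates to any language. Here is a small Ruby sketch of the same idea; the function names are invented for the illustration:

```ruby
require 'time'

# Impure: reads the clock. Done exactly once, at the top level.
def now_once
  Time.now.utc
end

# "Pure" with respect to time: it receives the timestamp as a plain
# argument, so the same inputs always give the same output, and no
# function below this one needs to know about the clock.
def clean_dates(time, block)
  "date: #{time.iso8601} | #{block}"
end

t = now_once
puts clean_dates(t, "some block")
puts clean_dates(t, "another block")  # same timestamp, no second clock read
```

Haskell enforces this split through the type system; elsewhere it is merely good style.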
There is no [web extension] only XUL! - Inside joke
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.
buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.
Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page. For questions or comments use the issue tracker off the GitHub repo.
Changes in tint version 0.0.2 (2017-10-29)
- Set a few defaults for a decent-looking skeleton and template: font, fontsize, margins, left-justify closing (#3)
- Blockquote display is now a default as well (#4).
- Updated skeleton.Rmd and vignette source accordingly
- Documented new default options (#5 and #6).
- Links are now by default printed as footnotes (#9).
.buildinfo files that can be generated with a new
dpkg-genbuildinfo --always-include-kernel option. (#873937)
.class file to be stale if it shares the same timestamp as the
.clj. We thus adjust the timestamps of the
.clj to always be younger. (#877418)
dh_strip_nondeterminism: Log which handler processed a file. (#876140)
bin/strip-nondeterminism: Print a warning in
--verbose mode if no canonical time was specified.
rustc on Jenkins for the
health_check: Include the running kernel version when reporting multiple kernels installed in
chrome-container, since it will not be reachable via
127.0.0.1 as is (rightfully) assumed by default. If we want Warden authentication stubbing to work (as we do, since our application uses Devise) and transactional fixtures as well (e.g. rails handling database cleanup between tests without the database_cleaner gem), we also need to ensure that the application server is started by the tests and that the tests actually run against that server. Otherwise we might run into problems.

Getting the containers ready

Assuming you already have a container setup (and are using docker-compose like we do), there is not that much to change on the docker front. Basically you need to add a new service called
chrome, point it to an appropriate image, and add a link to it in your existing
web-container. I've decided to use standalone-chrome for the browser part, for which there are docker images provided by the selenium project (they also have images for other browsers). Kudos for that.
...
services:
  chrome:
    image: selenium/standalone-chrome
  web:
    links:
      - chrome

The link ensures that the chrome instance is available before we run the tests and that the
web-container is able to resolve the name of this container. Unfortunately this is not true the other way round, so we need some magic in our test code to find out the ip-address of the
web-container. More on this later. Other than that, you probably want to configure a volume for you to be able to access the screenshots, which get saved to
config/puma.rb, which most likely collides with the necessary settings. You can ensure these settings yourself or use rails from branch
5-1-stable, which includes this change. I decided for the latter and pinned my
Gemfile to the then-current commit.

Register a driver for the remote chrome

To register the required driver, you'll have to add some lines to your
if ENV['DOCKER']
  selenium_url = "http://chrome:4444/wd/hub"

  Capybara.register_driver :selenium_remote do |app|
    Capybara::Selenium::Driver.new(app,
      :url => selenium_url,
      :browser => :remote,
      desired_capabilities: :chrome)
  end
end

Note that I added those lines conditionally (since I still want to be able to use a local chrome via chromedriver) if an environment variable
DOCKER is set. We defined that environment variable in our
Dockerfile, and thus you might need to adapt this to your case. Also note that the selenium_url is hard-coded. You could very well take a different approach, e.g. using an externally specified SELENIUM_URL, but ultimately the requirement is that the driver needs to know that the chrome instance is running on host
RSpec.configure do |config|
  ...
  config.before(:each, type: :system, js: true) do
    if ENV['DOCKER']
      driven_by :selenium_remote
      ip = Socket.ip_address_list.detect { |addr| addr.ipv4_private? }.ip_address
      host! "http://#{ip}:#{Capybara.server_port}"
    else
      driven_by :headless_chrome
    end
  end
end

Note the part with the ip-address: it tries to find an IPv4 private address for the
web-container (the container running the tests) to ensure the
chrome-container uses this address to access the application. The
Capybara.server_port is important here, since it will correspond to the puma instance launched by the tests. That heuristic (first ipv4 private address) works for us at the moment, but it might not work for you. It is basically a workaround to the fact that I couldn't get
web resolvable for the chrome container, which may be fixable on the docker side, but I was too lazy to further investigate that. If you change it: just make sure the
host! method uses a URI pointing to an address of the
web-container that is reachable to the
chrome-container.

Define tests with js: true

Last but certainly not least, you need actual tests of the required type and with
js: true. This can be achieved by creating test files starting like this:
RSpec.feature "Foobar", type: :system, js: true do

Since the new rspec-style system tests are based on the feature specs that were around previously, the rest of the tests are written exactly as described for feature specs.

Run the tests

To run the tests, a commandline like the following should do:

docker-compose run web rspec

It won't make a big noise about running the tests against chrome, unless something fails. In that case you'll see a message telling you where the screenshot has been placed.

Troubleshooting

Below I add some hints about problems I've seen while configuring this:

Test failing, screenshot shows login screen

In that case puma might be configured wrongly or you are not using transactional fixtures. See the hints above about the rails version to use, which also include some pointers to helpful explanations. Note that rspec-rails by default does not output the puma startup output as it clutters the tests. For debugging purposes it might be helpful to change that by adding the following line to your tests:
ActionDispatch::SystemTesting::Server.silence_puma = false

Error message: Unable to find chromedriver

This indicates that your driver is not configured properly, because the default for system tests is to be
driven_by selenium, which tries to spawn its own chrome instance and is suitable for non-dockerized tests. Check if your tests are marked as js: true (if you followed the instructions above) and that you properly added the before-hook to your rspec configuration.

Collisions with VCR

If you happen to have tests that make use of the vcr gem, you might see it complaining about not knowing what to do with the requests between the driver and the chrome instance. You can fix this by telling VCR to ignore those requests, adding a line where you configure VCR:
VCR.configure do |config|
  # required so we don't collide with capybara tests
  config.ignore_hosts 'chrome'
  ...
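As an aside, the "first private IPv4 address" heuristic used in the before-hook earlier can be checked standalone with nothing but the Ruby standard library (no Capybara or docker needed); this sketch also guards against there being no private address at all, unlike the one-liner in the hook:

```ruby
require 'socket'

# Same heuristic as in the rspec before-hook: pick the first private IPv4
# address of this host. Depending on the network setup there may be none,
# so avoid calling .ip_address on nil.
addr = Socket.ip_address_list.detect { |a| a.ipv4_private? }
puts(addr ? addr.ip_address : "no private IPv4 address found")
```

Running this inside the web-container shows exactly which address the chrome-container will be told to connect back to.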
| |Planet site|Admin contact|
|All projects|Free-RTC Planet (http://planet.freertc.org)|contact firstname.lastname@example.org|
|XMPP|Planet Jabber (http://planet.jabber.org)|contact email@example.com|
|SIP|Planet SIP (http://planet.sip5060.net)|contact firstname.lastname@example.org|
|SIP (Español)|Planet SIP-es (http://planet.sip5060.net/es/)|contact email@example.com|
True and a hash of the definition of what
True is can both be treated the same by Annah's compiler.
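As a rough analogy (in Ruby, not Annah syntax), a Church-style boolean is just a function that chooses between two branches, so "True" is fully described by its definition, and a content hash of that definition can serve as its name. Everything below is invented for the illustration:

```ruby
require 'digest'

# The definition of "True" as text, so we can both evaluate it and hash it.
# A Church-encoded boolean picks one of two branches.
CHURCH_TRUE = "->(t, f) { t }"

church_true = eval(CHURCH_TRUE)          # the behaviour
name        = Digest::SHA256.hexdigest(CHURCH_TRUE)  # a name derived purely from the definition

puts church_true.call("yes", "no")  # prints: yes
puts name[0, 12]                    # a content-addressed handle for "True"
```

The point is that the name carries no extra information: anyone holding the definition can recompute the hash, which is what lets a compiler treat the two interchangeably.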
False. Continue this until I've built up enough Annah code to write some almost useful programs. Annah can't do any IO on its own (though it can model IO similarly to how Haskell does), so for programs to be actually useful, there needs to be Scuttlebutt client support. The way typing works in Annah, a program's type can be expressed as a Scuttlebutt link. So a Scuttlebutt client that wants to run Annah programs of a particular type can pick out programs that link to that type, and will know what type of data the program consumes and produces. Here are a few ideas of what could be built, with fairly simple client-side support for different types of Annah programs...
Dashboard, and display its output like a Scuttlebutt message, in a dashboard window. The dashboard message gets updated whenever other Scuttlebutt messages come in. The Annah program picks out the messages it's interested in, and generates the dashboard message. So, send a message updating your boat's position, and everyone sees it update on the map. Send a message with updated weather forecasts as they're received, and everyone can see the storm developing. Send another message updating a waypoint to avoid the storm, and steady as you go... The coders, meanwhile, probably tweak their dashboard's code every day. As they add git-ssb repos, they make the dashboard display an overview of their bugs. They get CI systems hooked in and feeding messages to Scuttlebutt, and make the dashboard go green or red. They make the dashboard A-B test itself to pick the right shade of red. And so on... The dashboard program is stored in Scuttlebutt so everyone is on the same page, and the most recent version of it posted by a team member gets used. (Just have the old version of the program notice when there's a newer version, and run that one..) (Also could be used in disaster response scenarios, where the data and visualization tools get built up on the fly in response to local needs, and are shared peer-to-peer in areas without internet.)
Filtered Message. It can leave a message unchanged, or filter it out, or perhaps minimize its display. I publish the Annah program on my feed, and tell my Scuttlebutt client to filter all messages through it before displaying them to me. I published the program in my Scuttlebutt feed, and so my friends can use it too. They can build other filtering functions for other stuff (such as an excess of orange in photos), and integrate my frog filter into their filter program by simply composing the two. If I like their filter, I can switch my client to using it. Or not. Filtering is thus subjective, like Scuttlebutt, and the subjectivity is expressed by picking the filter you want to use, or developing a better one.
Edit which gets posted to the user's feed. Rendering the page is just a matter of finding the
Edit messages for it from people who are allowed to edit it, and combining them. Anyone can fork a wiki page by posting an
Edit to their feed. And can then post a smart link to their fork of the page. And anyone can merge other forks into their wiki page (this posts a control message that makes the Annah program implementing the wiki accept those forks'
Edit messages). Or grant other users permission to edit the wiki page (another control message). Or grant other users permissions to grant other users permissions. There are lots of different ways you might want your wiki to work. No one wiki implementation, but lots of Annah programs. Others can interact with your wiki using the program you picked, or fork it and even switch the program used. Subjectivity again.
Game, and generates a tab with a list of available games. The players of a particular game all experience the same game interface, because the code for it is part of their shared Scuttlebutt message pool, and the code to use gets agreed on at the start of a game. To play a game, the Scuttlebutt client runs the Annah program, which generates a description of the current contents of the game board. So, for chess, use Annah to define a
ChessMove data type, and the Annah program takes the feeds of the two players, looks for messages containing a
ChessMove, and builds up a description of the chess board. As well as the pieces on the game board, the game board description includes Annah functions that get called when the user moves a game piece. That generates a new
ChessMove which gets recorded in the user's Scuttlebutt feed. This could support a wide variety of board games. If you don't mind the possibility that your opponent might cheat by peeking at the random seed, even games involving things like random card shuffles and dice rolls could be built. Also there can be games like Core Wars where the gamers themselves write Annah programs to run inside the game. Variants of games can be developed by modifying and reusing game programs. For example, timed chess is just the chess program with an added check on move time, and a time clock display.