Since switching to mainline Linux (from 3.14), I only got an 18-bit (6 bits per channel) display output, down from full 24-bit.
I decided to look through the device tree files, and the display is only listed as 6 bits there, even for the Chrome OS kernel, which I found weird.
With the 5.3 kernel I got "24-bit" colour again. I had also applied a kernel patch implementing gamma lut support, so I decided to test redshift, and found... this:
To get more securely random strings, you can run:
head -c24 /dev/random | base64
You might need to run
in the background until the kernel can get enough entropy.
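If you want to see how the pool is doing, the kernel exposes an estimate (Linux-specific path):

```shell
# estimated bits of entropy currently available to /dev/random
cat /proc/sys/kernel/random/entropy_avail
```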
$ uname -a
Linux alarm 5.3.0-ARCH #4 SMP PREEMPT Tue Sep 17 16:36:05 NZST 2019 armv7l GNU/Linux
It took three or four compiles before I got it working properly, but I'm now running mainline Linux, less than 36 hours after the release was tagged in git.
Booting feels a lot faster, as I'm now using lz4 kernel compression instead of xz.
The Rockchip DRM driver now has gamma support, which is nice.
The only problem so far is that X now runs with vsync, so scrolling doesn't feel as smooth.
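For reference, the lz4 switch is just kernel configuration - roughly this (option names from mainline Kconfig, assuming lz4 support is enabled for your architecture):

```
CONFIG_KERNEL_LZ4=y
# CONFIG_KERNEL_XZ is not set
```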
I usually use /opt for mounting drives when I already have something mounted on /mnt.
/media is probably more appropriate, but it's two more characters, so it takes too long to type.
There is also LessPass, which uses the new Bluetooth ultra-long range feature to sync your password database over distances of up to thousands of kilometres, even if you have wifi off and cover your computer with tinfoil.
One of the killer features of keepassxc is that you can see the characters of your password being typed in one by one, thanks to the virtual keyboard feature of X.
You can also configure delays, useful for the stupid Google login window which doesn't show the password field until you enter your email.
keepassxc also has ssh agent integration, so once you've set it up you don't have to type in ssh passwords when your database is unlocked.
I'd be interested in seeing whether people with different machines get different results from mine, so why not grab the old gzip from http://ftp.gnu.org/gnu/gzip/gzip-1.2.4.tar.gz and see how it compares to your system gzip.
Don't forget to set CFLAGS when running ./configure - the default is just -O1.
When using a modern compiler with an aggressively optimized build (-Ofast -march=native -ffast-math -funroll-loops -fomit-frame-pointer), there is no difference (i.e. within margin of error) in performance between the current gzip-1.10 and gzip-1.2.4 from 25 years earlier.
Modern gzip actually executes around 6% more instructions than 1993 gzip did, but it seems the bottleneck is elsewhere, so it ends up at the same speed.
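A sketch of the comparison (the build steps are the usual autoconf dance; the old-gzip path below is an assumption, and the test input here is just repeated text, not a scientific corpus):

```shell
# build the 1993 gzip with the same flags first (hypothetical paths):
#   tar xzf gzip-1.2.4.tar.gz && cd gzip-1.2.4
#   CFLAGS="-Ofast -march=native -ffast-math -funroll-loops -fomit-frame-pointer" ./configure
#   make
# then time each binary on the same compressible input:
yes 'the quick brown fox jumps over the lazy dog' | head -c 16000000 > /tmp/bench.txt
start=$(date +%s%N)
gzip -9 -c /tmp/bench.txt > /tmp/bench.txt.gz               # system gzip
echo "system gzip: $(( ($(date +%s%N) - start) / 1000000 )) ms"
# ./gzip-1.2.4/gzip -9 -c /tmp/bench.txt > /dev/null        # 1993 gzip, once built
```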
@klaatu the reason why groff(1) outputs x^Hx sequences in titles is to make the characters bold - with paper terminals, it would print each character twice, depositing twice the ink.
less(1) detects this and automatically uses the terminal control character for bold (or for colours, if you have that configured).
groff also uses _^Hx to underline text.
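You can fake the same overstrike by hand - a small demo (cat -v makes the backspace bytes visible; pipe the raw file to less to see it rendered):

```shell
# "X backspace X" = printed twice = bold; "_ backspace X" = underlined
printf 'B\bBo\bol\bld\bd and _\bu_\bn_\bd_\be_\br\n' > /tmp/overstrike.txt
cat -v /tmp/overstrike.txt     # the ^H characters are the backspaces
# less /tmp/overstrike.txt     # less shows this as bold / underlined text
```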
It looks like the zramctl man page should have made it clearer that the block devices it creates are compressed.
Chrome OS uses zram as swap, and it gives a pretty good compression ratio - generally around 50% from my testing, so believe those scammers who tell you how to "Download a RAM stick cheap for free" (provided you don't already have the module).
zram can be used for more than just swap - it can also be used for testing filesystems or lvm in RAM, or as an alternative to tmpfs.
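A sketch of setting it up as swap (needs root and the zram module; the 512M size, lz4 algorithm, and swap priority are my assumptions, not anything Chrome OS does):

```shell
# bail out gracefully where this can't work
[ "$(id -u)" -eq 0 ] || { echo "needs root"; exit 0; }
modprobe zram 2>/dev/null || { echo "no zram module here"; exit 0; }
# allocate a 512M lz4-compressed device and swap onto it
dev=$(zramctl --find --size 512M --algorithm lz4) || exit 0
mkswap "$dev"
swapon -p 100 "$dev"
zramctl        # compare the DATA and COMPR columns to see the ratio
```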
> gWO 13x18: "I don't remember where Scientific Linux is ... I know it's still active."
Are you still in that hole you dug in the ground?
Also fun is when you mount root with commit=60, but haven't got an IO scheduler such as BFQ set up, so other disk accesses take lightyears (oops, I meant epoch wraparounds).
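Checking (and, as root, switching) the scheduler is just sysfs pokes - a sketch; the device name in the commented line is an assumption:

```shell
# list each block device's IO schedulers; the active one is in [brackets]
for q in /sys/block/*/queue/scheduler; do
  [ -r "$q" ] || continue
  printf '%s: %s\n' "$q" "$(cat "$q")"
done
# switching needs root, e.g.:
#   echo bfq > /sys/block/sda/queue/scheduler
```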
I've also taken to mounting the pacman cache and /usr/share/locale as tmpfs, and I update ~16 packages at a time, as I don't have much free disk space.
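The tmpfs mounts would look something like this in /etc/fstab (the size limits are assumptions - tune to taste):

```
tmpfs  /var/cache/pacman/pkg  tmpfs  defaults,size=2G   0  0
tmpfs  /usr/share/locale      tmpfs  defaults,size=64M  0  0
```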
I'll be switching to Slackware as soon as qt5 appears in -current...
By the way, I really hate Arch updates. If you happen to close something while an update is going on, you'll have to wait a few hours until the library versions match up again.
That's really fun when you can't even start X (or wayland, for that matter).
The worst in terms of ABI compatibility (i.e. symlinking between versions doesn't work) is icu, so I've started keeping old versions of libicu in /opt. :)
Or, mlt could have just gone and used png, which would have got them down to 6.4M. I have a feeling they *do* do that; it's just that Arch didn't package it properly.
The strength of using png shows why domain-specific compression algorithms are important - no-one sensible would try to use HuffYUV on audio or Vorbis for an image...
Actually, an older version of mlt on Arch was 38M, which is still a lot, but heaps less than 258M.
Someone should probably create a bug report for Arch.
For this specific case, the block size doesn't matter too much unless it's very small (such as the 128k with xz).
When compressing something like source code, the results are obviously going to be very different: the lumas are just simple gradients and all the files are the same size, whereas source code has a lot of symbols, which are shared between files, and the files are ~~usually~~ hopefully quite short.
An experiment with compression:
MLT has 256M (!!) of uncompressed luma wipes, such as the image on this toot.
Compressing with gz yields 54M for squashfs and 52M for a tarball (zip gives a similar size).
With xz, it compresses to 21M for squashfs but 6.8M for txz. Individually compressing files with xz gives 7.2M.
However, when increasing the squashfs block size to 1M from 128K, xz drops to 8.7M but the result for gzip doesn't change much.
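Why do the lumas squash so well? A quick sketch on synthetic data - a 512x512 diagonal ramp standing in for a real wipe (an assumption, not MLT's actual data):

```shell
# generate a gradient "luma" frame and compare gzip vs xz on it
python3 - <<'EOF' > /tmp/luma.raw
import sys
# 512x512 8-bit diagonal ramp
sys.stdout.buffer.write(bytes(((x + y) >> 2) & 0xff
                              for y in range(512) for x in range(512)))
EOF
gzip -9 -c /tmp/luma.raw > /tmp/luma.raw.gz
xz -9 -c /tmp/luma.raw > /tmp/luma.raw.xz 2>/dev/null || true
ls -l /tmp/luma.raw /tmp/luma.raw.gz /tmp/luma.raw.xz 2>/dev/null
```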
In the shownotes for 13x12, you forgot the / to close a strong tag, which made everything below it bold:
> from the <strong>a<strong> package set