2018-11-09

Getting GNU Make 3.81 compiling again on Ubuntu 18.04

I have to use GNU Make 3.81 with a lot of my software, because it uses my Binodeps and Jardeps libraries, which manifest a bug introduced in Make 3.82 (#48643). [Update 2021-11-15: Looks like it's not really a bug, as the behaviour has been present in 3.75, 3.79.1, 3.80 and 3.82+. I still think it is something of a design error, but it is evidently established behaviour in both implementation and documentation (under ought-to-exist). Someone has proposed a patch which allows both old and new behaviour; maybe it will be accepted. My own patch would be too disruptive.] On a fresh system, I usually fetch make-3.81.tar.bz2, unpack it, build it, and install it in /usr/local:

cd /tmp
wget 'https://ftp.gnu.org/gnu/make/make-3.81.tar.bz2'
tar xjf make-3.81.tar.bz2
cd make-3.81
./configure
make
sudo make install

This stopped working recently; the make step now fails at the final link:

gcc  -g -O2   -o make  ar.o arscan.o commands.o default.o dir.o expand.o file.o function.o getopt.o getopt1.o implicit.o job.o main.o misc.o read.o remake.o remote-stub.o rule.o signame.o strcache.o variable.o version.o vpath.o hash.o glob/libglob.a  
glob/libglob.a(glob.o): In function `glob_in_dir':
/tmp/make-3.81/glob/glob.c:1361: undefined reference to `__alloca'
/tmp/make-3.81/glob/glob.c:1336: undefined reference to `__alloca'
/tmp/make-3.81/glob/glob.c:1277: undefined reference to `__alloca'
/tmp/make-3.81/glob/glob.c:1250: undefined reference to `__alloca'
glob/libglob.a(glob.o): In function `glob':
/tmp/make-3.81/glob/glob.c:575: undefined reference to `__alloca'
glob/libglob.a(glob.o):/tmp/make-3.81/glob/glob.c:726: more undefined references to `__alloca' follow
collect2: error: ld returned 1 exit status

This is likely a result of moving from Ubuntu 16.04 to 18.04. The issue is discussed in the GNU Make thread “undefined reference to `__alloca'”, where these lines from configure.ac are highlighted (the comparison on the last line is the culprit):

#define GLOB_INTERFACE_VERSION 1
#if !defined _LIBC && defined __GNU_LIBRARY__ && __GNU_LIBRARY__ > 1
# include <gnu-versions.h>
# if _GNU_GLOB_INTERFACE_VERSION == GLOB_INTERFACE_VERSION

The recommended fix is to change the comparison from == to >=. There is no configure.ac in the tar file, but there are similar lines in configure and configure.in (which, I presume, are generated from configure.ac, which itself probably only exists in the source repository, and is not routinely packaged up with the tarballs). Changing the one in configure.in seems to do the trick.
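
Something like this should apply the change and rebuild (a sketch; the sed pattern assumes the comparison appears verbatim in both configure and configure.in, and edits both for good measure):

cd /tmp/make-3.81
sed -i -e 's/_GNU_GLOB_INTERFACE_VERSION == GLOB_INTERFACE_VERSION/_GNU_GLOB_INTERFACE_VERSION >= GLOB_INTERFACE_VERSION/' \
  configure configure.in
./configure
make
sudo make install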

2018-09-10

Issues mounting MTP on Kubuntu

I've had difficulty getting Kubuntu to mount a Samsung Galaxy S8 reliably. Using the Device Notifier gets as far as showing the directory structure and thumbnails, but the MTP process dies if you try to read a file properly. Also, the phone doesn't ask the user whether it should be accessed via USB until after the first mount attempt. If you say “Allow”, it withdraws its current configuration (thereby invalidating the first mount), and re-offers it (thereby causing the Device Notifier to pop up again, and requiring the user to open another window). Perhaps it's a clash between USB and Android requirements: (say) the phone must respond to a mount-triggered USB request at once, but Android also has to wait for user authorization, and has no way to asynchronously inform the host of new files appearing on an existing mount? To get anywhere, I've had to abandon the Device Notifier, install jmtpfs on the host, and run it manually, and twice. I've also had to enable Developer Options on the phone(?!).
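
For the record, the manual procedure is roughly this (a sketch; it assumes jmtpfs picks up the only MTP device when no -device option is given, and the mount point is arbitrary):

sudo apt-get install jmtpfs
mkdir -p ~/phone
jmtpfs ~/phone        # first mount; this triggers the permission prompt on the phone
fusermount -u ~/phone # give up on the first mount once it starts returning errors
# Tap “Allow” on the phone, wait for it to re-offer its configuration, then:
jmtpfs ~/phone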

Now I'm trying to write an auto-mounting script for a headless machine, so that the latest photos and videos I've recorded on my devices can be automatically moved off the device simply by plugging it in. The files will later be dropped into an ingest process to make them ready for presentation over DLNA. I use this to watch for USB devices being plugged and unplugged:

$ inotifywait -m -r /dev/bus/usb -e CREATE -e DELETE
/dev/bus/usb/001 CREATE 010
/dev/bus/usb/001 DELETE 010
/dev/bus/usb/001 CREATE 011
/dev/bus/usb/001 DELETE 011

CREATE and DELETE events specify the bus number and device number (e.g., 001:010) when the phone is plugged in or unplugged. The output of lsusb -v -s 001:010 provides the vendor id and serial number of the device, and whether an MTP interface is provided, so events for non-MTP devices can be ignored.
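
A rough way to pull those details out of the lsusb output (a sketch; the field names are as printed by lsusb -v, and matching on ‘MTP’ is an assumption that may need adjusting for other devices):

desc=$(lsusb -v -s 001:010 2>/dev/null)
vendid=$(printf '%s\n' "$desc" | awk '/idVendor/ { print $2; exit }')
serialno=$(printf '%s\n' "$desc" | awk '/iSerial/ { print $3; exit }')
printf '%s\n' "$desc" | grep -qi 'MTP' || exit 0   # not an MTP device; ignore the event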

On plugging in, the CREATE event is received. The phone lights up, but doesn't yet ask the user if the host has permission to access its files. I ensure a mount point exists, and run jmtpfs on it, specifying the device id:

mkdir -p "/var/mtp/$vendid-$serialno"
jmtpfs -device=001,010 "/var/mtp/$vendid-$serialno"

This triggers the phone to ask for authorization from the user. Although the response is still pending, the mount appears to succeed, so I proceed to scan the mount point for interesting files with find. All attempts to scan or access fail with I/O errors, so there's nothing to do but unmount with:

fusermount -u "/var/mtp/$vendid-$serialno"

Now I tap “Allow” on the phone, and I get a DELETE for 001:010, immediately followed by a CREATE for 001:011. The device id has changed, but the vendor id and serial number are the same, so the same mount point is used, and the device mounts without error as before. This time, a scan of the files succeeds, and unmounting can take place when they have been processed.

So, the trick seems to be:

  • Expect failure and assume a retry will occur. (If it doesn't, it obviously wasn't that important.)
  • Use the vendor id and serial number to avoid treating the retry as a new device. (You don't actually need to remember that there was an earlier failure; just make sure that your action in the second cycle tries to do exactly what it tried to do before.)
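
Putting that together, the core of the script looks something like this (a sketch, not the final thing; the $vendid/$serialno extraction and the MTP test are as above, and the find pattern and the /srv/ingest destination are placeholders):

inotifywait -m -r /dev/bus/usb -e CREATE |
while read -r dir event devnum; do
    busnum="${dir##*/}"
    desc=$(lsusb -v -s "$busnum:$devnum" 2>/dev/null) || continue
    printf '%s\n' "$desc" | grep -qi 'MTP' || continue
    vendid=$(printf '%s\n' "$desc" | awk '/idVendor/ { print $2; exit }')
    serialno=$(printf '%s\n' "$desc" | awk '/iSerial/ { print $3; exit }')
    mnt="/var/mtp/$vendid-$serialno"
    mkdir -p "$mnt"
    if jmtpfs "-device=$busnum,$devnum" "$mnt"; then
        # Fails with I/O errors on the first cycle; succeeds on the second,
        # after the user has tapped “Allow” and the device has re-enumerated.
        find "$mnt" -name '*.jpg' -print0 | xargs -0 -r mv -t /srv/ingest/ || true
        fusermount -u "$mnt"
    fi
done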

2018-09-09

Two logical interfaces on one physical, on Ubuntu 18.04 without Netplan

If my ISP-provided home gateway allowed DNS aliases to be configured, I'd get it to map foo.home to bar.home, a headless server. foo.home is meant to be present in my home network and in my relatives', to identify a host providing write access at each site to a library of photos, videos and music that are synchronized between sites. The home gateway has no such aliasing feature, so I've done it by adding an interface on the bar.home host. The new interface looks like a different host to the gateway, so it can have a different name. It happens to get a different IP too.

I could have achieved largely the same with the spare wireless interface, but why use up airwaves to travel 1 foot between a static host and the access point? I could have bought a USB Ethernet dongle, but I did it without any extra hardware or using up a socket on the gateway by creating a virtual interface faux0 piggybacked on the physical wired interface enp3s0. Here's what I did on Ubuntu Server 18.04.

From Netplan to Ifupdown

Ubuntu 18.04 uses Netplan by default. Its configuration files match /etc/netplan/*.yaml. Older Ubuntus use ifup and ifdown which read configuration from /etc/network/interfaces, and ifup -a is run at boot to bring up all marked interfaces. I'd hoped to configure Netplan to set up two logical interfaces on one physical one, with different MAC addresses, and each making DHCP requests with different names, but it doesn't seem to have any way to do that. According to Netplan documentation, installing the package ifupdown is sufficient to disable Netplan:

sudo apt-get install ifupdown

Now you need configuration to make ifupdown perform Netplan's duties:

# In /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet dhcp

enp3s0 is the name of this host's sole wired Ethernet device. Yours might have a different name, perhaps the traditional eth0. You can list all interface names with:

ip link show

Just to make sure, I also renamed 01-netcfg.yaml to 01-netcfg.yaml-disabled. That's the only file I found in /etc/netplan/, so that really should render it inert, as Netplan doesn't modify interfaces it does not match in its configuration.
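
That is, something like this (the filename will differ if your installation generated a different default):

sudo mv /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml-disabled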

Things that didn't work

I also investigated removing the package netplan.io, but was told that that would also remove ubuntu-minimal. I suspected that might be a bad idea. There's also a package netplan, which also provides the /usr/sbin/netplan binary, but it was not installed.

Creating the second interface

With ifupdown now responsible for interface configuration at boot, define the new interface:

# In /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet dhcp

auto faux0
iface faux0 inet dhcp
pre-up ip link add faux0 link enp3s0 address XX:XX:XX:XX:XX:XX type macvlan
pre-up /sbin/sysctl -w net.ipv6.conf.faux0.autoconf=0
post-down ip link delete faux0

I named the new, virtual interface faux0. It's created with an ip link command just before the interface comes up, and similarly deleted just after being taken down, using the pre-up and post-down directives.

The pre-up /sbin/sysctl is not essential, but disables SLAAC on the interface, which is appropriate for virtual interfaces. Don't know whether I'll need it, but the interface seemed to be accumulating a lot of IPv6 addresses, so I'll try it and see.
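
To check whether addresses are still accumulating, the ones currently on the interface can be listed with:

ip -6 addr show dev faux0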

The new interface has a distinct MAC address XX:XX:XX:XX:XX:XX, specified as it is created. I've borrowed one from a device I know will not be seen on my home network, but there's probably a better strategy, something that a virtualization system employs, perhaps. It's not something I've looked into yet. Maybe someone will explain in a comment, because I get a lot of those. The interfaces file format also has a hwaddress setting, but it seemed to have no effect.

The new interface is configured to request a DHCP lease with the name foo:

# In /etc/dhcp/dhclient.conf
interface "faux0" {
  send host-name "foo";
  send dhcp-client-identifier 1:XX:XX:XX:XX:XX:XX;
}

I've thrown in a dhcp-client-identifier setting, but I'm not sure how vital it is. It seems you can use any string (with quotes if necessary), and it's just to stop the gateway from thinking the two DHCP clients are the same, which leads to both interfaces coming up under the same name in the gateway's web interface, making it less clear what you're port-forwarding to. However, that could have been caused in my case by bad data cached in the gateway, flushed out by leaving the server off while deleting the entries in the gateway. I'm going to leave the setting in for now, as it seems harmless. I explicitly set an identifier for the main interface too, for completeness.
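
For the main interface, that's just another block of the same shape (a sketch; the identifier value here is arbitrary, so long as it differs from the other interface's):

# In /etc/dhcp/dhclient.conf
interface "enp3s0" {
  send dhcp-client-identifier "bar";
}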

Things that didn't work

Setting the hostname for the interface with a hostname directive in /etc/network/interfaces didn't work because dhclient doesn't recognize it. Hence, it is set in dhclient's own configuration.

Setting the MAC address with a hwaddress directive also didn't work.

ARP flux

ARP flux can be a problem. Both interfaces can respond to ARP requests for either of their IPs. My home gateway then detects that both IPs map to the same MAC, and therefore to the same hostname, so both names end up resolving to the same IP. The other address, though it gets properly assigned to the right interface, never gets used. Functionally, this is fine, and actually meets the goal of having DNS aliases. However, it messes up the rendition and editing of port-forwarding rules in the gateway. If you have a rule forwarding to the disused MAC, its IP has no name, so the IP is displayed as the destination, not the hostname. It's also impossible to select that IP as a destination, because you can only select by name on this particular gateway.

To fix this, it's possible to stop the interfaces from responding to ARP requests on each other's behalf, and the following seems to be the right combination of settings to avoid one of the interfaces going dead (according to this serverfault article):

# In /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.all.rp_filter=2

You can test these temporarily with the likes of:

sudo sysctl -w net.ipv4.conf.all.arp_ignore=1
sudo sysctl -w net.ipv4.conf.all.arp_announce=2
sudo sysctl -w net.ipv4.conf.all.rp_filter=2

Things that didn't work

Not setting rp_filter results in one of the interfaces being unable to receive traffic, effectively leaving it dead.


That should be it. Rebooting should put that into effect, but without rebooting, this should be enough (from the machine's console, not remotely!):

sudo ifdown enp3s0
sudo ifup enp3s0
sudo ifup faux0
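
Both interfaces should then hold addresses of their own, which you can check with:

ip addr show enp3s0
ip addr show faux0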

In summary:

  • Install ifupdown.
  • Remove or rename Netplan files matching /etc/netplan/*.yaml to disable Netplan.
  • Create entries in /etc/network/interfaces to take over Netplan's duties, and augment them to set up the extra interface.
  • Tell dhclient to use foo instead of the machine's hostname.
  • Take steps to prevent ARP flux.

Happy now?


[Edited to include notes on ARP flux.]

2018-08-18

JDK9 doclet API frustration

The new Javadoc doclet API promises a better view of Javadoc comments than before, one consistent and integrated with other source-related tools. I recently decided that my old doclet (“ssdoc”) based on the old API was becoming unmaintainable, and that I should start writing afresh against the new API (“Polydoclot” at the same location).

One way that the new API helps is that HTML tags and entity/character references in Javadoc comments are distinctly parsed along with Javadoc tags, which is important if your doclet is generating something other than HTML. If you were writing XHTML, you'd have to recognize empty HTML tags, and infer the implicit closing of (say) <p> by a <div>, so that you could meet the strict requirement of XHTML that all elements are properly closed. For LaTeX output, references like &amp; would first have to be decoded into & before being re-escaped as \&.

So, that's a big improvement. However, I've found a few faults (at least, as I deem them) in the new API/implementation:

  1. It does not resolve understood HTML entity/character references, even though the new API obviates retaining them in their original form. (The old API did not have this option.)
  2. It does not resolve context-sensitive signatures in {@link}, {@linkplain}, {@value} and @see tags any more. (It used to!)
  3. It does not recursively parse the content of unknown in-line tags. (It used to!)
  4. Unknown in-line tags have their own class UnknownInlineTagTree, instead of simply being of the supertype InlineTagTree. Similarly, unknown block tags have their own class UnknownBlockTagTree, instead of simply being of the supertype BlockTagTree. This causes problems when tags defined in future JDKs are supplied to doclets compiled against older APIs.
  5. By now, there ought to be a formal way of determining how to link to elements within a Javadoc installation. (It keeps changing, and pinning it down would be too restrictive for alternative doclets.)

Here are those points in detail.

Lack of HTML reference resolution

The new API parses Javadoc source looking for Javadoc tags, HTML tags, and HTML entity/character references, and has distinct classes to represent each of these three groups. Since HTML tags are represented distinctly from plain text by StartElementTree and EndElementTree, HTML references no longer need to remain escaped, and could just appear as the resolved character in a TextTree. The only times that can't happen are when the referenced entity is not recognized, or when it maps to a character not expressible in a Java string. Otherwise, why not just resolve them away? Whether you're generating HTML or something else, the escaping is only required within the source. The resolution has to be done whatever the output, and it is the same whatever the output.

Lack of signature resolution

The old API modelled {@link}, {@linkplain}, {@value} and @see tags with the SeeTag class. Javadoc would parse (say) {@link Service#close()}, work out that Service referred to (say) org.example.Service based on imports, on nested class declarations of the file containing the {@link}, or on the enclosing package, pick the zero-argument method called close from it, and then provide references to the modelled method through SeeTag.referencedMember().

In the new API, {@link}/{@linkplain}, {@value} and @see tags are modelled with LinkTree, ValueTree and SeeTree respectively. The first two each provide a ReferenceTree directly, and SeeTree provides one as the first element of its content, as it's meant to cope with other kinds of references. In turn, ReferenceTree provides just a flat string taken unchanged from the tag. This requires the doclet author to write some 300 lines of code to meet this contract:

/**
 * Resolve a signature in a given element context.
 *
 * @param context the element whose documentation
 * provided the signature
 *
 * @param signature the flat, unresolved signature,
 * as provided by ReferenceTree.getSignature()
 *
 * @return the corresponding element, or null if
 * not found
 */
Element resolveSignature(Element context, String signature);

I imagine this design decision is based on not wanting the Javadoc tool to do things that are doclet-specific. But how else should {@link} be interpreted? The output might be different between (say) HTML and LaTeX, but it still fundamentally refers to the same program element, independently of how the doclet will choose to use it!

Lack of recursive parsing of in-line tag content

In the old API, if an unknown in-line tag was encountered, it would be modelled as a plain (unspecialized) Tag, but its content would be parsed as a sequence of inner tags, accessible through inlineTags(). In the new API, the content is just a flat string! Yet UnknownInlineTagTree.getContent() returns a list of documentation tree nodes, implying that the content should have been recursively parsed. Instead, it returns a list of exactly one TextTree. This requires an explicit parsing routine that reparses arbitrary text according to Javadoc rules, and the only way I could find to do that was to spoof a FileObject with the content wrapped in <body>.

Again, this looks like a design decision to keep the Javadoc tool from doing something doclet-specific, but Javadoc already has to impose some basic structure on the content, i.e., braces of nested tags have to match up, so it can't be left as free-format for the doclet. And, if Javadoc is going so far as to parse the braces, it might as well finish the job, especially since &#123; and &#125; will be needed to escape any braces to be passed literally to the doclet, which means &amp; will also be needed. Then, the documenter shouldn't have to remember which characters need to be escaped based on context (especially if the doclet doesn't recognize a tag), and the doclet author shouldn't have to re-escape & or piece the parsed components back together just so that the rest can be re-interpreted as Javadoc+HTML again.

An alternative might be for the doclet to be able to declare which tags it recognizes, which ones should have their content parsed, etc. A method declareTags(TagTypes tts) on Doclet could be invoked at a sufficiently early stage to collect that information. It would be an opportunity to specify argument syntax in general too, as you might want to define {@link}-like tags that take an element reference as an argument, for example. However, that forces the documenter to be over-conscious of whether an extension tag will be recognized.

Special classes for unknown tags

So, there's a class BlockTagTree, the base type for all block tags. It also has a subtype UnknownBlockTagTree, which adds a method to get the parsed content of the block tag. What if a previously unknown block tag @foo starts being recognized by a new Javadoc implementation and API? You'd have a new FooTagTree class extending BlockTagTree, but now the object representing the tag can't go to the same places as it did when it was unknown. Sure, the visitor type probably has a new method on it to accept the new type, but if the doclet was compiled against the old API, it cannot override that, and it won't go through visitUnknownBlockTag(), because it's the wrong type. Fortunately, the doclet can specify the most recent version of Java (and Javadoc, implicitly?) it recognizes, allowing Javadoc to deliberately fail to recognize the new tag. Does it do that for block tags? Not sure yet.

It doesn't do that for in-line tags! JDK10 introduces a {@summary} in-line tag to be used to explicitly delimit the “first sentence” of an element's description, when application of the default rules (“Look for the first dot and whitespace.”) leads to the wrong result. It also defines a SummaryTree class to represent this. However, even though my doclet's highest language version is declared as 9, the {@summary} tag doesn't come through as UnknownInlineTagTree, so it is ignored, and the most important content of the documentation goes missing. My doclet is compiled against 9, so SummaryTree is not available, so the doclet has no option to provide a special visitor for that case. If I compile against 10, it won't be runnable against 9, because SummaryTree will be unavailable at runtime.

If UnknownInlineTagTree were to be abolished, with InlineTagTree subsuming its functions, a JDK10 default implementation of visitSummary(...) (which no JDK9 doclet can override) could call visitUnknownInlineTag(...) (which would now take an InlineTagTree instead of UnknownInlineTagTree), and some sensible default action could be taken, leading to some future-proofing for doclet implementations.

(This ties in with the generic, recursive parsing of in-line tags. The method getContent() is on UnknownInlineTagTree, but moving it to InlineTagTree kind-of implies that you unconditionally parse all tags' content, whether the tag type is known or not.)

Not distinguishing between block and in-line tags

Now that JDK10 recognizes the in-line {@summary}, it tramples on my own @summary, even though it's a block tag. These are syntactically distinguishable!

Lack of mechanism to derive URI for element documentation

The original Javadoc mapped methods to simple fragment identifiers, so foo(String,int) became #foo(java.lang.String, int). Later versions of Javadoc changed the scheme to avoid brackets and spaces, possibly to make it more compatible with (say) the more limited XML fragment-identifier syntax. It also used to erase parameter types, but later versions do not, and varargs are no longer flattened into arrays. (And wtf? Brackets are back in 10!) This makes linking to an installation generated by a different doclet awkward.

By now, there ought to be a formal way of determining how to link into a Javadoc installation without having to be the doclet that created it. For both the old and new versions of my doclet, I came up with the following. The doclet should generate (say) doc-properties.xml alongside package-list or element-list. This would be the XML representation of a Properties object, a property of which describes how to mechanically generate links to the documentation of specific elements, relative to the documentation's base address. Another doclet, told to -link to such an installation, would look up doc-properties.xml (in the same way it must already look up package-list/element-list), extract a well-known property, and use its value in a MacroFormatter. This would automatically tell it how to link within the site, while independently using its own scheme, which it can express to other doclets through the same mechanism. The format string would be arcane, e.g.:

{?PACKAGE:{${PACKAGE}:\\.:/}{?CLASS:/{${CLASS}:\\.:\\$}{?FIELD:-field-{FIELD}:{?EXEC:-{?CONSTR:constr:method-{EXEC}}{@PARAMETER:I:/{?PARAMETER.{I}.DIMS:{PARAMETER.{I}.DIMS}:0}{${PARAMETER.{I}}:\\.:\\$}}}}:/package-summary}:{${MODULE}:\\.:\\$}-module}

…but it's only meant to be machine-readable.

I chose XML as it obviates charset issues. Simply serve as application/xml. A Properties object leaves room for expansion, and you could probably deprecate package-list/element-list altogether by incorporating their information into the same doc-properties.xml, although retaining the simpler format could still be useful for interfacing with other languages.

Summary

Please, authors of javadoc:

  • Specify contractually that the documentation author shall write literal text, HTML element tags, HTML references, and Javadoc in-line tags (recursively containing such structured content) in the bodies and block-tag content of Javadoc comments, regardless of the documentation output format. Javadoc shall supply literal text, HTML element tags, unrecognized HTML references, and Javadoc in-line tags to the doclet, regardless of the documentation output format.
  • Resolve HTML references into their corresponding unescaped text, if possible, and merge with adjacent literal text.
  • Specify that unrecognized in-line tags should be interpreted as if only their content existed.
  • If you're going to make the effort of recognizing @see, {@link} and {@value} tags, bother to resolve the signatures within them to Elements too.
  • Either uniformly parse all tags' content recursively, or introduce a means for the doclet to declare tags whose content should be recursively parsed. Failing that, at least expose the routine to do the parsing directly, rather than forcing the doclet author to draw such a routine out of the API's own rectum.
  • Introduce a means for a doclet to declare tags whose arguments should be resolved as element references.
  • Move the methods of UnknownBlockTagTree and UnknownInlineTagTree to BlockTagTree and InlineTagTree respectively, and deprecate Unknown*TagTree.
  • Devise and specify a technique for expressing how to link with documentation elements, something that can be statically served with the documentation just like package-list already is.

Fixing SDDM scale on 4K screens

I'm running Kubuntu 18.04 on a 4K screen*, and everything is tiny. I can fix the desktop when I'm logged in by scaling the display in the “Display and Monitor” settings. This doesn't affect the display manager's screen before you log in, though. As a note to myself if I have to do this again, I modified /usr/share/sddm/scripts/Xsetup, adding this to the end:

xrandr --output eDP-1-1 --fbmm 346x194

That file is obviously for SDDM only. Other display managers might have a similar script in a different location.

The string eDP-1-1 and the screen's physical size are given by xrandr:

$ xrandr --query | grep ' connected'
eDP-1-1 connected primary 3840x2160+0+0 (normal left inverted right x axis y axis) 346mm x 194mm

I suspect that the reported dimensions might only be accurate after you've applied scaling in the desktop.

*(Why did I get a 4K screen? Twenty years ago, I might actually have been able to see the difference…)

2018-03-21

Effective defaults for equals and hashCode in Java?

So is it not possible to do this:

package java.lang;

public interface RootInterface {
  default boolean equals(Object other) {
    return this == other;
  }

  default int hashCode() {
    return System.identityHashCode(this);
  }
}

Then interpret all interfaces that don't extend anything as implicitly extending RootInterface? Then remove equals and hashCode from java.lang.Object, and get it to implement RootInterface?

package java.lang;

public class Object implements RootInterface {
  ... // no hashCode or equals
}

Result: Interfaces can provide effective defaults for equals and hashCode? Nothing else breaks (except that RootInterface might be better off in a package not implicitly imported)?

This round tuit was brought to you by avoiding real work.


Update: It's possibly a bad idea for interfaces not to implicitly extend Object, as <?> and <? extends Object> then wouldn't be able to match any interface type, even though you could be sure the underlying object was certainly an Object.

2018-01-02

EU Cookie Law dumbness

I've wanted to say something about this for a long time, but never got a round tuit.

The “EU Cookie Law” is supposed to give website visitors the right to refuse the use of cookies. The way this seems to be interpreted is that sites that use cookies must place an intrusive warning over their content for new visitors, advising them that cookies are in use, possibly offering some cookie settings and a policy for the site, and generally obtaining consent to use cookies. After some explicit or implicit action by the visitor, the warning goes away, and that particular visitor is never bothered with them again.

But there's a problem. The site remembers that the visitor has seen the warning by using a cookie! This means that you cannot use the site without using a cookie!

And it's all so pointless. Visitors already have the ability to refuse the use of cookies by configuring their browsers. Granted, not everyone is aware of this or knows how, and browsers' configuration capabilities may vary, but it's a browser problem.

The worst part is that the cookie law prevents this browser problem being solved in the browser. If you turn cookies off, the site can't remember that you've already been warned, and always puts up the warning, often obscuring essential parts of the content.

Here's a site that seems to explain the Cookie Law, but also looks like it offers cookie compliance services (despite its .org suffix): The Cookie Law Explained, which says: “The Cookie Law is a piece of privacy legislation that requires websites to obtain consent from visitors to store or retrieve any information on a computer or any other web connected device, like a smartphone or tablet.”


Here are some more details, updated 2022-04-02.

Exacerbations

There are several variations to the way cookie consent is obtained, and these can make the problem worse:

  • The cookie consent form often pops up over the page content, and sometimes prevents scrolling, making the content inaccessible until the form is submitted.

    (I suspect the law requires the consent request to be ‘prominent’, and no site wants to risk being regarded as less than that. A visitor is likely more motivated to click it away as soon as possible too, the more intrusive it is.)

  • The consent form often dazzles with hundreds of options. Many sites will fortunately show all consent turned off (where possible) by default, but some don't. Most sites display the ‘Consent to all’ submission button much more prominently than the ‘Save current options’ button. Few have a ‘Reject all’ button, and those that do are misleading anyway, since a cookie will be used to record the lack of consent.

  • JavaScript is often required to submit the consent form, so the user has to whitelist the site for JavaScript before he has had an opportunity to check the content, and judge whether it's worth the risk.

  • When the consent rejection cookie expires, you go through it all again. I dare say, sites are not motivated to renew it automatically.

Alternative solution

A better solution would be to allow visitors to exploit the fact that not retaining a cookie is sufficient to implement lack of consent, and then it's a matter of having browser functionality that lets the user choose which cookies to retain. The law should work more like this:

  1. As with the current law, require sites to classify their cookies by purpose. Cookie consent pop-ups often indicate that some of the site's cookies are essential for the functioning of the site, some are for performance, and some for marketing; there might be other classes, such as function enhancement. These broad classifications must have already been deemed good determinants for whether to retain a cookie, so they should continue in the new law.

  2. Require sites to attribute their cookies according to purpose classification. For example, if it's a performance cookie, set an attribute such as cookie-name=cookie-value; Compliance=http://cookie.law.eu/performance. A site is then legally (or at least enforceably, or reputationally) required to ensure that the cookie is not used for other purposes. The purpose of a cookie is now available and machine-readable in its delivery.

This approach has the following benefits:

  1. Browsers can offer (say) whitelisting of cookies based on site and cookie purpose. When visiting a new site, the user is assured that no new cookies will be stored, unless the site is making an enforceable declaration that they will only be used for the declared purposes, and only if those purposes are whitelisted. Cookies that do not follow the attribution convention will be deemed to have unknown purpose, and can be automatically discarded.

    No pop-ups are required, because the site is not required to obtain consent. The browser simply refuses to give it by not storing the cookie. Notification of cookie policy can just be a discreet link.

    No JavaScript is required, because no pop-up is required.

  2. If a site is suspected of misusing a cookie, there must already be a way under the current law to investigate it and enforce the rules (or the law has no teeth!). Use the same mechanism here. The only difference is that the purpose of a cookie that is under investigation is embedded in its delivery, rather than in some separate policy declaration made by the site.

    This, of course, is a mechanism to be used rarely. The threat of its use should ensure compliance, and underpins the assurance that the visitor has about cookie use.


Note on EU membership and Brexit

I am not a Brexiteer. Brexit was dumb, is no real solution to anything, and has probably committed the UK to self-destruction. Being able to replace the EU Cookie Law is barely a Brexit benefit, and it could have been done while in the EU by persuading MEPs to vote on it. Even if the UK unilaterally changes it now, it hardly has the clout by itself to enforce it.