If those cookie notices that pop up on many websites were annoying you before, I have bad news for you: they are only increasing in number, and the resolution needed to clear them up is still a while away. In mid-October the Irish Data Protection Commissioner started enforcing the law in this area after a six-month grace period, which has led to a significant increase in the number of websites displaying these notices.
Although many people blame this on GDPR, the rules are mostly detailed in a slightly older European directive called the ePrivacy Directive. The GDPR just created big fines and enforcement mechanisms, so everyone started to take notice.
I thought it would be good to have a look at the plans to resolve this. The main piece of the puzzle is the EU’s replacement legislation, called the ePrivacy Regulation. It is a similar scale of legislation to the GDPR, and together the two will form the bedrock of Europe’s vision for a regulated internet over the coming decades.
The GDPR concerned itself with information that companies create about us (which we call “data”): who is allowed to record data about us, what level of consent we have to give, and what responsibilities organisations have when they create and manage this data about us.
The ePrivacy Regulation, on the other hand, concerns itself with privacy in communications. It looks at two broad areas – 1) communications between our devices and 2) storage of information on our devices.
When we think about communication between our devices (mobiles, laptops, tablets), the world has changed significantly in the last five years alone. I created my first WhatsApp group in 2014. Before that, almost all the digital communications I sent, by text message or phone call, were delivered by my mobile network: Vodafone, O2 or Meteor, as it was back then.
These telcos were fairly well regulated. Now, however, most of our communication is conducted through companies like WhatsApp, Facebook Messenger or Snapchat. Regulation needs to catch up with this, to modernise the rules that telcos play by, and to ensure everyone plays by the same rules.
The challenge is to create enough regulation to ensure confidentiality and security for European citizens when we’re sending instant messages or emails, but not to make the rules so burdensome that no new startups can come along to offer better services and challenge the incumbents, or that the quality of services available in Europe declines.
Much of this is still being debated, but in general everyone seems happy that the content of your message will be confidential (and most likely encrypted), so that no company can read it without your express consent. The more contentious points are around the metadata of your messages – for example, who a text message was from or to, what time it was sent and from what location. It’s obvious that your phone company should keep metadata to know how many texts you have sent so that it can bill you properly, but can WhatsApp, for example, build a list of the people you text most frequently without your consent?
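To make the content/metadata distinction concrete, here is a minimal sketch in Python. The field names are purely illustrative – they come from no spec or draft of the Regulation – but they show why metadata is contentious even when the content stays sealed:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class MessageContent:
    # The part everyone agrees should stay confidential (and encrypted):
    body: str


@dataclass
class MessageMetadata:
    # The contentious part: none of these fields reveals what was said,
    # but together they say a lot about who you talk to, when, and where.
    sender: str
    recipient: str
    sent_at: datetime
    location: str | None  # e.g. a cell tower or GPS-derived region


# A telco plausibly needs sender/recipient/sent_at to bill you correctly;
# building a "people you text most" list from the same fields is the kind
# of secondary use the Regulation would gate behind consent.
```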
The one area where legislators feel there might be a case for looking at the content of messages is fighting child sexual abuse. This is very difficult, because there is no way to do it without breaking encryption, so they have agreed to kick the can down the road and discuss this point last.
Other interesting areas concern communication between two devices, but not two people. Should you need to consent to the communications from your driverless car, or your healthcare device, or your AR glasses, or your networked kettle? (Yes, is the general answer.) And if so, how?
The second half of the ePrivacy Regulation is focused on information stored on your device, and in particular cookies. Legislators acknowledge that the current system is not working, so they’re trying to answer the question: what level of data capture and cookies should reasonably be allowed without having to ask for someone’s consent, and what should someone have to consent to, even if asking is awkward?
Think about analytics. Should someone have to ask everyone’s permission just to count the number of people who visit their website, or which sections they visit? Probably not, and severely limiting this could just make digital businesses worse at serving our needs. On the other hand, using analytics tools to build a profile of us and our repeat behaviour, and to personalise our experiences, should probably require our consent.
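A rough sketch of the difference, with invented function names and in-memory storage purely for illustration: the first function only counts, while the second ties behaviour to a persistent identifier – and it’s the second that plausibly needs consent:

```python
from collections import Counter
from datetime import date

# Aggregate counting: no identifier, just "someone viewed /pricing today".
page_views: Counter[tuple[date, str]] = Counter()


def count_view(path: str) -> None:
    page_views[(date.today(), path)] += 1


# Profiling: the same event, but keyed to a user ID so repeat behaviour
# can be linked over time and experiences personalised.
user_profiles: dict[str, list[str]] = {}


def profile_view(user_id: str, path: str) -> None:
    user_profiles.setdefault(user_id, []).append(path)
```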
One proposed solution is a technical one: browser settings that indicate, for example, that you’re OK with all analytics cookies but not with advertising cookies. You could then browse around multiple websites without being interrupted and asked for permission. Although, even in this scenario, it’s hard to see why each individual company wouldn’t still show you a pop-up asking for permission to track you with ad cookies on its site alone, even though you’ve opted out more generally. So the pop-up problem may persist.
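Something like this already half-exists in the Global Privacy Control signal, which participating browsers send as a Sec-GPC: 1 header. Here is a minimal server-side sketch (using Flask purely as an example; the finer-grained “analytics yes, ads no” signal discussed above is hypothetical) of how a site might honour such a setting instead of showing a pop-up:

```python
from flask import Flask, request

app = Flask(__name__)


@app.route("/")
def index():
    # Global Privacy Control: the browser sends "Sec-GPC: 1" when the
    # user has opted out of tracking at the browser level.
    opted_out = request.headers.get("Sec-GPC") == "1"
    if opted_out:
        # Respect the browser-level signal: no ad cookies, no banner.
        return render_page(ad_cookies=False)
    # Otherwise fall back to the familiar consent pop-up flow.
    return render_page(ad_cookies=ask_consent())


def render_page(ad_cookies: bool) -> str:
    return f"<html><body>ad cookies enabled: {ad_cookies}</body></html>"


def ask_consent() -> bool:
    return False  # placeholder for a real consent banner
```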
The main focus of the debate on both of these areas is the concept of “legitimate interest”. Should the regulations ban all of these activities without consent as a baseline, and then allow only selected activities? Or should they do the opposite – allow companies to place cookies or read metadata when they have a “legitimate interest”, but then list all the specific cases where “legitimate interest” doesn’t apply?
I think of it like fraud prevention. You can either assume all potential customers are criminals and make them jump through hoops to prove they’re not (which is why setting up online banking is a nightmare), or you can assume good faith by default but put in place many measures to catch fraudsters. The second approach provides a better experience for most people, but allows a few bad actors to slip through the net.
The latter is the “legitimate interest” approach: organisations can do a small amount of cookie-ing and metadata processing without our consent, in ways folks would generally consider “reasonable”, while the regulation lists the specific cases which definitely aren’t “legitimate interest” and reiterates that consent is needed in all other instances.
I think this approach is best. It makes compliance just a little bit easier for most businesses who aren’t taking the mick and, most importantly, doesn’t unintentionally restrict future innovations from new startups.
The ePrivacy Regulation was supposed to come into effect alongside the GDPR in 2018, but it is still being debated as legislators try to find the balance between the privacy rights of individuals and the burden on businesses.
📰 News
Tech jobs. Ten years ago we were worried that most tech jobs in Ireland were support, sales and admin roles, but since then we’ve done a great job of attracting roles that create things too – product and developer roles. In another positive development here, Microsoft announced 200 engineering jobs in Dublin. Link.
Apple reduced the cut it takes on in-app purchases from 30% to 15%, but only for small developers (earning under $1m a year). This seems like a political move to avoid framing the fight as Apple vs. small businesses, but I don’t think it gets them around the fact that they’re charging competitors, like Spotify, a 30% tax. Link.
Subsidising news. It looks like Google have agreed to pay French news outlets for sending traffic to them, after the French regulator demanded they do so. I really thought Google might just remove news links, as they’ve done elsewhere. Link.
💡 Interesting Links
Unscientific Machine Learning? There are myriad articles being published decrying how AI and machine learning are being used by big tech. It’s often hard to discern the important stuff from the politicised Chicken Little stuff, but when MIT publish a piece titled “The way we train AI is fundamentally flawed”, based on reports from within Google, it’s worth reading. Their concern is the industry-standard way we currently test machine learning models. We use training data, create models, then ask the models questions like “Is this a cat?” or “Does this x-ray look like cancer?” If the model passes the test, we release it into the real world, where it often then fails. The problem, they say, is that our testing is generally trying to prove that the models work, whereas we should be trying to prove they don’t work, and only rely on them when we can’t break them. Link. Reminds me of this wonderful brain teaser – link [YouTube]
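A toy illustration of that mindset shift, with made-up data (this is not the methodology from the MIT piece, just the general idea): instead of only reporting accuracy on a held-out set drawn from the same distribution as the training data, deliberately shift the inputs and watch for where the model breaks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: two Gaussian blobs, one per class.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X, y)

# "Prove it works": test on data from the same distribution -- it passes.
X_iid = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_iid = np.array([0] * 50 + [1] * 50)
print("i.i.d. accuracy:", model.score(X_iid, y_iid))

# "Try to break it": shift the test distribution and watch accuracy fall.
for shift in [0.5, 1.0, 2.0]:
    print(f"shift={shift}:", model.score(X_iid + shift, y_iid))
```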
Data for Research. Researchers and academics want access to the data tech companies hold; Richard Allan explores some of the reasons those companies might be reluctant to share it. Link.
Productivity overload. One interesting aspect of the rise of the knowledge worker is that, unlike on industrial assembly lines, where increasing productivity is the responsibility of management and company owners, the productivity of the autonomous knowledge worker becomes their personal responsibility, along with all the anxiety that comes with it. Link.