
The Wrong Frame

The conversation about the proposed Australian law requiring Internet companies to pay for news continues (previously, previously).

Last time around, Google had agreed to pay A$60m to local news organisations, and had therefore been exempted from the ban. Facebook initially refused to cough up, banning news for Australian users and banning Australian news sites entirely, but later capitulated and reversed the ban. They even committed to invest $1 billion in news.

One particular thread keeps coming up in this debate, which is that news publications benefit from the traffic that Facebook and Google send their way. This is of course true, which is why legislation that demands that FB & Google pay for links to news sites is spectacularly ill-conceived, easy to criticise, and certain to backfire if implemented.

Many cite the example of Spain, where Google shuttered the local Google News service after a sustained campaign, only for newspapers to call on European competition authorities to stop Google shutting its operation. However, it turns out that since the Google News shutdown in Spain, overall traffic to news sites has remained largely unchanged.

Getting the facts right in these cases is very important, because the future of the web and of news media is at stake. The last couple of decades have in my opinion been a huge mistake, with the headlong rush after ever more data to produce ever more perfectly targeted advertising obscuring all other concerns. Leaving aside privacy as an absolute good, even on the utilitarian terms of effective advertising this has been a very poor bargain. Certainly I have yet to see any targeted ads worth their CPM, despite the torrent of data I generate. Meanwhile, ads based on a single bit of information ("Dominic is reading Wired", or evo, or Monocle) have led me to many purchases.

The worst of it is that news media do not benefit at all from the adtech economy. Their role is to be the honeypot that attracts high-value users — but the premise of cross-site tracking is that once advertisers have identified those high-value users, they can go and advertise to them on sites that charge a lot less than top-tier newspapers or magazines. The New York Times found this out when they turned off tracking on their website due to GDPR — and saw no reduction in ad revenues.

Of course not every site has the cachet or the international reach of the NYT, but if you want local news, you read your local paper — say, the Sydney Morning Herald. Meanwhile, if you're an advertiser wanting to reach people in Sydney, you can either profile them and track them all over the web (or rather, pay FB & G to do it for you) — or just put your ad in the SMH.

Hard cases make bad law. The question of how to make news media profitable in the age of the Web where the traditional dynamics of that market have been completely upended is a hard and important one. This Australian law is not the right way to solve that question, even aside from the implications of this basically being a handout to Rupert Murdoch — and one which would end up being paid in the US, not even in Australia.

Let us hope that the next government to address this question makes a better job of it.


🖼️ Photo by AbsolutVision on Unsplash

The Framing Continues

The framing of Australia's battle against Google and Facebook continues in a new piece with the inflammatory title Australian law could make internet ‘unworkable’, says World Wide Web inventor Tim Berners-Lee.

Here's what Sir Timothy had to say:

"Specifically, I am concerned that that code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online"

This is indeed the problem: I am not a lawyer, nor do I play one on the internet, so I won't comment on the legalities of the Australian situation — but any requirement to pay for links would indeed break the Web (not the Internet!) as we know it. But that's not the issue at risk, despite Google's attempts to frame the situation that way (emphasis mine):

Google contends the law does require it to pay for clicks. Google regional managing director Melanie Silva told the same Senate committee that read Berners-Lee’s submission last month she is most concerned that the code "requires payments simply for links and snippets."

As far as I can tell, the News Media and Digital Platforms Mandatory Bargaining Code does not actually clarify one way or the other whether it applies to links or snippets. This lack of clarity is the problem with regulations drafted to address tech problems created by the refusal of tech companies to engage in good-faith negotiations. Paying for links, such as the links throughout this blog post, is one thing — and that would indeed break the Web. Paying for snippets, where the whole point is that Google or Facebook quote enough of the article, including scraping images, that readers may not feel they need to click through to the original source, is something rather different.

Lazily conflating the two only helps unscrupulous actors hide behind respected names like Tim Berners-Lee's to frame the argument their own way. In law and in technology, details matter.

And of course you can't trust anything Facebook says, as they have once again been caught over-inflating their ad reach metrics:

According to sections of a filing in the lawsuit that were unredacted on Wednesday, a Facebook product manager in charge of potential reach proposed changing the definition of the metric in mid-2018 to render it more accurate.

However, internal emails show that his suggestion was rebuffed by Facebook executives overseeing metrics on the grounds that the "revenue impact" for the company would be "significant", the filing said.

The product manager responded by saying "it’s revenue we should have never made given the fact it’s based on wrong data", the complaint said.

The proposed Australian law is a bad law, and the reason it is bad is because it is based on a misapprehension of the problem it aims to solve.

In The Frame

Google and Facebook have been feuding with the Australian government for a while, because in our cyberpunk present, that's what happens: transnational megacorporations go toe-to-toe with governments. The news today is that Google capitulated, and will pay a fee to continue accessing Australian news, while Facebook very much did not capitulate. This is what users are faced with, whether sharing a news item from an Australian source, or sharing an international source into Australia:

[Image: Facebook's message blocking the sharing of news links]

I see a lot of analysis and commentary around this issue that is simply factually wrong, so here's a quick explainer. Google first, because I think it's actually the more interesting of the two.

The best way to influence the outcome of an argument is to apply the right framing from the beginning. If you can get that framing accepted by other parties — opponents, referees, and bystanders in the court of public opinion — you’re home free. For a while there, it looked like Google had succeeded in getting their framing accepted, and in the longer run, that may still be enough of a win for them.

The problem that news media have with Google is not with whether or not Google links to their websites. After all, 95% of Australian search traffic goes to Google, so that’s the way to acquire readers. The idea is that Google users search for some topic that’s in the news, click through to a news article, and there they are, on the newspaper’s website, being served the newspaper’s ads.

The difficulty arises if Google does not send the readers through to the newspaper’s own site, but instead displays the text of the article in a snippet on its own site. Those readers do not click through to the newspaper’s site, do not get served ads by the newspaper, and do not click around to other pages on the newspaper’s site. In fact, as far as the newspaper is concerned, those readers are entirely invisible, not even counted as visitors in its audience figures.

This scenario is not some far-fetched hypothetical; this exact sequence of events played out with a site called CelebrityNetWorth. The site was founded on the basis that people would want to know how rich a given famous person was, and all was well — until Google decided that, instead of sending searches on to CelebrityNetWorth, they would display the data themselves, directly in Google. CelebrityNetWorth's traffic cratered, together with their ad revenue.

That is the scenario that news media want to avoid.

Facebook does the same sort of thing, displaying a preview of the article directly in the Facebook News Feed. However, the reason why Google have capitulated to Australia's demands and Facebook have not is that Facebook is actively trying to get out of dealing with news. It's simply more trouble than it's worth, netting them accusations from all quarters: they are eviscerating the news media, while also radicalising people by creating filter bubbles that only show a certain kind of news. I would not actually be surprised if they used the Australian situation as an experiment prior to phasing out news more generally (it's already only 4% of the News Feed, apparently).

There has also been some overreach on the Australian side, to be sure. In particular, early drafts of the bill would have required that tech companies give their news media partners 28 days’ notice before making any changes that would affect how users interact with their content.

The reason these algorithms matter is that for many years websites (and news media sites are no exception) have had to dance to the whims of Facebook and Google's algorithms. In the early, naive days of the web, you could describe your page by simply putting relevant tags in the META elements of the page source. Search engines would crawl and index these, and a search would find relevant pages. However, people being people, unscrupulous website operators quickly began "tag stuffing": padding their pages with tags that were not really relevant but would boost their search ranking.
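A toy Python sketch of why that naive approach was so easy to game (the page names, keyword lists, and ranking function are invented for illustration, not any real engine's logic):

```python
# Toy sketch, not any real search engine: a naive index that ranks pages
# purely by how often the query term appears in their meta keywords.

def rank(pages, query):
    """Return page names sorted by keyword-count for `query` --
    the naive, gameable metric early engines relied on."""
    return sorted(pages, key=lambda name: pages[name].count(query), reverse=True)

pages = {
    "honest-news.example": ["politics", "australia", "news"],
    "spam-farm.example":   ["news"] * 50 + ["celebrities", "cheap", "pills"],
}

# The tag-stuffed page outranks the genuinely relevant one.
print(rank(pages, "news"))  # spam-farm.example first
```

One stuffed page is enough to win under this metric, which is why ranking had to move to signals the page author does not fully control.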

And so began an arms race between search engines trying to produce better results for users, and "dark SEO" types trying to game the algorithm.

Then on top of that come social networks like Facebook, which track users' engagement with the platform and attempt to present users with content that will drive them to engage further. A simplistic (but not untrue) extrapolation is that inflammatory content does well in that environment because people will be driven to interact with it, share it, comment on it, and flame other commenters.

So we have legitimate websites (let's generously assume that all news media are legit) trying to figure out this constantly changing landscape, dancing to the platforms' whims. They have no insight into the workings of the algorithm; after all, the platforms cannot publish details without the scammers also taking advantage. Even the data that is provided is not trustworthy: famously, Facebook vastly over-inflated its video metrics, leading publications to "pivot to video", only to see little to no return on their investments. Some of us, of course, pointed out at the time that not everyone wants video, but publications desperate for any edge went in big, and regretted it.1

Who decides what we see? The promise of "new media" was that we would not be beholden to the whims of a handful of (pale, male and stale) newspaper editors. Instead, we now have a situation in which it is not even clear what is news and what is not, with everybody — users and platforms — second-guessing each other.

And so we find ourselves running an experiment in Australia: is it possible to make news pay? Or will users not miss it once it's gone? Either way, it's going to be interesting. For now, the only big loser seems to be Bing, who had hoped to swoop in and take the Australian web search market from Google. The deal Google signed with News Corporation runs for three years, which should be enough time to see some results.


🖼️ Photo by Markus Winkler on Unsplash


  1. Another Facebook metric that people relied on was Potential Reach; now it emerges that Facebook knowingly allowed customers to rely on vastly over-inflated Potential Reach numbers.

Branded

A recurring topic when it comes to curbing the power of Facebook to influence the real world is somehow to curtail its huge advertising revenue. Campaigns such as Sleeping Giants have made it their business to call out advertisers whose brands had been associated with unsavoury themes, causing revenue to alt-right websites to drop as much as 90% (despite some shenanigans to attempt to reverse the drain).

In the wake of all this, large corporations such as Disney have made a big deal of "boycotting" Facebook:

Walt Disney has dramatically slashed its advertising spending on Facebook according to people familiar with the situation, the latest setback for the tech giant as it faces a boycott from companies upset with its handling of hate speech and divisive content.

The reasons for the supposed boycott are never stated clearly, but centre on the claim that Facebook enables the alt-right. I suspect that the actual recruitment is happening elsewhere, e.g. through YouTube’s recommendation algorithm, but that is a whole other issue.

Facebook seems unswayed:

Facebook executives, including Carolyn Everson, vice president of its Global Business Group, previously told advertisers that the company wouldn’t change its policies based on revenue pressure.

This actually looks like the correct response, given that otherwise pressure could presumably also be brought in the other direction. Imagine weapons manufacturers demanding that calls for gun control be censored or otherwise limited, and threatening to cancel advertising.

Facebook may also have correctly identified the real reason for the "boycott". Disney’s results for the past year show that overall revenue fell 42% to $11.78 billion, driven primarily by an operating loss of $1.96 billion in the parks and consumer products business, and a 16% fall in their studio business. The coronavirus pandemic causing cinemas and amusement parks to close is hardly Disney’s fault1, but it’s not surprising that they might look to cut some advertising expenditures, while also making themselves look good in the process.

It’s not cost cutting (bad, reactive), it’s joining a boycott (good, proactive).

It’s also worth looking at who is cutting what. Disney is still advertising on FB, but it’s direct-action ads to drive people to sign up to Disney+, their streaming service which is one of the few bright spots on their results with 60.5 million paying customers. That’s what FB is good for. It’s terrible at brand advertising, where you’re trying to build buzz around a new film that everyone has to see, rather than customising the benefits of Disney+ to each specific audience.

If you want everyone to pack the cinemas to see the new Star Wars film, you don’t need to advertise to everyone individually; you just get a billboard in Times Square. On the other hand, you can sell Disney+ many different ways:

  • Parents of young children: it’s a Pixar delivery mechanism!
  • Teenage boys (and men who never grew up, don’t @ me): it’s all Marvel superheroes and Star Wars all the time!
  • Older adults: National Geographic documentaries!
  • Musical fans: we have Hamilton now!

And so on: micro-segmentation is what adtech in general is good for.

This is why it’s worth looking beyond the headlines, at a boycott that is both more and less than it appears. Facebook will weather this boycott, and so will Disney.


In a timely update, today brings the story of a Dutch broadcaster that killed cookies and saw advertising revenue go way up. It turns out, advertisers don’t need to know much about users, beyond what they are reading or watching, in order to make sensible decisions about whether and how to advertise to them or not.

Instead of targeting a certain type of customer, advertisers target customers reading a certain type of article or watching a certain type of show.

The article calls this approach "contextual advertising", and according to the results of NPO’s testing, contextual ads convert at least as well as, if not better than, micro-targeted ones.

In January and February of this year, NPO says, its digital ad revenue was up 62 percent and 79 percent, respectively, compared to last year. Even after the coronavirus pandemic jolted the global economy and caused brands to drastically scale back advertising—and forcing many publications to implement pay cuts and layoffs—NPO's revenue is still double-digit percentage points higher than last year.

Everyone’s happy! Well, except for adtech vendors:

The main explanation is simple: because the network is no longer relying on microtargeted programmatic ad tech, it now keeps what advertisers spend rather than giving a huge cut to a bunch of intermediaries.2

And good riddance to them. Their only value proposition (such as it is) is that they will identify the high-value users browsing, say, NPO’s website, and enable customers to advertise to them elsewhere on the web where the cost of displaying the ad is lower. What’s in it for NPO and other high-value outlets? Nothing; their value is actively being hollowed out. The advertisers aren’t much better off either, because now their ad and their brand are getting displayed in cheap locations beside low-value content, instead of on a solid, reputable broadcaster’s website. Everybody loses, except the adtech creepiness pushers themselves.

The sooner we move away from micro-targeting, the better.


🖼️ Photos by Annie Spratt and Travis Gergen on Unsplash


  1. Although I would argue that a decision to re-open Disneyland etc while the outbreak is still under way is extremely dubious. Easy to say when it’s not my revenue on the line, sure, but I also like to sleep soundly at night. 

  2. There used to be a gendered term here, for no good reason, so I fixed it. 

The Thing With Zoom

Zoom was having an excellent quarantine — until it wasn’t.

This morning’s news is from Bloomberg: Zoom Sued for Fraud Over Privacy, Security Flaws. But how did we get here?

Here is what’s interesting about the Thing with Zoom: it’s an excellent example of a company getting it mostly right for its stated aims and chosen target market — and still getting tripped up by changing conditions.

To recap, very quickly: with everybody suddenly stuck home and forbidden to go to the office, there was an equally sudden explosion in video calling — first for purely professional reasons, but quickly spreading to virtual happy hours, remote karaoke, video play dates, and the like. Zoom was the major beneficiary of this growth, with daily active users going from 10 million to over 200 million in 3 months.

One of the major factors that enabled this explosive growth in users is that Zoom has always placed a premium on ease of use — some would argue, at the expense of other important aspects, such as the security and privacy of its users.

There is almost always some tension between security and usability. Security features generally involve checking, validating, and confirming that a user is entitled to perform some action, and asking them for permission to take it. Zoom generally took the approach of not asking users questions which might confuse them, and removing as much friction as possible from the process of getting users into a video call — which is, after all, the goal of its enterprise customers.

Doing The Right Thing — Wrong

I cannot emphasise enough that this focus on ease of use is what made Zoom successful. I think I have used every alternative, from the big names like WebEx (even before its acquisition by Cisco!), to would-be contenders like whatever Google’s thing is called this week, to has-beens like Skype, to also-rans like BlueJeans. The key use case for me and for Zoom’s other corporate customers is, if I send one of my prospects a link to a video call, how quickly can they show up in my call so that I can start my demo? Zoom absolutely blew away the competition at this one crucial task.

Arguably, Zoom pushed their search for ease of use a bit too far. On macOS, if you click on a link to a Zoom chat, a Safari window will open and ask you whether you want to run Zoom. This one click is the only interaction that is needed, especially if you already have Zoom installed, but it was apparently still too much — so Zoom actually started bundling a hidden web server with their application, purely so that they could bypass this alert.

Sneaking a web server onto users’ systems was bad enough, but worse was to come. First of all, Zoom’s uninstall routine did not remove the web server, and it was capable of reinstalling the Zoom client without user interaction. But what got the headlines was the vulnerability that this combination enabled: a malicious website could join visitors to a Zoom conference, and since most people had their webcam on by default, active video would leak to the attacker.

This behaviour was so bad that Apple actually took the unprecedented step of issuing an operating system patch to shut Zoom down.

Problem solved?

This hidden-web-server saga was a preview run for what we are seeing now. Zoom had over-indexed on its customers, namely large corporations who were trying to reach their own customers. The issue with being forcibly and invisibly joined to a Zoom video conference simply by visiting a malicious web server did not affect those customers – but it did affect Zoom’s users.

The distinction is one that is crucial in the world of enterprise software procurement, where the person who signs the cheque is rarely the one who will be using the tool. Because of this disconnect, vendors by and large optimise for that economic buyer’s requirements first, and only later (if at all) on the actual users’ needs.

With everyone locked up at home, usage of Zoom exploded. People with corporate accounts used them in the evening to keep up with their social lives, and many more signed up for the newly-expanded free tier. This new attention brought new scrutiny, and from a different angle from what Zoom was used to or prepared for.

For instance, it came to light that the embedded code that let users log in to Zoom on iOS with their Facebook credentials was leaking data to Facebook even for users without a Facebook account. Arguably, Zoom had not done anything wrong here; as far as I can tell, the leakage was due to Facebook’s standard SDK grabbing more data than it was supposed to have, in a move that is depressingly predictable coming from Facebook.

In a normal circumstance, Zoom could have apologised, explained that they had moved too quickly to enable a consumer feature that was outside their usual comfort zone without understanding all the implications, and moved on. However, because of the earlier hidden-web-server debacle, there was no goodwill for this sort of move. Zoom did act quickly to remove the offending Facebook code, but worse was to come.

Less than a week later, another story broke, claiming that Zoom is Leaking Peoples' Email Addresses and Photos to Strangers. Here is where the story gets really instructive.

This "leak" is due to the sort of strategy tax that was almost inevitable in hindsight. Basically, Zoom added a convenience feature for its enterprise customers, called Company Directory, which assumes that anyone sharing the same domain in their email address works for the same company. In line with their guiding principle of building a simple and friction-free user experience, this assumption makes it easier to schedule meetings with one’s colleagues.

The problem only arose when people started joining en masse from their personal email accounts. Zoom had excluded the big email providers, so that people would not find themselves with millions of "colleagues" just because they had all signed up with Gmail accounts. However, they had not made an exhaustive list of all email providers, and so users found themselves with "colleagues" who simply happened to be customers of the same ISP or email provider. The story mentioned Dutch ISPs like xs4all.nl, dds.nl, and quicknet.nl, but the same issue would presumably apply to all small regional ISPs and niche email providers.
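A hypothetical Python sketch of that heuristic (the function, the blocklist, and the addresses are all invented for illustration; this is not Zoom's actual code). The bug is visible in the structure: any provider missing from the blocklist gets treated as a single "company".

```python
# Hypothetical sketch of a Company Directory-style heuristic -- NOT Zoom's code.
from collections import defaultdict

# Deliberately incomplete blocklist of consumer email providers.
CONSUMER_PROVIDERS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def build_directory(addresses):
    """Group users by email domain, skipping known consumer providers."""
    directory = defaultdict(list)
    for addr in addresses:
        domain = addr.split("@")[1].lower()
        if domain not in CONSUMER_PROVIDERS:
            directory[domain].append(addr)  # assumed to be "colleagues"
    return dict(directory)

users = [
    "alice@bigcorp.example",
    "bob@bigcorp.example",
    "carol@gmail.com",   # correctly excluded by the blocklist
    "dave@xs4all.nl",    # a Dutch ISP's customers, wrongly grouped
    "erik@xs4all.nl",    # as one "company"
]

print(build_directory(users))
```

Here dave and erik, who merely share an ISP, end up as "colleagues" who can see each other's details: an allowlist of verified corporate domains would have failed safe, where a blocklist of consumer domains fails open.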

Ordinarily, this sort of "privacy leak" is a storm in a teacup; it’s no worse than a newsletter where all the names are in the To: line instead of being in Bcc:. However, by this point Zoom was in the full glare of public attention, and the story blew up even in the mainstream press, outside of the insular tech world.

Now What?

Zoom’s CEO, Eric Yuan, issued a pretty comprehensive apology. I will quote the key paragraphs below:

First, some background: our platform was built primarily for enterprise customers – large institutions with full IT support. These range from the world’s largest financial services companies to leading telecommunications providers, government agencies, universities, healthcare organizations, and telemedicine practices. Thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment.

However, we did not design the product with the foresight that, in a matter of weeks, every person in the world would suddenly be working, studying, and socializing from home. We now have a much broader set of users who are utilizing our product in a myriad of unexpected ways, presenting us with challenges we did not anticipate when the platform was conceived.

These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform. Dedicated journalists and security researchers have also helped to identify pre-existing ones. We appreciate the scrutiny and questions we have been getting – about how the service works, about our infrastructure and capacity, and about our privacy and security policies. These are the questions that will make Zoom better, both as a company and for all its users.

We take them extremely seriously. We are looking into each and every one of them and addressing them as expeditiously as we can. We are committed to learning from them and doing better in the future.

It’s too early to say what the long-term consequences for Zoom will be, but this is a good apology, and a reasonable set of early moves by the company to repair its public image. To be clear, the company still has a long way to go, and to succeed, it will need to rebalance its exclusive focus on usability to be much more considerate of privacy and security.

For instance, there were a couple of zero-day bugs found in the macOS client (since patched in version 4.6.9) which would have allowed privilege escalation. These particular flaws cannot be exploited remotely, so they would require would-be attackers to have access to the operating system already, but it’s still far from ideal. In particular, one of these bugs took advantage of some shortcuts that Zoom had taken in its installer, once again in the name of ease of use.

Installers on macOS have the option of running a "preflight" check, where they verify all their prerequisites are met. After this step, they will request confirmation from the user before running the installer proper. Zoom’s installer actually completed all its work in this preflight step, including specifically running a script with root (administrator) privileges. This script could be replaced by an attacker, whose malicious script would then be run with those same elevated privileges.

Personally I hope that Zoom figures out a way to resolve this situation. The user experience is very pleasant (even after installation!), and given that I work from home all the time — not just in quarantine — Zoom is a key part of my work environment.

Lessons To Learn

1: Pivoting is hard

Regardless of the outcome for Zoom, though, this is a cautionary tale in corporate life and communications. Zoom was doing everything right for its previous situation, but this exclusive focus made it difficult to react to changes in that situation. The pivot from corporate enterprise users to much larger numbers of personal users is an opportunity for Zoom if they can monetise this vastly expanded user base, but it also exposes them to a much-changed environment. Corporate users are more predictable in their environments and routines, and in the way they interact with apps and services. Home users will do all sorts of unexpected things and come from unexpected places, exposing many more edge cases in developers’ assumptions.

Companies should not assume that they can easily "pivot" to a whole new user population, even one that is attractively larger and more promising of profits, without making corresponding changes to core assumptions about how they go to market.

2: A good reputation once lost is hard to regain

A big part of Zoom’s problem right now is that they had squandered their earlier goodwill with techies when they hid a web server on their machines. Without that earlier situation, they might have been able to point out that many of the current problems are on the level of tempests in teacups: bugs to be sure, which need to be fixed, but hardly existential problems.

As it happened, though, the Internet hive mind was all primed to think the worst of Zoom, and indeed actively went looking for issues once Zoom was in the glare of the spotlight. In this situation, there is not much to be done in the short term, apart from what Zoom actually did: apologise profusely, promise not to do it again, and attempt to weather the storm.

One move I have not yet seen them make which would be very powerful would be to hire a well-known security expert with a reputation for impartiality. One part of their job would be to act as figurehead and lightning conductor for the company’s security efforts, but an equally important part would be as internal naysayer: the VP of Nope, someone able to say a firm NO to bad ideas. Hiding a web server? Bad idea. Shortcutting the installer? Bad idea. Assuming everyone with an email address not on a very short list of mega-providers is a colleague of everyone else with the same email domain? Bad idea.


UPDATE: Showing how amazingly prescient this recommendation was, shortly after I published this post, Alex Stamos announced that he was joining Zoom to help them "build up their security program".

Alex Stamos is of course the ex-CSO at Facebook, who since departing FB has made something of a name for himself by commenting publicly about security and privacy issues. As such, he’s pretty much the perfect hire: high public profile, known as an impartial expert, and deeply experienced specifically in end-user security issues, not just the sort of enterprise aspects which Zoom had previously been focusing on.

I will be watching his and Zoom’s next moves with interest.


3: Bottom line: build good products

Most companies need to review both security and usability — but it’s probably worth noting that a good product is the best way of saving yourself. Even in a post-debacle roundup of would-be alternatives to Zoom, Zoom still came out ahead, despite being penalised for its security woes. They still have the best product, and, yes, the one that is easiest to use.

But if you get the other two factors right, you, your good product, and your long-suffering comms team will all have an easier life.


🖼️ Photos by Allie Smith on Unsplash

Be Smart, Use Dumb Devices

The latest news in the world of Things Which Are Too "Smart" For Their Users’ Good is that Facebook have released a new device in their Portal range: a video camera that sits on your TV and lets you make video calls via Facebook Messenger and WhatsApp (which is also owned by Facebook).

This is both a great idea and a terrible one. I am on the record as wanting a webcam for my AppleTV so that I could make FaceTime calls from there:

In fact, I already do the hacky version of this by mirroring my phone’s screen with AirPlay and then propping it up so the camera has an appropriate view.

Why would I do this? One-word answer: kids. The big screen has a better chance of holding their attention, and a camera with a nice wide field of view would be good too, to capture all the action. Getting everyone to sit on the couch or rug in front of the TV is easier than getting everyone to look into a phone (or even iPad). I’m not sure about the feature where the camera tries to follow the speaker; in these sorts of calls, several people are speaking most of the time, so I can see it getting very confused. It works well in boardroom setups where there is a single conversational thread, but even then, most of the good systems I’ve seen use two cameras, so that the view can switch in software rather than waiting for mechanical rotation.

So much for the "good idea" part. The reason it’s a terrible idea in this case is that it’s from Facebook. Nobody in their right mind would want an always-on device from Facebook in their living room, with a camera pointed at their couch, and listening in on the video calls they make. Facebook have shown time and time and time again that they simply cannot be trusted.

An example of why the problem is Facebook itself, rather than any one product or service, is the hardware switch for turning the device’s camera off. The switch is highlighted when it is in the off position, and an LED illuminates… to show that the camera and microphone are off.

Many people have commented that this setup looks like a classic dark pattern in UX, just implemented in hardware. My personal opinion is that the switch is more interesting as an indicator of Facebook’s corporate attitude to internet services: they are always on, and it’s an anomaly if they are off. In fact, they may even consider the design of this switch to be a positive move towards privacy, by highlighting when the device is in "privacy mode". The worrying aspect is that this design makes privacy an anomaly, a mode that is entered briefly for whatever reason, a bit like Private or Incognito mode in a web browser. If you’re wondering why a reasonable person might be concerned about Facebook’s attitude to user privacy, a quick read of just the "Privacy issues" section of the Wikipedia article on Facebook criticism will probably have you checking your permissions. At a bare minimum, I assume that entering "privacy mode" is itself a tracked event, subject to later analysis…

Trust, But Verify

IoT devices need a high degree of trust anyway because of all the information that they are inherently privy to. Facebook have proven that they will go to any lengths to gather information, including information that was deliberately not shared by users, process it for their own (and their advertising customers’) purposes, and do an utterly inadequate job of protecting it.

The idea of a smart home is attractive, no question – but why do the individual devices need to be smart in their own right? Unnecessary capabilities increase the vulnerability surface for abuse, either by a vendor/operator or by a malicious attacker. Instead, better to focus on devices which have the minimum required functionality to do their job, and no more.

A perfect example of this latter approach is IKEA’s collaboration with Sonos. The Symfonisk speakers are not "smart" in the sense that they have Alexa, Siri, or Google Assistant on board. They also do not connect directly to the Internet or to any one particular service. Instead, they rely on the owner’s smartphone to do all the hard work, whether that is running Spotify or interrogating Alexa. The speaker just plays music.

I would love a simple camera that perched on top of the TV, either as a peripheral to the AppleTV, or extending AirPlay to be able to use video sources as well. However, as long as doing this requires a full device from Facebook1 – or worse, plugging directly into a smart TV2 – I’ll keep on propping my phone up awkwardly and sharing the view to the TV.


  1. Or Google or Amazon – they’re not much better. 

  2. Sure, let my TV watch everything that is displayed and upload it for creepy "analysis".3 

  3. To be clear, I’m not wearing a tinfoil hat over here. I have no problem simply adding a "+1" to the viewer count for The Expanse or whatever, but there’s a lot more that goes on my TV screen: photos of my kids, the content of my video calls, and so on and so forth. I would not be okay with sharing the entire video buffer with unknown third parties. This sort of nonsense is why my TV has never been connected to the WiFi. It went online once, using an Ethernet cable, to get a firmware update – and then I unplugged the cable. 

Once More On Privacy

Facebook is in court yet again over the Cambridge Analytica scandal, and one of their lawyers made a most revealing assertion:

There is no invasion of privacy at all, because there is no privacy

Now on one level, this is literally true. Facebook’s lawyer went on to say:

Facebook was nothing more than a "digital town square" where users voluntarily give up their private information

The issue is a mismatch in expectations. Users have the option to disclose information as fully public, or variously restricted: only to their friends, or to members of certain groups. The fact that something is said in the public street does not mean that the user would be comfortable having it published in a newspaper, especially if they were whispering into a friend’s ear at the time.

Legally, Facebook may well be in the right (IANAL, nor do I play one on the Internet), but in terms of users’ expectations, they are undoubtedly in the wrong. However, for once I do not lay all the blame on Facebook.

Mechanisation and automation are rapidly subverting common-sense expectations in a number of fields, and consequences can be wide-reaching. Privacy is one obvious example, whether it is Facebook’s or Google’s analysis of our supposedly private conversations, or facial recognition in public places.

For an example of the reaction to the deployment of these technologies, the city of San Francisco, generally expected to be an early adopter of technological solutions, recently banned the use of facial recognition technology. While the benefits for law enforcement of ubiquitous automated facial recognition are obvious, the adoption of this technology also subverts long-standing expectations of privacy – even in undoubtedly public spaces. While it is true that I can be seen and possibly recognised by anyone who is in the street at the same time as me, the human expectation is that no permanent, searchable record of my presence in the street at that time is being created, let alone made widely available.

To make the example concrete, let’s talk for a moment about number-plate recognition. Cars and other motor vehicles have number plates to make them recognisable, including for law enforcement purposes. As technology developed, automated reading of number plates became possible, and is now widely adopted for speed limit enforcement. Around here things have gone a step further, with average speeds measured over long distances.
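The arithmetic behind average-speed ("section control") enforcement is trivial: two cameras a known distance apart timestamp each plate, and distance divided by elapsed time yields a speed that cannot be argued with. A minimal sketch, with the section length, limit, and timings all invented for illustration:

```python
# Average-speed ("section control") enforcement in a nutshell: two
# gantries a known distance apart timestamp each plate, and the average
# speed over the section is just distance / elapsed time.
# All numbers here are invented for illustration.

def average_speed_kmh(distance_km: float, t_enter: float, t_exit: float) -> float:
    """Average speed over a monitored section, given entry/exit timestamps in seconds."""
    hours = (t_exit - t_enter) / 3600
    return distance_km / hours

SECTION_KM = 10.0   # distance between the two gantries
LIMIT_KMH = 130.0   # posted limit on this stretch

# A car passes the first gantry at t=0 and the second 240 seconds later.
speed = average_speed_kmh(SECTION_KM, 0, 240)
print(f"{speed:.0f} km/h", "VIOLATION" if speed > LIMIT_KMH else "ok")  # 150 km/h VIOLATION
```

That single number is all the system ever produces, and all it ever considers.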

Who could object to enforcing the law?

The problem with automated enforcement is that it is only as good as it is programmed to be. It is true that hardly anybody breaks the speed limit on the monitored stretches of motorway any more – or at least, not more than once. However, there are also a number of negative consequences. Lane discipline has fallen entirely by the wayside since the automated systems were introduced: slow vehicles cruise in the middle or even outside lanes while the inside lanes sit empty. Automated enforcement has also removed any pressure to consider what is an appropriate speed for the conditions, with many drivers continuing to drive at or near the speed limit even in weather or traffic conditions where that speed is totally unsafe. Finally, there is no recognition that, at 4am with nobody on the roads, there is no need to enforce the same speed limit that applies at rush hour.

Human-powered on-the-spot enforcement – the traffic cop flagging down individual motorists – had the option to modulate the law, turning a blind eye to safe speed and punishing driving that might be inside the speed limit but unsafe in other ways. Instead, automated enforcement is dumb (it is, after all, binary) and only considers the single metric it was designed to consider.

There are of course any number of problems with a human-powered approach as well; members of ethnic or social minorities all have stories involving the police looking for something – anything – to book them for. I’m a straight white cis-het guy, and still once managed to fall foul of the proverbial bored cops, who took my entire car apart looking for drugs (that weren’t there) and then left me by the side of the road to put everything back together. However, automated enforcement makes all of these problems worse.

Facial recognition has documented issues with accuracy when it comes to ethnic minorities and women – basically anyone but the white male programmers who created the systems. If police start relying on such systems, people are going to have serious difficulties trying to prove that they are not the person in the WANTED poster – because the computer says they are a match. And that’s if they don’t just get gunned down, of course.

It is notoriously hard to opt out of these systems when they are used for advertising, but when they are used for law enforcement, it becomes entirely impossible to opt out, as a London man found when he was arrested for covering his face during a facial recognition trial on public streets. A faulty system is even worse than a functional one, as its failure modes are unpredictable.

Systems rely on data, and data storage is also problematic. I recently had to get a government-issued electronic ID. Normally this should be a simple online application, but I kept getting weird errors, so I went to the office with my (physical) ID instead. There, we realised that the problem was with my place of birth. I was born in what was then Strathclyde, but this is no longer an option in up-to-date systems, since the region was abolished in 1996. However, different databases were disagreeing, and we were unable to move forward. In the end, the official effectively helped me to lie to the computer, picking an acceptable jurisdiction in order to move forwards in the process – and thereby of course creating even more inaccuracies and inconsistency. So much for "the computer is always right"… Remember, kids: Garbage In, Garbage Out!

What, Me Worry?

The final argument comes down, as it always does with privacy, to the objection that "there’s nothing to fear if you haven’t done anything wrong". Leaving aside the issues we just discussed around the possibility of running into problems even when you really haven’t done anything wrong, the issue is with the definition of "wrong". Social change is often driven by movement in the grey areas of the law, as well as selective enforcement of those laws. First gay sex is criminalised, so underground gay communities spring up. Then attitudes change, but the laws are still on the books; they just aren’t enforced. Finally the law catches up. If algorithms actually are watching all of our activity and are able to infer when we might be doing something that’s frowned upon by some1, that changes the dynamic very significantly, in ways which we have not properly considered as a society.

And that’s without even considering where else these technologies might be applied, beyond our pleasant Western bubble. What about China, busy turning Xinjiang into an open-air prison for the Uyghur minority? Or "Saudi" Arabia, distributing smartphone apps to enable husbands to deny their wives permission to travel?

Expectations of privacy are being subverted by scale and automation, without a real conversation about what that means. Advertisers and the government stick to the letter of the law, but there is no recognition of the material difference between surveillance that is human-powered, and what happens when the same surveillance is automated.


Photo by Glen Carrie and Bryan Hanson via Unsplash


  1. And remember, the algorithms may not even be analysing your own data, which you carefully secured and locked down. They may have access to data for one of your friends or acquaintances, and then the algorithm spots a correlation in patterns of communication, and associates you with them. Congratulations, you now have a shadow profile. And what if you are just really unlucky in your choice of local boozer, so now the government thinks you are affiliated with the IRA offshoot du jour, when all you were after was a decent pint of Guinness? 

How Much Trouble Is Facebook In?

Users (including me) are deleting Facebook, but FB reports no drop in active users. What gives?

It’s not just Bloomberg, either; a survey published in Forbes claims that More Than 1 in 4 Americans Have Deleted Facebook. I’m not American, nor do I play one on TV, but I deleted the FB app from all my devices a while ago. I still have my account, but I went from checking it multiple times per day to glancing at it once every couple of weeks. Informally, I speak to lots of people who have done the same thing.

Once again, what gives?

Counting And Overcounting

There is nothing surprising here: any action is enough for FB to count you as active, so they can claim with a straight face that even someone like me is still "active" for purposes of their statistics – and the rates they can charge advertisers.
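As a sketch of just how loose that definition is – assuming, plausibly but without any inside knowledge of Facebook’s actual pipeline, that any logged event at all marks a user as active – the events and users below are entirely made up:

```python
# Sketch: why "monthly active users" can stay flat while real engagement
# collapses. If ANY event (even dismissing a notification) marks a user
# as active, a once-a-month glance counts the same as heavy daily use.
# All users and events below are invented for illustration.

from datetime import date

events = [
    ("alice", date(2018, 11, 2),  "posted"),
    ("alice", date(2018, 11, 3),  "commented"),
    ("bob",   date(2018, 11, 17), "dismissed_notification"),  # bob's only action all month
    ("carol", date(2018, 11, 28), "opened_app"),              # carol's fortnightly glance
]

monthly_active = {user for user, when, action in events
                  if when.year == 2018 and when.month == 11}
print(len(monthly_active))  # 3 — all three count as "active", whatever they actually did
```

Under a metric like this, my once-a-fortnight glance keeps me on the books indefinitely.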

Remember when Facebook inflated video viewing stats for two years? Good times, good times. Turned out, they were counting anything over three seconds as if you had viewed the whole thing. The only problem is, it might take you that long to figure out how to dismiss the annoying thing.

Unsurprisingly, advertisers who had been paying through the nose for those video ad placements were not best pleased, especially as the scale of the over-counting became clear:

Ad buying agency Publicis Media was told by Facebook that the earlier counting method likely overestimated average time spent watching videos by between 60% and 80%
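A sketch of how that over-counting arises, with made-up watch times: dividing total watch time only by the views longer than three seconds quietly drops all the short views from the denominator.

```python
# Two ways to compute "average time spent watching a video":
# dividing total watch time by ALL views, versus only by views that
# lasted more than 3 seconds. The watch times below are made up.

watch_times = [1, 2, 5, 30, 60]  # seconds each viewer actually watched

honest_avg = sum(watch_times) / len(watch_times)

long_views = [t for t in watch_times if t > 3]
inflated_avg = sum(long_views) / len(long_views)  # short views vanish from the denominator

print(f"all views: {honest_avg:.1f}s")   # 19.6s
print(f">3s views: {inflated_avg:.1f}s") # 31.7s
inflation = inflated_avg / honest_avg - 1
print(f"inflation: {inflation:.0%}")     # 62% — squarely in Publicis's 60-80% range
```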

On A Mission

Facebook take their mission extremely seriously. Currently it says this:

Give people the power to build community and bring the world closer together.

The old formulation was perhaps clearer:

To give people the power to share and make the world more open and connected.

Either way, the Rohingya in Burma1, to cite just one example, might have preferred it if people had not shared libels and built communities around hunting them down and ejecting them from their villages.

Facebook, however, in dogged pursuit of this ideal, builds and maintains so-called shadow profiles, even for users who had the foresight never to sign up for Facebook. These profiles are built up by using various tracking mechanisms that follow users around the Web – famously, the Like button, although supposedly that has now been defanged. One also suspects a certain amount of information sharing between Facebook’s various properties, notably Instagram and WhatsApp.

The AOL Of Our Century

The bottom line is, you’re not getting out of Facebook that easily, if only because of the famous truism of the ad-funded web: "if you’re not paying for it, you’re the product". With Facebook, as with all social media sites, that is true in a very literal sense. What they are selling to their advertisers is exposure to the greatest number of eyeballs, ideally filtered according to certain characteristics. If the pool starts shrinking, their opportunity to make money off advertisers shrinks commensurately. If people start seriously messing with the stats, for instance by using tools like fuzzify.me, such that the filters no longer return groupings of users that are attractive to advertisers, that will also be a problem. Any drop in Daily or Monthly Active Users (DAU and MAU) would be a much more immediate threat, though, and that is why as long as users check Facebook even occasionally, there will never be a serious drop in usage reported – right up until the day the whole thing dies unceremoniously in a corner.


  1. I refuse to call it Myanmar. 

Needy Much, Facebook?

This notification was on my iPad:

A HUNDRED messages? Okay, maybe something blew up. I’ve not been looking at Facebook for a while, but I’ve been reluctant to delete my account entirely because it’s the only way I keep in touch with a whole bunch of people. Maybe something happened?

I open the app, and I’m greeted with this:

Yeah, no notifications whatsoever inside the app.

Facebook is now actively lying to get its Daily Active Users count up. Keep this sort of thing in mind when they quote such-and-such a number of users.

To Facebook, user engagement stats are life itself. If they ever start to slide seriously, their business is toast. Remember in 2016, when Facebook was sued over inflated video ad metrics? Basically, if you scrolled past a video ad in your feed, that still counted as a “view”, resulting in viewer counts that were inflated by 80%.

Earlier this year, Facebook had its first loss in daily active users in the US and Canada. They are still growing elsewhere, but not without consequences, as the New York Times reports in a hard-hitting piece entitled Where Countries Are Tinderboxes and Facebook Is a Match.

At this point, I imagine anyone still working for Facebook is not nearly as forward with that fact at dinner parties or in bars, instead offering the sort of generic “yeah, I work in IT” non-answer that back-office staff at porn sites are used to giving.

This Is Where We Are, July 2017 Edition

A quick review of the status of the Big Three1 social networks as of right now.

It seems Facebook is testing ads in Messenger now, which is an incredibly wrong-headed idea:

Messenger isn’t really a “free time” experience the way Facebook proper is — you use the former with purpose, the latter idly. Advertisements must cater to that, just like anywhere else in the world: you don’t see the same ads on subway walls (where you have to sit and stare) as on billboards (where you have two or three seconds max and your attention is elsewhere).

I always hated Messenger anyway, just out of reflex because they had felt the need to split it off into a separate app. In fact, I kept using Paper until Facebook finally broke it, in no small part because it kept everything together in one app. It also looked good, as opposed to the hot mess of FB’s default apps.

Between that and the “Moments” rubbish junking up the top of every one of the FB apps, I am actively discouraged from using them. At this point I pretty much only open FB if I have a notification from there.

Meanwhile, Twitter is continuing on its slow death spiral. It is finally becoming what it was always described as: a “micro-blogging" platform. People write 100-tweet threads instead of just one blog post, and this is so prevalent that there are tools out there that will go and assemble these threads in one place for ease of reading.

It’s got to the point that I read Twitter (and a ton of blogs via RSS, because I’m old-school that way), but most of my actual interaction these days is via LinkedIn. I even had a post go viral over there – 7000-odd views and more than a hundred likes, at time of writing.

So this is where we are, right now in July 2017: Twitter for ephemeral narcissism, Facebook for interacting with (or avoiding) the same people you deal with day to day, and LinkedIn for actually getting things done.

See you out there.

Photo by Osman Rana on Unsplash


  1. I don’t Instagram, I’m too old for Tumblr, and - oh sorry Snapchat, didn’t see you down there