Showing all posts tagged facebook:

Branded

A recurring theme in discussions of curbing Facebook’s power to influence the real world is finding some way to curtail its huge advertising revenue. Campaigns such as Sleeping Giants have made it their business to call out advertisers whose brands had been associated with unsavoury themes, causing revenue to alt-right websites to drop by as much as 90% (despite some shenanigans to attempt to reverse the drain).

In the wake of all this, large corporations such as Disney have made a big deal of "boycotting" Facebook:

Walt Disney has dramatically slashed its advertising spending on Facebook according to people familiar with the situation, the latest setback for the tech giant as it faces a boycott from companies upset with its handling of hate speech and divisive content.

The reasons for the "boycott" are never stated clearly, but centre on Facebook’s supposed enablement of the alt-right. I suspect that the actual recruitment is happening elsewhere, e.g. through YouTube’s recommendation algorithm, but that is a whole other issue.

Facebook seems unswayed:

Facebook executives, including Carolyn Everson, vice president of its Global Business Group, previously told advertisers that the company wouldn’t change its policies based on revenue pressure.

This actually looks like the correct response, given that otherwise pressure could presumably also be brought in the other direction. Imagine weapons manufacturers demanding that calls for gun control be censored or otherwise limited, and threatening to cancel advertising.

Facebook may also have correctly identified the real reason for the "boycott". Disney’s latest quarterly results show overall revenue falling 42% to $11.78 billion, driven primarily by the parks and consumer products business, which swung to an operating loss of $1.96 billion, and by a 16% fall in the studio business. The coronavirus pandemic forcing cinemas and amusement parks to close is hardly Disney’s fault1, but it’s not surprising that they might look to cut some advertising expenditure, while also making themselves look good in the process.

It’s not cost cutting (bad, reactive), it’s joining a boycott (good, proactive).

It’s also worth looking at who is cutting what. Disney is still advertising on FB, but with direct-response ads to drive people to sign up to Disney+, the streaming service that is one of the few bright spots in their results, with 60.5 million paying customers. That’s what FB is good for. It’s terrible at brand advertising, where you’re trying to build buzz around a new film that everyone has to see, rather than pitching the benefits of Disney+ differently to each specific audience.

If you want everyone to pack the cinemas to see the new Star Wars film, you don’t need to advertise to everyone individually; you just get a billboard in Times Square. On the other hand, you can sell Disney+ many different ways:

  • Parents of young children: it’s a Pixar delivery mechanism!
  • Teenage boys (and men who never grew up, don’t @ me): it’s all Marvel superheroes and Star Wars all the time!
  • Older adults: National Geographic documentaries!
  • Musical fans: we have Hamilton now!

And so on: micro-segmentation is what adtech in general is good for.

This is why it’s worth looking beyond the headlines, at a boycott that is both more and less than it appears. Facebook will weather this boycott, and so will Disney.


In a timely update, today brings the story of a Dutch broadcaster that killed cookies and saw advertising revenue go way up. It turns out that advertisers don’t need to know much about users, beyond what they are reading or watching, in order to make sensible decisions about whether and how to advertise to them.

Instead of targeting a certain type of customer, advertisers target customers reading a certain type of article or watching a certain type of show.

The article calls this approach "contextual advertising", and according to the results of NPO’s testing, contextual ads convert at least as well as micro-targeted ones, if not better.

In January and February of this year, NPO says, its digital ad revenue was up 62 percent and 79 percent, respectively, compared to last year. Even after the coronavirus pandemic jolted the global economy and caused brands to drastically scale back advertising—and forcing many publications to implement pay cuts and layoffs—NPO's revenue is still double-digit percentage points higher than last year.

Everyone’s happy! Well, except for adtech vendors:

The main explanation is simple: because the network is no longer relying on microtargeted programmatic ad tech, it now keeps what advertisers spend rather than giving a huge cut to a bunch of intermediaries.2

And good riddance to them. Their only value proposition (such as it is) is that they will identify the high-value users browsing, say, NPO’s web site, and enable customers to advertise to those users elsewhere on the web, where the cost of displaying the ad is lower. What’s in it for NPO and other high-value outlets? Nothing; their value is actively being hollowed out. The advertisers aren’t much better off either, because now their ads and their brand are being displayed in cheap locations beside low-value content, instead of on a solid, reliable broadcaster’s web site. Everybody loses, except the adtech creepiness pushers themselves.

The sooner we move away from micro-targeting, the better.


🖼️ Photos by Annie Spratt and Travis Gergen on Unsplash


  1. Although I would argue that a decision to re-open Disneyland etc while the outbreak is still under way is extremely dubious. Easy to say when it’s not my revenue on the line, sure, but I also like to sleep soundly at night. 

  2. There used to be a gendered term here, for no good reason, so I fixed it. 

The Thing With Zoom

Zoom was having an excellent quarantine — until it wasn’t.

This morning’s news is from Bloomberg: Zoom Sued for Fraud Over Privacy, Security Flaws. But how did we get here?

Here is what’s interesting about the Thing with Zoom: it’s an excellent example of a company getting it mostly right for its stated aims and chosen target market — and still getting tripped up by changing conditions.

To recap, very quickly: with everybody suddenly stuck home and forbidden to go to the office, there was an equally sudden explosion in video calling — first for purely professional reasons, but quickly spreading to virtual happy hours, remote karaoke, video play dates, and the like. Zoom was the major beneficiary of this growth, with daily active users going from 10 million to over 200 million in 3 months.

One of the major factors that enabled this explosive growth in users is that Zoom has always placed a premium on ease of use — some would argue, at the expense of other important aspects, such as the security and privacy of its users.

There is almost always some tension between security and usability. Security features generally involve checking, validating, and confirming that a user is entitled to perform some action, and asking them for permission to take it. Zoom generally took the approach of not asking users questions which might confuse them, and removing as much friction as possible from the process of getting users into a video call — which is, after all, the goal of its enterprise customers.

Doing The Right Thing — Wrong

I cannot emphasise enough that this focus on ease of use is what made Zoom successful. I think I have used every alternative, from the big names like WebEx (even before its acquisition by Cisco!), to would-be contenders like whatever Google’s thing is called this week, to has-beens like Skype, to also-rans like BlueJeans. The key use case for me and for Zoom’s other corporate customers is, if I send one of my prospects a link to a video call, how quickly can they show up in my call so that I can start my demo? Zoom absolutely blew away the competition at this one crucial task.

Arguably, Zoom pushed their search for ease of use a bit too far. On macOS, if you click on a link to a Zoom call, a Safari window will open and ask you whether you want to run Zoom. This one click is the only interaction needed, especially if you already have Zoom installed – but it was apparently still too much, so Zoom actually started bundling a hidden web server with their application, purely so that they could bypass this alert.

Sneaking a web server onto users’ systems was bad enough, but worse was to come. First of all, Zoom’s uninstall routine did not remove the web server, and it was capable of reinstalling the Zoom client without user interaction. But what got the headlines was the vulnerability that this combination enabled: a malicious website could join visitors to a Zoom conference, and since most people had their webcam on by default, active video would leak to the attacker.

This behaviour was so bad that Apple actually took the unprecedented step of pushing a silent macOS update to remove the hidden web server and shut Zoom down.

Problem solved?

This hidden-web-server saga was a preview run for what we are seeing now. Zoom had over-indexed on its customers, namely large corporations who were trying to reach their own customers. The issue with being forcibly and invisibly joined to a Zoom video conference simply by visiting a malicious web server did not affect those customers – but it did affect Zoom’s users.

The distinction is one that is crucial in the world of enterprise software procurement, where the person who signs the cheque is rarely the one who will be using the tool. Because of this disconnect, vendors by and large optimise for that economic buyer’s requirements first, and only later (if at all) on the actual users’ needs.

With everyone locked up at home, usage of Zoom exploded. People with corporate accounts used them in the evening to keep up with their social lives, and many more signed up for the newly-expanded free tier. This new attention brought new scrutiny, and from a different angle from what Zoom was used to or prepared for.

For instance, it came to light that the embedded code that let users log in to Zoom on iOS with their Facebook credentials was leaking data to Facebook even for users without a Facebook account. Arguably, Zoom had not done anything wrong here; as far as I can tell, the leakage was due to Facebook’s standard SDK grabbing more data than it was supposed to have, in a move that is depressingly predictable coming from Facebook.

In a normal circumstance, Zoom could have apologised, explained that they had moved too quickly to enable a consumer feature that was outside their usual comfort zone without understanding all the implications, and moved on. However, because of the earlier hidden-web-server debacle, there was no goodwill for this sort of move. Zoom did act quickly to remove the offending Facebook code, but worse was to come.

Less than a week later, another story broke, claiming that Zoom is Leaking Peoples' Email Addresses and Photos to Strangers. Here is where the story gets really instructive.

This "leak" is due to the sort of strategy tax that was almost inevitable in hindsight. Basically, Zoom added a convenience feature for its enterprise customers, called Company Directory, which assumes that anyone sharing the same domain in their email address works for the same company. In line with their guiding principle of building a simple and friction-free user experience, this assumption makes it easier to schedule meetings with one’s colleagues.

The problem only arose when people started joining en masse from their personal email accounts. Zoom had excluded the big email providers, so that people would not find themselves with millions of "colleagues" just because they had all signed up with Gmail accounts. However, they had not made an exhaustive list of all email providers, and so users found themselves with "colleagues" who simply happened to be customers of the same ISP or email provider. The story mentioned Dutch ISPs like xs4all.nl, dds.nl, and quicknet.nl, but the same issue would presumably apply to all small regional ISPs and niche email providers.
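The flawed assumption is easy to state in code. Here is a minimal sketch – with hypothetical names and exclusion list, not Zoom’s actual implementation – of what domain-based "colleague" matching looks like:

```python
# Hypothetical sketch of domain-based "colleague" matching, not
# Zoom's actual code: anyone sharing a non-excluded email domain
# is assumed to work for the same company.

CONSUMER_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def email_domain(address):
    return address.rsplit("@", 1)[1].lower()

def suggested_colleagues(user, all_users):
    domain = email_domain(user)
    if domain in CONSUMER_DOMAINS:
        return []  # big providers are excluded from the directory
    # ...but any other shared domain is treated as one company, so
    # customers of a small ISP all become each other's "colleagues"
    return [u for u in all_users
            if u != user and email_domain(u) == domain]

users = ["alice@xs4all.nl", "bob@xs4all.nl", "carol@gmail.com"]
print(suggested_colleagues("alice@xs4all.nl", users))  # ['bob@xs4all.nl']
print(suggested_colleagues("carol@gmail.com", users))  # []
```

The only fix that scales is inverting the default: treat domains as unrelated unless a customer explicitly claims them, rather than trying to maintain an ever-growing exclusion list of every consumer email provider in the world.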

Ordinarily, this sort of "privacy leak" is a storm in a teacup; it’s no worse than a newsletter where all the names are in the To: line instead of being in Bcc:. However, by this point Zoom was in the full glare of public attention, and the story blew up even in the mainstream press, outside of the insular tech world.

Now What?

Zoom’s CEO, Eric Yuan, issued a pretty comprehensive apology. I will quote the key paragraphs below:

First, some background: our platform was built primarily for enterprise customers – large institutions with full IT support. These range from the world’s largest financial services companies to leading telecommunications providers, government agencies, universities, healthcare organizations, and telemedicine practices. Thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment.

However, we did not design the product with the foresight that, in a matter of weeks, every person in the world would suddenly be working, studying, and socializing from home. We now have a much broader set of users who are utilizing our product in a myriad of unexpected ways, presenting us with challenges we did not anticipate when the platform was conceived.

These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform. Dedicated journalists and security researchers have also helped to identify pre-existing ones. We appreciate the scrutiny and questions we have been getting – about how the service works, about our infrastructure and capacity, and about our privacy and security policies. These are the questions that will make Zoom better, both as a company and for all its users.

We take them extremely seriously. We are looking into each and every one of them and addressing them as expeditiously as we can. We are committed to learning from them and doing better in the future.

It’s too early to say what the long-term consequences for Zoom will be, but this is a good apology, and a reasonable set of early moves by the company to repair its public image. To be clear, the company still has a long way to go, and to succeed, it will need to rebalance its exclusive focus on usability to be much more considerate of privacy and security.

For instance, there were a couple of zero-day bugs found in the macOS client (since patched in Version 4.6.9) which would have allowed privilege escalation. These particular flaws cannot be exploited remotely, so would-be attackers would need access to the operating system already, but it’s still far from ideal. In particular, one of the bugs took advantage of shortcuts that Zoom had taken in its installer, once again in the name of ease of use.

Installers on macOS have the option of running a "preflight" check, where they verify all their prerequisites are met. After this step, they will request confirmation from the user before running the installer proper. Zoom’s installer actually completed all its work in this preflight step, including specifically running a script with root (administrator) privileges. This script could be replaced by an attacker, whose malicious script would then be run with those same elevated privileges.

Personally I hope that Zoom figures out a way to resolve this situation. The user experience is very pleasant (even after installation!), and given that I work from home all the time — not just in quarantine — Zoom is a key part of my work environment.

Lessons To Learn

1: Pivoting is hard

Regardless of the outcome for Zoom, though, this is a cautionary tale in corporate life and communications. Zoom was doing everything right for its previous situation, but this exclusive focus made it difficult to react to changes in that situation. The pivot from corporate enterprise users to much larger numbers of personal users is an opportunity for Zoom if they can monetise this vastly expanded user base, but it also exposes them to a much-changed environment. Corporate users are more predictable in their environments and routines, and in the way they interact with apps and services. Home users will do all sorts of unexpected things and come from unexpected places, exposing many more edge cases in developers’ assumptions.

Companies should not assume that they can easily "pivot" to a whole new user population, even one that is attractively larger and more promising of profits, without making corresponding changes to core assumptions about how they go to market.

2: A good reputation once lost is hard to regain

A big part of Zoom’s problem right now is that they squandered their earlier goodwill with techies when they hid a web server on users’ machines. Without that earlier episode, they might have been able to point out that many of the current problems are tempests in teacups – bugs which certainly need to be fixed, but hardly existential problems.

As it happened, though, the Internet hive mind was all primed to think the worst of Zoom, and indeed actively went looking for issues once Zoom was in the glare of the spotlight. In this situation, there is not much to be done in the short term, apart from what Zoom actually did: apologise profusely, promise not to do it again, and attempt to weather the storm.

One move I have not yet seen them make which would be very powerful would be to hire a well-known security expert with a reputation for impartiality. One part of their job would be to act as figurehead and lightning conductor for the company’s security efforts, but an equally important part would be as internal naysayer: the VP of Nope, someone able to say a firm NO to bad ideas. Hiding a web server? Bad idea. Shortcutting the installer? Bad idea. Assuming everyone with an email address not on a very short list of mega-providers is a colleague of everyone else with the same email domain? Bad idea.


UPDATE: Showing how amazingly prescient this recommendation was, shortly after I published this post, Alex Stamos announced that he was joining Zoom to help them "build up their security program".

Alex Stamos is of course the ex-CSO at Facebook, who since departing FB has made something of a name for himself by commenting publicly about security and privacy issues. As such, he’s pretty much the perfect hire: high public profile, known as an impartial expert, and deeply experienced specifically in end-user security issues, not just the sort of enterprise aspects which Zoom had previously been focusing on.

I will be watching his and Zoom’s next moves with interest.


3: Bottom line: build good products

Most companies need to review both security and usability — but it’s probably worth noting that a good product is the best way of saving yourself. Even in a post-debacle roundup of would-be alternatives to Zoom, Zoom still came out ahead, despite being penalised for its security woes. They still have the best product, and, yes, the one that is easiest to use.

But if you get the other two factors right, you, your good product, and your long-suffering comms team will all have an easier life.


🖼️ Photos by Allie Smith on Unsplash

Be Smart, Use Dumb Devices

The latest news in the world of Things Which Are Too "Smart" For Their Users’ Good is that Facebook have released a new device in their Portal range: a video camera that sits on your TV and lets you make video calls via Facebook Messenger and WhatsApp (which is also owned by Facebook).

This is both a great idea and a terrible one. I am on the record as wanting a webcam for my AppleTV so that I could make FaceTime calls from there:

In fact, I already do the hacky version of this by mirroring my phone’s screen with AirPlay and then propping it up so the camera has an appropriate view.

Why would I do this? One-word answer: kids. The big screen has a better chance of holding their attention, and a camera with a nice wide field of view would be good too, to capture all the action. Getting everyone to sit on the couch or rug in front of the TV is easier than getting everyone to look into a phone (or even iPad). I’m not sure about the feature where the camera tries to follow the speaker; in these sorts of calls, several people are speaking most of the time, so I can see it getting very confused. It works well in boardroom setups where there is a single conversational thread, but even then, most of the good systems I’ve seen use two cameras, so that the view can switch in software rather than waiting for mechanical rotation.

So much for the "good idea" part. The reason it’s a terrible idea in this case is that it’s from Facebook. Nobody in their right mind would want an always-on device from Facebook in their living room, with a camera pointed at their couch, and listening in on the video calls they make. Facebook have shown time and time and time again that they simply cannot be trusted.

An example of why the problem is Facebook itself, rather than any one product or service, is the hardware switch for turning the device’s camera off. The switch is highlighted when it is in the off position, and an LED illuminates… to show that the camera and microphone are off.

Many people have commented that this setup looks like a classic dark pattern in UX, just implemented in hardware. My personal opinion is that the switch is more interesting as an indicator of Facebook’s corporate attitude to internet services: they are always on, and it’s an anomaly if they are off. In fact, they may even consider the design of this switch to be a positive move towards privacy, by highlighting when the device is in "privacy mode". The worrying aspect is that this design makes privacy an anomaly, a mode that is entered briefly for whatever reason, a bit like Private or Incognito mode in a web browser. If you’re wondering why a reasonable person might be concerned about Facebook’s attitude to user privacy, a quick read of just the "Privacy issues" section of the Wikipedia article on Facebook criticism will probably have you checking your permissions. At a bare minimum, I assume that entering "privacy mode" is itself a tracked event, subject to later analysis…

Trust, But Verify

IoT devices need a high degree of trust anyway because of all the information that they are inherently privy to. Facebook have proven that they will go to any lengths to gather information, including information that was deliberately not shared by users, process it for their own (and their advertising customers’) purposes, and do an utterly inadequate job of protecting it.

The idea of a smart home is attractive, no question – but why do the individual devices need to be smart in their own right? Unnecessary capabilities increase the vulnerability surface for abuse, either by a vendor/operator or by a malicious attacker. Instead, better to focus on devices which have the minimum required functionality to do their job, and no more.

A perfect example of this latter approach is IKEA’s collaboration with Sonos. The Symfonisk speakers are not "smart" in the sense that they have Alexa, Siri, or Google Assistant on board. They also do not connect directly to the Internet or to any one particular service. Instead, they rely on the owner’s smartphone to do all the hard work, whether that is running Spotify or interrogating Alexa. The speaker just plays music.

I would love a simple camera that perched on top of the TV, either as a peripheral to the AppleTV, or extending AirPlay to be able to use video sources as well. However, as long as doing this requires a full device from Facebook1 – or worse, plugging directly into a smart TV2 – I’ll keep on propping my phone up awkwardly and sharing the view to the TV.


  1. Or Google or Amazon – they’re not much better. 

  2. Sure, let my TV watch everything that is displayed and upload it for creepy "analysis".3 

  3. To be clear, I’m not wearing a tinfoil hat over here. I have no problem simply adding a "+1" to the viewer count for The Expanse or whatever, but there’s a lot more that goes on my TV screen: photos of my kids, the content of my video calls, and so on and so forth. I would not be okay with sharing the entire video buffer with unknown third parties. This sort of nonsense is why my TV has never been connected to the WiFi. It went online once, using an Ethernet cable, to get a firmware update – and then I unplugged the cable. 

Once More On Privacy

Facebook is in court yet again over the Cambridge Analytica scandal, and one of their lawyers made a most revealing assertion:

There is no invasion of privacy at all, because there is no privacy

Now on one level, this is literally true. Facebook's lawyer went on to say that:

Facebook was nothing more than a "digital town square" where users voluntarily give up their private information

The issue is a mismatch in expectations. Users have the option to disclose information as fully public, or variously restricted: only to their friends, or to members of certain groups. The fact that something is said in the public street does not mean that the user would be comfortable having it published in a newspaper, especially if they were whispering into a friend’s ear at the time.

Legally, Facebook may well be in the right (IANAL, nor do I play one on the Internet), but in terms of users’ expectations, they are undoubtedly in the wrong. However, for once I do not lay all the blame on Facebook.

Mechanisation and automation are rapidly subverting common-sense expectations in a number of fields, and consequences can be wide-reaching. Privacy is one obvious example, whether it is Facebook’s or Google’s analysis of our supposedly private conversations, or facial recognition in public places.

For an example of the reaction to the deployment of these technologies, the city of San Francisco, generally expected to be an early adopter of technological solutions, recently banned the use of facial recognition technology. While the benefits for law enforcement of ubiquitous automated facial recognition are obvious, the adoption of this technology also subverts long-standing expectations of privacy – even in undoubtedly public spaces. While it is true that I can be seen and possibly recognised by anyone who is in the street at the same time as me, the human expectation is that I am not creating a permanent, searchable record of my presence in the street at that time, nor that such a record would be widely available.

To make the example concrete, let’s talk for a moment about number plate recognition. Cars and other motor vehicles have number plates to make them recognisable, including for law enforcement purposes. As technology developed, automated reading of number plates became possible, and it is now widely used for speed limit enforcement. Around here things have gone a step further, with average speeds measured over long distances.

Who could object to enforcing the law?

The problem with automated enforcement is that it is only as good as it is programmed to be. It is true that hardly anybody breaks the speed limit on the monitored stretches of motorway any more – or at least, not more than once. However, there are also a number of negative consequences. Lane discipline has fallen entirely by the wayside since the automated systems were introduced: slow vehicles now cruise in the middle or even outside lanes while the inside lanes sit empty. Automated enforcement has also removed any pressure to consider what speed is appropriate for the conditions, with many drivers continuing to drive at or near the speed limit even in weather or traffic where that speed is totally unsafe. Finally, there is no recognition that, at 4am with nobody on the roads, there is no need to enforce the same speed limit that applies at rush hour.

Human-powered on-the-spot enforcement – the traffic cop flagging down individual motorists – had the option to modulate the law, turning a blind eye to safe speed and punishing driving that might be inside the speed limit but unsafe in other ways. Instead, automated enforcement is dumb (it is, after all, binary) and only considers the single metric it was designed to consider.

There are of course any number of problems with a human-powered approach as well; members of ethnic or social minorities all have stories involving the police looking for something – anything – to book them for. I’m a straight white cis-het guy, and still once managed to fall foul of the proverbial bored cops, who took my entire car apart looking for drugs (that weren’t there) and then left me by the side of the road to put everything back together. However, automated enforcement makes all of these problems worse.

Facial recognition has documented issues with accuracy when it comes to ethnic minorities and women – basically anyone but the white male programmers who created the systems. If police start relying on such systems, people are going to have serious difficulties trying to prove that they are not the person in the WANTED poster – because the computer says they are a match. And that’s if they don’t just get gunned down, of course.

It is notoriously hard to opt out of these systems when they are used for advertising, but when they are used for law enforcement, it becomes entirely impossible to opt out, as a London man found when he was arrested for covering his face during a facial recognition trial on public streets. A faulty system is even worse than a functional one, as its failure modes are unpredictable.

Systems rely on data, and data storage is also problematic. I recently had to get a government-issued electronic ID. Normally this should be a simple online application, but I kept getting weird errors, so I went to the office with my (physical) ID instead. There, we realised that the problem was with my place of birth. I was born in what was then Strathclyde, but this is no longer an option in up-to-date systems, since the region was abolished in 1996. However, different databases were disagreeing, and we were unable to move forward. In the end, the official effectively helped me to lie to the computer, picking an acceptable jurisdiction in order to move forwards in the process – and thereby of course creating even more inaccuracies and inconsistency. So much for "the computer is always right"… Remember, kids: Garbage In, Garbage Out!

What, Me Worry?

The final argument comes down, as it always does with privacy, to the objection that "there’s nothing to fear if you haven’t done anything wrong". Leaving aside the issues we just discussed around the possibility of running into problems even when you really haven’t done anything wrong, the issue is with the definition of "wrong". Social change is often driven by movement in the grey areas of the law, as well as selective enforcement of those laws. First gay sex is criminalised, so underground gay communities spring up. Then attitudes change, but the laws are still on the books; they just aren’t enforced. Finally the law catches up. If algorithms actually are watching all of our activity and are able to infer when we might be doing something that’s frowned upon by some1, that changes the dynamic very significantly, in ways which we have not properly considered as a society.

And that’s without even considering where else these technologies might be applied, beyond our pleasant Western bubble. What about China, busy turning Xinjiang into an open-air prison for the Uyghur minority? Or "Saudi" Arabia, distributing smartphone apps to enable husbands to deny their wives permission to travel?

Expectations of privacy are being subverted by scale and automation, without a real conversation about what that means. Advertisers and the government stick to the letter of the law, but there is no recognition of the material difference between surveillance that is human-powered, and what happens when the same surveillance is automated.


Photo by Glen Carrie and Bryan Hanson via Unsplash


  1. And remember, the algorithms may not even be analysing your own data, which you carefully secured and locked down. They may have access to data for one of your friends or acquaintances, and then the algorithm spots a correlation in patterns of communication, and associates you with them. Congratulations, you now have a shadow profile. And what if you are just really unlucky in your choice of local boozer, so now the government thinks you are affiliated with the IRA offshoot du jour, when all you were after was a decent pint of Guinness? 

How Much Trouble Is Facebook In?

Users (including me) are deleting Facebook, but FB reports no drop in active users. What gives?

It’s not just Bloomberg, either; a survey published in Forbes claims that More Than 1 in 4 Americans Have Deleted Facebook. I’m not American, nor do I play one on TV, but I deleted the FB app from all my devices a while ago. I still have my account, but I went from checking it multiple times per day to glancing at it once every couple of weeks. Informally, I speak to lots of people who have done the same thing.

Once again, what gives?

Counting And Overcounting

There is nothing surprising here: any action is enough for FB to count you as active, so they can claim with a straight face that even someone like me is still "active" for purposes of their statistics – and the rates they can charge advertisers.

Remember when Facebook inflated video viewing stats for two years? Good times, good times. Turned out, they were counting anything over three seconds as if you had viewed the whole thing. The only problem is, it might take you that long to figure out how to dismiss the annoying thing.

Unsurprisingly, advertisers who had been paying through the nose for those video ad placements were not best pleased, especially as the scale of the over-counting became clear:

Ad buying agency Publicis Media was told by Facebook that the earlier counting method likely overestimated average time spent watching videos by between 60% and 80%

On A Mission

Facebook take their mission extremely seriously. The current mission statement reads:

Give people the power to build community and bring the world closer together.

The old formulation was perhaps clearer:

To give people the power to share and make the world more open and connected.

Either way, the Rohingya in Burma1, to cite just one example, might have preferred it if people had not shared libels and built communities around hunting them down and ejecting them from their villages.

Facebook, however, in dogged pursuit of this ideal, builds and maintains so-called shadow profiles, even for users who had the foresight never to sign up for Facebook. These profiles are built up by using various tracking mechanisms that follow users around the Web – famously, the Like button, although supposedly that has now been defanged. One also suspects a certain amount of information sharing between Facebook’s various properties, notably Instagram and WhatsApp.

The AOL Of Our Century

The bottom line is, you’re not getting out of Facebook that easily, if only because of the famous truism of the ad-funded web: "if you’re not paying for it, you’re the product". With Facebook, as with all social media sites, that is true in a very literal sense. What they are selling to their advertisers is exposure to the greatest number of eyeballs, ideally filtered according to certain characteristics. If the pool starts shrinking, their opportunity to make money off advertisers shrinks commensurately. If people start seriously messing with the stats, for instance by using tools like fuzzify.me, such that the filters no longer return groupings of users that are attractive to advertisers, that will also be a problem. Any drop in Daily or Monthly Active Users (DAU and MAU) would be a much more immediate threat, though, and that is why as long as users check Facebook even occasionally, there will never be a serious drop in usage reported – right up until the day the whole thing dies unceremoniously in a corner.


  1. I refuse to call it Myanmar. 

Needy Much, Facebook?

This notification was on my iPad:

A HUNDRED messages? Okay, maybe something blew up. I’ve not been looking at Facebook for a while, but I’ve been reluctant to delete my account entirely because it’s the only way I keep in touch with a whole bunch of people. Maybe something happened?

I open the app, and I’m greeted with this:

Yeah, no notifications whatsoever inside the app.

Facebook is now actively lying to get its Daily Active Users count up. Keep this sort of thing in mind when they quote such-and-such a number of users.

To Facebook, user engagement stats are life itself. If they ever start to slide seriously, their business is toast. Remember in 2016, when Facebook was sued over inflated video ad metrics? Basically, if you scrolled past a video ad in your feed, that still counted as a “view”, resulting in viewer counts that were inflated by 80%.

Earlier this year, Facebook had its first loss in daily active users in the US and Canada. They are still growing elsewhere, but not without consequences, as the New York Times reports in a hard-hitting piece entitled Where Countries Are Tinderboxes and Facebook Is a Match.

At this point, I imagine anyone still working for Facebook is not nearly as forward with that fact at dinner parties or in bars, instead offering the sort of generic “yeah, I work in IT” non-answer that back-office staff at porn sites are used to giving.

This Is Where We Are, July 2017 Edition

A quick review of the status of the Big Three1 social networks as of right now.

It seems Facebook is testing ads in Messenger now, which is an incredibly wrong-headed idea:

Messenger isn’t really a “free time” experience the way Facebook proper is — you use the former with purpose, the latter idly. Advertisements must cater to that, just like anywhere else in the world: you don’t see the same ads on subway walls (where you have to sit and stare) as on billboards (where you have two or three seconds max and your attention is elsewhere).

I always hated Messenger anyway, just out of reflex because they had felt the need to split it off into a separate app. In fact, I kept using Paper until Facebook finally broke it, in no small part because it kept everything together in one app. It also looked good, as opposed to the hot mess of FB’s default apps.

Between that and the “Moments” rubbish junking up the top of every one of the FB apps, I am actively discouraged from using them. At this point I pretty much only open FB if I have a notification from there.

Meanwhile, Twitter is continuing on its slow death spiral. It is finally becoming what it was always described as: a “micro-blogging” platform. People write 100-tweet threads instead of just one blog post, and this is so prevalent that there are tools out there that will go and assemble these threads in one place for ease of reading.

It’s got to the point that I read Twitter (and a ton of blogs via RSS, because I’m old-school that way), but most of my actual interaction these days is via LinkedIn. I even had a post go viral over there - 7000-odd views and more than a hundred likes, at time of writing.

So this is where we are, right now in July 2017: Twitter for ephemeral narcissism, Facebook for interacting with (or avoiding) the same people you deal with day to day, and LinkedIn for actually getting things done.

See you out there.

Photo by Osman Rana on Unsplash


  1. I don’t Instagram, I’m too old for Tumblr, and - oh sorry Snapchat, didn’t see you down there

Privacy? on the Internet?

Periodically something happens that gets everyone very worked up about privacy online. Of course anyone who has ever administered a mail server has to leave the room when that conversation starts, because our mocking laughter apparently upsets people.1

The latest outrage is that Facebook has apparently been messing with people's feeds. No, I don't mean the stuff about filtering out updates from pages that aren't paying for placement.

No, I don't mean the auto-playing videos either. Yes, they annoy me too.

No, it seems that Facebook manipulated the posts that showed up in certain users' feeds, sending them more negative information to see whether this would affect their mood - as revealed, naturally, through their Facebook postings.

Now, it has long been a truism that online, and especially when it comes to Facebook, privacy is dead. The simplistic response is of course "if you wanted it to be a secret, then why did you share it on Facebook?". This is a valid point as far as it goes. The problem is that the early assumptions about Facebook no longer hold true.

Time was, Facebook knew about what you did on Facebook, but once you left the site, you were free to get up to things you might not want to share with everybody. Then those "Like" buttons started proliferating everywhere. Brands and website operators wanted to garner "likes" from users to prove their popularity, or at least the effectiveness of their latest marketing gimmick ("like our site for the chance to win an iPad!").

It turns out that on top of tracking what you actually "like", Facebook can track any page you look at that has a Like button embedded. Given that the things are absolutely everywhere, that gives them probably the most complete picture of any ad network out there.

Then Facebook changed their news delivery options. It used to be that "liking" a page meant that you would see all their updates. Now, it means that about 2% of the people who "like" the page see the updates - unless the page operators choose to pay to amplify their reach... Note that these pages do not necessarily belong to brands and advertisers. If your old school has a page that you "like", in the expectation that you will now receive their updates, you're out of luck. Guess you'd better arrange a fundraiser at your next reunion to gather cash to pay Facebook. On the plus side, you have a built-in excuse for poor attendance at the reunion: "ah, I guess they were in the 98% that Facebook didn't deliver the notifications to".

And now Facebook have gone whole-hog, not just preventing information from reaching users' feeds, but actively changing the contents of the users' feeds - in the name of Science, sure.

This is far beyond what people think they have signed up for. There is a big difference between being tracked on Facebook, and being tracked by Facebook, everywhere you go. The difference is not just moral, but commercial. After all, tracking users across multiple websites has been standard operating procedure for ad networks for a long time now. If you've ever shopped online for something and then seen nothing but ads for that one thing for a month thereafter, you have experienced this first-hand. It's mildly creepy, but at this point everyone is pretty well inured to this level of tracking.

Being tracked by ad networks is different from being tracked by Facebook in one very important way. So far, nobody seems to have figured out a good way to make money with content on the internet. A few people do okay with subscriptions, but it tends to be a niche thing. Otherwise, pretty much everything is ad-funded in some way. Now, banner ads can be annoying, and the tracking can get creepy, but at least the money from the ad impressions is going to the site operator, who provides the content that keeps us all coming back.

The "like" button subverts this mechanism, because it's just as creepy and Big-Brotherish, but none of the money goes to the site's operator. All the money and data go only to Facebook, who are even now trying to figure out how to modify your feed to make you want to buy things. Making you feel bad was only step 1, but not everyone goes straight to retail therapy as a remedy. Step 2 is hacking our exocortices (hosted on Facebook) to manipulate the "buy now!" instinct directly.

If you enjoyed this article, please like it on Facebook.


  1. If you don't know what I'm talking about, let's just say I really, really know what I'm talking about when I say you shouldn't send credit card numbers in the clear, and leave it at that. 

Are you KIDDING ME?

all the trigger warnings

There is a Facebook page entitled "Elliot Rodger is an American hero" (no link, but you can find it easily enough). Facebook offers the ability to report pages that are harassing, so that's what I did - and look what their response is!

Apparently this page does not violate Facebook's Community Standards. These would be the same standards that get people in trouble for posting pictures of mothers breastfeeding, or the kids' bath time.

To quote from those Community Standards:

Facebook does not permit hate speech, but distinguishes between serious and humorous speech. While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.

I would say this is pretty obviously hate speech and not humorous in the least. Look, this isn't 4chan. I have no doubt there are already one million animated gifs of kawaii kittens acting out Elliot Rodger's shooting spree, complete with "Never gonna give you up" on the soundtrack, but that's expected over there. If you have no rules, that's what happens - but if you set rules, guess what? People expect you to enforce them, universally and fairly.

This isn't quite my "boycott Facebook" moment, but it's one more broken thread in the string that's holding me there.

Jumping the fence

Facebook just released their new iPhone client, an app called Paper. It’s quite nice, and gets good reviews.

Bit of a jerk move on the name, mind.

If you are in the US, you can just download Facebook Paper, but if you’re in the rest of the world, you’re out of luck.

Or are you?

There are a few different unofficial ways to get apps onto an iPhone, bypassing these sorts of geographical restrictions: sideloading, changing the country on your existing iTunes account, or creating a whole new Apple ID from scratch.1

Sideloading

Sideloading2 means that you install the app from your computer, but without going through iTunes. You will need to have access to the actual app file, so you will need a co-conspirator in the US to get you the app. Your confederate can find these as .ipa files in the iTunes Media/Mobile Applications subdirectory of their main iTunes directory.

Once you have the relevant .ipa file, you can use the iPhone Configuration Utility3 to load the app onto your phone. Once you’ve done this, the app should behave normally, including for updates.
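For the curious, here is a quick shell sketch of the first step – locating the .ipa files on your co-conspirator's Mac. The path shown is the default iTunes library location of the era; if they moved their library, it will be somewhere else:

```shell
# Default iTunes library location on a Mac; adjust if the library was moved.
# Downloaded App Store apps live here as .ipa archives.
APPS="$HOME/Music/iTunes/iTunes Media/Mobile Applications"

if [ -d "$APPS" ]; then
    # List the app archives so you can pick out the one to send over
    ls "$APPS"/*.ipa
else
    echo "No Mobile Applications directory found at: $APPS"
fi
```

From there it's just a matter of getting the right .ipa file across the Atlantic by whatever means you prefer.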

Changing the country

You can change the country of an existing iTunes account quite easily: open the App Store app, scroll all the way to the bottom of the “Featured” tab, tap on your Apple ID, choose “View Apple ID” in the popup, and tap on “Country/Region” to change to the US store.


There is a pretty big downside to this method: your payment details will be reset, which would not be too bad, except that it also loses any recurring subscriptions you have set up. I have a few that I didn’t want to mess with, so I didn’t follow through, and can’t vouch that this method works.

Creating a new Apple ID

I didn’t want to do this because it seemed like it would be a huge hassle, but it’s actually fairly painless. There is only one wrinkle to be aware of. Apple in their wisdom will not let you create an Apple ID from scratch without setting a means of payment. However, if you sign out from your existing Apple ID, then go to install a free app (such as, oh for instance Facebook Paper), you are prompted to log in with an existing Apple ID or create a new one. If you start the process this way, you will then be able to select “None” for your method of payment.


You’ll need an e-mail address that you have not previously used with Apple to complete the registration. Once you have done this, finish downloading Facebook Paper, then log out of your US account and log back in as yourself.

Facebook Paper should pick up your existing FB credentials saved in iOS and work normally from this point on.


  1. Well, or move physically to another country, but that’s a bit beyond the scope of this post.  

  2. This is the method I used to load Google+ onto my iPad back when it was iPhone only. Remember when we were all excited about G+? 

  3. This page is not really up to Apple’s usual standards: all-lower-case title for a start, and a confusing mix of version numbers and platforms all jumbled together with no explanation.