Showing all posts tagged google:

Deliver A Better Presentation — 2023 Edition

During the ongoing process of getting back on the road and getting used to meeting people in three dimensions again, I noticed a few presenters struggling with displaying slides on a projector. These skills may have atrophied with remote work, so I thought it was time for a 2023 update to a five-year-old blog post of mine where I shared some tips and tricks for running a seamless presentation.

Two Good Apps

One tip that remains unchanged from 2018 is a super-useful (free) Mac app called Display Menu. Its original purpose was to make it easy to change display resolutions, which is no longer as necessary as it once was, but the app still has a role in giving a one-click way to switch the second display from extended to mirrored. In other words, the projector shows exactly what is on your laptop display. You can also do this in Settings > Displays, of course, but Display Menu lives in the menu bar and is much more convenient.

Something else that can happen during presentations is the Mac going to sleep. Caffeine, my original recommendation, is no longer with us, but it has been replaced by Amphetamine. As with Display Menu, this is an app that lives in the menu bar and lets you delay sleep or prevent it entirely. It’s worth noting that entering presenter mode in PowerPoint or Keynote will prevent sleep automatically, but many people like to show their slides in slide sorter view rather than actually presenting¹.

Two Good Techniques

If you use the slide sorter view to control your presentation and jump back and forth, you really need to learn to use Presenter Mode instead. This mode lets you use one screen, typically your laptop's own, as your very own speaker's courtesy monitor, with a thumbnail view of the current and next slides, as well as your presenter notes and a timer. Meanwhile all the audience sees is the current slide, in full screen on the external display. You can also use this mode to jump around in your deck if needed to answer audience questions — but do this sparingly, as it breaks the thread of the presentation.

My original recommendation to set Do Not Disturb while presenting has been superseded by the Focus modes introduced with macOS Monterey. You can still just set Do Not Disturb, but Focus has the added intelligence of preventing notifications only until the end of the current calendar event.² However, you can also create more specific Focus modes to fit your own requirements.

A Nest Of Cables

The cable situation is much better than it was in 2018. VGA is finally dead, thanks be, and although both HDMI and USB-C are still out there, many laptops have both ports, and even if not, one adapter will cover you. Also, that single adapter is much smaller than a VGA brick! I haven't seen a Barco ClickShare setup in a long time; I think everyone realised they were cool, but more trouble than they were worth. Apple TVs are becoming pretty ubiquitous — but do bear in mind that sharing your screen to them via AirPlay will require getting on some sort of guest wifi, which may be a bit of a fiddle. Zoom and Teams room setups have displaced WebEx almost everywhere, and give the best of both worlds: if you can get online, you can join the room's meeting, and take advantage of screen, camera, and speakers.

Remote Tips

All of those recommendations apply to in-person meetings when you are in the room with your audience. I offered some suggestions in that older piece about remote presentations, but five years ago that was still a pretty niche pursuit. Since 2020, on the other hand, all of us have had to get much better at presenting remotely.

Many of the tips above also apply to remote presentations. Presumably you won't need to struggle with cables in your own (home) office, but on the other hand you will need to get set up with several different conferencing apps. Zoom and Teams are duking it out for ownership of this market, with Google Meet or whatever it's called this week a distant third. WebEx and Amazon Chime are nowhere unless you are dealing with Cisco or Amazon respectively, or maybe one of their strategic customers or suppliers. The last few years have seen an amazing fall from grace for WebEx in particular.

At the very least, get Zoom and Teams set up ahead of time, and if possible do a test meeting to make sure they are using the right audio and video devices and so on. Teams in particular is finicky with external webcams, so be ready to use your built-in webcam instead. If you haven't used one of these tools before and you are on macOS Monterey, remember that you will need to grant it access to the screen before you can share anything — and when you do that, you will need to restart the app, dropping out of whatever meeting you are in. This is obviously disruptive, so get this setup taken care of beforehand if at all possible.

Can You See Me Now?

On the topic of remote meetings, get an external webcam, and set it up above a big external monitor — as big as you can accommodate in your workspace and budget. The webcam in your laptop is rubbish, and you can't angle it independently from the display, so one or the other will always be wrong — or quite possibly both.

Your Mac can also now use your iPhone as a webcam. This feature, called Continuity Camera, may or may not be useful to you, depending on whether you have somewhere to put your phone so that it has a good view of you — but it is a far better camera than what is in your MacBook's lid, so it's worth at least thinking about.

I Can See You

Any recent MacBook screen is very much not rubbish, on the other hand, but it is small, and once again, hard to position right. An external display is going to be much more ergonomic, and should be paired with an external keyboard and mouse. We all spend a lot of time in front of our computers, so it's worth investing in our setups.

Apart from the benefits of better ergonomics when working alone, two separate displays also help with running remote presentations, because you can set one to be your presenter screen and share the other with your audience. You can also put your audience's faces on the screen below the webcam, so that you can look "at" them while talking. Setting things up this way also prevents you from reading your slides — but you weren't doing that anyway, right? Right?

I hope some of these tips are helpful. I will try to remember to share another update in another five years, and see where we are then (hint: not the Metaverse). None of the links above was sponsored, by the way — but if anyone has a tool that they would like me to check out, I'm available!


🖼️ Photos by Charles Deluvio and ConvertKit on Unsplash; Continuity Camera image from Apple.


  1. Yeah, I have no idea either. 

  2. This cleverness can backfire if your meeting overruns, though, and all those backed-up notifications hit your screen at once. DING-DING-DING-DING-DING! 

Business Case In The Clouds

A perennial problem in tech is people building something that is undeniably cool, but is not a viable product. The most common definition of "viable" revolves around the size and accessibility of the target market, but there are other factors as well: sustainability, profitability, growth versus funding, and so on.

I am as vulnerable as the next tech guy to this disease, which is just one of many reasons why I stay firmly away from consumer tech. I know myself well enough to be aware that I would fall in love with something that is perfectly suited to my needs and desires — and therefore has a minuscule target market made up of me and a handful of other weirdos.

One of the factors that makes this a constant problem, as opposed to one that we as an industry can resolve and move on from, is that advancing tech continuously expands the frontiers of what is possible, but market positioning does not evolve in the same direction or at the same speed. If something simply can't be done, you won't even get to the "promising demo video on Kickstarter" stage. If on the other hand you can bodge together some components from the smartphone supply chain into something that at least looks like it sort of works, you might fool yourself and others into thinking you have a product on your hands.

The thing is, a product is a lot more than just the technology. There are a ton of very important questions that need to be answered — and answered very convincingly, with data to back up the answers — before you have an actual product. Here are some of the key questions:

  • How many people will buy one?
  • How much are they willing to pay?
  • Given those two numbers, can we even manufacture our potential product at a cost that lets us turn a profit? If we have investors, what are their expectations for the size of that profit?
  • Are there any regulations that would bar us from entering a market (geographical or otherwise)? How much would it cost to comply with those regulations? Are we still profitable after paying those costs?
  • How are we planning to do customer acquisition? If we have a broad market and a low-cost product, we're going to want to blanket that segment with advertising and have as self-service a sales channel as possible. On the other hand, if we are going high-end and bespoke, we need an equally bespoke sales channel. Both options cost money, and they are largely mutually exclusive. And again, that cost comes out of our profit margin.
  • What's the next step? Is this just a one-shot campaign, or do we have plans for a follow-on product, or an expansion to the product family?
  • Who are our competitors? Do they set expectations for our potential customers?
  • How might those competitors react? Can they lower their own prices enough that we have to reduce ours and erode our profit margin? Can they cross-promote with other products while we are stuck being a one-trick pony?

These are just some of the obvious questions, the ones that you should not move a single step forward without being able to answer. There are all sorts of second- and third-order follow-ups to these. Nevertheless, things-that-are-not-viable-products keep showing up, simply because they are possible and technically cool.

Possible, Just Not Viable

One example of how this process can play out would be Google Stadia (RIP). At the time of its launch, everyone was focused on technical feasibility:

[...] streaming games from datacenters like they’re Netflix titles has been unproven tech, and previous attempts have failed. And in places like the US with fixed ISP data caps, how would those hold up to 4-20 GB per hour data usage?

[...] there was one central question. Would it even work?

Some early reviewers did indeed find that the streaming performance was not up to scratch, but all the long-term reports I heard from people like James Whatley were that the streaming was not the problem:

The gamble was always: can Google get good at games faster than games can get good at streaming. And I guess we know (we always knew) the answer now. To be clear: the technology is genuinely fantastic but it was an innovation that is looking - now even more overtly - for a problem to solve.

As far as we can tell from the outside (and it will be fascinating to read the tell-all book when it comes out), Google fixated on the technical aspect of the problem. In fairness, they were and are almost uniquely well-placed to make the technology behind game streaming work: data centers everywhere, fast network connections, and in-house expertise on low-latency data streaming. The part which apparently did not get sufficient attention was how to turn those technical capabilities into a product that would sell.

Manufacturing hardware is already not Google's strong suit. Sure, they make various phones and smart home devices, but they are bit-players in terms of volume, preferring to supply software to an ecosystem of OEMs. However, what really appears to have sunk Stadia is the pricing strategy. The combination of both a monthly subscription and having to buy individual games appears to have been a deal-killer, especially in the face of other streaming services from long-established players such as Microsoft or Sony which only charge a subscription fee.

To recap: Google built some legitimately very cool technology, but priced it in a way that made it unattractive to its target customers. Those customers were already well-served by established suppliers, who enjoyed positive reputations — as opposed to Google's reputation for killing services, one that has been further reinforced by the whole Stadia fiasco. Finally, there was no uniquely compelling reason to adopt Stadia — no exclusives, no special integration with other Google services, just "isn't it cool to play games streamed from the cloud instead of running on your local console?" Gamers already own consoles or game on their phones, especially the ones with the sort of fat broadband connection required to enable Stadia to work; there is not a massive untapped market to expand into here.

So much for Google. Can Facebook — sorry, Meta — do any better?

Open Questions In An Open World

Facebook rebranded as Meta to underline its commitment to a bright AR/VR future in the Metaverse (okay, and to jettison the increasingly stale and negative branding of the Blue App). The question is, will it work?

Early indications are not good: Meta’s flagship metaverse app is too buggy and employees are barely using it, says exec in charge. Always a sign of success when even the people building the thing can't find a reason to spend time with it. Then again, in fairness, the NYT reports that spending time in Meta's Horizon VR service was "surprisingly fun", so who knows.

The key point is that the issue with Meta is not one of technical feasibility. AR/VR are possible-ish today, and will undoubtedly get better soon. Better display tech, better battery life, and better bandwidth are all coming anyway, driven by the demands of the smartphone ecosystem, and all of that will also benefit the VR services. AR is probably a bit further out, except for industrial applications, due to the need for further miniaturisation if it's going to be accepted by users.

The relevant questions for Meta are not tech questions. Benedict Evans made the same point discussing Netflix:

As I look at discussions of Netflix today, all of the questions that matter are TV industry questions. How many shows, in what genres, at what quality level? What budgets? What do the stars earn? Do you go for awards or breadth? What happens when this incumbent pulls its shows? When and why would they give them back? How do you interact with Disney? These are not Silicon Valley questions - they’re LA and New York questions.

The same factors apply to Horizon. It's a given that Meta can build this thing; the tech exists or is already on the roadmap, and they have (or can easily buy) the infrastructure and expertise. The questions that remain are all "but why, tho" questions:

  • Who will use Horizon? How many of these people exist?
  • How will Horizon pay for itself? Subscriptions — in exchange for what value? Advertising — in what new formats?
  • What's the plan for customer acquisition? Meta keeps trying to integrate its existing services, with unified messaging across Facebook, Instagram, and WhatsApp, but it doesn't really seem to be getting anywhere with consumers.
  • Following on from that point, is any of this going to be profitable at Meta's scale? That qualification is important: to move the needle for Zuckerberg & co., this thing has to rope in hundreds of millions of users. It can't just hit a Kickstarter milestone and declare victory.
  • What competitors are out there, and what expectations have they already set? If Valve failed to get traction with VR when everybody was locked down at home and there was a new VR-exclusive Half-Life game¹, what does that say about the addressable market?

None of these are questions that can be answered based on technical capabilities. It doesn't matter how good the display tech in the headsets is, or whether engineers figure out how to give Horizon avatars innovative features such as, oh I don't know, legs. What matters is what people can do in Horizon that they can't do today, IRL or in Flatland. Nobody will don a VR headset to look at Instagram photos; that works better on a phone. And while some people will certainly try to become VR influencers, that is a specialised skill requiring a ton of support; not every aspiring singer, model, or fitness instructor is going to make that transition. Meta will need a clear and convincing answer that is not "what if work meetings but worse in every way".

So there you have it, one failed product and one that is still unproven, both cautionary tales of putting the tech before the actual product.


  1. I love this devastating quote from PCGamesN: "Half-Life: Alyx, [...] artfully crafted though it was, [...] had all the cultural impact of a Michael Bublé album." Talk about vicious! 

Network TV

It is hardly news that the ad load on YouTube has become ridiculous, with both pre-roll and several mid-roll slots, even on shorter videos. In parallel with the rise of annoying ads, YouTube is also deluging me with come-ons for YouTube Premium, their paid ad-free experience. I haven't coughed up because a) I'm cheap, and b) this feels like blackmail: "pay up or we'll make even more annoying unskippable ads".

YouTube charges through the nose for add-on offerings like YouTube Premium or YouTube TV, the US-only streaming replacement for cable TV. The expense of these services highlights just how profitable advertising is for them — and still they need to add more and more slots. The suspicion is of course that individual ads cost less, so YouTube needs to show ever more in order to continue their growth trajectory in the face of competition for eyeballs from the likes of TikTok.

Now news emerges that YouTube is negotiating to add content to its subscription services:

YouTube has been in closed-door talks with streaming broadcasters about a new product the video giant wants to launch in Australia, which industry insiders say is an ambitious play to own the home screen of televisions.
The company is seeking deals with Australian broadcasters to sell subscriptions to services such as Nine-owned Stan and Foxtel’s Binge directly through YouTube, which would then showcase the streamers’ TV and movie content to users.

Not being in Australia, I'm not familiar with either Stan or Binge, but the idea would appear to be to get more users habituated to paying for subscriptions through YouTube. There are already paid-subscription YouTube channels out there, but not many; it seems that most creators have opted for the widest possible distribution and monetisation via ads, instead of direct monetisation via paying subscriptions in exchange for a smaller audience. Perhaps the pull of these shows will be enough to jump-start that model? Presumably the reason for launching this offering in Australia is that it will be a pilot whose results will be watched closely before rolling out in other markets (or not).

This whole approach seems a bit backward to me though. YouTube is pretty unassailably established as the platform for video on the web; TikTok is effectively mobile-only and playing a somewhat different game. What if Google exploited that position by working with ISPs? I'm resistant to paying for YouTube Premium specifically, but if you hid the same amount somewhere in my ISP bill, or made a bundle around it with something else, I'd probably cough up. ISPs that sign up could also implement local caches (presumably part-funded by Google) to improve performance for their users, and maybe get better traffic data to optimise the service — without illegal preferencing, of course.

Instead of trying to jump-start a new revenue stream by offering users a slightly nicer version of something they already get for free, YouTube would do better to get into a channel where users are already habituated to paying for add-on services, and where the incumbents (the ISPs) are desperate to position themselves as more than undifferentiated dumb pipes. A better streaming video experience is already the most obvious reason for most households to upgrade their internet connection, so the link is already there in consumers' minds.

Susan Wojcicki, have your people call me.


🖼️ Photo by Erik Allen on Unsplash

The Framing Continues

The framing of Australia's battle against Google and Facebook continues in a new piece with the inflammatory title Australian law could make internet ‘unworkable’, says World Wide Web inventor Tim Berners-Lee.

Here's what Sir Timothy had to say:

"Specifically, I am concerned that that code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online"

This is indeed the problem: I am not a lawyer, nor do I play one on the internet, so I won't comment on the legalities of the Australian situation — but any requirement to pay for links would indeed break the Web (not the Internet!) as we know it. That, however, is not actually what is at stake, despite Google's attempts to frame the situation that way (emphasis mine):

Google contends the law does require it to pay for clicks. Google regional managing director Melanie Silva told the same Senate committee that read Berners-Lee’s submission last month she is most concerned that the code "requires payments simply for links and snippets."

As far as I can tell, the News Media and Digital Platforms Mandatory Bargaining Code does not actually clarify one way or the other whether it applies to links or snippets. This lack of clarity is the problem with regulations drafted to address tech problems created by the refusal of tech companies to engage in good-faith negotiations. Paying for links, such as the links throughout this blog post, is one thing — and that would indeed break the Web. Paying for snippets, where the whole point is that Google or Facebook quote enough of the article, including scraping images, that readers may not feel they need to click through to the original source, is something rather different.

Lazily conflating the two only helps unscrupulous actors hide behind respected names like Tim Berners-Lee's to frame the argument their own way. In law and in technology, details matter.

And of course you can't trust anything Facebook says, as they have once again been caught over-inflating their ad reach metrics:

According to sections of a filing in the lawsuit that were unredacted on Wednesday, a Facebook product manager in charge of potential reach proposed changing the definition of the metric in mid-2018 to render it more accurate.

However, internal emails show that his suggestion was rebuffed by Facebook executives overseeing metrics on the grounds that the "revenue impact" for the company would be "significant", the filing said.

The product manager responded by saying "it’s revenue we should have never made given the fact it’s based on wrong data", the complaint said.

The proposed Australian law is a bad law, and it is bad because it is based on a misapprehension of the problem it aims to solve.

Privacy Versus AI

There is a widespread assumption in tech circles that privacy and (useful) AI are mutually exclusive. Apple is assumed to be behind Amazon and Google in this race because of its choice to do most data processing locally on the phone, instead of uploading users’ private data in bulk to the cloud.

A recent example of this attitude comes courtesy of The Register:

Predicting an eventual upturn in the sagging smartphone market, [Gartner] research director Ranjit Atwal told The Reg that while artificial intelligence has proven key to making phones more useful by removing friction from transactions, AI required more permissive use of data to deliver. An example he cited was Uber "knowing" from your calendar that you needed a lift from the airport.

I really, really resent this assumption that connecting these services requires each and every one of them to have access to everything about me. I might not want information about my upcoming flight shared with Uber – where it could be accessed improperly, letting someone who knows I am away from home plan a burglary at my house. Instead, I want my phone to know that I have an upcoming flight, and offer to call me an Uber to the airport. At that point, of course I am sharing information with Uber, but I am also getting value out of it. Otherwise, the only one getting value is Uber: they get to see how many people in a particular geographical area received a suggestion to take an Uber and declined it, so they can target those people with special offers or other marketing to persuade them to use Uber next time they have to get to the airport.
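
To make the distinction concrete, here is a minimal sketch in Python of the flow I want. Everything in it is hypothetical – the ask_user prompt and the ride_service client are stand-ins for the real things – but the shape is the point: the inference runs on the phone, and nothing leaves it until I accept the suggestion.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Flight:
        departs_from: str       # airport code, e.g. "LIN"
        departs_at: datetime

    def ask_user(prompt: str) -> bool:
        # Stand-in for a notification tap; nothing here touches the network.
        return input(prompt + " [y/n] ").strip().lower() == "y"

    class ride_service:
        # Hypothetical third-party API client; only ever called after consent.
        @staticmethod
        def request_ride(destination: str, pickup_at: datetime) -> None:
            print(f"Shared with ride service: {destination} at {pickup_at:%H:%M}")

    def suggest_ride(calendar: list[Flight]) -> None:
        # All of this runs locally: scan the on-device calendar for a flight.
        upcoming = [f for f in calendar if f.departs_at > datetime.now()]
        if not upcoming:
            return
        flight = min(upcoming, key=lambda f: f.departs_at)
        pickup_at = flight.departs_at - timedelta(hours=2)
        # The suggestion is computed locally; the ride service has seen nothing yet.
        if ask_user(f"Book a ride to {flight.departs_from} for {pickup_at:%H:%M}?"):
            # Only now, when I get value from it, does any data leave the
            # phone - and only the two fields the booking actually needs.
            ride_service.request_ride(flight.departs_from, pickup_at)

    suggest_ride([Flight("LIN", datetime.now() + timedelta(hours=6))])

What matters is where the network call sits: after I say yes, not before.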

I might be happy sharing a monthly aggregate of my trips with the government – so many by car, so many on foot, or by bicycle, public transport, or ride sharing service – which they could use for better planning. I would absolutely not be okay with sharing details of every trip in real time, or giving every busybody the right to query my location in real time.

So much of the debate is taken up with these unproductive either/or arguments that no progress gets made. I have written about this concept of granular privacy controls before:

The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as the address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically update too - assuming I don't revoke access in the mean time.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game. Instead of letting government agencies and private companies read all the data, users each get access only to those data they need to do their jobs.
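
To show what "segregated, granular access" might look like in practice, here is a minimal sketch in Python. The IdentityRecord class and everything in it are hypothetical; it just illustrates the grant/read/revoke token mechanics described above.

    import secrets
    from dataclasses import dataclass, field

    @dataclass
    class IdentityRecord:
        # One central record per person; companies never read it directly.
        data: dict
        grants: dict = field(default_factory=dict)  # token -> allowed fields

        def grant(self, allowed: set) -> str:
            # Issue a token scoped to specific fields, e.g. just the address.
            token = secrets.token_urlsafe(16)
            self.grants[token] = allowed
            return token

        def read(self, token: str, field_name: str):
            # A company presents its token and sees only what it was granted.
            if field_name not in self.grants.get(token, set()):
                raise PermissionError(f"token not authorised for {field_name!r}")
            return self.data[field_name]

        def revoke(self, token: str) -> None:
            self.grants.pop(token, None)

    me = IdentityRecord({"address": "1 Old Street", "marital_status": "married"})
    card_co = me.grant({"address"})       # credit card company: address only
    me.data["address"] = "2 New Street"   # I move house once, centrally...
    print(me.read(card_co, "address"))    # ...and their token sees the update
    try:
        me.read(card_co, "marital_status")  # not their business
    except PermissionError as err:
        print(err)
    me.revoke(card_co)                    # and access can be withdrawn at any time

The crucial part is that the company holds a revocable pointer to my data, not a copy of it.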

Unfortunately, we are stuck in a stale all-or-nothing discussion: either you surround yourself with always-on internet-connected microphones and cameras, or you might as well retreat to a shack in the woods. There is a middle ground, and I wish more people (besides Apple) recognised that.


Photo by Kyle Glenn on Unsplash

Don't Tell Me What I Can Or Can't Do

So I was watching that Spike Jonze-directed HomePod ad, and I noticed something odd:

See it? No?

How about now?

See, the funny thing is – this was a full-screen video. However, I could not dismiss the warning, which also meant that all the controls stayed visible on the screen, instead of disappearing as they should.

What is this, incompetence, or malice – or both? This was a video embedded on a third-party site. Was Google attempting to prevent me viewing it full-screen unless I clicked through to youtube.com, presumably for some adtech or tracking reason of its own?

Anyway, it’s a good thing that Safari is quite happy to ignore these sorts of shenanigans. It also lets me do picture-in-picture, though I have to click twice to dismiss Google’s useless context menu.

While we’re on that topic, Google’s menu is not just in the way, it’s also insulting: "Stats for nerds"? What is this, elementary school? If I want statistics on the video or the state of my buffer, just give them to me, without silly names.

Google are of course hell-bent on taking over absolutely everything about your browser, whether it’s constantly nagging you to use Chrome, trying to get you to agree to some T&C document before you can do a search, or actually hijacking your keyboard commands.

Can’t we just go back to Google giving good search results and leaving it at that?

Quis Custodiet Ipsos Custodes?

Google is back in the news - and once again, it’s not for anything good. They added a Promotions folder to Gmail some time ago, and the pitch was that all the emails from brands that wanted to engage with you would automagically end up in there.

The problem is that this mechanism works a little too well, as Seth Godin describes:

You take the posts from this blog and dump them into my promo folder--and the promo folder of more than a hundred thousand people who never asked you to hide it.
Emails from my favorite charities end up in my promo folder. The Domino Project blog goes there as well. Emails from Medium, from courses I've signed up for, from services I confirmed just a day earlier. Items sent with full permission, emails that by most definitions aren't "promotions."
Here's a simple way to visualize it: Imagine that your mailman takes all the magazines you subscribe to, mixes them in with the junk mail you never asked for, and dumps all of it in a second mailbox, one that you don't see on your way into the house every day. And when you subscribe to new magazines, they instantly get mixed in as well.

It may be that this mechanism has recently received a revamp, as others are reporting sudden impacts on their newsletters.


The charitable explanation would be that Google’s system may be extrapolating from a few people who hit "report as spam" instead of "unsubscribe". However, there is an inherent conflict of interest when an advertising-funded company offers to rid us of unwanted advertising in the one channel in which it does not itself sell advertising.

I wrote about this issue the last time this functionality was showing up in breathless headlines about how "Google kills email spam!!!1!":

the actual reason Google is doing this is to reduce or even eliminate a channel marketers can use to connect with consumers without going through Google. Subscribing to e-mail updates is a direct connection between consumers and brands. Google would rather be the middleman in that transaction, selling AdWords to brands and collecting a toll on all the traffic.

Much like Facebook choking off unpaid organic reach in favour of forcing operators of pages (including free community pages!) to pay to promote their content, Google is choking off what had been a communications channel on which it did not collect a tax. Facebook was able to do what they did because they own their own platform and can make their own rules. Google might be able to get away with their own cash grab because of the dominance of Gmail in the email world – but email is not just Google.

As convenient as Gmail is, a single middleman becoming this important is very dangerous for email. In the same way, as good as Google Reader was, it became so central to website subscriptions that nearly everything ended up funnelling through there. When Google killed Reader, it was an event of apocalyptic proportions. Fortunately, Google had only killed one RSS platform, and others were able to release their own in short order.

Will Gmail end up like Facebook – or like Reader?


Photo by Mathyas Kurmann on Unsplash

Law in the Time of Google

It’s quite amazing how people misunderstand the intersection between the Internet and the law.¹ A case in point is this article from Reuters, describing the likely consequences of the €2.4B fine that the EU Commission has slapped Google with.

It starts off well enough, discussing how being under the spotlight in this way will probably limit Google’s freedom of movement and decision in the future. Arguably, one of the reasons why Microsoft missed the boat on mobile was the hangover from the EU’s sanctions against them for bundling Internet Explorer with Windows.

However, there is one quote in the article which is quite remarkably misguided:

Putting the onus on the company underlines regulators' limited knowledge of modern technologies and their complexity, said Fordham Law School Professor Mark Patterson.

"The decision shows the difficulty of regulating algorithm-based internet firms," he said. "Antitrust remedies usually direct firms that have violated antitrust laws to stop certain behaviour or, less often, to implement particular fixes.

In the past, Google has shown itself to be adept at rules-lawyering and hunting down the smallest loophole. They are of course hardly alone in this practice, with Uber being the poster-child - or the WANTED poster? - for this sort of arrogant Silicon Valley "ask for forgiveness, not for permission" attitude.

In light of that fact, it is quite smart for the Commission to make Google responsible for working out the terms of enforcement.

Of course I fully expect Google to appeal this ruling - but I also expect them to lose. It is far too late for the various shopping comparison sites whose business was sucked dry by Google, but it does underline that regulators are far more willing to intervene in the construction of these types of "platform" businesses.


For more thoughtful and detailed commentary, I suggest Ben Thompson’s take, where he provides some useful background, as well as answering these key questions:

  • What is a digital monopoly?
  • What is the standard for determining illegal behavior?
  • What constitutes a competitive product?

I do not quite agree with his conclusions, which reflect a US-EU divide in the very conception of competition and monopoly. To generalise wildly, the EU focuses on long-term market structure, where the US focuses on short-term consumer benefit. A superior user experience is good, in the American conception, even if competitive businesses are crushed to deliver it. The European conception is that it is important to foster competitive offerings, even at the expense of the user experience.

Where I think this falls down is that, with Internet products in particular, it is relatively easy for users to create their own experience based on their particular requirements. When Google just spoon-feeds everything from the same search box, the initial baseline experience may be better, but the more specialised tools that satisfy specific requirements suffer and die, or never get developed in the first place.

Another problem is that the Internet giants who concentrate power in this way are all American companies, and often their offerings outside the US are limited or crippled in important ways. I live in Italy, and from here many of Google’s suggestions and integrations don’t work or provide a sub-standard experience.

I want a healthy market of many providers, including sub-scale regional ones, so that I can assemble my own user experience to suit my own requirements. Trusting huge organisations with their own motivations leads to weird places.


  1. I should specify at this point that I am not a lawyer (IANAL), and I don’t even play one on TV, so I won’t comment on the finer legal points of the decision or on whether it is justified according to different definitions of "competition". 


Own Your Interfaces

The greatest benefit of the Internet is the democratisation of technology. Development of customised high-tech solutions is no longer required for success, as ubiquitous commodity technology makes it easy to bring new product offerings to market.

Together with the ongoing move from one-time to recurring purchases, this process of commoditisation moves the basis of competition to the customer experience. For most companies, the potential lifetime value of a new customer is now many times the profit from their initial purchase. This hoped-for future revenue makes it imperative to control the customer's experience at every point.

As an illustration, let us consider two scenarios involving outsourcing of products that are literally right in front of their users for substantial parts of the day.

Google Takes Its Eye Off the Watch

The first is Google and Android's answer to the Apple Watch, Android Wear. As is (usually) their way, Google have not released their own smartwatch product. Instead, they have released the Android Wear software platform, and left it to their manufacturing partners to build the actual physical products.

Results have been less than entirely positive:

If Android Wear is to be taken as seriously as the Apple Watch, we actually need an Android version of the Apple Watch. And these LG watches simply aren't up to the task.

Lacking the sort of singular focus and vertical integration between hardware and software that Apple brings to bear, these watches fail to persuade, and not by a little:

I think Google and LG missed the mark on every level with the Style, and on the basis of features alone that it is simply a bad product.

So is the answer simply to follow Apple's every move?

It is certainly true Google have shown with their Nexus and Pixel phones just how much better a first-party Android phone can be, and it is tempting to extrapolate that success to a first-party Google Watch. However, smartwatches are still very much a developing category, and it is not at all clear whether they can go beyond the current fitness-focused market. In fact, I would not be surprised to see a contraction in the size of the overall smartwatch market. Many people who bought a first-generation device out of curiosity and general technophilia may well opt not to replace that device.

Apple Displays Rare Clumsiness

In that case, let us look at an example outside the smartwatch market - and one where the fumble was Apple's.

Ever since Retina displays became standard first on MacBooks¹ and then on iMacs, Mac users have clamoured for a large external display from Apple, to replace the non-Retina Apple Thunderbolt Display that still graces many desks. Bandwidth constraints meant that this was not easy to do until a new generation of hardware came to market, but Apple fans were disappointed when, instead of their long-awaited Apple Retina 5K Display, they were pointed to a pretty generic-looking offering from LG.

Insult was added to injury when it became known that the monitor was extremely sensitive to interference, and in fact became unusable if placed anywhere near a wifi router:

the hardware can become unusable when located within 2 meters of a router.

Two metres is not actually that close; it's over six feet, if you're not comfortable with metric units. Many home office setups would struggle with that constraint - I know mine would.

Many have pointed out that one of the reasons for preferring expensive Apple solutions is that they are known to be not only beautifully designed, but obsessively over-engineered. It beggars belief that perfectionist, nit-picking Apple would have let a product go to market with such a basic flaw - and yet, today, if an Apple fan spends a few thousand dollars on a new MacBook Pro and a monitor in an Apple Store, they will end up looking at a generic-looking LG monitor all day - if, that is, they can use the display at all.

Google and Apple both ceded control of a vitally important part of the customer experience to a third party, and both are now paying the price in terms of dissatisfied users. There are lessons here that also apply outside of manufacturing and product development.

Many companies, for instance, outsource functions that are seen as ancillary to third parties. A frequent candidate for these arrangements is support - but to view support this way is a mistake. It is a critical component of the user experience, and all the more so because it is typically encountered at times of difficulty. A positive support experience can turn a customer into a long-term fan, while a negative one can put them off for good.

Anecdata Time

A long time ago and far far away, I did a stint in technical support. During my time there, my employer initiated a contract with a big overseas outsourcing firm. The objective was to add a "tier zero" level of support, which could deal with routine queries - the ones where the answer was a polite invitation to Read The Fine Manual, basically - and escalate "real" issues to the in-house support team.

The performance of the outsourcer was so bad that my employer paid a termination fee to end the contract early, after less than one year. Without going into the specifics, the problem was that the support experience was so awful that it was putting off our customers. Given that we sold mainly into the large enterprise space, where there is a relatively limited number of customers in the first place, and that we aimed to cross-sell our integrated products to existing customers, a sudden increase in the number of unhappy customers was a potential disaster.

We went back to answering the RTFM queries ourselves, customer satisfaction went back up into the green, and everyone was happy - well, except for the outsourcer, presumably. The company had taken back control of an important interface with its customers.

Interface to Differentiate

There are only a few of these interfaces and touch-points where a company has an opportunity to interact with its customers. Each interaction is an opportunity to differentiate against the competition, which is why it is so vitally important to make these interactions as streamlined and pleasant as possible.

This requirement is doubly important for companies who sell subscription offerings, as they are even more vulnerable to customer flight. In traditional software sales, the worst that can happen is that you lose the 20% (or whatever) maintenance, as well as a cross-sell or up-sell opportunity that may or may not materialise. A cancelled subscription leaves you with nothing.

A customer who buys an Android Wear smartwatch and has a bad experience will not remember that the watch was manufactured by LG; they will remember that their Android Wear device was not satisfactory. In the same way, someone who spends their day looking at an LG monitor running full-screen third-party applications - say, Microsoft Word - will be more open to considering a non-Apple laptop, or not fighting so hard to get a MacBook from work next time around. Both companies ceded control of their interface with their customers.

Usually companies are very eager to copy Apple and Google's every move. This is one situation where instead there is an opportunity to learn from their mistakes. Interfaces with customers are not costs to be trimmed; instead, they can be a point of differentiation. Treat them as such.


Image by Austin Neill via Unsplash


  1. Yes yes, except for the Air. 

Head in the Vapour

In news which should surprise absolutely nobody, Google - I mean, Alphabet - have killed their ridiculous "Project Ara" modular phone.

Here’s why this was a stupid idea from the beginning. Description from the Project ARA homepage:

The Ara frame is built with durable latches and connectors to keep modules secured. Ara modules are designed around standards, allowing them to work with new generations of frames and new form factors.

All of that means bulk - increased size and weight. Also, you’re still going to be constrained by what can fit on that chassis; there would be a spot where you could fit a camera, but if you want a bigger camera or don’t want a camera at all, this architecture doesn’t help you. It also sounds fragile, with many points of failure. These modules could easily become dislodged in your pocket, so you pull your phone out to take a picture and realise that you need to reconnect the camera module to the phone, but now the OS doesn’t recognise it, so you have to do a hard reboot - and now the sun has set or the child has run off, and you have a handful of modules and nobody to throw them at.

The real problem, though, is the goal of this project. The only attraction of a modular system is the ability to upgrade components piecemeal: instead of buying an entire new phone every 18 months or whatever your replacement cycle is, you can judiciously upgrade just the screen or add a fingerprint reader or an NFC antenna, or so the theory goes.

In practice, nobody wants to do that. First of all, even on desktop systems where the bulk and weight are less of a factor, the market has moved decisively towards fully integrated all-in-one systems. People have voted with their pocketbooks for integrated convenience over flexible modularity. And that's in static desktop applications. When we’re talking about something people carry around all day, bulk and weight are an even bigger factor.

Secondly, most upgrades require many systems to be upgraded at once - at which point you might as well just buy a new phone anyway. This isn’t PC gaming, where you can get measurable benefits from upgrading your video card. Mobile phone hardware is still evolving far more rapidly than desktop hardware, and the benefits of full integration far outweigh the benefits of modularity.

We used to talk about the notion of a Personal Area Network, back when meaningful computing power was too heavy to hold in one hand. The idea was that you would carry a PC in a backpack, a screen in your hand, an earpiece in your ear, maybe something like Google Glass, and so on. By the time the tech would have been there to enable that vision, it was already obsolete, because you can hold more computing power than you can use in the palm of your hand.

We may get back to that vision if wearables take off in a meaningful way, but the idea of modularising the phone itself was a pointless detour.

What it is, is typical Google - I mean Alphabet. Announce some random blue-sky project, let nerds everywhere geek out on how it could work without ever considering whether it should be done in the first place, and then kill it off once it hits the real world. The annoying thing is that Google actually gets credit for doing this over and over again, instead of ridicule for not thinking things through. Yes yes, fail fast and let a thousand flowers bloom and all that, but some adult oversight in the planning phases would not go amiss.

I forget who initially suggested the position of VP of Nope, but I think Google needs one. The idea is that this is an exec, senior enough that they have to be taken seriously, who just sits in the back of the room, and when someone proposes something obviously idiotic, they just clear their throat and say "nope". Their salary would be very well earned.


UPDATE: Just noticed that John Gruber pointed out back in 2014 that the emperor had no clothes, and before that in 2013:

you’d still be throwing out old components on a regular basis, and the march of progress is such that it won’t take long until your base board is outdated too.

Exactly.


Images from the Project ARA homepage while it lasts.