Not Biting My Tongue

I spend a lot of time explaining the behaviour of enterprise buyers and vendors. There are often perfectly good reasons for doing something in a way that is now considered old-fashioned or uncool. Especially for vendors, the argument of "people still buy X! for money!" is a powerful incentive to continue making X.

Where things go wrong is when stodgy enterprise vendors put on their dad-jeans and go down to the skate park.

Case in point: BMC trying to jump on the AIOps bandwagon. The whole thing is a pretty spectacular case study in missing the point, but I think this paragraph is the nadir:

As mentioned above, AIOps platforms should encompass the IT disciplines of Performance Management, Service Management, Automation, and Process Improvement, along with technologies such as monitoring, service desk, capacity management, cloud computing, SaaS, mobility, IoT and more.

If you’re not familiar with AIOps, it’s a model that Gartner came up with (paid link, unless you’re a Gartner subscriber) to describe some shifts in the IT operations market. The old category of ITOA had been broadened to the point that it was effectively meaningless, and AIOps recognises a new approach to the topic.

The first thing to know about AIOps is that the "AI" bit did not originally stand for Artificial Intelligence. When the term was originally coined, AIOps actually stood for Algorithmic IT Operations. However, in these fallen times when everyone and their dog claims AI, Machine Learning, or other poorly-understood snake-oil, everyone assumed that AIOps was something to do with AI. Even Gartner have now given up, and retconned it to "Artificial Intelligence for IT Operations".

Anyway, AIOps solutions sit at the intersection of monitoring, service desk, and automation. The idea is that they ingest monitoring data, apply algorithms to help operators find valuable needles in the haystack of alerts, sync with service desk systems to plug in to existing processes, and trigger automated diagnostic and resolution activities.
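As an illustration, here is a minimal (and deliberately naive) sketch of that first stage: deduplicating raw alerts and grouping the survivors into time-based "situations" for an operator to triage. All names and data are invented for the example; real AIOps products use far more sophisticated clustering than a fixed time window.

```python
from collections import defaultdict

# Hypothetical alert stream: (timestamp_sec, host, check, message)
alerts = [
    (0,  "db01",  "cpu",  "CPU > 95%"),
    (2,  "db01",  "cpu",  "CPU > 95%"),       # exact repeat of the first
    (5,  "web03", "http", "500 errors rising"),
    (7,  "db01",  "disk", "I/O latency high"),
]

WINDOW = 30  # seconds: alerts this close together may share a cause

def correlate(alerts, window=WINDOW):
    """Deduplicate identical alerts, then bucket the survivors by time
    into candidate 'situations' for an operator to triage."""
    seen, unique = set(), []
    for ts, host, check, msg in alerts:
        key = (host, check, msg)
        if key not in seen:            # crude dedup: drop exact repeats
            seen.add(key)
            unique.append((ts, host, check, msg))
    situations = defaultdict(list)
    for ts, host, check, msg in unique:
        situations[ts // window].append((host, check, msg))
    return list(situations.values())

for i, situation in enumerate(correlate(alerts)):
    print(f"Situation {i}: {situation}")
```

Even this toy version shows the shift in emphasis: the value is in the algorithm reducing alert noise, not in any pre-built model of the infrastructure.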

So far so good - but here’s why it’s so laughable for BMC to claim AIOps.

BMC’s whole model is BSM - Business Service Management. Where the centre of AIOps is the algorithms, the centre of BSM is the CMDB.

The model for applying BSM goes something like this:

  1. Fully populate CMDB: define service models & document infrastructure
  2. When an alert comes in, determine which infrastructure element it came from, then walk the service model to determine what the cause and effect are
  3. Create a ticket in the ITSM suite to track resolution
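Step 2 above can be sketched as a graph walk. The CMDB fragment below is entirely hypothetical; the point is only that the BSM approach works by traversing a pre-documented dependency model:

```python
# Hypothetical CMDB fragment: each element maps to the elements that
# depend on it. In a BSM suite, this graph is the service model.
depends_on_me = {
    "storage-array": ["db-server"],
    "db-server":     ["app-server"],
    "app-server":    ["web-frontend"],
    "web-frontend":  ["online-banking"],   # the business service
}

def impacted_services(element, graph):
    """Walk 'up' the service model from a failing element to every
    dependent element and service (step 2 of the BSM model above)."""
    impacted, stack = set(), [element]
    while stack:
        node = stack.pop()
        for dependent in graph.get(node, []):
            if dependent not in impacted:
                impacted.add(dependent)
                stack.append(dependent)
    return impacted

print(impacted_services("storage-array", depends_on_me))
```

Note that the walk only works if the failing element and all of its dependents are already in the CMDB, which is exactly the assumption examined below.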

Note the hidden assumptions, even in this grossly over-simplified version:

  1. The CMDB can be fully populated given finite time and effort
  2. All alerts relate to known elements, and all elements have known dependencies
  3. Every failure has one cause and falls within one group’s area of responsibility

In today’s IT, precisely none of these assumptions hold true. No matter how much effort and how many auto-discovery tools are thrown at the task, the CMDB will always be a snapshot in time[1]. Jorge Luis Borges famously documented the logical endpoint of this progression:

... In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

purportedly from Suárez Miranda, Travels of Prudent Men, Book Four, Ch. XLV, Lérida, 1658

There is also a timing factor: what happens if an alert comes in between a change occurring and being documented? Another question is, what happens if operators simply don’t have visibility into part of the infrastructure - say, managed hosting, or outside telco networks? And finally, the big one: what if there is no one root cause? Modern architectures are sufficiently robust and resilient that it’s quite rare for any one macro-event to take them out. What gets you is usually a combination of a number of smaller issues, all occurring together in some unforeseen way.

The whole architecture of BSM is built around assumptions that are less and less true. This is not to say that individual products within that suite don’t have value, but the old BSM model is no longer fit for purpose. The result is an example of "shipping the org chart": the CMDB is at the core and Remedy is the interface, because that is what the organisation demands. However, you can’t just drape AIOps over the old suite and call it good! Radical changes are required, not weak attempts to shoe-horn existing "IT disciplines" into the new mold.

AIOps represents the algorithmic convergence of ITOM & ITSM. In the BSM model, by contrast, these are treated as discrete steps in a sequential process. This is Waterfall thinking applied to IT Ops, where today’s IT infrastructures demand Agile thinking.

The most relevant question for users is, of course, "do I trust a legacy vendor to deliver a new model that is so radically different from what it has built its entire strategy around?"

The answer is simple, because it’s determined by the entire structure and market position of all the Big Four vendors. Like its peers, BMC makes its revenue in the old model of IT. As long as there is money to be made by doing the same things it has always done, there is enormous inertia to work against (the Innovator’s Dilemma in action). It takes an existential threat to disturb that sort of equilibrium. It was not until ServiceNow was seriously threatening the Remedy user base that BMC started to offer SaaS options and subscription pricing. It will take an equivalent upheaval in its business for any legacy vendor to adopt a radically new strategy like AIOps. These days, customers can’t wait for one vendor to see the writing on the wall; they need to move at the speed their customers require.

Much as I would like to believe that we have got BMC running scared, I don’t think that’s the case - so they will continue along their very profitable way. This is of course exactly how it should be! If they were to jump on every new bandwagon, their shareholders would be rightly furious. They absolutely should focus on doing what they do well.

But that does not include doing AIOps. If you’re a practitioner looking at this, I hope it’s obvious who you want to go with: the people creating the new model, who are steeped in what is required to deliver and adopt it - or the ones who see a keyword trending on Google and write a quick ambulance-chasing blog post, claim that Remedy is a key part of AIOps, or even that mainframes are.


  1. Which is why BMC’s own automation products have their separate real-time operational data stores, which sync with the CMDB on a schedule.

Biting My Tongue

So I'm working with a prospect in the fashion and luxury goods area. We've been doing a Proof of Value for the last few weeks, and we're now at the point of presenting the results.

So I built this slide deck as if it were a fashion collaboration, "Moogsoft X $PROSPECT_NAME", "Spring-Summer 2017", and so on. I'm super proud of it - not just the conceit, but also the results we have been able to provide for very little effort - but I'm also kind of bummed that I can never show it to anyone outside the company.

This prospect does not want its name used anywhere, so even if - I mean, when we close the deal, they will only ever appear anywhere as "fashion & luxury goods house".

This is not the first time this has happened to me. At a previous startup, we sold to, umm, let’s call them a certain automotive manufacturer and motorsports team based near Modena. While negotiating the price, the customer asked for "a last effort" in terms of discounting. In exchange, we asked for them to provide us with an official reference. After consulting with their brand marketing people, it turned out that the fee for use of their trademark would have been nearly twice the total value of the software deal… We respectfully declined their kind offer.

After all, the main thing is to do the deal and provide value; even if we can't get the logo on our site, it's still a win.

My only remaining problem (beyond actually getting the deal over the line) is that my wife wants me to be paid for this current opportunity in handbags, while the Moogsoft colleague who helped me out wants her share in eau de toilette…

New Economy, Meet Old Continent

In the latest setback for Uber, an advocate at the ECJ argued that:

Uber should be considered a transport service. But even if it wasn’t, he still thinks the French law at issue didn’t have to be notified to the EU, because it affects digital services "only in an incidental manner"

Uber has always argued that it is primarily an app facilitating connections between users and independent service providers - who are certainly not employees, no sir, and therefore not owed health insurance, pensions, or indeed much of anything.

Given that the supposedly independent service providers are often full-time drivers, and that the car itself is also often leased from Uber, this claim was always somewhat disingenuous. It is of a piece with other examples of the "sharing economy" (scare quotes very much intended). Very little of Airbnb is people renting out their spare rooms; much of it is people renting out stables of properties, purchased for the explicit purpose of renting them out through Airbnb. There are even companies like Airsorted that will take care of the whole process, letting landlords sit back and take in the proceeds - while conveniently forgetting to pay taxes on their earnings.

This is not "sharing" economy; these are fully professionalised marketplaces. The real sharing economy equivalents are BlaBlaCar or Couchsurfing. These offerings really do enable non-professionals to share something.

I love the services offered by both Uber and Airbnb, and they both filled gaps in their respective markets. Before Uber, getting around a strange city was an exercise in frustration, dealing with unfamiliar currencies, unpredictable waits and routes, and general uncertainty and unpleasantness. Now, I can summon a car to my location and pay through my phone, without any worry about whether I have enough local currency or whatever.

In the same way, I booked my entire summer holiday (except for two nights) via Airbnb. Travelling with two kids is an exercise in expensive frustration, if you are looking at standard hotels. Most places will let you put one cot or foldaway bed in a double room, but when you are travelling with two kids, they want you to book two rooms or bump you up to an expensive suite. Airbnb is a lifesaver, giving a huge supply of inexpensive accommodation for larger groups, all easily accessible and searchable in one location.

If these new services are to have a future, though, they have to comply with laws and regulations, including new rules that might be needed for these new centralised platforms. The EU defining them as full services that are mediated by apps, rather than simple apps with no responsibilities, is an important step in that process.

Law in the Time of Google

It’s quite amazing how people misunderstand the intersection between the Internet and the law.[1] A case in point is this article from Reuters, describing the likely consequences of the €2.4B fine that the EU Commission has slapped Google with.

It starts off well enough, discussing how being under the spotlight in this way will probably limit Google’s freedom of movement and decision in the future. Arguably, one of the reasons why Microsoft missed the boat on mobile was the hangover from the EU’s sanctions against them for bundling Internet Explorer with Windows.

However, there is one quote in the article which is quite remarkably misguided:

Putting the onus on the company underlines regulators' limited knowledge of modern technologies and their complexity, said Fordham Law School Professor Mark Patterson.

"The decision shows the difficulty of regulating algorithm-based internet firms," he said. "Antitrust remedies usually direct firms that have violated antitrust laws to stop certain behaviour or, less often, to implement particular fixes."

In the past, Google has shown itself to be adept at rules-lawyering and hunting down the smallest loophole. They are of course hardly alone in this practice, with Uber being the poster-child - or the WANTED poster? - for this sort of arrogant Silicon Valley, "ask for forgiveness, not for permission", attitude.

In light of that fact, it is quite smart for the Commission to make Google responsible for working out the terms of enforcement.

Of course I fully expect Google to appeal this ruling - but I also expect them to lose. It is far too late for the various shopping comparison sites whose business was sucked dry by Google, but it does underline that regulators are far more willing to intervene in the construction of these types of "platform" businesses.


For more thoughtful and detailed commentary, I suggest Ben Thompson’s take, where he provides some useful background, as well as answering these key questions:

  • What is a digital monopoly?
  • What is the standard for determining illegal behavior?
  • What constitutes a competitive product?

I do not quite agree with his conclusions, which reflect a US-EU divide in the very conception of competition and monopoly. To generalise wildly, the EU focuses on long-term market structure, where the US focuses on short-term consumer benefit. A superior user experience is good, in the American conception, even if competitive businesses are crushed to deliver it. The European conception is that it is important to foster competitive offerings, even at the expense of the user experience.

Where I think this falls down is that, with Internet products in particular, it is relatively easy for users to create their own experience based on their particular requirements. When Google just spoon-feeds everything from the same search box, the initial baseline experience may be better, but the more specialised tools that satisfy specific requirements suffer and die, or never get developed in the first place.

Another problem is that the Internet giants who concentrate power in this way are all American companies, and often their offerings outside the US are limited or crippled in important ways. I live in Italy, and from here many of Google’s suggestions and integrations don’t work or provide a sub-standard experience.

I want a healthy market of many providers, including sub-scale regional ones, so that I can assemble my own user experience to suit my own requirements. Trusting huge organisations with their own motivations leads to weird places.


  1. I should specify at this point that I am not a lawyer (IANAL), and I don’t even play one on TV, so I won’t comment on the finer legal points of the decision or on whether it is justified according to different definitions of "competition".


The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine, should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing the browser processes so that Javascript cannot escape the browser itself, and later, the individual web page that it came from. However, this only dealt with the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say who cares, you only update apps over wifi - but do you only browse on wifi? 14 MB for a few hundred words - that adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

Thoughts about WWDC '17

First of all, let’s get the elephant in the room out of the way; no new iPhone was announced. I was not necessarily expecting one to show up - that seems more suited to a September event, unless there were specific iOS features that were enabled by new hardware and that developers needed to know about.

We did get a whole ton of new features for iOS 11 (it goes up to eleven!), but many of them were aimed squarely at the iPad. With no new iPhone, the iPad got most of the new product glory, sharing only with the iMac Pro and the HomePod (awful name, by the way).

On that note, some people were confused by the iMac Pro, but Apple has helpfully clarified that there is also going to be a Mac Pro and external displays to go with it:

In addition to the new iMac Pro, Apple is working on a completely redesigned, next-generation Mac Pro architected for pro customers who need the highest-end, high-throughput system in a modular design, as well as a new high-end pro display.

I doubt I will ever buy a desktop Mac again, except possibly if Apple ever updates the Mac mini, so this is all kind of academic for me - although I really hope the dark-coloured wireless extended keyboard from the iMac Pro will also be available for standalone purchase.

What I am really excited about is the new 10.5" iPad Pro and the attendant features in iOS 11[1]. The 12.9" is too big for my use case (lots of travel), and the 9.7" Pro always looked like a placeholder device to me. Now we have a full lineup, with the 9.7" non-Pro iPad significantly different from the 10.5" iPad Pro, and the 12.9" iPad Pro there for people who really need the larger size - or maybe just don’t travel with their iPad quite as much as I do.

My current iPad (an Air 2) is my main personal device apart from my iPhone. The MacBook Pro is my work device, and opening it up puts me in "work mode", which is not always a good thing. On the iPad, I do a ton of reading, but I also create a fair amount of content. The on-screen keyboard and various third-party soft-tip styluses (styli?) work fine, but they’re not ideal, and so I have lusted after an iPad Pro for a while now. However, between the lack of sufficient hardware differentiation compared to what I have[2], and lack of software support for productivity, I never felt compelled to take the plunge.

Now, I can’t wait to get my hands on an iPad Pro 10.5".

I already use features like the sidebar and side-by-side multitasking, but what iOS 11 brings is an order of magnitude beyond - especially with the ability to drag & drop between applications. Right now, while I may build an outline of a document on my iPad, I rarely do the whole thing there, because it is just so painful to do any complex work involving multiple switches between applications - so I end up doing all of that on my Mac.

The problem is that there is a friction in working with a Mac; I need (or feel that I need) longer stretches of time and more work-like environments to pull out my Mac. That friction is completely absent with an iPad; I am perfectly happy to get it out if I have more than a minute or so to myself, and there is plenty of room to work on an iPad in settings (such as, to pick an example at random, an economy seat on a short-haul flight) where there is simply no room to type on a Mac.

The new Files app also looks very promising. Sure, you can sort of do everything it does in a combination of iCloud Drive, Dropbox, and Google Drive, and I do - but I always find myself hunting around for the latest revision, and then turning to the share sheet to get whatever I need to where I can actually work on it.

With iOS 11, it looks like the iPad will truly start delivering on its promise as (all together now) a creation device, not just a consumption device.

Ask me again six months from now…

And if you want more exhaustive analysis, Federico Viticci has you covered.


  1. Yes, there was also some talk about the Watch, but since I gave up on fitness tracking, I can't really see the point in that whole product line. That's not to say that it has no value, just that I don't see the value to me. It certainly seems to be the smartwatch to get if you want to get a smartwatch, but the problem with that proposition is that I don't particularly want any smartwatch.

  2. To me this is the explanation for the 13 straight quarters of iPad sales drop: an older iPad is still a very capable device, and outside of very specific use cases, or people upgrading from something like an iPad 2 or 3, there hasn’t been a compelling reason to upgrade - yet. For me at least, that compelling reason has arrived, with the combination of 10.5" iPad Pro and iOS 11. After the holiday quarter, I suppose we will find out how many people feel the same way.

Incentives Drive Behaviour - Security Is No Exception

Why is security so hard?

Since I no longer work in security, I don’t have to worry about looking like an ambulance-chasing sales person, and I can opine freely about the state of the world.

The main problem with security is the intersection of complexity and openness. In the early days of computers there was a philosophical debate about the appropriate level of security to include in system design. The apex of openness was probably MIT’s Incompatible Time-Sharing System, which did not even oblige users to log on - although it was considered polite to do so.

I will just pause here to imagine that ethos of openness in the context of today’s social media, where the situation is so bad that Twitter felt obliged to change its default user icon because the "egg" had become synonymous with bad behaviour online.

By definition, security and openness are always in opposition. Gene "Spaf" Spafford, who knows a thing or two about security, famously opined that:

The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.

Obviously, such a highly-secure system is not very usable, so people come up with various compromises based on their personal trade-off between security and usability. The problem is that this attempt to mediate between two opposite impulses adds complexity to the system, which brings its own security vulnerabilities.

Ultimately, IT security is a constant Red Queen’s Race, with operators of IT systems rushing to patch the latest flaws, knowing all the while that more flaws are lurking behind those, or being introduced with new functionality.

Every so often, maintainers will just throw up their hands, declare a system officially unmaintainable, and move to something else. This process is called "End of Life", and is supposed to coincide with users also moving to the new supported platform.

Unfortunately this mass upgrade does not always take place. Many will cite compatibility as a justification, and certainly any IT technician worth their salt knows better than to mess with a running system without a good reason. More often, though, the reason is cost. In a spreadsheet used to calculate the return on different proposed investments, "security" falls under the heading of "risk avoidance"; a nebulous event in the future, that may become less probable if the investment is made.

For those who have not dealt with many finance people, as a rule, they hate this sort of thing. Unless you have good figures for both the probability of the future event and its impact, they are going to be very unhappy with any proposed investment on that basis.

The result is that old software sticks around long after it should have been retired.

As recently as November 2015, it emerged that Paris’ Orly airport was still operating on Windows 3.1 - an operating system that has not been supported since 2001.

The US military still uses 8" floppy disks for its ICBMs:

"This system remains in use because, in short, it still works," Pentagon spokeswoman Lt Col Valerie Henderson told the AFP news agency.

And of course we are still dealing with the fallout from the recent WannaCry ransomware worm, targeting Windows XP - an operating system that has not been supported since 2014. Despite that, it is still the fourth most popular version of Windows (behind Windows 7, Windows 10, and Windows 8.1), with 5.26% share.

Get to the Point!

It’s easy to mock people still using Windows XP, and to say that they got no more than they deserved - but look at that quote from the Pentagon again:

"This system remains in use because, in short, it still works"

Windows XP still works fine for its users. It is still fit for purpose. The IT industry has failed to give those people a meaningful reason to upgrade - and so many don’t, or wait until they buy new hardware and accept whatever comes with the new machine.

Those upgrades do not come nearly as frequently as they used to, though. In the late Nineties and early Oughts, I upgraded my PC every eighteen months or so (as funds permitted), because every upgrade brought huge, meaningful differences. Windows 95 really was a big step up from Windows 3.1. On the Mac side, System 7 really was much better than System 6. Moving from a 486 to a Pentium, or from 68k to PowerPC, was a massive leap. Adding a 3dfx card to your system made an enormous difference.

Vice-versa, a three-year-old computer was an unusable pile of junk. Nerds like me installed Linux on them and ran them side by side with our main computers, but most people had no interest in doing such things.

These days, that’s no longer the case. For everyday web browsing, light email, and word processing, a decade-old computer might well still cut it.

That’s not even to mention institutional use of XP; Britain’s NHS, for instance, was hit quite hard by WannaCry due to their use of Windows XP. For large organisations like the NHS, the direct financial cost of upgrading to a newer version of Windows is a relatively small portion of the overall cost of performing the upgrades, ensuring compatibility of all the required software, and retraining literally hundreds of thousands of staff.

So, users have weak incentives to upgrade to new, presumably more secure, versions of software; got it. Should vendors then be obliged to ship them security patches in perpetuity?

Zeynep Tufekci has argued as much in a piece for the New York Times:

First, companies like Microsoft should discard the idea that they can abandon people using older software. The money they made from these customers hasn’t expired; neither has their responsibility to fix defects.

Unfortunately, it’s not that simple, as Steven Bellovin explains:

There are two costs, a development cost $d and an annual support cost $s for n years after the "warranty" period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?

The trouble is that n can be large; the support costs could thus be unbounded.

Can we bound n? Two things are very clear. First, in complex software no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.
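Bellovin’s arithmetic can be made concrete with a back-of-the-envelope calculation. The figures below are purely illustrative (my assumptions, not his); the point is only that an unbounded n eventually dwarfs the one-off development cost d:

```python
# Illustrative figures only (my assumptions, not Bellovin's).
d = 5_000_000      # development cost of a release
s = 1_500_000      # annual cost of a post-"warranty" security-patch team

# After how many post-warranty years does cumulative support
# cost (n * s) exceed the original development cost (d)?
n = 1
while n * s <= d:
    n += 1
print(f"Support cost passes development cost after {n} years")
```

With no bound on n, there is no price the vendor could have charged up front that is guaranteed to cover the support bill.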

Incentives matter, on the vendor side as well as on the user side. Microsoft is not incentivised to do further work on Windows XP, because it has already gathered all the revenue it is ever going to get from that product. From a narrowly financial perspective, Microsoft would prefer that everyone purchase a new license for Windows 10, either standalone or bundled with the purchase of new hardware, and migrate to that platform.

Note that, as Steven Bellovin points out above, this is not just price-gouging; there are legitimate technical reasons to want users to move to the latest version of your product. However, financial incentives do matter, a lot.

This is why if you care about security, you should prefer services that come with a subscription.

If you’re not Paying, you’re the Product

Subscription licensing means that users pay a recurring fee, and in return, vendors provide regular updates, including both new features and fixes such as security patches.

As usual, Ben Thompson has a good primer on the difference between one-off and subscription pricing. His point is that subscriptions are better for both users and vendors because they align incentives correctly.

From a vendor’s perspective, one-off purchases give a hit of revenue up front, but do not really incentivise long-term engagement. It is true that in the professional and enterprise software world, there is also an ongoing maintenance charge, typically on the order of 18-20% per year. However, that is generally accounted for differently from sales revenue, and so does not drive behaviour to nearly the same extent. In this model, individual sales people have to behave like sharks, always in motion, always looking for new customers. Support for existing customers is a much lower priority.
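The two revenue models can be sketched with purely illustrative numbers (mine, not Thompson’s). The shape of the curves, not the specific figures, is what drives the behaviour described above:

```python
def perpetual(license_price, maint_rate, years):
    """One-off license plus annual maintenance (e.g. 18-20% of list)."""
    return license_price + license_price * maint_rate * years

def subscription(annual_fee, years):
    """Recurring fee: revenue only continues if the customer renews."""
    return annual_fee * years

# Illustrative prices only -- not taken from any real vendor.
LICENSE, MAINT, SUB = 100_000, 0.18, 40_000

for years in (1, 3, 5, 10):
    p = perpetual(LICENSE, MAINT, years)
    s = subscription(SUB, years)
    print(f"{years:2d}y  perpetual={p:>9,.0f}  subscription={s:>9,.0f}")
```

Under the perpetual model most of the money arrives in year zero, whatever happens afterwards; under the subscription model the vendor only catches up if the customer keeps renewing, which is exactly the incentive alignment at issue.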

Vice versa, with a subscription there is a strong incentive for vendors to persuade customers to renew their subscription - including by continuing to provide new features and patches. Subscription renewal rates are scrutinised carefully by management (and investors), as any failure to renew may well be symptomatic of problems.

Users are also incentivised to take advantage of the new features, since they have already paid for them. When upgrades are freely available, they are far more likely to be adopted - compare the adoption rate for new MacOS or iOS versions to the rate for Windows (where upgrades cost money) or Android (where upgrades might not be available, short of purchasing new hardware).

This is why Gartner expects that by 2020, more than 80 percent of software vendors will change their business model from traditional license and maintenance to subscription.

At Work - and at Home, Too

One final point: this is not just an abstract discussion for multi-million-euro enterprise license agreements. The exact same incentives apply at home.

A few years ago, I bought a cordless phone that also worked with Skype. From the handset, I could make or answer either a POTS call or a Skype voice call. This was great - for a while. Unfortunately, the hardware vendor never updated the phone’s drivers for a newer operating system version, which I had upgraded to for various reasons, including improved security.

For a while I soldiered on, using various hacks to keep my Skype phone working, but when the rechargeable batteries died, I threw the whole thing in the recycling bin and got a new, simpler cordless phone that did not depend on complicated software support.

A cordless phone is simple and inexpensive to replace. Imagine that had been my entire Home of the Future IoT setup, with doorbells, locks, alarms, thermostats, fridges, ovens, and who knows what else. "Sorry, your home is no longer supported."1

With a subscription, there is a reasonable expectation that vendors will continue to provide support for the useful lifetime of their products (and if they don’t, there is a contract with the force of law behind it).

Whether it’s for your home or your business, if you rely on it, make sure that you pay for a subscription, so that you can be assured of support from the vendor.


  1. Smart home support: "Have you tried closing all the windows and then reopening them one by one?"

It Has Come To This

Dear websites that force mobile versions even when I explicitly request the desktop site: please do not hesitate to FOAD. Extra points for losing my context when you do.

Talk Softly

With the advent of always-on devices that are equipped with sensitive microphones and a permanent connection to the Internet, new security concerns are emerging.

Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do "always listening" assistants pose a serious threat to security and privacy, too?

Betteridge’s Law is in effect here. Sure enough, the second paragraph of the article discloses its sources:

Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa.

A whole 566 respondents, you say? From a survey run by a help desk software company? One suspects that the article is over-reaching a bit - and indeed, if we click through to the actual survey, we find this:

Intelligent assistants (e.g., Cortana, Siri, Alexa) used for work-related tasks on company-owned devices had the highest usage rate (19%) of AI technologies

That is a little bit different from what the CSO Online article is claiming. Basically, anyone with a company-issued iPhone who has ever used Siri to create an appointment, set a reminder, or send a message about anything work-related would fall into this category.

Instead, the article leaps from that limited claim to the extrapolation that people will bring their Alexa devices to work and connect them to the corporate network. Leaving aside for a moment the particular vision of hell that is an open-plan office where everyone is talking into the air all the time, what does that mean for the specific recommendations in the article?

  1. Focus on user privacy
  2. Develop a policy
  3. Treat virtual assistant devices like any IoT device
  4. Decide on BYO or company-owned
  5. Plan to protect

These are actually not bad recommendations - but they are so generic as to be useless. Worse, when they do get into specifics, they are almost laughably paranoid:

Assume all devices with a microphone are always listening. Even if the device has a button to turn off the microphone, if it has a power source it’s still possible it could be recording audio.

This is a drug-dealer level of paranoia. Worrying that Alexa might be broadcasting your super secret and valuable office conversations does not even make the top ten list of concerns companies should have about introducing such devices into their networks.

The most serious threat you can get from Siri at work is co-workers pranking you if you enable access from the lock screen. In that case, anyone can grab your unattended iPhone and instruct Siri to call you by some ridiculous name. Of course I would never sabotage a colleague’s phone by renaming him "Sweet Cakes". Ahem. Interestingly, it turns out that the hypothetical renaming also extends to the entry in the Contacts…

The real problem is that these misguided recommendations draw attention away from advice that would actually be useful in the real world. For instance, if you must have IoT devices in the office for some reason, this is good advice:

One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources.

This recommendation applies to any device that needs Internet access but does not require access to resources on the internal network. It avoids the situation where, by compromising a device (or its enabling cloud service), intruders are able to access your internal network in what is known as a "traversal attack". If administrators restrict the device’s access to the network, that also restricts the amount of damage an intruder can do.
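To sketch what that segmentation policy amounts to - with entirely invented subnet addresses - here is a minimal Python illustration of the rule a firewall between the two networks would enforce:

```python
from ipaddress import ip_address, ip_network

# Invented example subnets: a corporate LAN and a guest Wi-Fi network
# where the IoT devices live. Real addressing will differ.
INTERNAL = ip_network("10.0.0.0/16")
GUEST = ip_network("192.168.100.0/24")

def allowed(src, dst):
    """Deny traffic from the guest network into the internal network;
    anything else (e.g. guest -> Internet) passes in this sketch."""
    if ip_address(src) in GUEST and ip_address(dst) in INTERNAL:
        return False
    return True

# An IoT device on the guest network can reach its cloud service...
print(allowed("192.168.100.5", "8.8.8.8"))   # True
# ...but a compromised device cannot pivot to an internal file server.
print(allowed("192.168.100.5", "10.0.3.7"))  # False
```

The point is simply that the guest network is a one-way dead end: even a fully compromised device has nothing internal to traverse to.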

Thinking about access to data is a good idea in general, not just for voice assistants or IoT devices:

Since personal virtual assistants "rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources," their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address.

Any time companies choose to adopt a service that relies on the cloud, their attack surface is not limited to the device itself, but also extends to that back-end service - which is almost certainly outside their visibility and control. Worse, in a BYOD scenario, users may introduce new devices and services to the corporate network that are not designed or configured for compliance with organisations’ security and privacy rules.

Security is important - but let’s focus on getting the basics right, without getting distracted by overly-specific cybersecurity fantasy role-playing game scenarios involving Jason Bourne hacking your Alexa to steal your secrets.

A New Law

I was hanging out on LinkedIn, and I happened to notice a new pop-up, offering to help me boost my professional image with new photo filters.

My professional image may well need all sorts of help, but I do wonder whether this feature was the most productive use of LinkedIn’s R&D time.

Maybe this is the twenty-first century version of Zawinski's Law:

Every social networking app attempts to expand until it has photo filters. Those apps which cannot so expand are replaced by ones which can.

(I did not use the filters.)