What's A Computer?

So there’s an Apple ad for the iPad Pro out there, which is titled “What’s a computer?”. It’s embedded here, in case you’re like me and don’t see ads on TV:

tl;dr is that the video follows a young girl around as she does various things using her iPad Pro, signing a friend’s cast over FaceTime and sending a picture of it via Messages and so on.

It’s all very cute and it highlights the capabilities of the iPad Pro (and of iOS 11) very well.

However, there is a hidden subtext here: that it is only young people, who grow up knowing nothing but phones and tablets, who will come to think of them as their only devices in this way. Certainly it’s true of my kids; I no longer have any desktop computers in the house, so they have never seen one. There is a Mac mini media server, but it runs headless in a cupboard, so it hardly looks like a “computer”. My wife and I have MacBooks, but they’re our work machines. My personal device is my iPad Pro.

My son actually just started computing classes in school this year, and was somewhat bemused to be faced with an external keyboard and mouse. At least they’ve moved on from CRTs since my day…

There is another group of users who have adopted the iPad enthusiastically, and that is older people. My mother used to invite me for lunch, and then casually mention that she “had some emails” for me to do. She would sit across the room from the computer and dictate to me, because she never felt comfortable doing anything on the infernal machine herself.

Since she got her first iPad a few years ago, she has not looked back. She is now a regular emailer – using the on-screen keyboard, no less, as I have not been able to persuade her to spring for a Pro yet. She surfs the web, comments on pictures of her grandchildren, keeps up with distant friends via Skype and Facebook, and even plays Sudoku.

That last point is particularly significant: for people who grew up long before there were computers in the home, it is a major shift to embrace the frivolous nature of some (most?) of what we do on these devices.

None of this is to say that I disagree with Apple’s thesis in the ad. My own children only really know iPads first-hand. They see adults using laptops occasionally, and of course spending too much time on their phones, but they don’t get to use either of those devices themselves.

I just think that they should do a Volume Two of that ad, featuring older people, and perhaps emphasising slightly different features - zoomed text, for instance, VoiceOver, or the many other assistive technologies built into iOS. Many older people are enthusiastic iPad users, but are not naturally inclined to upgrade, and so may still be using an iPad 2 or an original iPad mini. A campaign to showcase the benefits of the Pro could well get more of these users to upgrade - and that’s a win for everyone.

The Smoke from the Air



Lovely day to be flying into London!

I was flying into LHR this time, but a couple of weeks ago, I was flying out of LCY. Here’s a shot of Canary Wharf from the other direction, looking out along the runway:



This is why, despite flying way too much, I still love the window seat.

Think Outside The Black Box

AI and machine-learning (ML) are the hot topic of the day. As is usually the case when something is on the way up to the Peak of Inflated Expectations, wild proclamations abound of how this technology is going to either doom or save us all. Going on past experience, the results will probably be more mundane – it will be useful in some situations, less so in others, and may be harmful where actively misused or negligently implemented. However, it can be hard to see that stable future from inside the whirlwind.


In that vein, I was reading an interesting article which gets a lot right, but falls down by conflating two issues which, while related, should remain distinct.

there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: The programmers that built it don’t know why AI makes one decision over another.

The black-box nature of AI comes with the territory. The whole point is that, instead of having to write extensive sets of deterministic rules (IF this THEN that ELSE whatever) to cover every possible contingency, you feed data to the system and get results back. Instead of building rules, you train the system by telling it which results are good and which are not, until it starts being able to identify good results on its own.
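To make the contrast concrete, here is a minimal sketch in Python. Everything in it - the event fields, the thresholds, the labels - is invented for illustration, but it shows the shape of the two approaches: in the first, every contingency is written out by hand; in the second, the system is given labelled examples and left to work out its own boundaries.

```python
# Contrived example: classifying monitoring events.
# All field names, thresholds, and labels are made up for illustration.

# Approach 1: deterministic rules, written and maintained by hand.
def classify_with_rules(event):
    if event["severity"] >= 5 and "disk" in event["message"]:
        return "actionable"
    if event["source"] == "test-lab":
        return "ignore"
    return "needs-review"  # ...and so on, rule after rule

# Approach 2: train a model on examples that humans have already
# labelled, and let it generalise from there.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "disk nearly full on prod-db-01",
    "nightly test run completed",
    "latency spike on api gateway",
    "scheduled backup finished",
]
labels = ["actionable", "ignore", "actionable", "ignore"]  # human feedback

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["disk errors reported on prod-web-03"]))
```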

This is great, as developing those rules is time-consuming and not exactly riveting, and maintaining them over time is even worse. There is a downside, though: rules are easy to debug. If you want to know why something happened, you can step through execution one instruction at a time, set breakpoints so that you can dig into what is going on at a precise moment in time, and generally have a good mechanical understanding of how the system works - or how it is failing. None of that is possible with a trained model.

I spend a fair amount of my time at work dealing with prospective customers of our own machine-learning solution. There are two common objections I hear, which fall at opposite ends of the same spectrum, but both illustrate just how different users find these new techniques.

Yes, there is an XKCD for every occasion

The first group of doubters ask to “see the machine learning”. Whatever results are presented are dismissed as “just statistics”. This is a common problem in AI research, where there is a general public perception of a lack of progress over the last fifty years. It is certainly true that some of the overly-optimistic predictions by the likes of Marvin Minsky have not worked out in practice, but there have been a number of successes over the years. The problem is that each time, the definition of AI has been updated to exclude the recent achievement.

Something of the calibre of Siri or Alexa would absolutely have been considered AI by earlier definitions, but now their failure to understand exactly what is meant in every situation is taken to mean that they are not AI. Certainly Siri is not conscious in any way, just a smart collection of responses, but neither is it entirely deterministic in the way that something like Eliza is.1

This leads us to the second class of objection: “how can I debug it?” People want to be able to pause execution and inspect the state of variables, or to have some sort of log that explains exactly the decision tree that led to a certain outcome. Unfortunately machine learning simply does not work that way. Its results are what they are, and the only way to influence them is to flag which are good and which are bad.
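For what it’s worth, the nearest thing ML offers to “debugging” is a feedback loop along these lines. This is another hypothetical sketch, continuing the toy classifier above: there is no breakpoint to set and no decision log to read; the only lever is recording which predictions were wrong and folding the corrections back into the training data.

```python
# Hypothetical feedback loop for the toy classifier above.
# There is nothing to step through; all we can do is collect
# corrections from humans and retrain on the enlarged data set.

corrections = []  # (message, correct_label) pairs supplied by reviewers

def record_feedback(message, predicted_label, correct_label):
    """Note a misclassification so it can feed the next training run."""
    if predicted_label != correct_label:
        corrections.append((message, correct_label))

def retrain(model, messages, labels):
    """Fold accumulated corrections back into the training data."""
    new_messages = messages + [m for m, _ in corrections]
    new_labels = labels + [l for _, l in corrections]
    model.fit(new_messages, new_labels)
    corrections.clear()
    return model
```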

This is where the confusion I mentioned above comes in. When these techniques are applied in a purely technical domain - in my case, enterprise IT infrastructure - the results are fairly value-neutral. If a monitoring event gets mis-classified, the nature of Big Data (yay! even more buzzwords!) means that the overall issue it is a symptom of will probably still be caught, because enough other related events will be classified correctly. If however the object of mis-categorisation happens to be a human being, then even one failure could affect that person’s job prospects, romantic success, or even their criminal record.

This black-box nature of AI and ML is why great care must be taken to ensure that ML is a safe and useful technique in each case - especially in legal matters. The code of law is about as deterministic as it is possible to be; edge cases tend to get worked out in litigation, but the code itself generally aims for clarity. It is also mostly easy to debug: the points of law behind a judicial decision are documented and available for review.

None of these constraints apply to ML. If a faulty facial-recognition algorithm places you at the heart of a riot, it’s going to be tough to explain to your spouse or boss why you are being hauled off in handcuffs. Even if your name is ultimately cleared, there may still be long-term damage done, to your reputation or perhaps to your front door.

It’s important to note that, despite the potential for draconian consequences, the law is actually in some ways a best case. If an algorithm kicks you off Google and all its ancillary services (or Facebook or LinkedIn or whatever your business relies on), good luck getting that decision reviewed, certainly in any sort of timely manner.

The main fear that we should have when it comes to AI is not “what if it works and tries to enslave us all”, but “what if it doesn’t work but gets used anyway”.

Photo by Ricardo Gomez Angel via Unsplash


  1. Yes, it is noticeable that all of these personifications of AI just happen to be female. 

The office is wherever I am

This is a surprisingly practical setup. In this shot (taken with my iPad, because using the phone would have required a mirror) I am on a conference call while also reviewing some slides in PowerPoint. The AirPods mean I don’t get tangled up in wires, and the little stand means I have my hands free to drink coffee or whatever.

It’s not quite as good as doing it all on the iPad’s big screen, but the phone does get connectivity absolutely everywhere, and it’s much less of A Thing to set up in a café than an iPad, let alone a MacBook. I got a Wi-Fi-only iPad, because I would use its cellular modem about once a month if that, and it’s simply not worth it. And for reasons best known to themselves, my phone provider doesn’t offer tethering as an option on any contract I would be remotely interested in.

Hello Frankfurt!



Frankfurt looking very Blade Runner-esque this morning, with the tips of the banks’ towers lost in the mist and low cloud.

This is the start of a new experiment with sharing photos here, as opposed to social media. Let’s see how long it lasts.

More of Me

I have not been posting here nearly as much as I mean to, and I need to figure out a way to fix that.

In my defence, the reason is that I have been writing a lot lately, just not here. I have monthly columns at DevOps.com and IT Chronicles, as well as what I publish over at the Moogsoft blog. I aim to blog weekly, but those commitments already take up three of the four weekly slots in each month - plus I do a ton of other writing (white papers, web site copy, other collateral) that doesn’t get associated so directly with me.

As it happens, though, I am quite proud of my latest three pieces, so I’m going to link them here in case you’re interested. None of these is a product pitch, not even the one on the company blog; they are more reflections on the IT industry and where it is going.

Do We Still Need the Datacenter? - a deliberately provocative title, I grant you, but it was itself provoked by a moment of cognitive dissonance when I was planning for the Gartner Data Center show while talking to IT practitioners who are busily getting rid of their data centers. Gartner themselves have recognised this shift, renaming the event to "IT Infrastructure, Operations Management & Data Center Summit" - a bit of a mouthful, but more descriptive.

Measure What’s Important: DevFinOps - a utopian piece, suggesting that we should embed financial data (cost and value) directly in IT infrastructure, to simplify impact calculation and rationalise decision making. I doubt this will ever come to pass, at least not like this, but it’s interesting to think about.

Is Premature Automation Holding IT Operations Back? - IT at some level is all about automation. The trick is knowing when to automate a task. At what point is premature automation considered not just wasteful, but actively harmful?


Photos by Patrick Perkins on Unsplash

Stick THIS in your carry-on!

Airline carry-on restrictions: WTF?

On my last few trips I’ve noticed a marked uptick in the probability of pax being pulled up for a check of their carry-on bags, leading almost certainly to those bags being gate-checked.

I travel a lot, and mostly for short trips - meaning one to three nights away, often with multiple stops in between. This means that having my luggage checked dramatically increases the chance of SNAFUs. Therefore I own a super-light roll-aboard, pack sparingly, and weigh it before departure to make sure I’m below the stingy 8kg limit.

Lately, even this level of paranoia has not been enough. I have had my below-weight roll-aboard gate-checked because my rucksack was “too big”.

Now this is a standard 20L day-pack, not some expedition-grade monster, and it contains a MacBook, an iPad, their retinue of charger bricks and cables - and that’s pretty much it. Sure, there are usually some mints and a stick of lip-balm rolling around the bottom, but it’s not like I have a tent and a sleeping bag in there. Yet, this is apparently outside the regulation size for a “personal item”.

My question is, what do airlines think a “laptop bag” should be? The only way I could lose any significant weight would be to lose the chargers and cables, and that would negatively affect the usefulness of the laptop just a bit. Meanwhile, women’s handbags are never checked, despite the fact that my wife carries a bigger and heavier bag when she goes shopping than I shoulder for travel.

Airlines seem to have some assumption about what a laptop bag should be, and I’m not sure it survives contact with how anyone actually travels.

The paranoid assumption would be that they are trying to make us pay extra for checked bags. I do indeed often travel on hand-luggage-only fares, but it’s not to save money. In fact, it’s the other way around: I never check luggage unless I am forced to, and therefore I take advantage of these (very slightly) cheaper fares.

The real problem here is people - monsters - putting two bags in the overhead lockers. Regardless of size, this is a jerk move. Unless you’re sitting in an emergency-exit or bulkhead row, where you have to, don’t do this, period. Your roll-aboard goes in the overhead locker, and your personal item goes at your feet. If it doesn’t fit, that’s a clue you’re doing it wrong!

As for airlines: enforce what matters. If the limit is 8kg and someone rocks up with a bag that’s practically spherical and straining at its zippers, and which weighs in at 15kg, that’s fair game for a gate check. In fact I would argue that such blatant fare evasion would be worth a paid gate check, but I don’t want to give airlines any ideas!

However, if someone has a roll-aboard that fits in the cage, doesn’t abuse the weight limit, and has one more bag that they keep “at your feet or under the seat in front of you”, as the announcement puts it - let it go. Seriously, those people are good customers who don’t cause trouble because their entire ambition is to get through the airport as swiftly and efficiently as possible. They have the equipment and experience to do this; all they ask is that you work with them, rather than against them.

This has been your First World Problem for the week.

The Internet of (Insecure) Things

Back in 2014, I wrote an article entitled Why the Blinking Twelves is an Internet of Things problem in the making. If you’re not familiar with the idiom of the "blinking twelves", allow me to enlighten you:

Back in the last century, digital clocks with seven-segment displays became ubiquitous, including as part of other items of home electronics such as VCRs. When first plugged in, these would blink "12:00" until the time was set by the user.
Technically-minded people soon noticed that when they visited less technical friends or relatives, all the appliances in the house would still be showing the "blinking twelves" instead of the correct time. The "blinking twelves" rapidly became short-hand for "civilians" not being able to – or not caring to – keep up with the demands of ubiquitous technology.
One of the most frustrating things for techies about the "blinking twelves" was that nobody else seemed to care or even notice the problem that was driving them nuts. How could people not see the blinking twelves all around them, and do something about them?
It took Windows to make the problem obvious. Windows computers brought a much higher level of technological complexity; the computer needed regular maintenance, and people rapidly realised that updates and patches were required at regular intervals if their computers were to remain functional and secure.
The problem that we are facing is that technology has already begun to spread beyond the desktop. Even the most technophobic now carry a phone that is "smart" to a greater or lesser degree, and many people treat these devices much like their old VCRs, installing them once and then forgetting about them. However, all of these devices are running 24/7, connected to the public Internet, with little to no management or updates.

In the three years since I wrote that article, the number of Internet-enabled devices has simply exploded.

I know, it’s from Business Insider, take it with a large grain of salt - but the trend is unarguable.

Here’s the problem: all of those Internet-enabled Things are cheap, and therefore based on existing components, including software. Most software, at least below the level of the specialised RTOSen found in nuclear power plants and the like, is built around the assumption of regular maintenance and updates provided by knowledgeable operators. However, once these Things are deployed in the field, where "in the field" often means the home or office of people who are not IT professionals, it is a given that they will not receive that level of care.

When something like KRACK hits, the odds are good that the manufacturers of many devices will already have disappeared without providing patches. Even for devices from more stable vendors who do provide ongoing support, the device may be obsolete, superseded by newer versions with incompatible architectures. But even supposing that all the stars align and the patch is available, it will still not be deployed widely - because of the "blinking twelves" problem. Non-specialist owners will not know or care to update their devices, and so the cycle continues.

Our only hope is that we are saved by our devices' obsolescence, as the lack of updates eventually prevents them from functioning at all. Maybe that won’t be the final straw, but soon enough the fig leaf in every click-through agreement about the software being "provided as is" with "no warranty of merchantability or fitness for purpose" will be ripped away, in favour of the sorts of consumer protection regulations that these same devices would be subject to if they were not Internet-enabled.

The alternative is that in the Smart Home of the Future that we keep being promised, troubleshooting steps really will require us to close all the windows, exit, and start what we were doing all over again.

Me, I’ll move to a cabin in the woods.


Photo by Heather Zabriskie on Unsplash

Living in the Future

So far this week my phone got me on a plane:

Around London by Tube and train:

Got me coffee, dinner and some light shopping:

Let me summon a car to my exact location, and pay for the trip:

Oh, and I wrote and published this blog post on my phone.

And I think I also made some phone calls, all while keeping on top of email, using social media, reading books and magazines, listening to music and podcasts, and finding my way around.

Tell me again how phones are overpriced and boring? 🤔


You may have noticed that none of those images are of my actual phone, or of my own boarding passes and credit cards. That is because I am not a complete idiot. Yes, people really do post pictures of those online. No, it is very much not a good idea.