The Driver Behind The Curtain

Truly autonomous driving is an incredibly hard problem to solve. It would be hard enough in controlled situations, but in uncontrolled ones, where other road users may or may not be respecting the rules of the road1, it’s pretty close to being impossible to achieve a perfect solution. The best we can hope for is one that is better than the current state of affairs, with distracted human drivers taking an incredible toll on life.

That is the promise of self-driving cars: get the dangerous, unpredictable humans out of the loop. Getting there, however, is tough. It turns out that the tragic death of a woman in Arizona due to a failure of an Uber experiment in autonomous driving may have been caused by the uncanny valley of partial autonomy.

Let’s take it as given that fully-autonomous (Level 5) vehicles are safer than human-driven ones. However, nobody has built one yet. What we do have are vehicles that may on occasion require human occupants to take control, and to do so with very little warning. According to the crash reports, the Uber driver in the Arizona crash had no more than six seconds’ warning of an obstacle ahead, and perhaps as little as 1.3 seconds.

Contrary to some early reports, the driver was not looking at a smartphone (although more time for our phones is one of the benefits to be expected from actual self-driving cars), but at "a touchscreen that was used to monitor the self-driving car software":

"The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review," the [NTSB] report said.

The Uncanny Valley

I wrote about this uncanny valley problem of autonomous vehicles before:

as long as human drivers are required as backup to self-driving tech that works most of the time, we are actually worse off than if we did not have this tech at all.

In the first known fatal accident involving self-driving tech, the driver may have ignored up to seven warnings to put his hands back on the wheel. That was an extreme case, with rumours that the driver may even have been watching a film on a laptop, but in the Arizona case, the driver may have had only between six and 1.3 seconds of warning. If you’re texting or even carrying on a conversation with other occupants of the car, six seconds to context-switch back to driving and re-acquire situational awareness is not a lot. 1.3 seconds? Forget it.

Uber may have made that already dangerous situation worse by limiting the software’s ability to take action autonomously when it detected an emergency condition:

the automated braking that might have prevented the death of pedestrian Elaine Herzberg had been switched off "to reduce the potential for erratic vehicle behavior." Such functions were delegated to the driver, who was simultaneously responsible for preventing accidents and monitoring the system’s performance.

In other words, to prevent the vehicle suddenly jamming on the brakes in unclear situations like the one in Arizona, where "the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path", Uber simply opted to delegate all braking to the "safety driver" – while also requiring her to "monitor the system’s performance". This situation – distracting the driver who is also expected to take immediate (and correct) action in an emergency – could hardly have been better designed to produce the outcome we saw in Arizona.

This is exactly what I predicted in my previous post on Uber:

Along the way to full Level 5 autonomy, we must pass through an “uncanny valley” of partial autonomy, which is actually more dangerous than no autonomy at all.
Adding the desperate urgency of a company whose very survival depends on the success of this research seems like a very bad idea on the face of it. It is all too easy to imagine Uber (or any other company, but right now it’s Uber), with only a quarter or two’s worth of cash in the bank, deciding to rush out self-driving tech that is 1.0 at best.
It’s said that you shouldn’t buy any 1.0 product unless you are willing to tolerate significant imperfections. Would you ride in a car operated by software with significant imperfections?
Would you cross the street in front of one?

What Next?

Uber has now ceased tests of self-driving cars in Arizona, but it is continuing the work in Pittsburgh, having already been kicked out of San Francisco after one of its self-driving cars ran a red light right in front of SFMOMA.

Despite these setbacks, Uber is continuing work on its other projects, such as flying taxis.

That seems perfectly safe, and hardly at all likely to go horribly wrong in its own turn.

[GIF: a drone nearly crashing into a skier during a race at Madonna di Campiglio]

  1. Such as they are; and yes, I am familiar with The Invention Of Jaywalking.

The VP of Nope

I have a character in my head, the VP of Nope. This is pure wish-fulfilment on my part: when everyone was in the room taking an utterly wrong and bone-headed decision, I wish there had been someone present who was sufficiently senior to just say "nnnope" and move on.

It seems I’m not the only one to feel that way, judging by the reactions to my tweet where I mentioned this:

(Scoff all you want, but those are pretty big engagement numbers for me.)

The VP of Nope has to be a VP in order not to have to get bogged down in particulars. Software engineers in particular are very susceptible to getting ideas into their heads which are great in a small context, but have all sorts of problems if you take a step back and look at them again in a wider context.

Here’s an example from my own history. I used to work for a company whose products used fat clients – as in, natively compiled applications for each supported platform. This was fine at the time, but web applications were obviously the future, and so it came to pass that a project was initiated to write thin web clients instead. I was part of the early review group, and we were all horrified to find that the developers had opted to write everything in Flex.

If you are not familiar with Adobe Flex1, it had a very brief heyday as a way to write rich web interfaces, but it ran on top of Adobe’s late, unlamented Flash technology. That dependency caused several serious problems:

  • Corporate IT security policies almost never allowed the Flash plugin to be installed on people’s browsers. This meant that a Flex GUI was either a complete non-starter, or required exceptions to be requested and granted for every single machine that was going to connect to the application back-end, thereby losing most of the benefits of moving away from the fat client in the first place.
  • Thin clients are supposed to be less resource-hungry on the client machine than fat clients (although of course they are much more dependent on network performance). While web browsers were indeed lighter-weight than many fat clients, especially Java-based ones, the Flash browser plugin was a notorious resource hog, nullifying or even reversing any resource savings.
  • While Apple’s iPad was not yet nearly as dominant as it is today, when it is the only serious tablet, it was still very obvious that tablets and mobile devices in general were The Future. Every company was falling over itself to provide some sort of tablet app, but famously, Steve Jobs hated Flash, and articulated why in his open letter, Thoughts on Flash. All of Steve’s reasons were of course in themselves valid and sufficient reasons not to develop anything in Flex, or indeed to require Flash in any way, but the fact that Steve Jobs was committing to never supporting Flash on Apple devices killed Flash dead (and there was much rejoicing). Sure, it took a couple of years for the corpse to stop twitching, but the writing was on the wall.

Building any sort of strategic application in Flex after the release of that letter in April 2010 was a brain-meltingly idiotic and blinkered decision – and all of us on the early review programme said so, loudly, repeatedly, and (eventually) profanely. However, none of us had sufficient seniority to make our opinions count, and so the rough beast, its hour come round at last, slouched towards GA to be born into an uncaring world.

This Is Not A Rare Event

I have any number of examples like this one, where one group took a narrow view of a problem, unaware of or even wilfully ignoring the wider context. In this particular case, Engineering had determined that they could develop a thin web client more quickly and easily by using a piece of Adobe technology than by dealing with the (admittedly still immature) HTML5 tools available at the time. Given their internal metrics and constraints, this may even have been the right decision – but it resulted in an outcome that was so wrong as to actively damage the prospects of what had been until then perfectly viable products.

In such situations, the knock-on effects of the initial fumble are often even worse than the immediate impact, and so it was to prove in this case as well. First, enormous amounts of time, energy, and goodwill were wasted arguing back and forth, and then the whole GUI had to be re-written from scratch a second time without Flex, once it became apparent to enough people what a disaster the first rewrite was. Meanwhile, customers were continuing to use the old fat client, which was falling further and further behind the state of the art, since all of Engineering’s effort was being expended on either rewriting the GUI yet again, or strenuously defending the most recent rewrite against its critics. All of this wasted and misdirected effort was a major contributing factor to later strategic stumbles whose far-reaching consequences are still playing out now, nearly a decade later.

This is what is referred to as an omnishambles, a situation that is comprehensively messed up in every possible way – and the whole thing could have been headed off before it even began by the VP of Nope, quietly clearing their throat at the back of the room and shaking their head, once.

Their salary would be very well earned.


Photo by Vladimir Kudinov on Unsplash


  1. Originally developed by Adobe, it now seems to be staggering through an unloved half-life as an open-source project under the umbrella of the Apache Software Foundation. Just kill it already! Kill it with fire!

Apple Abroad

I am broadly bullish about Apple’s purchase of digital magazine subscription service Texture. I do however have concerns about Apple’s ability and willingness to deliver this service internationally. This concern is based on many past examples of Apple rolling out services to the US (and maybe UK) first, and the rest of the world only slowly, piecemeal, and according to no obvious or consistent logic.

Subscription hell is a real problem, and it creates a substantial barrier for users considering new subscriptions. Even if the financial element were removed, I have had to adopt a strict one-in, one-out policy for podcasts, because I simply don’t have enough hours in the day to listen to them all. (It doesn’t help when The Talk Show does one of its three-hour-long monster episodes, either.) Add a price component to that decision, and I’m even more reluctant to spend money on something I may not use enough to justify the cost. I would love to subscribe to the Financial Times and the Economist, but there is no way I could get through that much (excellent) writing, and they are pretty expensive subscriptions.

On the other hand, the idea of paying for one Netflix-style sub that includes a whole bunch of magazines, so that I can read what I want, seems pretty attractive on the surface. Even better if I can change the mix of consumption from one month (beach holiday) to the next (international business travel) without having to set up a whole bunch of new subs, with all the attendant friction.

Here’s the problem, though. Apple has form in releasing services in the US, and then only rolling them out internationally at a glacially slow pace. I realise that many commentators may not be aware of this issue, so let’s have a quick rundown, just off the top of my head.

News

Apple’s News app is still only officially available in the US, UK, and Australia. Luckily this restriction is pretty easy to fool by setting your iOS device to a region where it is supported, and there you go – the News app is now available on your home screen. Still, it seems an odd miss for what they regularly claim as a strategic service.

Siri on AppleTV

I have ranted before about the shameful lack of Siri on AppleTV, but this issue still hasn’t been resolved. Worse, the list of countries where Siri is available on AppleTV makes no sense. What concerns me, obviously, is the absence of Italy, especially when much smaller countries (the Netherlands? Norway?) are included, but there are other oddities. For instance, French is fine in France and Canada, but not in Belgium. Why? Quebec French differs far more from the French of France than Belgian French does. Also, Siri works just fine in way more countries and languages than are on that list, so it’s far from obvious why it’s not available on tvOS.

The worst is that it is not possible to get around this one, as the restriction is tied to the country where the user’s Apple ID is registered, and that in turn is tied inextricably to the credit card’s billing address. Short of registering a whole new credit card, if you live outside one of the blessed countries, you’re not going to be able to use the Siri remote for its intended function. Given that nobody likes that remote, and fully 20% of its button complement is dedicated to Siri, this limitation substantially detracts from the usage experience of what is already a pretty expensive device.

Apple Pay in Messages

As with Siri on tvOS, this is a weird restriction: Apple Pay works fine in many countries, but in most of them it is not available in Messages. I could understand if this were a banking restriction, but why not enable payments in Apple Store vouchers? Given my monthly spend, I’d be happy to take the occasional bar tab in store credit, and put it towards my iCloud, Apple Music, other subscriptions, and occasional apps. But no, I’m not allowed to do that.

TV app

Returning to the TV theme, if you’re outside a fairly short list of countries, you are still using the old Video app on iOS and tvOS, not the new TV app. Given that the TV app was announced in October of 2016 and launched at the end of that year, this is a pretty long wait. It’s especially annoying if you regularly use both the iTunes Store and a local iTunes library, as those live in separate places, especially in light of the next item.

iTunes Store

Even when a service is available, that doesn’t mean it’s the same everywhere. One of the most glaring examples is that I still can’t buy TV shows through the Italian iTunes Store. I’m not quite sure why this is, unless it’s weird geographical licensing hangovers. Cable TV providers, Amazon, and Netflix all seem to have worked out licensing for simulcast with the US, though, so it is possible to solve this.

Movies are another problem, because even when they are available, sometimes (but not always!) the only audio track is the Italian dubbed version, which I do not want. Seriously, Apple – literally every DVD has multiple audio tracks; could you at least do the same with Movies in the iTunes Store?

And sometimes films or books simply aren’t available in the Italian store, but they are in the US store. It’s not a licensing issue, because Amazon carries them quite happily in both countries. A couple of times I have asked authors on Twitter whether they know what is going on, but they are just as mystified as I am.

It Works In My Country

There is a more complete list of iOS feature availability out there, and I would love it if someone could explain the logic behind the different availability of seemingly similar functionality in certain countries – and the different lists of countries for seemingly identical features! Right now, Apple’s attitude seems to be a variation of the classic support response, “it works on my machine”: “but it works in my country…”.

And that’s why I worry about Apple’s supposed Texture-based revamp of Apple News: maybe it gets locked down so I can’t have it at all, or maybe it’s neutered so I can’t access the full selection of magazines, or some other annoyance. I just wish Apple would introduce an “International” region where, as long as you agree to do everything in English, they just give you full access and call it good, without making us jump through all these ridiculous hoops.

Needy Much, Facebook?

This notification was on my iPad:

A HUNDRED messages? Okay, maybe something blew up. I’ve not been looking at Facebook for a while, but I’ve been reluctant to delete my account entirely because it’s the only way I keep in touch with a whole bunch of people. Maybe something happened?

I open the app, and I’m greeted with this:

Yeah, no notifications whatsoever inside the app.

Facebook is now actively lying to get its Daily Active Users count up. Keep this sort of thing in mind when they quote such-and-such a number of users.

To Facebook, user engagement stats are life itself. If they ever start to slide seriously, their business is toast. Remember in 2016, when Facebook was sued over inflated video ad metrics? Basically, if you scrolled past a video ad in your feed, that still counted as a “view”, resulting in viewer counts that were inflated by 80%.

Earlier this year, Facebook had its first loss in daily active users in the US and Canada. They are still growing elsewhere, but not without consequences, as the New York Times reports in a hard-hitting piece entitled Where Countries Are Tinderboxes and Facebook Is a Match.

At this point, I imagine anyone still working for Facebook is not nearly as forward with that fact at dinner parties or in bars, instead offering the sort of generic “yeah, I work in IT" non-answer that back-office staff at porn sites are used to giving.

Cloud Adoption Is Still Not A Done Deal

I have some thoughts on this new piece from 451 Research about IT provisioning. The report is all about how organisations that are slow to deliver IT resources will struggle to achieve their other goals. As business becomes more and more reliant on IT, the performance of IT becomes a key controlling factor for the overall performance of the entire business.

This connection between business and IT is fast becoming a truism; very few businesses could exist without IT, and most activities are now IT-enabled to some extent. If you’re selling something, you’ll have a website. People need to be able to access that website, and you need to make regular changes as you roll out new products, run sales promotions, or whatever. All of that requires IT support.

Where things get interesting is in the diagnosis of why some organisations succeed and others do not:

Just as internal IT culture and practices have an impact on provisioning time, they can also severely impact acceptance of technologies. Although the promise of machine learning and artificial intelligence (AI) is emerging among IT managers who took early steps toward machine-enabled infrastructure control, much work remains in convincing organizations of the technologies' benefits. In fact, the more manual the processes are for IT infrastructure management, the less likely that IT managers believe that machine learning and AI capabilities in vendor products will simplify IT management. Conversely, most managers in highly automated environments are convinced that these technologies will improve IT management.

If the IT team is still putting hands on keyboards for routine activities, that’s a symptom of some deeper rot.

It may appear easy to regard perpetual efforts of organizations to modernize their on-premises IT environments as temporary measures to extract any remaining value from company-owned datacenters before complete public cloud migration occurs. However, the rate of IT evolution via automation technologies is accelerating at a pace that allows organizations to ultimately transform their on-premises IT into cloudlike models that operate relatively seamlessly through hybrid cloud deployments.

The benefits of private cloud are something I have been writing about for a long time:

The reason this type of organisation might want to look at private cloud is that there’s a good chance that a substantial proportion of that legacy infrastructure is under- or even entirely un-used. Some studies I’ve seen even show average utilisation below 10%! This is where they get their elasticity: between the measured service and the resource pooling, they get a much better handle on what that infrastructure is currently used for. Over time, private cloud users can then bring their average utilisation way up, while also increasing customer satisfaction.

The bottom line is, if you already own infrastructure, and if you have relatively stable and predictable workloads, your best bet is to figure out ways to use what you already have more efficiently. If you just blindly jump into the public cloud, without addressing those cultural challenges, all you will end up with is a massive bill from your public cloud provider.

Large organisations have turning circles that battleships would be embarrassed by, and their radius is largely determined by culture, not by technology. Figuring out new ways to use internal resources more efficiently (private cloud), perhaps in combination with new types of infrastructure (public cloud), will get you where you need to be.

That cultural shift is the do-or-die, though. The agility of a 21st century business is determined largely by the agility of its IT support. Whatever sorts of resources the IT department is managing, they need to be doing so in a way which delivers the kinds of speed and agility that the business requires. If internal IT becomes a bottleneck, that’s when it gets bypassed in favour of that old bugbear of shadow IT.

IT is becoming more and more of a differentiator between companies, and it is also a signifier of which companies will make it in the long term – and which will not. It may already be too late to change the culture at organisations still mired in hands-on, artisanal provisioning of IT resources; for everyone else, completing that transition should be a top priority.


Photo by Amy Skyer on Unsplash


Privacy Versus AI

There is a widespread assumption in tech circles that privacy and (useful) AI are mutually exclusive. Apple is assumed to be behind Amazon and Google in this race because of its choice to do most data processing locally on the phone, instead of uploading users’ private data in bulk to the cloud.

A recent example of this attitude comes courtesy of The Register:

Predicting an eventual upturn in the sagging smartphone market, [Gartner] research director Ranjit Atwal told The Reg that while artificial intelligence has proven key to making phones more useful by removing friction from transactions, AI required more permissive use of data to deliver. An example he cited was Uber "knowing" from your calendar that you needed a lift from the airport.

I really, really resent this assumption that connecting these services requires each and every one of them to have access to everything about me. I might not want information about my upcoming flight shared with Uber – where it can be accessed improperly, leading to someone knowing I am away from home and planning a burglary at my house. Instead, I want my phone to know that I have an upcoming flight, and offer to call me an Uber to the airport. At that point, of course I am sharing information with Uber, but I am also getting value out of it. Otherwise, the only one getting value is Uber. They get to see how many people in a particular geographical area received a suggestion to take an Uber and declined it, so they can then target those people with special offers or other marketing to persuade them to use Uber next time they have to get to the airport.

I might be happy sharing a monthly aggregate of my trips with the government – so many by car, so many on foot, or by bicycle, public transport, or ride sharing service – which they could use for better planning. I would absolutely not be okay with sharing details of every trip in real time, or giving every busybody the right to query my location in real time.
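To make that concrete, here is a minimal sketch in Python of the kind of aggregation I have in mind (the trip records are invented for illustration): only per-month counts by mode of transport leave the device, while the raw trips, with their exact dates, origins, and destinations, never do.

```python
from collections import Counter
from datetime import date

# Hypothetical raw trip records -- in reality these would stay on the device.
trips = [
    {"when": date(2018, 5, 2), "mode": "car", "origin": "home", "dest": "office"},
    {"when": date(2018, 5, 3), "mode": "bicycle", "origin": "home", "dest": "office"},
    {"when": date(2018, 5, 3), "mode": "car", "origin": "office", "dest": "airport"},
    {"when": date(2018, 6, 1), "mode": "ride-share", "origin": "airport", "dest": "home"},
]

def monthly_aggregate(trips):
    """Return only per-month counts by mode; origins, destinations,
    and exact dates never leave this function."""
    counts = Counter()
    for trip in trips:
        month = trip["when"].strftime("%Y-%m")
        counts[(month, trip["mode"])] += 1
    return {f"{month}/{mode}": n for (month, mode), n in counts.items()}

print(monthly_aggregate(trips))
# e.g. {'2018-05/car': 2, '2018-05/bicycle': 1, '2018-06/ride-share': 1}
```

The planner gets exactly what it needs for its job – how many trips, by which modes, in which month – and nothing else.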

The fact that so much of the debate is taken up with unproductive discussions is what is preventing progress here. I have written about this concept of granular privacy controls before:

The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as the address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically update too - assuming I don't revoke access in the mean time.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game. Instead of letting government agencies and private companies read all the data, each of them gets access only to the data it needs to do its job.
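For illustration only, here is a toy Python sketch of that token model – every name here is invented, but it shows the shape of the idea: each consumer holds a token scoped to specific fields, updates propagate automatically, and the owner can revoke access at any time.

```python
import secrets

class IDDB:
    """Toy sketch of a record store where each company holds a token
    scoped to specific fields, revocable by the data owner."""

    def __init__(self):
        self.records = {}   # person -> {field: value}
        self.grants = {}    # token -> (person, set of permitted fields)

    def grant(self, person, fields):
        """Issue a token that can read only the listed fields."""
        token = secrets.token_hex(8)
        self.grants[token] = (person, set(fields))
        return token

    def revoke(self, token):
        """The owner withdraws access; the token stops working."""
        self.grants.pop(token, None)

    def read(self, token, field):
        person, fields = self.grants.get(token, (None, set()))
        if field not in fields:
            raise PermissionError(f"token not authorised for {field!r}")
        return self.records[person][field]

db = IDDB()
db.records["alice"] = {"address": "1 Old Street", "marital_status": "single"}
utility_token = db.grant("alice", ["address"])

print(db.read(utility_token, "address"))   # the utility company sees the address...
db.records["alice"]["address"] = "2 New Road"
print(db.read(utility_token, "address"))   # ...and the update propagates automatically
db.revoke(utility_token)                   # until Alice revokes access
```

The utility company never sees Alice’s marital status, and when she moves house she updates one record instead of notifying a dozen companies.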

Unfortunately, we are stuck in a stale all-or-nothing discussion: either you surround yourself with always-on, internet-connected microphones and cameras, or you might as well retreat to a shack in the woods. There is a middle ground, and I wish more people (besides Apple) recognised that.


Photo by Kyle Glenn on Unsplash

How To Run A Good Presentation

There are all sorts of resources about creating a good slide deck, and about being a good public speaker – but there seems to be a gap when it comes to the actual mechanics of delivering a presentation. Since I regularly see even experienced presenters get some of this stuff wrong, I thought I’d write up some tips from my own experience.

I Can’t See My Audience

The first question is, are you presenting to a local audience, or is your audience somewhere else? This seriously changes things, and in ways that you might not have considered. For a start, any sort of rich animation in your slides is probably bad for a remote presentation, as it is liable to be jerky or even to fail entirely.

You should definitely connect to a remote meeting a few minutes ahead of time, even if you have already installed the particular client software required, as there can still be weird issues due to some combination of the version of the plugin itself, your web browser, or their server-side software. If the meeting requires some software you have not used before, give yourself at least fifteen minutes to take care of downloading, installing, and setting that up to your satisfaction.

Even when people turn on their webcam (and assuming you can see something useful through it, as opposed to some ceiling tiles), once you start presenting you probably won’t be able to see them any more, so remember to stop every few minutes to check that everyone is still with you, that they can see whatever you are currently presenting, and whether they have any questions. This is good advice in general, but it’s easier to remember when the audience is in the room with you. When you’re just talking away to yourself, it can be hard to remember that there are other people listening in – or trying to.

Fancy "virtual meeting room" setups like Cisco’s TelePresence are all very well – as long as all participants have access to the same setup. Most times that I have used such systems, a few participants were connecting in from dedicated desktop units, from their computers, or even from phones, which of course gave them far less rich functionality. Don’t assume that everyone is getting the full "sitting right across the table from each other" experience!

My Audience Can’t See Me

In one way, presenting remotely without a webcam trained on you can be very freeing. I pace a lot; I do laps of the room while talking into a wireless headset. I think this helps me keep up the energy and momentum of a live presentation, which otherwise can be hard to maintain – both when I’m presenting and when I’m in the audience.

One complication is the lack of presenter mode. I’m on the record as a big fan of presenter mode, and I rely on this feature heavily during live presentations, both for speaker notes on the current slide and to remind myself about the next slide. Depending on the situation, I may also use the presenter view to jump around in my deck, presenting slides in a different order than the one they were saved in. Remote presentation software won’t let you do this, or at least, not easily. You can hack it if you have two monitors available, by setting the "display screen" to be the one shared with the remote audience, and setting the other one to be the "presenter screen", but this is a bit fiddly to set up, and is very dependent on the precise meeting software being used.

This is particularly difficult when you’re trying to run a demo as well, because that generally means mirroring your screen so the remote audience sees the same thing as you do. This is basically impossible to manage smoothly in combination with presenter view, so don’t even try.

Be In The Room

If you are in the room with your audience, there’s a different set of advice. First of all, do use presenter mode, so that you can control the slides properly. Once you switch over to a demo, though, mirror your screen so that you are not craning your neck to look over your own shoulder like a demented owl while trying to drive a mouse that is backwards from your perspective. Make it so you can operate your computer normally, and just mirror the display. Practice switching between these modes beforehand. A tool that can really help here is the free DisplayMenu utility. This lives in your menu bar and lets you toggle mirroring and set the resolution of all connected displays independently.

Before you even get to selecting resolutions, you need to have the right adapters – and yes, you still need to carry dongles for both VGA and HDMI, although in the last year or so the proportions have finally flipped, and I do sometimes see Mini DisplayPort too. I have yet to see even the best-equipped conference rooms offer USB-C cables, but I am seeing more and more uptake of wireless display systems, usually either an Apple TV or Barco ClickShare. The latter is a bit fiddly to set up the first time, so if you’re on your own without someone to run interference for five minutes, try to get a video cable instead. Once it’s installed, though, it’s seamless – and makes it very easy to switch devices, so that you can do things like use an iPad as a virtual whiteboard.

Especially during the Q&A, it is easy to get deeply enough into conversation that you don’t touch your trackpad or keyboard for a few minutes, and your machine goes to sleep. Now your humorous screensaver is on the big screen, and everyone is distracted – and even more so while you flail at the keyboard to enter your password in a hurry. To avoid this happening, there’s another wonderful free utility, called Caffeine. This puts a little coffee cup icon in your menu bar: when the cup is full, your Mac’s sleep settings are overridden and it will stay awake until the lid is closed or you toggle the cup to empty.
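If you prefer not to install anything, macOS also ships a built-in command-line tool, caffeinate, that does much the same job as the Caffeine app. A minimal sketch – the two-hour timeout is an arbitrary choice for illustration, and the guard makes it a no-op on non-Mac systems:

```shell
# Keep the display awake using macOS's built-in caffeinate tool.
if command -v caffeinate >/dev/null 2>&1; then
  # -d prevents display sleep; -t limits it to 7200 s (2 h),
  # so normal sleep settings come back even if you forget.
  caffeinate -d -t 7200 &
  echo "caffeinate running as PID $! - kill it when your talk is over"
else
  echo "caffeinate not found - probably not running on macOS"
fi
```

Killing the background process (or just closing the lid) restores normal sleep behaviour immediately.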

Whether the audience is local or remote, Do Not Disturb mode is your friend, especially when mirroring your screen. Modern presentation software is generally clever enough to set your system to not display on-screen alerts while you are showing slides (unless you are one of those monsters who share their decks in "slide sorter" view, in which case you deserve everything you get), but that won’t save you once you start running a demo in your web browser. Some remote meeting software lets you share a specific application rather than your whole screen, but all that means is that instead of the remote audience seeing the specific text of your on-screen alerts, they see ugly great redacted rectangles interfering with the display. Either way, it does not look great.

I hope these tips have been useful. Good luck with your presentations!


Photos by Headway and Olu Eletu on Unsplash

When Robots Kill

This is not a breaking-news blog. Instead, what I try to do here is bring together different strands of thinking about an issue – hence the name: Find The Thread.

This is why I’m going to comment on the tragic story of the woman struck and killed by a "self-driving" Uber car in Arizona, even though the collision occurred more than a week ago.

A Question Of Levels

We generally talk about levels of autonomy in driverless cars. Level 0 is the sort of car most of us are used to. Particularly high-tech cars – your Mercedes S-Classes, Audi A8s, many Volvos, and so on – may have Level 1 or even Level 2 systems: radar cruise control that will decelerate to avoid obstacles, lane-keeping technology that will steer between the white lines on a motorway, and so on. Tesla’s Autopilot pushes towards Level 3.

In all of these cases, the driver is still required to be present and alert, ready to take over the driving at a moment’s notice. The goal is to get to Levels 4 and 5, where the driver can actually let go of the wheel entirely. Once Level 5 is commonplace, we will start seeing cars built without manual controls, as they will no longer be required.

The problem, as Benedict Evans points out, is that this will not be a universal roll-out. As I have written myself, autonomous driving technology is likely to be rolled out gradually, with easy use cases such as highway driving coming first.

This is the nut of the issue, though: as long as human drivers are required as backup to self-driving tech that works most of the time, we are actually worse off than if we did not have this tech at all.

In the first known fatal accident involving self-driving tech, the driver may have ignored up to seven warnings to put his hands back on the wheel. That was an extreme case, with rumours that the driver may even have been watching a film on a laptop, but in the Arizona case, the driver may have had only between one and four seconds of warning. If you’re texting or even carrying on a conversation with other occupants of the car, four seconds to context-switch back to driving and re-acquire situational awareness is not a lot. One second? Forget it.

In tech circles, self-driving tech is mostly analysed as a technology problem: can we do it with cameras and smarter processing alone, or do we need expensive Lidar rigs? Who has the smartest approach? This is all cutting-edge stuff, to be sure, and well worth investigating in any case. You can then start speculating about the consequences if this tech all works out, and I’ve had a go at thinking about what truly self-driving cars may imply myself.

Beyond The Software

There is a whole other level beyond the technological one, which is the real-world frameworks in which these technologies would have to operate. The sorts of driving licenses we issue to humans already focus more on the rules of the road than the techniques of driving. You can learn the mechanics of driving in a few hours, especially with an automatic gearbox. The reason we don’t give people licenses after a day of instruction is that we also require them to understand how to drive on public roads shared with others.

This tragic accident in Arizona has shifted the conversation to whether it is possible to sue an autonomous car. I am working with some major automotive manufacturers, and all are developing self-driving tech – but none are prepared to roll it out, or even discuss it much in public, until these aspects have been sorted out. Car-makers are a fairly conservative bunch, used to strict product liability laws.

In contrast, the software industry by and large accepts the idea that a click-through waiver absolves you of all responsibility for your products. That is not at all how the automobile industry operates. Even purely software faults are held to a level of scrutiny unknown in the general software industry, outside of specialised applications. In the case of Toyota’s unintended acceleration problems, the car-maker was ultimately held responsible in court for a fatal accident, due to identified bugs in its electronic throttle control system – and to the fact that code metrics indicated that other, as-yet-unidentified bugs were probably still present in that system’s codebase.

Jamie Zawinski has some typically acerbic commentary:

Note that the article's headline referred to the woman killed by the robot as a "pedestrian" instead of a person. "Pedestrian" is a propaganda term invented by the auto industry to re-frame the debate: to get you to preemptively agree that roads, and by extension cities, are for cars, and any non-car-based use is "other", is some kind of special-case interloper. See The Invention of Jaywalking.

Semantics aside, I have one question that I think is pretty important here, and that is, who is getting charged with vehicular homicide? Even if they are ultimately ruled to be not at fault, what name goes on the court docket? Is it:

  • The Uber employee – or "non-employee independent contractor" – in the passenger seat?

  • Their shift lead?

  • Travis Kalanick?

  • The author(s) of the (proprietary, un-auditable) software?

  • The "corporate person" known as Uber?

Good question, and one that so far remains unanswered.

Why The Rush To Autonomous Cars?

Finally, let’s remember that there are two reasons that the industry is storming ahead with self-driving tech. The public reason is the presumption of increased road safety through the removal of distracted human drivers from the road. However, as the complexities involved in moving beyond simple demos in an empty parking lot become clear, people are starting to suggest ridiculous solutions like "bicycle-to-vehicle" communications – in other words, instrumenting cyclists so that they will advertise their position to cars. And if you give sensors to cyclists, why not pedestrians too?

This is a typical technology-first fix: if you can’t solve the problem one way, by detecting cyclists through sensors, you solve it another way, by fitting sensors to the cyclists themselves. Here again, though, we are not in a purely technological domain. This blinkered view is why self-driving cars won’t save cyclists, at least until the thinking shifts around the whole issue of cars in general.

Here is where we come to the second reason behind the urgency in the development of self-driving tech: Uber’s business model depends on it. Right now they are haemorrhaging money – over a billion-with-a-B per quarter in 2017 – in a race to achieve market dominance before they run out of cash (or investors willing to give them more). Much of that cost goes to their human drivers; if those could be replaced with automated systems, the cost would go away at a stroke, and they would also achieve much higher utilisation rates on their fleet of vehicles.

In this view, self-driving cars are both an offensive move against Uber’s competitors, and a defensive one in case the likes of Google get there first and undercut Uber with their little pod-cars.

This sort of thing is catnip for futurists and other professional speculators, existing at the nexus of technology and business model that is Silicon Valley distilled to its purest essence. However, as the real-world problems with this project become more and more visible, people are starting to question whether self-driving cars are actually a distraction for Uber.

The bottom line is that right now we are pushing forwards with self-driving tech in the hope it will make our roads safer. This is a valid and important goal, to be sure – but those claims of increased safety from self-driving tech are still assumptions, very much unproven, as the tragic death in Arizona reminds us.

Along the way to full Level 5 autonomy, we must pass through an "uncanny valley" of partial autonomy, which is actually more dangerous than no autonomy at all.

Adding the desperate urgency of a company whose very survival depends on the success of this research seems like a very bad idea on the surface of it. It is all too easy to imagine Uber (or any other company, but right now it’s Uber), with only a quarter or two’s worth of cash in the bank, deciding to rush out self-driving tech that is 1.0 at best.

It’s said that you shouldn’t buy any 1.0 product unless you are willing to tolerate significant imperfections. Would you ride in a car operated by software with significant imperfections?

Would you cross the street in front of one?

And shouldn’t you have the choice to make that call? This is why, despite claims that the EU’s strategy on AI is a failure, I like their go-slow approach. Sure, roll out 1.0 animoji or cat-ear filters, but before we rely on computer vision not to run people over, or fine them for jaywalking or whatever, we should maybe stop and think about that for a moment.