Adapt and evolve

A couple of days ago I was on the panel for a Hangout with Mark Thiele, of Switch fame. We had an interesting and wide-ranging chat, but Mark's answer to the last question stuck with me. The question was: what advice would he give to new graduates, or in general to young people contemplating a career in IT? His answer was long and considered, but if I had to distil it into one word, that word would be adaptability.

The idea of adaptability resonates very strongly with me. In my own career, I have had various very different jobs, and I don't expect to have the same job ten years from now. In fact, I fully expect that the job I will have ten years from now does not even exist today, at least in that form! This is the world we live in nowadays, and while IT may be at the bleeding edge of the change, it is coming to all of us. Nobody can assume that they will go to university, study a topic, and work in that narrow field from graduation to retirement. Our way of working is changing, and the pace of change is accelerating.

Perhaps unsurprisingly, some of the stiffest resistance to these changes comes from the field most outsiders would expect to be leading the charge. Many IT people do not like the changes that are coming to the nice cozy world of the datacenter - or, well, the frosty world of the datacenter, unless the cooling system has failed…

I used to be a sysadmin, but that was more than a decade ago. The skills I had then are obsolete, and the platforms I worked on are no more. Some of the habits I formed then are still with me, but their applications have evolved over time. Since then, I've done tech support, both front-line and L2. I've done pre-sales, I've done training, I've done sales, and now I'm in marketing. Each time I changed jobs, my role changed and so did what was expected of me. This is the new normal.

In the last few years, my various roles have had one thing in common: I have been travelling around, showing IT people new technology that can make their jobs better and trying to persuade them to adopt it. My employers have always had competition, of course, but far and away the most dangerous competitor was what a previous boss used to call the Do Nothing Corporation: the status quo. People would say things like "we're doing fine now" or "we don't need anything new", all while the users were beating at the doors of the datacenter with their fists in a combination of frustration and supplication.

This is not your grandfather's IT

Every time you have to execute a task yourself, you have lost. Your goal is to automate yourself out of ever having to do anything manually. Your job is not to install a system or configure a device, to set up a monitor or to tail a log file; that's not what you were hired for. Your job is to make sure that users have what they need to do their jobs, and that they can get access to it quickly and easily.
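
To make that concrete, here is a toy sketch of the principle: take a check you used to do by hand - tailing a log for errors - encode it once, and schedule it. Everything in it (the path, the pattern, the threshold) is a hypothetical placeholder; the point is the shape of the thing.

```python
#!/usr/bin/env python3
"""Toy sketch: encode a manual check once, then stop doing it by hand."""
import re
import sys
from pathlib import Path

LOG_FILE = Path("/var/log/app/errors.log")  # hypothetical path
PATTERN = re.compile(r"ERROR|CRITICAL")     # hypothetical failure signature
THRESHOLD = 10                              # only bother a human past this

def count_errors(log_file: Path) -> int:
    """Count matching lines - the check a human used to eyeball via tail."""
    if not log_file.exists():
        return 0
    with log_file.open() as fh:
        return sum(1 for line in fh if PATTERN.search(line))

def main() -> int:
    errors = count_errors(LOG_FILE)
    if errors > THRESHOLD:
        # In real life this would open a ticket or page someone automatically.
        print(f"ALERT: {errors} errors in {LOG_FILE}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Drop that into cron and one manual task is gone for good; the same logic scales up through provisioning and configuration.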

This might mean that you have to let go of doing it all yourself - and that's fine. The WOPR was a big, impressive piece of kit, but it was a prop. If only half of "your" IT runs in your datacenter, and the rest is off in the cloud somewhere… well, as long as the users are happy and the business objectives are being met, you're ahead of the game!

Right now there is a certain amount of disillusionment with all this cloud nonsense. According to Gartner, we're about half-way down the slope from the Peak of Inflated Expectations to the Trough of Disillusionment. Much of this disillusionment stems partly from those inflated expectations, built on overheated rhetoric from cloud boosters, and partly from a refusal to accept the changes that are needed.

[Image: This is what it looks like if you try to treat the cloud like the same old IT]

Of course cloud is built on servers and hypervisors and storage arrays and routers and switches and firewalls and all the rest of it, and we forget that at our peril. What makes the cloud different, and forces a long-overdue change in how IT works, is the expectation on the part of users. People expect their IT to be instantly available, to work nicely with what they already have - and with what they will add in the future - and to make it easy to understand costs. This is where IT can add value. Provisioning a server - that's a solved problem. That's not even table stakes; that's walking-into-the-casino stakes.

It's not so bad. Join the evolution!

All Software Sucks

It is a truism in high-tech that All Software Sucks. (There is an equally valid corollary that All Hardware Sucks.) The usual reason for restating this universally accepted truth is to deflate someone who is lapsing into advocacy of one platform over another. There is a deeper truth here, though, and it's about who builds the software, for whom, and why.

Many open-source projects have terrible interfaces, major usability issues, and appearances that only a mother could love. This is because the creators by and large did not get into the project for that part; they wanted a tool that would perform a specific, often quite technical, task for them, and assembled a band of like-minded enthusiasts to work on the project in their off-hours.

This is great when the outcome of the project is something like the Linux kernel, which is safely hidden away from users who might cut themselves on its sharp edges. Problems start to occur when the band of hackers try to build something closer to the everyday surface that people see and use. There must be a thousand Linux desktop environments around when you count the combinations, and while each of them suits somebody's needs, they are otherwise pretty uniformly horrible to look at and inconsistent to use.

The same applies to enterprise software, but for a slightly different reason. Cynics have long suggested that the problem with enterprise software is that the buyer is not the user: purchasing departments and CIOs don't have to live with the consequences of their decisions. While I don't doubt that some of this goes on, in my experience both groups of people are usually trying to do their best. The problem is with the selection process itself.

Who specs enterprise software? Until very recently, only the IT department, and even now they still do most of it. Much like open-source hackers, they draw up the specification based on their own needs, drivers, and experience. These days, though, more and more people within the enterprise are doing more and more with the software tools, and they expect even more. Gone are the days of sending a supplication through inter-office mail to the high tower of IT. People have become used to self-service systems with attractive interfaces in their personal lives, and they expect the same at work.

Enterprise IT departments are struggling to adapt to this brave new world:

  • Their security policies are crafted around the concept of a perimeter, but that perimeter no longer exists. Personally-owned devices and corporate devices used for personal purposes have fuzzed the edges, and great chunks of the business process and even the infrastructure now live outside the firewall.

  • Their operational procedures are based on the idea that only a small band of experts who all work together and understand each other will work on the infrastructure, but more and more of the world doesn't work that way any more. Whether it's self-service changes for users, shared infrastructure that IT does not have full control over, developers doing their own thing, or even the infrastructure changing itself through automation, there is simply no way for every change to be known, in advance or indeed at all.

  • Their focus on IT is narrow and does not encompass all the ways people are interacting with their devices and systems. In fact, the IT department itself is often divided into groups responsible for different layers, so that an overall view even of the purely technical aspects is difficult to achieve.

This is important right now because enterprise IT departments are looking at a phase change, and many are failing or shying away from the new challenges. These days I am working on cloud management platforms, which are at the intersection of all of these issues. Too many of these projects take too long, fail to achieve all of their objectives, or never even get off the ground.

How does this happen?

The reasons for these failures are exactly what I described above. Here are a few real-life examples (names have been removed to protect the guilty).

The CIO of a large corporation in the energy and resources sector walked me through his cloud roadmap. The roadmap had been developed with the advice of a large consultancy, and was ambitious, long-term, and complete - except for one thing. After the presentation, I asked him who he expected the users and use cases to be. His answer was: "The IT department, of course!" Noting my flabbergasted expression, he added: "Why, what else are they going to do?" What else, indeed? As far as I know, that roadmap has never made it beyond the CIO's PowerPoint slides.

A major international bank did begin its cloud project, and implemented everything successfully. State-of-the-art hardware was procured, all the infrastructure and management software was installed, everyone congratulated each other, and the system was declared open for business. A few months later the good cheer had evaporated, as usage of the system was far below projections: a few tens of requests per month, instead of the expected several hundred. It seems that nobody had thought to ask the users what they needed, or to explain how the new system could help them achieve it.

Something even worse happened to a big European telco. The cloud platform was specified, architected, evaluated, selected, and implemented according to a strict roadmap that had been built jointly by the telco’s own in-house architects and consultants from a big-name firm. Soon after the first go-live milestone, though, we all realised that utilisation was well below projections, just as in the case of the bank above.

As it happened, though, I was also talking to a different group within the company. A team of developers needed a way to test their product at scale before releasing it, and were struggling to get hold of the required infrastructure. This seemed to me like a match made in heaven, so I brokered a meeting between the two groups.

To cut a long story short, the meeting was a complete train wreck. The developers needed to provision "full stack" services: not just the bare VM, but several software components above that. They also needed to configure both the software components and the network elements in between to make sure all the bits & pieces of their system were talking to each other. All of this was right in the brochure for the technology we had installed - but the architects flatly refused to countenance the possibility of letting developers provision anything but bare VMs, saying that full-stack provisioning was still nine months out according to their roadmap.
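
Just to make "full stack" concrete, here is a minimal sketch of the kind of request the developers had in mind: one declarative description covering the VMs, the software on top of them, and the network wiring in between, fulfilled in a single pass. Every name and the client API below are invented purely for illustration; real cloud management platforms expose their own equivalents.

```python
# Hypothetical sketch of "full stack" provisioning. All names are invented.

STACK = {
    "vms": [
        {"name": "app-01", "flavor": "large", "image": "ubuntu-lts"},
        {"name": "db-01", "flavor": "xlarge", "image": "ubuntu-lts"},
    ],
    "software": {
        "app-01": ["tomcat", "load-test-harness"],
        "db-01": ["postgresql"],
    },
    "network": [
        ("app-01", "db-01", 5432),  # (source, destination, port)
    ],
}

class DryRunClient:
    """Stub that just prints what a real cloud management API would do."""
    def create_vm(self, name, flavor, image):
        print(f"create VM {name} ({flavor}, {image})")
    def install(self, host, package):
        print(f"install {package} on {host}")
    def allow_traffic(self, src, dst, port):
        print(f"open {src} -> {dst}:{port}")

def provision(client, stack):
    """Walk the description top to bottom: VMs, then software, then network."""
    for vm in stack["vms"]:
        client.create_vm(**vm)
    for host, packages in stack["software"].items():
        for package in packages:
            client.install(host, package)
    for src, dst, port in stack["network"]:
        client.allow_traffic(src, dst, port)

if __name__ == "__main__":
    provision(DryRunClient(), STACK)
```

The architects were only prepared to serve the first loop; the developers needed all three.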

That project managed to stagger on for a while, but eventually died quietly in a corner. I think it peaked at 600 simultaneous VMs, which is of course nothing in the cloud.

What is the lesson of these three stories?

The successful projects and products are the ones where people are not just designing and building a tool for themselves, but for a wide group of users. This is a fundamentally different approach, especially for enterprise IT, but it is necessary if IT is to survive.

So what do we do now?

If you are in enterprise IT, the new cloud services that users are asking for or even adopting on their own are not your competition. If someone out there can offer a better service more cheaply than you can operate it in house, that's great; one less headache for you. Your job is not to be engaged in the fulfilment of each request - because that makes you the bottleneck in the process. That sort of thinking is why IT is so often known as "the department of No". Instead, focus on making sure that each request is fulfilled, on time, on spec, and on budget.

If you are selling to enterprise IT, help them along this road. Talk to the users, and share your findings back with the IT department. This way everybody wins.

Talk to the users; they'll tell you what the problem is. Have no doubt about that.

Clouded Prism

One of the questions raised as a part of the PRISM discussion has been the impact on the internet and specifically cloud computing industries. For instance, Julie Craig of EMA wrote a post titled "PRISM: The End of the Cloud?"

I think these fears are a bit overblown. While there will probably be some blowback, most of the people who care about this sort of thing were already worried enough about the Patriot Act without needing to know more about PRISM. I think the number of people who will start to care about privacy and data protection as a result of PRISM will be fairly small. All Things D's Joy of Tech cartoon nailed it, as usual.

The same kind of thing applies in business. Many companies don't really care very much either way about the government reading their files. They might get more exercised about their competitors having access, but apart from perhaps some financial information, the government is low down the list of potential threats.

Of course, most analysis focuses on the question of US citizens and corporations using US services. What happens in the case of foreign users, whether private or corporate, using US services? There has been some overheated rhetoric on this point as well, but I don't think it's a huge factor. Much like Americans, people in the rest of the world already knew about the Patriot Act, and most of them kept right on using US services, showing that they did not care. As for corporations, most countries have their own restrictions on what data can be stored, processed or accessed across borders, quite possibly to make it easier to run their own versions of PRISM, so companies are already pretty constrained in terms of what they could even put into these US services.

For companies already using the public cloud or looking into doing so, this is a timely reminder that not all resources are created equal, and that there are factors beyond the purely technical and financial ones that need to be considered. The PRISM story might provide a boost for service providers outside the US, who can carve out a niche for themselves as giving the advantages of public cloud, but in a local or known jurisdiction. This could mean within a specific country, within a wider region such as the EU, or completely offshore. Sealand may have been ahead of its time, but soon enough there must emerge a "Switzerland of the cloud". The argument that only unsavoury types would use such a service doesn't really hold water, given that criminals already have the Russian Business Network and its ilk.

Bottom line, PRISM is nothing new, and it doesn't really bring any new facts. Given the Patriot Act, the sensible assumption had to be that the US government was doing something like this - and so were other technologically sophisticated governments around the world. The only impact it might have is in perception, if it blows up into a big enough story and stays around for long enough. In terms of actual rational grounds for decision-making, my personal expectation is for impact to be extremely limited.

Through a Prism, Darkly

Because everyone must have an opinion!


First of all, let me just say that as a non-US citizen, I try not to comment in public on US politics. It's kind of hard, because it's a bit like trying not to comment on Roman politics in the first century AD, but there it is. Therefore, while the whole PRISM debacle is what prompted this post, what I have to say is not specific to PRISM.

Back in the day, three blogs ago and lost in the mists of Internet time, there was Total Information Awareness. This was a DARPA project from ten years ago, which ended up being defunded by Congress after a massive public outcry, not least about its totally creepy name. Basically the idea was to sift through all communications, or as many as was feasible, looking for patterns that indicated terrorist activity. The problem people had with TIA is much the same as the problem they have with PRISM: the idea that the government will look through everything, and then decide what is important.

On the one hand, this is actually a positive development. No, wait, let me finish! The old way of doing surveillance - Information Awareness that was rather less than Total - was to let humans access all those communications. That method has problems with scale; even enthusiastic adopters of surveillance like East Germany and North Korea only succeeded to the degree they did because East Germany didn't have the internet and North Korea keeps it out. It also guarantees abuse, if only because humans can't be told to ignore or forget information. The agent listening to the take from the microphones set up to catch subversive planning can't help also hearing intimate details of the subjects' lives that are not relevant to any investigation.

An automated system is preferable, then, since it doesn't "listen" the way a human does, and discards any data that do not match its patterns. Privacy is actually invaded less by an automated system than by human agents, given the same inputs.

That last clause is kind of important, though. Once you have the automated system set up, it is no longer constrained by scale. Governments around the world already have huge datacenters of their own, and could also take advantage of public cloud resources at a pinch, so such a system is guaranteed to expand, rapidly and endlessly, unless actively checked. A system that just looks for correlation clusters around terrorist organisations, subversive literature, and bulk purchases of fertiliser and ball-bearings will quickly be expanded to look for people behind on their student debt or parking tickets. Think this is an exaggerated slippery-slope argument? The US (sorry) Department of Education conducts SWAT raids over unpaid loans.

As with all tools, the problem is the uses that the tool might be put to. Let's say you trust the current government absolutely not to do anything remotely shady, so you approve all these powers. Then next election, the Other Guys get in. What might they get up to? Are you sure you want them to have this sort of power available?

It is already very difficult to avoid interacting with the government or breaking any laws. Today, I briefly drove over twice the speed limit. Put like that it sounds terrible, doesn't it? But what actually happened was that the speed limit suddenly dropped by more than half. This is a very familiar route for me, I could see clear road ahead, and it was a lovely sunny day, so instead of jamming on my brakes right at the sign I exercised my judgment and braked more gradually. A human policeman, unless in a very bad mood, would have had nothing much to say about this. A black box in my car, though, might have revoked my licence before I had finished braking, for exceeding the speed limit by more than 40 km/h.

This is the zero-tolerance future of automated surveillance and enforcement. Laws and policies designed to be applied and enforced with judgment and common sense will show their weaknesses when they are applied by unthinking, unfeeling machines. And I haven't even gone into the problem of translating fuzzy human language into deterministic computer language. The only solution would be to require lawmakers to submit a reference implementation of their law, which would have the advantage of allowing for debugging against test cases in silico instead of in the real world, with actual human beings. The ancillary benefit of massively slowing down the production of new laws is merely a fortunate side effect.
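
Tongue firmly in cheek, here is roughly what such a reference implementation might look like for the speeding rule from my drive, with a test case that finds the bug in silico instead of on a real driver. The 40 km/h revocation threshold comes from the anecdote above; the scenario numbers are invented for the example.

```python
# Tongue-in-cheek "reference implementation" of a speeding law,
# with its test suite run in silico rather than on real drivers.
# The 40 km/h threshold mirrors the anecdote; the rest is invented.

REVOCATION_EXCESS_KMH = 40

def licence_revoked(speed_kmh: float, limit_kmh: float) -> bool:
    """The letter of the law: exceed the limit by more than 40 km/h, lose it."""
    return speed_kmh - limit_kmh > REVOCATION_EXCESS_KMH

def test_sudden_limit_drop():
    # The limit drops from 110 to 50 while the driver brakes smoothly and
    # is still doing 100. A human policeman shrugs; the machine does not.
    assert licence_revoked(speed_kmh=100, limit_kmh=50)

if __name__ == "__main__":
    test_sudden_limit_drop()
    print("Bug found in silico: the law, as written, takes a sensible "
          "driver's licence before a single real journey is affected.")
```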

To recap: as usual, the weakest link is the human, not the machine. Systems like PRISM are probably inevitable and may even be desirable, but they need some VERY tight safeguards on them, which to date have not been in evidence. The problem is of course that discussing such systems in public risks disclosing information about how to evade them, but as we have seen in infosec, security by obscurity doesn't work nearly as well as full disclosure. If instead of feeling Big Brother watching over them, citizens felt that they and their government were working together to ensure common security, all of us would feel much happier about working to strengthen and improve such systems. Wouldn't you rather have guys like Bruce Schneier inside the tent, as the saying goes?

We need the Culture, now!

Iain Banks has terminal liver cancer.

I don't even know what to say. I have never been one of those fans who get autographs and attend cons and generally want a personal connection with their favourite author, but having read almost everything he has written - certainly all the Culture books, published as Iain M. Banks - I will feel his loss keenly. I suspect we would have disagreed about a great deal had we ever met, but regardless, he patently has a bright, sparkling mind, and I wish him well wherever he goes next.

Over at Charles Stross's blog, commenters suggest that now would be a good time for the Culture to show up with advanced medical technology, not just for Banks himself, but also for Terry Pratchett. I particularly liked the suggestion that since whatever god or gods are out there are so keen on good company, we should start burying our dead with good weapons, dwarven style…

Either way, requiescat in pace, and I hope nobody has the poor taste to simulate his state in a Hell, à la Surface Detail.

Teleworking

I had refrained from commenting on Marissa Mayer's anti-telecommuting edict because it seemed like every human with a blog or a Twitter handle had already done so. Today, though, I read an interesting piece in the FT by John Kay, who compared telecommuting to Robert Moses's proposed clearances in midtown Manhattan and the mooted Lower Manhattan Expressway.

Now, I work from home quite often myself - after all, I am near Milan and my boss is in Boston, so it's not as if I have an immediate need to be in the office every day. Skype works about as well from my home office as it does from my employer's office in Milan. All the same, I do try to go into the office every ten days or so. Partly this is for the prosaic reason that we are not yet an entirely paperless office, and I have to submit physical receipts for my expenses, but partly I go into the office for the serendipitous conversations which often arise from doing that.

This is where I am reminded of the downside of being in the office. In common with most offices, my desk is in an open space with lots of other desks, separated by waist-high partitions. This is not exactly an environment conducive to being able to concentrate. In fact, when all those desks are filled, I'm doing well if I can hear myself think! This means that the office is where I go to have impromptu conversations and face-to-face meetings, but it's not where I am most productive, even with my headphones on. I am much more productive at home, in aeroplanes, or in hotel rooms without distractions. John Kay's negative scenario of a corridor of closed office doors is actually a dream for me! Meet up in the cafe area, or open your door if you're available, but have the ability to close it if you're trying to concentrate.

I would hate to work only remotely, though, and seize every opportunity for gatherings of our little team. With members spread across all of the US, plus me in Europe, we try to meet up once a quarter or so, and those are usually fantastic brainstorming sessions where we really plan out our activities. Some companies like Cisco push the telepresence thing to extremes, even having their yearly kick-off meetings via telepresence. Given that some of the most useful conversations I have at kick-offs and the like have happened in bars and between sessions, I think this is rather short-sighted, although I don't doubt that there are attractive savings from doing things this way.

A healthy combination of alone time and together time works best, at least for my workflow. Most of my desk time is spent building or reviewing content, which requires concentration and does not really benefit from face-to-face interaction. If you are doing something that really does require constant interaction with colleagues in your geographical area, then perhaps going into the office every day really is best.

Finally, some people will always take advantage. I remember the story of one engineer who would tell one salesperson he was with another when he was actually with neither. Eventually he got fired for this, and the luckless person tasked with cleaning out his laptop found tons of, um, not-safe-for-work content... One theory is that Marissa Mayer, being very data-driven, unearthed a lot of this sort of behaviour, perhaps based on VPN logins and such. Given that situation, the right option probably is indeed a very public crackdown, followed by a quiet return to a more flexible approach once the Augean stables have been cleaned out.

The more over-the-top pronouncements against Marissa Mayer are probably overblown, but even if they are not, this is the beauty of the capitalist system. It's not like working at Glorious State Web Company 319; I hear that Northern California has a couple of other web firms which might be willing to accommodate workers who prefer to be home-based. If it's that important to you, make your choices based on that.

The joy of making up words

First post on my new blog! Posterous shut down, so here I am over on Wordpress. I hope it lasts longer than Posterous did…

UPDATE: Evidently it didn’t. Although Wordpress is still around, I’m here on Postach.io now.


German is a wonderful language. I don't speak it nearly as well as I would like to, but I take every chance to read Die Welt or chat in German. What I love most about it is the structure, with each word in a sentence supporting every other word, meaning that a change in a word-ending can change the meaning of a phrase quite radically.

Some people are put off by the complexity of German, or by some of its idiosyncrasies. Famously, Mark Twain wrote a piece called "The Awful German Language", making gentle fun of the peculiarities that Samuel Clemens himself found when learning German.

One of my own favourite features of the German tongue is the way you can create new words by mashing together existing words. This process can be taken to ridiculous lengths, as in Donaudampfschiffahrtselektrizitätenhauptbetriebswerkbauunterbeamtengesellschaft, which means the Association for Subordinate Officials of the Head Office Management of the Danube Steamboat Electrical Services (source: H2G2). In less extreme cases, though, this facility for word creation enables enormous precision of expression, to the point that the resulting words get adopted in other languages. Examples of useful German composite words adopted in English might be Zeitgeist or Schadenfreude. Even auto-correct has no problem with those as English words!

I want to propose a new word along the same lines: Arschlocherkennungsfreude, the small pleasure one takes in correctly identifying an idiot. I coined this word one day on the motorway, when I spotted clues in the body language of a car in front of me that told me it was about to change lanes - without indicating or checking its mirrors - into the spot I was about to occupy. I backed off the throttle, and sure enough the car swerved right in front of me.

After exploring the ancestry of the driver for a few generations out loud, and making some choice observations on their offspring's prospects in life, I realised that I was actually mildly pleased to have correctly identified and allowed for the driver's idiocy. Why should this be so?

Quite simply, I had validated my own superior expertise in my own mind (95% of drivers, including me, rate their own skills as above average). Maybe Arschlocherkennungsfreude is a bit limiting as a term, except that it gives the all-important dimension of comparison with a point of reference - in this case, the swerving lunatic in front of me.

Either way, I challenge you to come up with an equally pithy word in English.

Collisions


I am trying to set up some Google Hangouts, because that is what all the cool kids are doing. The content will be work-related, so I invited a bunch of colleagues who are in my Google+ circles, and then sent the invite to everyone else via Outlook. One of the people who received the G+ invite commented that he preferred to keep work and personal life separate, which prompted some thoughts.

Is it even possible to keep your work life and your personal life separate? And if you were able to, would it be desirable?

Now I suppose I am speaking from a position of privilege, since I don't do anything in my personal life that might get me in trouble at work (boring!). However, I don't think my colleague is doing anything dodgy either, and I am in his friends circle; he just prefers to keep those two sides of his life separate.

I try to keep some separation, on the assumption that people who want cute baby pictures don't want cloud whitepapers - and those who do want both of those things can both friend me on Facebook and follow me on Twitter. I also keep Facebook pretty locked down simply because I would find it a bit... icky... for strangers to peruse my photos and so on. Google+, though, is a different case. The tentacles of Google reach everywhere, so if you use G+ at all, I think you're going to find the two worlds bleeding together. On the plus side, not many people seem to use G+ actively…

This is not an issue that is going to go away. It's probably a good thing that social media weren't around when I was in high school and university. The few traces left of that era on the public web are fortunately fairly inoffensive. My son, though, is going to grow up with these networks as just part of life. This is why I kind of like the idea of a right to be forgotten, unworkable as that probably is.

On the other hand, it's not all doom and gloom. Looking people up on Linkedin and Twitter before I meet them can give us a natural topic of conversation, something to connect about on a human level. This makes interactions much more pleasant than sticking to formal roles and stilted responses. It's also a way of figuring out who you're talking to and whether they have form.

This "handshake" used to require checking for letters after people's names or whether their tie had the pattern for a prestigious school or army regiment. A top-shelf education and experience in a demanding sector such as the military are no bad things to have, mind, but nowadays we can go a bit broader. Say someone didn't do well with formal education but they're a top Github contributor; they might be a better hire than the product of a degree mill. Or perhaps someone's name indicates they come from a different culture and have a different first language than you and your colleagues; in the past you might have dumped their CV, but now you can check out their writing online in your language and see whether it's up to scratch.

As with most things, social media has its positives and its negatives. Try to address the negatives (lock down your privacy settings, guard your passwords, be careful where you browse) but don't lose sight of the many positives.

Platform wars are here again

This is great! It's like I'm back in my teens…

Twenty years ago I was having religious arguments with my friends about MacOS versus Windows. Some of these arguments even degenerated into snarking at each other in HTML comments inside school websites we were building… Fun times.

The thing is, for some reason we felt, in line with more professional and supposedly mature pundits, that the debate was about far more than which was the correct number of buttons on a mouse, or whether menus should be attached to the top of the screen as opposed to the tops of windows. No, we also had to pull in numbers, and not just megahertz or megabytes, but user numbers. Of course, as a Mac user, I felt this was unfair, because usually the Mac came out well behind in all these metrics. Subjectively, the 200 MHz PowerPC 604e machines I was playing around with at the time, running MacOS 7 and 8, certainly felt faster than the 200 MHz Pentium boxes with Windows 95, but that's hardly a benchmark. Still, it was funny that we were all so invested in our choices that instead of saying "huh, you like that flavour? good for you!" and getting on with it, we had to argue the point endlessly. Admittedly, cooperation was made harder by trying to develop websites together, because stuff that worked at home would break on my friend's machine and vice-versa, and not just when he used that blasted marquee tag either.

Now the same thing is going on again, except now it's iOS versus Android. Plenty of people seem to feel the need to pile on any mis-step by Apple or by iOS developers and point out the superiority of the "open" Android platform. I don't get this reaction at all. For one thing, many of these issues, which look potentially fatal to Apple at the time, turn out to be tempests in tiny teacups. See for instance Mapsgate. I never had any problem with the new Maps, but then again, I'm hardly a power user. I did check out a few points of interest at the height of the brouhaha, just out of curiosity, and I didn't see any issues. Metro stops were in the right place, villages were correctly labelled and had all their streets, and directions were sensible.

If anything, the new Maps app was an improvement in the one area for which I rely on it most: traffic. See, in my commute there are a couple of points where I can go one way or another, depending on traffic. If traffic's moving, I just stay on the ring-road, but if it comes to a halt, it sometimes makes sense to take an alternative route via surface streets. The alternative routes can also get grid-locked, though, so what I do is to bring up Maps and check what traffic looks like in my immediate surroundings. The old Google-powered Maps app would take so long to load data that even the crawling traffic would carry me past the relevant turns, so I had to guess and hope. The new Maps app loads up almost instantly - on the same phone, with the same carrier - and lets me make an informed decision.

So one reason I'm still on the Apple side of the barricades twenty years and two platforms later (Classic MacOS > OSX > iOS) is that my subjective experience is still better than the alternatives. The funny thing is that this feels very familiar in another way too. For all the Sturm und Drang in Gizmodo comment threads, I only know one (1) passionate Android user. I know many who don't even know that their phone is running Android! I think this also explains those statistics that show that despite representing a relatively small percentage of the market, iOS devices still account for the vast majority of web traffic: iPhone and iPad owners bought their devices deliberately and use them a lot. Many Android users simply wanted a phone (often not even a smartphone) and ended up with an Android device by default. They never connect their phone to wifi, or install apps; they might use built-in Facebook clients and what-not, but many don't even do that.

Android and iOS simply serve different markets. Just as I suspect I would not be happy with Android devices (especially as a replacement for my iPad), many Android users have no wish to spend several times more to get an iPhone which is (for their use cases) no better. Can we just move on now, instead of hyper-scrutinising every breath an Apple executive takes and every move Apple's stock makes?

Missing the point

Another day, another misguided article claiming that "bad attitudes to BYOD put off prospective employees". At least this time they missed out the hitherto obligatory reference to "millennials", whatever more-than-usually-misleading category that might be.

Look, the issue is rarely with BYOD as such. If you're enough of an entitled know-it-all to make your employment choices based on whether your prospective employer will let you spend your own money on work technology, there is no help for you. Plenty of companies, my own sainted employer included, offer company-issued Macs and iPhones as optional alternatives to Dells and Blackberries. Wouldn't that be a better trait to look out for?

The problem people have with anti-BYOD policies is that they're generally the tip of an iceberg of bad policy and straitjacketed thinking. Companies that ban BYOD are not far from whitelisting approved executables, restricting admin privileges to users with a "valid and documented reason" for having that access, configuring ridiculously restrictive content firewalls, and so on and so forth.

Others have already explained in depth why BYOD is a symptom of unhealthy IT practices. In fact, the BYODers are arguably doing the company a favour by identifying problem areas. As I had occasion to say on Twitter, users interpret bad IT policies as damage and route around them.

BYOD just happens to be the latest buzzword on which people can hang their Dilbertian complaints, but reversing that one clause would not fix the problem. In fact, a greater worry is a future in which everyone is required to purchase and maintain IT equipment for work use at their own expense. I might be able to do this now, and in fact I did Spend My Own Money and bought myself an 18-month reprieve from lugging the monster Dull around, but I certainly couldn't have afforded to do that when I started out in my career - at least, not without cutting into other areas of my budget, like food.

Stick to the important concerns. BYOD will fix itself, if all the other pieces are in place.