
Printing Money

I spent more time than I should have yesterday installing my mother-in-law’s new HP printer, and while I dodged the more obvious scams, I was actually shocked at how bad the experience was. There is absolutely no way that a normal person without significant IT experience could do it. And the worst part is that HP are in my experience the best — okay, least bad — printer manufacturer out there.

I'm going to document what happened in exhaustive detail because I still can't bring myself to believe some of what happened. It's not going to be a fun post. Sorry. If you want a fun post about how terrible printers are, here's one from The Oatmeal.

  • The "quick start" guide only showed the physical steps (remove packaging, connect power cord, add paper) and then offered a QR code to scan to deploy an HP app that would supposedly take care of the rest of the process.
  • The QR code led to a URL that 404'd. In retrospect, this was the moment when I should have packed everything back up and shipped it back to HP.
  • Instead of following through on that much better plan and saving myself several hits to my sanity, I did some detective work to identify what the app should be and found it in the Google Play Store (my MiL's computer is a Chromebook; this will be significant later).
  • The app's "install new printer" workflow simply scans the local network for printers. Since the step I was trying to accomplish was connecting the printer to Wi-Fi (this model doesn't have an on-board control panel, only an embedded web server), this scan was not particularly helpful.
  • The app's next suggestion was to contact support. Thanks, app.
  • After checking the box for any additional docs, and finding only reams of pointless legal paperwork documenting the printer's compliance with various standards and treaties, I gingerly loaded up the HP web site to search for something more detailed.
  • The HP website's search function resolutely denied all knowledge of the printer model.
  • A Google search scoped to the HP web site found the printer's product page, which included an actual manual.
  • The manual asked me to connect to the printer's management interface, but at no point included a step-by-step process. By piecing together various bits of information from the doc and some frantic Googling, I finally worked out that I needed to:
    • Connect to the printer's own ad-hoc Wi-Fi network;
    • Print a test page to get its IP address (this step involves holding down the paper feed button for 10 seconds);
    • Connect to that IP address;
    • Reassure the web browser that it's fine to connect to a website that is INSECURE!!1!
    • Not find the menu options from the doc, only some basic information about supplies;
    • Panic;
    • Note a tiny "Login" link hidden away in a corner;
    • Mutter "surely not…"
    • Fail to find any user credentials documented anywhere, or indeed any mention of a login flow;
    • Connect as "admin" with no password on a hunch;
    • Access the full management interface.
  • At this point I was finally able to authenticate the printer to the correct Wi-Fi network, at which point it promptly rebooted and then went catatonic for a worryingly long time before finally connecting.
  • But we're not done yet! The HP printer app claims to be able to set up the local printer on the Chromebook, but as far as I can tell, it doesn't even attempt to do this. However, we have a network connection, I can read out supply levels and what-not, how hard can this be?
  • Despite having Google Cloud Print enabled, nothing was auto-detected, so I created it as IPP (amazingly, this step is actually in the docs).
  • Time for a test print! The Chromebook's print queue showed the doc as PRINTED, but the printer didn’t produce anything, and as far as I could determine, it never hit the printer's own queue.
  • Hang head in hands.
  • Verified that my iPhone can see the printer (via AirPrint) and print to it. This worked first time.
  • Tried deleting the printer and re-creating it; somehow Google Cloud Print started working at this point, so the printer was auto-detected? The resulting config looked identical to what I created by hand, except with a port number specified instead of just an IP address.
  • Does it print now? HAHAHA of course not.
  • Repeat previous few steps with increasing muttering (can't swear or throw things because I am in my mother-in-law's home).
  • Decide to update software:
    • The Chromebook updates, reboots, no change.
    • The printer's product page does not show any firmware at all — unless you tell it you are looking for Windows software. There are official drivers for various Linux distros, but apparently they don't deserve firmware. There is nothing for macOS, because Apple wisely doesn't allow rando third-party printer drivers anywhere near their operating systems. And of course nothing for ChromeOS or "other", why would you ask?
    • Download the firmware from the Windows driver page, upload it to the printer's management UI — which quoth "firmware not valid".
    • Search for any checksum or other way to verify the download, and of course there is none (see the sketch after this list for what that check would look like, if HP published one).
    • Attempt to decode the version embedded in the file name, discover that it is almost impossible to persuade ChromeOS to display a file name that long.
    • Eventually decide that the installed and downloaded versions are probably the same, despite the installed one being over a year old.
  • Give up and run away, promising to return with new ideas, or possibly a can of petrol and a Zippo.
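
For the record, the verification step I went looking for would be trivial if the vendor actually published a checksum alongside the download. Here is a minimal sketch in Python of what I was hoping to do; the file name and the expected SHA-256 value are placeholders for illustration, not anything HP actually provides.

    import hashlib

    # Placeholders: HP does not publish these, which was exactly the problem.
    FIRMWARE_FILE = "hp_firmware_update.bin"
    PUBLISHED_SHA256 = "0123456789abcdef..."  # hypothetical vendor-published checksum

    def sha256_of(path, chunk_size=1024 * 1024):
        """Hash the file in chunks so a large firmware image doesn't need to fit in RAM."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of(FIRMWARE_FILE) == PUBLISHED_SHA256:
        print("Checksum matches: reasonably safe to upload to the printer")
    else:
        print("Checksum mismatch or no published value: do not flash this file")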

Nice Tech, Pity About The Product

Like many IT types, I have a workspace with a tendency to acquire obsolete technology. When I shared a flat in London with somebody else who lives with the same condition, computers significantly outnumbered people; heck, operating systems sometimes outnumbered people, even after our then-girlfriends/now-wives moved in! At one point, we even had an AS/400 desk-side unit that we salvaged, until we realised we really didn't have anything fun to do with it and moved it on again.

In the big clear-out last year, I got rid of a bunch of the old stuff — yes, even some of the cables! One item made the opposite journey, though, from the depths of a box inside a cupboard of toner cartridges underneath a monitor so old it still has a 4:3 aspect ratio, to pride of place in my line of sight from my desk chair.

That item is the installation media for a thoroughly obsolete computer operating system from the 90s.

What Even Is BeOS?

BeOS was the brain-child of a bunch of ex-Apple people, including Jean-Louis Gassée, who worked for Apple through the 80s and was instrumental in the creation of the Newton, among other things. While Apple spent the 90s trying and failing to create a new operating system to replace the aging MacOS, Gassée and his merry band created a brand-new operating system called BeOS. The 90s were probably the last time in history that it was possible to do something like that; the platforms that have emerged since then (iOS and Android) are variations on existing platforms (NeXTSTEP/OS X, which slightly predates BeOS, and Linux respectively).

Initially targeted at AT&T's Hobbit CPUs, BeOS was soon ported to the PowerPC architecture. These were the CPUs that powered Apple computers at the time, the product of an alliance between Apple, IBM, and Motorola. Between them, the three companies hoped to foster the emergence of an ecosystem to rival (or at least provide an alternative to) Intel's dominant x86. In those days, Apple licensed a handful of manufacturers to build MacOS-compatible PowerPC computers, so Be quickly stopped manufacturing their own BeBox hardware and switched to offering the BeOS to people who owned these computers — or actual Apple Macs, I suppose, but even at the time you didn't hear of many people doing that.

This is where BeOS first entered my life. If you can believe it, the way you found out about cool software in those pre-broadband days was to buy a printed magazine that would come with a CD full of demos, shareware, utilities, wallpapers, icon sets, and more. There were a few magazines that catered to the Apple enthusiast market, and in 1997, I happened to pick one up that included Preview Release 2 of the BeOS.1

Luckily for me, I owned a whopping 500MB external SCSI drive, so I didn't have to mess around with reformatting the main HDD of the family computer (which probably held all of 2GB at the time, kids!). I was quickly up and running with the BeOS, which absolutely blew away the contemporary Macintosh operating system.

Why Bother With BeOS?

The performance was the first and most obvious difference between BeOS and MacOS. Just watching GLTeapot spinning around in real time was amazing, especially compared to what I was used to in MacOS on the same hardware. Check out this contemporary review, focusing specifically on BeOS’ multimedia capabilities.

This was also my first exposure to a bash terminal, or indeed any command-line interface beyond MS-DOS, and I can safely say that it was love at first sight, especially once I started understanding how the output of one command could be passed to another, and then the whole thing wired up into a script.

BeOS was properly pre-emptively multitasking and pervasively multithreaded, in a way that Classic MacOS very definitely wasn't. This made me consider it as a full-time replacement for MacOS on the family computer, but the lack of hardware support killed that idea. Specifically, the Global Village Teleport fax/modem which was our connection to the early Internet, running at a blazing fast 14.4kbps, did not work in BeOS.

This lack was doubly annoying since BeOS shipped with an actual web browser: NetPositive, one of whose claims to fame was its haiku error messages. At the time, Mac users were stuck between Netscape Navigator, Microsoft Internet Explorer, Apple's almost wilfully obscure Cyberdog, and early versions of Opera.

What Happened To BeOS?

This is where we get to the point of the story. What killed BeOS was not any sort of issue with the technology. It was leaps and bounds ahead of both dominant operating systems of the day, with massive developer interest.

Unfortunately, Be did not own its own destiny. After failing to sell itself to Apple, Be staggered on for a few more years. Once it became obvious that Apple was going to kill the MacOS clone business which powered the ecosystem of non-Apple PowerPC hardware that BeOS ran on, an x86 port was quickly added. By this point dual-booting operating systems on x86 had become, if not exactly mainstream, at least somewhat common in technical circles. Unfortunately for Be, the second OS (of course after Windows) was almost always Linux. A second commercial operating system was always going to be a hard sell in a world where everyone had already paid for a Windows license as part of the purchase price for their PC, to the point that Be literally couldn't even give it away. In fact Be actually sued Microsoft over its alleged monopolistic practices, possibly the last gasp of the First Browser War of the late 90s.2

Be was eventually sold to Palm, and after Palm's own travails, the last vestiges of BeOS disappeared from public view only a few years later.

The lesson here is that the best technology does not always win — or at least, does not win unaided. Execution is key, and Be, despite some very agile pivots, failed to execute to the point of making any meaningful dent in the personal-computer-OS market.

What could Be have done differently? It's hard to say, even with the benefit of hindsight. None of the alternative desktop operating systems that sprang up in the late 80s and early 90s have survived. BeOS? Gone. OS/2 Warp? Gone. All the commercial UNIX systems? Gone — but maybe next year will be the year of Linux on the desktop. NeXT? It got acquired by Apple, and the tech is still with us in every current Apple platform — but if Be had been the one to get bought to replace the failed Copland project, NeXT would certainly have been the one to disappear.

That is the one inflection point really worth considering: what if Gassée had managed to negotiate a deal with Apple back then? What would OS X be like today if it were based on BeOS rather than on NeXTSTEP?3 And… what would Apple be like without Steve Jobs, in hindsight the most valuable part of the NeXT acquisition? There would probably still be a mobile product; one of the key Be employees was Steve Sakoman, godfather of the Newton, so it seems fairly certain that a descendant of some sort would have emerged from a Be-infused Apple. But would it have become the globe-spanning success of the iPhone (and iPad) without Steve Jobs to market it?

One day I would like to own both a BeBox and a NeXTcube,3 but for now I just keep that BeOS PR2 CD as a tech industry memento mori, a reminder to myself not to get caught up in the elegance of the tech, but always to remember the product and the use cases which that tech enables.


  1. I could have sworn it was MacAddict, which was definitely my favourite magazine at the time, but the only references I can find online say it was MacTech, and it's been long enough that I can't be sure. 

  2. Be's travails did inspire at least one high-profile fan, with Neal Stephenson discussing BeOS in his book-length essay In the Beginning... Was the Command Line, as well as giving it a cameo in Cryptonomicon (alongside "Finux", his gossamer-thin Linux-analogue). 

  3. Yes, weird capitalisation has always been part of the computer industry. 

Cloud Adoption Is Still Not A Done Deal

I have some thoughts on this new piece from 451 Research about IT provisioning. The report is all about how organisations that are slow to deliver IT resources will struggle to achieve their other goals. As business becomes more and more reliant on IT, the performance of IT becomes a key controlling factor for the overall performance of the entire business.

This connection between business and IT is fast becoming a truism; very few businesses could exist without IT, and most activities are now IT-enabled to some extent. If you’re selling something, you’ll have a website. People need to be able to access that website, and you need to make regular changes as you roll out new products, run sales promotions, or whatever. All of that requires IT support.

Where things get interesting is in the diagnosis of why some organisations succeed and others do not:

Just as internal IT culture and practices have an impact on provisioning time, they can also severely impact acceptance of technologies. Although the promise of machine learning and artificial intelligence (AI) is emerging among IT managers who took early steps toward machine-enabled infrastructure control, much work remains in convincing organizations of the technologies' benefits. In fact, the more manual the processes are for IT infrastructure management, the less likely that IT managers believe that machine learning and AI capabilities in vendor products will simplify IT management. Conversely, most managers in highly automated environments are convinced that these technologies will improve IT management.

If the IT team is still putting hands on keyboards for routine activities, that’s a symptom of some deeper rot.

It may appear easy to regard perpetual efforts of organizations to modernize their on-premises IT environments as temporary measures to extract any remaining value from company-owned datacenters before complete public cloud migration occurs. However, the rate of IT evolution via automation technologies is accelerating at a pace that allows organizations to ultimately transform their on-premises IT into cloudlike models that operate relatively seamlessly through hybrid cloud deployments.

The benefits of private cloud are something I have been writing about for a long time:

The reason this type of organisation might want to look at private cloud is that there’s a good chance that a substantial proportion of that legacy infrastructure is under- or even entirely un-used. Some studies I’ve seen even show average utilisation below 10%! This is where they get their elasticity: between the measured service and the resource pooling, they get a much better handle on what that infrastructure is currently used for. Over time, private cloud users can then bring their average utilisation way up, while also increasing customer satisfaction.

The bottom line is, if you already own infrastructure, and if you have relatively stable and predictable workloads, your best bet is to figure out ways to use what you already have more efficiently. If you just blindly jump into the public cloud, without addressing those cultural challenges, all you will end up with is a massive bill from your public cloud provider.
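
To make the utilisation point concrete, here is an illustrative back-of-the-envelope calculation in Python. The fleet size and the consolidation target are invented for the sake of the example; only the sub-10% average utilisation figure comes from the studies mentioned above.

    # Hypothetical estate: 200 servers averaging 10% utilisation.
    servers = 200
    current_util = 0.10   # roughly the figure the studies above report
    target_util = 0.50    # assumed post-consolidation target

    # Express the real workload in "fully busy server" equivalents.
    busy_server_equivalents = servers * current_util

    # How much of the existing estate would that workload need at the target utilisation?
    servers_needed = busy_server_equivalents / target_util

    print(f"The same workload fits on ~{servers_needed:.0f} of the existing {servers} servers")
    # -> The same workload fits on ~40 of the existing 200 servers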

Large organisations have turning circles that battleships would be embarrassed by, and their radius is largely determined by culture, not by technology. Figuring out new ways to use internal resources more efficiently (private cloud), perhaps in combination with new types of infrastructure (public cloud), will get you where you need to be.

That cultural shift is the do-or-die, though. The agility of a 21st century business is determined largely by the agility of its IT support. Whatever sorts of resources the IT department is managing, they need to be doing so in a way which delivers the kinds of speed and agility that the business requires. If internal IT becomes a bottleneck, that’s when it gets bypassed in favour of that old bugbear of shadow IT.

IT is becoming more and more of a differentiator between companies, and it is also a signifier of which companies will make it in the long term – and which will not. It may already be too late to change the culture at organisations still mired in hands-on, artisanal provisioning of IT resources; for everyone else, completing that transition should be a top priority.


Photo by Amy Skyer on Unsplash

More of Me

I have not been posting here nearly as much as I mean to, and I need to figure out a way to fix that.

In my defence, the reason is that I have been writing a lot lately, just not here. I have monthly columns at DevOps.com and IT Chronicles, as well as what I publish over at the Moogsoft blog. I aim for weekly blog posts, but that’s already three weekly slots out of four in each month taken up right there - plus I do a ton of other writing (white papers, web site copy, other collateral) that doesn’t get associated so directly with me.

As it happens, though, I am quite proud of my latest three pieces, so I’m going to link them here in case you’re interested. None of these are product pitches, not even the one on the company blog; they are more reflections on the IT industry and where it is going.

Do We Still Need the Datacenter? - a deliberately provocative title, I grant you, but it was itself provoked by a moment of cognitive dissonance when I was planning for the Gartner Data Center show while talking to IT practitioners who are busily getting rid of their data centers. Gartner themselves have recognised this shift, renaming the event to "IT Infrastructure, Operations Management & Data Center Summit" - a bit of a mouthful, but more descriptive.

Measure What’s Important: DevFinOps - a utopian piece, suggesting that we should embed financial data (cost and value) directly in IT infrastructure, to simplify impact calculation and rationalise decision making. I doubt this will ever come to pass, at least not like this, but it’s interesting to think about.

Is Premature Automation Holding IT Operations Back? - IT at some level is all about automation. The trick is knowing when to automate a task. At what point is premature automation considered not just wasteful, but actively harmful?


Photos by Patrick Perkins on Unsplash

The curve points the way to our future


Just a few days ago, I wrote a post about how technology and services do not stand still. Whatever model we come up with based on how things are right now will soon be obsolete, unless that model can accommodate change.

One of the places where we can see that is with the adoption curve of Docker and other container architectures. Anyone who thought that there might be time to relax, having weathered the virtualisation and cloud storms, is in for a rude awakening.

Who is using Docker?

Sure, the latest Docker adoption survey still shows that most adoption is in development, with 47% of respondents classifying themselves as "Developer or Dev Mgr", and a further 15% as "DevOps or Release Eng". In comparison, only 12% of respondents were in "SysAdmin / Ops / SRE" roles.

Also, 56% of respondents are from companies with fewer than 100 employees. This makes sense: long-established companies have too much history to be able to adopt the hot new thing in a hurry, no matter what benefits it might promise.

What does happen is that small teams within those big companies start using the new cool tech in the lab or for skunkworks projects. Corporate IT can maybe ignore these science experiments for a while, but eventually, between the pressure of those research projects going into production, and new hires coming in from smaller startups that have been working with the new technology stack for some time, they will have to figure out how they are going to support it in production.

Shipping containers

If the teams in charge of production operations have not been paying attention, this can turn into Good news for Dev, bad news for Ops, as my colleague Sahil wrote on the official Moogsoft blog. When it comes to Docker specifically, one important factor for Ops is that containers tend to be very short-lived, continuing and accelerating the trend that VMs introduced. Where physical servers had a lifespan of years, VMs might last for months - but containers have been reported to live roughly a quarter as long as VMs.

That’s a huge change in operational tempo. Given that shorter release cycles and faster scaling (up and down) in response to demand are among the main benefits that people are looking for from Docker adoption, this rapid churn of containers is likely to continue and even accelerate.
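
A rough, entirely hypothetical comparison shows why that tempo change matters to Ops. The fleet size and lifespans below are invented for illustration; only the "containers live about a quarter as long as VMs" ratio comes from the reports mentioned above.

    # Invented numbers: a steady-state fleet of 500 workload "slots".
    slots = 500
    vm_lifespan_days = 90.0                          # assumed average VM lifespan
    container_lifespan_days = vm_lifespan_days / 4   # the reported ~4x shorter lifespan

    vm_churn_per_day = slots / vm_lifespan_days
    container_churn_per_day = slots / container_lifespan_days

    print(f"VMs:        ~{vm_churn_per_day:.1f} create/destroy events per day")
    print(f"Containers: ~{container_churn_per_day:.1f} create/destroy events per day")
    # Every one of those events is something monitoring, inventory and alerting have to keep up with.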

VMs were sometimes used for short-duration tasks, but far more often they were actually forklifted physical servers, and shoe-horned into that operational model. This meant that VMs could sometimes have a longer lifespan than physical servers, as it was possible for them simply to be forgotten.

Container-based architectures are sufficiently different that there is far less risk of this happening. Also, the combination of experience and generational turnover means that IT people are far more comfortable with the cloud as an operational model, so there is less risk of backsliding.

The Bow Wave

The legacy enterprise IT departments that do not keep up with the new operational tempo will find themselves in the position of the military, struggling to adapt to new realities because of its organisational structure. Armed forces set up for Cold War battles of tanks, fighters and missiles struggle to deal with insurgents armed with cheap AK-47s and repurposed consumer technology such as mobile phones and drones.

In this analogy, shadow IT is the insurgency, able to pop up from nowhere and be just as effective as - if not more so than - the big, expensive technological solutions adopted by corporate. On top of that, the spiralling costs of supporting that technological legacy will force changes sooner or later. This is known as the "bow wave" of technological renewal:

"A modernization bow wave typically forms as the overall defense budget declines and modernization programs are delayed or stretched in the future," writes Todd Harrison of the Center for Strategic and International Studies. He continues: "As this happens the underlying assumption is that funding will become available to cover these deferred costs." These delays push costs into the future, like a ship’s bow pushes a wave forward at sea.

(from here)

What do we do?

The solution is not to throw out everything in the data centre, starting from the mainframe. Judiciously adapted, upgraded, and integrated, old tech can last a very long time. There are B-52 bombers that have been flown by three generations of the same family. In the same way, ancient systems like SABRE have been running since the 1960s, and still (eventually) underpin every modern Web 3.0 travel-planning web site you care to name.

What is required is actually something much harder: thought and consideration.

Change is going to happen. It’s better to make plans up front that allow for change, so that we can surf the wave of change. Organisations that wipe out trying to handle (or worse, resist) change that they had not planned for may never surface again.

It’s Tough to be King

So what’s it like to live with the Apple TV? Have any of my complaints been addressed?

In a word: no.

I still don’t have access to Siri, for no good reason that I can determine. A couple of attempts to get a response from @AppleSupport over Twitter did not go anywhere.

In fact, things got even worse, as a software update added "hold to dictate" prompts everywhere, which of course do nothing for me.

Subtle hint for Apple: if the system language is English, and Siri supports English, Siri should be enabled.

Apart from that, the thing has been - fine. It’s a substantial update from the Apple TV 2 which I had before. Even the new remote (despite having one button - the voice prompt - that is completely useless to me) is better than the previous one. People complain that it’s too easy to end up holding it upside down, because it’s symmetrical, but in my experience the combination of the rougher surface of the touch pad and the double-height volume control is enough to keep me oriented.

Text input is terrible, but that’s pretty much inevitable without a keyboard - and that’s when I turn to the Remote app on iOS. I think it’s a safe enough bet that Apple TV owners will also have at least one iOS device lying around.

I can’t really comment on the games; I tried out a few, but the sad fact of the matter is that I’m just not that much of a gamer any more. Sure, Alto’s Adventure is fun, but if I hadn’t already owned it for iOS, I doubt I’d have bothered. I tried out a few racing sims, and didn’t even finish the free levels.

This probably says more about me than about the Apple TV’s capabilities as a gaming console. I’ve never been a console gamer in the first place; I was always a PC guy, preferring big sprawling RTS and sim games. The problem I have is not that those don’t translate to iOS (or tvOS), it’s that they require hours-long play sessions, and I just don’t have plural hours to spend gaming any more.

For my purposes - streaming from my local iTunes library, from the iTunes Store, and from YouTube - the Apple TV is fine. Most of the exciting cord-cuttery stuff isn’t available in my geo, and there just aren’t that many other categories of apps that make sense on a TV as opposed to on a phone or a tablet.

So what are you saying?

It’s not a flop, it’s a very capable fourth-generation device. It’s not transformative, but not every device has to be.

I am also coming in with low expectations, because even if Apple had somehow negotiated deals with content owners, I am certain that I would not have access to them in Italy. Seriously - we don’t even have visual voicemail over here. Forget about HBO or any of that stuff. Even what we do have, like Netflix, is crippled.

Much like iTunes and Apple Music, it’s perfectly fine for what it does. Could it be better? Sure. Should we demand more from Apple? Absolutely. But calling it a flop, a failure, a mess, an embarrassment? That’s going too far.

Now if you’ll excuse me, I have some YouTube to watch on my Apple TV.

Quick Text Shortcuts

I tend to assume that things I know are obvious and widely known, and so I don’t often bother to document them. However, I noticed that a couple of different people did not know this particular very useful trick, so I thought I would share it here for anyone else who might find it useful.

The trick (I refuse to call it a "hack", or even worse, a "life hack") is useful if you often need to type the same snippets of text on an Apple device, whether it’s an iPhone, an iPad, or a Mac. You can do this using only built-in tools from Apple, with no need to install additional components or mess with anything under the hood.

On a Mac, go to System Preferences > Keyboard > Text. Here you can create the shortcuts that will be useful to you. You should have one defined already, which replaces "omw" with "On my way!".

Simply click the + button at the bottom of the window to add your own snippets. I have a couple for my phone number and email address, so that I can simply type "mynum" or "mygmail" to have those appear, with no fear of typos.

This is of course even more useful on an iPhone, where the small keyboard can make it frustrating to type when you can’t rely on autocorrect - and doubly frustrating to type phone numbers in the middle of other text. On an iPhone (or an iPad), go to Settings > General > Keyboard > Text Replacement, and then tap the + to enter your own snippets.

The cherry on the cake of usefulness is that the text snippets will sync over iCloud, so any snippets you set up on one of your devices should be available on all your other devices too.

Enjoy!

In which I make a Discovery

Everyone who cares probably knew this already, but I just discovered something cool with iOS multitasking.

If you have an iPad Air 2 or an iPad Pro, you can run two apps side by side on the screen. I was doing this so that I could listen to music via YouTube while twittering, because Apple in their wisdom mute Safari if it's backgrounded. You have to do proper multi-tasking, not just slide-over, which is why this only works on those two models.

I was already pretty happy with my solution, devoting untold amounts of innovation and computing horsepower to wasting time more efficiently than ever before - and then I clicked on a link in Twitter, and a whole new world of possibilities opened up to me.

Now normally if you click a link in Twitter for iOS when it's running in full-screen mode, the linked page opens in an embedded mini-browser, which is of course the Wrong Thing.

If on the other hand you click a link while Twitter is running side by side with Safari, the linked page opens directly in a new Safari tab!

Amazing, right? Right?

Okay, this is a pretty niche use case, but it makes me unreasonably happy. I hope this post is useful to someone else, too.

Uphill Both Ways

Is IT getting too easy?

I was listening to the latest episode of the excellent Exponent podcast, where the topic of PC gaming came up. The hosts were discussing the rise of the Steam platform, and ascribed it primarily to convenience.

PC gamers used to build custom rigs, worrying about thermal profiles and harried by IRQ conflicts. They would then get their games - on physical media! in boxes! - and install them, then immediately patch the game, their video drivers, and possibly several other things. Because of all this, gaming was a demanding and niche pastime.


With solutions like Steam and the decline of the custom-built PC in favour of self-contained laptops, the level of convenience rose enormously. Something like Steam could never have risen to be a $1.5B business twenty years ago. However, I do wonder if we are losing some necessary skills to convenience.

When Steam appeared on the scene, everyone knew what it was doing and enjoyed the convenience of not having to do it themselves. Very quickly though, new gamers came on the scene who had not experienced the old ways. All they knew was downloading games from Steam and having everything taken care of for them, even including updates and patches as they became available. Even mods, which had always been tricky to install and always came with the exciting potential to blow away your install, became easy with Steam.

Of course the skills required to be a gamer are of limited macroeconomic utility. It could be argued that keeping gamers busy chasing down IRQ conflicts would prevent them from embarrassing themselves in public (ahem Gamergate ahem), but there is a wider point. The same choice of convenience over detail plays out in enterprise IT as well, where sysadmin skills are getting harder to find.

Gamers used to need to know the precise version of the driver of their graphics card. Now, many gamers barely know whether they have one. In the same way, sysadmins used to have deep knowledge of what was behind the door of their server room, while now all they know is the login to the corporate AWS account. Meanwhile, key bits of infrastructure are still running on obsolete operating systems that nobody knows how to operate any more.

So as the consumerisation of IT rolls inexorably on, will IT users at all levels turn into Eloi relying on a handful of BOFH Morlocks to keep everything running?

Or is this like vintage car people claiming carburettors added character to engines that was lost when electronic fuel injection made the whole thing too easy?


Image by MootCreative via morgueFile


Apple Bottom Drawer

There has been a long-running complaint that equipping the entry-level iPhone with only 16GB of storage is not only cheap, but wrong-headed because owners will have a bad user experience. Most of the time, the example people bring up is operating system upgrades, with people forced to stay on older iOS releases because they don’t have enough free space to perform the upgrade.1

As per their usual tight-lipped policy, Apple has not said anything about precisely why it is that they continue to keep the 16GB models around. The general assumption has been that the idea is to offer a (relatively) low entry price for the iPhone range to get as many people as possible through the door.


Today, though, I overheard a conversation that illustrated a different reason why Apple might want to increase the storage in that bottom-tier device sooner rather than later. Someone recommended an album, someone else searched for it on iTunes, hit "Buy" - and was told that they did not have enough space. When storage limits are preventing sales, this is a problem.

One obvious quibble would be to ask how many owners of entry-level devices spend significant sums in the iTunes Store (or would do if they had the free space available). This overlooks the fact that these days, a significant number of iPhones are actually corporate-owned or at least -funded. Because the owner is not the user, it is not possible to infer the user’s purchasing power or willingness based on the device they have. Companies may well opt for limited storage because that’s all that is required for work purposes, even though employees would be willing to fill additional space with personal data, given the chance.

Bottom line: it’s high time for the bottom storage tier to move up to 32GB. I would also argue that when they do this, Apple should eat the difference and not raise prices, because their margin is big enough and the parts cost is so small. The improvement in user experience would pay for itself in Tim Cook’s beloved "customer sat", without even allowing for increased average revenue per user (ARPU) as people become able and willing to fill up some of that free space.


  1. Yes, I know that you can also upgrade by plugging into iTunes without needing the free space, but these days, many iPhone owners don’t come from the iPod experience and would not necessarily think of that. Many of them in fact don’t even have iTunes installed, or may not even own a PC or Mac in the first place.