Hello there

Sun peeking through snow clouds at Les 2 Alpes. This was taken with my iPhone 11 Pro, and I’m just amazed at what it was able to do with the light and the snowflakes.

Note taking

It’s been ten years since the launch of the iPad, so it seems appropriate to reflect back on what effect it has had. The two best retrospectives that I’ve read are by Federico Viticci and Steven Sinofsky.

Personally, I’ve owned three iPads; the original squared-off one was more a proof of concept than a fully-fledged device, but I loved it to bits, and hung onto it until the first Retina iPad came out. Again, that was perhaps an early release and was quickly superseded, but I hung onto it until the 10.5" Pro tempted me with its keyboard cover and Pencil. Now I’m just waiting for the current Pro to be refreshed, especially as my keyboard cover appears to have died.

The input devices, whether keyboard or stylus, are what I really wanted to talk about in this post. My work involves a fair amount of note-taking, whether in a meeting, during a presentation, or while brainstorming, on my own or in a group. I find the iPad to be the ideal device in all these situations, but before explaining why, I need to take a step back.

Many people will tell you that there are all sorts of cognitive benefits to taking notes with pen and paper as opposed to an electronic device. Most articles you will find online link back to this study published by the Association for Psychological Science. I don’t have access to the full paper, but here’s the abstract:

Taking notes on laptops rather than in longhand is increasingly common. Many researchers have suggested that laptop note taking is less effective than longhand note taking for learning. Prior studies have primarily focused on students’ capacity for multitasking and distraction when using laptops. The present research suggests that even when laptops are used solely to take notes, they may still be impairing learning because their use results in shallower processing. In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.

I should admit up front that I fully agree with this study’s conclusions, based on my own empirical experience.

Laptops Are Not Good For Notes

The study examined students who took notes on laptops and compared them to students who took notes longhand. These two tools represent fundamentally different modes of thinking, and so it’s not surprising that the results are different. Laptops are linear, constraining users to interfaces that assume sequential text entry. They are great for editing, and for composition of certain types of content, but not great at enabling non-linear jumps and exceptions to structures. More complex options like mind-mapping software add cognitive load without guaranteed benefits. I certainly find I spend more time futzing with the map, especially when trying to follow someone else’s train of thought, than actually taking notes!

Distraction – and the Suspicion of Distraction

The abstract sets distraction aside, and certainly if you’re motivated, you’ll find ways to avoid distractions. I will note in passing that iPads are better than any laptop here, simply because they default to showing a single app taking up the whole screen. Sure, you still get notifications, but those can be curated more easily than on a laptop. Almost equally important, though, is the perception of distraction on the part of onlookers. An open laptop screen creates a barrier between speaker and note-taker, and it’s usually not possible for the speaker to know whether their counterpart is distracted by something else.

I actually ran into this back in the day. In the early years of the century I was the only person I knew using a stylus-equipped smartphone – a Sony Ericsson P800, and later a P990 – and I would sometimes use them to take notes in meetings.

The handwriting recognition was actually pretty decent – although I’ve never used an Apple Newton, so I can’t compare the two. The one drawback was that I would get funny looks from other people in the meeting; in fact, a more senior colleague once told me to "stop playing with the phone" and demanded to see the screen before he would believe I wasn’t playing a game or something!

Write and Forget

The reason I was going to the effort of taking notes with a fairly rigid handwriting recognition system on a small screen is that, while it required a bit of effort in the moment, it made my life easier after the meeting: I could quickly send notes via email, copy them into a CRM (Siebel or Salesforce), and search them and collate them with records of past interactions.

Doing that with notes on paper is kind of hard. Notes that start life in electronic form make it trivial.

Back to the iPad

The iPad is the perfect device for all of these reasons and more. Using the Pencil, I can take notes freeform, without worrying too much about context. I can circle things, draw arrows, insert diagrams, and even drop in a photograph I took of a speaker’s slide – and then write and draw on that.

By taking notes like this, I keep all the cognitive benefits of taking notes on paper, but all the notes I take can also be tagged and searched, so that I can easily refer back to them and link them together. An additional benefit is that it’s trivial to share those notes with others after the fact.

Finally, I can do all of this with the iPad flat on the table, without creating a barrier between me and someone I’m speaking to. If we’re face to face, we can even start sketching together, as we might have done in another age with the proverbial cocktail napkin.

The one thing that’s missing, in fact, is a collaborative version of this way of working. I’ve been in group meetings where we use a Google Doc as a combination of note-taking and back-channel communication. This approach works best when there is one dedicated note-taker, who is advertised as such, so the speaker is comfortable with their constant typing. Other participants can then dip in and out as needed. It is possible to join in from an iPad, but it’s not ideal, and I would love something like a flexible shared canvas with textual notes pinned to it. Send me a beta code if you decide to build this, won’t you?

The Internet of Unwelcome Gifts

It’s that time of year when many of us are out buying gifts for ourselves or others – or if you’re tight like me, waiting for the sales in the New Year to buy those big-ticket items. Ahem. Regardless, please do not buy IoT / "smart" devices as gifts for people you care about.

Here’s the thing: at this point in time, most people who want a dedicated assistant-in-a-can device already have one. If they don’t, it may be because they realise they would hardly use it – most of these things are only ever used to play music and maybe set a timer. The first many of us knew about Amazon’s efforts to sell Alexa skills for actual cash money was when they missed their revenue forecast… badly. How badly did they miss? Well, against what I would have thought was a pretty conservative target by Amazon’s standards of $5M, they achieved… $1.4M. That’s 28% attainment, also known in sales circles as "pack up your desk and get out – and be quick about it, I already called Security". In other words, very few people are using Skills at all, and basically nobody is paying for them.

Of course there are any number of surreptitiously "smart" devices. For instance, these days it is pretty much impossible to buy a consumer TV without an operating system powerful enough to connect to the Internet over wifi and run streaming-video apps. This also means they are powerful enough to snoop on users’ behaviour. You might think this is not too bad – after all, YouTube already knows exactly which cute cat videos you watched – but these days, the state of the art is capturing whatever is displayed on screen and trying to run analytics on that. If you watch home videos or display your photos, well, the privacy policy you clicked through when you set up the TV says it’s okay for the company to own those now. This is why even staid Consumer Reports is offering advice on turning off snooping features in smart TVs – and yes, they called it "snooping", not me.

If you think TVs are bad, other categories are even worse; see this IEEE report that calls out security risks of drones, vibrators, and children’s toys.

All of this means that there is a good chance that your possible gift recipient, especially if they are technically inclined, has considered and rejected smart devices for security reasons. In case you think I’m just a lone crank over here in my tinfoil hat, it’s worth noting that the FBI issued notices about securing smart TVs around Black Friday, while the French government just sent out this warning about an internet-connected food processor.

At least someone with some technical skills might have a chance of heading off the snooping at the network edge with something like a Pi-Hole. Definitely don’t buy anything with an Internet connection for your Muggle friends and relatives!

This is the sort of thing that Mozilla’s excellent Privacy Not Included project is designed to highlight. Note that this is not a blanket anti-tech position; if you browse over to the Privacy Not Included site, there are a ton of "smart" devices that are not creepy. But then there are the others, such as the infamous Ring camera, which manages a hat trick of terrible security, accommodation with a surveillance-driven police state, and enablement and reinforcement of racist tendencies.

In this context, Apple’s announcement that they are joining forces with Amazon, Google, and the Zigbee Alliance to establish a new, more secure and interoperable IoT standard may be a hopeful sign that the Wild West era of ill-considered experimentation in IoT is coming to an end – or it may be a well-intentioned standard that simply ends up gathering dust on a shelf in Cupertino.

Turn up the heating, I’m freezing!
I’m sorry, Dave, I can’t let you do that.

Regardless, don’t buy any devices that are too smart for their own good – or more importantly, yours. If there is no good reason for a thing to be "smart", then stick to the dumb version: it no doubt works better today, and won’t be obsolete tomorrow when the vendor goes out of business or simply terminates support for that product line.

Classification

I just found out about Kurt Gebhard Adolf Philipp Freiherr von Hammerstein-Equord, and specifically this wonderful quote of his (emphasis mine):

There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage.

I do not generally like military metaphors, but this classification seems very applicable to the enterprise world. We can all think of that one person who would make an immeasurable contribution to the work by just stopping what they are doing.

See also the VP of Nope.

Power Tools

It’s not easy to hit the right balance between making things easy for new or infrequent users and enabling power users to be very efficient. It’s even harder for a tool like Slack, which by definition has to be universally adopted in order to succeed.

This is why it’s particularly important to note the discussion about the recent changes to Slack’s editing functionality. The message editor used to accept Markdown, which is the sort of thing power users love, but others, eh, not so much.

Markdown was created as a quicker and simpler alternative to HTML, but with the aim of catering to the sort of people who would otherwise be crafting HTML by hand in a text editor window. I can pop open BBEdit or vi and start right from

```
<head>
<title>This is my new document</title>
```

and go from there, but it’s a bit of a faff. Markdown makes it easy for me, a power user, to be more productive.
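
For comparison, here’s a rough sketch of my own – purely illustrative, not taken from any particular site or tool – of what the same kind of content looks like in Markdown:

```
# This is my new document

A paragraph with *emphasis*, **bold text**, and a [link](https://example.com).

- a bullet point
- another one
```

No tags to open and close, and the source stays readable even before it’s rendered – which is exactly the appeal for people who already live in text editors.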

The problem with Markdown, especially when it’s implemented inline like Slack did, is that it’s not particularly discoverable. Unless you already know what Markdown is and that it’s supported in whatever window you’re typing in, you’re unlikely to stumble across the functionality by accident.
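
To illustrate – and this is from my own memory of Slack’s markup rather than any official documentation, so treat the details as approximate – the power-user way of formatting a message looks something like this:

```
*bold*  _italic_  ~strikethrough~  `inline code`
> a quoted line
```

None of this is visible or hinted at in the message box itself; you either already know it works, or you don’t.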

This is why Slack built a rich text editor which shows all the functions – bold, italic, list, hyperlink, and so on – visually in a toolbar. This makes it much easier for people who might never have done so in the past to add formatting to their messages – and anecdotally, that is exactly what I have been seeing. This is known as a WYSIWYG editor, where the acronym stands for "What You See Is What You Get". Appropriately enough, I first became familiar with the concept when the first WYSIWYG editors for HTML started to come out. Those of us who were used to crafting our HTML by hand in text editors scoffed at the inelegant HTML these tools produced, but they were quickly adopted by vast numbers of people who had been intimidated or simply turned off by the blank stare of a new text editor window.
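
For anyone who wasn’t there: the scoffing was about output that looked roughly like this – a caricature from memory, not the output of any specific tool – where every run of text dragged its formatting along with it:

```
<font face="Arial" size="2"><b><font color="#000000">Welcome to my
home&nbsp;page</font></b></font><br>
<font face="Arial" size="2">&nbsp;</font><br>
```

A hand-crafter would have written `<h1>Welcome to my home page</h1>` and been done with it – but the people using these tools neither knew nor cared, and that was rather the point.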

WYSIWYG tools are a major democratising force, opening up functionality to huge groups of users who would not otherwise have had access to them. However, as with many user-assistive functionalities, they need to be implemented with care so that they do not become obstacles for users who do not require (or prefer to do without) their assistance.

The problem in Slack’s case is that the way the WYSIWYG editor is implemented breaks Markdown quite badly. In fact, the reaction got so bad that there is a Chrome plugin to disable the new editor.

To their credit, Slack are apparently walking back the changes and will rethink their approach:

Our recently introduced WYSIWYG formatting toolbar was developed with that broader customer community in mind. We thought we had nailed it, but we have seen an outpouring of feedback from customers who love using Slack with markup.

I don’t necessarily blame Slack for the original miss; it’s not easy to combine direct editing with WYSIWYG in the same window, and testing for all the edge cases of a markup language like Markdown is a genuinely hard problem. It’s also worth noting that, as a percentage of the user base, these Markdown issues will hit a very small number of people. The problem is that those are also the most passionate and dedicated users, so upsetting them will have disproportionate effects on user satisfaction overall. Kudos to Slack, though, for listening to feedback and revisiting the changes.

This whole débâcle does speak to a more general problem, though. I liked early search engines, where you could type

"this phrase" AND "that phrase"

with a reasonable expectation of getting only matches that contained both of those phrases. Instead, search engines nowadays will "helpfully" try to work out what you really meant and return that instead, and it is frustratingly hard to persuade them to stop trying to help and get out of my way.

Providing assistive interfaces for those who need them is both a Good Thing in general and good for product growth. Power users who are dedicated to learning something will jump through whatever hoops they need to, but casual users will bounce off a learning curve that is too steep. The best compromise would seem to be a toggle somewhere that lets power users turn off the helpful interjections and talk directly to the machine (see also: autocorrect).

By giving both groups of users what they need, a user interface with this dual nature will both deliver easy onboarding of new users, and enable power users to work efficiently. I hope Slack figures it out quickly and becomes an example of Doing It Right.

Won’t Somebody Think of the (Virtual) Users?

Here’s the thing with VR: nobody has yet figured out what – or who – it’s actually for.

It seems like you can’t throw a rock without hitting some wild-eyed evangelist for VR. Apparently the next big thing is going to be VR tourism. On the one hand, this sort of thing could solve problems with overcrowding. Imagine if, instead of viewing the Mona Lisa – smaller than you expected, behind a barrier of smudged glass and smartphone-wielding fellow tourists – you could spend undisturbed time as close as you wanted to a high-resolution scan. And of course, being VR, you could take selfies from any angle without needing to wield a selfie stick or worry about permits for your camera drone.

On the other, you wouldn’t get to spend time in Paris and experience everything else that the city has to offer. At that point, why not just stay home in your favourite chair, enjoying a piped-in virtual experience, like the passengers of the cruise ship in Wall-E?

That’s the question that the VR industry has yet to answer successfully. Much like commercial-grade fusion power, it remains fifteen years away, same as fifteen years ago, and fifteen years before that. In fact, back at the tail end of last century, I played Duke Nukem 3D in a pub¹ with goggles, a subwoofer in a backpack, and something called a 3D mouse. The whole thing was tethered to a pretty hefty gaming PC, which back then probably meant a 166 MHz CPU and maybe a first-gen 3dfx Voodoo graphics card.

It was fun, in the immature way that Duke Nukem was, but once the novelty of looking around the environments had worn off, I didn’t see anything that would make me pay the not-inconsiderable price for a similar setup for myself.

A couple of years ago I was at some tech event or other – maybe MWC? – and had the chance to try the then-new Oculus headset. I was stunned at how little the state of the art had moved forward – but that’s what happens when there is no clear use case, no pull from would-be users of the product, just push from people who desperately want to make it happen.

Now, the (virtual) chickens are coming home to roost. This piece in Fast Company admits the problems, but punts on offering any solutions.

The industry raised an estimated $900 million in venture capital in 2016, but by 2018 that figure had plummeted to $280 million. Oculus—the Facebook-owned company behind one of the most popular VR headsets on the market—planned to deliver 1 billion headsets to consumers, but as of last year had sold barely 300,000.

Investments in VR entertainment venues all over the world, VR cinematic experiences, and specialized VR studios such as Google Spotlight and CCP Games have either significantly downsized, closed down, or morphed into new ventures.

[…]

Ultimately it is down to VR developers to learn from existing success stories and start delivering those "killer apps." The possibilities are limited only by imagination.

Apple, more clear-headed than most, is postponing the launch of its own VR and AR efforts. This is particularly significant because Apple has a history of not being the first mover in a market, but of defining the use case such that every other player follows suit. They did not have the first smartphone, or even the first touchscreen, but it’s undeniable that these days almost every phone out there looks like an iPhone.

It’s not clear at this stage whether the delay in their AR/VR efforts is due to technology limitations or the lack of a clear use case, but either way, the fact that they could not see a way to a useful product does not bode well for anyone else trying to make a go of this market.

Shipping The Org Chart

The players who are staying in are the ones who want VR and AR to succeed for their own reasons, not because they see huge numbers of potential users clamouring for it. This is a dangerous road, as Sun found out to their cost, back in the day.

Read the whole thread, it’s gold.

Here’s the problem for VR: while I don’t doubt that there is a small population of hardcore gamers who would love deeper immersion, there is no killer app for the rest of us. Even console gaming is struggling, because it turns out that most people don’t graduate from casual gaming on their smartphones to "serious gaming". This is the other thing that will kill Google Stadia.

The one play that Apple might have is the one that seems to be working with Apple Arcade: first get devices everywhere, then slowly add capabilities. If Apple came out with a physical controller, or endorsed a third-party one, Apple TV would be an interesting contender as a gaming platform. The same thing could work with AR/VR, if only they can figure out a use case.

If it’s just the Google Glass thing of notifications, but RIGHT IN YOUR EYEBALLS, I don’t think it will go anywhere. The only convincing end-user demo I’ve seen is walking or cycling navigation via a virtual heads-up display, but again, that’s a niche use case that won’t support an entire industry.

This one image set back the AR industry by decades.

I already don’t have time for video, because it requires me to be somewhere where I can pay attention to the video, listen to the audio, and not be interrupted for maybe a quarter of an hour. Adding the requirement for substantial graphics support and power consumption, not to mention the headset itself, and extending the timeline to match, further reduces the applicability of this technology.

But go ahead, prove me wrong.


🖼️ Top photo by Juan Di Nella on Unsplash


  1. This was back in the good old days before drinking-age laws were introduced, which meant that all of us got our drinking done when all we were in charge of was bicycles, limiting potential damage. By the time we got driving licenses, drinking was somewhat old-hat, so there was much less drive to mix the two. 

Conference Booth Do's and Don'ts

Conference season has started up again with a vengeance after the summer break. If you’ve ever staffed or attended a conference, you know that there is always a room (or a hallway, or an out-of-the-way closet) where sponsors can set up more or less elaborate booths and talk to attendees about their offerings.

Staffing a booth is a particular discipline, with significant variations depending on the intersection of which company you represent and which event you are at. Let’s go through some of the factors that go into deciding what goes in a booth – or not.

What is the goal of the sponsorship?

Depending on the company and the event, the goal of an event sponsorship can vary widely. Sometimes you might be there to scan literally every attendee’s badge and get their contact details so that you can follow up later. In this case, you want the flashy giveaway, the must-play game, and in general the fun, look-at-me booth. You also want to make sure that you can process people through pretty quickly; it’s a numbers game.

In other situations – different event audience, or different product and pitch on your part – that is exactly the opposite of what you want. You are aiming for a smaller number of longer and deeper conversations. The sorts of attendees you want will be turned off by queues or flashy displays, and may prefer a sit-down conversation to standing at a demo pod.

Make sure that both Sales and Marketing agree on the goals! I have personally been involved in events that Marketing considered a great success – "look at how many leads we generated!" – but that Sales dismissed as a waste of time – "those leads don’t convert". Have that conversation up front, because afterwards it’s too late.

Outside help

At many events, at least some of the booth staffers will be outside contractors, not employees of the company sponsoring the booth. A few years ago "contractor" would have been a euphemism for "booth babe", someone significantly younger than the average conference attendee, generally of the opposite sex to most of the attendees, and wearing significantly less clothing. This kind of contractor is there mainly as eye candy to attract passing traffic.

At least at the sort of conference I go to, the straight-up "booth babe" sort of thing has more or less completely died out – and good riddance to it. Even so, there are still a lot of contractors about, especially at larger events such as Mobile World Congress. They are there to give a pre-rehearsed short pitch and hand out collateral and swag, no more.

There is nothing inherently wrong with using outside help in this way, but it does influence what the typical attendee experience of your booth will be – and therefore what type of leads you will get.

Be in the room

If you’re working a booth, again, know what your goal is. If you want all the leads you can get, go stand out in the hallway with an armful of T-shirts or beer coozies or whatever your giveaway is, and scan everybody in sight. If you’re after more in-depth conversations, stay in your booth perimeter and wait for people to come to you.

Either way, don’t just hang out in the booth, playing with your phone or talking to your colleagues – and definitely don’t get out the laptop and try to work in the booth. You’re there to be available to attendees! If you need to do something urgently, step out of the booth, find a café or whatever, and work from there. There may be a sponsor lounge, or if you’re a speaker there is almost always some sort of green room with WiFi and coffee – and with any luck, a somewhat ergonomic table to work at.

Booth design matters

The booth design is also a factor, and it will change based on your company’s market profile, the event, and once again, your goal for the event. If your company is well-known enough that people will stop by just to see what you’re up to or grab the latest swag, your booth needs to be all about whatever is the newest thing you want to get out there. If you are a startup or a new entrant, you need something eye-catching that explains what your core value proposition is. Either way, keep it simple: nobody reads more than a handful of words on a booth, and they need to be able to do that from a distance, on the move, with a crush of people between them and you.

Different events may also need different designs. If you’re at, say, a Gartner event where most of the attendees are dressed formally, you need to be a bit more grown up too, both in wording and in presentation. Focus on business value and outcomes rather than tech buzzwords. On the other hand, if you’re at a tech-centric event where most people are wearing black T-shirts, you want the feature checklist front and centre, and your benefits need to be couched in technical terms too. This is literally a feeds & speeds crowd, and you should cater to that.

Collateral and handouts

Collateral is a hard one. I have long advocated doing away with take-home collateral entirely, and instead offering to email people about topics they care about – which is an excuse to have a conversation and uncover those topics! You might also consider a row of QR codes on a wall that people can scan to request particular items. This is both more ecological and more practical, since most printed collateral is never read.

However, in certain industries and regions people do actually want something to take away with them, so be aware of those preferences and make sure you cater to them.

The one piece of printed collateral I do like to have in a booth is an architecture diagram, because you can pick that up and use it as a visual aid in conversations with people, even if they never take it with them. In smaller situations I’ve also done this with a diagram printed on the wall or even a whiteboard in the booth, but when there are multiple people who might need to use the visual tool, it can get messy. Better to have one each!

I wrote down some more in-depth advice about conference collateral here.

Further reading

Those are my thoughts, but here are some more from Cote. There is some excellent advice here – do read it! You can sign up for his newsletter here – and if you like this sort of thing, his podcast is pretty good too.


🖼️ Photos by Jezael Melgoza and Cami Talpone on Unsplash

Problem Solving

Take intractable problem. Abandon intractable problem. Run errands. Return home. Play with coloured pens for a Pomodoro. Transfer clean copy to iPad while Mac updates itself. Accomplishment.

Two key parts: doing something else to give your brain space to mull over the problem, instead of trying to solve it by head-butting a brick wall into submission; and structuring your time working on the problem, breaking it into chunks that feel approachable.