My Babbage posts

AS THE futurologist Paul Saffo likes to observe, most ideas take 20 years to become an overnight success. The basic technology is often worked out, but without someone to champion it, it spreads only slowly. Then, eventually, a big company takes the technology in question and builds it into its products, thus endorsing the idea and giving it scale. Suddenly, it takes off. It seems to be an overnight success, but it has actually taken much longer than that to reach the mainstream.

Over the years Apple has blessed several technologies in this way and brought them to a mass audience: the graphical user interface with the Mac, the digital music player with the iPod, mobile internet-browsing and multi-touch screens with the iPhone, tablet computers with the iPad. In each case there were previous examples of the technology, but Apple showed how it should be done, and it then took off. There’s another such technology that has been around for several years and is ready for lift-off; it just needs the endorsement of a big company like Apple to make it happen. The technology in question is “near-field communication” (NFC) chips, which can be used to make contactless payments, among other things.

If you use a contactless card as your office pass or public-transport ticket (prominent examples are Octopus in Hong Kong and Oyster in London) then you’ll already be familiar with the basic idea: you hold the card near a reader (I keep my Oyster card inside my wallet) and the ticket barrier opens. There are also contactless credit cards in several markets (though not very many retailers accept them yet), and contactless key fobs, using the same technology, which let you pay for petrol with a swipe. This technology has been slowly spreading for years.

What an NFC chip does, however, is enable a mobile phone to emulate one of these contactless cards. The phone is then able to replace a wallet-full of such cards, and accompanying software on the phone lets you check the balance on your rail pass, for example. This is in fact commonplace in Japan, where thousands of people routinely use their mobile phones as their railway tickets, and renew their tickets right on their phones, using the phone’s mobile-internet connection. For a while this was easily the biggest mobile-commerce application on Earth.

All the nuts and bolts have been worked out, in other words. But the technology is still stuck in the starting blocks. The banks were pushing contactless credit cards quite hard in some parts of the world a couple of years ago, but the financial crisis has understandably distracted them. Handset-makers such as Nokia have also produced handsets with NFC support, but if only one or two phones in the line-up support NFC, that’s not enough phones to encourage retailers, railway companies and so on to adopt the technology.

If Apple announced that every iPhone would henceforth support NFC, however, then the picture would change overnight. There would be a flood of apps to support the emulation of various contactless cards. Within a few months there would be a critical mass of tech-savvy users willing to adopt the technology, giving retailers, banks and other companies the confidence to pile in. And iPhones are quite expensive handsets, with relatively price-insensitive buyers, so nobody would really notice the small added cost of an NFC chip.

Understandably, given that it makes perfect sense, there have been persistent rumours that Apple might be thinking of putting NFC chips into the next iPhone, which is due to be announced on June 7th. Tellingly, the company has filed a couple of NFC-related patents. But the next-generation iPhones that have escaped into the wild in recent weeks, and have then been taken apart, do not seem to include the technology. Moreover, Apple has approved an iPhone case, made by DeviceFidelity, that includes an NFC chip connected via the dock connector, and which then talks to an app on the phone. I doubt Apple would do that if it were about to announce direct support for the technology itself; but perhaps it’s a bluff to divert attention from plans to do just that. On balance, though, I don’t expect the new fourth-generation iPhone HD, or whatever it is called, to include NFC. But perhaps the fifth-generation one will?

UPDATE: So, Steve Jobs has spoken, and there is no NFC in the new iPhone 4. It does have a better camera, a better screen, a faster chip with better battery life, a clever new case design, HD video recording (uh-oh, Flip), iBooks support and so forth. But NFC will just have to wait until iPhone 5, it seems.

(Cross-posted from Babbage, The Economist’s technology blog)

THE first time I encountered a “dual SIM” mobile phone was in Uganda last year, when I was researching a special report on mobile phones in the developing world. It was branded NOKLA, and was a very faithful copy of a Nokia handset, with two additions: the garish LEDs built into the sides of the phone and the fact that it supported two SIM cards, allowing its owner to use two networks seamlessly with a single device.

The use of multiple SIMs is widespread in the developing world, because it saves money. It’s usually cheaper to call someone on the same network than someone on a different network, and there are lots of special deals offering discounts at particular times of day, though these vary from operator to operator. To get the best available deal for a particular call, it makes sense to own lots of SIMs, and to swap the appropriate SIM into your handset when needed. The problem, of course, is that you then end up with lots of phone numbers, and you really only want to give out a single number to other people. So in practice you use one SIM most of the time, and occasionally swap in another one. This is also common practice among people who travel a lot, and among cost-conscious users, such as teenagers, in the developed world.

For people juggling multiple SIMs, a dual-SIM phone therefore has obvious appeal: it allows your phone to act, in effect, as two phones at once, sitting on two networks, and may even be able to switch between active calls on different networks. In the rich world, however, most handsets are sold by operators, and operators are not keen on dual-SIM handsets. They would rather not admit that rival operators exist, and they make it difficult to switch SIMs by, for example, locking handsets so that they only work on a particular network. In the poor world, however, handsets and SIMs are often sold separately, so dual-SIM handsets are more widely available.

So Nokia’s announcement today of two new dual-SIM handsets, the C1 and C2, is interesting. There’s clearly strong demand for this feature in the developing world, and at the moment much of it is being met by makers of knock-off handsets. That means Nokia is missing out on sales, but it can only compete with the knock-off handsets if it offers a dual-SIM feature too. (Samsung, for its part, has made dual-SIM handsets for some time.) The C1 handset, which will cost about 30 euros when it goes on sale in the third quarter of this year, has two SIM slots, only one of which is active at a time; but you can switch between them by holding down a single key. The phone also has a standby time of six weeks, a built-in FM radio and a torch. The C2 handset (pictured above, and due in the fourth quarter) supports two active SIMs at once: hence the dual signal-strength indicators. You can take a call on one network and put it on hold when a call on the other network comes through. There are, in other words, two lots of radio circuitry in the phone, which helps explain why it costs a bit more (45 euros).

Why is Nokia doing this now? In part, I think, it’s because it doesn’t want to be out-innovated by pirates. But it’s also because Chinese vendors that start out producing knock-offs quickly learn the ropes and then start to produce increasingly competitive products. We’ve seen this happen with network gear and cars in the past decade, and now it’s starting to happen with phones, as can be seen from Gartner’s latest figures for handset market share, released on May 19th. As usual Nokia is on top, followed by Samsung and LG; RIM and Apple are coming up the field fast, and Motorola and Sony Ericsson are imploding. That accounts for seven of the top ten handset-makers; but look at the other three. They are all Chinese. You’ve probably heard of Huawei and ZTE, but GFive? Who are they?

GFive, it turns out, is a handset-maker based in Hong Kong. It describes itself as “the most elegant mobile brand from China” and says it is backed by a syndicate of Chinese factories which have collectively produced over 100m handsets to date. GFive’s phones are based on chipsets made by MediaTek of Taiwan, and many of them support dual SIMs. As we pointed out in a piece last August:

MediaTek’s technology has revolutionised the manufacture of mobile phones in mainland China. A handset firm there used to need 20m yuan ($2.9m), 100 engineers and at least nine months to bring a product to market. Now 500,000 yuan, ten engineers and three months will do. As a result, Chinese handset-makers now number in the hundreds. Many churn out shanzhai (or “bandit”) phones: knock-offs of established brands, labelled “Nckia” or “Sumsung”. Others are true innovators, making handsets with big speakers or with two slots for SIM cards, so that one handset can be called on two different numbers.

According to Carolina Milanesi of Gartner, GFive is now the number three handset-maker in India, and it also sells its phones in South-East Asia, the Middle East, Africa and Latin America. Rather than selling through operators, GFive sells its handsets through very small retailers—the sort of corner shops where people commonly buy top-up vouchers for their phones. The firm’s website shows a wide range of handsets; and some of them, it must be said, look very similar to phones from established vendors. But as Chinese firms have demonstrated in other areas, and Japanese firms demonstrated before them, imitation can give way to innovation with surprising speed. And when it comes to dual-SIM handsets, the Chinese upstarts have blazed a trail that Nokia, the industry’s giant, is only now rather belatedly following.

(Cross-posted from Babbage, The Economist’s technology blog)

IT SEEMS like a curious question to ask: should links be deliberately excluded from online articles, essays and blog posts? The link, after all, is the very currency of the web. But that is the question Nicholas Carr poses in an intriguing blog post. Needless to say, his post does not contain links, at least not in the main text; instead they are listed at the end, like footnotes. Why? Because, Mr Carr argues, links lead us astray:

Links are wonderful conveniences, as we all know (from clicking on them compulsively day in and day out). But they’re also distractions. Sometimes, they’re big distractions – we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head. Even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters. People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form. The more links in a piece of writing, the bigger the hit on comprehension.

This is part of Mr Carr’s broader argument, detailed in his new book “The Shallows”, about how the internet is changing the way people think. The hyperlink, he says, is “just one element among many—including multimedia, interruptions, multitasking, jerky eye movements, divided attention, extraneous decision making, even social anxiety—that tend to promote hurried, distracted, and superficial thinking online.” Laura Miller, who reviewed the book at Salon, took Mr Carr’s words to heart and put hyperlinks at the bottom, inspiring Mr Carr to do the same. And in a similar vein, he notes, a blog published by the National Core for Neuroethics at the University of British Columbia is carrying out an experiment in which hyperlinks will be excluded from the text of blog posts, and listed at the end instead. The bloggers in question have for their part been inspired by the writing of Olivia Judson, formerly of this parish, at the New York Times; she also lists her hyperlinks at the end, rather like the references in a scientific paper.

Mr Carr’s suggestion that this is not a bad idea has prompted responses from several web gurus: Jay Rosen at NYU has accused him of wanting to “unbuild the web”; Jeff Jarvis claims that Mr Carr’s post is, ironically, linkbait (insert joke about pots, kettles and the colour black here); and Mathew Ingram gives a robust defence of the link:

I think not including links (which a surprising number of web writers still don’t) is in many cases a sign of intellectual cowardice. What it says is that the writer is unprepared to have his or her ideas tested by comparing them to anyone else’s, and is hoping that no one will notice. In other cases, it’s a sign of intellectual arrogance — a sign that the writer believes these ideas sprang fully formed from his or her brain, like Athena from Zeus’s forehead, and have no link to anything that another person might have thought or written. Either way, getting rid of links is a failure on the writer’s part.

Fair enough. But I have to confess that I have some sympathy for Mr Carr’s view. I don’t mind piles of links in sidebars, but I find links in text can be irritating if there are too many of them. Of course, it makes sense to link to sources, but links also invite the reader to go away and read something else, and they can imply that the item you are reading can only be understood by reading all the references. At The Economist we do our best to write articles that are self-contained and make sense without the need to refer to other sources, which leads to some characteristic Economist style quirks, such as saying “Ford, a carmaker”. (See? We saved you the trouble of having to ask Google what the company does.) When those articles are published online, there are very rarely hyperlinks in the body of the text.

Admittedly, the advent of browsers with tabs means a link is less of an invitation to go elsewhere than it used to be, because you can open up lots of background tabs while you read without interruption. But I wonder what proportion of the web population actually does this. Anyway, having chortled (via Twitter) at Ms Miller’s idea of a list of links, footnote-like, at the end of the article, I feel the least I can do is give it a try. So here are the links. What do you think? Is this approach less distracting? Should we include more links in the text of our articles? Are we being arrogant, or cowardly, by not doing so?

Nick Carr’s post on “delinkification”
Laura Miller’s review of “The Shallows”
Mathew Ingram defends linking

(Cross-posted from Babbage, The Economist’s technology blog)

AS YOU may have heard, a certain Apple device goes on sale outside America for the first time on May 28th. Does the advent of multifunctional, colour tablets like the iPad spell doom for those rather old-fashioned devices: e-readers with black-and-white E Ink screens, like the Kindle and the Sony Reader? Not at all, insists Steve Haber, the head of Sony’s e-reader business. Anything that draws attention to the idea of what he calls “digital reading” will benefit the entire industry and expand the market overall, he says. Eighteen months ago, he points out, he had to explain to people what e-books and e-readers were. Since then the Kindle, and now the iPad, have brought the idea of reading on a tablet-like device to a much wider audience, and that can only be good for sales of e-readers of all kinds.

Of course, you’d expect him to say that: he has to defend his turf. But he may have a point. Some people will want an all-singing, all-dancing iPad; others may prefer a simpler, cheaper device dedicated to reading. Sony’s plan, according to Mr Haber, is to focus on such dedicated (he prefers the term “immersive”) reading devices. “Companies will do different things,” he says. “Our focus is immersive reading, so that you forget you have a device in your hand.”

Mr Haber does not want to attack the iPad by name, but there are implied criticisms of it in his defence of the merits of dedicated, E Ink devices. The fact that you can’t watch movies or check Twitter on a Sony Reader becomes a feature, not a bug, because it means you won’t be distracted. Mr Haber says Sony’s readers are “designed to be lightweight, to fit into your hand, compared with a device that may be heavier or larger”. We all know which device he means: many people’s first reaction to the iPad is surprise at how heavy it is. Not everyone likes backlit LCD displays; to some users it feels like “a flashlight in the eye”, says Mr Haber, and they may well prefer an E Ink display. (He didn’t mention it, but a recent study suggests that staring at the glowing screens of computers and other devices late at night can interfere with circadian rhythms and disrupt sleep patterns.) And Sony’s model when it comes to selling e-books is not “one store to one device, but access in general”, unlike Apple’s more integrated (ie, closed) approach — though, it must be said, the availability of multiple e-reader applications on the iPad means it is arguably the most versatile e-reader around. Overall, Mr Haber concludes, the fundamental difference between dedicated e-readers and “multifunctional backlit LCD devices” (ie, iPads) is what he calls “cosiness” — and a larger, heavier device is “not cosy”.

Whether you regard an iPad as cosier than a Sony Reader, or vice versa, is a matter of taste. (The iPad is a great couch-surfing machine, which counts as cosy in my book.) But it is entirely possible that as in other product categories, such as cars and mobile phones, different buyers will want different things. There has been no convergence on a single “best” design for cars or mobile phones; instead there are lots of products aimed at particular types of users with different needs and budgets. In addition to dedicated e-readers with black-and-white screens on the one hand, and general-purpose tablet computers on the other, there may be room for products aimed specifically at business people (Plastic Logic is trying this), students, children or old people. (It will be interesting to see what Amazon does next with the Kindle; the fact that it is encouraging people to write apps for it suggests that the firm plans to compete more directly with the iPad and the forthcoming tablets based on Google’s Android operating system. The next Kindle is rumoured to have a colour screen, perhaps using Pixel Qi or Mirasol technology, though Amazon’s boss, Jeff Bezos, played down such talk yesterday.)

The demographics of e-readers are unusual. Anecdotally, I’ve been struck by the apparent popularity of the Kindle among the over-50s, who are not usually early adopters. But they are often avid readers. Mr Haber says buyers of Sony’s e-readers are disproportionately likely to be over 40 and female. “This was the age-group that was leading the shift,” he says. “It’s great because it’s new technology, and it goes to show that you don’t have to be an 18-year-old male to like technology.” But the signs are that e-readers are now appealing to more traditional (ie, younger and male) buyers. No doubt that is due, in part, to the iPad effect. But it is still unclear whether the iPad will boost sales of e-readers more broadly and, if it does, whether buyers will favour dedicated devices as the “cosier” option, as Mr Haber contends.

(Cross-posted from Babbage, The Economist’s technology blog)