Siri on iOS 11 still has a long way to go

What has Apple really done to improve Siri? To be honest, I’m not sure. After a full day of testing since Tuesday’s iOS 11 release, mostly on a 9.7-inch iPad and an iPhone 7 Plus, it’s obvious Siri is not a major priority for the company, even if some of the features have improved. What has changed is a good sign; what hasn’t leaves Siri behind the times.

First, this is not a complete overview of Siri. My intention is to find out whether the new version — one that can now talk more like a human and translate phrases — is really ready to take on the big guns of Amazon Alexa and Google Home (running the Assistant bot). I’m not including Microsoft Cortana in this analysis, because, to be frank, Cortana is behind even Siri when it comes to natural language processing and handling more complex queries.

And it’s clear we don’t quite know everything about Siri or what Apple has planned for the bot. Consider how hard it must have been to completely revamp the speaking voice to make it sound more natural, with pauses and a nice, steady flow. I tested this new voice by having Siri read a few email headers, and it was like talking to a friend. We can finally say there won’t be any more movies where Siri plays the robotic-sounding character. She (or he) now sounds so much like a human that it’s hard to tell this is a robot at all; the voice no longer sounds distinctly synthetic.

As for translation — that works, but it’s something Google has offered with the Assistant for some time. I tried translating several phrases from English to German and they all worked fine. How often does that come up in an everyday routine for me, though? Not often.

So, that left me with a few tests to see if Siri is smarter and understands context. For starters, Siri does know a little more about me. The bot can recommend news stories based on what I’ve read before. I don’t have access to Apple Music right now, but if I did, the bot would now keep track of the music I like and can play my favorite songs. But that’s not too astounding.

Siri still shows a lot of web pages. It doesn’t really know how to converse. Alexa and Google Home (with the Assistant bot) both do a better job of holding an actual dialogue.

Here are a few examples:

When I asked who is the current president of the United States, Siri answered correctly. And when I asked how old he is, that worked — the bot understood the context. However, when I asked the bot to tell me facts about Trump, Siri just showed me web search results. That seems to happen a lot still. Alexa not only nailed the context, but when I asked about an interesting fact, Alexa also read one of Trump’s recent tweets.

Siri didn’t really try to parse out any meaning. When I said “Play my favorite type of music,” Siri thought I wanted to play a favorites mix on iTunes. On the other hand, Alexa played music by The Boxer Rebellion, which is likely because I listen to that artist a lot.

Next, I said, “Do more people watch basketball or football?” None of the bots in my office helped with that one. Maybe it is just too esoteric. Siri showed me the schedule for the NBA, which is not in season. Alexa and Google did not know the answer at all. We’re in that strange period where bots don’t really know how to deal with any complexity.

That said, Google Assistant is far better at context than Siri. I’ve had conversations about cities and sports teams before, and it just works better with Google. For example, when I asked Siri about the population of Las Vegas, the bot gave me the right answer. But only Google understood what I meant when I asked about the square mileage of that area. (Siri offered to do some math.)

Context is one thing — conversation is another. I’m guessing Google will go even further next month when it announces the Pixel 2 smartphone and likely shows more bot improvements.

China’s Baidu launches $1.5 billion fund to drive its autonomous car efforts

(Reuters) — Chinese search engine Baidu announced a 10 billion yuan ($1.52 billion) autonomous driving fund on Thursday as part of a wider plan to speed up its technical development and compete with U.S. rivals.

The “Apollo Fund” will invest in 100 autonomous driving projects over the next three years, Baidu said in a statement.

The fund’s launch coincides with the release of Apollo 1.5, the second generation of the company’s open-source autonomous vehicle software.

After years of internal development, Baidu in April decided to open its autonomous driving technology to third parties, a move it hopes will accelerate development and help it compete with U.S. rivals such as Tesla and Waymo, the self-driving unit of Google parent Alphabet.

In the latest update to its platform, Baidu says partners can access new obstacle perception technology and high-definition maps, among other features.

It comes amid a wider reshuffle of Baidu’s corporate strategy as it looks for new profit streams outside its core search business, which lost a large chunk of ad revenue in 2016 following strict new government regulations on medical advertising.

Baidu’s Apollo project – named after NASA’s Apollo moon-landing program – aims to create technology for completely autonomous cars, which it says will be ready for city roads in China by 2020.

It now has 70 partners across several fields in the auto industry, up from 50 in July, it says. Existing partners include chipmaker Nvidia and mapping service TomTom.

Despite the rapid growth of its partner ecosystem, Baidu has faced challenges negotiating local Chinese regulations, which have previously stopped the company from testing on highways.

In July, Beijing police said they were investigating whether the company had broken city traffic rules by testing a driverless car on public roads as part of a demonstration for a press event.

Encrypted email service ProtonMail expands beyond English into 7 new languages

Encrypted email service ProtonMail has come a long way since its global launch back in March 2016, introducing two-factor authentication (2FA), Tor support, and even rolling out a standalone VPN product.

But a major gap in ProtonMail’s “global” credentials so far has been its lack of language support. The service offered just an English interface at first, but back in January the company announced a new crowdsourced translation program as it sought volunteers to help expand the service beyond English.

Back in June, the company quietly rolled out French to the web interface, before adding German, Russian, Ukrainian, Spanish, Dutch, and Polish. But that clearly didn’t go far enough for its more than two million users around the world, so a few weeks back ProtonMail added the same seven languages to the Android app, and now the iOS incarnation is getting the same attention.

Above: ProtonMail – iOS: language selection

Founded out of CERN in 2013, ProtonMail uses client-side encryption, meaning all data is encrypted before it ever reaches the company’s servers. With Donald Trump now in power, and high-profile data leaks permeating the news, there has been a surge in sign-ups for online privacy tools, such as VPNs and encrypted messaging services.
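For those wondering what that actually means, here is a minimal sketch of the client-side encryption principle in Python. ProtonMail itself builds on OpenPGP; the Fernet cipher and the sample message below are stand-ins used purely for brevity.

```python
# Minimal sketch of client-side encryption: the plaintext is encrypted
# locally, so the server only ever stores ciphertext.
# (ProtonMail actually uses OpenPGP; Fernet is just for illustration.)
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the key stays on the client
cipher = Fernet(key)

message = b"Draft attached - please review."
ciphertext = cipher.encrypt(message)   # all the server would ever see

# Only a client holding the key can recover the plaintext.
assert cipher.decrypt(ciphertext) == message
```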

But for ProtonMail to reach true scale, it does need to cater to more languages — which is an expensive endeavor. That’s why the company followed in the footsteps of other firms by asking its loyal user base to translate its interface into additional languages. Facebook was among the first high-profile tech companies to sidestep professional translators in favor of crowdsourcing.

ProtonMail said that 3,000 people originally applied for its translation program, and around 200 are still active across various languages. Languages up next are Chinese, Italian, Czech, Portuguese, and Romanian, among others.

HTC stock to resume trading on September 22

HTC is to resume trading on the Taiwan Stock Exchange (TWSE) tomorrow, according to an announcement issued by the TWSE. Trading of HTC shares was temporarily halted before today’s announcement of the company’s billion-dollar deal with Google.

The duo revealed that Google would acquire HTC’s “Powered by HTC” team for $1.1 billion, effectively bringing the Taiwanese tech titan’s mobile phone team under Google’s wing as the latter looks to bolster its hardware ambitions. In a separate deal, Google also gained a “non-exclusive” license for HTC intellectual property.

Rumors of some form of acquisition had been circulating for weeks, and the biggest sign that a deal was imminent came yesterday, when HTC revealed it would halt trading in its shares today.

HTC was founded in 1997, and its shares hit a peak of around NT$1,300 ($43 USD) in 2011, but they’ve been more or less in free fall since and now sit at just under NT$70 ($2.30 USD). Though HTC plans to bring at least one more branded smartphone to market, it will now be able to focus on other facets of its business, which include virtual reality (VR), augmented reality (AR), Internet of Things (IoT), and artificial intelligence (AI).

Apple’s iPad with iOS 11 takes us one step closer to killing laptops

Thanks to iOS 11, iPads are one step closer to killing off laptops. It’s only a matter of time before that happens, but until then, you do have to know what you are getting yourself into. For now, the best time to treat Apple’s iPad as a laptop is when you are in an office. As I found out, it’s not while you’re at a coffee shop.

On the basic model, the 9.7-inch iPad, I installed iOS 11 and started experimenting with the latest features, which are mostly designed to boost productivity (at least on the iPad). There’s a new dock, which makes it much easier to find commonly used apps. (Siri even plays a role here, tracking the most commonly used apps and storing them on the right side of the dock.) There’s a cool multitasking view that lets you drag and drop elements from one section of the screen to another using Split View. In one case, I dragged a YouTube video over to iMessage and it inserted the video into my chat. That was cool.

I really like the Files app. I’ve been wanting a way to manage my files — mostly Word docs and photos — on the iPad because, as a journalist, I’m always trying to keep my projects straight. It was great being able to search for a slideshow by name and see exactly where it is located. The Files app also searches on services like Box and Dropbox. I had to send back an iPad Pro I was using for testing quite a while ago, but I’ve also noticed Apple keeps improving the Pencil. In iOS 11, you can tap on the lock screen and start jotting down a note.

So what is not quite there yet? What did I still miss while I was tooling around at the coffee shop?

For starters, you can improve the operating system on an iPad, but until the apps become much more powerful, you’ll probably still need a laptop. I liked managing my photo files on the iPad, but when an editor asked me to do a quick adjustment to an image of a car, taking out what looked like a smudge, I really wished I had the full desktop version of Photoshop. And there’s one photo-related step that is still not quite right on the iPad — it was tricky to adjust the aspect ratio and pixel dimensions of an image. I do use web apps like Pixlr Editor in Chrome, and that works in a pinch, but I like Photoshop.

What about documents? The Files app makes it easier to manage my files, although most of them are on Google Drive and that app has been around for a while. And the Google Docs app works fine for me. However, there’s still a workflow problem. For starters, when I write using a laptop, I usually open a ton of tabs with new documents. Again, that’s possible on the iPad, but a mouse works better when you have six tabs up on the screen. With my finger on the 9.7-inch screen, it felt like something was still wrong — like I was on a tablet, not a laptop.

There’s a litany of other problems, and Apple knows all about them, I’m sure. Even though the A11 Bionic processor is supposedly faster than a MacBook, it’s not like I’m going to be playing Ark: Survival Evolved on the iPad anytime soon. Video editing is functional using apps like iMovie, but let’s get real here — you need a mouse, more graphics power, and a better workflow. On a Windows laptop, I can edit an entire 10-minute video using a lot of drag-and-drop, fine tweaks to clips, and layered audio channels. That just won’t happen on an iPad in the near future.

Surprisingly, even with these caveats, the iPad 9.7 with iOS 11 seemed like a big jump forward. I could see myself doing some drag-and-drop on a plane, typing or even dictating documents in a hotel room, and making use of the new dock to access my apps faster. The iPad is still mostly a “viewer” for movies and books, but it’s also great for email and documents. In the end though, I doubt I will be ready to leave the laptop behind unless the apps start working more like their desktop counterparts.

Google announces ‘zero-touch’ preconfigured Android device deployment for the enterprise

Setting up a new smartphone out of the box isn’t exactly rocket science these days. But for businesses with hundreds or thousands of employees to cater to, deploying and configuring devices across an entire organization is a little less straightforward.

With that in mind, Google has announced a new “zero-touch” enrollment service to make it easier for companies to preconfigure devices so they’re good to go when they land in employees’ hands.

Google has long targeted organizations with an enterprise-grade Android incarnation that sports special versions of the Google Play store, security tools, and centralized management dashboards. Different businesses will have different configuration needs for their devices, which could vary on a department-by-department or even employee-by-employee basis, and getting everyone set up can be time-consuming. This is where zero-touch helps — the devices ship with nearly all the settings established in advance.
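To make that concrete, here is a rough sketch — not Google’s published schema — of the kind of provisioning configuration an admin might attach to devices in a zero-touch console, expressed as a Python dict. The extras keys mirror Android’s standard DevicePolicyManager provisioning extras; the DPC component, download URL, checksum, and bundle contents are all placeholders.

```python
import json

# Hypothetical zero-touch provisioning configuration. The keys are
# Android's standard provisioning extras; every value is a placeholder.
provisioning_config = {
    "android.app.extra.PROVISIONING_DEVICE_ADMIN_COMPONENT_NAME":
        "com.example.emm/.DeviceAdminReceiver",          # hypothetical DPC
    "android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION":
        "https://example.com/dpc.apk",                   # placeholder URL
    "android.app.extra.PROVISIONING_DEVICE_ADMIN_SIGNATURE_CHECKSUM":
        "<base64-url-encoded-checksum>",                 # placeholder
    "android.app.extra.PROVISIONING_ADMIN_EXTRAS_BUNDLE": {
        "server_url": "https://emm.example.com",         # hypothetical EMM
        "enrollment_token": "<token>",
    },
}

print(json.dumps(provisioning_config, indent=2))
```

The idea, roughly, is that the device pulls its assigned configuration during first boot, installs the company’s device policy controller, and lands in the employee’s hands already managed.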

It’s pretty much the same as Apple’s Device Enrollment Program, which rolled out back in 2014 and also promises zero-touch configuration for IT.

“For administrators, zero-touch enrollment removes the need for users to configure their devices manually and ensures that devices always have corporate policies in place,” explained Google product manager James Nugent. “Support is also much easier, with no extra steps for end-users; they just sign in and get access to their work apps and data.”

It’s worth noting here that this service does require carrier opt-in. And to kick things off, zero-touch is only available on Google’s own Pixel phones through Verizon in the U.S., from today. Elsewhere, other carriers planning to offer zero-touch include AT&T, Sprint, and T-Mobile in the U.S.; BT in the U.K.; Deutsche Telekom in Germany; and Softbank and Telstra in Asia-Pacific.

A Pixel-only offering would be fairly limited, of course, which is why Google said that it’s working with a bunch of third-party Android device manufacturers to expand zero-touch. These include Samsung, HTC (naturally), Huawei, Sony, LG, HMD Global (Nokia), BlackBerry, and Motorola, among others.

The first third-party devices to support zero-touch will be the Huawei Mate 10 and Sony Xperia XZ1 and XZ1 Compact, with support arriving “in the coming weeks,” Nugent confirmed.

AI needs a human touch to function at its highest level

There is an old saying that speaks to the current state of AI: “To someone holding a hammer, everything looks like a nail.” As companies, governments, and organizations scramble to be in the vanguard of this new generation of artificial intelligence, they are trying their best to persuade everyone that all of our human shortcomings will be remedied by this technological evolution. But what exactly will it solve? Machine learning is an incredibly powerful tool, but, like any other tool, it requires a clear understanding of the problems to be solved in the first place — especially when those problems involve real humans.

Human versus machine intelligence

There is an oft-cited bit from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy series in which an omniscient computer is asked for the ultimate answer to life and the universe. After 7.5 million years, it provides its answer: the number 42. The computer explains to the discombobulated beings who built it that the answer appears meaningless only because they never understood the question they wanted answered.

What is important is identifying the questions machine learning is well-tailored to answer, the questions it struggles with, and perhaps most importantly, how the paradigmatic shift in AI frameworks is impacting the relationship between humans, their data, and the world it describes. Using neural nets has allowed machines to become uncannily accurate at distinguishing idiosyncrasies in massive datasets — but at the cost of truly understanding what they know.

In his Pulitzer Prize-winning book, Gödel, Escher, Bach: an Eternal Golden Braid, Douglas Hofstadter explores the themes of intelligence. He contemplates the idea that intelligence is built upon tangled layers of “strange loops,” a Möbius strip of hierarchical, abstracted levels that paradoxically wind up where they started out. He believes that intelligence is an emergent property built on self-referential layers of logic and abstractions.

This is the wonder that neural nets have achieved — a multi-layered mesh of nodes and weights that pass information from one tier to the next in a digital reflection of the human brain. However, there is one important rule of thumb in artificial intelligence: The more difficult it is for a human to interpret and process something, the easier it is for a machine, and vice versa.
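As a toy illustration of that layered mesh — nothing more than a sketch — here is a two-layer forward pass in Python: information flows from one tier to the next, and the network’s “knowledge” is nothing but the numbers in its weight matrices.

```python
# Toy two-layer feedforward pass: nodes, weights, and information
# flowing tier to tier. The sizes and values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

x = rng.normal(size=4)         # input "nodes"
W1 = rng.normal(size=(8, 4))   # weights: input -> hidden tier
W2 = rng.normal(size=(2, 8))   # weights: hidden -> output tier

hidden = relu(W1 @ x)          # each tier transforms and passes on
output = W2 @ hidden

# Everything the network "knows" lives in W1 and W2: a tangle of
# numbers, not discrete principles a human can read off.
print(output)
```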

Calculating digits of π, encrypting messages using unimaginably huge prime numbers, and remembering a bottomless Tartarean abyss of information can occur within the blink of an eye using a computer, which manages to outperform millennia of human calculations. And yet humans can recognize their friend’s face in an embarrassing baby photo, identify painters based on brush strokes, and make sense of overly verbose and ruminating blog entries. Machine learning has made vast improvements in these domains, but it is no wonder that, as the brain-inspired architecture of neural nets brings machines up to parity with human cognition (and in some cases beyond it), machines are beginning to suffer some of the same problems humans do.

Nature or nurture?

By design, we are unable to know what neural nets have learned, and instead we often keep feeding the system more data until we like what we see. Worse yet, the knowledge the system has “learned” is not a set of discrete principles and theories, but rather is contained in a vast network that is incomprehensible to humans. While Hofstadter might have contemplated artificial intelligence as a reflection of human intelligence, modern AI architects tend not to share that preoccupation. Consequently, modern neural nets, while highly accurate, do not give us any understanding of the world. In fact, there are several well-publicized instances where AI went awry in socially unacceptable ways. Within a day of going live, Microsoft’s AI chatbot Tay learned from Twitter users how to craft misogynistic, racist, and transphobic tweets. Did Tay learn a conceptual sociohistorical theory of gender or race? I would argue not.

Why AI can’t be left unattended

Paradoxically, even if we assume that the purpose of an AI isn’t to understand human concepts at all, these concepts often materialize anyway. As another example of misguided AI, an algorithm was used to predict the likelihood of someone committing future crimes. Statistically based software models learned racial biases, assigning higher risk scores to black defendants with little or no criminal record than to white defendants with extensive histories of violent crime. Facial recognition software is also known to have its biases, to the point that a Nikon camera was unable to determine whether a Taiwanese-American woman had her eyes open. Machine learning is only as good as the data it is built upon, and when that data is subject to human biases, AI systems inherit those biases. Machines are effective at learning from data but, unlike humans, have little to no proficiency at accounting for all the things they don’t know, the things missing from the data. This is why even Facebook, which is able to devote massive AI resources to its efforts to eliminate terrorist posts, concedes that the cleanup process ultimately depends on human moderators. We should be rightfully anxious about firing up an AI whose knowledge is unknowable to us and leaving it to simmer unattended.
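A minimal sketch of that mechanism, using synthetic data and scikit-learn: when the training labels are correlated with a protected attribute, the model dutifully learns to lean on that attribute. Every variable and number here is invented for illustration.

```python
# Sketch: a model trained on biased labels inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, size=n)   # e.g. a demographic flag
merit = rng.normal(size=n)               # the legitimate signal

# Biased historical labels: the outcome depends on the protected
# attribute, not just on merit. This is what biased data looks like.
label = (merit + 1.5 * protected + rng.normal(scale=0.5, size=n)) > 0.75

X = np.column_stack([merit, protected])
model = LogisticRegression().fit(X, label)

# The protected attribute receives real weight: the bias in the data
# is now baked into every "risk score" the model produces.
print(model.coef_)   # second coefficient is far from zero
```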

The AI community cannot be haphazard about throwing open the AI gates. Machine learning works best when the stakeholders’ problems and goals are clearly identified, allowing us to chart an appropriate course of action. Treating everything as a nail is likely to waste resources, erode users’ trust, and ultimately lead to ethical dilemmas in AI development.

Mike Pham is a technical product manager at Narrative Science, a company that makes advanced natural language generation (Advanced NLG) for the enterprise.

Amazon’s new smart goggles might make sense, if Alexa is everywhere

I’m not a fan of smart goggles, the awkward device Google tried to pawn off on us a few years ago at a high price. Google Glass failed because it didn’t really offer that much functionality, and Google never convinced anyone it made sense to wear glasses they didn’t need.

And yet news about Amazon making smart goggles caught my attention, because — while I never warmed up to the idea — I can see the benefit for anyone who already wears glasses anyway. This time around I could see myself jumping on board, with one major caveat.

Here it is: I would wear the glasses if Alexa could be found everywhere in other gadgets — like my garage door; my car; the adjustable desk in front of me; the autonomous mower in my yard; the phone I use; and my entire house, including everything from the dishwasher to the TV. Then it would be super easy to talk to my smart glasses rather than to my phone. My guess is that Amazon has the same kind of vision.

I can see how this might work. Sans phone, I’d wake up in the morning, put on my glasses as normal, and talk to Alexa. I might ask her to make a quick cup of coffee and open the blinds. Then, I’d ask her to read the news. I mean, isn’t this really what we want — access to Alexa with no phone or speaker around? (Although Alexa would know to activate a speaker when possible.) A HUD would show me what’s happening, or maybe Alexa would talk to me through the goggles, though I could see that getting annoying.

For this to make sense, it would absolutely have to work…and work all day.

I’m most interested in being able to use the glasses while driving. I won’t name any names here, but I tested Google Glass in a car once and it was pretty amazing to be able to see my speed in real time. What else could Alexa do for me? How about warning me when she notices I am not stopping for a car up ahead, or giving me directions based on the fact that she knows my schedule and knows where I need to be. She could even make sure the lights are on at home when I’m done for the day.

The “everywhere” concept is something I’ve mentioned before. Prevalence is key for the future of all AI. (Just don’t get too prevalent and take over our lives.) If Alexa is everywhere, I’ll be happy to wear smart glasses because the benefit will be amazing.

But here is where things get a little complex. In truth, I prefer to use the Google Assistant for many tasks, especially those related to questions. Google often knows the answer, which should not be that surprising given its history of finely tuned web search results. Wearing Alexa glasses makes perfect sense in a world dominated by Amazon, but the goggles would not be as helpful if I’m switching over to Siri or the Assistant (or Cortana).

Maybe the somewhat overlooked announcement about Amazon partnering with Microsoft for bot integrations — essentially, you can activate one bot using another — is a bigger step on the road to bot domination than we all think. Maybe Alexa everywhere will work by triggering other bots and connected services. Maybe one pair of glasses will rule the world.

New technology could allow multiple vaccines to be delivered in single jab

Multiple injections for vaccinations could become a thing of the past, according to scientists who have developed an approach for delivering many doses of different substances in just one jab.

The technology involves encapsulating drugs or vaccines within tiny particles made of biodegradable polymers. Depending on their makeup, these polymers break down at different points in time, releasing their contents into the body.

Researchers say the approach could allow multiple vaccines to be delivered at once and remove the need for booster jabs. It may also prove handy in treatments for allergies, diabetes and even cancer, where multiple injections are needed.

Researchers say it could prove valuable in developing countries, potentially allowing all childhood vaccines and their boosters to be given in one shot.

“One of the main limitations there is access to vaccines and the fact that you have to come back several times in order to get immunity from the pathogen,” said Ana Jaklenec, co-author of the research from the Massachusetts Institute of Technology. “A child or a baby is usually seen once, sometime around the birth time by some sort of healthcare worker.”

Writing in the journal Science, Jaklenec and colleagues describe how they developed the novel technique using biodegradable polymers already approved for use in humans.

The process, they reveal, involves making tiny silicone moulds – rather like ice cube trays – into which the biodegradable polymers are pressed and removed to form an array of box-like structures, each about 400 micrometres across. These are then filled with the required drug or vaccine and allowed to dry.

A lid, made from the same polymer, is then lined up on top of each micro-box and the system is briefly heated to seal it and prevent the drug or vaccine from leaking out.

When injected into the body, the boxes remain sealed until the polymer disintegrates – an event which occurs rapidly, with the timing dependent on the makeup of the polymer itself.

“What’s novel here is the sharpness of how quickly the drug releases from the particle and the fact there is no leakage at all from the particle until [then],” said Jaklenec.

To test the approach, the team injected mice with microparticles made from one of three different polymers, each filled with a fluorescent substance. Using imaging techniques, the fluorescent substance was seen to be released at about nine days, 20 days or 41 days, depending on the polymer used.

The team also produced microparticles filled with a polio vaccine and exposed them to an antibody test to see if the vaccine’s potency was affected by the heat sealing: no such problem was detected.

Finally, mice were injected with two sets of microparticles made from different polymers – one designed to break down after about one week and the other after about five weeks – with both containing a protein found in egg white. The animals’ immune response was tracked for 16 weeks.

The results reveal that the filled microparticles triggered a response greater than two regular injections of the same dose of protein spaced four weeks apart. Indeed, the response was on a par with that from two regular injections each with twice the dose – probably down to the microparticles themselves boosting the immune response.

The team say the approach could have myriad applications in medicine and beyond. “You could use pH-sensitive materials [or] you can fill with any type of drug, or therapeutic or sensing drug, so we think it has a lot more application than just vaccines,” said Jaklenec.

But, she added, challenges remain, not least that vaccines are normally stored in refrigerators: “[We] have to stabilise all of these vaccines in the body for a long period of time at elevated temperature,” she said.

Andrew Pollard, professor of paediatric infection and immunity at the University of Oxford, was optimistic, although he noted it was early days.

“Technologies which allow slow or timed release of a dose and thus reduce the ‘needle burden’ of an immunisation programme without compromising protection would be welcomed by healthcare workers, parents and their offspring,” he said.

David Goldblatt, professor of vaccinology and immunology at University College London, said the approach was sophisticated.

But, he warned, hurdles remain, noting that it removed the chance to modify vaccines between doses and that there was no way to adjust the timing of vaccine release after injection. “We prefer to avoid immunising when you might have an active viral infection,” he said. “[What happens] if a child has malaria on the day that the [vaccine dose] is released automatically?”

Nevertheless, he said, the approach could prove revolutionary in helping children in developing countries to receive adequate vaccination. “It could be a game changer for that,” he said.

New Apple Watch that makes calls turns comic book fantasy into reality

CUPERTINO: More than two years after releasing the Apple Watch, Apple Inc has finally been able to replicate 1940s comic strip technology, an advance that analysts say will spur sales.

The Series 3 of the Apple Watch, released on Tuesday along with the much-anticipated iPhone X, features wireless LTE connectivity. That means customers will be able to make phone calls or send text messages from the watch without needing to have an iPhone nearby, as they do with earlier models.

The ability to make calls with a wristwatch has captured the imagination of tech enthusiasts at least since it was prominently featured in Dick Tracy, the comic about a private detective who, starting in 1946, used calls from his wrist to help bust bad guys.

“This has been our vision from the beginning,” Chief Operating Officer Jeff Williams said at the launch event. “Now you can go for a run with just your watch and still be connected. It’s really nice to know you can be reached if needed.”

To be sure, Samsung Electronics Co Ltd has sold smart watches with mobile data connectivity since 2014, but the first devices were bulky and suffered from poor battery life because the data connection consumed extra power. They also required a separate phone number.

Apple claims its new Series 3, on the other hand, will have up to 18 hours of battery life and is just a fraction of a millimetre thicker than its previous Series 2. And it will have the same phone number as a customer’s iPhone, which will still be required for the initial set-up of the watch.

Apple said that all four major US carriers will offer service for the watch, and AT&T Inc and T-Mobile US Inc both said it would cost an extra $10 a month.

Analysts generally believe the new connectivity could ignite sales, though there is little consensus as to how much.

At $399, the new Watch is only slightly more expensive than the previous model, the $329 Series 2, which introduced stand-alone GPS capability. That $70 extra buys much more useful capabilities — including the ability to stream music from Apple Music.

“The third time is the charm for the watch,” said Bob O’Donnell of Techanalysis Research.

What may hold some consumers back is the monthly recurring charge, which over time would far exceed the extra cost of the Series 3 over older watches (at $10 a month, the service surpasses the $70 hardware premium within seven months), said Brian Blau, an Apple analyst with Gartner. “Yes, you do have to pay for that extra data plan, but it sounds like the carriers are at least going to make it relatively easy to do,” Blau said.

Apple does not say how many Apple Watches it sells. Bernstein analyst Toni Sacconaghi believes Apple will sell 12 million watches in its fiscal 2017 and 14 million to 15 million in fiscal 2018. Gene Munster with Loup Ventures predicted a much bigger bump, to 26 million units in 2018.

Either way, Apple is putting new pressure on smartwatch rivals such as Fitbit Inc and Garmin Ltd, which would be hard-pressed from a technical and business standpoint to match Apple’s wireless features.

But the new Apple Watch still requires an iPhone, which Fitbit believes leaves it ample market room to sell wearable devices that work with all phones, not just iPhones.

“With Android comprising approximately 80 per cent of the global smartphone market, broad compatibility remains a core differentiator for Fitbit,” the company said in a statement to Reuters.

Garmin did not immediately respond to a request for comment outside normal business hours.

The Watch will remain a blip in Apple’s sales, which were $215 billion last year. But it may be taking its place as part of a family of products that Apple loyalists cannot do without — all by making a schoolboy fantasy from the 1940s into reality for the masses.