What has Apple really done to improve Siri? To be honest, I’m not sure. After testing it for a full day since Tuesday’s iOS 11 release, mostly on a 9.7-inch iPad and an iPhone 7 Plus, it’s obvious Siri is not a major priority for the company, even if some of the features are improved. What has changed is a good sign; what hasn’t leaves Siri behind the times.
First, this is not a complete overview of Siri. My intention is to find out whether the new version — one that can now talk more like a human and translate phrases — is really ready to take on the big guns of Amazon Alexa and Google Home (running the Assistant bot). I’m not including Microsoft Cortana in this analysis, because, to be frank, Cortana is behind even Siri when it comes to natural language processing and handling more complex queries.
And it’s clear we don’t quite know everything about Siri or what Apple has planned for the bot. Imagine how hard it was to completely revamp the speaking voice so that it sounds more natural, with pauses and a nice, steady flow. I tested this new voice by having Siri read a few email headers, and it was like I was talking to a friend. We can finally say there won’t be any more movies poking fun at Siri’s voice. She (or he) now sounds more like a human, and it’s hard to even tell that this is a robot. The voice no longer sounds distinctly robotic.
As for translation — that works, but it’s something Google has offered with the Assistant for some time. I tried translating several phrases from English to German, and they all worked fine. How often does that come up in my everyday routine, though? Not often.
So, that left me with a few tests to see if Siri is smarter and understands context. For starters, Siri does know a little more about me. The bot can recommend news stories based on what I’ve read before. I don’t have access to Apple Music right now, but if I did, the bot would now keep track of the music I like and can play my favorite songs. But that’s not too astounding.
Siri still shows a lot of web pages. It doesn’t really know how to converse. Alexa and Google Home (and the Assistant bot) both do a better job of actually carrying on a dialogue.
Here are a few examples:
When I asked who the current president of the United States is, Siri answered correctly. And when I asked how old he is, that worked — the bot understood the context. However, when I asked the bot to tell me facts about Trump, Siri just showed me web search results. That still happens a lot. Alexa not only nailed the context, but when I asked about an interesting fact, Alexa also read one of Trump’s recent tweets.
Siri didn’t really try to parse out any meaning. When I said “Play my favorite type of music,” Siri thought I wanted to play a favorites mix on iTunes. On the other hand, Alexa played music by The Boxer Rebellion, which is likely because I listen to that artist a lot.
Next, I said, “Do more people watch basketball or football?” None of the bots in my office helped with that one. Maybe it is just too esoteric. Siri showed me the schedule for the NBA, which is not in season. Alexa and Google did not know the answer at all. We’re in that strange period where bots don’t really know how to deal with complex queries.
That said, Google Assistant is far better at context than Siri. I’ve had conversations about cities and sports teams before, and it just works better with Google. For example, when I asked Siri about the population of Las Vegas, the bot gave me the right answer. But only Google understood what I meant when I asked about the square mileage of that area. (Siri offered to do some math.)
Context is one thing — conversation is another. I’m guessing Google will go even further next month when it announces the Pixel 2 smartphone and likely shows more bot improvements.