a16z Podcast: Apple and the Widgetification of Everything

Episode Date: June 14, 2016

The world's most valuable company, Apple, made a number of seemingly incremental announcements at its most recent annual developers conference (WWDC) -- that Apple Pay is coming to the web; that Siri... is being opened up to app developers; that iMessage will suggest emoji; and many other things. Underneath all these little feature tweaks, however, is a bigger story, argue a16z's Benedict Evans, Frank Chen, and Kyle Russell. It's a story about -ification: the "platformification" of apps available on the Apple operating system (they've turned Maps into a platform before even Google has); the "widgetification" of everything (using familiar interfaces to ensure continuity across different contexts); and the AI-ification of everyday services (like recognizing faces in photos and predicting, um, emoji). Add it all up and it means Apple is focusing a lot more on AI, just like other big tech companies such as Google and Facebook (and don't forget Amazon too!). But Apple is bringing artificial intelligence to the phone -- it now has a neural network API, for instance -- and, interestingly, it's focusing on doing so at the device level, not the cloud level. So what does it all mean?

Transcript
Starting point is 00:00:00 Hi, everyone. Welcome to the a16z Podcast. I'm Sonal, and today we have a podcast on the heels of Apple's Worldwide Developers Conference announcements yesterday, trying to talk about some of the announcements and, more broadly, some of the tech trends that they signify. And hosting that conversation, and to kick things off, we have Kyle Russell moderating. So yesterday, Apple showed off a bunch of new developments with regards to their several different operating systems: iOS on your phone and iPad, macOS, renamed from OS X, on the desktop and laptop computers they make, and watchOS. And kind of looking across the spectrum of software they talked about, there were a couple of big themes that we saw. There's kind of this platformification of all of the apps that Apple makes available on iOS. There was also a lot of focus on AI, whether it was identifying faces of people you know
Starting point is 00:00:50 in photos you take from your photo library or suggesting which emoji to use in conversation when you're just chatting with your friends. And there was another thing that was kind of related to how they bring software and interface elements between their platforms, which was this kind of widgetification of different interface elements, where parts of the lock screen look like parts of widgets that you use within apps, which look like suggestions that Siri brings up.
Starting point is 00:01:12 It's kind of making the interface work in multiple contexts while also making it all still feel familiar, no matter kind of what angle you're approaching a task from. So to jump into kind of all three of those things, I'm joined by Frank and Benedict, experts on AI and mobile, respectively. So, Benedict, what did you think of this kind of platformification of Apple's apps? Frankly, it sounded a lot like what Facebook and Google have been showing off, taking apps that people use every day and introducing ways for developers to kind of become part of that experience.
Starting point is 00:01:43 Yeah, I think there's a bunch of different ways to think about this. I mean, one of them, and I think first of all, one should remind people that, you know, actually most iOS users use Apple Maps, and almost all iOS users use iMessage a lot. And so, you know, even though everyone in the Valley is using Google Maps, Apple Maps is kind of a big deal. And it's interesting that they've turned Maps into a platform almost before Google has, really, I think, which was sort of a thing that was missing from Google I/O, actually: there wasn't much sense of developers being able to put their own layers into Google Maps. Whereas here, you know, OpenTable or Hotels.com
Starting point is 00:02:20 or Uber can integrate directly into Maps and start making suggestions or providing services kind of embedded within those. And then the same thing in Messages, the same thing in Siri. I think there's kind of two ways to look at this. One is that this is Apple creating new entry points and new ways that you can interact and engage and get people to use your app. Another way to look at it is that it's another thing
Starting point is 00:02:45 that Apple's making developers do, and it's not clear whether all developers will go for it. So there was clearly huge pent-up demand for keyboards and app extensions. Whether there is huge pent-up demand for people to provide messaging applications, particularly from people who aren't just doing GIFs, is another question. You know, whether there's pent-up demand for OpenTable to do a messaging plugin is kind of unclear.
Starting point is 00:03:05 I think there's a sort of a third way to think about this, which is to contrast it with Google. So, you know, take the use case: I am trying to arrange where to have dinner with somebody using a messaging app. Google's use case would be, Google's watching you and it says, hey, you should go to this restaurant. And Apple's use case is that you have the OpenTable plugin in Messages. You've got the OpenTable app on your phone, so you just tap the OpenTable icon in Messages, and it starts suggesting restaurants. And so there's a sort of an interesting split here, in that Google feels like it can subsume all the world's knowledge into Google and then it can tell you what you want, whereas Apple is pursuing more of a kind of a third-party
Starting point is 00:03:45 developer-centric focus. And I think that manifests in a lot of the ways that AI is being used, which we'll talk about later, but a lot of the way that Apple is deploying AI, as opposed to the way Google is applying AI, feels like Google wants to be the all-knowing oracle: you can ask it something, and it will just know the answer, or it'll even suggest it before you do. Whereas Apple, it's more like, I don't know, the servant that follows you around and hands you things as you need them, without you even knowing that you needed them. It's just a slightly different approach to what kinds of problems you're trying to use AI to solve. And I think, you know,
Starting point is 00:04:25 the sort of platformization is sort of interesting, again, because it's kind of a bottom-up approach as opposed to the Google approach, which is just a sort of a top-down approach to how you address these kinds of problems. Yeah, one of the ways to think about it is that if you think about how you find applications and content, we've gone from, well, the Start button on your desktop OS controls that, right? You hit the Start button and applications show up. Then we went to mobile, which was, your home screen controls that,
Starting point is 00:04:55 right, which is, whatever set of icons appears on the home screen of my iPhone, that's how I find content. And then what Apple is trying to enable is a third way, with Siri as the front end for it, which is, I can just do a natural language query and it'll get me to the right app and then shoot it the parameters. And so Apple's always been very consistent, because it comes from the sort of desktop OS world, which is, there's a way somebody discovers the set of things you can do, right? And then there's a set of providers who provide that. We used to call them application providers.
Starting point is 00:05:28 And now we call them mobile providers. And now we'll call them chat providers, right? Which is, I'm inside iMessage and I'm on the other side of this conversation. So comparing and contrasting approaches from these big incumbents, looking at Apple's focus on privacy. You know, they talked about how they want to enable you to see things like, oh, faces and locations that pull up across your photo library, like be able to ask Siri, hey, show me photos from last summer at the beach that I took that are in Photos, or even, you know, with opening Siri up to other applications, show me those same photos in Instagram. Apple, you know, for the last couple of years, has been heavily pushing this emphasis on privacy
Starting point is 00:06:05 in addition to, you know, getting smarter all the time. And when you look at the amount of data that you have to look at in order to get those interesting insights, it seems like those two goals are pulling against each other. You want to be able to look at everyone's data and, you know, kind of pick it apart for all the learning that you might find, you know, common features. But what Apple presented today hints that they think they found a way to do that without kind of invading your privacy, without saying, we know that Kyle was at this beach on this day, and, ooh, that's kind of creepy. Right.
Starting point is 00:06:37 Apple and Google couldn't be more different on this. And Apple wants to emphasize the difference, which is, Apple says, we make money when you buy iPhones. So we don't have to monetize you in any other ways. We don't have to look in your email. We don't have to look at your search history. We don't have to look at your pictures. We don't need to look at any of those things. But it turns out, with the way artificial intelligence is working, in particular there's a technique of AI called deep learning.
Starting point is 00:07:04 The way deep learning algorithms work is you actually do need a lot of data. You need to look at people's photos to recognize the faces in them and to tell whether you're at a beach or a work offsite or in a forest. So you need a ton of data. So what Apple announced is they're using this very interesting computer science technique called differential privacy. And the idea here is that you can feed the data in a way that guarantees, cryptographically, the security of the data. In other words, they're not just saying, we promise not to mix your user ID in with this data. They're trying to do something stronger, which is, there are cryptographic techniques you can use to guarantee that there's a
Starting point is 00:07:46 separation between the personal identification and the data itself, right? So that they can see my photos without knowing that I took them. So there's a trade-off between how much security you can guarantee and the accuracy of the predictions. And so it'll be interesting to see if they can get to predictions as accurate as something like Google Photos, which doesn't use this technique.
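To make the differential privacy idea concrete, here is a minimal sketch of randomized response, one of the classic local differential privacy mechanisms. Apple hasn't published the details of its implementation, so this is only an illustration of the general concept, not what iOS actually does: each device adds noise to its own report, and the aggregator can still recover the population-level statistic.

```swift
import Foundation

// Randomized response: each device flips a coin before answering a yes/no
// question (say, "did you use this emoji today?"), so no single report can be
// trusted, but the aggregate can still be estimated.
func privatizedReport(truth: Bool) -> Bool {
    if Bool.random() {        // heads: answer honestly
        return truth
    } else {                  // tails: answer with a second, independent coin flip
        return Bool.random()
    }
}

// The server only ever sees noisy reports. Because the noise process is known,
// it can correct for it: expected "yes" rate = 0.5 * trueRate + 0.25.
func estimateTrueRate(noisyReports: [Bool]) -> Double {
    let observedYesRate = Double(noisyReports.filter { $0 }.count) / Double(noisyReports.count)
    return max(0, min(1, 2 * (observedYesRate - 0.25)))
}

// Simulate 100,000 devices where 30% of users actually did the thing.
let trueAnswers = (0..<100_000).map { _ in Double.random(in: 0..<1) < 0.3 }
let reports = trueAnswers.map { privatizedReport(truth: $0) }
print("estimated rate:", estimateTrueRate(noisyReports: reports))   // close to 0.30
```

The trade-off mentioned above shows up directly here: more noise per report means more privacy for each individual and a less accurate estimate for the aggregator.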
Starting point is 00:08:28 So another element of Apple's AI strategy is that, you know, when they were talking about looking at your photos, they say that they do the learning on device. And if you look at some of the tools they released for developers, they included this set of APIs for basic neural network subroutines: constructing the different layers of a neural network, essentially the tools that they use in order to build the Photos learning on top of the new version of iOS. What is the difference between learning in the cloud, which Google is going for given their massive investment in data and server farms, and learning on device? Are there certain pros to doing it on device? Or why do you think Apple would be moving in that direction? Yeah, it's not exactly clear what developers can do with it yet, so we'll need more details from them. But very generally speaking, the way deep learning works is that you set up a neural network. It's got a bunch of nodes with connection strengths between them, and then you feed it a bunch of data. And what the neural network is doing is adjusting the strengths of connections between these nodes so that collectively the nodes
Starting point is 00:09:10 can make predictions. You feed it a picture: that's got a dog in it, or that's got a cat in it, or you're in a forest, or your friend Harry is in it. So essentially what these neural networks are doing is making predictions based on input. Now, the way these networks get to high accuracy is you need a ton of data. And so traditionally what you do is you get a ton of data and then you train the network. And the result is now you have a trained network. And then you can take that trained network and it will make predictions. So what is clear that you can do with the Apple neural network stuff is you can take trained networks and then run the classifiers on device, make inferences, as they call it if you read the developer documentation.
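The snippet below is not Apple's neural network API (we haven't dug into those subroutines yet); it's a deliberately tiny pure-Swift sketch, with made-up weights, of what "running a trained network on device" means: the connection strengths were learned elsewhere, and the device only evaluates the forward pass.

```swift
// A tiny two-layer network whose connection strengths (weights) were learned
// somewhere else, e.g. in a data center, and shipped to the device. On the
// device we only run the forward pass: inference / classification.
// The weights here are made up for illustration.
struct DenseLayer {
    let weights: [[Double]]   // one row of input weights per output node
    let biases: [Double]

    func forward(_ input: [Double]) -> [Double] {
        var result = [Double]()
        for (row, bias) in zip(weights, biases) {
            var sum = bias
            for (w, x) in zip(row, input) { sum += w * x }
            result.append(max(0, sum))   // ReLU activation
        }
        return result
    }
}

// Hypothetical weights for a classifier over a two-number "image embedding".
let hidden = DenseLayer(weights: [[0.8, -0.2], [-0.5, 0.9]], biases: [0.1, 0.0])
let output = DenseLayer(weights: [[1.2, -0.7]], biases: [-0.1])

let features = [0.6, 0.4]                                   // stand-in for image features
let score = output.forward(hidden.forward(features))[0]
print("score for 'contains a face':", score)
```

All the expensive work, learning the weights from huge amounts of labeled data, happened somewhere else; evaluating the network for one input is just a handful of multiply-adds, which is why it can run quickly on a phone.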
Starting point is 00:09:54 So that seems to make a bunch of sense, which is, once you have a trained network, in other words, you've taken a million pictures and you've trained it, the resulting trained network will run on device very, very quickly. So that's great. The benefit of doing it on device is that you don't need a round trip to the cloud. People make the point that that's how Google Translate works on the phone. So Google Translate is run on the device when you have it on your smartphone. And so that's effectively what they've done. Yeah. So there's the data center side of it, which is analyzing and training the networks on all these language pairs. The result of that training goes on to the device so that you don't need to be connected to the internet for it to do the translation. So the exact same approach will
Starting point is 00:10:33 work on Apple. What will be interesting is to see if they extend those APIs so some rudimentary training can actually take place on the phone itself. So you can imagine maybe I just want to train a neural network on my photos. And maybe that's enough of a data set to find pictures of my friends. Now, it might not be good at finding pictures of beaches and forests and, you know, that type of thing, but it'll probably do a pretty good job on the faces of the people I take pictures of all the time. So it'll be interesting to see if they take that next step, which is do rudimentary training on the phone as well. Now, the reason that you wouldn't do that today is maybe you don't have enough data, and it consumes battery.
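Nothing announced here exposes on-device training, so the following is purely a what-if sketch of the "train a tiny model on just my photos" idea: a one-neuron logistic classifier updated by gradient descent over a few hypothetical, locally stored face embeddings. All names and numbers are invented for illustration.

```swift
import Foundation

// Hypothetical: fine-tune a one-neuron classifier ("is this my friend or not")
// on a handful of locally stored face embeddings. This is not an announced
// Apple API, just what rudimentary on-device training could look like.
struct TinyClassifier {
    var weights: [Double]
    var bias: Double

    // Sigmoid output: the probability that the embedding belongs to the friend.
    func probability(_ features: [Double]) -> Double {
        let z = zip(weights, features).reduce(bias) { $0 + $1.0 * $1.1 }
        return 1.0 / (1.0 + exp(-z))
    }

    // One pass of stochastic gradient descent over the local examples.
    mutating func trainEpoch(examples: [(features: [Double], isFriend: Bool)],
                             learningRate: Double = 0.1) {
        for example in examples {
            let predicted = probability(example.features)
            let error = predicted - (example.isFriend ? 1.0 : 0.0)
            for i in weights.indices {
                weights[i] -= learningRate * error * example.features[i]
            }
            bias -= learningRate * error
        }
    }
}

// Invented two-number "embeddings" standing in for face features.
var model = TinyClassifier(weights: [0.0, 0.0], bias: 0.0)
let localPhotos: [(features: [Double], isFriend: Bool)] = [
    ([0.9, 0.2], true), ([0.8, 0.1], true),
    ([0.1, 0.7], false), ([0.2, 0.9], false)
]
for _ in 0..<500 { model.trainEpoch(examples: localPhotos) }
print(model.probability([0.85, 0.15]))   // high probability for a friend-like embedding
```

Even this toy hints at the battery point: training loops over the data many times, while inference touches each input only once.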
Starting point is 00:11:13 And then to maybe move to something a little bit less esoteric and back into the world of things that we all will be touching on our phones every day, there was something kind of throughout the presentation, this widgetification of the interface, whether you're looking at the lock screen, where there are now interactive notifications where you're not just responding with touch, but with 3D Touch, the feature that Apple showed off last year,
Starting point is 00:11:38 where it can also detect pressure, to the widgets that are activated using the same mechanism, things that show up in your notification screen, a lot of interface elements that seem to derive their kind of fundamental thinking from a lot of what we see on the Apple Watch. Benedict, did you think that that was particularly interesting? Or, you know, what did you find notable about it? Well, I think there's always been this sort of question
Starting point is 00:12:01 of, you know, bundling and unbundling on the device. And, you know, before there were bots, people were talking a lot, like this time last year or maybe earlier, about how notifications were going to be like the new runtime, and everything would happen inside the notification, outside of the messaging app, sort of in your notification feed. What Apple has done is they've sort of created this format
Starting point is 00:12:21 whereby an app can contain a little bit of content and a couple of buttons. And that can be in Siri, it can be in Messages, it can be on a pop-up if you press on an app, it can be in your widget screen, your swipe-left screen. And it's kind of interesting.
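For a sense of what "a little bit of content and a couple of buttons" looks like to a developer, the longest-standing of these surfaces is the today widget, which is an app extension. Here is a minimal sketch; the label and its text are placeholders, and the newer surfaces discussed here (Messages, Siri, the 3D Touch pop-up) each use their own extension points rather than this exact protocol.

```swift
import UIKit
import NotificationCenter

// A today-widget extension: a small view controller that the system embeds in
// the widget / swipe-left screen and asks to refresh periodically.
class TodayViewController: UIViewController, NCWidgetProviding {
    private let summaryLabel = UILabel()   // placeholder content view

    override func viewDidLoad() {
        super.viewDidLoad()
        summaryLabel.frame = view.bounds
        summaryLabel.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(summaryLabel)
    }

    // Called by the system when it wants the widget's content refreshed.
    func widgetPerformUpdate(completionHandler: @escaping (NCUpdateResult) -> Void) {
        summaryLabel.text = "2 reservations this week"   // placeholder data
        completionHandler(.newData)
    }
}
```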
Starting point is 00:12:39 I'm not sure what the broader implications of that are. It feels more like they've just sort of said, let's just kind of tidy it up, because we've got all these different bubbles all over the place, and let's make them all work sort of the same way. I mean, I think what it does point to, as I sort of mentioned earlier, is this question of, you know, kind of endlessly shifting interaction models. We have so many different, I mean, this is a point I often make: if I was to say I installed an app on my smartphone in five years' time, I don't quite know what that word would mean. And I think we're seeing some of that from Apple now. I come at that question from an oblique angle. One of the things we haven't mentioned so far is Apple enabling Apple Pay on the web, which is also a sort of an interesting question of, well, where does Apple choose to put the aggregation layer in this? Because, you know, the obvious analysis would be, Apple wants everyone to make apps so that you can't leave the iPhone, because you've got all your great apps, and then Apple Pay is a way of making Apple apps better, or apps on Apple devices better. But really Apple Pay is about making the device better. And so here you are, an e-commerce vendor: there's a whole category of people for whom it really doesn't make much sense to make an app, because people aren't going to buy from them or use their service often enough, but now they have access to Apple Pay.
Starting point is 00:13:47 So even that retailer that you buy from every six months, it's still better to buy from them on an iPhone or an iPad, or indeed on a Mac, because now you can just use Apple Pay instead of typing your credit card number in. I mean, I think maybe there's a couple of ways to pull back from this. At every one of these events, and the same for Google I/O as for Apple, there's the stuff that's just been on the roadmap for the last two or three years, and it makes sense, and it's nice to have if you've got the device. And some of that requires developer support. Some of it doesn't. Some of it will get developer support.
Starting point is 00:14:14 Some of it won't. Some of it was on the other platform last year. Some of it will be on the other platform next year. And that's just kind of the genuine, kind of continuous kind of process of just sort of, what's the phrase, sustaining innovation on these platforms. And then the stuff that sort of points to fundamental shifts in how you use them or fundamental shifts in how they want to do this stuff. And I think the kind of the platformization that we saw earlier is interesting.
Starting point is 00:14:38 as an attempt to solve some of the problems that Google is solving with its kind of pervasive, knowing AI, but getting third-party developers to solve those problems instead. So instead of Google sucking in every cinema and being able to tell you which cinema is showing that film that you want, Apple would get a cinema ticket app to plug that information into Apple Maps, for example, or restaurants or whatever it is. And so they're sort of trying to use developers to compete with AI, you could almost argue. And then, as I said, the other thing is that
Starting point is 00:15:13 they themselves are using AI to make the existing apps better rather than to make sort of fundamentally new experiences, whereas the argument would be that Google is trying to kind of shift the whole experience up the stack and make a lot of those apps slightly irrelevant, in the way that Google web search makes some web pages irrelevant because it sort of subsumes content into Google Search. And they're just very different approaches to thinking about how you should be doing this stuff and where the aggregation points are. It's not quite clear where this is going.
Starting point is 00:15:47 We're kind of at a point where Apple and Google have kind of converged with each other and now they're heading off in different directions, but it's not quite clear which directions they're going in. Another kind of example of that being the SiriKit integration, where rather than just saying, get me tickets to Captain America at 7 p.m. tonight, and Google's like, okay, I'll figure out how to route that request, it's more like Alexa, where they're reliant on particular developers to introduce new skills. So what you say is, get me tickets to Captain America at 7 from Fandango. Yeah, exactly. So it routes that request through someone who's been thinking about those problems a little bit more, a developer, you know, who already has an app that solves that problem.
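Movie tickets aren't actually one of the SiriKit domains Apple announced, so the sketch below is hypothetical: the types are invented, not the real Intents framework. It only shows the shape of the pattern being described, where Siri turns the utterance into a structured intent and hands it to the handler of whichever app the user named.

```swift
import Foundation

// Hypothetical types sketching the flow: the system turns
// "get me tickets to Captain America at 7 from Fandango" into a structured
// intent and routes it to the named app's handler. These names are invented;
// they are not the real Intents framework API.
struct BuyTicketsIntent {
    let movieTitle: String
    let showtime: DateComponents
}

protocol BuyTicketsHandling {
    func handle(_ intent: BuyTicketsIntent, completion: (String) -> Void)
}

// The developer's side (the "Fandango" of the example): the app already knows
// how to search showtimes and book seats, so it just handles the intent.
struct TicketAppHandler: BuyTicketsHandling {
    func handle(_ intent: BuyTicketsIntent, completion: (String) -> Void) {
        let hour = intent.showtime.hour ?? 19
        completion("Booked \(intent.movieTitle) at \(hour):00")
    }
}

// The system's side: parse the utterance, resolve which app was named, hand off.
let handlers: [String: BuyTicketsHandling] = ["fandango": TicketAppHandler()]
let intent = BuyTicketsIntent(movieTitle: "Captain America",
                              showtime: DateComponents(hour: 19))
handlers["fandango"]?.handle(intent, completion: { print($0) })
```

The design choice is the one described above: the platform doesn't try to understand every ticketing service itself; it defines the intent and lets the developer who already solved that problem handle it.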
Starting point is 00:16:28 Yeah, I mean, I've been thinking about the deployment phase of AI, so to speak. And, you know, it strikes me there's this old saying in computer science that a computer should never ask a question that it should be able to work out the answer to. Clearly, that's one way of thinking about everything that Google showed at Google I/O. But it's also a way of thinking about a lot of these small and interesting and useful little bits of AI scattered all the way through iOS. But they're trying to answer completely different kinds of questions. And for Apple, you know, the best example is the thing you mentioned earlier: you sign into your Mac because you walk up to it wearing an Apple Watch. So the computer doesn't have to ask you for a password anymore. It doesn't have to ask, is that Kyle? It just can work it out
Starting point is 00:17:02 because the watch is there. Whereas Google would sort of say, we should tell Kyle that he should leave for home half an hour early today, because he's meeting a friend in Berkeley and there's bad traffic, so we'll pop a message up on his device that says, hey, you should leave early. And those are sort of very different kinds of things, but they're both things that the computer should be able to work out. They're just very different philosophically in terms of how you look at the user. One last thing I'd like to
Starting point is 00:17:32 bring up. Something you tweeted as the news was unfolding was how nothing kind of came up specifically for the iPad when it comes to iOS 10. Do you think that's a matter of the last couple of point updates to iOS 9 having focused on the iPad because of the iPad Pro? Or does that say something about where Apple's focusing its attention? I think there's two answers to this. One is Apple has kind of worked its way through a reboot of the iPad. So the iPad Pro really changes the experience of using it. The keyboard changes the experience of using it. If you have a use case for the Pencil, that's a big deal. I don't, so I don't use it, but the Pencil works really well. The split-screen multitasking makes a big difference.
Starting point is 00:18:14 And so they've kind of rebooted the experience of having the thing. And then the pricing, with the introduction of new subscription pricing options last week, potentially changes the economics for a developer. Because now, if you're going to make a really interesting productivity app, instead of having to charge $50 for it, or give it away for free, or charge $5 for it and hope that you get a million people using it, and those are basically your only options, now you can make it a dollar a month or $5 a month, or you can make it $5 a month and $15 a month for different functionality. And so you have much more scope for people to invest in creating kind of more sophisticated, more powerful applications, because you have this kind of
Starting point is 00:18:54 scope for recurring revenue within that. And so I think all of that was sort of iPad-focused, actually. But then they didn't do anything else. And so, for example, one of the remaining pain points is opening files. If you want to open something from Box or iCloud or Dropbox in PowerPoint or Excel or something, you have to jump through about five hoops, and that's the sort of stuff that they could have cleaned up there that they haven't.
Starting point is 00:19:20 So, you know, that is slightly perplexing, because there clearly is a very strong story in Apple's mind about the iPad as a route to replacing PCs, and they've done a bunch of it, but they didn't announce anything else other than the pricing. And yeah, that was a surprise to me. The problem with the platformification discussion is that it's all these little things,
Starting point is 00:19:38 like now the phone can be taken over by WhatsApp or Citrix or what have you, and it looks like the native Phone app, but it also has additional features from those applications. Well, on that point, the thing that I'll call out is, if you think about iMessage, they haven't really touched it since launch, right? It's just a straight-up replacement for SMS, and it was sort of free to the Apple community.
Starting point is 00:19:58 So I think yesterday you saw an investment in much, much richer data types in iMessage. So you can send videos and they'll play. You can do handwriting samples. You can take over the entire screen. You can send messages with invisible ink, so that it can be a surprise: you open a photo and then you have to swipe it to actually see it. Yeah, exactly. So if you think about where this takes us, it takes us to WeChat, right? WeChat is the messaging interface to lots and lots of useful applications.
Starting point is 00:20:25 And so to write useful applications, you need rich data types. So some of them will be in text, and you can just have a conversation with Siri in the mix; you can either type or you can speak, and then the answers will come back. But for some of the interactions, you want to have rich data types right there. So instead of just playing a video, how about a buy button or a book-it-now button? And so I think we're sort of on the first steps towards that, where richer and richer data types are supported in Messages and in iMessage, and it's only a matter of time before you get a full-on application in there.
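The concrete developer-facing piece of this is the new Messages framework: an iMessage app extension builds an MSMessage with a rich layout and inserts it into the conversation. Here is a minimal sketch; the booking caption, the placeholder URL, and the idea of wiring it to a "book it now" button are our illustration, not something Apple demoed.

```swift
import Messages

// An iMessage app extension view controller. A (hypothetical) button in its UI
// would call send(bookingFor:), which drops a rich, tappable card into the
// conversation instead of a plain text bubble.
class BookingMessagesViewController: MSMessagesAppViewController {

    func send(bookingFor restaurantName: String) {
        guard let conversation = activeConversation else { return }

        let layout = MSMessageTemplateLayout()
        layout.caption = "Dinner at \(restaurantName)?"
        layout.subcaption = "Tap to book a table"   // the "book it now" affordance

        let message = MSMessage(session: conversation.selectedMessage?.session ?? MSSession())
        message.layout = layout
        // Placeholder URL carrying the booking details for the receiving device.
        message.url = URL(string: "https://example.com/booking?id=123")

        conversation.insert(message) { error in
            if let error = error {
                print("Could not insert message: \(error)")
            }
        }
    }
}
```

The result arrives as an interactive card rather than a plain text bubble, which is the first step toward the WeChat-style application-inside-the-conversation idea.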
Starting point is 00:21:10 And then I have a piece of sort of interesting historical content. One of my jobs here is sort of court historian. So you might think that artificial intelligence in Siri just sort of burst onto the scene recently, and that people haven't been thinking about this for a long time. On the contrary, Apple in particular has been thinking about this for a long, long time. And so to see how Apple was thinking about this in the late 80s, go YouTube a video called Knowledge Navigator, which came out in 1987. And you'll see Apple's take on what is the future of knowledge work and voice-assisted intelligence. And you'll see a very, very compelling demo of things that we still can't even do today. And they've been thinking about this since 1987. So just go look that up. And I think you'll enjoy seeing the historical flavor of this.
Starting point is 00:21:47 You got a sense of Apple sort of saying, who are these guys who say we can't do AI? But you also got a sense that they didn't really even want to say AI. I mean, it's a little bit like when they do the new iPhone, and they, they rattle off some buzzwords about, you know, how the chip in the camera works. And they do it because it kind of sounds cool, but they're not really, that's not really what they're selling. They're just saying it's a better camera now. And in the same sense, you know, every now and then, they would kind of mention some AI buzzwords. But they were sort of saying them as marketing. And they weren't saying what Google was saying, which is, you know, we have built a fundamentally new and amazingly powerful AI system.
Starting point is 00:22:27 that uses these AI techniques, that can do these new things that were never possible before. They weren't doing that at all. They didn't get up and talk about how quickly they can recognize photographs or how many photographs they've got. They didn't even really do a very compelling image recognition demo. I mean, they sort of said they can do it, but they didn't do, you know, the convoluted thing where you say, you know, you can find your son wearing the green boots in the car standing next to another child who's got a teddy bear, you know,
Starting point is 00:22:53 the kind of stuff that Google likes to do. And it does feel a little bit like the common line about the way these companies think about the device: Apple thinks about the cloud as dumb storage, and Google thinks about the phone as dumb glass. And you still get this sort of sense from Apple that the AI is just something that enables some new user experience that they want to build. Rather than thinking, oh my God, now it's possible to recognize images, right, okay, what can we do with that? They sort of start from the feature that they want and build the AI to support it.
Starting point is 00:23:26 And I don't think you sort of got a sense that it's like, of course they were going to build some of this stuff. You know, they weren't going to not have any machine learning. But it's still not entirely clear how much that ripples through the entire organization and through the entire product. Whereas Google clearly is sort of all in on, this changes everything about how we do everything inside Google. Yeah, Apple's very much more toe in the water: let's see if we can deliver better user experience or better Continuity features or better Photos features through AI, as opposed to, AI is absolutely the next most important platform and everybody should get on board. They are not there yet. It very much felt like Apple's MVP approach to products, where the Apple Watch, it's essentially notifications you can interact with. For this, it was, you're going to get some insights, like Photos will just automatically sort out photos of you and your friends.
Starting point is 00:24:18 And beyond that, not over-promising, not being too aggressive, just saying, see, we've baked in some stuff that can actually provide meaningful improvements to user experience. And then over the next two, three, four years, we'll see kind of the real version. Apple have announced all this AI stuff and they've said, see, we can do something. But they've also said that they want to do it with their hands tied behind their back. And they've given some suggestions as to why that might be okay. But, you know, one kind of has to maintain some skepticism relative
Starting point is 00:24:56 to, you know, the biggest machine learning company on Earth, Google, which clearly isn't doing this with one hand tied behind its back. And so it's not quite clear how well that's going to work in the future. Thanks again to Frank and Benedict for joining to talk about these kinds of big ideas coming out of not only Apple, but also Microsoft, Facebook, and Google; clearly all of Silicon Valley is focused on these issues: AI, and reaching people where they are when it comes to platforms like messaging and maps. So now that, you know, the spectrum is kind of complete, with Facebook across all platforms, Google with Android, and now Apple on iOS and its other software platforms, it's clear that these trends are only going to continue going forward.
Starting point is 00:25:26 And with that, thanks for listening to the a16z Podcast.
