Network connectivity: optional – Chrome Dev Summit 2013 (Jake Archibald)

MALE SPEAKER: So next up
we have Mr. Jake Archibald, who you met before. Jake is a long time
front end developer. He originally worked on the
Glow library at the BBC. It was kind of like a competitor
to jQuery sort of back then because they
needed IE 5.5 support. Is that the reason? JAKE ARCHIBALD: And 5. MALE SPEAKER: Yeah, and 5. IE 5 support. Yeah, this was good. He also is an expert
in @font-face. He made a fantastic
offline tool for creating CSS sprites called Sprite Cow. He developed a technique for
including IE specific styles with Sass IE, naturally. More recently, you might
know him from his work with the Application
Cache, where he wrote a number of
articles and talks. And today, he’s going to be
telling you a little bit more about ServiceWorker. So, Jake Archibald. JAKE ARCHIBALD: Hey. [APPLAUSE] So I couldn’t help notice that
in Mat’s list of talks there that you should pay attention
to, mine wasn’t on there. So no Christmas card
for you this year. So making stuff work offline
used to be very easy, right? I mean, it’s only
really become an issue since the advent
of the internet. Before that, you had everything
you needed in front of you. You had the machine
that was going to do 100% of the
processing, and you had the thing that was
going to be processed. This is the first
computer I ever owned. And we should have audio here. Oh, we do a little bit. [BEEPING] And these were the
sights and sounds you would get just
to do something like render a simple
image on the page. And I got this working on a
Spectrum emulator last week, and it’s the most
fun I’ve had in ages. And I uploaded it to
YouTube, the full video. And someone actually
took the audio, recorded it onto a
cassette, and played it in a real ZX Spectrum. And it worked. Well, it crashed
at the very end. But for me, that makes
it more authentic. It was great. But in these days–
the only thing you needed from outside the
building was electricity. And that came from beyond the
wall, wherever it comes from. As long as that stream was
constant, then you were gold. Everything would work. But then we got another stream. We got another stream
from beyond the wall, the internet, data on demand. It looks like people were
terrified of it at first. But not anymore. In fact, now, even things
that are kind of built around physical media, they
can’t resist a delicious slurp of internet before
they’ll do anything. I don’t– I didn’t buy
new games for the PS3. Because you would go through
this every couple of months where every time you would
try and play the game, it wanted to connect
to the internet and download a 50
megabyte update. And I just wanted
to play a game. And they do automatic
updates now. It’s about time, really. Yeah, I just want
to play the game. I’ve got the disk
here in my hand. These days, most
devices will ship with a cache for
electricity which can be used when the
stream isn’t available. And we call it a battery. In fact, as we’ve become less
reliant on this electricity stream, we’ve become more
reliant on the internet stream, especially on the web. I became very aware
of this back when I worked at a small agency. You see, one day I needed
to go to the toilet. And there were five cubicles
for me to choose from. But unfortunately,
on this instance, the first four were occupied. And that’s usually OK. I tend to only need
one toilet per visit. But I knew from
previous experience that the office Wi-Fi only
extended to the first four. And there’s no mobile
data in there either. And I thought for a moment. And I was like, no. No, I don’t consider
this good enough anymore. As a human being,
it’s my right to have internet when I
go to the toilet. I’ve been using this
joke for a while now. But it comes with
an important update. In the Google offices–
or certainly in the UK– the situation is much worse. We have no data
at all, completely data-less to the point where
saying, I’m just going offline, has become a kind of euphemism. Of course, there
are other places without internet, like large
parts of my daily commute because I get the train in
through the countryside, underground systems of course,
planes, international roaming. You have data, but it’s
too expensive to use. Have we lost something
through this data dependency? Wouldn’t it be great if things
like Wikipedia or Google Maps and YouTube were once
again resilient to fluctuations in connectivity? Well, as a community,
we thought so. And we asked for a solution. And thus, we received
the Application Cache. How did that turn out? I have voiced my opinions
on AppCache before. But today, I would like to
defer to the philosopher Andre the Giant from then
WWF, who said this. I don’t like to speak
badly of people. I have grown up thinking and
being told that if you cannot say something nice
about someone, you should not say
anything at all. Yeah. Oh, but I must break
that rule in this case because I hate Hulk
Hogan very much. He is a big ugly goon, and
I want to squash his face. Yeah. AppCache needs
squashed in the face. The problem with AppCache
is it’s like this. It looks so simple, so inviting,
so easy to get started. But it comes with an instruction
manual, and it’s huge. And if you don’t read it
all and remember it all, you will be caught out. If I make an HTML
page, and point it at a manifest like this
and list some files in it, and it’ll be cached and it’ll
work offline, and that’s great. But any page that points at
the manifest, just index.html here– that will automatically
become part of the cache. And that counts per URL. So if the URL just has a
change in the query string, that’s a new thing that will
be automatically cached. And this will just
build up and up and up. There’s no way you
can prevent it, and there’s nothing you
can do to clear it out once it happens. Also, on a page like this,
the CSS is going to work. That should be– it
shouldn’t be .scss, should it? What am I doing? Anyway, the
JavaScript will work. But what about this? This won’t load even
if you’re online, because it’s not in the cache. And AppCache won’t let network
requests through by default, unless you add this little
bit of magic at the bottom. Of course, this is
all in the manual. But there’s nothing
in the manifest there that hints to me that
this is the behavior, this is what I should expect. I was on a transatlantic flight,
and I had nothing better to do. I was offline. So I tried to draw
how AppCache works. And it took me like
eight hours, partly because the specification
is so dense, and it kept confounding
my expectations. But also because the person
in front had fully reclined, so I was having to use
my laptop like a T Rex. But it’s that complicated. And there’s nothing wrong
with high level APIs, right? I mean, look at jQuery. They looked at what people
were doing with the DOM, all the common stuff, and
they made shortcuts. Some quite overloaded methods. You need to know what the
different arguments actually do. But it works in practice. They’re really nice shortcuts. However, AppCache tried
to be the shortcut before it knew the long cuts. It looked really sweet on paper. But when developers
actually look closely at it, we go like, ugh, no, don’t
like the look of that. Take that away. So we’re going to
give it another try. We’re giving it another try. And the new thing is
the ServiceWorker. Actually, I think this
is the first talk on it. So that didn’t feel like
a deserving introduction. So we’ll give that another go. And the new thing is
the ServiceWorker. But yeah. The current status is
it’s a specification that we’re developing,
drafting on GitHub. We’ve been working on this
a while, along with Mozilla and other third parties
who are interested. We’ve been prototyping bits
and pieces in the browser. But the spec is still
a work in progress. I’m going to show
you bits of it. And any feedback you
have on it, anything you see you don’t like,
you can talk to me. You can ask me in the
Q&A. But you can also file issues on GitHub. There’s nothing to play
with in the browser yet, but we’re hoping to hit canary
sometime early next year. So what is the ServiceWorker? Well, it’s a
JavaScript context that operates for a set of URLs. And you can use it
to manage behavior. It’s actually kind of easier
to show you rather than describe it. So on just a normal page, you
would register a ServiceWorker like this. The first argument,
you’re telling it which URLs you want
it to be in charge of. The second argument, the
URL to the JavaScript file that’s going to do
all the controlling. And congratulations, you
now have a ServiceWorker.
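(As a rough sketch, here and in the later examples I'm using the ServiceWorker API roughly as it eventually shipped, which differs in detail from the draft on these slides, so treat the exact names as illustrative. Registration looks something like this:)

    // On a normal page: point the browser at the worker script and the URLs
    // it should be in charge of. In the shipped API the scope is an option
    // rather than the first argument.
    navigator.serviceWorker.register('/serviceworker.js', { scope: '/' })
      .then(function(registration) {
        console.log('ServiceWorker registered for', registration.scope);
      })
      .catch(function(error) {
        console.log('Registration failed:', error);
      });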
And the ServiceWorker is very similar to a shared worker. I mean, it exists
separately to the pages. And multiple pages
can communicate with the same instance
of the ServiceWorker. If you’ve developed
Chrome apps before, this is kind of similar to
the background pages model. But these are much lighter. They don’t have a document. They’ve got no DOM. They’re less of a memory
burden and less startup cost than a whole page would be. So our ServiceWorker file
currently looks like this. It’s empty. And how does that affect
the behavior of pages? Well, it doesn’t. Magic was a huge
problem with AppCache. An empty AppCache file
would start doing things. This doesn’t. ServiceWorker aims to have
as little magic as possible. It’s a bring your own magic API. You can make your
own magic with it. But by default, it doesn’t
do anything you don’t expect. You’re in full control. But what control do
you actually get? Well, you can listen
in on network requests. You get this event. So I’m just listening to the event using the old DOM method, but you can use addEventListener if you want as well. So here, if I navigate to
slash whatever slash fubar, I’m going to get a
console log for that in this other JavaScript file. But I also get one for any request that’s triggered by that page as well, even if they’re to another domain.
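(A sketch of that logging handler:)

    // serviceworker.js: log every request made by the pages this worker
    // controls, including requests that go to other origins.
    self.addEventListener('fetch', function(event) {
      console.log('Request for:', event.request.url);
    });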
The usual cross origin rules apply when it comes to accessing data. But you’ll still get the
event for the request actually happening. Anyway, so what? Well, like other events
we have in the browser, you can prevent the default. And you can do something else. So here, I’m looking
at the request URL, and I’m looking
to see if it ends with one of the
image extensions. And then I’m going to
call event.respondWith. And event.respondWith
takes a response object, which is new in ServiceWorker. Or it will take a promise,
which will eventually resolve to a response. And the fetch method, which
is also new in ServiceWorker, is going to give you a promise
for a response to the URL you give it. So here I’m just redirecting
all of those requests, and I’m going to serve up this not-a-cat image instead of any other images on the page.
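(A sketch of that interception; the image URL is just a stand-in:)

    self.addEventListener('fetch', function(event) {
      // If the request looks like an image, answer it with a different image.
      if (/\.(png|jpe?g|gif|webp)$/.test(new URL(event.request.url).pathname)) {
        event.respondWith(fetch('/img/not-a-cat.jpg'));
      }
    });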
So note here I’m actually satisfying a cross origin request with a same
origin response. And you can do the
opposite as well. Security is always determined by
the origin of the response, not the request. This isn’t an HTTP redirect. You’re just serving
up a different file. As far as the
browser’s concerned, it requested
probably-cats-again, and it got it. But you could do an HTTP
redirect if you want. In fact, you can create
any kind of response. We have a kind of low
level API for that. So new SameOriginResponse,
give it a content type. Give it a body. And I’m going to serve that up.
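(In the shipped API that low level constructor is plain Response; a sketch, with a made-up body:)

    self.addEventListener('fetch', function(event) {
      event.respondWith(new Response('<h1>Hello from the ServiceWorker</h1>', {
        headers: { 'Content-Type': 'text/html' }
      }));
    });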
And you can see how you could start to make things work offline with this. Like, I get a template
from the file system API. I can get some data
from IndexedDB and mush them together
and serve that out. But that would be a
lot of code, and it would involve some
pretty gnarly APIs. So we’re introducing a new
storage system specifically for this, and it’s
the cache list. So far we’ve just seen onfetch, but there are other events as well, like oninstall. So this is called
when the browser sees the ServiceWorker
for the first time. It’s your opportunity
to get everything ready before you start
hearing about requests, before you start
handling requests. So in here, I can create a new
cache, give it a set of URLs, and add it onto
the caches object. And these are just URLs. They can be from another domain. That’s fine. And they’ll be downloaded,
and we add them to this cache called
static-v1 or whatever. Now the cache won’t
serve responses until all of those files
have successfully downloaded. And if one of them
fails, then you get nothing for
that whole cache. This is a good way to
group files together that are dependent
on each other. So if you can get the
network-failed.html out of the cache, you
know the PNG is there. The JS is there. The CSS is definitely there. They’re dependent on each other. We’re creating one
cache here, but you could create many,
like for a game. You could create a cache
per level, for instance. For a newspaper, you
could do a cache per issue or a cache per article. And then we wait on the cache. So event.waitUntil
takes a promise. And this is where you can
say, look, I’m not actually ready to handle requests
until this stuff has happened. On cache objects, you
get this ready method, which will resolve once that
cache has successfully formed.
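(A sketch of that install step. In the shipped API, creating a cache and waiting on it became caches.open plus cache.addAll inside event.waitUntil; apart from network-failed.html, the file list here is made up:)

    self.addEventListener('install', function(event) {
      // Don't treat this worker as installed until every one of these URLs
      // has downloaded. If a single file fails, the whole cache fails.
      event.waitUntil(
        caches.open('static-v1').then(function(cache) {
          return cache.addAll([
            '/network-failed.html',
            '/css/all.css',
            '/js/all.js',
            '/img/logo.png'
          ]);
        })
      );
    });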
You might wait on all of your caches, or you might only wait on some. So in the game model, you
might only wait on level one and allow the other levels to
download in the background. And you’ll just handle it
if level two is not there for whatever reason. For instance, if the
user goes offline while that’s still downloading
levels two and three, you can still offer an offline
experience for level one. And then when the user gets
to the end of level one, you can say, look, we don’t
have the files for level two. We’ve saved your position,
and when you’re next online, we can pick up
where we left off. But we’re just
creating caches here. We’re not actually using them. To use them, we go back
to our own fetch method. And once again, the
respondWith method, and I’m just going
to ask the caches for a match for this URL. I can be specific
and say, look, I only want to get a match
from the cache named static-v1 or whatever. But we don’t really
need to do that here. We don’t need to
be that specific. Of course, if that doesn’t
match anything in our caches– because we’re handling all
requests at the moment- we’re going to get a hard failure,
even if the user’s online because we’re saying, go to
the cache for everything. But this whole thing
is promise based, promises that we now have
in the DOM and in JavaScript. So we can fix that just by
adding .catch to the end and do something else. If you’re unfamiliar
with how promises work, then there is an HTML5
Rocks article coming. It should be out next month. That hopefully will explain it. But all async success/failure methods in JavaScript and the DOM
are moving to this model. So if you fail to get stuff from
the cache for whatever reason, like something
fails or there’s not something in the
cache for that URL, we fall down to this next step. And what I’m doing
is I’m using fetch. And I’m just going to
pipe through the requests. So we’re going to
go to the network. Of course, if there’s
nothing in the cache, and the network request fails,
we’re going to get nothing. But we can fix that as well. We can catch again. And here, I’m going to
go to the cache again, and I’m going to get the
network-failed.html file and serve that. And I know that’s there
because I depended on it in the oninstall event. And our error page is going to
be better than the browser’s default one because
we can say, hey, I’m afraid that resource
is not available. But here’s a list
of things which are. Here’s the articles that
are available to you. Here’s the levels
which will work. As I mentioned before,
onfetch doesn’t just fire for full page navigations. It also fires for assets on
the page, your JavaScript, XHR, CSS, images, et cetera. We can differentiate
between the two using event.type because
it makes sense here. We don’t want to serve up
network-failed.html for an XHR request. That doesn’t work. So here, we can just serve it
up for full page navigations.
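(Pulling those last few slides together as one sketch: cache first, then the network, then the network-failed page, but only for full page navigations. The draft's event.type check became properties on the request in the shipped API:)

    self.addEventListener('fetch', function(event) {
      event.respondWith(
        // Try the caches first...
        caches.match(event.request).then(function(cached) {
          // ...fall through to the network if nothing matched...
          return cached || fetch(event.request);
        }).catch(function() {
          // ...and if the network fails too, serve the offline page,
          // but only for page navigations, not for XHR, images and so on.
          if (event.request.mode === 'navigate') {
            return caches.match('/network-failed.html');
          }
          throw Error('No cached copy and the network request failed');
        })
      );
    });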
So this, this bit of code– what does this look like in AppCache? Ah. It’s a lot simpler. But it’s not actually
simpler, is it? It’s just less characters. This doesn’t tell
me that for any URL, it should go to the cache
first and then try the network and then fall back to
network-failed.html. And this is just
one of the few cases that AppCache can actually
do without too much issue. But you don’t get
the full separation of caching and routing. But you’re going to
get the weird auto caching of all pages that link to it. And if you want it to serve
a different fall back for XHR compared to navigations,
AppCache can’t do that. If this, the worker code,
starts doing something I don’t understand,
what can I do? I can add a console.log. I can add a JavaScript
break point, and I can step through
it step by step and look at all the
values of variables, look at the whole
state on every step. If this does something I
don’t expect– and it will– we’re back to the manual. Read the whole thing again. When I was building Lanyard’s
offline enabled mobile site, I found out some devices
were behaving differently when they were out of signal to
when they were in flight mode. And what I was
most interested in is doing a proper
no reception test. But how could I test
genuinely no reception? This is before I
worked at Google, so I couldn’t just pop
along to Faraday’s lavatory and sit there testing phones. So I populated the
caches on each device, and I went to the local
underground station because there’s no mobile
reception down there. And I tested each
device to see what it did. And I was filming it
with a digital camera so I could review it later on. And I looked up, and I saw
that two police officers were kind of standing in front of
me, pulling faces and pointing behind me. And I look around, and I was
sitting in between two posters. One said, someone’s got
more than one mobile phone. Probably a terrorist. Doing something odd with a
camera, probably a terrorist. I was ticking all
of the terror boxes. Thankfully, the police
thought this was hilarious and decided not to
shoot me to pieces, which was very nice of them. So the ServiceWorker lets us
build these offline experiences in a flexible and testable way. But who cares? I mean, we’ve got
train journeys, plane journey, roaming abroad. But mobile coverage is just
getting better and better. Planes and trains are getting
Wi-Fi, some even for free. And mobile providers
are starting to remove roaming costs. Maybe even one day, Google
will put a Wi-Fi antenna outside the toilets. Is this problem going away, this
problem of zero connectivity? When I first started
making stuff work offline, my instinct was to
build things as normal and then catch the
users that were offline. And you can do that
in ServiceWorker because you can do what you want. I can do onfetch, event.respondWith, go to the network, and then catch that
and do something else, like pull some data
out of the cache.
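(That network-first, cache-as-rescue approach as a sketch:)

    self.addEventListener('fetch', function(event) {
      event.respondWith(
        // Always try the network first...
        fetch(event.request).catch(function() {
          // ...and only reach for the cache when the network lets us down.
          return caches.match(event.request);
        })
      );
    });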
So people who have connectivity are going to get fresh data every time. Users without
connectivity are rescued. We can give them something. This is graceful degradation. And as we found with
graceful degradation, it’s the wrong way around. You see, when I want to get
something from the internet, I have to wake up the
radio or the Wi-Fi devices. On 3G, that can take
a couple of seconds. And then we can begin
the epic journey to get data from somewhere
else in the world. And it’s a dangerous business,
going out onto the internet. We need to negotiate the router,
ISP, DNS, intermediate proxies, the destination server,
grab what we need, and then come back
through all those things. And it’s a lot to do. And if any of these
things are busy or flaky, we’re going to be slow. Or worse, we’re going to
end up with a total failure. But we don’t know it’s going to
be fast or slow until we try. We don’t know it’s
going to succeed or fail until one of those
two things happens. And this epic journey
is per request, and your pages are
made out of many. This is why I like
progressive enhancement, because you can get stuff
on the screen with less requirements from the network. A bit of the HTML and
the CSS comes down, you can start rendering. But ultimately, it’s
still network dependent. And when your phone’s like this,
it’s like a one legged dog. It thinks it can still
play fetch, but it can’t. And boy, will it try. And you have to sit there
watching it drag itself along the floor
with its one leg. And it’s heartbreaking. It’ll spend minutes
trying before it gives up. But even if your
connection’s like this, you’re still at the
mercy of everything else out on the internet,
any problems that are there. And if you’ve got
something already on the device, if you already
have an offline experience, why wait for the network to show it? You don’t have to leave the
device to get to first render. We should build our
pages offline first. And the next generation
of progressive enhancement treats the network as a
potential enhancement, an enhancement that
might not be available. So sure, connectivity’s
getting better. But it’s very rarely going to
be faster than getting stuff straight off the device. This whole thing’s as much
about improving performance for users with
connectivity as it is for giving something
to those without. The Hoodie guys
made a great post about this, which I
recommend reading. But these ideas are
pretty new to the web. But some native apps have been
doing this for quite a while. Here we can compare the loading
of the Google Plus mobile site versus the native app
over a pretty decent mobile connection. Unfortunately, we have
to give the website a head start in this because
it has one of these. I love the web. I hate this. We should not be doing this. But let’s– we’ll
gloss over that. Oh, yeah thanks. [APPLAUSE] But let’s– we’re going to load
the main site now at the same time as the app. So we go. And the native app has
content on the screen. There it goes. And the website’s still
thinking about it. Oh, and now it’s being
blocked by a font download. And eventually,
it comes through. If we ignore the font
delay, then both the app and the website actually
got fresh content around about the same time. But before that, the native
app shows cache content. And it feels so much faster. It looks like it beats the
website by two seconds. And in this example,
connectivity was good. And the mobile site
got an advantage because, by going through the slam-the-door thing, we did a lot of the DNS work up front. And we did a lot of the– yeah, we had some warm-up for the connections and the radio. As connectivity
gets worse, the app is going to look
better and better compared to the website because
it can just render straight from the device. ServiceWorker levels
this playing field. So we can start by
just serving a page shell from the
ServiceWorker, just the UI but no content there. And we can start up a spinner. And we can see if we’ve
got any cached content. And if we do, we
can show it and then start going in and fetching
some network content. If we don’t have cached
content, that’s fine. We’ll just go to
the network and see if we can get
something from there. And once that works, we
can show the new content and hide the spinner. If the network
request fails, and we didn’t show any cached
content, then that’s a shame. We have to show
an error message. But if we did show
cached content, and the network request fails,
we can actually fail silently. Because this is just a load
of a page– the user hasn’t hit the Refresh
button specifically– this is the offline experience. And it works. And this is what a
lot of native apps do. We can represent that
diagram with promises. And once again, if promises
make your eyes bleed, then keep an eye on
HTML5 Rocks next month. Hopefully, there’ll
be an explanation. But actually, we can do better
than the flow chart because we can make the network
request at the same time as we make the cache request. Because why wait
on the cache before we actually try
going to the network? It is possible in extreme cases
that the network request will beat the cache, like if the
user’s hard drive is made of old cassette tape, but their
internet connection’s amazing. It is possible. We can let the
two requests race.
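(A page-side sketch of that flow. fetchData, showContent and showErrorMessage are stand-ins for whatever the page actually uses; fetchData itself is fleshed out a little further on:)

    var networkDone = false;

    // Kick off the network request straight away...
    var networkUpdate = fetchData({ useCache: false }).then(function(data) {
      networkDone = true;
      showContent(data);
    });

    // ...and race it against the cache.
    fetchData({ useCache: true }).then(function(data) {
      // Only show cached data if the network hasn't already beaten it.
      if (!networkDone) showContent(data);
    }).catch(function() {
      // Nothing cached, so we're relying on the network.
      return networkUpdate;
    }).catch(function() {
      // Neither the cache nor the network came through.
      showErrorMessage();
    });

    // If we did show cached content, a failed background update stays silent.
    networkUpdate.catch(function() {});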
But we’ve got these two methods where we go and fetch data. And in one, we’re asking
it to go to the cache. And in the other, we’re not. How does the page
tell the ServiceWorker where to get the data from? Well, here’s our
implementation of fetchData. It’s just XHR. It’s pretty simple. If options.useCache is true and there’s no ServiceWorker there, we’re just going to reject at this point because we can’t do anything about it. But if the ServiceWorker is there, we can set a header. We’re going to set the header x-use-cache to true and then just serve up the XHR. I’m kind of pretending
that XHR uses promises, which it doesn’t at the moment. Hopefully one day, it will.
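(A sketch of fetchData, wrapping XHR in a promise by hand. The x-use-cache header is just this app's own convention, and the API URL and the check for a controlling worker are illustrative:)

    function fetchData(options) {
      return new Promise(function(resolve, reject) {
        var controlled = navigator.serviceWorker && navigator.serviceWorker.controller;

        if (options.useCache && !controlled) {
          // No ServiceWorker means no cache to ask, so give up now.
          reject(Error('No ServiceWorker in control of this page'));
          return;
        }

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/latest-articles');
        if (options.useCache) {
          // Our own convention: ask the worker to answer from cache only.
          xhr.setRequestHeader('x-use-cache', 'true');
        }
        xhr.onload = function() { resolve(JSON.parse(xhr.responseText)); };
        xhr.onerror = function() { reject(Error('Request failed')); };
        xhr.send();
      });
    }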
This header thing looks a little bit magic, and it is. But it’s not magic
in the ServiceWorker. It’s magic you can
bring as a developer. Over in the worker, we’re
listening for requests. And if it’s to the API, we
can look for that header. And if that header’s
there, we can just go straight to the cache
and only from the cache. And if it’s not there, well,
we can get rid of that header and just go to the
network because we don’t want that header hitting
the network particularly. It’s not useful. But at the same time,
we can update the cache with the response we get back.
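(A worker-side sketch of that routing. The cache name and the URL test are assumptions, and the header is "stripped" here by issuing a clean request for the same URL:)

    self.addEventListener('fetch', function(event) {
      var url = new URL(event.request.url);
      if (url.pathname.indexOf('/api/') !== 0) return;

      if (event.request.headers.get('x-use-cache')) {
        // The page asked for cached data: go to the cache and only the cache.
        event.respondWith(caches.match(url.pathname));
        return;
      }

      // Otherwise hit the network without our private header, and stash a
      // copy of whatever comes back for next time.
      event.respondWith(
        fetch(url.href).then(function(response) {
          var copy = response.clone();
          event.waitUntil(
            caches.open('api-cache').then(function(cache) {
              return cache.put(url.pathname, copy);
            })
          );
          return response;
        })
      );
    });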
The important thing about the code here is it doesn’t
make any assumptions about the network at all. It just tries stuff
and sees what happens. It’s reactive, not
predictive because predictive doesn’t work. Case in point, navigator.onLine. Aside from having the
worst camel casing since XMLHttpRequest,
it’s useless. If you have no Wi-Fi
reception and no other data, navigator.onLine is
going to be false. Right. If you have some
Wi-Fi, it will be true. Even if the router
you’re connected to is ultimately plugged into
some soil, it will be true. Predictive doesn’t
work. navigator.onLine doesn’t know anything
outside the device. It only knows about
the first hop. It doesn’t know if the
rest of the network has just soiled itself. It can’t predict
that sort of thing. Instead, just try
making requests and react to what happens. So I’ve got a couple of
minutes left of the talk. It’s quite short. But I wanted to throw
together some other API examples, some stuff that
I haven’t covered so far. If you’ve got a
ServiceWorker looking like this– you’re caching
some stuff– at some point, you’ll want to change that. You want to change the
URLs that are in there, maybe change the routing, fix
some bugs, add some new routes, whatever. Well, the browser’s going
to check for updates to your ServiceWorker
file, serviceworker.js. It’s going to check for updates
on every page navigation. You can make this not happen
with HTTP headers if you want. But by default, every
page navigation. So just change stuff. I’m going to change the URLs. I’m going to use a cache called static-v2. That’s fine. The browser will pick up the byte differences in the file and go, oh, this is a new version of the ServiceWorker, excellent. So it’s going to fire oninstall for that new ServiceWorker. But the old one is going to remain running, handling requests for
pages that are active, that are actually using it. And that’s why we create
static-v2 rather than mess around with static-v1, because
there are pages still using the old version. This new worker won’t take
over dealing with pages until the static cache
is ready, because we’ve asked it to do that. But also, it waits for all
pages using the old version to go away. And this is important because
it means that you won’t end up with a situation
where you’ve made some changes to your
database, but there are still some pages trying to use the
old model, the old pattern. And worse, they’re
saving stuff that’s not going to be picked
up by the new version. Of course, you can override
this, like most things. When you’re ready, when
your static cache is ready, you can call event.replace. And this is saying,
look, I don’t care. I am ready to go right now. Kick the other worker out. I’m going to take over
those pages. I mean, that means you’re going to
be taking over pages that were loaded using old
stuff from the cache. But if you can deal
with that, that’s fine. You can take over straight away.
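(event.replace itself didn't survive into the shipped API; the rough equivalent there is skipWaiting plus clients.claim:)

    self.addEventListener('install', function(event) {
      // Don't wait for pages using the old worker to go away.
      self.skipWaiting();
    });

    self.addEventListener('activate', function(event) {
      // And take over any pages that are already open.
      event.waitUntil(clients.claim());
    });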
Another method can smooth this transition: reloadAll. What this is going
to tell– it’s going to tell the pages that
are currently active to unload. And when they’re all unloaded,
the new worker will step in, and those pages will reload. But each page gets the
opportunity to cancel this. So if in one of the
pages, the user’s halfway through writing
a comment or an email, it can go, nope, I’m not ready. I’m going to cancel this. And I’m going to let
you know when I’m done, and we can continue upgrading. You get an event when you’re
ready to take over on activate. And this is the point
where you’ll go and delete the old caches because you
don’t need them anymore. The old worker’s gone. You can do your IDB schema
migrations or whatever. You can rearrange file
systems and so on. So this is us
controlling the cache by changing a list of files. But you can let users
control the cache themselves by deferring that to them. So you might have a page
with a Read Later button. And when it’s clicked,
you can postMessage to the ServiceWorker,
telling it, we want to cache the
article with this ID. And then over in
the ServiceWorker, you can pick that up
and go, oh, is this a post request for read later? Cool, I’m going to
go and find whatever URLs I need for that article
just using XHR or something. And when I’ve got
those, I’m going to create a new cache
for them, and I’m going to return
when that’s ready. And then I’m going
to post message back to the page to say,
yes, that thing is now available offline.
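(A sketch of that message round trip. The message format, cache name and article URLs are all made up for illustration:)

    // On the page: ask the worker to cache an article, and listen for the reply.
    navigator.serviceWorker.controller.postMessage({ action: 'read-later', articleId: 42 });

    navigator.serviceWorker.addEventListener('message', function(event) {
      if (event.data.action === 'read-later-done') {
        console.log('Article', event.data.articleId, 'is now available offline');
      }
    });

    // In the worker: fetch whatever the article needs, cache it, then reply.
    self.addEventListener('message', function(event) {
      if (event.data.action !== 'read-later') return;
      var id = event.data.articleId;
      caches.open('article-' + id).then(function(cache) {
        return cache.addAll(['/articles/' + id + '.html', '/articles/' + id + '.jpg']);
      }).then(function() {
        event.source.postMessage({ action: 'read-later-done', articleId: id });
      });
    });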
I mean, this is how you would do something like a Read Later button. It gives you full independent
control of caching and routing. So this is something that
isn’t in the spec yet. But I just wanted to show that
we were thinking about it. With your caches,
they’re static. But we want you to be able
to make a cache that updates in the background. We don’t know the API yet, but
just something like auto update is true. And this means that
the browser can update that cache
in the background, even when the browser’s
not open on the phone. It can just be
periodically checking this stuff for updates. The frequency it
would do this would depend on how often the
user uses your site, which is the iOS model, which
works pretty well. So yeah, it all works in the background as a service. It’s really nice. Developing for the web
has huge advantages. If a thing has a
screen, it’s more and more likely to
have a browser on it. Most devices now
have a browser on. When someone goes
down the native path and develops multiple
versions of the same thing for different languages, as
a platform, we must ask why and fix that bug
because it is a bug. The ServiceWorker is
one of those fixes, as are many of the
things that are going to be covered over
the next two days. ServiceWorker lets you
get content on the screen seconds faster. You can use it to ride
smoothly over bumps and jams in the network. And it provides the
groundwork for other things, like background syncing,
alarms, push notifications. We’re looking at putting all of
these things into this model. And, I don’t know, you can probably tell that I’m
[INAUDIBLE] speaking about it. I’m really excited about
this, and we hope you are too. Thank you very much. Cheers. [APPLAUSE]
