Rich clients: On mobile, and history management

Martin Sutherland responds to my rich client thoughts with some insightful caveats:

Mobile browsers on devices not designed in Cupertino

First of all: mobile. If you're using an iPhone 4(S), you might not realize that a lot of web browsers on mobile devices are abominably slow. In terms of getting the first page of your app/site up and running on a mobile browser, an HTML page rendered on the server is going to beat a client-side JS application hands down in at least 90% of cases.

I agree that this is something worth considering. But, depending on your application and your audience's technical situation, you may not consider those mobile browsers worth trying to reach, at least not right away. Having recently been a BlackBerry user, I can attest that after a while, some users just stop using the mobile browser unless they're really in a pinch. So when your web site is only as broken as 50% of the rest of the web through that particular device & browser, you can probably get away with it.

There are no doubt sites whose audiences are concentrated enough on those benighted platforms, and/or that are just big enough, that mobile non-JS support should be addressed. In those cases, perhaps one option would be to build a thin intermediary site that consumes the REST API and offers a stripped-down web interface. May I suggest redirecting to a subdomain like noniphone4mobilebrowsers.mywebapp.com?

This scenario exposes the fact that you can always have two different ways to access the data on the server, with one piggy-backing off of the other, but choosing which method is the canonical form of access has pretty big implications for how you architect everything. It used to be that you made a thin-client web app with mostly HTML pages, and then wrote the API later. But it may be time to write web apps with the API at the core, and then add non-JS support later, as an edge case.

And it's probably worth noting that to a large extent, this is a discussion about a tradeoff between two constrained resources: mobile device capability (CPU, memory, browser optimizations) and mobile network quality. So is it a cop-out to point to the rate of change here? Mobile device capability is improving at a much faster rate than mobile network quality. So maybe it makes sense to lean into that by building a web app for the idealized high-power mobile browser from the start.

URLs for humans and for others

Secondly, there's the small matter of linkability and history management. If there is any part of your application that you want people to jump to directly, either as a bookmark for their own benefit, or as a link to hand out to others, it has to have a URL. Using hash fragments for navigation may be a well-established pattern, but it's still a hack. So long as you're using hash fragments, that URL can only be run on the client. pushState() and replaceState() can fix this, but we're still a little while away from these methods being universally available (IE10).
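To make that distinction concrete, here's a minimal sketch of the difference. The navigate() helper and renderRoute() are names I'm making up for illustration:

    // Hypothetical navigate() helper: use real URLs where the browser
    // supports pushState, and fall back to hash fragments elsewhere.
    function navigate(path) {
      if (window.history && window.history.pushState) {
        // Real URL: the server can see and render /messages/42 itself.
        window.history.pushState(null, '', path);
        renderRoute(path); // app-specific rendering, not shown here
      } else {
        // Hash fallback: the server only ever sees the part before
        // the '#', so this route can only be resolved on the client.
        window.location.hash = path;
        // ...then listen for hashchange to render, where supported.
      }
    }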

It's probably worth noting that there are really two primary audiences for URLs: humans and search engines. And I think the problem is far smaller for the first than for the second.

Humans use URLs in emails, bookmarks, and the back button, but the endpoint of all that link manipulation is the same: eventually the URL gets entered into a browser with decent JS support and rendered into something you read with your eyes. In my (admittedly limited) experience using Backbone, the Router and History objects handled these use cases, including the back button, fairly well, and without a lot of engineering. I'd have to believe they're handled easily enough in the other frameworks as well. Yes, there are still-in-flux UX conventions around which actions are significant enough to create a new point in the history, but this feels like an acceptable speed bump.
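For the curious, the Backbone version of this is pretty small. A minimal sketch, with made-up route and view names:

    // A minimal Backbone router; 'messages/:id' and MessageView are
    // invented for illustration.
    var AppRouter = Backbone.Router.extend({
      routes: {
        '':             'home',
        'messages/:id': 'showMessage'
      },
      home: function() {
        // render the home view
      },
      showMessage: function(id) {
        new MessageView({ id: id }).render();
      }
    });

    new AppRouter();
    // Start watching the URL; Backbone fires routes as the history
    // changes, which is what makes the back button work.
    Backbone.history.start();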

(And yes, navigation through hash fragments is a regrettable hack and hopefully will be behind us all soon enough. But is it really a worse hack than using a Microsoft browser in the first place?)

Search engines are a harder case. I haven't been able to find up-to-date info about how much Javascript the Googlebot uses, but I'd have to believe that you'd be a little reluctant to run a full browser environment if you were trying to crawl every URL in the entire universe. I suspect there are going to be some non-trivial stumbles over this one, especially from an engineering productivity point of view. It's not super-hard to imagine having one master route on the server that figures out how to render the page server-side first, and then does some work to boot the rich client into that state right from the start. But does this mean we'll have to support two versions of each view forever, a server-side version and a client-side version? I guess that's one more argument for using server-side JS, but for a lot of people (including myself) that's going to be a non-starter for some time.
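One common shape for that bootstrapping step, sketched here with hypothetical names: the server renders the full HTML and also embeds the same data as JSON, so the rich client can pick up exactly where the server left off without refetching.

    <!-- A hypothetical server-rendered page: full HTML for whoever
         can't run JS, plus the same data embedded as JSON so the
         rich client can boot into this state without another fetch. -->
    <div id="message-42">
      ...server-rendered message HTML...
    </div>
    <script>
      var bootstrapData = { "id": 42, "subject": "Hello", "body": "..." };
      // Hydrate the client-side model from the embedded data and
      // attach the view to the markup the server already rendered.
      var message = new Message(bootstrapData);
      new MessageView({ model: message, el: '#message-42' });
    </script>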

One more cop-out might apply here: A lot of pages in web applications are access-restricted, and thus not visible to Googlebot anyway. So that reduces the pain. It certainly doesn't eliminate it, though. I've been starting to wonder if there's a need for, say, a stripped-down Node.js proxy that can 1) share JS views and templates with a rich client and 2) render the full HTML as a response to a single HTTP GET. But, uh, I'm pretty sure I'm not going to write that. For the time being, I think I'll just keep toying around with access-restricted web applications and try not to worry too much about the Googlebots.
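If anyone does feel like writing it, I'd guess the skeleton looks something like this. Everything here is hypothetical, assuming Express and a shared template function:

    // A sketch of that proxy: it takes a single GET, fetches the data
    // from the real REST API (the host and renderMessage() are made
    // up), and renders full HTML with the same template the rich
    // client uses.
    var express = require('express');
    var http = require('http');
    var renderMessage = require('./shared/templates').renderMessage;

    var app = express();

    app.get('/messages/:id', function(req, res) {
      var apiUrl = 'http://api.mywebapp.com/messages/' + req.params.id;
      http.get(apiUrl, function(apiRes) {
        var body = '';
        apiRes.on('data', function(chunk) { body += chunk; });
        apiRes.on('end', function() {
          // Same template the client uses, rendered server-side.
          res.send(renderMessage(JSON.parse(body)));
        });
      });
    });

    app.listen(3000);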

(I guess there's a third audience here that I can't speak to at all: Vision-impaired humans. Do rich client apps make accessibility prohibitively difficult? I guess this could be an issue for, say, government sites, but to be honest I can't remember the last time I heard this issue raised for a commercial web app.)

Let's see if I know the ledge, or if anybody else does for that matter

One reason I wrote yesterday's post was that I sort of wanted to be convinced that I was wrong. I've been having these rich-client thoughts for a while, but I've been holding off from making the jump because of how much work would be involved. So I wrote my thoughts down, hoping that friends and colleagues could help me focus my thinking.

But you know what? There have been a few people (such as Martin) voicing nuanced concerns, but there hasn't been anybody telling me, publicly or privately, that it's a terrible idea. And, uh, I know a lot of server-side web programmers.
