Caching large HTML elements in the Browser’s document cache

Sometimes we have to do things that we know are wrong.

For example, in my current project we have to present a list of customers so that the user can select one or more of them.  I work for a large company that has been around for many years, and in some countries the list of customers can exceed 15,000.

But I can’t decide which ones should be shown and which should not; I need to show the complete list to the user.   What is painful is that it can take a bit of time for this list to be sent down to the browser.  It would be much better if the customer list could be cached in the browser’s document cache.  That is what this blog post is about.

Note that in order for this to work, the list is presented as a simple HTML SELECT, and the selected items are then processed on the server using the Request.Form[SelectName] property.
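The reason Request.Form works here comes down to how the browser serializes a multi-select on submit. A minimal sketch of that encoding (the name "List" and the values are illustrative, not from the post):

```javascript
// Sketch: a multi-select submits one name=value pair per selected <option>.
// ASP.NET then collapses the repeated key, so Request.Form["List"] returns
// the selected values joined by commas, e.g. "1002,1017,1345".
function encodeSelection(name, selectedValues) {
    return selectedValues
        .map(function (v) {
            return encodeURIComponent(name) + '=' + encodeURIComponent(v);
        })
        .join('&');
}

console.log(encodeSelection('List', ['1002', '1017', '1345']));
// → "List=1002&List=1017&List=1345"
```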

In the ASPX page, I define my customer list like this: 

<div id="ListDiv">
    <select id="List" multiple="multiple" size="15" disabled="disabled">
        <option>Loading List ...</option>
        <option>This may take a moment the first time...</option>
    </select>
</div>

Note that the SELECT is surrounded by a DIV, so that I can replace the DIV contents with a new list.

Then I have some JavaScript that fires when the page loads, and asynchronously fetches the list:

<script language="javascript" type="text/javascript">
// This is called by the ASP.NET AJAX Framework automatically
function pageLoad() {
    var wRequest = new Sys.Net.WebRequest();
    wRequest.set_httpVerb("GET"); // GETs can be cached, POSTs can not
    wRequest.set_url('ListHandler.ashx?ListVersion=' + $get('ListVersion').value); // handler URL and field ID assumed
    wRequest.add_completed(OnFetchListCompleted);
    wRequest.invoke();
}
</script>

The pageLoad function gets invoked automatically.  It simply fires off an HTTP GET request to a handler, passing as a parameter the value of the ListVersion hidden field.  By changing the value of this field, the server-side code can control when the browser-cached contents are expired, and a new version is fetched (since changing the URL will mean there is no browser-cached version).
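The versioning scheme itself is only a few lines; a sketch (the handler filename and parameter name are assumptions based on the post):

```javascript
// Cache-busting sketch: the fetch URL embeds the server-controlled version,
// so bumping ListVersion on the server produces a new URL that has no
// browser-cached entry, forcing a fresh download of the list.
function buildListUrl(handlerUrl, listVersion) {
    return handlerUrl + '?ListVersion=' + encodeURIComponent(listVersion);
}

console.log(buildListUrl('ListHandler.ashx', '7'));
// → "ListHandler.ashx?ListVersion=7"
```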

When the response comes back, it replaces the contents of the DIV with the value that is returned from the handler:

function OnFetchListCompleted(executor, eventArgs) {
    var listDiv = $get('ListDiv');
    if (executor.get_responseAvailable()) {
        var list = executor.get_responseData();
        if (list && list.startsWith('<select')) {
            listDiv.innerHTML = list;
        }
    }
}

The handler simply generates the appropriate content:

<%@ WebHandler Language="C#" Class="ListHandler" %>

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Collections.Generic;

public class ListHandler : IHttpHandler {

    public void ProcessRequest (HttpContext context) {
        context.Response.ContentType = "text/html";

        // Directives that enforce the browser-side caching of the response
        context.Response.Cache.SetCacheability(HttpCacheability.Private);
        context.Response.Cache.SetExpires(DateTime.Now.AddMonths(1));

        HtmlTextWriter htmlTextWriter = new HtmlTextWriter(context.Response.Output);
        ListBox listBox = GetListBox();
        listBox.RenderControl(htmlTextWriter);
    }

    private static ListBox GetListBox()
    {
        ListBox listBox = new ListBox();
        listBox.ID = "List";
        listBox.SelectionMode = ListSelectionMode.Multiple;
        listBox.Rows = 15;
        // In the real application the items come from the database;
        // placeholder entries are used here.
        for (int i = 0; i < 15000; i++)
            listBox.Items.Add(new ListItem("Customer " + i));
        return listBox;
    }

    public bool IsReusable {
        get { return false; }
    }
}
Note the Response.Cache directives that enforce the browser-side caching of the response.

I have a complete example here.

You might be wondering why I’ve used the Sys.Net.WebRequest mechanism with the HttpHandler rather than simply calling a Web Service. That would indeed be simpler; however, there is no easy way to set the appropriate cache-expiration headers on a Web Service response, although it is possible if you are willing to use reflection (the PageFlakes guys use that trick to speed up their pages).

2 thoughts on “Caching large HTML elements in the Browser’s document cache”

  1. Lars Kermode

    Hi Damian,

    Thanks for this great tip. I was recently confronted with a similar issue (why on earth would you want a pull-down with thousands of entries… but hey, not my decision!). Using the browser’s cache works best in a controlled environment such as an Intranet, where you can be sure that neither the browser settings nor proxy servers interfere with the process. On the Internet, however, you cannot be sure of the results.

    I have looked into a couple of alternative methods, with varying degrees of success:

    1. Using the browser’s native persistent storage. This is a very browser-specific solution, so your mileage will vary. In IE 5 and above, this is implemented as a behavior. There are several behaviors for persistence, the most interesting one being userData, which enables persisting custom data to a local XML file. Note however that the size of the storage is limited according to the security zone (up to 10Mb per domain). Obviously, data is persistent across sessions.
    In Firefox, local persistence is implemented using the WHATWG DOM storage methods, which offer up to 5Mb of local storage, persistent across sessions or not.

    As a side-note, my application required refresh of data at each logon, so I wasn’t looking at cross-session persistence. I found IE’s userData method too slow as it is not memory-based, unlike Firefox’s DOM sessionStorage.

    2. Flash local shared object. Browsers with Flash 6 and above can take advantage of Flash cookies to persist data. Flash is nearly ubiquitous, and presents a nice and easy uniform cross-browser interface. The Dojo toolkit implements such a method and provides the associated .swf Flash file. Up to 100Kb can be stored without user intervention. Above that, the user will be prompted once to accept or deny the storage request.

    For my needs (more than 100Kb of storage), the user prompting with Flash was a show-stopper, so I settled for the first method. The user experience is not as good in IE, since the data is not stored in memory, but it is still an order of magnitude faster than downloading the whole dataset upon each refresh of the page.
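    The session-storage approach described above can be sketched in a few lines. The storage object is passed in as a parameter so the same code can be exercised outside a browser; the key name and fetch function are illustrative, not from the comment:

```javascript
// Cache the rendered list markup in DOM storage: session-scoped, memory-backed
// in Firefox, and naturally refetched at the next logon/session.
function getCachedList(storage, fetchListHtml) {
    var cached = storage.getItem('customerList');
    if (cached !== null) {
        return cached;                  // served from session storage
    }
    var html = fetchListHtml();         // e.g. the async handler request
    storage.setItem('customerList', html);
    return html;
}
```

    In a page this would be called as getCachedList(window.sessionStorage, fetchFn), so the expensive download happens at most once per session.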

    There is a detailed blog entry about the various techniques here:

    Once again, there is no ‘one size fits all’ solution. At the end of the day, it really boils down to adequacy with your application and its environment.


