What we still can’t do client-side

With the rise of all these APIs and the browser race to implement them, you’d think we can now do pretty much everything in JavaScript, and that even where we currently can’t due to browser support issues, we will be able to once the specs are implemented. Unfortunately, that’s not true. There are still things we can’t do, and at the time of this writing there is no specification to address them and no way to do them with the APIs we already have (or, if there is a way, it’s unreasonably complicated).

We can’t do templating across pages

Before rushing to tell me “no, we can”, keep reading. I mean having separate files and reusing them across different pages: a header and a footer, for example. If our project is entirely client-side, we have to repeat them manually on every page. Of course, we can always use (i)frames, but that solution is worse than the problem it solves. There should be a simple way to inject HTML from another file, like server-side includes, but client-side and without using JavaScript at all; this is a task that belongs to HTML (with JS we can always use XHR to do it, but…). The browser would then be able to cache these static parts, with significant speed improvements on subsequent page loads.
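
For illustration, here is roughly what that XHR workaround looks like today (a sketch; the URL and element id are placeholders):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/includes/header.html');
    xhr.onload = function () {
      document.getElementById('header').innerHTML = xhr.responseText;
    };
    xhr.send();

Every page has to ship and run this script, and the browser can’t treat the fragment as a cacheable, declarative part of the document the way it could with a native include.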

We can’t do localization

At least not in a sane, standard way. Client-side localization is a big PITA. There should be an API for this. That would have the added advantage that browsers could pick it up and offer a UI for it. I can’t count the number of times I’ve thought a website didn’t have an English version just because its UI was so bad I couldn’t find the language switcher. Google Chrome often detects a website’s language and offers to translate it; if such an API existed, we could offer properly translated versions of a website in a manner detectable by the browser.

Update: We have the ECMAScript Globalization API, although it looks far from ideal at the moment.
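
For example, locale-aware number and date formatting under that API looks roughly like this (a sketch, assuming the Intl namespace the proposal eventually standardized on):

    var price = new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' });
    price.format(1234.5); // e.g. "1.234,50 €"

    var date = new Intl.DateTimeFormat('el-GR', { year: 'numeric', month: 'long', day: 'numeric' });
    date.format(new Date(2012, 0, 17)); // e.g. "17 Ιανουαρίου 2012"

Note that it only covers formatting and collation, not translating the actual UI strings, which is the bigger pain point.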

We can’t do screen capture

And not just of the screen: we can’t even capture an element on the page and draw it on a canvas, unless we use huge libraries that basically try to emulate a browser, or SVG foreignObject, which has its own share of issues. We should have a Screen Capture API, or at the very least a way to draw DOM nodes on a canvas. Yes, there are privacy concerns that need to be taken care of, but this is so tremendously useful that it’s worth the time needed to research them.
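
For reference, the foreignObject route goes roughly like this (a sketch: styles have to be inlined, external resources inside the node won’t load, and in some browsers the resulting canvas ends up tainted):

    function drawNodeToCanvas(node, canvas) {
      var ctx = canvas.getContext('2d');
      var markup = new XMLSerializer().serializeToString(node);
      var svg = '<svg xmlns="http://www.w3.org/2000/svg" width="' + canvas.width +
                '" height="' + canvas.height + '">' +
                  '<foreignObject width="100%" height="100%">' +
                    '<div xmlns="http://www.w3.org/1999/xhtml">' + markup + '</div>' +
                  '</foreignObject>' +
                '</svg>';
      var img = new Image();
      img.onload = function () { ctx.drawImage(img, 0, 0); };
      img.src = 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg);
    }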

We can’t get POST parameters and HTTP headers

There’s absolutely NO way to get the POST parameters or the HTTP response headers that the current page was sent with. You can get the GET parameters through the location object, but not the POST parameters. This makes it very hard to build client-side applications that accept input from third-party websites when that input is too long to fit in the URL (as is the case with Dabblet, for example).
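
To illustrate the asymmetry, here is the kind of thing we can already do for GET parameters; nothing comparable exists for the POST body or the response headers (a rough sketch):

    function getQueryParams() {
      var params = {};
      location.search.slice(1).split('&').forEach(function (pair) {
        if (!pair) return;
        var parts = pair.split('=');
        params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
      });
      return params;
    }
    // No equivalent object exposes the POST body or the HTTP headers
    // the current document was delivered with.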

We can’t make peer to peer connections

There is absolutely no way to connect to another client running our web app (to play a game for example), without an intermediate server.

 

Is there anything else we still can’t do, with no API on the horizon to let us do it in the future? Say so in the comments!

Or, if I’m mistaken about one of the above and there is actually an active spec to address it, please point me to it!

Why would you want to do these things client-side?!

Everything that helps take load away from the server is good. The client is always one machine, whereas everything on the server may end up running thousands of times per second if the web app succeeds, making the app slow and/or costly to run. I strongly believe in lean servers. Servers should only do things that architecturally need a server (e.g. centralized data storage); everything else is the client’s job. Almost everything we use native apps for should (and eventually will) be doable in JavaScript.

  • http://twitter.com/duncanmid Duncan Midwinter

    I very much like your localization suggestion! Living as I do in a non-English-speaking country, the majority of the sites I work on are multi-language. This is something every internet user – not just developers – would greatly benefit from!

  • http://twitter.com/g_marty Guillaume Marty

    In addition, it would be great to be able to encode video natively in browsers.
    Modern APIs allow us to edit images and save them to a file, but there’s nothing that can be done client-side when it comes to video (apart from playing it, of course).
    Such an API would allow a wide variety of applications, like screencast recording, video editing…

    The Firefogg extension allows video encoding from a video or from HTML elements. There’s no such extension for Chrome that I’m aware of.

  • Anonymous

    D’uh, I’ve missed the inject-HTML feature since I started working on websites back in 1998. Can’t wait for it.

  • Christophe Eblé

    What I really miss is the ability to customize select elements (Shadow DOM), a native datagrid (grouping, filtering, sorting table rows) and a native data-binding implementation.

    • http://leaverou.me Lea Verou

      Thumbs up for a native data binding implementation!

  • http://twitter.com/iH8 Paul Verhoeven

    I’ve tweeted this to you but, to be redundant: when using an asynchronous module loader in JavaScript, you can require text! modules, which can be .html files. Templating is possible. Dojo does things this way and also uses text! for loading NLS (language bundles) for localization.

    see:
    http://livedocs.dojotoolkit.org/loader/amd#plugins
    http://livedocs.dojotoolkit.org/dojo/text
    http://livedocs.dojotoolkit.org/dojo/i18n
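
    Roughly, that pattern looks like this (a sketch; the module ids and the nls key are made up, and it assumes text! and i18n! plugins are configured as in Dojo or RequireJS):

        require(
          ['text!app/templates/header.html', 'dojo/i18n!app/nls/labels'],
          function (headerHtml, labels) {
            document.getElementById('header').innerHTML = headerHtml;
            document.title = labels.pageTitle; // bundle resolved per browser locale
          }
        );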

    • http://leaverou.me Lea Verou

      Hi Paul,

      Thanks for your comment but I think you’re missing the point of this post a bit: Doing these things natively has great benefits that no library can ever offer. Yes, some of them can be worked around with JS, but that doesn’t mean we don’t need something better.

      • http://twitter.com/iH8 Paul Verhoeven

        I got the point, I was merely pointing out that something along the lines of AMD functionality could be a great starting point. Dojo currently already bridges JavaScript and HTML with its support for declarative usage.
        Think something along the lines of . This would load a /my/element/template.html into the element and load/inject /my/element/languages/*_*.txt according to your browser language.

        • http://twitter.com/iH8 Paul Verhoeven

          errrr. Code got mangled by Disqus, sorry!

    • http://twitter.com/KimmoKuisma Kimmo Kuisma

      I was going to bring up the issue of AMD too but it seems you already brought it up. Personally, when a site starts turning into an app and more and more responsibilities shift client-side, I find solutions such as Backbone combined with RequireJS very useful not just for templating (which is only a small benefit), but for overall modularity of the app too. Code is nicely organized and loosely coupled provided you take some time designing the architecture.

      Templating of some sort such as Lea brought up would be nice natively, but I wouldn’t like seeing too much fall onto the shoulders of HTML. Structure, presentation (CSS) and behaviour all need to be responsibilities of separate layers in my opinion.

  • http://blog.yoav.ws Yoav Weiss

    HTML includes is by far (IMO) the biggest one on the list.
    HTML includes would also provide great caching benefits for HTML content. On sites with frequent content changes (news, blogs & almost everything else), only the “moving” parts would have to be non-cacheable, while all the menus and layout code could be long-term cacheable. That’d be a big win for everyone.
    I just hope it can be implemented without the performance hit that iframes have. I think we could add some limitations that would ease the eventual implementation (no JS sandboxing and a same-domain restriction are a couple of things that come to mind).

    • http://leaverou.me Lea Verou

      Completely agree with everything you wrote!

    • Jasmine Hegman

      Regarding the HTML includes, might a workable compromise be to store some of the header/footer data in localStorage? This involves js, but saves the XHR on a warm cache. Each page would still have to have boilerplate to read or fetch them, but that should be easier to maintain than the individual headers/footers. Not ideal and I’m sure you’ve thought of it, just wanted to throw it out there. :)

      • http://blog.yoav.ws Yoav Weiss

        You can store HTML data in localStorage. It will have some perf penalty over (the theoretical) HTML include, because its subresources (images, etc.) will only start to load after your JS has run. It’s also more complicated to write than a simple include.
        I’m not saying that the pattern of “cacheable main HTML & non-cacheable content” cannot be implemented today (using AJAX/localStorage), but it’s harder than it should be, and it will be slower than it could be.
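
        A sketch of that pattern (the key name and URL are made up):

            var key = 'tpl:header', target = document.getElementById('header');
            var cached = localStorage.getItem(key);
            if (cached !== null) {
              target.innerHTML = cached; // warm cache: no request at all
            } else {
              var xhr = new XMLHttpRequest();
              xhr.open('GET', '/includes/header.html');
              xhr.onload = function () {
                localStorage.setItem(key, xhr.responseText);
                target.innerHTML = xhr.responseText;
              };
              xhr.send();
            }
            // Subresources (images etc.) inside the fragment still only start
            // loading after this script has run: the perf penalty mentioned above.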

  • http://twitter.com/stopsatgreen Peter Gasston

    HTML includes/templates may be desirable, but how useful are they really? How often do you develop without using a local server? Certainly when we publish the sites they sit on some kind of dynamic server, so there’s no need for them at that point.

    • Foo

      Don’t underestimate the caching benefits. If everything but the current article and a little markup comes from cache on a blog entry page, that clearly adds up to a lot of savings.

    • http://leaverou.me Lea Verou

      Purely for the sake of doing them on the client. I’m a strong proponent of fat clients – lean servers. With clients you only have to deal with one machine, even when you’re as big as Google. With servers, you have to run the code for an ever-increasing number of clients, which ends up in scalability nightmares. Fat clients = cheaper & easier. So, when I have an option, I prefer a client-side approach. For more on this, stay tuned for my upcoming .net magazine interview, as I’ll discuss it there too.

      The caching benefit that Foo mentioned is very important too.

  • http://twitter.com/iH8 Paul Verhoeven

    I would love to see JavaScript AMD supported natively in browsers. It would solve your template/language issues, as explained below, but it would also make building responsive/adaptive designs a whole lot easier: lazy loading components on the fly, when needed. Not to forget, it would cut page load times by more than half. Being able to do stuff like navigator.packages.load('/foo/bar'); or something like that. I wanted to say something about caching too, but a refresh shows Yoav beat me to the punch. Caching is easy with AMD. A nice resource:
    http://unscriptable.com/code/Using-AMD-loaders/

  • http://twitter.com/mrgnrdrck Morgan Roderick

    Being able to style columns with CSS (beyond the current four properties) would be really useful… or at least add text-align to that list of properties.

  • Thomas

    What about cross-site Ajax, like using the Twitter API or similar?

    • http://leaverou.me Lea Verou

      That is already possible by using CORS (http://www.w3.org/TR/cors/), and many APIs allow it.
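
      A minimal example (the URL is a placeholder; the server must send an Access-Control-Allow-Origin header for the response to be readable):

          var xhr = new XMLHttpRequest();
          xhr.open('GET', 'https://api.example.com/public/feed.json');
          xhr.onload = function () {
            console.log(JSON.parse(xhr.responseText));
          };
          xhr.send();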

  • Dale Anderson

    There are some libraries out there that allow loading of HTML, JavaScript and CSS; check out Jim Cowart’s infuser library. Used in conjunction with one of the current web frameworks like Knockout, Backbone or JavaScriptMVC, you can do some pretty cool things.

    I’m working on a plugin for Knockout that facilitates truly composite MVVM JavaScript applications by dynamically loading script, markup and style and allowing components to communicate using a publish/subscribe mechanism. Hoping it will some day see a public release. Knockout is so cool!

    Globalisation… Oh yes! Totally agree, even dealing with date formats is a pain!

  • Trygve Lie

    Regarding “templating” or stitching HTML snippets together, I would like to see browsers implement something like Edge Side Includes (http://www.w3.org/TR/esi-lang) (ESI) directly in the browser. We might want a different syntax than the current spec though…

    In our server architecture we rely heavily on ESI. Our back-end servers only produce a wireframe, which then links to HTML snippets. Then we let our HTTP accelerator, Varnish (https://www.varnish-cache.org/), do the job of stitching the pages together from different back-end servers. This takes a lot of load off the back-end servers, since they only need to render small parts of a page when something is updated, rather than whole pages. Akamai uses the same approach in their CDN as well.

    We have played with the idea of writing a small JS library which can parse HTML documents for ESI tags and fetch and inject ESI fragments (HTML snippets) directly in the browser, instead of letting this happen at the accelerator level. I might get around to writing it one day…

    It would be nice to see such a feature directly in the browser.

  • http://kmbweb.de/ KMB

    Browsers could use the meta tags:

    for localization. Adding some flags to the address bar, where you can choose your preferred language, et voilà: localized. Perhaps some even already do so.

    • http://leaverou.me Lea Verou

      That means you have to rewrite the entire HTML. And what about JS error messages and the like?

  • GeopaL

    Can’t you see that the PC is turning into the new TV? Globalization wants cheap, thin clients for all the sheep, and the flow of information must go downwards, with central administration from above. All these topics would be technically possible to implement, but not for THIS Internet we use. The end user cannot have power. Your proposals are something like “artificial intelligence”…

    • http://leaverou.me Lea Verou

      LOL this kind of scaremongering could only have come from a Greek dude. Yes, we’re heading to our doom with this “globalization”. DOOM I TELL YA, DOOOOOOOM http://www.youtube.com/watch?v=DMSHvgaUWc8

      • Geopal

        Dear Lea, your reply is totally correct. I’m really pessimistic sometimes, especially when I’m trying to see the big picture. See THIS: http://vimeo.com/31100268.
        I totally love your work, and Dabblet is awesome.
        This post reminds me of A long digression into how standards are made and the tag.
        The Internet relies on the server-client model. To achieve most of what you propose, we would have to design a different architecture.

  • Charles Ginzel
    • http://leaverou.me Lea Verou

      That needs a server too. Did you even care to read the front page before sharing a link like that?

      • Charles Ginzel

        My bad, Lea. I had checked this site out some months ago when John Resig tweeted it saying “with no need to set up a server”. That didn’t make sense to me at the time, but I didn’t investigate it further. Next time I’ll RTFM before repeating…

  • Anonymous

    Regarding screen capture, you say “at the very least, a way to draw DOM nodes on canvas.” No way yet, but there is a working draft at the W3C, part of the CSS Image Values and Replaced Content Module Level 3, the element() method: http://www.w3.org/TR/2011/WD-css3-images-20111206/#element-reference

    Although in the status of that document it lists element() as one of the features at risk…

    • http://leaverou.me Lea Verou

      Yes, but there’s still no way to get that info and do useful stuff with it.

      • Anonymous

        Not sure I understand. On the one hand, you can’t do useful stuff with any of the things I’ve suggested, because they are just drafts and there are no implementations yet (that I’m aware of).

        On the other hand, while I recognize this only addresses your “at the very least” and not the whole issue of screen capture, it seems like it does address it. What is the useful stuff you want to do? I believe you that it doesn’t solve your need, just want to know what that need is in case there is a solution in progress elsewhere.

        • http://leaverou.me Lea Verou

          Firefox has a prefixed implementation of element(), and a pretty good one actually. But you can’t programmatically get the image, modify it and offer it to the user for download.
          For example, to make thumbnails and store them or offer them for download.

        • Anonymous

          Oh, I think I see the problem now. You can use an element as a background using element(), but you can’t use element() to draw the nodes into a canvas. Yeah, that’s a problem.
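
          For reference, the prefixed Firefox version goes along these lines (element ids are placeholders):

              // Use a live DOM node as a CSS background image…
              document.getElementById('preview').style.background = '-moz-element(#source)';
              // …but there is no way to read those pixels back (e.g. into a canvas)
              // to make a thumbnail or offer the image for download.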

  • Anonymous

    Regarding video encoding, that would presumably be part of  the MediaStream Processing API, (http://www.w3.org/TR/2011/WD-streamproc-20111215/), but the spec is somewhat vague about it, aside from mentioning the presence of encoders and some limitations to the spec imposed by them. But since it allows the recording and uploading of video files, presumably encoding would be a part of that.

  • Anonymous

    Regarding peer-to-peer connections, this is one of my favourites (I’d love to see it). I’m not crazy about the current spec (I think adding XMPP/Jabber support to the browser would be a much better 80/20 solution), but peer-to-peer is part of the WebRTC 1.0: Real-time Communication Between Browsers draft: http://dev.w3.org/2011/webrtc/editor/webrtc.html#peer-to-peer-connections
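
    A rough sketch of the peer connection flow, using the RTCPeerConnection shape the spec converged on (sendToOtherPeer() is hypothetical; you still need some signalling channel of your own to exchange offers and ICE candidates):

        var pc = new RTCPeerConnection();
        var channel = pc.createDataChannel('game');
        channel.onopen = function () { channel.send('hello, peer'); };
        pc.onicecandidate = function (e) {
          if (e.candidate) sendToOtherPeer({ candidate: e.candidate });
        };
        pc.createOffer().then(function (offer) {
          return pc.setLocalDescription(offer);
        }).then(function () {
          sendToOtherPeer({ sdp: pc.localDescription });
        });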

    • http://leaverou.me Lea Verou

      Thanks! I agree that that API is unwieldy, but at least it makes it possible…

      • Krystian Jarmicki

        I think the Opera guys tried to make their own version of P2P in the browser with Opera Unite, but it failed to draw widespread attention. Cool experiment though :)

  • http://floatboth.com MyFreeWeb

    Localization — you actually mean internationalization here: navigator.language, html[lang] for telling the browser, any JS library like R.js (just used it today).

    Video encoding — what I can do in one Turing-complete language, I can do in another Turing-complete language. Not at the same speed though :-) Has nobody compiled ffmpeg with Emscripten yet?

    What we really can’t do without plugins (Java and ActiveX) is ping and traceroute. Damn. Even Flash and Silverlight can’t be used for this because of security restrictions, and Java applets need to be signed.

  • Matt MacLaurin

    Loading template files (in the Mustache sense) via XHR and then using client-side JS to expand them works very nicely and I use it all the time. It’s very little code. I’m not bothered by the JS requirement because I almost never want to insert HTML without some kind of dynamic data. I use this for high-performance large-set data navigation and am very happy with the perf and with how easy it makes it for me to “virtualize” views (i.e. only create nodes for a subset of the data).

  • Paul Sayre

    You can do peer to peer using Flash as a transport.

    • http://leaverou.me Lea Verou

      Yup, but that’s not the point. Doable only in Flash means not doable with open web technologies and that needs fixing.

      • Anonymous

        It seems to me that you’re advocating doing in XHTML+ECMAScript+CSS what is currently done with Flash, Silverlight, or some other rich client runtime.  I would argue the opposite direction from what this article argues: rather than add more and more capabilities to browsers, let’s strip them down to a light, minimal set of necessary capabilities and intelligently divide the workload between rich web pages and rich client apps: both have their place in the world.

  • http://garron.us/ Lucas

    One issue that’s been bothering me is the inability to generate PDFs. JavaScript is capable of many things now (including a lot of image and SVG support), but it is basically impossible to
    1) create a web page that is guaranteed to print a certain way, or
    2) generate a PDF file of average complexity using client-only JavaScript.

    I’m currently working on a fully client-side web app ( http://www.cubing.net/mark2/ ), which is intended to create nicely formatted pages with text and images on them. (I need to generate everything client-side because 1) it should be able to run offline, and 2) the generated content should stay on the client side for security.) When the page is exported to PDF, I’m okay with spreading certain content over two pages, but I don’t know ahead of time what would fit on a page. The printed size of a set of DOM elements depends on random browser quirks, zoom level, and various settings. It’s impossible to predict where a page break will occur, and also difficult to make sure it doesn’t do something silly to the layout (e.g. overlaying images). In my case, I just have to make some educated guesses, throw in some “page-break: avoid” statements that don’t seem to work (do I also need to say “pretty please?” to the CSS engine), and hope for the best. I haven’t seen an example that fares significantly better, and I can’t imagine one right now that wouldn’t use hacks (make every page an SVG element, maybe?) or a big library ported using Emscripten.
    It can be argued that something like PDF generation should not have a client-side API. But it’s something we can do very easily on any server or in a desktop program (and often want to), yet we can’t do it client-side.

  • Ben

    I’m a bit lost… you say you want this on the client side? … One question: why? … I mean, this sentence: “There should be a simple way to inject HTML from another file, like server-side includes, but client-side” says it all for me. This should be left on the server side. I am all for pushing the boundaries, but why are we adding a huge amount of things to a client-side developer’s role when it is unnecessary? … I am 20 years old and basically just starting with the scripting side of front-end development and, to be totally honest, I’m really scared at the amount of information I have to learn to be considered a professional front-end developer. I’m not slating the article, far from it actually, because I agree totally with specific things such as localisation etc., I just feel we are bloating the client-side section of the industry.

    • http://leaverou.me Lea Verou

      The client-side is not assistive any more. JavaScript is becoming a fully fledged programming language and the target is to eventually not need the server at all unless we need a centralized whatever.
      If you see JavaScript as a toy, you will be overwhelmed and you will become a bad front-end developer. Respect the technology as an equal to what you already know, and you’ll be fine.
      Good luck! :)

      • not-the-end-of-the-world

        The ‘fully fledged programming language’ is only as good as the interpreter and sandbox that it runs in. Most of these concepts (peer to peer network connections, capture POST parameters, screen capture) scream potential security vulnerability. As for templating and localisation – client side caching has this covered.

        A lot of these ‘issues’ sound like the rant of a front end developer who yearns to be a programmer.

        • http://leaverou.me Lea Verou

          I assume you write in machine language then, Mr. Big Programmer? 

          I guess it must’ve hurt your arrogance when we petty front-end developers got OS-level threading. Stay tuned for the rest. ;)

        • Ben Surgison

          I second that Lea :-)

  • Anonymous

    I sometimes wish we could have a simple src attribute on block content elements, analogous to the iframe, but to load DOM fragments into the document, including inline scripts. Add a defer and an async attribute and I’m a happy coder once more. Finally I’d like some callbacks to manipulate the content before displaying it. Voilà, basic templating! The usual cross-site restrictions would apply.

    I’m not a fan of sprites, but I’m not a fan of creating a request for each little resource our sites need either. Hey, and I’m also not a fan of concatenating all scripts and styles into huge minified and therefore obfuscated files. But I would really like to see something like this: http://people.mozilla.com/~jlebar/respkg/ HTML Resource Packages: zip files, basically, that contain all the essential stuff in compressed form.

    But we’re all DOOMED anyway! :)

    • http://leaverou.me Lea Verou

      Love this idea! 

      • Anonymous

        Cool. Now there’s two of us! Maybe we should tell Hixie… :)

    • Anne Onymous

      Well, putting everything into an archive and async loading + HTTP pipelining should basically have the same effect, no?

      • Anonymous

        I don’t understand what you’re trying to say. Which effect are you referring to? (Also HTTP pipelining has been here for many years — and it’s disabled by default in all browsers except Opera and most proxies don’t support it at all.)

        • Anne Onymous

          I mean the load time should be about the same if you pipeline and gzip everything instead of doing some resource bundle stuff.

          And yes, that pipelining is disabled is a pity.

    • Marty Vance

      XHTML2 was going to make src and href universal attributes.  Too bad all those cool ideas will never see the light of day.

  • Jörn Zaefferer

    Playback of multiple audio files in parallel on mobile, specifically iOS.

    • http://leaverou.me Lea Verou

      Excuse my ignorance, can’t you do that with multiple <audio> elements and JS to .play() one when another is triggered?
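
      Something like this, I mean (file names are placeholders; desktop browsers will mix the two streams, though iOS reportedly allows only one at a time and requires a user gesture to start playback):

          var drums = new Audio('drums.mp3');
          var melody = new Audio('melody.mp3');
          drums.play();
          melody.play(); // plays in parallel with the first one, where supported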

  • Matthew Graybosch

    Lea, it’s not ideal, and probably not what you had in mind, but jQuery does include a $('className').load('$URL') method. I used it here: https://github.com/MatthewGraybosch/nerdmanifesto/blob/master/includes/includes.js
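
    For instance, something like this (the selector and paths are placeholders):

        jQuery(function ($) {
          $('#site-header').load('/includes/header.html');
          $('#site-footer').load('/includes/footer.html');
        });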

  • http://twitter.com/RenaissDesign Chris Cox

    Can != should. Some things belong server-side.

    • Max Holmes

      Think again. Actually, it would be extremely handy to have client-side includes – for instance for offline applications on small laptops with no net connection, where there are over 200 pages, much repetition, and large elements to be repeated whose functionality doesn’t carry over with JavaScript includes.

  • Mickey Mouse

    Flash is client-side and it can solve a few of the problems you mentioned.
    Other than that, it’s too risky to add some of these to the client side.

  • http://about.me/josephcarney joseph_carney

    Client Side.  Can’t do.  Browser. API.  Specifications. 

    Yes, you can, it all depends upon how you code.  Lots of sites and applications already capitalise on these features. 

    With any language, it depends upon how you use it. You can already cache code and data; what you cache and how you code the cache should enable you to rinse and repeat. I agree that there is no “easy way”, no “HTML TAG” to do this client-side, but with cookies and local storage with cached JavaScript/CSS, you can do what you need to. To make a distinction between an HTML and a JavaScript load is false. They are co-dependent. They are both interpreted. Isn’t the difference between an HTML load and a JavaScript load purely semantic?

    Location is still in its infancy. A nightmare. It is getting better, more so on mobile devices, but again, what we have to work with is enabling a lot of people to build the next generation of apps.

    For example: I am English and live in Spain.  Try to localise me!!!  How? The language I speak? Which one? The location I am in? The app or web page I am using?  The app or web page I am reading?  The information I want to obtain? All variables, all interchangeable depending on what I am looking for. 

    I use a mixture of both languages, locations, information and programs/services.  What we have available enables me to utilise all, trying to second guess what I want is impossible.  I may want the menu in Spanish, the location in Spanish, the language in English and the app/service in English.  Or, I may want the menu in English, the location in English but the language in Spanish and the app in Spanish.  It all depends upon where I am and what service is best… It is tricky I agree, but I should be able to have everything.  

    Localisation is almost a walled garden in your definition, it will fail when the choices and information are best served as the provider sees fit.

    A screen capture is too general a statement. What you are actually talking about are snapshots of interpreted code. The overhead for this more than likely outweighs the benefit, hence the developers didn’t build it in the first place. What good does it do the user, other than adding more overhead to the performance of the device/software?

    Peer to Peer is useless without a centralised server. It simply doesn’t work. The centralisation of the server can be as small (or near) or as big (many users) as you need it to be, but you still need a server.

    I can see where you are coming from with your arguments. To say that they should all be client-side is, I think, wrong. Some things will always remain server-side; it is a two-way conversation. Having more control often leads to fewer options.

    Lea, I tried abysmally to make these points in 140 chars or less, then I got caught up in a telephone conversation, hence the delay in my reply.  But, ‘you little minx’, server side is good and not everything needs to be in the hands of the client.  We’ll never get away from the dumb terminals, nor should we ;)

  • Jacob J

    The idea of embedding fragments of remote documents (regular document markup, plain text, JavaScript, anything really) was in XHTML2, via a universal src attribute (and possibly something else, but I’d have to re-read the spec to be sure). The idea of improved user interaction handling and event capturing for complex UI application development existed in XHTML2 as XForms and XML Events. And localization? Yeah. XHTML2 had that too.

    I’m not going to say “I told you so” (because I didn’t), but I am going to say, “I warned you about the stairs Bro! I told you dog!”. :P
    http://www.w3.org/TR/xhtml2/
    http://www.w3.org/TR/xhtml2/mod-embedding.html
    http://www.w3.org/TR/xhtml2/mod-xforms.html#s_xformsmodule
    http://www.w3.org/TR/2007/WD-xml-events-20070216/
    http://www.w3.org/TR/xhtml2/mod-i18n.html#s_i18nmodule
    http://www.w3.org/TR/2007/REC-its-20070403/

    • billy bob

      Yo Dog – the first 2 links are dead!

  • Jacob J

    A pox upon my spelling!

  • Anne Onymous

    >I mean having separate files and reusing them across different pages.
    I think that’s possible with XSLT. I’ve seen a demo that shims the XML XInclude specification via XSLT. So yeah, it’s theoretically possible. Not that anyone actually uses XSLT though.

    • http://sunchaser.info/ SunChaser

      XSLT might be a perfect solution if search engines could index such pages. Only Yahoo had such a feature, before they disabled their search engine.

  • http://twitter.com/subchild Alex Kolundzija

    We can’t sync audio and video/animations at a keyframe level with reliable accuracy.
    We have no access to the local filesystem, not even to detect the size of local files (e.g. to help track upload progress).

  • Phaux

    We need UDP connections. What’s the point in webcam/mic support if we can’t send data through UDP?

  • http://www.facebook.com/profile.php?id=698125682 Björn Kaiser

    We should not forget what HTML is. It’s just there to describe a document, not to handle data, manipulate data or connect 2+ people to interact in some way, nor to encode videos.

  • Paulo Eduardo Neves

    What about creating client side templates with libraries like dust.js? http://akdubya.github.com/dustjs/

    You can simulate a lot of things you do with server side templates.

  • http://twitter.com/sainsb Ben Sainsbury

    Please correct me if I’m wrong, but we can’t do file I/O client side.

  • Pbk

    We can’t use chroma key compositing with the video tag – this means we can’t use videos shot on green screen. Flash has it, but that doesn’t count.

    It can be abused, true, but it also lets us make cool effects.

    • http://twitter.com/dharmaone jouni helminen

      What you really want is HTML5 video that supports an alpha channel; then you chroma-key it in your video editor and render out a video with a transparent background. Can’t be done at the moment, since WebM, H.264 and OGG don’t support an alpha channel.

      Realtime green-screen compositing is less useful – although it can be done with canvas.
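
      Roughly, the canvas approach looks like this (a sketch; the colour thresholds are made up and would need tuning):

          function drawKeyedFrame(video, ctx, width, height) {
            ctx.drawImage(video, 0, 0, width, height);
            var frame = ctx.getImageData(0, 0, width, height);
            var d = frame.data;
            for (var i = 0; i < d.length; i += 4) {
              // zero the alpha of "green enough" pixels
              if (d[i + 1] > 120 && d[i] < 100 && d[i + 2] < 100) d[i + 3] = 0;
            }
            ctx.putImageData(frame, 0, 0);
            requestAnimationFrame(function () { drawKeyedFrame(video, ctx, width, height); });
          }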

  • Anonymous

    “We can’t get POST parameters and HTTP headers”
    Correct me if I’m wrong. But I think this can be natively done in Chrome in the Network tab of the Developer Tools.

    • http://leaverou.me Lea Verou

      I mean through the page.

  • Achilleas Margaritis

    What we need is an “internet OS”, i.e. a standard platform that allows code and data modules to be downloaded lazily and updated automatically. The code modules would contain bytecode or native code so that they can be interpreted or executed, giving easy access to the host’s capabilities, without having to wait for the W3C for any progress. The whole HTTP/HTML/JavaScript/CSS idea is just a god-awful specification for such a platform that is more of a nuisance than a solution to modern computing needs.

  • http://twitter.com/mcandre Andrew Pennebaker

    Templating, POST, and HTTP headers can be done in Node.js, a server-side JavaScript web framework.

    • http://leaverou.me Lea Verou

      Which part of “client-side” did you not understand?

  • Anonymous

    1. Native KeyChain support for all platforms (some kind of e-wallet API)
    2. Device API for access to webcam, microphone, scanner and other
    3. Harmony language, garbage collector management, ‘compiled’ JS
    4. Full native support for MathML (for DOM) and LaTeX (reliable)
    5. Native Client support for all browsers (PDF, Sound, Video editing)
    6. Mouse cursor customization. Mouse lock for FPS-like games and apps.
    7. Implemented client-side Node.js API for rich applications
    8. UI-API for creating real-life apps with own dock icon, menu, etc.
    9. SourceMap support for languages like SASS and CoffeeScript
    10. Sandboxing and resource-limiting techniques for running remote code

    11. Make coffee each morning and wash the dishes

  • http://twitter.com/ptcc ptcc

    About the screenshots… I did not dig into details, but… what about this: http://html2canvas.hertzen.com/ ?

    • http://leaverou.me Lea Verou

      I wrote “unless we use huge libraries that basically try to emulate a browser” in the post.

  • Ali Farhadi

    Excuse my ignorance, but why aren’t iframes good for client-side includes? I thought the HTML5 seamless attribute was added for this very purpose.

    • http://leaverou.me Lea Verou

      iframes are horrible performance-wise, as it’s basically a new document, with its own global object and everything. The seamless attribute is purely stylistic.

  • monkeyboy

    Wow… I completely disagree with the ‘fat client’ philosophy. You’re making yourself a hostage to fortune, as you can’t always predict or control the capabilities of the client running all that DOM-battering, CPU-intensive JavaScript. The server is controllable and scalable. Not sure the argument about the fat client being cheaper really cuts it; that’s analogous to building a house from straw because it’s cheaper than bricks.
    Server-side templating is way faster for the initial page render, as you don’t have several round trips to complete before the content is visible (HTML shell -> JS bundle -> JSON data). I have experience of both approaches and am a front-end developer, so this isn’t a territorial thing; I’m getting that flavour from some of the comments though.
    Server-side template the initial page render, then use JavaScript / JSON / HTML snippets for async page updates. It’s faster, more robust, more maintainable, better for SEO, more accessible. Nuff said.

  • Er. Varun Ambrose Thomas

    Hello Ma’am, I’m working on a security mechanism and am stuck on two points:

    1. Can you please elaborate, with references, on what you meant by “We can’t do screen capture”?

    2. Is it possible to save a custom cursor image supplied by the server on an HTML page through a script that does not involve the server?

  • Robert Alan Yeatts Jr.

    You made a good point when you said we should try to do as much client-side as we can to keep as much load off of the servers as possible.

  • Max Holmes

    Finally sorted out the client-side includes issue with John Hunter’s jQuery-includes. Works on Firefox and Chrome – maybe others – on Mac (but with Chrome you have to run Chrome from the terminal with --allow-file-access-from-files).

    I’ve read two ways of doing this:

    open -a "Google Chrome" --args --allow-file-access-from-files

    and

    open /Applications/Google Chrome.app --args --allow-file-access-from-files

    The second worked for me and not the first (Snow Leopard). It may be different on other machines.

    For Windows, I’ve read that it’s the following, which I guess could be incorporated into the relevant opening string in the properties box:

    chrome.exe --allow-file-access-from-files

  • nobody

    Isn’t the current trend in the opposite direction? Look at netbooks, more or less notebooks only used to browse the internet… and some of your problems can be solved with AJAX.

  • http://strugee.net/ Alex Jordan

    Note that Mozilla now has an experimental API for listening on TCP sockets coming soon, and an experimental API for an HTTP server that IIRC is either in progress or completed. Both of these work, or will work, in Firefox OS, and they’re on the standards track.
    I mention this to point out that in the computer-to-computer example, you could have a server negotiate a connection but then drop to direct communication this way. Maybe. It might depend on firewalls, etc.