Blog


Simpler CSS typing animation, with the ch unit

2 min read

A while ago, I posted about how to use steps() as an easing function to create a typing animation that degrades gracefully.

Today I decided to simplify it a bit and make it more flexible, at the cost of browser support. The new version fully works in Firefox 1+ and IE10, since Opera and WebKit don't support the ch unit, and even though IE9 supports it, it doesn't support CSS animations. To put it simply, one ch unit is equivalent to the width of the zero (0) character of the font. So, in monospace fonts, it's equivalent to the width of every character, since every character has the same width.

In the new version, we don't need an obscuring span, so there's no extra HTML and it works with non-solid backgrounds too. Also, even though the number of characters still needs to be hard-coded, it doesn't need to be hard-coded in the animation any more, so it could easily be set through script without creating or modifying stylesheets. Note how each animation only has one keyframe, taking advantage of the fact that when the from (0%) and to (100%) keyframes are missing, the browser generates them from the fallback styles. I use this a lot when coding animations, as I hate duplication.

In browsers that support CSS animations, but not the ch unit (such as WebKit-based browsers), the animation will still occur, since we included a fallback in ems, but it won't be 100% perfect. I think that's a pretty good fallback, but if it bothers you, just declare a fallback of auto (or don't declare one at all, and it will naturally fall back to auto). In browsers that don't support CSS animations at all (such as Opera), the caret will be a solid black line that doesn't blink. I thought that's better than not showing it at all, but if you disagree, it's very easy to hide it in those browsers completely: just swap the border-color between the keyframe and the h1 rule (hint: when a border-color is not declared, it's currentColor).

Edit: It appears that Firefox's support for the ch unit is a bit buggy, so the following example won't work with the Monaco font, for example. This is not the correct behavior.

Enjoy:
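
Here is a minimal sketch of the technique described above, assuming a 15-character heading; the selector, font and timings are illustrative:

@keyframes typing {
	from { width: 0; }
}

@keyframes caret {
	50% { border-color: transparent; }
}

h1 {
	font-family: Consolas, monospace;
	/* fallback for browsers without ch support: roughly .5em per character */
	width: 7.5em;
	width: 15ch; /* number of characters in the heading */
	overflow: hidden;
	white-space: nowrap;
	border-right: .05em solid; /* the caret; defaults to currentColor */
	/* one step per character; each animation has a single keyframe,
	   the missing ones are generated from the styles above */
	animation: typing 6s steps(15),
	           caret 1s steps(1) infinite;
}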


Exactly how much CSS3 does your browser support?

2 min read

This project started as an attempt to improve dabblet and to generate data for the book chapter I'm writing for Smashing Book #3. I wanted to create a very simple/basic test suite for CSS3 stuff, so that you could hover over, say, a CSS3 property and get a nice browser support popup. While I didn't achieve that (turns out BrowserScope doesn't do that kind of thing), I still think it's interesting as a spin-off project, especially since the results will probably surprise you.

How it works

css3test (very superficially) tests pretty much everything in the specs mentioned in the sidebar (not just the popular, widely implemented stuff). You can click on every feature to expand it and see the exact testcases run and whether they passed. It only checks what syntax the browser recognizes, which doesn't necessarily mean the feature will work correctly when used. WebKit is especially notorious for cheating in tests like this, recognizing stuff it doesn't understand, like the values "round" and "space" for background-repeat, but the cheating isn't big enough to seriously compromise the test.
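
The general idea behind this kind of syntax testing (a rough sketch, not css3test's actual code) is to assign a value to a dummy element's style and check whether the browser kept it:

function supportsValue(property, value) {
	// Probe a dummy element's style object
	var dummy = document.createElement('div');
	dummy.style[property] = value;
	// Recognized syntax is retained; unknown syntax is dropped,
	// leaving the property empty
	return dummy.style[property] !== '';
}

supportsValue('backgroundRepeat', 'round'); // true in browsers that accept the syntax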

Whether a feature is supported with a prefix or not doesn't matter for the result. If it's supported without a prefix, it will test that one. If it's supported only with a prefix, it will test the prefixed one. For properties especially, if an unprefixed one is supported, it will be used in all the tests.

Only stuff that's in a W3C specification is tested. So, please don't ask or send pull requests for proprietary things like -webkit-gradient() or -webkit-background-clip: text; or -webkit-box-reflect and so on.

Every feature contributes equally to the end score, as well as to the score of the individual spec, regardless of the number of tests it has.

Crazy shit

Chrome may display slightly different scores (1% difference) across page loads. It seems that, for some reason, it fails the tests for border-image completely on some page loads, which doesn't make any sense. If anyone wants to investigate, I'd be grateful. Edit: Fixed (someone found and submitted an even crazier workaround).

Browserscope

This is the first project of mine in which I've used Browserscope. This means that your results will be sent over to its servers and aggregated. When I have enough data, I'm gonna build a nice table for everyone to see :) In the meantime, check the results page.

It doesn't work on my browser, U SUCK!

The test won't work on dinosaur browsers like IE8, but who cares about measuring their CSS3 support anyway? "For a laugh" isn't a good enough answer to warrant the time needed.

If you find a bug, please remember you didn't pay a dime for this before nagging. Politely report it on Github or, even better, fix it and send a pull request.

Why did you build it?

To motivate browsers to support the less hyped stuff, because I'm tired of seeing the same things being evangelized over and over. There's much more to CSS3.

Current results

At the time of this writing, these are the results for the major modern browsers:

  • Chrome Canary, WebKit nightlies, Firefox Nightly: 64%
  • Chrome, IE10PP4: 63%
  • Firefox 10: 61%
  • Safari 5.1, iOS5 Safari: 60%
  • Opera 11.60: 56%
  • Firefox 9: 58%
  • Firefox 6-8: 57%
  • Firefox 5, Opera 11.1 - 11.5: 55%
  • Safari 5.0: 54%
  • Firefox 4: 49%
  • Safari 4: 47%
  • Opera 10: 45%
  • Firefox 3.6: 44%
  • IE9: 39%

Enjoy! css3test.com | Fork css3test on Github | Browserscope results


Why tabs are clearly superior

4 min read

If you follow me on twitter or have heard one of my talks, you'll probably know I despise spaces for indentation with a passion. However, I've never gone into the details of my opinion on stage, and twitter isn't really the right medium for advocacy. I always wanted to write a blog post about my take on this old debate, so here it is.

Tabs for indentation, spaces for alignment

Let's get this out of the way: tabs should never be used for alignment. Using tabs for alignment is actively worse than using spaces for indentation, and it is the basis of most arguments against tabs. But using tabs for alignment is misuse, and it negates their main advantage: personalization. It's like trying to eat soup with a fork and complaining that it doesn't scoop up liquid well.

Consider this code snippet:

if (something) {
	let x = 10,
	    y = 0;
}

Each line inside the conditional is indented with a tab, but the variables are aligned with four spaces. Change the tab size to see how everything adapts beautifully:

And yes, of course using tabs for alignment is a mess, because that's not what they're for:

if (something) {
	let x = 10,
		y = 0;
}

Another example: remember CSS vendor prefixes?

div {
	-webkit-transition: 1s;
	   -moz-transition: 1s;
	    -ms-transition: 1s;
	     -o-transition: 1s;
	        transition: 1s;
}

1. Tabs can be personalized

The width of a tab character can be adjusted per editor. This is not a disadvantage of tabs, as commonly evangelized, but a major advantage. People can view your code in the way they feel comfortable with, not in the way *you* prefer. Tabs are one step towards decoupling the code's presentation from its logic, just like CSS decouples presentation from HTML. They give more power to the reader rather than letting the author control everything. Basically, using spaces is like saying "I don't care about how you feel most comfortable reading code; I will force you to use my preferences because it's my code".

Personalization is incredibly valuable when a team is collaborating, as different engineers can have different opinions. Some engineers prefer their indents to be 2 spaces wide, some prefer them to be 4 spaces wide. With spaces for indentation, a lead engineer imposes their preference on the entire team; with tabs, everyone gets to choose what they feel comfortable with.

2. You don't depend on certain tools

When using spaces, you depend on your editor to abstract away the fact that an indent is actually N characters instead of one. You depend on your editor to insert N spaces every time you press the Tab key, and to delete N characters every time you press backspace or delete near an indent. I have never seen an editor where this abstraction did not leak at all. If you're not careful, it's easy to end up with indentation that is not an integer multiple of the indent width, which is a mess. With tabs, the indent level is simply the number of tabs at the beginning of a line. You don't depend on tools to hide anything or to change the meaning of keyboard keys. Even in the most basic of plain text editors, you can use the keyboard to navigate indents in integer increments.

3. Tabs encode strictly more information about the code

Used right, tabs serve a single purpose: indentation. This makes them easy to target programmatically, e.g. through regular expressions or find & replace. Spaces, on the other hand, have many meanings, so programmatically matching indents is a non-trivial problem. Even if you only match space characters at the beginning of a line, there is no way of knowing where indentation stops, as spaces are also used for alignment. Telling the difference requires awareness of the semantics of the language itself.
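
For instance, measuring a line's indent depth is trivial with tabs (a sketch; the helper name is made up):

// With tabs, indent depth is just the number of leading tab characters
function indentDepth(line) {
	return line.match(/^\t*/)[0].length;
}

// With spaces, /^ */ matches indentation *and* alignment,
// so answering the same question needs the configured indent width
// and knowledge of the language's alignment conventions.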

4. Tabs facilitate portability

As pointed out by Norbert Süle in the comments, when you copy and paste code that's indented with spaces, you have to manually adjust the indentation afterwards, unless the other person happens to prefer the same indent width as you. With tabs, there is no such issue: it's always one tab per level, so the pasted code fits in with your (tabbed) code seamlessly. The world would be a better place if everyone used tabs.

5. Tabs take up less space

One of the least important arguments here, but still worth mentioning. A tab takes up only one byte, while a space-based indent takes up as many bytes as its width, usually 2-4x that. On large codebases this can add up. E.g. in a codebase of 1M LOC, averaging 1 indent per line (good luck computing these kinds of stats with spaces, btw; see 3 above), with an indent width of 4 spaces, you would save about 3 MB by using tabs instead of spaces. It wouldn't be a tremendous cost if spaces actually offered a benefit, but it's unclear what the benefit is.

The downsides of using tabs for indentation

Literally all downsides of using tabs for indentation stem from how vocal their opponents are and how pervasive spaces are for indentation, to the point that using spaces for indentation is associated with significantly higher salaries!

In browsers

It is unfortunate that most UAs have declared war on tabs by using a default tab size of 8, far too wide for any practical purpose. For code you post online, you can use the tab-size property to set the tab size to a more reasonable value, e.g. 4. It's widely supported.

For reading code on other websites, you can use an extension like Stylus to set the tab size to whatever you want. I have this rule applied on all websites:

/* ==UserStyle==
@name           7/22/2022, 5:43:07 PM
@namespace      *
@version        1.0.0
@description    A new userstyle
@author         Me
==/UserStyle== */
* { tab-size: 4 !important; }

In tooling

Editors that handle smart tabs correctly are few and far between. Even VS Code, the most popular editor right now, doesn't handle them correctly, though there are extensions (Tab-Indent Space-Align, Smart Tabs, and others).

What does it matter, tabs, spaces, whatever, it's just a pointless detail

Sure, in the grand scheme of things, using spaces for indentation will not kill anyone. But it's a proxy for a greater argument: that technology should make it possible to read code in the way you prefer, without having to get team buy-in on your preferences. There are other ways to do this (reformatting post-pull and pre-commit), but they are too heavyweight and intrusive. If we can't even get people to see the value of not enforcing the same indentation width on everyone, how can we expect them to see the value of further personalization?

Further reading

Thanks to Oli for proofreading the first version of this post.


My new yearā€™s resolution

1 min read

Warning: Personal post ahead. If you're here to read some code trickery, move along and wait for the next post, kthxbai

Blogs are excellent places for new year's resolutions. Posts stay there for years, to remind you what you were thinking long ago. A list on a piece of paper or a file on your computer will be forgotten and lost, but a resolution on your blog will come back to haunt you. Sometimes you want that extra push. I'm not too fond of new year's resolutions and this may well be my first, but this year there are certain goals I want to achieve, unlike previous years, where things were more fluid.

So, in 2012 I want to…

  • Land my dream job in a US company/organization I respect
  • Get the hell out of Greece and move to the Bay Area
  • Strive to improve my English even more, until I sound and write like a native speaker
  • Find a publisher I respect that's willing to print in full color and write my first book.
  • Stop getting into stupid fights on twitter. They are destructive to both my well-being and my creativity.
  • Get my degree in Computer Science. This has been my longest side project, 4 years and counting.

I wonder how many of those I will have achieved this time next year, how many I will have failed, and how many I won't care about any more…


What we still canā€™t do client-side

3 min read

With the rise of all these APIs and the browser race to implement them, you'd think that we can now do pretty much everything in JavaScript, and that even if we currently can't due to browser support issues, we will once the specs are implemented. Unfortunately, that's not true. There are still things we can't do: at the time of this writing, there is no specification to address them and no way to do them with the APIs we already have (or if there is a way, it's unreasonably complicated).

We can't do templating across pages

Before rushing to tell me "no, we can", keep reading. I mean having different files and re-using them across different pages: a header and a footer, for example. If our project is entirely client-side, we have to repeat them manually on every page. Of course, we can always use (i)frames, but that solution is worse than the problem it solves. There should be a simple way to inject HTML from another file, like server-side includes, but client-side and without using JavaScript at all; this is a task that belongs to HTML (with JS we can always use XHR to do it, but…). The browser would then be able to cache these static parts, with significant speed improvements on subsequent page loads.
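
For the record, the XHR workaround looks roughly like this (a sketch; header.html and the #header placeholder are made-up names):

// Client-side include via XHR: fetch a fragment and inject it
var xhr = new XMLHttpRequest();
xhr.open('GET', 'header.html', true);
xhr.onload = function () {
	document.getElementById('header').innerHTML = xhr.responseText;
};
xhr.send();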

Update: The Web Components family of specs sort of helps with this, but it still requires a lot of DIY, and Mozilla is against HTML Imports, one of its main components, and will not implement them.

We can't do localization

At least not in a sane, standard way. Client-side localization is a big PITA. There should be an API for this, which would have the added advantage that browsers could pick it up and offer a UI for it. I can't count the number of times I've thought a website didn't have an English version just because its UI was so bad I couldn't find the language switcher. Google Chrome often detects a website's language and offers to translate it; if such an API existed, we could offer properly translated versions of a website in a manner detectable by the browser.

Update: We have the ECMAScript Globalization API, although it looks far from ideal at the moment.

We can't do screen capture

And not just of the screen: we can't even capture an element on the page and draw it on a canvas, unless we use huge libraries that basically try to emulate a browser, or SVG foreignObject, which has its own share of issues. We should have a Screen Capture API or, at the very least, a way to draw DOM nodes on a canvas. Yes, there are privacy concerns that need to be taken care of, but this is so tremendously useful that it's worth the time needed to research them.

We can't get POST parameters and HTTP headers

There's absolutely NO way to get the POST parameters or the HTTP response headers that the current page was sent with. You can get the GET parameters through the location object, but there is no way to get POST parameters. This makes it very hard to build client-side applications that accept input from 3rd-party websites when that input is too long to fit in the URL (as is the case with dabblet, for example).
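
A quick illustration of the asymmetry (sketch):

// GET parameters are exposed client-side through the location object…
var params = {};
location.search.slice(1).split('&').forEach(function (pair) {
	var parts = pair.split('=');
	params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
});
// …but there is no equivalent object for POST data or HTTP response headers.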

We can't make peer-to-peer connections

There is absolutely no way to connect to another client running our web app (to play a game, for example) without an intermediate server.

Update: There's RTCPeerConnection in WebRTC, though the API is pretty horrible.

_________

Anything else we still can't do, with no API on the way to do it in the future? Say it in the comments!

Or, if I'm mistaken about one of the above and there is actually an active spec to address it, please point me to it!

Why would you want to do these things client-side?!

Everything that helps take load away from the server is good. The client is always one machine, whereas everything on the server may end up running thousands of times per second if the web app succeeds, making the app slow and/or costly to run. I strongly believe in lean servers. Servers should only do things that architecturally need a server (e.g. centralized data storage); everything else is the client's job. Almost everything we use native apps for should (and eventually will) be doable in JavaScript.


Dabblet blog

1 min read

Not sure if you noticed, but Dabblet now has a blog: blog.dabblet.com

I'll post there about Dabblet updates, so I don't flood my regular subscribers here who may not care. So, if you are interested in Dabblet's progress, follow that blog or @dabblet on twitter.

That was also an excuse to finally try tumblr. So far, so good. I love how it gives you custom domains and full theme control for free (hosted Wordpress charges for those). Gorgeous, GORGEOUS interface too. Most of the themes have markup from the 2005-2007 era, but that was no surprise. I customized the theme I picked to make it more HTML5-ey and more on par with dabblet's style, and it was super easy (though my attempt is by no means finished). There are a few shortcomings (like no titles for picture posts), but nothing too bad.


On web apps and their keyboard shortcuts

3 min read

Yesterday, I released dabblet. One of the aspects I took extra care of is its keyboard navigation. I used many of the commonly established application shortcuts to navigate and perform actions in it. Some of these naturally collided with the native browser shortcuts, and I got a few bug reports about that. Actually, overriding the browser shortcuts was by design, and I'll explain my point of view below.

Native apps use these shortcuts all the time. For example, I press Cmd+1, 2, 3, etc. in Espresso to navigate through files in my project. People press F1 for help. And so on. These shortcuts are so ingrained in our (power users') minds and so useful that we thoroughly miss them when they're not there. Every time I press Cmd+1 in an OSX app and I don't go to the first tab, I'm distraught. However, in web apps, these shortcuts are taken by the browser. We either have to use different shortcuts or accept overriding the browser's defaults.

Using different shortcuts seems to be considered best practice, but how useful are these shortcuts anyway? They have to be learned individually for every web app, and that's hardly just a matter of memorizing the "keyboard shortcuts" list: our muscles learn much more slowly than our minds. To be able to use these shortcuts as mindlessly as we use the regular application shortcuts, we need to spend a long time using the web app and those shortcuts. If we ever do get used to them that much, we'll have trouble with the shortcuts most other apps use, as our muscles will try to use the new ones.

Using the de facto standard keyboard shortcuts carries no such issues. They take advantage of muscle memory from day one. If we advocate that the web is the new native, it means our web apps should be entitled to everything native apps are. If native editors can use Cmd+1 to go to the first tab and F1 for help, so should a web editor. When you're running a web app, the browser environment is merely a host, like your OS. The focus is the web app. When you're working in a web app and you press a keyboard shortcut, chances are you're looking to interact with that app, not with the browser chrome.

For example, I'm currently writing in Wordpress' editor. When I press Cmd+S, I expect my draft to be saved, not the browser to attempt to save the current HTML page. Would it make sense if they wanted to be polite and chose a different shortcut, like Alt+S? I would have to learn the Save shortcut all over again, and I'd forever confuse the two.

Of course, it depends on how you define a web app. If we're talking about a magazine website, for example, you're using the browser as a kind of reader. The app you're using is still the browser, and overriding its keyboard shortcuts is bad. It's sometimes a fine distinction, and many disagreements about this issue are basically disagreements about what constitutes a web app and how much of an application web apps are.

So, what are your thoughts? Play it safe and be polite to the host or take advantage of muscle memory?

Edit: Jonathan Snook posted these thoughts in the comments, and I think his suggested approach is pure genius and every web UX person should read it:

On Yahoo! Mail, we have this same problem. It's an application with many of the same affordances of a desktop application. As a result, we want to have the same usability of a desktop application, including with keyboard shortcuts. In some cases, like Cmd-P for printing, we'll override the browser default because the browser will not have the correct output.

For something like tab selection/editing, we don't override the defaults and instead, create alternate shortcuts for doing so.

One thing I suggest you could try is to behave somewhat like overflow areas in a web page. When you scroll with a scroll mouse or trackpad in the area, the browser will scroll that area until it reaches its scroll limit and then will switch to scrolling the entire page. It would be interesting to experiment with this same approach with other in-page mechanisms. For example, with tabs, I often use Cmd-Shift-[ and Cmd-Shift-] to change tabs (versus Cmd-1/2/3, etc). You could have it do so within the page until it hits its limit (first tab/last tab) and then after that, let the event fall back to the browser. For Cmd-1, have it select the first tab. If the user is already on the first tab, have it fall back to the browser.
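
A rough sketch of what that could look like in code (isFirstTabSelected and selectTab are hypothetical app functions):

document.addEventListener('keydown', function (evt) {
	// Cmd+1: select the app's first tab…
	if (evt.metaKey && evt.key === '1') {
		if (!isFirstTabSelected()) {
			selectTab(0);
			evt.preventDefault(); // consume the shortcut
		}
		// …unless we're already there: then we don't call preventDefault(),
		// so the event falls through to the browser's own first-tab shortcut
	}
});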


Introducing dabblet: An interactive CSS playground

3 min read

I've loved JSFiddle ever since I first used it. Being able to test something almost instantly and without littering my hard drive opened up new possibilities for me. I use it daily for experiments, browser bug testcases, code snippet storage, code sharing, and many other things. However, there were always a few things that bugged me:

  • JSFiddle is very JS oriented, as you can tell even from the name itself
  • JSFiddle is heavily server-side, so there's always at least the lag of an HTTP request every time you perform an action. It makes sense not to run JS on every keystroke (JSBin does it and it's super annoying; it even caused me to fall into an infinite loop once), but CSS and HTML could be updated without any such problems.
  • I'm a huge tabs fan; I hate spaces for indenting with a passion.
  • Every time I want to test a considerable amount of CSS3, I need to include -prefix-free as a resource, and I can't save that preference or any other (like "No library").

Don't get me wrong, I LOVE JSFiddle. It was a pioneer and it paved the way for all similar apps. It's great for JavaScript experiments. But for pure CSS/HTML experiments, we can do better.

The thought of making some interactive playground for CSS experiments had been lingering in my mind for quite a while, but I never attempted to start it, as I knew it would be a lot of fascinating work and I wouldn't be able to focus on anything else throughout. While I was writing my 24ways article, I wanted to include lots of CSS demos, and I wanted the code to be editable and, in some cases, displayed on top of the result to save space. JSFiddle's embedding didn't do that, so I decided to make something simple, just for that article. It quickly evolved into something much bigger, and yes, I was right: it was lots of fascinating work and I wasn't able to focus on anything else throughout. I even delayed my 24ways article for the whole time I was developing it, and I'm grateful that Drew was so patient. After 3 weeks of working on it, I present dabblet.

Features

So what does dabblet have that similar apps don't? Here's a list:

  • Realtime updates, no need to press a button or anything
  • Saves everything to Github gists, so even if dabblet goes away (not that I plan to!) you won't lose your data
  • No page reloads even on saving, everything is XHR-ed
  • Many familiar keyboard shortcuts
  • Small inline previewers for many kinds of CSS values, in particular for: colors, absolute lengths, durations, angles, easing functions and gradients. Check them all in this dabblet.
  • Automatically adds prefixes with -prefix-free, to speed up testing
  • Use the Alt key and the up/down arrows to increment/decrement <length>, <time> and <angle> values.
  • Dabblet is open source under an NPOSL 3.0 license
  • You can save anonymously even when you are logged in
  • Multiple view modes: Result behind code, Split views (horizontal or vertical), separate tabs. View modes can be saved as a personal preference or in the gists (as different demos may look better with different view modes)
  • Editable even from an embedded iframe (to embed just use the same dabblet URL, it will be automatically adjusted through media queries)

Here's a rough screencast that I made in 10 minutes to showcase some of dabblet's features. There's no sound and it's super sloppy, but I figured even this lame excuse for a screencast is better than none.

I'm hoping to make a proper screencast in the next few days.

However, dabblet is still very new. I wouldn't even call it a beta yet; more like an alpha. I've tried to iron out every bug I could find, but I'm sure there are many more lingering around. Also, it has some limitations, but fixing them is my top priority:

  • It's currently not possible to see or link to older versions of a dabblet. You can, of course, use Github to view them.
  • It currently only works in modern, CORS-enabled browsers: essentially Chrome, Safari and Firefox. I intend to support Opera too, once Opera 12 comes out. As for IE, I'll bother with it when a significant percentage of web developers starts using it as their main browser. Currently, I don't know anyone who does.
  • It doesn't yet work very well on mobile, but I'm working on it and it's a top priority
  • You can't yet add other scripts like LESS or remove -prefix-free.
  • It hasn't been tested on Windows very much, so I'm not sure what issues it might have there.

I hope you enjoy using it as much as I enjoyed making it. Please report any bugs and suggest new features in its bug tracker.

Examples

Here are some dabblets that should get you started:

Credits

Roman Komarov helped tremendously by doing QA work on dabblet. Without his efforts, it would have been super buggy and much less polished.

I'd also like to thank David Storey for coming up with the name "dabblet" and for his support throughout these 3 weeks.

Last but not least, I'd also like to thank Oli Studholme and Rich Clark for promoting dabblet in their .net magazine articles even before its release.

Update: Dabblet has its own twitter account now: Follow @dabblet


Vendor prefixes have failed, what's next?

4 min read

Edit: This was originally written to be posted to www-style, the mailing list for CSS development. I thought it might be a good idea to post it here too, as other people might be interested. It wasn't. Most people commenting didn't really get the point of the article and thought I was suggesting we should simply drop prefixes. Others think it's an acceptable solution for the CSS WG if CSS depends on external libraries like my own -prefix-free or LESS and SASS. I guess it was a failure on my behalf ("Know your audience"), and thus I'm disabling comments.

Discussion about prefixes was recently stirred up again by an article by Henri Sivonen, so the CSS WG started debating, for the 100th time, about when features should become unprefixed.

I think we need to think outside the box and come up with new strategies to solve the issues that vendor prefixes were meant to fix. Vendor prefixes have failed, and we can't solve their issues just by unprefixing properties earlier.

Issues

The above might seem like a bold statement, so let me try to support it by recapping the serious issues we run into with vendor prefixes:

1. Unnecessary bloat

Authors need to use prefixes even when the implementations are already interoperable. As a result, they end up pointlessly duplicating the declarations, making maintenance hard and/or introducing overhead from CSS pre- and post-processors to take care of this duplication. We need to find a way to reduce this bloat to only the cases where different declarations are actually needed.

2. Spec changes still break existing content

The biggest advantage of the current situation was supposed to be that spec changes would not break existing content, but prefixes have failed to do even this. The thing is, most authors will use something if it's available, no questions asked. I doubt anyone who has done any real web development would disagree with that. And in most cases, they will prefer a slightly different application of a feature to none at all, so they use prefixed properties along with unprefixed ones. Then, when the WG makes a backwards-incompatible change, existing content breaks.

I don't think this can really be addressed in any way except disabling the feature by default in public builds. Any kind of prefix or notation is pointless at stopping this; we'll always run into the same issue. If we disable the feature by default, almost nobody will use it, since they can't tell visitors to change their browser settings. Do we really want that? Yes, the WG will be able to make all the changes it wants, but then who will give feedback on these changes? Certainly not authors, as they will effectively have zero experience working with the feature, since most of them don't have the time to play around with features they can't use right now.

I think we should accept that changes will break *some* existing content and try to standardize faster, instead of having tons of features in WD limbo. However, I still think there should be some kind of notation to denote that a feature is experimental, so that at least authors know what they're getting themselves into by using it, and so that browsers are able to experiment a bit more openly. I don't think vendor prefixes are the right notation for this, though.

3. Web development has become a popularity contest

I'll explain this with an example: CSS animations were first supported by WebKit. People only used the -webkit- prefix with them, and they were fine with it. Then Firefox also implemented them, and most authors started adding -moz- to their use cases, usually only to the new ones; their old ones are still WebKit-only. After a while, Microsoft announced CSS animations in IE10. Some authors started adding -ms- prefixes to their new websites; others didn't, because IE10 isn't out yet. When IE10 is out, they still won't add it, because their current use cases will for the most part no longer be maintained. Some authors don't even add -ms- because they dislike IE. Opera will soon implement CSS animations. Who will really go back and add -o- versions? Most people will not care, because they think Opera has too little market share to warrant the extra bloat.

So browsers appear to support fewer features, only because authors have to take an extra step to explicitly support them. Browsers do not display pages with their full capabilities because authors were lazy, ignorant, or forgetful. This is unfair to both browser vendors and web users. We need to find a way to (optionally?) decouple implementation and browser vendor in the experimental feature notation.

Ideas

There is a real problem that vendor prefixes attempted to solve, but vendor prefixes didn't prove to be a good solution. I think we should start thinking outside the box and propose new ideas instead of sticking to vendor prefixes and debating their duration. I'll list a few of my ideas here, and I'm hoping others will follow suit.

1. Generic prefix (-x- or something else) and/or new @rule

A generic prefix has been proposed before, and usually the argument against it is that different vendors may have incompatible implementations. This could be addressed at a more general level, instead of having the prefix on every feature: an @-rule for addressing specific vendors. For example:

@vendor (moz, webkit, o) {
	.foo { -x-property: value; }
}

@vendor (ms) {
	.foo { -x-property: other-value; }
}

A potential downside is selector duplication, but remember: The @vendor rule would ONLY be used when implementations are actually incompatible.

Of course, there's potential for misuse, as authors could end up writing separate CSS for separate browsers using this new rule. However, I think we're at a stage where most authors have realized that this is a bad idea, and if they want to do it, they can do it now anyway (for example, by using @-moz-document to target Moz and so on).

2. Supporting both prefixed and unprefixed for WD features

This delegates the decision to the author, instead of the WG and implementors. The author could choose to play it safe and use vendor prefixes or risk it in order to reduce bloat on a per-feature basis.

I guess a problem with this approach is that extra properties mean extra memory, but it's something that many browsers already do when they start supporting a property unprefixed and don't drop the prefixed version like they should.

Note: While this post was still in draft, I was informed that Alex Mogilevsky has suggested something very similar. Read his proposal.

3. Prefixes for versioning, not vendors

When a browser implements a property for the first time, it will use the prefix -a-. Then, when another browser implements that feature, it looks at the former browser's implementation, and if its own is compatible, it uses the same prefix. If it's incompatible, it increments the prefix by one, using -b-, and so on.

A potential problem with this is collisions: vendors using the same prefix not because their implementations are compatible, but because they developed them almost simultaneously and didn't know about each other's implementation. Also, it causes trouble for smaller vendors that might want to implement a feature first.

We need more ideas

Even if the above are not good ideas, I'm hoping that they'll inspire others to come up with something better. I think we need more ideas about this, rather than more debates about fine-tuning the details of one bad solution.


Animatable: A CSS transitions gallery

1 min read

What kind of transitions can you create with only one property? This is what my new experiment, animatable, aims to explore.

It's essentially a gallery of basic transitions. It aims to show how different animatable properties look when they transition, and to broaden our horizons about which properties can be animated. Hover over the demos to see the animation in action, or click "Animate All" to see all of them (warning: might induce nausea, headaches and seizures :P ). You can also click on a demo to see more details and get a permalink. Instead of clicking, you can also navigate with the arrow keys and press Esc to return to the main listing.

Fork it on Github and add your own ideas. Be sure to add your twitter username to them as a data-author attribute!

I've only tested in Firefox and Chrome on OSX so far, so I'm not sure which other browsers are supported. However, since it uses CSS animations, we know for sure that it won't work in browsers that don't support them.

Hope you enjoy it :)


My experience from Fronteers, JSConf EU, Frontend and FromTheFront

4 min read

This month has been very busy conference-wise. I had 4 conferences in a row, so I was flying from country to country and giving talks for 2 weeks. As I usually do after conferences, this post sums up my experiences and the feedback I got, in chronological order.

FromTheFront

This was a rather low-budget Italian conference that took place in Cesena, a city near Bologna. Despite the extremely low ticket price, they managed to pull off a very decent one-day conference, which is very admirable. Italian food is so good that I'd recommend visiting the country even if it's just for the food! They were very nice hosts, and I thoroughly enjoyed my time there.

My talk was right after Jeremy Keith's, a very well-known and experienced speaker who knows how to make audiences delirious (in a good way), so I was naturally a bit nervous about the unavoidable comparison. Despite my fears, my talk was very well received. Here's a sample of the twitter feedback I got:

JSConf EU

Next stop was Berlin and JSConf's European sister conference. This was one of the most well-organized conferences I've been to; the food, the coffee, the afterparties, the wifi, the projectors, everything was top notch. Also, it had a get-together the day after the conference (called "hangover.js"), which I think is great; more conferences should adopt this tradition. It eases the pain of the conference being over, and you get to say goodbye to a few folks you weren't able to catch at the afterparty. It also featured many cool ideas, like a gal drawing live visualizations of the talks (here's mine) and a singer opening the conference on the first day by singing a song to …Brendan Eich (!). I made new friends, had lots of fun, and everything was awesome.

I was a bit more nervous about my talk, for two reasons: firstly, it was my first JavaScript talk, and secondly, it had no live demos like my CSS talks, which are a big part of why people like them. It went much better than I expected, and I got very good feedback, and even though I went hugely over time (I had 30 minutes and did 55!) nobody complained. Thankfully, it was right before lunch, so I didn't eat up another speaker's time (which is part of the reason I love the pre-lunch spot so much). I didn't get the super-enthusiastic feedback I get from my CSS talks, but it was good enough not to be disappointed. Here's a sample:

You can find my slides on Speakerdeck or Slideshare, or the HTML version on my website.

Fronteers

I was looking forward to Fronteers the most, since it's my favorite conference. It might not be the biggest or the one with the most money, but it has a special place in my heart for a number of reasons (not all of which I can write in a public blog post). It was the first international conference I ever attended (in 2010), and it's where I met so many people I previously only knew (and admired) as a name & avatar. It's the conference I've had the most fun at, in both years I've been there. Everyone, the volunteers, the attendees, the speakers, everyone is awesome. There is something magical about this conference, and most of its speakers and attendees think about it in the same way (Christian Heilmann, for example, calls it "his special conference", and he goes to A LOT of conferences). It doesn't just feel like a professional conference; it feels like a big, loving, open web development family that gets together once a year to celebrate the advances in our field.

But this time, I wasn't just an attendee. I wasn't a regular speaker either. I was also hosting a workshop, my first full-day workshop. I was super stressed about that and, in retrospect, it was the most exhausting thing I have ever done. Some other speakers told me it felt so exhausting because it was my first; I really hope they're right. Luckily, attendees loved it, and they didn't seem to notice me getting progressively more tired after the 4th hour. Here's some of the feedback I got:

https://twitter.com/V_v_V/status/121511282758258688

My talk was the next day, and even though I was afraid it would be bad due to being tired from the workshop and the pre-party, I think it was my best talk ever. I was much more relaxed, and I got the most enthusiastic feedback I ever have. My hand literally got tired favoriting tweets, and I'm pretty sure I missed some. Here's a small sample:

https://twitter.com/addy_osmani/status/121885202581684224

https://twitter.com/Georg_Tavonius/status/121888702678044672

My slides are now online at Speakerdeck and Slideshare, and the interactive version is on my website.

Frontend 2011

Oslo is a city I've been to many times in the past, so there was nothing new to see there. I didn't make it to the speakers' dinner & pre-party due to my late flight, which kinda sucked, but it's my fault, since it took me a long while to decide on my flight dates. The conference itself was a bit more design-focused than I'd like, but very well organized. It took place in the same hotel the speakers were staying at, which is always a good thing. It also had the best coffee I've ever drunk at a conference, and some of the best I've tasted in general. I also loved the idea of having multiple projectors, so that everyone in the audience can see clearly. They had the very original idea of not only drawing caricatures of every speaker (here's mine; I also got it in a nice frame) but also having the artist at the venue to draw caricatures of attendees as well!

My talk went smoothly, and received very good feedback:

https://twitter.com/gustaff_weldon/status/123741227563753472

https://twitter.com/gav_taylor/status/123742248646094848

https://twitter.com/ken_guru/status/123744573666246656

That's it. I now get to rest for a while. Next stop is SWDC in November, which will host the première of my new talk "CSS in the 4th dimension: Not your daddy's CSS animations", which will be about CSS transitions & animations, from the basics all the way to badass secrets.

Thanks to all the conference organizers for inviting me, and to the attendees for attending and giving feedback on my talks. You are all awesome, and it was the best 2 weeks ever. :)


Optimizing long lists of yes/no values with JavaScript

1 min read

My newest article in Smashing Magazine's coding section is for the geekiest among you. It's about how you can pack long lists of boolean values into a string in the most space-efficient way. Hope you enjoy it :)


Easily keep gh-pages in sync with master

1 min read

I always loved Github's ability to publish pages for a project and take the strain off your server. However, every time I tried it, I struggled to keep the gh-pages branch up to date. That is, until I discovered the awesome git rebase.

Usually my github workflow is like this:

git add .
git status                # see what changes are going to be committed
git commit -m 'Some descriptive commit message'
git push origin master

Now, when I use gh-pages, there are only a few more commands that I have to use after the above:

git checkout gh-pages    # switch to the gh-pages branch
git rebase master        # bring gh-pages up to date with master
git push origin gh-pages # push the changes
git checkout master      # return to the master branch

I know this is old news to some of you (I'm a github n00b, struggling with basic stuff, so my advice is probably for other n00bs), but if I had read this a few months ago, it would've saved me big hassles, so I'm writing it for the others out there who are where I was a few months ago.

Now if only I could find an easy way to automate this… :)


PrefixFree: Break free from CSS prefix hell!

1 min read

I wrote this script while at the airport travelling to Oslo and during the Frontend 2011 conference. I think it's amazing, and it makes authoring CSS3 a pleasure.

Read my announcement about it on Smashing Magazine.

Hope you like it!


My experience from Frontendconf Zurich

2 min read

I'm writing this blog post while eating some of the amazing Lindt chocolates I got for free 10 days ago at the Frontend conference in Zurich. But it wasn't a good experience only because of them!

First of all, it gave me the opportunity to visit Zurich for free and meet an old friend in person for the first time: a girl I used to be penpals with in primary school & junior high, when she was still living in Athens and I in Lesvos. She is now living in Zurich and doing her PhD at ETH. I arrived in Zurich a day earlier and stayed at her place that first night. We caught up and I had a great time.

Secondly, the rest of the speakers are great people and fun too; it was a pleasure to meet them, especially Smashing Magazine's Vitaly Friedman. He's a very kind guy, nothing like what you'd expect from somebody so successful. I also got the chance to meet Robert again, who was lots of fun as always. Those Swedes have a great sense of humor!

The conference itself was very nice, although small (only 200 people). There were many inspiring talks, although I couldn't attend them all because they were split into multiple tracks over one day. I would very much prefer it if it had 1 track and lasted 2 days. The 2nd day was an unconference, where attendees could speak about whatever they wanted. I decided to get some sleep the second day, so I arrived a bit later and didn't attend many talks. It was kinda sad that it finished so early: by around 4pm almost everyone was gone, and most speakers were flying back the same day.

My talk went great, although I had the most technical glitches I've ever faced in a talk. That was my fault, not the conference's. I guess I should learn to stop tweaking my slides at the last moment, 'cause things might break (and this time they did). Despite those glitches, however, the audience loved it. Here's a small sample of the twitter feedback I got:

https://twitter.com/smash_it_on/status/112107410155515904

If you read the above carefully, you might have noticed that my talk was recorded, so you can see it too. :) Enjoy!