Blog


Duoload: Simplest website load comparison tool, ever


Today I needed a quick tool to compare the loading progression (not just loading time, but also incremental rendering) of two websites, one remote and one in my localhost. Just have them side by side and see how they load relative to each other. Maybe even record the result on video and study it afterwards. That’s all. No special features, no analysis, no stats.

So I did what I always do when I need help finding a tool: I asked Twitter.

Most suggested complicated tools, some non-free and most unlikely to work on local URLs. I thought damn, what I need is a very simple thing! I could code this in 5 minutes! So I did and here it is, in case someone else finds it useful! The (minuscule amount of) code is of course on Github.
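The core of such a tool really is tiny. Something like this sketch captures the idea (not Duoload’s actual code; it assumes two side-by-side <iframe>s with ids left and right):

function compare(leftURL, rightURL) {
	// Kick off both loads in the same tick, so the two pages
	// start loading as close to simultaneously as possible
	document.getElementById("left").src = leftURL;
	document.getElementById("right").src = rightURL;
}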

Duoload

Of course it goes without saying that this is probably a bit inaccurate. Do not use it for mission-critical performance comparisons.

Credits for the name Duoload go to Chris Lilley, who came up with it within the 1 minute deadline I gave him :P


Resolve Promises externally with this one weird trick


Those of us who use promises heavily have often wished there was a Promise.prototype.resolve() method that would force an existing Promise to resolve. However, for architectural reasons (throw safety), there is no such thing and probably never will be. Therefore, a Promise can only resolve or reject by calling the respective functions passed to its constructor:

var promise = new Promise((resolve, reject) => {
	if (something) {
		resolve();
	}
	else {
		reject();
	}
});

However, often it is not desirable to put your entire code inside a Promise constructor so you could resolve or reject it at any point. In my latest case today, I wanted a Promise that resolved when a tree was created, so that third-party components could defer code execution until the tree was ready. However, given that plugins could be running on any hook, that meant wrapping a ton of code with the Promise constructor, which was obviously a no-go. I had come across this problem before and usually gave up and created a Promise around all the necessary code. However, this time my aversion to what this would produce got me to think even harder. What could I do to call resolve() asynchronously from outside the Promise?

A custom event? Nah, too slow for my purposes, why involve the DOM when it’s not needed?

Another Promise? Nah, that just transfers the problem.

A setInterval to repeatedly check if the tree is created? OMG, I can’t believe you just thought that, Lea, ewwww, gross!

Getters and setters? Hmmm, maybe that could work! If the setter is inside the Promise constructor, then I can resolve the Promise by just setting a property!

My first iteration looked like this:

this.treeBuilt = new Promise((resolve, reject) => {
	Object.defineProperty(this, "_treeBuilt", {
		set: value => {
			if (value) {
				resolve();
			}
		}
	});
});

// Many, many lines below…

this._treeBuilt = true;

However, it really bothered me that I had to define 2 properties when I only needed one. I could of course do some cleanup and delete them after the promise is resolved, but the fact that at some point in time these useless properties existed will still haunt me, and I’m sure the more OCD-prone of you know exactly what I mean. Can I do it with just one property? Turns out I can!

The main idea is realizing that the getter and the setter could be doing completely unrelated tasks. In this case, setting the property would resolve the promise and reading its value would return the promise:

var setter;
var promise = new Promise((resolve, reject) => {
	setter = value => {
		if (value) {
			resolve();
		}
	};
});

Object.defineProperty(this, "treeBuilt", { set: setter, get: () => promise });

// Many, many lines below…

this.treeBuilt = true;
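And since the getter returns the promise itself, any code with access to the object can hang callbacks directly off the property:

// In third-party code (a sketch; `element` stands for whatever
// object the treeBuilt property was defined on):
element.treeBuilt.then(() => {
	// Safe to work with the tree here
});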

For better performance, once the promise is resolved you could even delete the dynamic property and replace it with a normal property that just points to the promise, but be careful because in that case, any future attempts to resolve the promise by setting the property will make you lose your reference to it!
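Here is a minimal sketch of that cleanup, assuming the defineProperty() call above had also passed configurable: true (the snippet omits it, and properties defined this way are non-configurable by default):

promise.then(() => {
	// Swap the accessor pair for a plain data property
	delete this.treeBuilt;
	this.treeBuilt = promise;
	// Careful: setting this.treeBuilt = true from now on overwrites
	// the reference instead of resolving anything
});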

I still think the code looks a bit ugly, so if you can think of a more elegant solution, I’m all ears (well, eyes really)!

Update: Joseph Silber gave an interesting solution on twitter:

function defer() {
	var deferred = {
		promise: null,
		resolve: null,
		reject: null
	};

	deferred.promise = new Promise((resolve, reject) => {
		deferred.resolve = resolve;
		deferred.reject = reject;
	});

	return deferred;
}

this.treeBuilt = defer();

// Many, many lines below…

this.treeBuilt.resolve();

I love that this is reusable, and calling resolve() makes a lot more sense than setting something to true. However, I didn’t like that it involved a separate object (deferred) and that people using the treeBuilt property would not be able to call .then() directly on it, so I simplified it a bit to only use one Promise object:

function defer() {
	var res, rej;

	var promise = new Promise((resolve, reject) => {
		res = resolve;
		rej = reject;
	});

	promise.resolve = res;
	promise.reject = rej;

	return promise;
}

this.treeBuilt = defer();

// Many, many lines below…

this.treeBuilt.resolve();

Finally, something I like!


URL rewriting with Github Pages


I adore Github Pages. I use them for everything I can, and try to avoid server-side code like the plague, exactly so that I can use them. The convenience of pushing to a repo and having the changes immediately reflected on the website, with no commit hooks or any additional setup, is awesome. The free price tag is even more awesome. So, when the time came to publish my book, naturally, I wanted the companion website to be on Github Pages.

There was only one small problem: I wanted nice URLs, like http://play.csssecrets.io/pie-animated, which would redirect to demos on dabblet.com. Any sane person would have likely bitten the bullet and used some kind of server-side language. However, I’m not a particularly sane person :D

Turns out Github uses some URL rewriting of its own on Github Pages: If you provide a 404.html, any URL that doesn’t exist will be handled by that. Wait a second, is that basically how we do nice URLs on the server anyway? We can do the same in Github Pages, by just running JS inside 404.html!

So, I created a JSON file with all demo ids and their dabblet URLs, a 404.html that shows either a redirection or an error (JS decides which one) and a tiny bit of Vanilla JS that reads the current URL, fetches the JSON file, and redirects to the right dabblet. Here it is, without the helpers:

(function(){

document.body.className = 'redirecting';

var slug = location.pathname.slice(1);

xhr({
	src: 'secrets.json',
	onsuccess: function () {
		var slugs = JSON.parse(this.responseText);

		var hash = slugs[slug];

		if (hash) {
			// Redirect
			var url = hash.indexOf('http') == 0? hash : 'http://dabblet.com/gist/' + hash;
			$('section.redirecting > p').innerHTML = 'Redirecting to <a href="' + url + '">' + url + '</a>…';
			location.href = url;
		}
		else {
			document.body.className = 'error not-found';
		}
	},
	onerror: function () {
		document.body.className = 'error json';
	}
});

})();
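The xhr() and $() helpers are omitted above; one plausible shape for them (a sketch, not the actual helpers) would be:

function xhr(o) {
	var req = new XMLHttpRequest();
	req.open('GET', o.src);
	req.onload = function () {
		// Call the handlers with the XHR as `this`, since onsuccess reads this.responseText
		(req.status === 200? o.onsuccess : o.onerror).call(req);
	};
	req.onerror = o.onerror;
	req.send();
}

function $(expr) {
	return document.querySelector(expr);
}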

That’s all! You can imagine using the same trick to redirect to other HTML pages in the same Github Pages site, have proper URLs for a single page site, and all sorts of things! Is it a hack? Of course. But when did that ever stop us? :D


Autoprefixing, with CSS variables!


Recently, when I was making the minisite for markapp.io, I realized a neat trick one can do with CSS variables, precisely due to their dynamic nature. Let’s say you want to use a property that has multiple versions: an unprefixed one and one or more prefixed ones. In this example we are going to use clip-path, which currently needs both an unprefixed version and a -webkit- prefixed one, however the technique works for any property and any number of prefixes or different property names, as long as the value is the same across all variations of the property name.

The first part is to define a --clip-path property on every element with a value of initial. This prevents the property from being inherited every time it’s used, and since the * has zero specificity, any declaration that uses --clip-path can override it. Then you define all variations of the property name with var(--clip-path) as their value:

* {
	--clip-path: initial;
	-webkit-clip-path: var(--clip-path);
	clip-path: var(--clip-path);
}

Then, every time we need clip-path, we use --clip-path instead and it just works:

header {
	--clip-path: polygon(0% 0%, 100% 0%, 100% calc(100% - 2.5em), 0% 100%);
}

Even !important should work, because it affects the cascading of CSS variables. Furthermore, if for some reason you want to explicitly set -webkit-clip-path, you can do that too, again because * has zero specificity. The main downside to this is that it limits browser support to the intersection of the support for the feature you are using and support for CSS Variables. However, all browsers except Edge support CSS variables, and Edge is working on it. I can’t see any other downsides to it (except having to use a different property name obvs), but if you do, let me know in the comments!

I think there’s still a lot to be discovered about cool uses of CSS variables. I wonder if there exists a variation of this technique to produce custom longhands, e.g. breaking box-shadow into --box-shadow-x, --box-shadow-y etc, but I can’t think of anything yet. Can you? ;)


Markapp: A list of HTML libraries


I have often lamented how many JavaScript developers don’t realize that a large percentage of HTML & CSS authors are not comfortable writing JS, and struggle to use their libraries.

To encourage libraries with HTML APIs, i.e. libraries that can be used without writing a line of JS, I made a website to list and promote them: markapp.io. The list is currently quite short, so I’m counting on you to expand it. Seen any libraries with good HTML APIs? Add them!


Introducing Multirange: A tiny polyfill for HTML5.1 two-handle sliders


As part of my preparation for my talk at CSSDay HTML Special, I was perusing the most recent HTML specs (WHATWG Living Standard, W3C HTML 5.1) to see what undiscovered gems lay there. It turns out that HTML sliders have a lot of cool features specced that aren’t very well implemented:

  • Ticks that snap via the list attribute and the <datalist> element. This is fairly decently implemented, except labelled ticks, which are not supported anywhere.
  • Vertical sliders when height > width, implemented nowhere (instead, browsers employ proprietary ways of making sliders vertical: an orient=vertical attribute in Gecko, -webkit-appearance: slider-vertical; in WebKit/Blink, and writing-mode: bt-lr; in IE/Edge). Good ol’ rotate transforms work too, but have the usual problems, such as layout not being affected by the transform.
  • Two-handle sliders for ranges, via the multiple attribute.

I made a quick testcase for all three, and to my disappointment (but not to my surprise), support was extremely poor. I was most excited about the last one, since I’ve been wanting range sliders in HTML for a long time. Sadly, there are no implementations. But hey, what if I could create a polyfill by cleverly overlaying two sliders? Would it be possible? I started experimenting in JSBin last night, just for the lolz, then soon realized this could actually work and started a GitHub repo. Since CSS variables are now supported almost everywhere, I’ve had a lot of fun using them. Sure, I could get broader support without them, but the code is much simpler, more elegant and customizable now. I also originally started with a Bliss dependency, but realized it wasn’t worth it for such a tiny script.

So, enjoy, and contribute!

Multirange


My positive experience as a woman in tech


Women speaking up about the sexism they have experienced in tech is great for raising awareness about the issues. However, when no positive stories get out, the overall picture painted is bleak, which could scare even more women away.

Lucky for me, I fell in love with programming a decade before I even heard there is a sexism problem in tech. Had I read about it before, I might have decided to go for some other profession. Who wants to be fighting an uphill battle all her life?

Thankfully, my experience has been quite different. Being in this industry has brought me nothing but happiness. Yes, there are several women who have had terrible experiences, and I’m in no way discounting them. They may even be the majority, though I am not aware of any statistics. However, there is also the other side. Those of us who have had incredibly positive experiences, and have always been treated with nothing but respect. That side’s stories need to be heard too, not silenced out of fear that we will become complacent and stop trying for more equality. Stories like mine should become the norm, not the exception.

I’ve had a number of different roles in tech over the course of my life. I’ve been a student, a speaker & author, I’ve worked at W3C, I’ve started & maintain several successful open source projects and I’m currently dabbling in Computer Science research. In none of these roles did I ever feel I was unfairly treated due to my gender. That is not because I’m oblivious to sexism. I tend to be very sensitive to seeing it, and I often notice even the smallest acts of sexism (“death by a thousand paper cuts”). I see a lot of sexism in society overall. However, inside this industry, my gender never seemed to matter much, except perhaps in positive ways.

On my open source repos, I have several contributors, the overwhelming majority of whom are male. I’ve never felt less respected due to my gender. I’ve never felt that my work was taken less seriously than that of male OSS developers. I’ve never felt my contributors would not listen to me. I’ve never felt my work was unfairly scrutinized. Even when I didn’t know something, or introduced a horrible bug, I’ve never been insulted or berated. The community has been nothing but friendly, helpful and respectful. If anything, I’ve sometimes wondered if my gender is the reason I hardly ever get any shit!

On stage, I’ve never gotten any negative reactions. My talks always get excellent reviews, which have nothing to do with me being female. There is sometimes the odd complimentary tweet about my looks, but that’s not only exceedingly rare, but also always combined with a compliment about the actual talk content. My gender only affected my internal motivation: I often felt I had to be good, otherwise I would be painting all female tech speakers in a negative light. But other people are not at fault for my own stereotype threat.

My book, CSS Secrets, has been as successful as an advanced CSS book could possibly aspire to be, and got to an average of 5 stars on Amazon only a few months after its release. It’s steadily the 5th bestseller on CSS and was No 1 for a while shortly after publication. My gender did not seem to negatively affect any of that, even though there’s a picture of me on the French flap, so there are no doubts about me being female (as if the name Lea wasn’t enough of a hint).

As a student, I’ve never felt unfairly treated due to my gender by any of my professors, even the ones in Greece, a country that is not particularly famous for its gender equal society, to put it mildly.

As a new researcher, I have no experience with publishing papers yet, so I cannot share any experiences on that. However, I’ve been treated with nothing but respect by both my advisor and colleagues. My opinion is always heard and valued and even when people don’t agree, I can debate it as long and as intensely as I want, without being seen as aggressive or “bossy”.

I’ve worked at W3C and still participate as an Invited Expert in the CSS Working Group. In neither of these roles did my gender seem to matter in any way. I’ve always felt that my expertise and skillset were valued and my opinions heard. In fact, the most well-respected member of the CSS WG is the only other woman in it: fantasai.

Lastly, in all my years as a working professional, I’ve always negotiated any kind of remuneration, often hard. I’ve never lost an opportunity because of it, or been treated with negativity afterwards.

On the flip side, sexism today is rarely overt. Given that hardly anybody over ten will flat out admit they think women are inferior (even to themselves), it’s often hard to tell when a certain behavior stems from sexist beliefs. If someone is a douchebag to you, are they doing it because you’re a woman, or because they’re douchebags? If someone is criticizing your work, are they doing it because they genuinely found something to criticize or because they’re negatively predisposed due to your gender? It’s impossible to know, especially since they don’t know either! If you confront them on their sexism, they will deny all of it, and truly believe it. It takes a lot of introspection to see one’s internalized stereotypes. Therefore, a lot of the time, you cannot be sure if you have experienced sexist behavior, and there is no way to find out for sure, since the perpetrator doesn’t know either. There are many false positives and false negatives there.

Perhaps I don’t feel I have experienced much sexism because I prefer to err on the side of false negatives. Paraphrasing Blackstone, I would rather not call out sexist behavior ten times, than wrongly accuse someone of it once. It might also have to do with my personality: I’m generally confident and can be very assertive. When somebody is being a jerk to me, I will not curl in a ball and question my life choices, I will reply to them in the same tone. However, those two alone cannot make the difference between a pit rampant with sexism and an egalitarian paradise. I think a lot of it is that we have genuinely made progress, and we should celebrate it with more women coming out with their positive experiences (it cannot just be me, right?).

Ironically, one of the very few times I have experienced any sexism in the industry was when a dude was trying to be nice to me. I was in a speaker room at a conference in Las Vegas, frantically working on my slides, not participating in any of the conversations around me. At some point, one of the guys said “fuck” in a conversation, then turned and apologized to me. Irritated about the sudden interruption, I lifted my head and looked around. I noticed for the first time that day that I was the only woman in the room. His effort to be courteous made me feel that I was different, the odd one out, the one we must be careful around and treat like a fragile flower. To this day, I regret being too startled to reply “Eh, I don’t give a fuck”.


Introducing Bliss: A 3KB library for happier Vanilla JS


Anyone who follows this blog, my twitter, or my work probably is aware that I’m not a huge fan of big libraries. I think wrapper objects are messy, and big libraries are overkill for smaller projects. On large projects, one uses frameworks like React or Angular anyway, not libraries.

Anyone who writes Vanilla JS on a daily basis probably is aware that it can sometimes be, ahem, somewhat unpleasant to work with. Sure, the situation is orders of magnitude better than it was when I started. Back then, IE6 was the dominant browser and you needed a helper function to even add event listeners to an element (remember element.attachEvent?) or to get elements by a class!

[Screenshot: the jAsset datepicker UI]

Fun fact: I learned JavaScript back then by writing my own library, called jAsset. I had not heard of jQuery when I started it in 2007, so I had even coded my own selector engine! (Anyone remember slickspeed?) jAsset had plenty of nice helper functions, its own UI library and a cool logo. I had even started to make a website for its UI components, seen on the right.

Sadly, jAsset died the sad inevitable death of all unreleased projects: Without external feedback, I had nobody to hold me back from adding to its API every time I personally needed a helper function. And adding, and adding, and adding… Until it became 5000+ loc long and its benefit of being lightweight or comprehensible had completely vanished. It collapsed under its own weight before it even saw the light of day. I abandoned it and went through a few years of using jQuery as my preferred helper library. Eventually, my distaste for wrapper objects, the constantly improving browser support for new APIs that made Vanilla JS more palatable, and the decline of overly conspicuous browser bugs led me to give it up.

It was refreshing, and educational, but soon I came to realize that while Vanilla JS is orders of magnitude better than it was when I started, certain APIs are still quite unwieldy, which can be annoying if you use them often. For example, the Vanilla JS for creating an element, with other elements inside it, events and inline styles is so commonly needed, but also so verbose and WET, it can make one suicidal.

However, Vanilla JS does not mean “use no abstractions”. Programming is all about abstractions! The Vanilla JS movement is about favoring speed, smaller abstractions and understanding of the Web Platform, over big libraries that we treat as a black box. It’s about using libraries to save time, not to skip learning.

So, I used my own tiny helpers, on every project. They were small and easy to understand, instead of several KB of code aiming to fix browser bugs I will likely never encounter and let me create complex nested DOM structures with a single JSON-like object. Over time, their API solidified and improved. On larger projects it was a separate file which I had tentatively codenamed Utopia (due to the lack of browser bug fixes and optimistic use of modern APIs). On smaller ones just a few helper methods (I could not live without at least my tiny 2 sloc $() and $$() helpers!). Here is a sample from my open source repos:
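For instance, those two tiny helpers are just the following (the same versions quoted in full in the jQuery post further down):

function $(expr, container) {
	return typeof expr === "string"? (container || document).querySelector(expr) : expr || null;
}

function $$(expr, container) {
	return [].slice.call((container || document).querySelectorAll(expr));
}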

Notice any recurring themes there? :)

I never mentioned Utopia.js anywhere, besides silently including it in my projects, so it went largely unnoticed. Sometimes people would look at it, ask me to release it, I’d promise them I would and then nothing. A few years ago, someone noticed it, liked it and documented it a bit (site is down now it seems). However, it was largely my little secret, hidden in public view.

For the past half year, I’ve been working hard on my research project at MIT. It’s pretty awesome and is aimed at helping people who know HTML/CSS but not JS achieve more with Web technologies (and that’s all I can say for now). It’s also written in JS, so I used Utopia as a helper library, naturally. Utopia evolved even more with this project, got renamed to Bliss, and got chainability via my idea about extending DOM prototypes without collisions (it can be disabled and the property name is customizable).

All this worked fine while I was the only person working on the project. Thankfully, I might get some help soon, and it might be rather inexperienced (the academia equivalent of interns). Help is very welcome, but it did raise the question: How will these people, who likely only know jQuery, work on the project? [1]

The answer was that the time had come to polish, document and release Bliss to the world. My plan was to spend a weekend documenting it, but it ended up being a little over a week, on and off, while procrastinating from other tasks I had to do. However, I’m very proud of the resulting docs, so much so that I gifted myself a domain for them. They are fairly extensive (though some functions still need work) and have two things I always missed in other API docs:

  • Recommendations about what Vanilla JS to use instead when appropriate, instead of guiding people into using library methods even when Vanilla JS would have been perfectly sufficient.
  • A “Show Implementation” button showing the implementation, so you can both learn, and judge whether it’s needed or not, instead of assuming that you should use it over Vanilla JS because it has magic pixie dust. This way, the docs also serve as a source viewer!

So, enjoy Bliss. The helper library for people who don’t like helper libraries. :) In a way, it feels like a journey of 8 years finally ends today. I hope the result makes you blissful too.

blissfuljs.com

Oh, and don’t forget to follow @blissfuljs on twitter!

[1]: Academia is often a little behind tech-wise, so everyone uses jQuery here — hardly any exceptions. Even though browser support doesn’t usually even matter to research projects!


Copying object properties, the robust way


If, like me, you try to avoid using heavy libraries when not needed, you must have definitely written a helper to copy properties from one object to another at some point. It’s needed so often that it’s just silly to write the same loops over and over again.

These days, most of my time is spent working on my research project at MIT, which I will hopefully reveal later this year. In that, I’m using a lightweight homegrown helper library, which I might release separately at some point as I think it has potential in its own right, for a number of reasons.

Of course, it needed to have a simple extend() method as well, to copy properties from one object to another. Let’s assume for the purposes of this article that we’re talking about shallow copying, that overwrites are allowed, and let’s omit hasOwnProperty() checks to make code easier to read.

It’s a simple task, right? Our first attempt might look like this:

$.extend = function (to, from) {
	for (var property in from) {
		to[property] = from[property];
	}

	return to;
}

This works fine, until you try it on objects with accessors or other types of properties defined via Object.defineProperty() or get/set keywords. What do you do then? Our next iteration could look like this:

$.extend = function (to, from) {
	for (var property in from) {
		Object.defineProperty(to, property, Object.getOwnPropertyDescriptor(from, property));
	}

	return to;
}

This works much better, until it fails, and it can fail pretty epically. Try this:

$.extend(document.body.style, {
	backgroundColor: "red"
});

Both in Chrome and Firefox, the results are super weird. Even though reading document.body.style.backgroundColor will return "red", no style will have actually been applied. In Firefox it even destroyed the native setter entirely and any future attempts to set document.body.style.backgroundColor in the console did absolutely nothing.

In contrast, the previous naïve approach worked fine for this. It’s clear that we need to somehow combine the two approaches, using Object.defineProperty() only when actually needed. But when is it actually not needed?
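Peeking at a couple of descriptors in a console gives a hint (a sketch; exact output shapes vary by browser):

Object.getOwnPropertyDescriptor({backgroundColor: "red"}, "backgroundColor");
// → {value: "red", writable: true, enumerable: true, configurable: true}

Object.getOwnPropertyDescriptor(document.body.style, "backgroundColor");
// → undefined: the property typically lives on CSSStyleDeclaration.prototype,
// so it is not an own property of the style object itself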

One obvious case is if the descriptor is undefined (such as with some native properties). Also, for simple properties, such as those in our object literal, the descriptor will be of the form {value: somevalue, writable: true, enumerable: true, configurable: true}. So, the next obvious step would be:

$.extend = function (to, from) {
	for (var property in from) {
		var descriptor = Object.getOwnPropertyDescriptor(from, property);

		if (descriptor && (!descriptor.writable || !descriptor.configurable || !descriptor.enumerable || descriptor.get || descriptor.set)) {
			Object.defineProperty(to, property, descriptor);
		}
		else {
			to[property] = from[property];
		}
	}

	return to;
}

This works perfectly, but is a little clumsy. I’ve currently left it at that, but any suggestions for making it more elegant are welcome :)
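One direction (a sketch, not the code I’m actually using) is to pull the test out into a named helper, so the condition reads as intent:

// Does this descriptor describe a plain, default data property?
function isPlainData(descriptor) {
	return descriptor.writable && descriptor.enumerable && descriptor.configurable
		&& !descriptor.get && !descriptor.set;
}

$.extend = function (to, from) {
	for (var property in from) {
		var descriptor = Object.getOwnPropertyDescriptor(from, property);

		if (!descriptor || isPlainData(descriptor)) {
			to[property] = from[property];
		}
		else {
			Object.defineProperty(to, property, descriptor);
		}
	}

	return to;
};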

FWIW, I looked at jQuery’s implementation of jQuery.extend() after this, and it seems it doesn’t even handle accessors at all, unless I missed something. Time for a pull request, perhaps…

Edit: As MaxArt pointed out in the comments, there is a similar native method in ES6, Object.assign(). However, it does not deal with copying accessors, so does not deal with this problem either.


On the blindness of blind reviews


Over the last couple of years, blind reviews have been popularized as the ultimate method for fair talk selection in industry conferences. While I don’t really submit proposals myself, I have served several times on the other side of the process, doing speaker selection in conference committees, and the more data points I collect, the more convinced I become that the blind selection process is fundamentally flawed.

Blind reviews come from the world of academia. However, in academic conferences, you do not judge a talk by a 1-2 paragraph abstract, but by a 10+ page paper, so there’s way more to judge by. In addition, in academia the content of the research matters infinitely more than the quality of a talk. In industry conferences, selection committees in blind reviews have both way less data to use, and a much harder task, as they need to balance several factors (content, speaker skill, talk quality etc). It’s no surprise that the results end up being even more of a gamble.

Blind reviews result in conservative talk selection. More often than not, I remember me and my fellow committee members saying “Damn, this talk could be great with the right presenter, but that’s rare” and giving it a poor or average score. Few topics can make good talks regardless of the presenters. Therefore, when there is little information on the speaker in the initial selection round, talk selection ends up being conservative, rejecting more challenging topics that need a skilled speaker to shine and sticking to safer choices.

One of my most successful talks ever was “The humble border-radius” which was shortlisted for a .net award for Conference Talk of The Year 2014. It would never have passed any blind review. There is no committee in their right mind that would have accepted a 45 minute talk about …border-radius. The conferences I presented it at invited me as a speaker, carte blanche, and trusted me to present on whatever I felt like. Judging by the reviews, they were not disappointed.

In addition, all too many times I’ve seen great speakers get poor scores in blind reviews, not because their talks were not good, but because writing good abstracts is an entirely separate skill. Blind reviews remove anything that could cause bias, but they do so by stripping all personality away from a proposal. In addition, a good abstract for a blind review is not necessarily a good abstract in general. For example, blind reviews penalize more mysterious/teasy abstracts and tend to be skewed towards overly detailed ones, since it’s the only data the committee gets for these talks (bonus points here for CfS that have a separate field for more details to conf organizers).

“But what about newcomers to the conference circuit? What about bias elimination?” one might ask. Both very valid concerns. I’m not saying any kind of anonymization is a bad idea. I’m saying that in their present form in industry conferences, blind reviews are flawed. For example, an initial round of blind reviews to pick good talks, without rejecting any at that stage, would probably solve these issues, without suffering from the flaws mentioned above.

Disclaimer: I do recognize that most people in these committees are doing their best to select fairly, and putting many hours of (usually volunteer) work in it. I’m not criticizing them, I’m criticizing the process. And yes, I recognize that it’s a process that has come out of very good intentions (eliminating bias). However, good intentions are not a guarantee for infallibility.


Stretchy: Form element autosizing, the way it should be


As you might be aware, these days a good chunk of my time is spent working on research, at MIT. Although it’s still too early to talk about my research project, I can say that it’s related to the Web and it will be open source, both of which are pretty awesome (getting paid to work on cool open source stuff is the dream, right?).

The one thing I can mention about my project is that it involves a lot of editing of Web content. And since contentEditable is a mess, as you all know, I decided to use form controls styled like the content being edited. This meant that I needed a good script for form control autosizing, one that worked on multiple types of form controls (inputs, textareas, even select menus). In addition, I needed the script to smoothly work for newly added controls, without me having to couple the rest of my code with it and call API methods or fire custom events every time new controls were added anywhere. A quick look at the existing options quickly made it obvious that I had to write my own.

After writing it, I realized this could be released entirely separately as it was a standalone utility. So Stretchy was born :) I made a quick page for it, fixed a few cross-browser bugs that needed fixing anyway, put it up on Github and here it is!

Enjoy!

PS: You can also use it as a bookmarklet, to autosize form controls on an existing page, if a form is bothering you with its poor usability. You will find it in the footer.


Spot the unsubscribe (link)!


After getting fed up with too many “promotional” emails and newsletters with incredibly obscure unsubscribe links, I decided to make this tumblr to point out such examples of digital douchebaggery. This annoying dark pattern is so widespread that Google even added a feature to Gmail for making those unsubscribe links obvious!

Unsubscribe links are crucial to promotional emails. They are not just another menu item. They are not something that should be hidden in a blurb of tiny low contrast text. Unsubscribe links should be immediately obvious to anyone looking for them. You want people to be reading your email because they’re interested, not because they can’t find the way out. Otherwise you are the digital equivalent of those annoying door-to-door salesmen who just won’t go away.

— From my introductory post on Spot the unsubscribe!

On the spur of the moment, after yet another email newsletter with a hard-to-find Unsubscribe link, I decided to quickly put together a tumblog about this UX pet peeve of mine, called Spot the Unsubscribe!. In less than an hour, it was ready and had a few posts as well :)

Hopefully if this bothers others as well, there will be submissions. Otherwise, new posts will be rather infrequent.


Conical gradients, today!


It’s no secret that I like conical gradients. As early as 2011, I wrote a draft for conical-gradient() in CSS, which Tab later said helped him when he added them to CSS Image Values Level 4 in 2012. However, almost three years later, no progress has been made in implementing them. Sure, the spec is still relatively incomplete, but that’s not the reason conical gradients have gotten no traction. Far more underspecified features have gotten experimental implementations in the past. The reason conical gradients are still unimplemented is that very few developers know they exist, so browsers see no demand.

Another reason was that Cairo, the graphics library used in Chrome and Firefox, had no way of drawing a conical gradient. However, this changed a while ago, when it gained support for mesh gradients, of which conical gradients are a mere special case.

Recently, I was giving a talk on creating pie charts with CSS at a few conferences, and yet again I was reminded of how useful conical gradients can be. While every CSS or SVG solution is several lines of code with varying levels of hackiness, conical gradients can give us a pie chart with a straightforward, DRY one-liner. For example, this is how to create a pie chart that shows 40% in gold and 60% in #f06:

padding: 5em; /* size */
background: conic-gradient(gold 40%, #f06 0);
border-radius: 50%; /* make it round */

So, I decided to take matters into my own hands. I wrote a polyfill, which I also used in my talk to demonstrate how awesome conical gradients can be and what cool things they can do. Today, during my CSSConf talk, I released it publicly.

In addition, I mention to developers how important speaking up is for getting their favorite features implemented. Browsers prioritize which features to implement based on what developers ask for. It’s a pity that so few of us realize how much of a say we collectively have in this. This is more obvious with Microsoft and their UserVoice forum, where developers can vote on which features they want to see worked on, but pretty much every major browser works in a similar way. They monitor what developers request and what other browsers implement, and decide accordingly. The squeaky wheel will get the feature, so if you really want to see something implemented, speak up.

Since “speaking up” can be a bit vague (“speak up where?” I can hear you asking), I also filed bug reports with all major browsers, that you can also find in the polyfill page, so that you can comment or vote on them. That doesn’t mean that speaking up on blogs or social media is not useful though: That’s why browsers have devrel teams. The more noise we collectively make about the features we want, the more likely it is to be heard. However, the odds are higher if we all channel our voices to the venues browser developers follow and our voice is stronger and louder if we concentrate it in the same places instead of having many separate voices all over the place.

Also, I’m using the term “noise” here a bit figuratively. While it’s valuable to make it clear that we are interested in a certain feature, it’s even more useful to say why. Providing use cases will not only grab browsers’ attention more, but it will also convince other developers as well.

So go ahead, play with conic gradients, and if you agree with me that they are fucking awesome and we need them natively on the Web, make noise.

conic-gradient() polyfill


Idea: Extending native DOM prototypes without collisions


As I pointed out in yesterday’s blog post, one of the reasons why I don’t like using jQuery is its wrapper objects. For jQuery, this was a wise decision: Back in 2006 when it was first developed, IE releases had a pretty icky memory leak bug that could be easily triggered when one added properties to elements. Oh, and we also didn’t have access to element prototypes on IE back then, so we had to add these properties manually on every element. Prototype.js attempted to go that route and the result was such a mess that they decided to change their decision in Prototype 2.0 and go with wrapper objects too. There were even long essays being written back then about how much of a monumentally bad idea it was to extend DOM elements.

The first IE release that exposed element prototypes was IE8: We got access to Node.prototype, Element.prototype and a few more. Some were mutable, some were not. On IE9, we got the full bunch, including HTMLElement.prototype and its descendants, such as HTMLParagraphElement. The memory leak bugs were mitigated in IE8 and fixed in IE9. However, we still don’t extend native DOM elements, and for good reason: collisions are still a very real risk. No library wants to add a bunch of methods on elements; it’s just bad form. It’s like being invited into someone’s house and defecating all over the floor.

But what if we could add methods to elements without the chance of collisions? (well, technically, by minimizing said chance). We could only add one property to Element.prototype, and then hang all our methods on that. E.g. if our library was called yolo and had two methods, foo() and bar(), calls to it would look like:

var element = document.querySelector(".someclass");
element.yolo.foo();
element.yolo.bar();
// or you can even chain, if you return the element in each of them!
element.yolo.foo().yolo.bar();

Sure, it’s more awkward than wrapper objects, but the benefit of using native DOM elements is worth it if you ask me. Of course, YMMV.

It’s basically exactly the same thing we do with globals: We all know that adding tons of global variables is bad practice, so every library adds one global and hangs everything off of that.

However, if we try to implement something like this in the naïve way, we will find that it’s kind of hard to reference the element used from our namespaced functions:

Element.prototype.yolo = {
	foo: function () {
		console.log(this);
	},

	bar: function () { /* … */ }
};

someElement.yolo.foo(); // Object {foo: function, bar: function}

What happened here? this inside any of these functions refers to the object that they are called on, not the element that object is hanging on! We need to be a bit more clever to get around this issue.

Keep in mind that this in the object inside yolo would have access to the element we’re trying to hang these methods off of. But we’re not running any code there, so we’re not taking advantage of that. If only we could get a reference to that object’s context! However, running a function (e.g. element.yolo().foo()) would spoil our nice API.

Wait a second. We can run code on properties, via ES5 accessors! We could do something like this:

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		return {
			element: this,
			foo: function() {
				console.log(this.element);
			},

			bar: function() { /* … */ }
		};
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

This works, but there is a rather annoying issue here: We are generating this object and redefining our functions every single time this property is called. This is a rather bad idea for performance. Ideally, we want to generate this object once, and then return the generated object. We also don’t want every element to have its own completely separate instance of the functions we defined, we want to define these functions on a prototype, and use the wonderful JS inheritance for them, so that our library is also dynamically extensible. Luckily, there is a way to do all this too:

var Yolo = function(element) {
	this.element = element;
};

Yolo.prototype = {
	foo: function() {
		console.log(this.element);
	},

	bar: function() { /* … */ }
};

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		// Overwrite the accessor with a plain property on first access,
		// so the Yolo instance is only created once per element
		Object.defineProperty(this, "yolo", {
			value: new Yolo(this)
		});

		return this.yolo;
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

// And it's dynamically extensible too!
Yolo.prototype.baz = function(color) {
	this.element.style.background = color;
};

someElement.yolo.baz("red"); // Our element gets a red background

Note that in the above, the getter is only executed once. After that, it overwrites the yolo property with a static value: An instance of the Yolo object. Since we’re using Object.defineProperty() we also don’t run into the issue of breaking enumeration (for..in loops), since these properties have enumerable: false by default.

There is still the wart that these methods need to use this.element instead of this. We could fix this by wrapping them:

for (let method in Yolo.prototype) {
	let callback = Yolo.prototype[method];

	Yolo.prototype[method] = function () {
		var ret = callback.apply(this.element, arguments);

		// Return the element, for chainability!
		return ret === undefined? this.element : ret;
	};
}

However, now you can’t dynamically add methods to Yolo.prototype and have them automatically work like the native Yolo methods in element.yolo, so it kinda hurts extensibility (of course you could still add methods that use this.element and they would work).

Thoughts?


jQuery considered harmful


Heh, I always wanted to do one of those “X considered harmful” posts*. :D

Before I start, let me say that I think jQuery has helped tremendously to move the Web forward. It gave developers power to do things that were previously unthinkable, and pushed the browser manufacturers to implement these things natively (without jQuery we probably wouldn’t have document.querySelectorAll now). And jQuery is still needed for those that cannot depend on the goodies we have today and have to support relics of the past like IE8 or worse.

However, as much as I feel for these poor souls, they are the minority. There are tons of developers that don’t need to support old browsers with a tiny market share. And let’s not forget those who aren’t even Web professionals: Students and researchers not only don’t need to support old browsers, but can often get by just supporting a single browser! You would expect that everyone in academia would be having tons of fun using all the modern goodies of the Open Web Platform, right? And yet, I haven’t seen jQuery being so prominent anywhere else as much as it is in academia. Why? Because this is what they know, and they really don’t have the time or interest to follow the news on the Open Web Platform. They don’t know what they need jQuery for, so they just use jQuery anyway. However, being able to do these things natively now is not the only reason I’d rather avoid jQuery.

Yes, you probably don’t really need it…

I’m certainly not the first one to point out how much of jQuery usage is about things you can do natively, so I won’t spend time repeating what others have written. Just visit the following and dive in:

I will also not spend time talking about file size or how much faster native methods are. These have been talked about before. Today, I want to make a point that is not frequently talked about…

…but that’s not even the biggest reason not to use it today

To avoid extending the native element prototypes, jQuery uses its own wrapper objects. Extending native objects in the past was a huge no-no, not only due to potential collisions, but also due to memory leaks in old IE. So, what is returned when you run $("div") is not a reference to an element, or a NodeList, it’s a jQuery object. This means that a jQuery object has completely different methods available to it than a reference to a DOM element, an array with elements or any type of NodeList. However, these native objects come up all the time in real code — as much as jQuery tries to abstract them away, you always have to deal with them, even if it’s just wrapping them in $(). For example, the context when a callback is called via jQuery’s .bind() method is a reference to an HTML element, not a jQuery collection. Not to mention that often you use code from multiple sources — some of them assume jQuery, some don’t. Therefore, you always end up with code that mixes jQuery objects, native elements and NodeLists. And this is where the hell begins.
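A small illustration of the mismatch (hypothetical selector; the callback context is jQuery’s documented behavior):

$("button").bind("click", function () {
	console.log(this);          // a native HTMLElement
	console.log(this.addClass); // undefined: jQuery methods don't exist here
	console.log($(this));       // must re-wrap to get jQuery methods back
});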

If the developer has followed a naming convention for which variables contain jQuery objects (prepending the variable names with a dollar sign is the common one I believe) and which contain native elements, this is less of a problem (humans often end up forgetting to follow such conventions, but let’s assume a perfect world here). However, in most cases no such convention is followed, which results in the code being incredibly hard to understand by anyone unfamiliar with it. Every edit entails a lot of trial and error now (“Oh, it’s not a jQuery object, I have to wrap it with $()!” or “Oh, it’s not an element, I have to use [0] to get an element!”). To avoid such confusion, developers making edits often end up wrapping anything in $() defensively, so throughout the code, the same variable will have gone through $() multiple times. For the same reason, it also becomes especially hard to refactor jQuery out of said code. You are essentially locked in.

Even if naming conventions have been followed, you can’t just deal only with jQuery objects. You often need to use a native DOM method or call a function from a script that doesn’t depend on jQuery. Soon, conversions to and from jQuery objects are all over the place, cluttering your code.

In addition, when you add code to said codebase, you usually end up wrapping every element or nodelist reference with $() as well, because you don’t know what input you’re getting. So, not only you’re locked in, but all future code you write for the same codebase is also locked in.

Get any random script with a jQuery dependency that you didn’t write yourself and try to refactor it so that it doesn’t need jQuery. I dare you. You will see that your main issue will not be how to convert the functionality to use native APIs, but understanding what the hell is going on.

A pragmatic path to JS nudity

Sure, many libraries today require jQuery, and like I recently tweeted, avoiding it entirely can feel like you’re some sort of digital vegan. However, this doesn’t mean you have to use it yourself. Libraries can always be replaced in the future, when good non-jQuery alternatives become available.

Also, most libraries are written in such a way that they do not require the $ variable to be aliased to jQuery. Just call jQuery.noConflict() to reclaim the $ variable and be able to assign it to whatever you see fit. For example, I often define these helper functions, inspired from the Command Line API:

// Returns first element that matches CSS selector {expr}.
// Querying can optionally be restricted to {container}’s descendants
function $(expr, container) {
	return typeof expr === "string"? (container || document).querySelector(expr) : expr || null;
}

// Returns all elements that match CSS selector {expr} as an array.
// Querying can optionally be restricted to {container}’s descendants
function $$(expr, container) {
	return [].slice.call((container || document).querySelectorAll(expr));
}

In addition, I think that having to type jQuery instead of $ every time you use it somehow makes you think twice about superfluously using it without really needing to, but I could be wrong :)

Also, if you actually like the jQuery API, but want to avoid the bloat, consider using Zepto.

* I thought it was brutally obvious that the title was tongue-in-cheek, but hey, it’s the Internet, and nothing is obvious. So there: The title is tongue-in-cheek and I’m very well aware of Eric’s classic essay against such titles.