Copying object properties, the robust way

If, like me, you try to avoid heavy libraries when they’re not needed, you have almost certainly written a helper to copy properties from one object to another at some point. It’s needed so often that it’s just silly to write the same loops over and over again.

These days, most of my time is spent working on my research project at MIT, which I will hopefully reveal later this year. For it, I’m using a lightweight homegrown helper library, which I might release separately at some point, as I think it has potential in its own right.

Of course, it needed to have a simple extend() method as well, to copy properties from one object to another. Let’s assume for the purposes of this article that we’re talking about shallow copying, that overwrites are allowed, and let’s omit hasOwnProperty() checks to make code easier to read.

It’s a simple task, right? Our first attempt might look like this:

$.extend = function (to, from) {
	for (var property in from) {
		to[property] = from[property];
	}
	
	return to;
}

This works fine, until you try it on objects with accessors or other types of properties defined via Object.defineProperty() or get/set keywords. What do you do then? Our next iteration could look like this:

$.extend = function (to, from) {
	for (var property in from) {
		Object.defineProperty(to, property, Object.getOwnPropertyDescriptor(from, property));
	}
	
	return to;
}

This works much better, until it fails, and it can fail pretty epically. Try this:

$.extend(document.body.style, {
	backgroundColor: "red"
});

Both in Chrome and Firefox, the results are super weird. Even though reading document.body.style.backgroundColor returns "red", no style has actually been applied. In Firefox, it even destroyed the native setter entirely: any future attempts to set document.body.style.backgroundColor in the console did absolutely nothing.

In contrast, the previous naïve approach worked fine for this. It’s clear that we need to somehow combine the two approaches, using Object.defineProperty() only when actually needed. But when is it actually not needed?

One obvious case is when the descriptor is undefined (as with some native properties). Also, for simple properties, such as those in our object literal, the descriptor will be of the form {value: somevalue, writable: true, enumerable: true, configurable: true}. So, the next obvious step would be:

$.extend = function (to, from) {
	for (var property in from) {
		var descriptor = Object.getOwnPropertyDescriptor(from, property);

		if (descriptor && (!descriptor.writable || !descriptor.configurable || !descriptor.enumerable || descriptor.get || descriptor.set)) {
			Object.defineProperty(to, property, descriptor);
		}
		else {
			to[property] = from[property];
		}
	}

	return to;
}

This works perfectly, but is a little clumsy. I’ve currently left it at that, but any suggestions for making it more elegant are welcome :)
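
For instance, here’s a quick sanity check (a hypothetical example, not from any test suite):

var source = {
	color: "red",
	get greeting() { return "hello"; }
};

var copy = $.extend({}, source);

// color was copied via plain assignment, but greeting
// was copied as a live accessor, not a frozen value:
console.log(Object.getOwnPropertyDescriptor(copy, "greeting").get); // function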

FWIW, I looked at jQuery’s implementation of jQuery.extend() after this, and it seems it doesn’t even handle accessors at all, unless I missed something. Time for a pull request, perhaps…

Edit: As MaxArt pointed out in the comments, there is a similar native method in ES6, Object.assign(). However, it does not copy accessors either (it reads them and copies their current value), so it does not solve this problem.
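
To illustrate (a minimal check, runnable in any ES6-capable console):

var source = { get foo() { return "bar"; } };
var copy = Object.assign({}, source);

// Object.assign() invoked the getter and copied its current value
// as a plain data property:
console.log(Object.getOwnPropertyDescriptor(copy, "foo"));
// {value: "bar", writable: true, enumerable: true, configurable: true}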

On the blindness of blind reviews

Over the last couple of years, blind reviews have been popularized as the ultimate method for fair talk selection in industry conferences. While I don’t really submit proposals myself, I have served several times on the other side of the process, doing speaker selection in conference committees, and the more data points I collect, the more convinced I become that the blind selection process is fundamentally flawed.

Blind reviews come from the world of academia. However, in academic conferences, you do not judge a talk by a 1-2 paragraph abstract, but by a 10+ page paper, so there’s way more to judge by. In addition, in academia the content of the research matters infinitely more than the quality of a talk. In industry conferences, selection committees in blind reviews have both way less data to use, and a much harder task, as they need to balance several factors (content, speaker skill, talk quality etc). It’s no surprise that the results end up being even more of a gamble.

Blind reviews result in conservative talk selection. More often than not, I remember my fellow committee members and myself saying “Damn, this talk could be great with the right presenter, but that’s rare” and giving it a poor or average score. Few topics can make good talks regardless of the presenter. Therefore, when there is little information on the speaker in the initial selection round, talk selection ends up being conservative, rejecting more challenging topics that need a skilled speaker to shine and sticking to safer choices.

One of my most successful talks ever was “The humble border-radius”, which was shortlisted for a .net award for Conference Talk of The Year 2014. It would never have passed any blind review. No committee in its right mind would have accepted a 45-minute talk about …border-radius. The conferences I presented it at invited me as a speaker, carte blanche, and trusted me to present on whatever I felt like. Judging by the reviews, they were not disappointed.

In addition, all too many times I’ve seen great speakers get poor scores in blind reviews, not because their talks were not good, but because writing good abstracts is an entirely separate skill. Blind reviews remove anything that could cause bias, but they do so by stripping all personality away from a proposal. Furthermore, a good abstract for a blind review is not necessarily a good abstract in general. For example, blind reviews penalize more mysterious/teasy abstracts and tend to be skewed towards overly detailed ones, since the abstract is the only data the committee gets for these talks (bonus points here for CfSes that have a separate field for extra details intended only for the organizers).

“But what about newcomers to the conference circuit? What about bias elimination?” one might ask. Both very valid concerns. I’m not saying any kind of anonymization is a bad idea. I’m saying that in their present form in industry conferences, blind reviews are flawed. For example, an initial round of blind reviews to pick good talks, without rejecting any at that stage, would probably solve these issues, without suffering from the flaws mentioned above.

Disclaimer: I do recognize that most people in these committees are doing their best to select fairly, and putting many hours of (usually volunteer) work in it. I’m not criticizing them, I’m criticizing the process. And yes, I recognize that it’s a process that has come out of very good intentions (eliminating bias). However, good intentions are not a guarantee for infallibility.

Stretchy: Form element autosizing, the way it should be

As you might be aware, these days a good chunk of my time is spent working on research, at MIT. Although it’s still too early to talk about my research project, I can say that it’s related to the Web and it will be open source, both of which are pretty awesome (getting paid to work on cool open source stuff is the dream, right?).

The one thing I can mention about my project is that it involves a lot of editing of Web content. And since contentEditable is a mess, as you all know, I decided to use form controls styled like the content being edited. This meant that I needed a good script for form control autosizing, one that worked on multiple types of form controls (inputs, textareas, even select menus). In addition, I needed the script to work smoothly for newly added controls, without me having to couple the rest of my code with it and call API methods or fire custom events every time new controls were added anywhere. A quick look at the existing options made it obvious that I had to write my own.
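
For the curious, one way to satisfy the “newly added controls” requirement without any explicit API calls is a MutationObserver watching the whole document. A rough sketch of the general idea, assuming a hypothetical autosize() function that does the actual measuring and resizing (this is not Stretchy’s actual code):

new MutationObserver(function (mutations) {
	mutations.forEach(function (mutation) {
		[].forEach.call(mutation.addedNodes, function (node) {
			if (node.nodeType !== 1) { return; } // elements only

			if (node.matches && node.matches("input, select, textarea")) {
				autosize(node);
			}

			// Also catch form controls nested inside the added subtree
			[].forEach.call(node.querySelectorAll("input, select, textarea"), autosize);
		});
	});
}).observe(document.documentElement, { childList: true, subtree: true });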

After writing it, I realized this could be released entirely separately as it was a standalone utility. So Stretchy was born :) I made a quick page for it, fixed a few cross-browser bugs that needed fixing anyway, put it up on Github and here it is!

Enjoy!

PS: You can also use it as a bookmarklet, to autosize form controls on an existing page, if a form is bothering you with its poor usability. You will find it in the footer.

Spot the unsubscribe (link)!

After getting fed up with too many “promotional” emails and newsletters with incredibly obscure unsubscribe links, I decided to make this tumblr to point out such examples of digital douchebaggery. This annoying dark pattern is so widespread that Google even added a feature to Gmail for making those unsubscribe links obvious!

Unsubscribe links are crucial to promotional emails. They are not just another menu item. They are not something that should be hidden in a blurb of tiny low contrast text. Unsubscribe links should be immediately obvious to anyone looking for them. You want people to be reading your email because they’re interested, not because they can’t find the way out. Otherwise you are the digital equivalent of those annoying door-to-door salesmen who just won’t go away.

— From my introductory post on Spot the unsubscribe!

On the spur of the moment, after yet another email newsletter with a hard to find Unsubscribe link, I decided to quickly put together a tumblog about this UX pet peeve of mine, called Spot the Unsubscribe!. In less than an hour, it was ready and had a few posts as well :)

Hopefully if this bothers others as well, there will be submissions. Otherwise, new posts will be rather infrequent.

Conical gradients, today!

It’s no secret that I like conical gradients. As early as 2011, I wrote a draft for conical-gradient() in CSS, which Tab later said helped him when he added conical gradients to CSS Image Values Level 4 in 2012. However, almost three years later, no progress has been made in implementing them. Sure, the spec is still relatively incomplete, but that’s not the reason conical gradients have gotten no traction; far more underspecified features have gotten experimental implementations in the past. The reason conical gradients are still unimplemented is that very few developers know they exist, so browsers see no demand.

Another reason was that Cairo, the graphics library used in Chrome and Firefox, had no way of drawing a conical gradient. However, this changed a while ago, when it gained support for mesh gradients, of which conical gradients are a mere special case.

Recently, I was giving a talk on creating pie charts with CSS at a few conferences, and yet again, I was reminded of how useful conical gradients can be. While every CSS or SVG solution takes several lines of code with varying levels of hackiness, conical gradients can give us a pie chart with a straightforward, DRY one-liner. For example, this is how to create a pie chart that shows 40% in gold and 60% in #f06:

padding: 5em; /* size */
background: conic-gradient(gold 40%, #f06 0);
border-radius: 50%; /* make it round */

So, I decided to take matters in my own hands. I wrote a polyfill, which I also used in my talk to demonstrate how awesome conical gradients can be and what cool things they can do. Today, during my CSSConf talk, I released it publicly.

In my talks, I also mention to developers how important speaking up is for getting their favorite features implemented. Browsers prioritize which features to implement based on what developers ask for. It’s a pity that so few of us realize how much of a say we collectively have in this. This is most obvious with Microsoft and their UserVoice forum, where developers can vote on which features they want to see worked on, but pretty much every major browser works in a similar way. They monitor what developers request and what other browsers implement, and decide accordingly. The squeaky wheel will get the feature, so if you really want to see something implemented, speak up.

Since “speaking up” can be a bit vague (“speak up where?” I can hear you asking), I also filed bug reports with all major browsers, which you can find on the polyfill page, so that you can comment or vote on them. That doesn’t mean that speaking up on blogs or social media is not useful: that’s why browsers have devrel teams. The more noise we collectively make about the features we want, the more likely it is to be heard, and the odds are better if we channel our voices to the venues browser developers actually follow, concentrated in the same places rather than scattered all over.

Also, I’m using the term “noise” here a bit figuratively. While it’s valuable to make it clear that we are interested in a certain feature, it’s even more useful to say why. Providing use cases will not only grab browsers’ attention more, but it will also convince other developers as well.

So go ahead, play with conic gradients, and if you agree with me that they are fucking awesome and we need them natively on the Web, make noise.

conic-gradient() polyfill

Idea: Extending native DOM prototypes without collisions

As I pointed out in yesterday’s blog post, one of the reasons why I don’t like using jQuery is its wrapper objects. For jQuery, this was a wise decision: back in 2006, when it was first developed, IE releases had a pretty icky memory leak bug that could easily be triggered when one added properties to elements. Oh, and we also didn’t have access to element prototypes in IE back then, so we had to add these properties manually on every element. Prototype.js attempted to go that route, and the result was such a mess that they reversed the decision in Prototype 2.0 and went with wrapper objects too. There were even long essays written back then about what a monumentally bad idea it was to extend DOM elements.

The first IE release that exposed element prototypes was IE8: we got access to Node.prototype, Element.prototype and a few more. Some were mutable, some were not. In IE9, we got the full bunch, including HTMLElement.prototype and its descendants, such as HTMLParagraphElement. The memory leak bugs were mitigated in IE8 and fixed in IE9. However, we still don’t extend native DOM elements, and for good reason: collisions are still a very real risk. No library wants to add a bunch of methods on elements; it’s just bad form. It’s like being invited into someone’s house and defecating all over the floor.

But what if we could add methods to elements without the chance of collisions? (Well, technically, by minimizing said chance.) We could add only one property to Element.prototype, and then hang all our methods off of that. E.g. if our library was called yolo and had two methods, foo() and bar(), calls to it would look like:

var element = document.querySelector(".someclass");
element.yolo.foo();
element.yolo.bar();
// or you can even chain, if you return the element in each of them!
element.yolo.foo().yolo.bar();

Sure, it’s more awkward than wrapper objects, but the benefit of using native DOM elements is worth it if you ask me. Of course, YMMV.

It’s essentially the same thing we do with globals: we all know that adding tons of global variables is bad practice, so every library adds one global and hangs everything off of that.

However, if we try to implement something like this in the naïve way, we will find that it’s kind of hard to reference the element used from our namespaced functions:

Element.prototype.yolo = {
	foo: function () {
		console.log(this); 
	},
	
	bar: function () { /* ... */ }
};

someElement.yolo.foo(); // Object {foo: function, bar: function}

What happened here? this inside any of these functions refers to the object that they are called on, not the element that object is hanging on! We need to be a bit more clever to get around this issue.

Keep in mind that if any code ran when the yolo property itself is accessed, its this would be the element we’re trying to hang these methods off of. But a plain data property runs no code, so we’re not taking advantage of that. If only we could get a reference to that context! However, making yolo a function call (e.g. element.yolo().foo()) would spoil our nice API.

Wait a second. We can run code on properties, via ES5 accessors! We could do something like this:

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		return {
			element: this,
			foo: function() {
				console.log(this.element);
			},
			
			bar: function() { /* ... */ }
		}
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

This works, but there is a rather annoying issue here: we are generating this object and redefining our functions every single time the property is accessed, which is rather bad for performance. Ideally, we want to generate the object once and return that same object thereafter. We also don’t want every element to have its own completely separate instance of the functions we defined; we want to define these functions on a prototype, and use the wonderful JS inheritance for them, so that our library is also dynamically extensible. Luckily, there is a way to do all this too:

var Yolo = function(element) {
	this.element = element;
};

Yolo.prototype = {
	foo: function() {
		console.log(this.element);
	},
	
	bar: function() { /* ... */ }
};

Object.defineProperty(Element.prototype, "yolo", {
	get: function () {
		Object.defineProperty(this, "yolo", {
			value: new Yolo(this)
		});
		
		return this.yolo;
	},
	configurable: true
});

someElement.yolo.foo(); // It works! (Logs our actual element)

// And it’s dynamically extensible too!
Yolo.prototype.baz = function(color) {
	this.element.style.background = color;
};

someElement.yolo.baz("red") // Our element gets a red background

Note that in the above, the getter is only executed once per element. After that, it shadows the prototype’s yolo property with a static value on the element itself: an instance of the Yolo object. Since we’re using Object.defineProperty(), we also don’t run into the issue of breaking enumeration (for..in loops), since these properties have enumerable: false by default.
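
Two quick checks illustrate both benefits (hypothetical console session, with someElement as before):

// The getter only runs once; subsequent accesses return the same instance:
console.log(someElement.yolo === someElement.yolo); // true

// The property is there, but does not pollute enumeration:
console.log("yolo" in someElement);                    // true
console.log(Object.keys(someElement).indexOf("yolo")); // -1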

There is still the wart that these methods need to use this.element instead of this. We could fix this by wrapping them:

for (let method in Yolo.prototype) {
	// let gives us a new binding per iteration, so each closure
	// captures the right original method
	let callback = Yolo.prototype[method];

	Yolo.prototype[method] = function () {
		var ret = callback.apply(this.element, arguments);

		// Return the element, for chainability!
		return ret === undefined? this.element : ret;
	};
}

However, now you can’t dynamically add methods to Yolo.prototype and have them automatically work like the native Yolo methods in element.yolo, so it kinda hurts extensibility (of course you could still add methods that use this.element and they would work).
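
For example (same hypothetical Yolo library as above):

// Added after the wrapping loop, so it does not get auto-wrapped.
// It still works, as long as it uses this.element instead of this:
Yolo.prototype.highlight = function () {
	this.element.style.outline = "2px solid gold";
	return this.element; // manual chainability, since there’s no wrapper
};

someElement.yolo.highlight();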

Thoughts?

jQuery considered harmful

Heh, I always wanted to do one of those “X considered harmful” posts*. 😀

Before I start, let me say that I think jQuery has helped tremendously to move the Web forward. It gave developers power to do things that were previously unthinkable, and pushed the browser manufacturers to implement these things natively (without jQuery we probably wouldn’t have document.querySelectorAll now). And jQuery is still needed for those that cannot depend on the goodies we have today and have to support relics of the past like IE8 or worse.

However, as much as I feel for these poor souls, they are the minority. There are tons of developers who don’t need to support old browsers with a tiny market share. And let’s not forget those who aren’t even Web professionals: students and researchers not only don’t need to support old browsers, but can often get by supporting a single browser! You would expect that everyone in academia would be having tons of fun using all the modern goodies of the Open Web Platform, right? And yet, nowhere have I seen jQuery as prominent as in academia. Why? Because it is what they know, and they really don’t have the time or interest to follow the news on the Open Web Platform. They don’t know what they need jQuery for, so they just use jQuery anyway. However, being able to do these things natively now is not the only reason I’d rather avoid jQuery.

Yes, you probably don’t really need it…

I’m certainly not the first one to point out how much of jQuery usage is about things you can do natively, so I won’t spend time repeating what others have written. Just visit the following and dive in:

I will also not spend time talking about file size or how much faster native methods are. These have been talked about before. Today, I want to make a point that is not frequently talked about…

…but that’s not even the biggest reason not to use it today

To avoid extending the native element prototypes, jQuery uses its own wrapper objects. Extending native objects in the past was a huge no-no, not only due to potential collisions, but also due to memory leaks in old IE. So, what is returned when you run $("div") is not a reference to an element, or a NodeList; it’s a jQuery object. This means that a jQuery object has completely different methods available to it than a reference to a DOM element, an array of elements or any type of NodeList. However, these native objects come up all the time in real code — as much as jQuery tries to abstract them away, you always have to deal with them, even if it’s just to wrap them in $(). For example, when a callback is called via jQuery’s .bind() method, its context (this) is a reference to an HTML element, not a jQuery collection. Not to mention that you often use code from multiple sources — some of it assumes jQuery, some doesn’t. Therefore, you always end up with code that mixes jQuery objects, native elements and NodeLists. And this is where the hell begins.

If the developer has followed a naming convention for which variables contain jQuery objects (prepending the variable names with a dollar sign is the common one I believe) and which contain native elements, this is less of a problem (humans often end up forgetting to follow such conventions, but let’s assume a perfect world here). However, in most cases no such convention is followed, which results in the code being incredibly hard to understand by anyone unfamiliar with it. Every edit entails a lot of trial and error now (“Oh, it’s not a jQuery object, I have to wrap it with $()!” or “Oh, it’s not an element, I have to use [0] to get an element!”). To avoid such confusion, developers making edits often end up wrapping anything in $() defensively, so throughout the code, the same variable will have gone through $() multiple times. For the same reason, it also becomes especially hard to refactor jQuery out of said code. You are essentially locked in.

Even if naming conventions have been followed, you can’t just deal only with jQuery objects. You often need to use a native DOM method or call a function from a script that doesn’t depend on jQuery. Soon, conversions to and from jQuery objects are all over the place, cluttering your code.

In addition, when you add code to said codebase, you usually end up wrapping every element or nodelist reference with $() as well, because you don’t know what input you’re getting. So, not only you’re locked in, but all future code you write for the same codebase is also locked in.

Get any random script with a jQuery dependency that you didn’t write yourself and try to refactor it so that it doesn’t need jQuery. I dare you. You will see that your main issue will not be how to convert the functionality to use native APIs, but understanding what the hell is going on.

A pragmatic path to JS nudity

Sure, many libraries today require jQuery, and like I recently tweeted, avoiding it entirely can feel like you’re some sort of digital vegan. However, this doesn’t mean you have to use it yourself. Libraries can always be replaced in the future, when good non-jQuery alternatives become available.

Also, most libraries are written in such a way that they do not require the $ variable to be aliased to jQuery. Just call jQuery.noConflict() to reclaim the $ variable and be able to assign it to whatever you see fit. For example, I often define these helper functions, inspired from the Command Line API:

// Returns first element that matches CSS selector {expr}.
// Querying can optionally be restricted to {container}’s descendants
function $(expr, container) {
	return typeof expr === "string"? (container || document).querySelector(expr) : expr || null;
}

// Returns all elements that match CSS selector {expr} as an array.
// Querying can optionally be restricted to {container}’s descendants
function $$(expr, container) {
	return [].slice.call((container || document).querySelectorAll(expr));
}
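
Usage then looks like this (selectors and class names here are just for illustration):

var nav = $("nav.main");   // first match, or null if nothing matches
var links = $$("a", nav);  // all matching descendants, as a real array

links.forEach(function (a) {
	a.classList.add("decorated");
});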

In addition, I think that having to type jQuery instead of $ every time you use it somehow makes you think twice about superfluously using it without really needing to, but I could be wrong :)

Also, if you actually like the jQuery API, but want to avoid the bloat, consider using Zepto.

* I thought it was brutally obvious that the title was tongue-in-cheek, but hey, it’s the Internet, and nothing is obvious. So there: The title is tongue-in-cheek and I’m very well aware of Eric’s classic essay against such titles.

Awesomplete: 2KB autocomplete with zero dependencies

Sorry for the lack of posts for the past 7 (!) months, I’ve been super busy working on my book, which up to a certain point, I couldn’t even imagine finishing, but I’m finally there! I’ve basically tried to cram all the CSS wisdom I’ve accumulated over the years in it 😛 (which is partly why it took so long, I kept remembering more things that just *had* to be in it. Its page count on the O’Reilly website had to be updated 3 times, from 250 to 300 to 350 and it looks like the final is gonna be closer to 400 pages) and it’s gonna be super awesome (preorder here!) 😀 . I have been posting a few CSS tricks now and then on my twitter account, but haven’t found any time to write a proper blog post.

Anyhow, despite being super busy with MIT (which btw is amazing, challenging in a good way, and full of fantastic people. So glad to be here!) and the book, I recently needed an autocomplete widget for something. Surprisingly, I don’t think I had ever needed to choose one in the past. I’ve worked with apps that had it, but in those cases it was already there.

At first, I didn’t fret. Finally, a chance to use the HTML5 <datalist>, so exciting! However, the more I played with it, the more my excitement was dying a slow death, taking my open web standards dreams and hopes along with it. Not only is it incredibly inconsistent across browsers (e.g. Chrome matches only from the start, Firefox anywhere!), it’s also not hackable or customizable in any way. Even when I got my hands dirty and used proprietary CSS, I still couldn’t do anything as simple as changing how the matching happens, styling the dropdown, or highlighting the matching text!

So, with a heavy heart, I decided to use a script. However, when I looked into it, everything seemed super bloated for my needs and anything with half decent usability required jQuery, which results in even more bloat.

So, I did what every crazy person with a severe case of NIH Syndrome would: I wrote one. It was super fun, and I don’t regret it, although now I’m even more pressed for time to meet my real deadlines. I wrote it primarily for myself, so even if nobody else uses it, ho hum, it was more fun than alternative ways to take a break. However, it’s my duty to put it on Github, in case someone else wants it and in case the community wants to take it into its loving, caring hands and pull request the hell out of it.

To be honest, I think it’s both pretty and pretty useful and even though it won’t suit complex needs out of the box, it’s pretty hackable/extensible. I even wrote quite a bit of documentation at some point this week when I was too sleepy to work and not sufficiently sleepy to sleep — because apparently that’s what was missing from my life: even more technical writing.

I saved the best for last: It’s so lightweight you might end up chasing it around if there’s a lot of wind when you download it. It’s currently a little under 1.5KB minified & gzipped (the website says 2KB because it will probably grow with commits and I don’t want to have to remember to update it all the time), with zero dependencies! 😀

And it’s even been verified to work in IE9 (sorta), IE10+, Chrome, Firefox, Safari 5+, Mobile Safari!

’Nuff said. Get it now!

PS: If you’re about to leave a comment on how it’s not called “autocomplete”, but “typeahead”, please go choke on a bucket of cocks instead. 😛

An easy notation for grayscale colors

These days, there is a lengthy discussion in the CSS WG about how to name a function that produces shades of gray (from white to black) with varying degrees of transparency, and we need your feedback about which name is easier to use.

The current proposals are:

1. gray(lightness [, alpha])

In this proposal, gray(0%) is black, gray(50%) is gray and gray(100%) is white. It also accepts numbers from 0 to 255, which correspond to rgb(x,x,x) values, so that gray(255) is white and gray(0) is black. Finally, it accepts an optional second argument for alpha transparency, so that gray(0, .5) would be equivalent to rgba(0,0,0,.5).

This is the naming of the function in the current CSS Color Level 4 draft.

2. white(lightness [, alpha])

Its arguments work in the same way as gray(), but it’s consistent with the expectation that function names that accept percentages give the “full effect” at 100%. gray(100%) sounds like a shade of gray, when it’s actually white. white(100%) is white, which might be more consistent with author expectations. Of course, this also accepts alpha transparency, like all the proposals listed here.

3. black(lightness [, alpha])

black() would work in the opposite way: black(0%) would be white, black(100%) would be black and black(50%, .5) would be semi-transparent gray. The idea is that people are familiar with thinking that way from grayscale printing.

4. rgb() with one argument and rgba() with two arguments

rgb(x) would be a shorthand for rgb(x, x, x), and rgba(x, y) would be a shorthand for rgba(x, x, x, y). So, rgb(0) would be black and rgb(100%) or rgb(255) would be white. The benefit is that authors are already accustomed to using rgb() for colors, and this would just be a shortcut. However, note that you would need to change the function name (rgb to rgba) just to get a semi-transparent version of the color. On the other hand, if in the future one needs to change the color to not be a shade of gray, no function name change is needed.
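
To compare, here is the same semi-transparent medium gray in each of the four proposals (all of this syntax is hypothetical at this point, of course):

color: gray(50%, .5);   /* proposal 1 */
color: white(50%, .5);  /* proposal 2 */
color: black(50%, .5);  /* proposal 3 */
color: rgba(50%, .5);   /* proposal 4, shorthand for rgba(50%, 50%, 50%, .5) */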

I’ve written some SCSS to emulate these functions, so you can play with them in your stylesheets and figure out which one is most intuitive. Unfortunately, rgb(x)/rgba(x,a) cannot be polyfilled in that way, as that would overwrite the native rgb()/rgba() functions — which might be an argument against them, since being able to polyfill through a preprocessor is quite a benefit for a new color format IMO.

You can vote here, but that’s mainly for easy vote counting. It’s strongly encouraged that you also leave a comment justifying your opinion, either here or in the list.

Vote now!

Also, tl;dr: if you can’t be bothered to read the post and understand the proposals well, please refrain from voting.

Image comparison slider with pure CSS

As a few of you know, I have been spending a good part of this year writing a book for O’Reilly called “CSS Secrets” (preorder here!). I wanted to include a “secret” about the various uses of the resize property, as it’s one of my favorite underdogs, since it rarely gets any love. However, just mentioning the typical use case of improving the UX of text fields didn’t feel like enough of a secret at all. The whole purpose of the book is to get authors to think outside the box about what’s possible with CSS, not to recite widely known applications of CSS features. So I started brainstorming: What else could we do with it?

Then I remembered Dudley’s awesome Before/After image slider from a while ago. While I loved the result, the markup isn’t great and it requires scripting. Also, both images are CSS backgrounds, so for a screen reader, there are no images there. And then it dawned on me: What if I overlaid a <div> on an image and made it horizontally resizable through the resize property? I tried it, and as you can see below, it worked!
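
Here is a rough sketch of the idea (markup and class names are mine for illustration, not necessarily identical to the final demo):

/* Markup: a wrapper containing the “after” image, plus a <div>
   wrapping the “before” image, overlaid on top:

   <div class="image-slider">
       <div><img src="before.png" alt="Before" /></div>
       <img src="after.png" alt="After" />
   </div>
*/
.image-slider {
	position: relative;
	display: inline-block;
}

.image-slider > div {
	position: absolute;
	top: 0; bottom: 0; left: 0;
	width: 50%;         /* initial divider position */
	max-width: 100%;
	overflow: hidden;   /* resize only works with overflow other than visible */
	resize: horizontal; /* the actual trick */
}

.image-slider img {
	display: block;
	user-select: none;
}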

The good parts:

  • More semantic markup (2 images & 2 divs). If object-fit were widely supported, it could even be just one div and two images.
  • No JS
  • Less CSS code

Of course, few things come with no drawbacks. In this case:

  • One big drawback is keyboard accessibility. Dudley’s demo uses a range input, so it’s keyboard accessible by design.
  • You can only drag from the bottom right corner. In Dudley’s demo, you can click at any point in the slider. And yes, I did try to style ::-webkit-resizer and increase its size so that it would at least have smoother UX in WebKit. However, no matter what I tried, nothing seemed to work.

Also, neither of the two seems to work on mobile.

It might not be perfect, but I think it’s a pretty cool demo of what’s possible with the resize property: everybody seems to use it only on textareas and the like, but its potential is much bigger.

And now if you’ll excuse me, I have a chapter to write 😉

Edit: It looks like somebody figured out a similar solution a few months ago, which does manage to make the resizer full height, albeit with less semantic HTML and more flimsy CSS. The main idea is that you use a separate element for the resizing (in this case a textarea) with a height of 15px, the height of the resizer. Then, a scaleY() transform stretches those 15px to the height of the image. Pretty cool! Unfortunately, it requires hardcoding the image size in the CSS.