Categories: Articles

The case for Weak Dependencies in JS

Earlier today, I was briefly entertaining the idea of writing a library to wrap and enhance querySelectorAll in certain ways. I thought I’d rather not introduce a Parsel dependency out of the box, but only use it to parse selectors properly when it’s available, falling back to cruder regexes when it’s not (which would cover most use cases for what I wanted to do).

In the olden days, when every library introduced a global, I could just do:

if (window.Parsel) {
	let ast = Parsel.parse(selector); // selector: the string being processed
	// rewrite selector properly, with AST
}
else {
	// crude regex replace
}

However, with ESM, there doesn’t seem to be a way to detect whether a module has been imported, without actually importing it yourself.
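
The closest thing ESM offers is a dynamic import() in a try/catch, but that is not detection: it fetches the module if nobody has loaded it yet, which is exactly what I wanted to avoid. A sketch (the URL is illustrative):

let Parsel;

try {
	// Succeeds whether or not anyone else has imported Parsel,
	// downloading it if needed, so it cannot *detect* anything
	Parsel = await import("https://cdn.example.com/parsel.js");
}
catch (e) {
	// Only network or parse failures end up here,
	// not "the module was never loaded"
}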

I tweeted about this…

I thought this was a common paradigm, and everyone would understand why this was useful. However, I was surprised to find that most people were baffled about my use case. Most of them thought I was either talking about conditional imports, or error recovery after failed imports.

I suspect it might be because my primary perspective for writing JS is that of a library author, where I do not control the host environment, whereas for most developers, their primary perspective is that of writing JS for a specific app or website.

After Kyle Simpson asked me to elaborate about the use case, I figured a blog post was in order.

The use case is essentially progressive enhancement (in fact, I toyed with the idea of titling this blog post “Progressively Enhanced JS”). If library X is loaded already by other code, do a more elaborate thing and cover all the edge cases, otherwise do a more basic thing. It’s for dependencies that are not really dependencies, but more like nice-to-haves.

We often see modules that do things really well, but use a ton of dependencies and add a lot of weight, even to the simplest of projects, because they need to cater to all the edge cases that we may not care about. We also see modules that are dependency-free, but only because lots of things are implemented more crudely, or certain features are missing.

This paradigm gives you the best of both worlds: dependency-free (or low-dependency) modules that can use what’s available to improve how they do things, with zero additional impact.

Using this paradigm, the size of these dependencies is not a concern, because they are optional peer dependencies, so you can pick the best library for each job without worrying about bundle size. Or even use multiple! You don’t even need to pick one dependency per task: you can support bigger, more complete libraries when they’re available and fall back to micro-libraries when they’re not.

Some examples besides the one in the first paragraph:

  • A Markdown to HTML converter that also syntax highlights blocks of code if Prism is present. Or it could even support multiple different highlighters!
  • A code editor that uses Incrementable to make numbers incrementable via arrow keys, if it’s present
  • A templating library that also uses Dragula to make items rearrangeable via drag & drop, if present
  • A testing framework that uses Tippy for nice informational popups, when it’s available
  • A code editor that shows code size (in KB) if a library to measure that is included. Same editor can also show gzipped code size if a gzip library is included.
  • A UI library that uses a custom element when it’s available, or the closest native one when it’s not (e.g. a fancy date picker vs <input type="date">). Or Awesomplete for autocomplete when it’s available, falling back to a simple <datalist> when it isn’t.
  • Code that uses a date formatting library when one is already loaded, and falls back to Intl.DateTimeFormat when it’s not.

This pattern can even be combined with conditional loading: e.g. we check for all known syntax highlighters and load Prism if none are present.
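
A sketch of that combination (detection via globals, the illustrative CDN URL, the module’s export shape, and the hljs v11 highlight() signature are all assumptions):

let highlight;

if (globalThis.Prism) {
	highlight = (code, lang) => Prism.highlight(code, Prism.languages[lang], lang);
}
else if (globalThis.hljs) {
	highlight = (code, lang) => hljs.highlight(code, { language: lang }).value;
}
else {
	// None of the highlighters we support is loaded: fetch one ourselves
	const Prism = (await import("https://cdn.example.com/prism.js")).default;
	highlight = (code, lang) => Prism.highlight(code, Prism.languages[lang], lang);
}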

To recap, some of the main benefits are:

  • Performance: If you’re loading modules over the network, HTTP requests are expensive. If you’re pre-bundling, they increase bundle size. Even if code size is not a concern, runtime performance suffers if you take the slow but always-correct path when you don’t need it and a cruder approach would satisfice.
  • Choice: Instead of picking one library for the thing you need, you can support multiple, e.g. multiple syntax highlighters, multiple Markdown parsers, etc. And if a library is always needed to do the thing you want, you can load it conditionally when none of the ones you support are already loaded.

Are weak dependencies an antipattern?

Since this article was posted, some of the feedback I got was along the lines of “Weak dependencies are an antipattern because they are unpredictable. What if you have included a library but don’t want another library to use it? You should instead use parameters to explicitly provide references to these libraries.”

There are several counterpoints to make here.

First, if weak dependencies are used well, they are only used to enhance the default/basic behavior, so it’s highly unlikely that you’d want to turn that off and fall back to the default behavior.

Second, weak dependencies and parameter injection are not mutually exclusive. They can work together and complement each other, so that the weak dependencies provide sensible defaults that the parameters can then tweak further (or disable altogether). Only having parameter injection imposes a high upfront cognitive cost for using the library (see Convention over Configuration). Good APIs make simple things easy and complex things possible. The common case is that if you’ve loaded e.g. a syntax highlighter, you’d want to use it to syntax highlight, and if you’ve loaded a parser, you’d prefer it over parsing with regexes. The obscure edge cases where you wouldn’t want to highlight or you want to provide a different parser can still be possible via parameters, but should not be the only way.
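
For instance, here is a hypothetical API sketch where the weak dependency provides the default and a parameter can override or disable it (both helper functions are made up):

export function render(markdown, { highlighter = globalThis.Prism } = {}) {
	let html = markdownToHTML(markdown); // made-up helper

	if (highlighter) {
		// Simple things easy: if Prism is loaded, this just works.
		// Complex things possible: pass { highlighter: null } to opt out,
		// or pass an adapter for a different library.
		html = highlightCodeBlocks(html, highlighter); // made-up helper
	}

	return html;
}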

Third, the developer using your library may not even be aware of all the libraries being loaded, so they may already have a library loaded for a certain task without knowing about it. The weak dependencies pattern operates directly on which modules are loaded, so it doesn’t suffer from this problem.

How could this work with ESM?

Some people (mostly fellow library authors) *did* understand what I was talking about, and expressed some ideas about how this would work.

Idea 1: A global cache of loaded modules could be a low-level way to implement this, and is apparently something CJS supports out of the box.
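
In Node’s CJS, that cache is exposed as require.cache, keyed by resolved filename, so a crude check could look like this (the package name is an assumption):

// Has some already-loaded module resolved to the parsel-js package?
const hasParsel = Object.keys(require.cache)
	.some(path => /[\\/]node_modules[\\/]parsel-js[\\/]/.test(path));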

Idea 2: A global registry where modules can register themselves, either with an identifier or a SHA hash.

Idea 3: An import.whenDefined(moduleURL) promise, though that makes it difficult to deal with the module not being present at all, which is the whole point.

Idea 4: Monitoring <link rel="modulepreload">. The problem is that not all modules are loaded this way.

Idea 5: I was thinking of a function like import() that resolves with the module (same as a regular dynamic import) only when the module is already loaded, or rejects when it’s not (which can be caught). In fact, it could even use the same functional notation, with a second argument, like so:

import("https://cool-library", {weak: true});

Nearly all of these proposals suffer from one of the following problems.

Those that are URL-based mean that only modules loaded from the same URL would be recognized: the same library loaded from a CDN vs locally would not be recognized as the same library.

One way around this is to expose a list of URLs, like the first idea, and allow listening for changes to it. Then these URLs can be inspected, and those that might belong to the module we are looking for can be inspected further, by dynamically importing them and examining their exports (importing an already-imported module is a pretty cheap operation, since the browser de-duplicates the request).

Those that are identifier-based depend on modules registering themselves with an identifier, so only modules that want to be exposed will be. This is the closest to the old situation with globals, but would suffer during the transitional period, until most modules adopt it. And of course, there is the potential for clashes, though the API could take care of that by essentially using a hashtable, adding all modules that register themselves with the same identifier under the same “bucket”. Code reading the registry would then be responsible for filtering.
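
To illustrate, here is a rough sketch of what such a registry could look like. It is entirely hypothetical, nothing like it exists today:

const buckets = new Map(); // identifier → Set of module namespace objects

export function register(id, module) {
	let bucket = buckets.get(id) ?? new Set();
	bucket.add(module); // clashing identifiers simply share a bucket
	buckets.set(id, bucket);
}

export function lookup(id) {
	// Callers filter the bucket themselves, e.g. by inspecting exports
	return [...(buckets.get(id) ?? [])];
}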

Categories: Tips

Simple pie charts with fallback, today

Five years ago, I had written this extensive Smashing Magazine article detailing multiple different methods for creating simple pie charts, either with clever use of transforms and pseudo-elements, or with SVG stroke-dasharray. In the end, I mentioned creating pie charts with conic gradients, as a future technique. It was actually a writeup of my “The Missing Slice” talk, and an excerpt of my CSS Secrets book, which had just been published.

I was reminded of this article today by someone on Twitter:

I suggested conic gradients, since they are now supported in >87% of users’ browsers, but he needed to support IE11. He suggested using my polyfill from back then, but this is not a very good idea today.

Indeed, unless you really need to display conic gradients, even I would not recommend using the polyfill on a production-facing site. It requires -prefix-free, which re-fetches your entire CSS (albeit from cache) and sticks it in a <style> element, with no sourcemaps, since those were not a thing back when -prefix-free was written. If you’re already using -prefix-free, the polyfill is great, but if not, it’s way too heavy a dependency.

Pie charts with fallback (modern browsers)

Instead, what I would recommend is graceful degradation: using the same color stops, but in a linear gradient.
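
A minimal sketch of the idea (class name and colors are made up): every browser gets a linear gauge with the same color stops, and browsers that understand conic-gradient() upgrade it to a pie:

.pie {
	width: 100px;
	height: 100px;
	background: linear-gradient(to right, gold 40%, #f06 0);
}

@supports (background: conic-gradient(gold, #f06)) {
	.pie {
		border-radius: 50%;
		background: conic-gradient(gold 40%, #f06 0);
	}
}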

Categories: Original, Tips

The --var: ; hack to toggle multiple values with one custom property

What if I told you you could use a single property value to turn multiple different values on and off across multiple different properties and even across multiple CSS rules?

What if I told you you could turn this flat button into a glossy skeuomorphic button by just tweaking one custom property --is-raised, and that would set its border, background image, box and text shadows in one fell swoop?
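
The gist of the trick, as a simplified sketch (not the article’s full demo): an empty custom property value substitutes to nothing, while a guaranteed-invalid one makes each var() use its fallback:

button {
	--is-raised: ; /* off: empty value, every var() below substitutes to nothing */
	background: var(--is-raised, linear-gradient(hsl(0 0% 100% / .4), transparent))
	            hsl(200 100% 50%);
	box-shadow: var(--is-raised, 0 1px hsl(0 0% 100% / .8) inset);
	text-shadow: var(--is-raised, 0 -1px 1px rgb(0 0 0 / .3));
}

button.raised {
	--is-raised: initial; /* on: invalid value, so all the fallbacks kick in */
}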

Categories: Rants, Thoughts

The failed promise of Web Components

Web Components had so much potential to empower HTML to do more, and make web development more accessible to non-programmers and easier for programmers. Remember how exciting it was every time we got new shiny HTML elements that actually do stuff? Remember how exciting it was to be able to do sliders, color pickers, dialogs, disclosure widgets straight in the HTML, without having to include any widget libraries?

The promise of Web Components was that we’d get this convenience, but for a much wider range of HTML elements, developed much faster, as nobody needs to wait for the full spec + implementation process. We’d just include a script, and boom, we have more elements at our disposal!

Or, that was the idea. Somewhere along the way, the space got flooded by JS frameworks aficionados, who revel in complex APIs, overengineered build processes and dependency graphs that look like the roots of a banyan tree.

This is what the roots of a Banyan tree look like. Photo by David Stanley on Flickr (CC-BY).
Categories: Articles

Developer priorities throughout their career

I made this chart in the amazing Excalidraw about two weeks ago:

It only took me 10 minutes! Shortly after, my laptop broke down into repeated kernel panics, and it spent about 10 days in service (I was in a remote place when it broke, so it took some time to get it to service). Yesterday, I was finally reunited with it, turned it on, launched Chrome, and saw it again. It gave me a smile, and I realized I never got to post it, so I tweeted this:

The tweet kinda blew up! It seems many, many developers identify with it. A few also disagreed with it, especially with the “Does it actually work?” line. So I figured I should write a bit about the rationale behind it. I originally wrote it in a tweet, but then I realized I should probably post it in a less transient medium, that is more well suited to longer text.

Categories: Apps & scripts, Original

Parsel: A tiny, permissive CSS selector parser

I’ve posted before about my work for the Web Almanac this year. To make it easier to calculate the stats about CSS selectors, we looked into using an existing selector parser, but most were too big, had dependencies, or didn’t account for all the selectors we wanted to parse, and we’d need to write our own walk and specificity methods anyway. So I did what I usually do in these cases: I wrote my own!

You can find it here: https://projects.verou.me/parsel/
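
A quick taste of the API (the module URL is illustrative, see the project page for the actual import path):

import * as parsel from "https://projects.verou.me/parsel/dist/parsel.js";

const selector = "#foo > .bar:is(.baz, ul) a";
const ast = parsel.parse(selector);

parsel.walk(ast, node => console.log(node.type));
console.log(parsel.specificity(selector)); // an [id, class, type] triple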

Categories: Tips

Introspecting CSS via the CSS OM: Getting supported properties, shorthands, longhands

For some of the statistics we are going to study for this year’s Web Almanac we may end up needing a list of CSS shorthands and their longhands. Now this is typically done by maintaining a data structure by hand or guessing based on property name structure. But I knew that if we were going to do it by hand, it’s very easy to miss a few of the less popular ones, and the naming rule where shorthands are a prefix of their longhands has failed to get standardized and now has even more exceptions than it used to. And even if we do an incredibly thorough job, next year the data structure will be inaccurate, because CSS and its implementations evolve fast. The browser knows what the shorthands are, surely we should be able to get the information from it …right? Then we could use it directly if this is a client-side library, or in the case of the Almanac, where code needs to be fast because it will run on millions of websites, paste the precomputed result into whatever script we run.
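
A sketch of the kind of introspection I mean (this mirrors the general approach, not necessarily the article’s exact code): set a shorthand on a detached element’s style and see which longhands the browser expands it to:

function getLonghands(shorthand) {
	const style = document.createElement("div").style;
	style.setProperty(shorthand, "inherit"); // a value every property accepts
	return Array.from(style); // the longhands the browser expanded it to
}

getLonghands("background");
// e.g. ["background-image", "background-position-x", …] (varies per browser)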

Categories: Tips

Import non-ESM libraries in ES Modules, with client-side vanilla JS

In case you haven’t heard, ECMAScript modules (ESM) are now supported everywhere!

While I do have some gripes with them, it’s too late for any of these things to change, so I’m embracing the good parts and have cautiously started using them in new projects. I do quite like that I can just use import statements and dynamic import() for dependencies with URLs right from my JS, without module loaders, extra <script> tags in my HTML, or hacks with dynamic <script> tags and load events (in fact, Bliss has had a helper for this very thing that I’ve used extensively in older projects). I love that I don’t need any libraries for this, and I can use it client-side, anywhere, even in my codepens.

Once you start using ESM, you realize that most libraries out there are not written in ESM, nor do they include ESM builds. Many are still using globals, and those that target Node.js use CommonJS (CJS). What can we do in that case? Unfortunately, ES Modules were not really designed with any import (pun intended) mechanism for these syntaxes, but there are some strategies we could employ.
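
One such strategy, sketched below (the function name is mine, and the URL and global name are just examples): fall back to a dynamically inserted classic script and resolve with the global it defines:

function importGlobal(url, globalName) {
	return new Promise((resolve, reject) => {
		const script = document.createElement("script");
		script.src = url;
		script.onload = () => resolve(window[globalName]);
		script.onerror = reject;
		document.head.append(script);
	});
}

// Usage, e.g. with a global (UMD) build:
const Prism = await importGlobal("https://cdn.example.com/prism.js", "Prism");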

Categories: Apps & scripts, Original

Releasing MaVoice: A free app to vote on repo issues

First off, some news: I agreed to be this year’s CSS content lead for the Web Almanac! One of the first things to do is to flesh out what statistics we should study to answer the question “What is the state of CSS in 2020?”. You can see last year’s chapter to get an idea of what kind of statistics could help answer that question.

Of course, my first thought was “We should involve the community! People might have great ideas of statistics we could study!”. But what should we use to vote on ideas and make them rise to the top?

Categories: Articles, Original

The Cicada Principle, revisited with CSS variables

Many of today’s web crafters were not writing CSS at the time Alex Walker’s landmark article The Cicada Principle and Why it Matters to Web Designers was published in 2011. Last I heard of it was in 2016, when it was used in conjunction with blend modes to pseudo-randomize backgrounds even further.

So what is the Cicada Principle, and how does it relate to web design, in a nutshell? It boils down to this: when using repeating elements (tiled backgrounds, different effects on multiple elements, etc.), using prime numbers for the size of the repeating unit maximizes the appearance of organic randomness. Note that this only works when the parameters you set are independent.

When I recently redesigned my blog, I ended up using a variation of the Cicada principle to pseudo-randomize the angles of code snippets. I didn’t think much of it until I saw this tweet:
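
The gist, as a sketch (selectors and angles are made up, not my stylesheet’s exact values): set the same variable in several :nth-child() rules with prime cycles, so the resulting pattern of angles only repeats every 2 × 3 × 5 = 30 snippets:

pre { transform: rotate(var(--angle, .5deg)); }
pre:nth-child(2n) { --angle: -1.2deg; }
pre:nth-child(3n) { --angle: 1.8deg; }
pre:nth-child(5n) { --angle: -.4deg; }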