Blog


The Hovercar Framework for Deliberate Product Design

13 min read

Many teams start with the MVP. But what if the key to shipping great products wasn’t starting small — but starting big? Could great products start at the finish line?

You may be familiar with this wonderful illustration and accompanying blog post by Henrik Kniberg about good MVPs:

Henrik Kniberg's MVP illustration

It’s a very visual way to illustrate the age-old concept that a good MVP is not the one developed in isolation over months or years, grounded on assumptions about user needs and goals, but one that delivers value to users as early as possible, so that future iterations can take advantage of the lessons learned from real users.

From Hovercar to Skateboard

I love Henrik’s metaphor so much that I have been using a similar system to flesh out product requirements and shipping goals, especially early on. It can be immediately understood by anyone who has seen Henrik’s illustration, and I find it can be a lot more pragmatic and flexible than the usual simple two-tiered system (core requirements and stretch goals). Additionally, I find this fits nicely into a fixed-time, variable-scope development process, such as Shape Up.

🛹 The Skateboard aka the Pessimist’s MVP
What is the absolute minimum we can ship, if need be? Utilitarian, bare-bones, and somewhat embarrassing, but shippable — barely. Anything that can be flintstoned gets flintstoned.
🛴 The Scooter aka the Realist’s MVP
The minimum product that delivers value. Usable, but no frills. This is the target.
🚲 The Bicycle aka the Optimist’s MVP
Stretch goals — UX polish, “sprinkles of delight”, nonessential but high I/E features. Great if we get here, fine if we don’t.
🏍️ The Motorcycle
Post-launch highest priority items.
🚗 The Car
Our ultimate vision, taking current constraints into account.
🏎️ The Hovercar aka the North Star UI
The ideal experience — unconstrained by time, resources, or backwards compatibility. Unlikely to ship, but a guiding light for all of the above.

Please note that the concept of a North Star UI has no relation to the North Star Metric. While both serve as a guiding light for product decisions, and both are important, the North Star UI guides you in designing the product, whereas the North Star Metric is about evaluating success. To avoid confusion, I’ll refer to it as “North Star UI”, although it’s not about the UI per se, but the product vision on a deeper level.

The first three stages are much more concrete and pragmatic, as they directly affect what is being worked on. The further down the list we go, the less fleshed out the specs are, as they need to leave room for customer input. This also allows us to outline future vision without having to invest in it prematurely.

The most controversial of these is the last one: the hovercar, i.e. the North Star UI. It is the very antithesis of the MVP. The MVP describes what we can ship ASAP, whereas the North Star describes the most idealized goal, one we may never be able to ship.

It is easy to dismiss that as a waste of time, a purely academic exercise. “We’re all about shipping. Why would we spend time on something that may not even be feasible?” I hear you cry in Agile.

Stay with me for a moment, and please try to keep an open mind. Paradoxical as it may sound, fleshing out your North Star can actually save you time. How? Start counting.

Core Idea

At its core, this framework is about breaking down tough product design problems into three more manageable components:

  1. North Star: What is the ideal solution?
  2. Constraints: What prevents us from getting there right now?
  3. Compromises: How close can we reasonably get given these constraints?

One way to frame it is that 2 & 3 are the product version of tech debt.[1]

It’s important to understand what constraints are fair game to ignore for 1 and which are not. I often call these ephemeral or situational constraints. They are constraints that are not fundamental to the product problem at hand, but relate to the environment in which the product is being built and could be lifted or change over time. Things like:

  • Engineering resources
  • Time
  • Technical limitations (within reason)
  • Performance
  • Backwards compatibility
  • Regulatory requirements

Unlike ephemeral constraints, certain requirements are part of the problem description and cannot be ignored. Some examples from the case studies below:

While these may be addressed differently in different solutions, it would be an oxymoron to have a North Star that did not take them into account.

Benefits

1. It makes hard product problems tractable

Nearly every domain of human endeavor has a version of divide and conquer: instead of solving a complex problem all at once, break it down into smaller, manageable components and solve them separately. Product design is no different.

This process really shines when you’re dealing with the kinds of tough product problems where at least two of these questions are hard, so breaking it down can do wonders for reducing complexity.

2. It makes the product design process robust and adaptable

By solving these components separately, our product design process can more easily adapt to changes.

I have often seen “unimplementable” solutions become implementable down the line, due to changes in internal or external factors, or simply because someone had a lightbulb moment.

By addressing these components separately, when constraints get lifted all we need to reevaluate is our compromises. But without this modularization, our only solution is to go back to the drawing board. Unsurprisingly, companies often choose to simply miss out on the opportunity, because it’s cheaper (or seems cheaper) to do so.

3. It facilitates team alignment by making the implicit, explicit


Whether you realize it or not, every shipping goal is always derived from the North Star, like peeling layers off an onion. In some contexts the process of breaking down a bigger shipping goal into milestones that can ship independently is even called layering.

The process is so ingrained, so automatic, that most product designers don’t realize they are doing it. They go from hovercar to car so quickly they barely realize the hovercar was there to begin with. Thinking about the North Star is taboo — who has time for daydreaming? We must ship, yesterday!

But the hovercar is fundamental. Without it, there is no skateboard — you can’t reduce the unknown. When designing it is not an explicit part of the process, the result is that the main driver of all product design decisions is something that can never be explicitly discussed and debated like any other design decision. In what universe is that efficient?

A skateboard might be a good MVP if your ultimate vision is a hovercar, but it would be a terrible minimum viable cruise ship — you might want to try a wooden raft for that.

A wooden raft, then a simple sailboat, then a speedboat, then a yacht, and finally a ship.

A skateboard may be a great MVP for a car, but a terrible MVP for a cruise ship.

Making the North Star taboo doesn’t make it disappear (when did that ever work?).

It just means that everyone is following a different version of it. And since MVPs are products of the North Star, this will manifest as difficulty reaching consensus at every step of the way.

The product team will disagree on whether to ship a skateboard or a wooden raft, then on whether to build a scooter or a simple sailboat, then on whether to work on a speedboat or a yacht, and so on. It will seem like there is so much disconnect that every decision is hard, but there is actually only one root disconnect that manifests as multiple because it is never addressed head on.

Two people arguing. One has a speech bubble with a skateboard, the other a speech bubble with a wooden raft. The first also has a thought bubble with a car, the second a thought bubble with a ship.

When the North Star is not clearly articulated, everyone has their own.

Here is a story that will sound familiar to many readers:

A product team is trying to design a feature to address a specific user pain point. Alice has designed an elegant solution that addresses not just the problem at hand, but several prevalent longstanding user pain points at once — an eigensolution. She is aware it would be a little trickier to implement than other potential solutions, but the increase in implementation effort is very modest, and easily offset by the tremendous improvement in user experience. She has even outlined a staged deployment strategy that allows it to ship incrementally, adding value and getting customer feedback earlier.

Excited, she presents her idea to the product team, only to hear engineering manager Bob dismiss it with “this is scope creep and way too much work, it’s not worth doing”. However, what Bob is actually thinking is “this is a bad idea; any amount of work towards it is a waste”. The design session is now derailed; instead of debating Alice’s idea on its merits, the discussion has shifted towards costing and/or reducing effort. But this is a dead end because the amount of work was never the real problem. In the end, Alice wants to be seen as a team player, so she backs off and concedes to Bob’s “simpler” idea, despite her worries that it is overfit to the very specific use case being discussed, and the product is now worse.

Arguing over effort feels safer and less confrontational than debating vision — but is often a proxy war. Additionally, it is not productive. If the idea is poor, effort is irrelevant. And once we know an idea is good and believe it to our core, we have more incentive to figure out implementation, which often proves to be easier than expected once properly investigated. Explicitly fleshing out the Hovercar strips away the noise and brings clarity.

When we answer the questions above in order and reach consensus on the North Star before moving on to the compromises, we know what is an actual design decision and what is a compromise driven by practical constraints. Articulating these separately allows us to discuss them separately. It is very hard to evaluate tradeoffs collaboratively if we are not on the same page about what we are trading off and how much it’s worth. You need both the cost and the benefit to do a cost-benefit analysis!

Additionally, fleshing the North Star out separately ensures that everyone is on the same page about what is being discussed. All too often have I seen early design sessions where one person is discussing the skateboard, another the bicycle, and a third one the hovercar, no-one realizing that the reason they can’t reach consensus is that they are designing different things.

4. It can improve the MVP via user testing

Conventional wisdom is that we strip down the North Star to an MVP, ship that, then iterate based on user input. With that process, our actual vision never really gets evaluated and by the time we get to it, it has already changed tremendously.

But did you know you can actually get input from real users without writing a single line of code?


Believe it or not, you don’t need to wait until a UI is prototyped to user test it. You can even user test a low-fi paper prototype or even a wireframe. This is widely known in usability circles, yet somehow entirely unheard of outside the field. The user tells you where they would click or tap on every step, and you mock the UI’s response by physically manipulating the prototype or showing them a wireframe of the next stage.

Obviously, this works better for some types of products than others. It is notably hard to mock rich interactions or UIs with too many possible responses. But when it does work, its Impact/Effort ratio is very high; you get to see whether your core vision is on the right track, and adjust your MVP accordingly.

It can be especially useful when there are different perspectives within a team about what the North Star might be, or when the problem is so novel that every potential solution is low-confidence. No-one’s product intuition is always right, and there is no point in evaluating compromises if it turns out that even the “perfect” solution was not actually all that great.

5. It paves the way for future evolution

So far, we have discussed the merits of designing our North Star, assuming we will never be able to ship it. However, in many cases, simply articulating what the North Star is can bring it within reach. It’s not magic, just human psychology.

Once we have a North Star, we can use it to evaluate proposed solutions: How do they relate to it? Are they a milestone along a path that ends at the North Star? Do they actively prevent us from ever getting there? Prioritizing solutions that get us closer to the North Star can be a powerful momentum building tool.

Humans find it a lot easier to take one more step along a path they are already on than to take the first step on an entirely new path. This is well established in psychology and often used as a technique for managing depression or executive dysfunction. However, it applies to anything that involves humans — and that includes product design.

Once we’re partway there, it naturally raises the question: can we get closer? How much closer? Even if we can’t get all the way there, maybe we can get close enough that the remaining distance won’t matter. And often, the closer you get, the more achievable the finish line becomes.

In fact, sometimes simply reframing the North Star as a sequence of milestones rather than a binary goal can be all that is needed to make it feasible. For an example of this, check out the CSS Nesting case study below.

Case studies

In my 20 years of product design, I have seen ephemeral constraints melt away so many times that I have learned to interpret “unimplementable” as “kinda hard; right now”. Below are two examples from my own experience that I find particularly relevant: one around Survey UI, and one around a CSS language feature.

Context Chips and the Power of Engineering Momentum

Illustration of context chips

The case study is described at length in Context Chips in Survey Design: “Okay, but how does it feel?”. In a nutshell, the relevant bits are:

  • Originally, I needed to aggressively prioritize due to minimal engineering resources, which led me to design an extremely low-effort solution which still satisfied requirements.
  • The engineer hated the low-effort idea so much, he prototyped a much higher-effort solution in a day, backend and all. Previously, this would have been entirely out of the question.
  • Once I took the ephemeral constraints out of the equation, I was able to design a much better, novel solution, but it got pushback on the basis of effort.
  • Prototyping it allowed us to user test it, which revealed it performed way better than alternatives.
  • Once user testing built engineering momentum and the implementation was more deeply looked into, it turned out it did not actually require as much effort as initially thought.

Here is a dirty little secret about software engineering (and possibly any creative pursuit): neither feasibility nor effort are fixed for a given task. Engineers are not automatons that will implement everything with the same energy and enthusiasm. They may implement product vision they disagree with, but you will be getting very poor ROI out of their time.

Investing the time and energy to get engineers excited can really pay dividends. When good engineers are excited, they become miracle workers.

In fact, engineering momentum is often all that is needed to make the infeasible feasible. It may seem hard to fit this into the crunch of OKRs and KPIs, but it’s worth it; the difference is not small, it is orders of magnitude. Things that were impossible or insurmountable become feasible, and things that would normally take weeks or months get done in days.

One way to build engineering momentum is to demonstrate the value and utility of what is being built. All too often, product decisions are made in a vacuum, based on gut feelings and assumptions about user needs. Backing them up with data, such as usability testing sessions, is an excellent way to demonstrate (and test!) their basis. When possible, having engineers observe user testing sessions firsthand can be much more powerful than secondhand reports.

[Table: Contrast guarantees we can infer for black and white text over arbitrary colors, showing WCAG 2.1 compliance levels (AA/AAA) and APCA readability ratings (Best/OK). OK = passes but is not necessarily best.]

You may have noticed that in general, WCAG has a lot of false negatives around white text, and tends to place the Lightness threshold much lower than APCA. This is a known issue with the WCAG algorithm.

Therefore, to best balance readability and compliance, we should use the highest threshold we can get away with: around 70% when optimizing for readability, and 62.3% when guaranteed WCAG compliance is the priority.

Here’s a demo so you can see how they both play out. Edit the color below to see how the two thresholds work in practice, and compare with the actual contrast brackets, shown on the table next to (or below) the color picker.

[Interactive demo: edit a color to compare the Lthreshold = 70% and Lthreshold = 62.3% results against the actual APCA and WCAG 2.1 contrast ratios for white and black text.]

Avoid colors marked “P3+”, “PP” or “PP+”, as these are almost certainly outside your screen gamut, and browsers currently do not gamut map properly, so the visual result will be off.

Note that if your actual color is more constrained (e.g. a subset of hues or chromas or a specific gamut), you might be able to balance these tradeoffs better by using a different threshold. Run the experiment yourself with your actual range of colors and find out!

Here are some examples of narrower ranges I have tried and the highest threshold that still passes WCAG 2.1:

Description Color range Threshold
Modern low-end screens Colors within the sRGB gamut 65%
Modern high-end screens Colors within the P3 gamut 64.5%
Future high-end screens Colors within the Rec.2020 gamut 63.4%
Neutrals C ∈ [0, 0.03] 67%
Muted colors C ∈ [0, 0.1] 65.6%
Warm colors (reds/oranges/yellows) H ∈ [0, 100] 66.8%
Pinks/Purples H ∈ [300, 370] 67%

It is particularly interesting that the threshold is improved to 64.5% by just ignoring colors that are not actually displayable on modern screens. So, assuming (though sadly this is not an assumption that currently holds true) that browsers prioritize preserving lightness when gamut mapping, we could use 64.5% and still guarantee WCAG compliance.
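To illustrate how that observation could be applied, here is a sketch that uses the color-gamut media feature to pick the highest safe threshold for the screen at hand, using the values from the table above. The --l-threshold custom property is the one consumed by the utility class shown next, and the idea only pays off if gamut mapping preserves lightness, which, as noted, browsers do not currently guarantee.

/* Sketch only: per-gamut thresholds from the table above, assuming
   lightness-preserving gamut mapping (not a safe assumption today) */
:root { --l-threshold: 0.65; } /* colors end up mapped to sRGB */

@media (color-gamut: p3) {
	:root { --l-threshold: 0.645; }
}

@media (color-gamut: rec2020) {
	:root { --l-threshold: 0.634; }
}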

You can even turn this into a utility class that you can combine with different thresholds:

.contrast-color {
	--l: clamp(0, (l / var(--l-threshold, 0.623) - 1) * -infinity, 1);
	color: oklch(from var(--color) var(--l) 0 h);
}

.pink { --l-threshold: 0.67; }
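For illustration, here is how the utility class might be consumed; the .badge class and its color value are made up for this example. Any element that also carries the .contrast-color class only needs --color to be set on it (or inherited):

/* Hypothetical component: set --color once, use it for the background,
   and let .contrast-color pick black or white text automatically */
.badge {
	--color: oklch(65% 0.15 250);
	background: var(--color);
}

Markup like <span class="badge contrast-color"> then gets readable text out of the box, and a class that overrides --l-threshold (like .pink above) tweaks the flip point.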

Conclusion & Future work

Putting it all together, including a fallback, as well as a “fall forward” that uses contrast-color(), the utility class could look like this:

.contrast-color {
	/* Fallback for browsers that don't support RCS */
	color: white;
	text-shadow: 0 0 .05em black, 0 0 .05em black, 0 0 .05em black, 0 0 .05em black;

	@supports (color: oklch(from red l c h)) {
		--l: clamp(0, (l / var(--l-threshold, 0.623) - 1) * -infinity, 1);
		color: oklch(from var(--color) var(--l) 0 h);
		text-shadow: none;
	}

	@supports (color: contrast-color(red)) {
		color: contrast-color(var(--color));
		text-shadow: none;
	}
}

This is only a start. I can imagine many directions for improvement such as:

Addendum

As often happens, after publishing this blog post, a ton of folks reached out to share all sorts of related work in the space. I thought I’d share some of the most interesting findings here.

Using luminance instead of Lightness

When colors have sufficiently different lightness values (as happens with white or black text), humans disregard chromatic contrast (the contrast that hue/colorfulness provide) and basically only use lightness contrast to determine readability. This is why L can be such a good predictor of whether white or black text works best.

Another measure, luminance, is basically the color’s Y component in the XYZ color space, and a good threshold for flipping to black text is when Y > 0.36. This gives us another method for computing a text color:

--y-threshold: 0.36;
--y: clamp(0, (y / var(--y-threshold) - 1) * -infinity, 1);
color: color(from var(--color) xyz-d65 var(--y) var(--y) var(--y));

As you can see in this demo by Lloyd Kupchanko, using Ythreshold > 36% very closely predicts the best text color as determined by APCA.

In my tests (codepen) it appeared to work as well as the Lthreshold method, i.e. it was a struggle to find colors where they disagree. However, after this blog post, Lloyd added various Lthreshold boundaries to his demo, and it appears that indeed, Lthreshold has a wider range where it disagrees with APCA than Ythreshold does.

Given this, my recommendation would be to use the Ythreshold method if you need to flip between black and white text, and the Lthreshold method if you need to customize the text color further (e.g. have a very dark color instead of black).
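For completeness, here is what the Ythreshold variant could look like packaged as a utility class in the same style as the Lthreshold one above; this is a sketch, and the .contrast-color-y class name is made up:

/* Same pattern as .contrast-color, but flipping on luminance (Y) instead
   of Lightness; 0.36 is the threshold discussed above */
.contrast-color-y {
	--y: clamp(0, (y / var(--y-threshold, 0.36) - 1) * -infinity, 1);
	color: color(from var(--color) xyz-d65 var(--y) var(--y) var(--y));
}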

Browser bug & workarounds

About a week after publishing this post, I discovered a browser bug with color-mix() and RCS, where colors defined via color-mix() make the RCS color invalid when used as the from color. You can use this testcase to see if a given browser is affected. This has been fixed in Chrome 125 and Safari TP release 194, but it certainly throws a spanner in the works, since the whole point of using this technique is that we don’t have to care how the color was defined.

There are two ways to work around this:

  1. Adjust the @supports condition to use color-mix(), like so:
@supports (color: oklch(from color-mix(in oklch, red, tan) l c h)) {
	/* ... */
}

The downside is that right now, this would restrict the set of browsers this works in to a teeny tiny set.

  2. Register the custom property that contains the color:

@property --color {
	syntax: "<color>";
	inherits: true;
	initial-value: transparent;
}

This completely fixes it, since if the property is registered, by the time the color hits RCS, it’s just a resolved color value. @property is currently supported by a much wider set of browsers than RCS, so this workaround doesn’t hurt compatibility at all.
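To make this workaround concrete, here is a minimal sketch; the .themed class and its color-mix() value are hypothetical, and it assumes --color has been registered with @property as shown above:

/* Because --color is registered, the color-mix() result below reaches RCS
   as a resolved color value, so .contrast-color behaves as expected even
   in browsers affected by the bug */
.themed {
	--color: color-mix(in oklch, rebeccapurple, white 30%);
	background: var(--color);
}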

Useful resources

Many people have shared useful resources on the topic, such as:

Thanks to Chris Lilley, Andrew Somers, Cory LaViska, Elika Etemad, and Tab Atkins-Bittner for their feedback on earlier drafts of this article.


Eigensolutions: composability as the antidote to overfit

14 min read 0 comments

tl;dr: Overfitting happens when solutions don’t generalize sufficiently and is a hallmark of poor design. Eigensolutions are the opposite: solutions that generalize so much they expose links between seemingly unrelated use cases. Designing eigensolutions takes a mindset shift from linear design to composability.

Continue reading


Minimalist Affordances: Making the right tradeoffs

6 min read 0 comments

Usability and aesthetics usually go hand in hand. In fact, there is even what we call the “Aesthetic Usability Effect”: users perceive beautiful interfaces as easier to use and cut them more slack when it comes to minor usability issues.

Unfortunately, sometimes usability and aesthetics can be at odds, also known as “form over function”.

Simplicity, and knowing when to stop

A common incarnation of form-over-function is when designers start identifying signifiers and affordances as noise to be eliminated, sacrificing a great deal of learnability for an — often marginal — improvement in aesthetics.

Aesthetic and Minimalist Design is one of the Nielsen/Norman core usability heuristics (and all other heuristics taxonomies have something similar). More poetically, Antoine de Saint-Exupéry said “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away”. However, this is one of those cases where everyone agrees with the theory, but the devil is in the details (though user testing can do wonders for consensus).

Continue reading


State of HTML 2023 now open!

4 min read 0 comments

tl;dr the brand new State of HTML survey is finally open!

Take State of HTML 2023 Survey

Benefits to you:

  • Survey results are used by browsers to prioritize roadmaps — the reason Google is funding this. Time spent thoughtfully filling them out is an investment that can come back to you tenfold in the form of seeing features you care about implemented, browser incompatibilities being prioritized, and gaps in the platform being addressed.
  • In addition to browsers, several standards groups are also using the results for prioritization and decision-making.
  • Learn about new and upcoming features you may have missed; add features to your reading list and get a list of resources at the end!
  • Get a personalized score and see how you compare to other respondents
  • Learn about the latest trends in the ecosystem and what other developers are focusing on

While the survey will be open for 3 weeks, responses entered within the first 9 days (until October 1st) will have a much higher impact on the Web, as preliminary data will be used to inform Interop 2024 proposals.

State of HTML 2023 Logo

The State of HTML logo, designed by Chris Kirk-Nielsen, who I think surpassed himself with this one!

Background

This is likely the most ambitious Devographics survey to date. For the past couple of months, I’ve been hard at work leading a small product team spread across three continents (2am to 8am became my second work shift 😅). We embarked on this mission with some uncertainty about whether there were enough features for a State of HTML survey, but quickly found ourselves with the opposite problem: there were too many, all with good reasons for inclusion! To help weigh the tradeoffs and decide what makes the cut, we consulted both the developer community and stakeholders across browsers, standards groups, community groups, and more.

We even designed new UI controls to facilitate collecting the types of complex data that were needed without making the questions too taxing, and did original UX research to validate them. Once the dust settles, I plan to write separate blog posts about some of these.

FAQ

Can I edit my responses?

Absolutely! Do not worry about filling it out perfectly in one go. If you create an account, you can edit your responses for the whole period the survey is open, and even split filling it out across multiple devices (e.g. start on your phone, then fill out some on your desktop, etc.) Even if you’re filling it out anonymously, you can still edit responses on your device for a while. You could even start anonymously and create an account later, and your responses will be preserved (the only issue is filling it out anonymously, then logging in with an existing account).

So, perhaps the call to action above should be…

Start State of HTML 2023 Survey

Why are there JS questions in an HTML survey?

For the same reason there are JS APIs in the HTML standard: many JS APIs are intrinsically related to HTML. We mainly included JS APIs in the following areas:

  • APIs used to manipulate HTML dynamically (DOM, form validation, etc.)
  • Web Components APIs, used to create custom HTML elements
  • APIs used to create web apps that feel like native apps (e.g. Service Workers, Web App Manifest, etc.)

If you don’t write any JS, we absolutely still want to hear from you! In fact, I would encourage you even more strongly to fill out the survey: we need to hear from folks who don’t write JS, as they are often underrepresented. Please feel free to skip any JS-related questions (all questions are optional anyway) or select that you have never heard these features. There is a question at the end, where you can select that you only write HTML/CSS:

Question about HTML/CSS and JS balance

Is the survey only available in English?

Absolutely not! Localization has been an integral part of these surveys since the beginning. Fun fact: Nobody in the core State of HTML team is a native English speaker.

Screenshot showing dozens of languages

Each survey gets (at least partially) translated to over 30 languages.

However, since translations are a community effort, they are not necessarily complete, especially in the beginning. If you are a native speaker of a language that is not yet complete, please consider helping out!

What does my score mean?

Previous surveys reported score as a percentage: “You have heard or used X out of Y features mentioned in the survey”. This one did too at first:

80% score, 105/131 heard or used

This was my own score when the survey first launched, and I created the darn survey 😅. Our engineer Sacha, who is also the founder of Devographics, got 19%!

Scores were a lot lower for this survey, for two reasons:

  1. It asks about a lot of cutting-edge features, more than the other surveys. As I mentioned above, we had a lot of difficult tradeoffs to make, and had to cut a ton of features that were otherwise a great fit. We erred on the side of more cutting-edge features, as those are the areas where the survey can make the most difference in the ecosystem.
  2. To save on space, and be able to ask about more features, we used a new compact format for some of the more stable features, which only asks about usage, not awareness. Here is an example from the first section of the survey (Forms): [Form validation question screenshot]. However, this means that if you have never used a feature, it does not count towards your score, even if you have been aware of it for years. It therefore felt unfair to many to report that you’ve “heard or used” X% of features, when there was no way to express that you have heard of 89 out of 131 of them!

To address this, we changed the score to be a sum of points, a bit like a video game: each used feature is worth 10 points, each known feature is worth 5 points.
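For example, a response that used 79 features and had heard of 26 more would score 79 × 10 + 26 × 5 = 920 points, which is exactly the score shown below.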

Since the new score is harder to interpret by itself and only makes sense in comparison to others, we also show your rank among other participants, to make this easier.

920 pts score, used 79 features out of 131, heard of 26 more

My score after the change. If you have already taken the survey, you can just revisit it (with the same device & browser, if you filled it in anonymously) and go straight to the finish page to see your new score and ranking!

I found a bug, what should I do?

Please file an issue so we can fix it!

Acknowledgements

This survey would not have been possible without the hard work of many people. Besides myself (Lea Verou), this includes the rest of the team:

  • Engineering team: Sacha Greif, Eric Burel
  • UX research & data science team: Shaine Rosewel Matala, Michael Quiapos, Gio Vernell Quiogue
  • Our logo designer, Chris Kirk-Nielsen

And several volunteers:

  • Léonie Watson for accessibility feedback
  • Our usability testing participants
  • …and all folks who provided early feedback throughout the process

Last but not least, Kadir Topal made the survey possible in the first place, by proposing it and securing funding from Google.

Thank you all! 🙏🏼

Press coverage (selected)

You still haven’t started the State of HTML 2023 survey?!


Numbers or Brackets for numeric questions?

7 min read 0 comments

As you may know, this summer I am leading the design of the inaugural State of HTML survey. Naturally, I am also exploring ways to improve both the survey UX and the questions themselves.

Shaine Madala, a data scientist working on the survey design team, proposed using numerical inputs instead of brackets for the income question. While I was initially against it, I decided to explore this a bit further, which changed my opinion.

Continue reading