Two years ago, I was funded by Google to design the inaugural State of HTML survey.
While I had led State of … surveys before (also graciously sponsored by Google), that was by far the most intense, as 0→1 projects often are.
In addition to the research, content, and analysis work that goes into every State of … survey,
the unique challenges it presented were a forcing function for finally tackling some longstanding UX issues with these surveys.
As a result, we pioneered new survey interaction UIs, and validated them via usability testing.
This work did not just affect State of HTML, but had ripple effects on all subsequent State of … surveys.
The results made it all worth it.
Turnout was the highest ever for a new Devographics [1] survey: 21 thousand participants, which remains a record high for State of HTML.
The survey findings heavily influenced Interop 2024 (hello Popover API and Declarative Shadow DOM!) and helped prioritize several other initiatives, such as stylable selects.
Despite lower 2024 participation, the survey still significantly influenced Interop 2025;
notably, View transitions was added after being prominent in the survey for two years in a row.
This is the goal of these surveys: to drive meaningful change in the web platform.
Sure, getting a shareable score about what you know and seeing how you compare to the rest of the industry is fun, but the reason browser vendors pour thousands of dollars into funding these surveys is that they provide unique, vendor-neutral insights into developer pain points and priorities, which helps them make better decisions about what to work on.
And this ultimately helps you: by getting your voice heard, you can directly influence the tools you work with.
It’s a win-win: developers get better tools, and browser vendors get better roadmaps.
Last year, I was too busy to take the lead again.
Wrapping up my PhD and starting a new job immediately after, there was no time to breathe, let alone lead a survey.
I’m happy to be returning to it this year, but my joy is bittersweet.
When I was first asked to lead this year’s survey a few months ago,
I was still too busy to take it on.
Someone else from the community accepted the role — someone incredibly knowledgeable and talented who would have done a fantastic job.
But they live in the Middle East, and as the war escalated, their safety and their family’s well-being were directly impacted.
Understandably, leading a developer survey became the least of their concerns.
In the meantime, I made a few decisions that opened up some availability, and I was able to step in at the last minute.
It’s a sobering reminder that events which feel far away can hit close to home — shaping not just headlines, but the work and lives of people we know.
A big part of these surveys is “feature questions”: respondents are presented with a series of web platform features,
and asked about their familiarity and sentiment towards them.
At the end, they get a score based on how many features they were familiar with, which they can share with others,
and browser vendors and standards groups get signal on which upcoming features to prioritize or improve.
You can see which features were included in last year’s survey here, or in the table below. [2]
I believe that co-designing these surveys with the community is the best way to avoid blind spots.
While the timeline is tighter than usual this year (the survey is launching later this month!), there is still a little time to ask:
👉🏼 Which upcoming HTML features or Web APIs are currently on your radar? 👈🏼
What does “on your radar” mean? Features you’re excited about and would love to see progress on.
Why focus on upcoming features?
The best candidates for these surveys are features that are mature enough to be fleshed out (at least a mature proposal, ideally a spec and WPT tests),
but not so mature they have already been implemented in every browser.
These are the features for which a survey such as this can drive meaningful impact.
If it’s so early for a feature that it’s not yet fleshed out, it’s hard to make progress via initiatives such as Interop.
Interest is still a useful signal to help prioritize work on fleshing it out, but it’s a bit of a longer game.
And for features that are already implemented everywhere, the only thing that can improve things further is passage of time
— a problem for which I unfortunately have no solution (yet).
Obviously we’re looking at all the usual suspects already,
and initiatives such as webstatus.dev
and Web platform features explorer provide a treasure trove of data which makes this task infinitely easier than it used to be.
But this kind of preliminary signal is also useful for filtering and prioritization — to give you a sense, my list of candidate new features to ask about already has 57 items (!).
Given that State of HTML 2024 asked about 49 features, that will need some very heavy pruning.
While the title is “State of HTML”,
anything that wouldn’t fit better in State of CSS or State of JS is fair game.
This includes topics such as accessibility, browser APIs, web components, templating, static site generation, media formats, and more.
This may seem strange at first, but is no different than how the HTML specification itself covers a lot more than just HTML markup.
Any way to reach me works fine.
You can post in the comments here (preferred),
or reply on
BlueSky,
Mastodon,
Threads,
LinkedIn, or
Twitter.
Make sure to check the other replies first, and 👍 those with features you care about.
Looking forward to your ideas and comments!
Devographics is the company behind “State of …” surveys. ↩︎
As an Easter egg, this widget is just a <details> element with custom CSS.
Inspect it to see how it works!
It works best in Chrome and Safari, as they fully support ::details-content.
Chrome also supports calc-size(), which enables a nice animation, while the interaction in Safari is more abrupt.
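If you would rather not inspect, the core of the approach looks roughly like the sketch below; the class name, durations, and label text are made up for illustration, and the widget's actual CSS may differ:

/* Animate the content of a <details> element open and closed */
details.features::details-content {
	block-size: 0;
	overflow: clip;
	transition: block-size 0.4s, content-visibility 0.4s allow-discrete;
}

details.features[open]::details-content {
	block-size: auto; /* fallback: abrupt, but works wherever ::details-content does */
	block-size: calc-size(auto, size); /* smooth animation where calc-size() is supported */
}

/* "Show more" / "Show less" affordance at the end of the summary */
details.features > summary::after {
	content: "Show more";
}

details.features[open] > summary::after {
	content: "Show less";
}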
In terms of a11y, the summary gets spoken out as a regular <summary> element, with “Show more” or “Show less” at the end of its content.
It seems ok-ish to me, but I’d love to hear from those with more expertise in this area. ↩︎
Many teams start with the MVP. But what if the key to shipping great products wasn’t starting small — but starting big? Could great products start at the finish line?
You may be familiar with this wonderful illustration and accompanying
blog post by Henrik Kniberg about good MVPs:
It’s a very visual way to illustrate the age-old concept that a good MVP is not the one developed in isolation over months or years,
grounded on assumptions about user needs and goals,
but one that delivers value to users as early as possible,
so that future iterations can take advantage of the lessons learned from real users.
I love Henrik’s metaphor so much, I have been using a similar system to flesh out product requirements and shipping goals, especially early on.
It can be immediately understood by anyone who has seen Henrik’s illustration,
and I find it can be a lot more pragmatic and flexible than the usual simple two-tiered system (core requirements and stretch goals).
Additionally, I find this fits nicely into a fixed time, variable scope development process,
such as Shape Up.
🛹 The Skateboard aka the Pessimist’s MVP
What is the absolute minimum we can ship, if need be?
Utilitarian, bare-bones, and somewhat embarrassing, but shippable — barely.
Anything that can be flintstoned gets flintstoned.
🛴 The Scooter aka the Realist’s MVP
The minimum product that delivers value. Usable, but no frills. This is the target.
🚲 The Bicycle aka the Optimist’s MVP
Stretch goals — UX polish, “sprinkles of delight”, nonessential features with a high impact/effort ratio.
Great if we get here, fine if we don’t.
🏍️ The Motorcycle
Post-launch highest priority items.
🚗 The Car
Our ultimate vision, taking current constraints into account.
🏎️ The Hovercar aka the North Star UI
The ideal experience — unconstrained by time, resources, or backwards compatibility.
Unlikely to ship, but a guiding light for all of the above.
Please note that the concept of a North Star UI has no relation to the North Star Metric.
While both serve as a guiding light for product decisions, and both are important,
the North Star UI guides you in designing the product,
whereas the North Star Metric is about evaluating success.
To avoid confusion, I’ll refer to it as “North Star UI”, although it’s not about the UI per se, but the product vision on a deeper level.
The first three stages are much more concrete and pragmatic, as they directly affect what is being worked on.
The further down the list we go, the less fleshed out the specs are, as they need to allow room for customer input.
This also allows us to outline future vision, without having to invest in it prematurely.
The most controversial of these is the last one: the hovercar, i.e. the North Star UI.
It is the very antithesis of the MVP.
The MVP describes what we can ship ASAP,
whereas the North Star describes the most idealized goal, one we may never be able to ship.
It is easy to dismiss that as a waste of time, a purely academic exercise.
“We’re all about shipping. Why would we spend time on something that may not even be feasible?” I hear you cry in Agile.
Stay with me for a moment, and please try to keep an open mind.
Paradoxical as it may sound, fleshing out your North Star can actually save you time.
How? Start counting.
At its core, this framework is about breaking down tough product design problems into three more manageable components:
North Star: What is the ideal solution?
Constraints: What prevents us from getting there right now?
Compromises: How close can we reasonably get given these constraints?
One way to frame it is that 2 & 3 are the product version of tech debt.[1]
It’s important to understand which constraints are fair game to ignore for 1 and which are not.
I often call these ephemeral or situational constraints.
They are constraints that are not fundamental to the product problem at hand,
but relate to the environment in which the product is being built and could be lifted or change over time.
Things like:
Engineering resources
Time
Technical limitations (within reason)
Performance
Backwards compatibility
Regulatory requirements
Unlike ephemeral constraints, certain requirements are part of the problem description and cannot be ignored.
Some examples of these appear in the case studies below.
Nearly every domain of human endeavor has a version of divide and conquer:
instead of solving a complex problem all at once, break it down into smaller, manageable components and solve them separately.
Product design is no different.
This process really shines when you’re dealing with the kinds of tough product problems where at least two of these questions are hard,
so breaking it down can do wonders for reducing complexity.
By solving these components separately,
our product design process can more easily adapt to changes.
I have often seen “unimplementable” solutions become implementable down the line,
due to changes in internal or external factors, or simply because someone had a lightbulb moment.
By addressing these components separately, when constraints get lifted all we need to reevaluate is our compromises.
But without this modularization, our only solution is to go back to the drawing board.
Unsurprisingly, companies often choose to simply miss out on the opportunity, because it’s cheaper (or seems cheaper) to do so.
Whether you realize it or not, every shipping goal is derived from the North Star, like peeling layers off an onion.
In some contexts the process of breaking down a bigger shipping goal into milestones that can ship independently is even called layering.
The process is so ingrained, so automatic, that most product designers don’t realize they are doing it.
They go from hovercar to car so quickly they barely realize the hovercar was there to begin with.
Thinking about the North Star is taboo — who has time for daydreaming?
We must ship, yesterday!
But the hovercar is fundamental.
Without it, there is no skateboard — you can’t reduce the unknown.
When designing it is not an explicit part of the process,
the result is that the main driver of all product design decisions is something that can never be explicitly discussed and debated like any other design decision.
In what universe is that efficient?
A skateboard might be a good MVP if your ultimate vision is a hovercar,
but it would be a terrible minimum viable cruise ship — you might want to try a wooden raft for that.
Making the North Star taboo doesn’t make it disappear (when did that ever work?).
It just means that everyone is following a different version of it.
And since MVPs are products of the North Star, this will manifest as difficulty reaching consensus at every step of the way.
The product team will disagree on whether to ship a skateboard or a wooden raft,
then on whether to build a scooter or a simple sailboat,
then on whether to work on a speedboat or a yacht,
and so on.
It will seem like there is so much disconnect that every decision is hard,
but there is actually only one root disconnect that manifests as multiple because it is never addressed head on.
When the North Star is not clearly articulated, everyone has their own.
Here is a story that will sound familiar to many readers:
A product team is trying to design a feature to address a specific user pain point.
Alice has designed an elegant solution that addresses not just the problem at hand, but several prevalent longstanding user pain points at once — an eigensolution.
She is aware it would be a little trickier to implement than other potential solutions,
but the increase in implementation effort is very modest, and easily offset by the tremendous improvement in user experience.
She has even outlined a staged deployment strategy that allows it to ship incrementally, adding value and getting customer feedback earlier.
Excited, she presents her idea to the product team, only to hear engineering manager Bob dismiss it with “this is scope creep and way too much work, it’s not worth doing”.
However, what Bob is actually thinking is “this is a bad idea; any amount of work towards it is a waste”.
The design session is now derailed; instead of debating Alice’s idea on its merits, the discussion has shifted towards costing and/or reducing effort.
But this is a dead end because the amount of work was never the real problem.
In the end, Alice wants to be seen as a team player, so she backs off and concedes to Bob’s “simpler” idea, despite her worries that it is overfit to the very specific use case being discussed, and the product is now worse.
Arguing over effort feels safer and less confrontational than debating vision — but is often a proxy war.
Additionally, it is not productive.
If the idea is poor, effort is irrelevant.
And once we know an idea is good and believe it to our core, we have more incentive to figure out implementation,
which often proves to be easier than expected once properly investigated.
Explicitly fleshing out the Hovercar strips away the noise and brings clarity.
When we answer the questions above in order and reach consensus on the North Star before moving on to the compromises,
we know what is an actual design decision and what is a compromise driven by practical constraints.
Articulating these separately allows us to discuss them separately.
It is very hard to evaluate tradeoffs collaboratively if we are not on the same page about what we are trading off and how much it’s worth.
You need both the cost and the benefit to do a cost-benefit analysis!
Additionally, fleshing the North Star out separately ensures that everyone is on the same page about what is being discussed.
All too often have I seen early design sessions where one person is discussing the skateboard,
another the bicycle, and a third one the hovercar,
no-one realizing that the reason they can’t reach consensus is that they are designing different things.
Conventional wisdom is that we strip down the North Star to an MVP, ship that, then iterate based on user input.
With that process, our actual vision never really gets evaluated and by the time we get to it, it has already changed tremendously.
But did you know you can actually get input from real users without writing a single line of code?
Believe it or not, you don’t need to wait until a UI is prototyped to user test it.
You can even user test a low-fi paper prototype or a wireframe.
This is widely known in usability circles, yet somehow entirely unheard of outside the field.
The user tells you where they would click or tap on every step, and you mock the UI’s response by physically manipulating the prototype or showing them a wireframe of the next stage.
Obviously, this works better for some types of products than others.
It is notably hard to mock rich interactions or UIs with too many possible responses.
But when it does work, its Impact/Effort ratio is very high;
you get to see whether your core vision is on the right track,
and adjust your MVP accordingly.
It can be especially useful when there are different perspectives within a team about what the North Star might be,
or when the problem is so novel that every potential solution is low-confidence.
No-one’s product intuition is always right, and there is no point in evaluating compromises if it turns out that even the “perfect” solution was not actually all that great.
So far, we have discussed the merits of designing our North Star,
assuming we will never be able to ship it.
However, in many cases,
simply articulating what the North Star is can bring it within reach.
It’s not magic, just human psychology.
Once we have a North Star, we can use it to evaluate proposed solutions:
How do they relate to it?
Are they a milestone along a path that ends at the North Star?
Do they actively prevent us from ever getting there?
Prioritizing solutions that get us closer to the North Star can be a powerful momentum building tool.
Humans find it a lot easier to take one more step along a path they are already on than to take the first step on an entirely new path.
This is well-established in psychology and often used as a technique for managing depression or executive dysfunction.
However, it applies to anything that involves humans — and that includes product design.
Once we’re partway there, it naturally raises the question: can we get closer? How much closer?
Even if we can’t get all the way there, maybe we can get close enough that the remaining distance won’t matter.
And often, the closer you get, the more achievable the finish line gets.
In fact, sometimes simply reframing the North Star as a sequence of milestones rather than a binary goal can be all that is needed to make it feasible.
For an example of this, check out the CSS Nesting case study below.
In my 20 years of product design, I have seen ephemeral constraints melt away so many times I have learned to interpret “unimplementable” as “kinda hard; right now”.
Below are two examples from my own experience that I find particularly relevant: one around Survey UI, and one around a CSS language feature.
Originally, I needed to aggressively prioritize due to minimal engineering resources, which led me to design an extremely low-effort solution which still satisfied requirements.
The engineer hated the low-effort idea so much, he prototyped a much higher-effort solution in a day, backend and all.
Previously, this would have been entirely out of the question.
Once I took the ephemeral constraints out of the equation, I was able to design a much better, novel solution, but it got pushback on the basis of effort.
Prototyping it allowed us to user test it, which revealed it performed way better than alternatives.
Once user testing built engineering momentum and the implementation was more deeply looked into, it turned out it did not actually require as much effort as initially thought.
Here is a dirty little secret about software engineering (and possibly any creative pursuit):
neither feasibility nor effort are fixed for a given task.
Engineers are not automatons that will implement everything with the same energy and enthusiasm.
They may implement product vision they disagree with,
but you will be getting very poor ROI out of their time.
Investing the time and energy to get engineers excited can really pay dividends.
When good engineers are excited, they become miracle workers.
In fact, engineering momentum is often all that is needed to make the infeasible feasible.
It may seem hard to fit this into the crunch of OKRs and KPIs but it’s worth it; the difference is not small, it is orders of magnitude.
Things that were impossible or insurmountable become feasible, and things that would normally take weeks or months get done in days.
One way to build engineering momentum is to demonstrate the value and utility of what is being built.
All too often, product decisions are made in a vacuum, based on gut feelings and assumptions about user needs.
Backing them up with data, such as usability testing sessions, is an excellent way to demonstrate (and test!) their basis.
When possible, having engineers observe user testing sessions firsthand can be much more powerful than secondhand reports.
Contrast guarantees we can infer for black and white text over arbitrary colors:
Compliance (WCAG 2.1), black text: ✅ AA / ✅ AAA / ✅ AAA / ✅ AAA / ✅ AAA / ✅ AAA+ across the lightness ranges examined.
Readability (APCA), white text: 😍 Best / 😍 Best / 😍 Best / 🙂 OK / 🙂 OK; black text: 🙂 OK / 🙂 OK / 😍 Best.
OK = passes but is not necessarily best.
You may have noticed that in general, WCAG has a lot of false negatives around white text,
and tends to place the Lightness threshold much lower than APCA.
This is a known issue with the WCAG algorithm.
Therefore, to best balance readability and compliance, we should use the highest threshold we can get away with.
This means:
If passing WCAG is a requirement, the highest threshold we can use is 62.3%.
If actual readability is our only concern, we can safely ignore WCAG and pick a threshold somewhere between 68.7% and 71.6%, e.g. 70%.
Here’s a demo so you can see how they both play out.
Edit the color below to see how the two thresholds work in practice, and compare with the actual contrast brackets, shown on the table next to (or below) the color picker.
Interactive demo: pick a color and compare the two thresholds (L threshold = 70% and L threshold = 62.3%) against the actual contrast ratios per APCA and WCAG 2.1 for white and black text. The demo requires a browser that supports Relative Color Syntax.
Avoid colors marked “P3+”, “PP” or “PP+”, as these are almost certainly outside your screen gamut,
and browsers currently do not gamut map properly, so the visual result will be off.
Note that if your actual color is more constrained (e.g. a subset of hues or chromas or a specific gamut),
you might be able to balance these tradeoffs better by using a different threshold.
Run the experiment yourself with your actual range of colors and find out!
Here are some examples of narrower ranges I have tried and the highest threshold that still passes WCAG 2.1:
It is particularly interesting that the threshold is improved to 64.5% by just ignoring colors that are not actually displayable on modern screens.
So, assuming (though sadly this is not an assumption that currently holds true) that browsers prioritize preserving lightness when gamut mapping, we could use 64.5% and still guarantee WCAG compliance.
You can even turn this into a utility class that you can combine with different thresholds:
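Something along these lines should work; this is a rough sketch rather than a drop-in snippet, with --color holding the background color and --l-threshold the cutoff (both names are illustrative):

.contrast-color {
	/* The clamp() collapses to 1 (white text) when the background's OKLCH lightness
	   is below the threshold, and to 0 (black text) when it is above it. */
	--l: clamp(0, (var(--l-threshold, 0.7) / l - 1) * infinity, 1);
	background: var(--color); /* assumes the element is filled with var(--color) */
	color: oklch(from var(--color) var(--l) 0 h);
}

/* Stricter threshold for when passing WCAG 2.1 is a hard requirement */
.contrast-color.wcag {
	--l-threshold: 0.623;
}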
This is only a start.
I can imagine many directions for improvement such as:
Since RCS allows us to do math with any of the color components
in any color space, I wonder if there is a better formula that can still be implemented in CSS and balances readability and compliance even better.
E.g. I’ve had some chats with Andrew Somers (creator of APCA) right before publishing this,
which suggest that doing math on luminance (the Y component of XYZ) instead could be a promising direction.
We currently only calculate thresholds for white and black text.
However, in real designs, we rarely want pure black text,
which is why contrast-color() only guarantees a “very light or very dark color” unless the max keyword is used.
How would this extend to darker tints of the background color?
As often happens, after publishing this blog post, a ton of folks reached out to share all sorts of related work in the space.
I thought I’d share some of the most interesting findings here.
When colors have sufficiently different lightness values (as happens with white or black text),
humans disregard chromatic contrast (the contrast that hue/colorfulness provide)
and basically only use lightness contrast to determine readability.
This is why L can be such a good predictor of whether white or black text works best.
Another measure, luminance, is basically the color’s Y component in the XYZ color space,
and a good threshold for flipping to black text is when Y > 0.36.
This gives us another method for computing a text color:
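As a rough sketch of the idea (mine, not Lloyd's code), the same clamp() trick can be applied to the Y channel of XYZ; --color and --y-threshold are again illustrative names:

.contrast-color {
	/* 1 → white text (dark background), 0 → black text (light background) */
	--on-white: clamp(0, (var(--y-threshold, 0.36) / y - 1) * infinity, 1);
	/* Scale the D65 white point by the flag, yielding (approximately) white or black */
	color: color(from var(--color) xyz-d65
		calc(var(--on-white) * 0.9505)
		var(--on-white)
		calc(var(--on-white) * 1.089));
}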
As you can see in this demo by Lloyd Kupchanko, using a Y > 36% threshold
very closely predicts the best text color as determined by APCA.
In my tests (codepen) it appeared to work as well as the L-threshold method,
i.e. it was a struggle to find colors where they disagree.
However, after this blog post, Lloyd added various L-threshold boundaries to his demo,
and it appears that indeed, the L-threshold method has a wider range where it disagrees with APCA than the Y-threshold method does.
Given this, my recommendation would be to use the Y-threshold method if you need to flip between black and white text,
and the L-threshold method if you need to customize the text color further (e.g. have a very dark color instead of black).
About a week after publishing this post, I discovered a browser bug with color-mix() and RCS,
where using a color defined via color-mix() as the from color renders the whole RCS expression invalid.
You can use this testcase to see if a given browser is affected.
This has been fixed in Chrome 125 and Safari TP release 194, but it certainly throws a spanner in the works since the whole point of using this technique is that we don’t have to care how the color was defined.
There are two ways to work around this:
1. Adjust the @supports condition to use color-mix(), like so:
@supports (color: oklch(from color-mix(in oklch, red, tan) l c h)) {
/* ... */
}
The downside is that right now, this would restrict the set of browsers this works in to a teeny tiny set.
2. Register the custom property that contains the color:
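For example, assuming the background color lives in a custom property named --color (adjust the name to your own setup):

/* Once registered, --color computes to an actual color value before RCS ever sees it */
@property --color {
	syntax: "<color>";
	inherits: true;
	initial-value: transparent;
}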
This completely fixes it, since if the property is registered, by the time the color hits RCS, it’s just a resolved color value.
@property is currently supported by a much wider set of browsers than RCS, so this workaround doesn’t hurt compatibility at all.
tl;dr: Overfitting happens when solutions don’t generalize sufficiently and is a hallmark of poor design.
Eigensolutions are the opposite: solutions that generalize so much they expose links between seemingly unrelated use cases.
Designing eigensolutions takes a mindset shift from linear design to composability.
Usability and aesthetics usually go hand in hand.
In fact, there is even what we call the “Aesthetic Usability Effect”:
users perceive beautiful interfaces as easier to use and cut them more slack when it comes to minor usability issues.
Unfortunately, sometimes usability and aesthetics can be at odds, also known as “form over function”.
A common incarnation of form-over-function is when designers start identifying signifiers and affordances as noise to be eliminated,
sacrificing a great deal of learnability for an — often marginal — improvement in aesthetics.
Aesthetic and Minimalist Design is one of the Nielsen/Norman core usability heuristics (and all other heuristics taxonomies have something similar).
More poetically, Antoine de Saint-Exupéry said “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away”.
However, this is one of those cases where everyone agrees with the theory, but the devil is in the details (though user testing can do wonders for consensus).
Survey results are used by browsers to prioritize roadmaps — the reason Google is funding this.
Time spent thoughtfully filling them out is an investment that can come back to you tenfold
in the form of seeing features you care about implemented, browser incompatibilities being prioritized, and gaps in the platform being addressed.
In addition to browsers, several standards groups are also using the results for prioritization and decision-making.
Learn about new and upcoming features you may have missed; add features to your reading list and get a list of resources at the end!
Get a personalized score and see how you compare to other respondents
Learn about the latest trends in the ecosystem and what other developers are focusing on
While the survey will be open for 3 weeks, responses entered within the first 9 days (until October 1st) will have a much higher impact on the Web,
as preliminary data will be used to inform Interop 2024 proposals.
The State of HTML logo, designed by Chris Kirk-Nielsen, who I think surpassed himself with this one!
This is likely the most ambitious Devographics survey to date.
For the past couple of months, I’ve been hard at work leading a small product team spread across three continents (2am to 8am became my second work shift 😅).
We embarked on this mission with some uncertainty about whether there were enough features for a State of HTML survey,
but quickly found ourselves with the opposite problem:
there were too many, all with good reasons for inclusion!
To help weigh the tradeoffs and decide what makes the cut, we consulted both the developer community
and stakeholders across browsers, standards groups, community groups, and more.
We even designed new UI controls to facilitate collecting the types of complex data that were needed without making the questions too taxing,
and did original UX research to validate them.
Once the dust settles, I plan to write separate blog posts about some of these.
Absolutely! Do not worry about filling it out perfectly in one go.
If you create an account, you can edit your responses for the whole period the survey is open, and even split filling it out across multiple devices (e.g. start on your phone, then fill out some on your desktop, etc.)
Even if you’re filling it out anonymously, you can still edit responses on your device for a while.
You could even start anonymously and create an account later, and your responses will be preserved (the only issue is filling it out anonymously, then logging in with an existing account).
For the same reason there are JS APIs in the HTML standard:
many JS APIs are intrinsically related to HTML.
We mainly included JS APIs in the following areas:
APIs used to manipulate HTML dynamically (DOM, form validation, etc.)
Web Components APIs, used to create custom HTML elements
APIs used to create web apps that feel like native apps (e.g. Service Workers, Web App Manifest, etc.)
If you don’t write any JS, we absolutely still want to hear from you!
In fact, I would encourage you even more strongly to fill out the survey: we need to hear from folks who don’t write JS, as they are often underrepresented.
Please feel free to skip any JS-related questions (all questions are optional anyway) or select that you have never heard of these features.
There is a question at the end, where you can select that you only write HTML/CSS:
Absolutely not! Localization has been an integral part of these surveys since the beginning.
Fun fact: Nobody in the core State of HTML team is a native English speaker.
Each survey gets (at least partially) translated to over 30 languages.
However, since translations are a community effort, they are not necessarily complete, especially in the beginning.
If you are a native speaker of a language that is not yet complete, please consider helping out!
Previous surveys reported score as a percentage: “You have heard or used X out of Y features mentioned in the survey”.
This one did too at first:
This was my own score when the survey first launched, and I created the darn survey 😅
Our engineer, Sacha, who is also the founder of Devographics, got 19%!
These were a lot lower for this survey, for two reasons:
It asks about a lot of cutting edge features, more than the other surveys.
As I mentioned above, we had a lot of difficult tradeoffs to make,
and had to cut a ton of features that were otherwise a great fit.
We erred on the side of more cutting-edge features, as those are the areas where the survey can help make the most difference in the ecosystem.
To save on space, and be able to ask about more features, we used a new compact format for some of the more stable features, which only asks about usage, not awareness.
Here is an example from the first section of the survey (Forms):
However, this means that if you have never used a feature, it does not count towards your score, even if you have been aware of it for years.
It therefore felt unfair to many to report that you’ve “heard or used” X% of features, when there was no way to express that you had heard of 89 out of 131 of them!
To address this, we changed the score to be a sum of points, a bit like a video game:
each used feature is worth 10 points, each known feature is worth 5 points.
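To take a made-up example, a respondent who had used 80 of the features and had merely heard of 40 more would get 80 × 10 + 40 × 5 = 1,000 points.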
Since the new score is harder to interpret by itself and only makes sense in comparison to others,
we also show your rank among other participants, to make this easier.
My score after the change.
If you have already taken the survey, you can just revisit it (with the same device & browser if you filled it in anonymously)
and go straight to the finish page to see your new score and ranking!