Over the last couple of years, blind reviews have been popularized as the ultimate method for fair talk selection at industry conferences. While I don’t submit many proposals myself, I have served several times on the other side of the process, selecting speakers on conference committees, and the more data points I collect, the more convinced I become that the blind selection process is fundamentally flawed.
Blind reviews come from the world of academia. However, in academic conferences you do not judge a talk by a one- or two-paragraph abstract but by a 10+ page paper, so there is far more to judge by. In addition, in academia the content of the research matters infinitely more than the quality of the talk. At industry conferences, blind review committees have both far less data to work with and a much harder task, since they need to balance several factors (content, speaker skill, talk quality, etc.). It’s no surprise that the results end up being even more of a gamble.
Blind reviews result in conservative talk selection. More often than not, I remember my fellow committee members and myself saying “Damn, this talk could be great with the right presenter, but that’s rare” and giving it a poor or average score. Few topics can make good talks regardless of the presenter. Therefore, when the initial selection round offers little information about the speaker, talk selection ends up being conservative: challenging topics that need a skilled speaker to shine get rejected in favor of safer choices.
One of my most successful talks ever was “The humble border-radius”, which was shortlisted for a .net award for Conference Talk of The Year 2014. It would never have passed any blind review. No committee in its right mind would have accepted a 45-minute talk about …border-radius. The conferences I presented it at invited me as a speaker, carte blanche, and trusted me to present on whatever I felt like. Judging by the reviews, they were not disappointed.
In addition, all too many times I’ve seen great speakers get poor scores in blind reviews, not because their talks were not good, but because writing good abstracts is an entirely separate skill. Blind reviews remove anything that could cause bias, but in doing so they strip all personality from a proposal. Moreover, a good abstract for a blind review is not necessarily a good abstract in general. For example, blind reviews penalize more mysterious, teaser-style abstracts and skew toward overly detailed ones, since the abstract is the only data the committee gets for these talks (bonus points here for CfS forms that have a separate field for extra details visible only to conference organizers).
“But what about newcomers to the conference circuit? What about bias elimination?” one might ask. Both are very valid concerns. I’m not saying that any kind of anonymization is a bad idea; I’m saying that blind reviews, in their present form at industry conferences, are flawed. For example, an initial round of blind reviews used only to surface good talks, without rejecting any at that stage, would probably address these concerns without suffering from the flaws mentioned above.
Disclaimer: I do recognize that most people on these committees are doing their best to select fairly, and are putting many hours of (usually volunteer) work into it. I’m not criticizing them; I’m criticizing the process. And yes, I recognize that it’s a process born of very good intentions (eliminating bias). However, good intentions are no guarantee of infallibility.