My Paper was Rejected… Again
A month ago, I submitted an updated version of my basic block versioning paper to an academic conference (the previous version can be viewed here). We have an improved system, more working benchmarks and better results across all metrics. I was quite proud of this paper, and thought our odds of getting into said conference were quite high. Unfortunately, the improved paper was again rejected. Comments from the reviewers were mixed: some were quite positive, others extremely negative and harsh. Notably, the paper was shot down by a famous industry researcher who has done extensive work on trace compilation, a competing approach.
Below are some excerpts of the comments from four different reviewers:
The paper is well written and the examples help to understand the tradeoffs of the approach. Overall the approach seems to be sound and deserves further discussion/publication.
I fail to see the difference to trace compilation (and the predecessor of trace compilation, dynamic binary translation) [...] Constant propagation and conditional elimination in a trace compiler lead to the same type check elimination that you present.
I have had a peek at the code and it is very elegant. The paper is well written too. The paper has some weaknesses but I don’t believe that they are show stoppers. [...] Overall this is a nice idea well worth looking at in more detail.
In this improved paper, we’ve managed to show that basic block versioning reduces the number of type tests much more effectively than a static analysis can, and that our approach yields significant speedups. We’ve also addressed one of the most important criticisms of the approach by showing that it does not result in a code size explosion in practice, only a moderate code size increase. This, however, wasn’t sufficient for some reviewers. The main criticism they raised is that basic block versioning has similarities with tracing, and that as such, we should have compared its performance against that of a tracing JIT.
I have no issue with writing a detailed comparison of the two approaches on paper, but it seems the only fair way to compare the performance of basic block versioning against that of tracing would be for me to implement a tracing JIT compiler in Higgs. Not only that, I’d probably be expected to implement a tracing JIT incorporating all the latest developments in tracing technology. This could well require an entire year of work. I feel there’s an unfortunate disregard for the amount of infrastructure work required to do compiler research. It’s becoming increasingly difficult to do academic research on compiler architecture: there’s intense pressure to publish often, but at the same time, conferences expect us to pour several man-years into each publication and to immediately provide absolute proof that the ideas brought forward are better than everything else out there. Curious exploration of new ideas is systematically discouraged.
I was understandably feeling quite frustrated and discouraged after this second rejection. This blog post by Daniel Lemire lifted my spirits a little. It’s clear in my mind that basic block versioning has potential. We have something clearly novel, and it works! No matter what, I can at least take some comfort in the knowledge that basic block versioning has already been presented at several industry conferences, such as DConf, mloc.js and Strange Loop.
I may not be doing very well at the publication game so far, but thousands of people have watched my talks online. The idea has already spread, and I’ve gotten positive responses. I haven’t put this much work into Higgs and my thesis just to give up. I’ve already found two conferences where I can submit further updated versions of the paper, and luckily, the deadlines are coming up soon. In addition to this, I’ve already started work towards implementing typed shapes in Higgs, which should lead to another publication next year.