Commenting here as well for visibility: the author of this article mangled the math to produce this clickbait headline. It’s very cool tech with a lot of potential, but it unfortunately did not show 1700x the efficiency of a Google TPU.
The paper claims that a simulated, scaled-up 8-bit version of this tech (180nm CNT-transistor TPUs) could theoretically reach 1TOPS/W. That is less than the efficiency the author specifies for the Google TPU (4TOPS/2W = 2TOPS/W).
Then they go on to speculate that a smaller process node would probably improve that efficiency greatly (very likely true, but no figures are listed in the public preview of the paper, not even simulated ones).
The author of the article assumed (wrongly) that the actual chip they made could do 1TOPS (it’s only 3000 transistors and can only do 2-bit math), and that it consumed 295 microwatts to do so, for an efficiency of 3389TOPS/W (roughly 1700x the 2TOPS/W of the Google chip). That’s of course ludicrous.
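To make the arithmetic explicit, here is a quick sketch using only the figures quoted above (the variable names are mine, just for illustration):

```python
# Google TPU efficiency as the article itself states it: 4 TOPS at 2 W.
tpu_tops_per_watt = 4 / 2  # = 2 TOPS/W

# The paper's simulated scaled-up 8-bit CNT TPU: ~1 TOPS/W,
# i.e. LESS efficient than the Google figure above, not 1700x more.
cnt_simulated_tops_per_watt = 1.0

# The article's mistaken step: treat the real 3000-transistor, 2-bit chip
# as if it delivered a full 1 TOPS while drawing only 295 microwatts.
assumed_tops = 1.0
measured_power_watts = 295e-6
mistaken_tops_per_watt = assumed_tops / measured_power_watts

print(mistaken_tops_per_watt)                      # ~3390 TOPS/W
print(mistaken_tops_per_watt / tpu_tops_per_watt)  # ~1695, the "~1700x" claim
```

So the headline number only falls out if you credit the tiny prototype with the throughput of the hypothetical scaled-up chip while charging it the prototype’s power draw.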
My recollection is that they wouldn’t accept that one. I read through the old GitHub discussions at one point when trying to get into contributing code. They claimed it was basically too Anglo-centric and not generalizable to all languages, and that they’d therefore only accept a more expansive/generic user tagging/flairing feature or something like that (which could then also be used for pronouns).