Theoretical physicists cluster with honey badgers[^1] as far as Journal Impact Factors are concerned. Such intestinal fortitude’s a possible result of lots of players, few seats, and long songs in our game of academic musical chairs.
Standard “quick note” disclaimer: This is almost entirely stream-of-consciousness analysis, with all the attendant depth, insight, consistency, and utility one can expect.
Every so often I notice a flurry of discussion among my scientist friends on Twitter.
“Impact factors are no basis for a career; young folks shouldn’t pay attention – worry about the quality of the research!”
This is so manifestly correct it’s hard to imagine anyone questioning it. If you even try to have this conversation with a theoretical physicist, it just falls on the floor – there’s no interest in debating it at all. I’m going to take a minute or two to poke at why.
First it’s important to realize that formal theorists barely notice journals at all. This isn’t new. Since the advent of the arXiv, they’ve been irrelevant[^2] for
- propagating ideas,
- noting priority[^3].
Why wait on the judgment of a handful of other folk to bless the possible keys to the universe when you can just read the preprint yourself? If you can’t understand the paper, you’re probably in no position to judge it, so offload. What if for some random reason you need to measure the importance? I can’t count the number of times I’ve heard (and probably said), “What the hell, if you’re too lazy to bother reading and understanding a paper[^4], just look at its citation count[^5] for impact, that’s what inSPIRE is for!”
Obviously I’m not going to justify laziness, but at some point, really early on, I came to live with the second bit. Citation count, assuming a sufficiently honest community, is a measure of impact. Is it fair? Absolutely not; people who have higher visibility – socially, or because of past performance, or because of what institutions they’re at, or because of a more favorable gender/cultural/ethnic presentation, or because they invite themselves to talk everywhere, or because of who their advisor/best-friend/sister-in-law is, etc. – are better read. As my former Ph.D. advisor continually emphasized to me (and still does every time I grit my teeth):
“There is no justice, only luck.” -Zvi Bern
And Zvi’s right – in academia, as in life, there’s no justice, at least none you can count on[^6]. I’ve seen awesome proposals go unfunded, and spectacular candidates go unhired, and you can trace out why, and it has nothing to do with justice – in the end, it’s bad luck. Well, bad luck and herd mentality. But independent of whether citation count is fair, it does demonstrate whether papers are leading, following, or irrelevant to the zeitgeist. Does that define good papers? It depends on what you mean by good – a high citation count certainly suggests they’re stirring the pot, and that’s necessary for science to progress. If you really want to know if a paper is good for you, read it and see if it teaches you something (or model whether you think it would teach something to those poor unfortunates who aren’t yet as brilliant and insightful as yourself); if it does, and you care about what you learned, then the paper’s done something. If not, well…
But sometimes you don’t care about whether a particular paper is good, you just want a quick hack to see if someone’s work has impact. Anyways, I internalized citations as impact almost the second I stepped back into academia – it’s just manifest. You can’t measure impact on a short time-scale, so take a long enough time scale and see whether something makes waves. It honestly doesn’t even occur to me to mess around with silly proxies like a journal’s average citation count over some time period. Small citation numbers, over short time scales, are going to be noisy and uncertain (could be errors, could be lack of luft, etc.); over longer time scales, and especially as numbers build up, they tend to stand on stronger footing.
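For concreteness, since the proxy keeps coming up: the standard two-year journal impact factor is exactly this sort of average,

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}}{N_{Y-1} + N_{Y-2}}\,,
\]

where \(C_{Y}\) is the number of citations received in year \(Y\) by items the journal published in the two preceding years, and \(N_{Y-1}\), \(N_{Y-2}\) are the counts of citable items it published in those years. A two-year window averaged over a whole journal is precisely the small-number, short-time-scale statistic I just finished complaining about.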
I have to say, one of the hardest things about leaving industry and returning to academia was the new time-lag of the feedback loop re: “are my ideas any good???” In my previous life, actually working for a living, the feedback was almost instantaneous, and completely unambiguous: “Does your idea make more money[^7]?” Even for subtle questions the feedback loop was on the order of months, because who in industry has the luxury of longer-scale strategies? Academia doesn’t have such rapid time-scales. It can take months to years to see if your ideas take root and flourish in the community, so you just get used to it, you develop your nasal acuity to sniff out whether certain questions smell like they’re worth pursuing, and you build the requisite self-confidence/hubris to actually trust your own judgment.
Ahh, but what about jobs – how can you wait for citation impact to decide on jobs? Well, there are a few things here:
- Nobody really pays attention to citation impact for early-career soft-hires. Leaders in the field are great at spotting future leaders in the field (almost by definition; see the visibility markers above, and draw your own conclusions about cronyism, pyramid schemes, and the earlier statement about justice in the academy). Also, you just read the damn papers, look at the reference letters for warnings, and speak with their senior collaborators. I’ve seen identical candidates, with the identical joint papers, have their careers determined by a mild, maybe three-word, difference in a senior collaborator’s letter. Letter writers know this, and it’s almost never by accident. This, by the way, is one reason you often don’t see all that much junior-position hiring out-of-field.
- The great places don’t worry about citation impact for actual-jobs – the existing faculty’s developed the arrogance to trust their own judgment. This can be a little insular, but it’s not crazy – if you’ve got a great group, who else would you trust to build your future spectacular group?
- Have you noticed the timescale for formal theory actual-jobs?
There are typically two if not three 3-year postdocs these days between Ph.D. and tenure-track positions in theoretical physics. Combine that with the sheer volume of Ph.D.s being produced and postdocs competing for positions, not to mention the output of our glorious high-drive hires over the past \(X\) decades still producing at a tremendous rate, and we have a flood of data. There’s a fair bit of time for people to build up their street-cred, and enough people generating papers that you don’t have to wait all that long to see if people care about what you’re doing.
Is this healthy? I don’t know – a lot of permanent hires these days are spectacular[^8]. We “lose” fantastic people to industry: people we’ve invested a tremendous amount in training, and people who have given us tremendous energy, ideas, and stimulation. This is one reason we should all work towards making the transition between industry and academia much more two-way at all career stages. We should invest in leveraging those industries that benefit from our training to help support general soft-term research positions, and relevant research programs in general. We additionally need to create space for our relatively newly-industrious academic children to maintain their connections and contribute their time and energy to the scientific endeavor on their own time-scales. Not everyone wants to teach, nor should they. Researchers like to research – and they can enjoy, it turns out, solving many different types of problems. But this is also a topic for another day.
I can imagine in fields where there’s a lot less being written, where journal peer-review slows the visibility of impact, and where you’re expected to land a job in reasonable human-life-scale time periods, journal impact factor might seem relevant (even if everyone agrees it’s hyper-crappy). On the basis of absolutely no evidence, I suspect such fields would be even more susceptible to the whim and whimsy of key established players, so looking for anything that seems like hard data would be attractive. This isn’t us – we’re sitting on tons of impact data (whimsy driven as it may be), and are conditioned into seemingly infinite patience.
So why did I write this if we never have these conversations? Largely, while wondering why I was seeing it on Twitter again, I finally stopped to consider how distinct our culture may be in academia, and thought I’d share[^9]. Happy to learn from anyone else’s experiences, similar or varied.
[^1]: Yes, I’m that old. Whaaaaazzzzuuuuuuppp!

[^2]: I’m not asserting, as many of our young firebrands will, that journals are useless. I think the role has changed – it’s now a lot less about getting jobs, and – when useful – more about promoting good science. Sometimes it’s nice to know at least one person was forced to pretend to read one of your papers – I’ve learned a lot about broadening my scope from well-intentioned and absolutely mystified referee reports. Even better is when people actually catch honest glitches. That’s great, I’ve heard ;-). Moreover, as a referee I appreciate having the excuse to dive into papers I’ve been meaning to spend more time with (typically multiple weeks after first browsing them on the arXiv). Note: these are different circumstances than public comment threads on papers, which are fine, and should not be on the arXiv. I may have a more in-depth discussion of refereed journals at some other in-between time.

[^3]: Since talks are now widely posted and archived, sometimes even the arXiv is subdominant for priority. Feel free to look at my articles to see how many times I cite Michael Kiermaier’s Amplitudes 2010 talk at Queen Mary University.

[^4]: Which we all agree is TOO LAZY.

[^5]: Don’t worry, I’ll get to visibility privilege and citation count in a bit. Keep going. Or don’t, I’m killing time in a hotel room here. Now…I’m killing time in a lobby.

[^6]: Though in academia, as in life, it is worth striving for!

[^7]: Somewhat joking – it’s often not directly about money – but it is about a metric you should be moving on a very short time scale: impressions / user-base / satisfaction / conversions…that will eventually be linked to money.

[^8]: With entirely obvious inclusion issues for anyone bothering to glance around (cf. visibility above).