After the Progress Conference 2024, the Macroscience podcast put out a series of interviews with the speakers. I found it interesting, but I also often found it a bit divorced from my reality as a researcher in the life sciences. I am in complete agreement with the speakers' purpose (to make science better, faster, and more impactful). I am squarely within the general ethos of the conference, happy to borrow ideas from big businesses and startups, to discuss incentives and institutions, &c. But I still felt that some of the discussions were missing the point in a few ways.
Earlier this month, I saw that there will be a new edition of the conference in 2025 and there is an application process, so I decided to re-listen to the series and write down a few notes to add to my application.1 These are the notes, in no particular order.
Some of the discussions suffered from a Seeing Like a State problem, where the formal systems are assumed to be the real systems. As I've noted before, the formal system and the real system are not always the same thing.2
In particular, I rarely find that the terms of my grant funding restrict what I do beyond some very broad topics. If I have a grant to work on antimicrobial resistance, I can normally work on whatever I think are the major problems in antimicrobial resistance, because I wrote the grant and described in it what I think those major problems are. There is a certain amount of grantsmanship in not over-committing to particular paths, but I have also written to programme officers at funding agencies saying, effectively, hey, we were thinking of doing X and that is what we wrote, but because of Y, we now think the overall goal of the grant is best served by Z. POs are reasonable people who respond to reasonable arguments, and the general response is sounds reasonable, thanks for letting me know. Sticking to the grant is very low on the list of reasons why I do not pursue what I think is the most important science.
The view from Stanford

This is a view that sees young people as incredibly agentic and motivated, with great ideas, who are then forced to jump through several hoops before being able to truly pursue their visions. With no disrespect to students, in the life sciences this is not really what happens. It actually takes a long time before young people are even able to formulate a good problem. This is not due to a lack of intelligence. It is just very difficult to reach the frontier of knowledge from the outside.
I will regularly meet with a student who shows me a recent paper (say, from 2024) and says that they are modelling their project on this published paper (for example, wanting to use a similar technique in a different context). Then, the following dialogue ensues. Me: "we could do better by moving to newer methods/technologies". Student: "but this is less than a year old, at a high-impact journal, from a leading group in the world, how could we be more up to date?" Me: "it was published in 2024, which means they submitted it in 2023, which means that the work was done in 2021. I know the group, had coffee with the senior author in Paris, they aren't doing it this way anymore." Papers are like the light from stars: a few years out of date and subject to redshift.
I wish that people were applying for PhDs with well-formed ideas and telling me to get out of their way. Instead, it is part of my job to bring them up to the state of the art and help them design good projects. It takes years. In fact, several people will tell you that the expectation is that students can design a project by the end of their doctorate. Maybe we should aim for a world in which the state of the art is more legible, not hidden in private meetings and emails, but that would be a radical goal, not the world as it is now.
The guests are often pulling in opposite directions in unacknowledged ways. While some guests bemoaned the fact that PhD students and postdocs spend years working on someone else's vision, other guests bemoaned the fact that it is difficult to assemble the large teams that are necessary to tackle large problems. While both of these views are critiques of the current system, they are opposite critiques!
I tend to side with the view that we would benefit from larger teams of stably employed professionals, instead of the current system where we have to fire (sorry, graduate) anyone who gets too productive while minimizing the risk of spending too much time contributing to larger goals, lest we catch a bad case of middle-author disease. However, this means that I think we should have more people working in science in service of other people's visions, not fewer!
Peer review: in the grant system vs. the publication process

There are two points where peer review comes into the picture: grant review and publications. They are very different, with different costs and benefits. In general, I think that in the modern publication system (which includes preprints), publication peer review works quite okay, although we could improve it on the margin. Peer review in the funding system seems much more random. In the discussions, the two senses of peer review were often used interchangeably.
The lack of good metrics

To be fair to the guests, this was often acknowledged, but a few guests still discussed the number of papers published as a relevant metric. This would be like arguing that all companies are equally efficient because they all publish the same number of annual reports. Papers keep growing in how much content they pack (see the point above about students modelling their projects on recent papers and still failing to see that the bar is already higher).
In Australia and several other countries, a PhD takes 3-4 years, compared to the US standard of around 5. Does this mean that Australian PhD students are 25-40% more efficient? Of course not; different countries simply have different standards for what they call a PhD (which some people will use as arbitrage).
★
A few problems that I do encounter regularly, that could benefit from some theorising, and that were not mentioned:
(1) Non-fungibility of contributions and unfavourable cross-discipline status exchange rates. If you’ll forgive the dense terminology, this is a huge problem. Academia is first and foremost a status economy, but except at the very, very top (Nobel Prize), status is not fungible and has terrible exchange rates between disciplines. For someone who works in one area, contributing to another area is rarely prized. For example, if I were to spend some of my time working on an important study in Progress Studies3 and this was a success on all fronts (paper in a respected economics journal, cited in legislation that changed things for the better), it would still have few direct career benefits, as it would not be legible to grant panels in the life sciences. We regularly have projects that would benefit from computer scientists but would bring the computer scientists few benefits in their own status economy. Even between different subfields of computational biology there are no good exchange rates. Papers I publish in the microbiome field will get some readership based on my name and some recognition that I have accrued,4 but this will bring me no benefit if I publish on even a closely related topic.
(2) As I have discussed in another post, the up-or-out system. It shapes everything, from project design to collaborations to careers.
(3) The slowness of decisions and processes at all levels: grants regularly take up to a year between submission and decision, hiring decisions take 6 months, paper reviews take months... This was mentioned in the context of Fast Grants, but I think it is still underrated how much this makes everything worse. Delays do not just slow things down so that the same result comes later: people change their behaviour in response to the delays. Talent identification is incredibly hard when projects take several years and success has a huge stochastic component, and the delays lead people to focus on lower-risk work. Career switches can take up to a decade. To make a Ruxandrian point, long career limbos impact women the most.
★
The problem with lived-experience testimonials is a tendency to miss the forest for the trees. It is also easy to slip into the mode of complaining about the traffic, which can sound like engaging in policy discussion.
Still, let's say I come into the office on a random Thursday and look around our Centre, asking myself: Why are the people here not producing better science faster? How could the Australian taxpayer be getting more bang for their buck? If, instead of just a group leader, I were a Dean, for example, with some ability to steer the activities of faculty but without a huge budget, how could I make things better? I feel that Progress Studies should have some sub-area that is trying to provide answers to these questions.
★
In retrospect, there is major foreboding of the current situation. Several guests, including Tyler Cowen, discuss the loss of confidence in scientific institutions and the institutions' blindness to this fact, and vaguely gesture towards a coming backlash. The original series was recorded before Trump was elected; the backlash is now here.
1. o3 thinks that this is a risky strategy, especially because some of the comments read as snarky and the whole blogpost reads as unstructured.
2. As a recent example of this tension, see this complaint from Michael Baym about how his lab is being punished by the Trump administration for something they didn't do. In fact, it is Harvard that is being punished. Michael is an employee of Harvard, and when corporations are punished, employees who are not connected to the reason for the punishment sometimes suffer the consequences. From the administration’s point of view, this is no different from issuing a fine to a bank that facilitated money laundering: the grant was to the institution, so the government can take it away. However, there is also a very real sense in which the grant was to Michael Baym and not to Harvard.
3. Which I would love to do!
4. This is not bragging; it is literally just me doing my job. Everyone who has published more than 2 or 3 papers will start to have some reputation in their subfield.