Economists need the softer social sciences

A follow-up to my remark on how few valuation studies include proper qualitative research. That remark was provoked by two travel cost studies presented at EAERE 2012: one looked at the effect of forest fires on visit rates in Portuguese forests, while the other studied how people trade off entrance fees and mortality risk when visiting a nature reserve in Japan.

The Portuguese study reminded me of a paper by Erwin Bulte and others on what they called the ‘outrage effect’. They found that people are willing to pay a lot more for conservation of Wadden Sea seals if you tell them the population suffers from pollution than if you tell them the seals suffer from a viral disease. I would expect something similar to happen with regard to forest fires. People might even appreciate a scorched patch of forest if you tell them it is part of a natural or at least indispensable process, but they would be appalled if the fires were caused by human carelessness. I would also expect it matters whether multiple hectares are gone, or whether there are only occasional blackened patches. The researchers did not ask their respondents what they thought was the cause of forest fires, but almost all forest fires in their region were man-made, and they assumed their respondents were aware of that fact.

The Japanese study reminded me of the Darwin Awards, or rather, of the fact that most of their recipients are intoxicated, overconfident males. Suppose a respondent prefers a $20 dangerous hike over a $30 safe one. Does that mean he considers $10 too much to pay to lower his risk of getting killed? Or does he (I’m afraid it’s mostly a ‘he’) assume that bad stuff only happens to other people? In this case the researchers stated that it was widely known which hiking trails are dangerous, and that casualties had been all over the news. But that argument ignores how good some people are at downplaying risks – at their peril, indeed.

The bottom line for me is that too much economic research, especially the valuation stuff, seems to jump blindly into the issue, imposing wildly unrealistic assumptions on human behaviour, without doing proper exploratory research first. Why not interview a few hikers first, to get an idea of what considerations may be at play? Why not talk to a psychologist or a sociologist who has done research on how people view their own mortality risks?

I think economists should observe more, and take more heed of what other social scientists have found so far about human behaviour. Economics is a world apart from most other social sciences, notably sociology and anthropology. (Supposedly, an unnamed Hindu economist once claimed that bad economists reincarnate as sociologists.) But I think this is finally changing, as Economics Nobel prizes1 are being awarded to behavioural economists and political scientists, and economic experiments have become fashionable enough to be published in top journals like American Economic Review.

So how does this relate to my own work? Among other things, my work involves modelling how people exploit natural resources, and estimating how valuable those resources are to them. I think the time is ripe to do such work together with anthropologists and sociologists. I am about to start a research project on international cooperation in management of Pacific tuna, together with Simon Bush from Wageningen University’s Environmental Policy Group. But I’ll keep my eyes open for opportunities to do more such interdisciplinary work.

1 Actually, I don’t like calling the Economics Nobel an Economics Nobel. It’s just that the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, as it is officially called, is a bit too long. But there is no such thing as a Nobel Prize for Economics. The only reason there is an economics prize with “Nobel” in its name, and not, say, a similar biology prize, is that economists work at banks such as the Swedish central bank, and thereby have access to enough money to fund such a prize. Unlike biology faculties.

Should one species be allowed to choke a fishery?

The North Sea bottom trawl fishery is a typical multispecies fishery. A single haul catches many different species, including plaice, sole, cod, turbot, red mullet, tub gurnard, and monk fish. These species tend to associate with other species instead of swimming together in schools, so it is almost impossible to catch one species without also catching a lot of others. Fisheries scientists like to call this a technical interaction between the species: they interact not through predation or competition, but through ending up in the same net.

If a bottom trawl fisher runs out of cod quota he can do three things: buy additional cod quota from other fishers, stop fishing, or keep on fishing but throw all cod that he catches back into the sea (which is what we call discarding of fish). So if this fisher is not allowed to discard his cod catch, and there is nobody who can sell him any cod quota, he is forced to stop fishing altogether. Cod is then called the ‘choke species’: the species that stops you from fishing when its quota runs out. If you were allowed to discard fish you would continue fishing as long as there is at least one species that can still be caught. As far as I know there is no term for this sort of species, but I kind of like the term slack species.

The EU is about to introduce a ban on discarding, and to set TAC (Total Allowable Catch) levels for a number of species whose catch has so far been unrestricted (for instance red mullet, brill, and sea bass). Unsurprisingly, fishers are adamantly opposed. To them ‘more species under a TAC regime’ means ‘more potential choke species’. They prefer fisheries policy to cover only a few major species, and to accept the bycatch of all others.

MSY is flawed in two ways…

I admit that my first reaction was something like “oh, so these guys just want the right to fish a species to extinction if it suits them”. But on second thought they point towards two flaws in the principle of Maximum Sustainable Yield (MSY), which is currently the guiding principle in fisheries policy worldwide. MSY is the largest possible annual catch you can get sustainably. If a stock is never fished, it is at its maximum possible size, but you don’t catch anything. If you fish a little every year, the stock will be a bit smaller, and you catch a little bit every year. Fish some more, and the stock will be somewhat smaller while your catches are larger. As you keep on intensifying your fishing, however, you reach a point where not only the stock declines but also your catch. The maximum catch you can have every year without depleting the stock is called MSY.
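To make this concrete: in the textbook Schaefer (logistic surplus-production) model – a standard simplification, not the model any fisheries agency literally uses to set TACs – the reasoning above boils down to:

```latex
% Stock X grows logistically and is reduced by the harvest H:
\frac{dX}{dt} = rX\left(1 - \frac{X}{K}\right) - H
% A harvest is sustainable if it exactly offsets the surplus growth,
% H(X) = rX(1 - X/K), which is maximized at half the carrying capacity:
X_{MSY} = \frac{K}{2}, \qquad H_{MSY} = \frac{rK}{4}
```

Fish less than this and the stock sits above K/2 with a smaller sustainable catch; fish more and both the stock and the catch shrink.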

The first problem with MSY is that it is based solely on biological principles. Economists have long argued for Maximum Economic Yield (MEY) to be the guiding principle: the annual catch that maximizes, in a sustainable manner, the total revenues minus the total costs of fishing. MEY means that you take into account not only biological growth, but also the costs of fishing and the price of the fish. In the simple textbook models this actually means fishing a bit less than under MSY, to take advantage of the fact that a more abundant stock is easier, and hence cheaper, to catch.

Second, the principle of MSY ignores not only costs and prices, but also technical interactions. If you took these into account, a policy that maximizes the sum of MEY over all species would allow fishers to overfish some species and underfish others. Mind you: ‘overfishing’ does not necessarily mean depleting a fish stock. It simply means that fishing pressure is higher than the pressure that would lead to MSY. So you can sustainably overfish a stock! It’s just that in general you shouldn’t, because it gives you less fish at higher costs than you would have under MSY. In a multispecies fishery, however, it may be more efficient to overfish some species, because that allows a larger catch of other species that are more valuable, more productive, or both.

Note that trading quota could relieve some of the pain, but not all. If cod quota are very low compared to other quota, cod will eventually become a choke species, trade or no trade.

…but what is the alternative?

Ideally TACs should be set taking technical interactions into account. Fisheries biologists are working on this problem, but I suspect that it remains a very complex issue. On the other hand, current TACs are certainly not realistic in multispecies fisheries, so any consideration of technical interactions would be welcome.

Then there is the issue of discarding: the EU can either allow fishers to discard unwanted catch, as has been the policy so far, or ban discards, as the EU is about to do. In theory, if discards are allowed, fishers can keep on fishing as long as there is a ‘slack species’. It sounds horrible that fishers go on fishing, throwing overboard everything they don’t need, but be aware that not all discarded catch is dead or dying, although survival rates can be very low. A discard ban has the advantage that the researchers who do stock assessments get better catch data: so far they have had only very crude estimates of how much fish was discarded. This matters, because stock assessments lean heavily on landings data, which underestimate the catch if part of it is discarded. In the longer term a discard ban may also give a strong incentive to develop more selective fishing technologies, although it is highly unlikely that bottom trawling will ever be 100% selective.

Additionally, perhaps the current system of catch quota could be complemented with the possibility to rent additional quota from the government for, say, the first few choke species. Under such a policy you would indeed catch a bit more of the choke species and a bit less of the slack species, because catching all species would become too expensive. The rental price would still give an incentive to fish more selectively, but the fishery would not be shut down completely as soon as the choke species quota run out. The problem, of course, is setting the right prices: they should reflect the value of the loss of future catches of the choke species, which depends not only on biological growth, but also on the price of the choke species, the costs of catching it, and the discount rate. A daunting task indeed.

AIR on stupidity and optimizing your wedding party

Here is a journal I’d love to publish in someday. It’s from the same folks who award the Ig Nobel Prizes. Its latest issue is a delightful exposition of the science (including economics!) of stupidity, and of how a couple of scientists who were getting married applied mixed-integer programming to design the optimal seating plan for their wedding party. In the acknowledgements it says:

MLB gratefully acknowledges JDLP for still agreeing to marry her after writing this.

Congratulations folks.
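For the curious, the problem they solved looks roughly like this. The paper used mixed-integer programming; the toy version below simply brute-forces all splits of six guests over two tables of three, maximizing total within-table ‘affinity’. All names and scores are invented:

```python
# Toy wedding-seating problem: maximize total affinity between guests
# seated at the same table. With 6 guests and 2 tables of 3 we can
# enumerate every split instead of calling a MIP solver.
from itertools import combinations

guests = ["Ann", "Ben", "Cleo", "Dan", "Eva", "Finn"]
affinity = {  # symmetric scores; unlisted pairs count as 0
    ("Ann", "Ben"): 5, ("Ann", "Cleo"): 2, ("Ben", "Cleo"): 4,
    ("Dan", "Eva"): 5, ("Dan", "Finn"): 3, ("Eva", "Finn"): 4,
    ("Ann", "Dan"): -2, ("Ben", "Eva"): -1,
}

def table_score(table):
    # Sum of pairwise affinities among everyone at one table
    return sum(affinity.get(tuple(sorted(pair)), 0)
               for pair in combinations(table, 2))

# Choosing one table of 3 determines the other table
best = max((frozenset(t) for t in combinations(guests, 3)),
           key=lambda t: table_score(t) + table_score(set(guests) - t))

print(sorted(best))   # one table; the other is everyone else
```

At a real wedding with, say, 100 guests and 12 tables this enumeration explodes, which is why the authors reached for integer programming.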

Ecological Economics must be economizing on language editors

How else can you explain the publication of a paper with the title

Economical sustainability of pinestraw raking in slash pine stands in the southeastern United States

Economical? As in the meaning of “thrifty”? What on Earth is “thrifty sustainability”?

I can forgive my students, especially non-economists, for making this mistake, and as a non-native English speaker I understand the confusion between “economy” and “economics”, both of which translate to “economie” in Dutch. But I wouldn’t expect such a mistake in the title of a paper in an international peer-reviewed journal with “economics” in its name.

I know I’m being a bit of a language nazi here, but the point is also that Ecological Economics has a strange position in the environmental economics literature. The quality of the publications varies wildly between highly influential and original ideas on one hand, and vague econophobic claptrap on the other. Nevertheless, its impact factor is high enough to make it an A-journal. Perhaps that is why I have so far submitted most of my papers to that journal.

More thoughts on Stapel, Smeesters, and scientific fraud in general

Whenever a new case of scientific fraud surfaces, the question pops up: does publish-or-perish force scientists to lie about their results? What makes this question all the more relevant is that many universities employ their academic staff (including me) under some form of tenure track. Here publish-or-perish translates into a principle of up-or-out: either you keep improving your teaching evaluations, publication list, Hirsch index, project acquisition, and so forth, or you’re out of a job. Needless to say, this gives quite an incentive to cook the books.

The first thing to realize here is that neither Stapel nor Smeesters is a good example of such a mechanism. Both had tenure, and Stapel had been making up data for the entire length of his career.

The second thing to realize, however, is that there are many forms of scientific misconduct, not all of which are outright fraud. Stapel is an extreme example of blatant fraud as he fabricated complete datasets. But there are more ways of behaving badly in science:

  • Skip observations that don’t support your hypothesis. This is what Smeesters is being accused of.
  • Copy text or ideas without citing the source.
  • The mirror of that: support a claim with a reference to a source that does not provide such justification.
  • Leave out details of the research method that would have put your results in a different light.
  • Run lots and lots of regressions on any combination of variables. You are bound to find a statistically significant relation between one or more variables somewhere. Present it as something you intended to investigate in the first place. (Be aware that “statistically significant at 5%” means that if there were no real relation at all, you would still have a 5% chance of finding one this strong – so about 1 in 20 such tests on pure noise will come up “significant” by sheer coincidence.)
  • Include the name of some big shot who hardly contributed to the paper but will make your paper look important. The big shot has yet another publication and you can bask in his glory.
  • When you do an anonymous peer review, tell the authors to cite some of your papers, especially the ones that improve your Hirsch Index if they are cited once more.
  • When you do an anonymous peer review, reject the paper if it presents results that you present in a paper that you just submitted to another journal. After all, you want to be the first to present the idea!
  • Or even worse than that: reject the paper (or a proposal) and submit the idea yourself. (Admittedly, given the huge time lag in publications you wouldn’t have a high chance of success.)

Note how difficult it is to identify bad intentions behind some of these, and that the line between good scientific practice and scientific misconduct can be surprisingly thin:

  • You can have very good reasons to skip an observation (protest bids in contingent valuation surveys are a case in point). This is Smeesters’s defence.
  • You may have always thought that author X said Y in article Z, but actually you were confused with another article.
  • Nobody ever includes those details of the method in their papers, so why should you?
  • You’re a PhD student and you don’t want to let your professor down by not including him as an author – he is your supervisor, after all.
  • The paper you are reviewing would be incomplete without that reference, whether you wrote it or not.

It is easy to say that there are no small sins and big sins: thou shalt not sin, period. But for most people it just doesn’t work that way: they wouldn’t mind exceeding the speed limit by 10 km/h, but they would object to exceeding it by 100 km/h. And exceeding the limit by 20 km/h may make you feel slightly worse about yourself, but when you are in a hurry it becomes easier to silence that guilty feeling.

So yes, I do believe the principles of publish-or-perish and up-or-out increase the incidence of scientific misconduct, but not in the way we read about in the news. The cases that make the news are the sensational ones: blatant fabrication of data by prestigious professors with big egos. The main damage is done in the everyday nitty-gritty of science, and most of it may never be detected. Does that make it less bad? No – it may actually be worse, because we don’t see, let alone quantify, the damage.

So is tenure track bad? Well, to paraphrase Churchill, it is the worst system except for all the other ones. The alternative we had in The Netherlands, where you had to wait for the current professor to die or retire before you could become one, has stifled scientific progress and chased a lot of talent out of the country. I believe the solution lies not in abandoning tenure track, but rather in the way we publish our results – but I’ll leave that for another post.