The emergence of agriculture

For most of our history, humans have been hunter-gatherers. If we date “modern” humans to the cognitive revolution about 70,000 years ago, then we have spent roughly 85% of our time on Earth as foragers.

It is now widely accepted among historians that hunter-gatherers had better, healthier diets, worked less, and in general led more enjoyable lives, at least compared to early agriculturalists. So what made us change our minds about hunting and gathering? And why did the change not happen everywhere at the same time?

This is a complex question, and many factors likely contributed. Consider, for instance, that hunter-gatherers need a relatively large area to “free roam” on. As population density increases, such areas become scarcer, and people can no longer forage; they have to settle down. This might explain why smaller regions like the Middle East or Europe adopted agriculture earlier than places with vast open land such as Africa.

Another potential explanation is simply the availability of animals and plants that can be easily domesticated. The Middle East, Europe, and Central America are generally considered to have had big advantages here as well. Jared Diamond’s “Guns, Germs, and Steel” is a good resource if you’re interested in how agriculture arose in more detail.

The topic of this post, however, is the theory that climatic variation influenced the adoption of agriculture. Specifically, consider a hunter-gatherer tribe that produces food using only labor and intermediate goods (such as tools and techniques) as inputs. Assume the tribe is in a steady state, in the sense that it does not experiment with new tools or techniques because it can already produce enough food.

Suppose now that a climatic shock (a positive or negative temperature shock, say) hits the tribe’s area. The tribe will then have less of its usual food, because as the climate changes, so do the conditions the animal and plant species it relies on need to thrive. Food production thus suffers a negative shock.

In response, the tribe can divert some labor from foraging to experimenting with new intermediate goods (i.e. inventing new tools and food extraction/processing techniques), much like R&D. Experimentation increases dietary breadth and thus mitigates the effect of the climate shock on food production.

The new tools and techniques this experimentation brings to the tribe are complementary to the skills needed to adopt agriculture: they amount to more ecological knowledge, better tools, and so on. Now suppose climatic shocks hit the tribe’s region repeatedly over tens of thousands of years. The tribe accumulates knowledge with each one. When agriculture finally becomes a viable option (for whatever reason), the tribe can then adopt it much more easily and quickly.
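To make the labor-reallocation logic concrete, here is a toy simulation. The Cobb-Douglas functional form and all parameter values are my own illustrative assumptions, not the paper’s actual model or calibration:

```python
# Toy simulation of the mechanism described above. The Cobb-Douglas
# form and every number here are illustrative assumptions, not the
# paper's actual model or calibration.
alpha = 0.6          # labor share in food production (assumed)
labor = 1.0          # total labor, split between foraging and experimentation
knowledge = 1.0      # stock of intermediate goods / ecological knowledge
subsistence = 0.9    # food the tribe needs each period (assumed)

def food(foraging_labor, knowledge, productivity):
    """Food output from foraging labor and the knowledge stock."""
    return productivity * foraging_labor**alpha * knowledge**(1 - alpha)

for period in range(8):
    # A negative climatic shock hits productivity from period 3 on.
    productivity = 0.8 if period >= 3 else 1.0

    # In steady state there is no shortfall and no experimentation;
    # after the shock, labor is diverted in proportion to the shortfall.
    shortfall = max(0.0, subsistence - food(labor, knowledge, productivity))
    rd_labor = min(0.3, 2.0 * shortfall)

    # Experimentation grows the knowledge stock, which mitigates the
    # shock and is complementary to later agricultural skills.
    knowledge += 0.5 * rd_labor
    output = food(labor - rd_labor, knowledge, productivity)
    print(f"t={period}  R&D labor={rd_labor:.2f}  "
          f"knowledge={knowledge:.2f}  food={output:.2f}")
```

Before the shock, no labor is diverted; afterwards, experimentation keeps raising the knowledge stock until the shock is fully offset, which is the sense in which volatility builds up agriculture-ready skills.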

The key observation is that climatic variation was not the same everywhere. The theory thus predicts that areas with low climatic variation adopted agriculture later, and vice versa.

There is one more caveat: the climatic variation cannot be too extreme. Extreme climate, such as ice ages or droughts that create deserts, generally does not help foragers experiment with new things; such conditions prevent experimentation because the number of plant and animal species declines altogether.

In other words, the theory predicts a hump-shaped (i.e. inverse-U-shaped) relationship between climatic volatility and years since the adoption of agriculture. Ashraf and Michalopoulos (2013) take this prediction to the data and find a robust, highly significant hump-shaped relationship in their cross-country analysis.
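Empirically, a hump shape like this is usually tested by regressing the outcome on volatility and its square and checking for a positive linear and a negative quadratic coefficient. Here is a minimal sketch on simulated data (the numbers and variable names are placeholders, not the authors’ dataset or exact specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated cross-section: hump-shaped by construction (placeholder numbers).
volatility = rng.uniform(0.2, 2.0, size=150)
years_since_adoption = (
    6000 * volatility - 2500 * volatility**2
    + rng.normal(0, 500, size=150)
)

# Regress on volatility and its square; a hump shape shows up as a
# positive linear coefficient and a negative quadratic coefficient.
X = sm.add_constant(np.column_stack([volatility, volatility**2]))
fit = sm.OLS(years_since_adoption, X).fit()
b0, b1, b2 = fit.params
print(f"linear: {b1:.1f}, quadratic: {b2:.1f}")

# The peak of the hump (the "optimal" volatility) sits at -b1 / (2 * b2).
print(f"volatility maximizing years since adoption: {-b1 / (2 * b2):.2f}")
```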

[Figure: Climatic volatility vs. adoption of agriculture, 20th-century temperature data]

As the figure shows, the temperature volatility data are from the 20th century. This is expected to be highly correlated with temperature volatility even tens of thousands of years ago; data constraints prevent the authors from using earlier numbers. They do have temperature volatility data for a limited number of countries for the period 1500-1899, and it is highly correlated with 20th-century volatility (the correlation coefficient is 0.998).

The authors of course control for a large set of variables that could have influenced the adoption of agriculture, including the distance from the earliest agricultural sites, the abundance of domesticable animals and plants, and various geographical characteristics. As additional robustness checks, they run the regressions separately for each season (to make sure the results are not driven by larger seasonal variation in certain areas), and they replace the 20th-century temperature data with the 1500-1899 data (with a smaller sample in this latter case). These checks confirm the theory. The following figure shows the same hump-shaped relationship as above, but this time with the 1500-1899 temperature volatility data.

[Figure: Climatic volatility vs. adoption of agriculture, 1500-1899 temperature data]

Finally, the authors take advantage of a novel dataset of archeological sites and the fact that their temperature data are available by grid cell. They have information on 750 archeological sites (shown below) where archeologists have dated the onset of agriculture relatively precisely using radiocarbon dating. Since the temperature data are gridded, the authors can easily calculate the temperature volatility at each site (and within a 50 km radius around it).

[Figure: Map of Neolithic archeological sites]
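Computing site-level volatility from gridded data is mechanically straightforward. Below is a sketch assuming a per-cell temperature series and decimal-degree coordinates; the haversine filter, function names, and toy data are all my assumptions, not the authors’ code:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between points in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def site_volatility(site_lat, site_lon, cell_lats, cell_lons, cell_temps,
                    radius_km=50.0):
    """Temperature volatility around a site: std. dev. over time for
    each grid cell within radius_km, averaged across those cells."""
    dist = haversine_km(site_lat, site_lon, cell_lats, cell_lons)
    nearby = dist <= radius_km
    if not nearby.any():              # fall back to the closest cell
        nearby = dist == dist.min()
    # cell_temps has shape (n_cells, n_periods)
    return cell_temps[nearby].std(axis=1).mean()

# Toy grid of 100 cells with 120 periods of temperature data (made up).
rng = np.random.default_rng(1)
cell_lats = rng.uniform(50.0, 52.0, 100)
cell_lons = rng.uniform(10.0, 13.0, 100)
cell_temps = rng.normal(9.0, 2.0, size=(100, 120))
print(site_volatility(51.0, 11.5, cell_lats, cell_lons, cell_temps))
```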

The results from this analysis are promising as well. The authors again re-run the regression for each season separately as a robustness check. The main thing we learn from the seasonal analysis is that winter temperature volatility matters much less than, say, spring volatility. This is intuitive: in winter there are few opportunities to experiment with new foods, techniques, and tools related to agricultural knowledge.

So, in sum, the climatic volatility theory does appear to check out: volatility indeed influenced the adoption of agriculture. One example from the paper: if Congo had the same spring volatility as the Netherlands, agriculture would have appeared there 5227 years ago instead of 3000 years ago; Congo’s volatility is too low. Another: if Latvia had the same spring volatility as the Netherlands, agriculture would have appeared there 1084 years earlier; Latvia’s volatility is too extreme. Similarly, at the more micro level, if the German archeological sites of Uhyst and Bistoft had had the spring volatility of the “most optimal” German site, Klein Denkte, agriculture would have appeared 110 and 70 years earlier, respectively. The former site is not volatile enough, the latter too extreme.
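Mechanically, counterfactuals like these fall straight out of the fitted quadratic: plug both volatility values into the estimated relationship and take the difference in predicted adoption dates (the intercept and controls cancel). A sketch with hypothetical coefficients and volatility values, not the paper’s actual estimates:

```python
# Hypothetical quadratic coefficients (placeholders, not the paper's).
b1, b2 = 6000.0, -2500.0

def predicted_years_since_adoption(volatility):
    """Predicted years since adoption, up to a constant that cancels
    when we difference two countries."""
    return b1 * volatility + b2 * volatility**2

# Hypothetical spring volatility values for two countries.
v_congo, v_netherlands = 0.4, 1.2

gain = (predicted_years_since_adoption(v_netherlands)
        - predicted_years_since_adoption(v_congo))
print(f"predicted earlier adoption: {gain:.0f} years")
```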
