Could neural networks identify every tree on earth?

In 2015, ecologists at Yale published a study in Nature estimating that there are upwards of three trillion trees in the world. That is an astounding number, especially considering that previous estimates hovered around 400 billion. However, the number is only as good as the method used to get there. In the Yale study, researchers combined satellite images with ground-based tree counts on every continent, covering some 400,000 hectares. By extrapolating the data from the known plots, they could estimate tree cover across the globe. But with the addition of one key ingredient, I believe we can go a step further and identify every canopy tree on earth.

Each tree carries its own unique signature, from the shape of its leaves and the color of its bark to the form of its crown. A trained naturalist can identify a host of trees, but definitely would not have the time to attempt three trillion. Enter the neural network. It is pretty much what it sounds like: a computer program designed to learn like a brain. Neural networks learn like tabula rasa babies, by trying and failing many, many times. There is no direct teaching or programming necessary, just a huge dataset of known quantities for the computer to sample and resample. Imagine the network as a child with a huge plastic shape sorter. Given a million varied blocks, the neural network will attempt to match each block to the right hole. Each success strengthens a "neural" connection; each failure weakens one. Given enough time, it will be able to sort the blocks into the correct holes.


To extend the analogy, imagine that images of trees are the blocks, and labels are the holes. In the first stage, training, many images are presented to the neural network, each with its correct species label. The computer receives the inputs and makes connections to the labels, constantly checking and revising those connections. If the inputs are plentiful enough to capture the variation in the subject, the network will associate each different image with its correct label. In the second stage, inference, the network is given novel images and outputs their labels. Programmers have trained neural networks to do an outstanding variety of tasks, including recognizing images, translating speech, and predicting the next frames of videos.
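The two stages can be sketched in a few lines of code. This is a toy single-neuron "network" in pure Python, not a real image classifier: the two feature numbers per sample (imagine crown width and greenness) and the oak/pine labels are invented for illustration.

```python
def train(samples, labels, epochs=100, lr=0.1):
    """Learn weights by repeated trial and error, as in the shape-sorter analogy."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # a failure adjusts the connections
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def infer(w, b, x1, x2):
    """Stage two: label a novel sample using the learned connections."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy training set: label 1 = "oak", 0 = "pine" (hypothetical features)
samples = [(2.0, 1.0), (3.0, 1.5), (0.5, 2.5), (1.0, 3.0)]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
print(infer(w, b, 2.5, 1.2))  # -> 1, a novel "oak-like" sample
```

A real tree classifier would be a deep convolutional network trained on millions of labeled aerial images, but the loop is the same: predict, compare against the label, and adjust the connections.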


To train our neural network, we need a huge labeled dataset. Fortunately, biologists have been building just that for years. In many experimental forests, and even in cities like New York and LA, hundreds of thousands of trees have been identified and geotagged. These databases allowed the Yale team to count trees in the study I mentioned earlier. Using the geotags, aerial images of each tree's location can be pulled, ideal for training the network.

As good as neural networks are, they have rarely done better than humans at identifying images, and even the best biologists cannot identify most trees from aerial photos. However, networks have the advantage that they can incorporate several types of inputs simultaneously. In addition to color satellite photos, NASA captures other ranges of wavelengths, including infrared. These extra bands provide more information for discriminating tree species, since some differences between species may only be visible in infrared.


Researchers have begun using neural networks to figure out tree species. A team at Caltech was able to teach a network to identify 18 tree species out of the 200 in Pasadena, CA. Eighteen may seem a meager number compared to all the species in the world, but keep in mind that the researchers in this study used a limited training dataset and relied on crowdsourcing to identify the trees in the first place. The results of this study form an incredible resource for tree care in Los Angeles. Working at a different scale, researchers in China used neural networks to identify species based on leaf images.

I am far from being a programmer with the know-how to carry out this project. And I recognize that our existing datasets are biased toward temperate forests and would therefore limit tree identification in the tropics. Still, I think that the combination of neural networks with tree ID databases holds immense promise for understanding the diversity of trees on the planet. The oceans and forests of earth store immense amounts of carbon, and knowing the state of both will become increasingly important as climate change wreaks havoc and forests continue to be lost. As flawed as this database might prove, I expect it would form a crucial starting point for a thorough understanding of the world's forests. I would view it as a draft human genome: not the final report, but a starting point for a whole new field of ecological research. Is metaforestry taken?

Image credits
Google Earth

Why monuments matter

Executive Order 13792 of April 26, 2017 orders the Secretary of the Interior to review the validity of all national monuments designated after 1996. In total, these monuments preserve more than 13 million acres of land and 434 million acres of marine habitat. The review is probably a first step in eliminating or reducing the size of these crucial monuments.

The facts about national monuments

  1. National monuments are mostly administered by the National Park Service. For all intents and purposes, they have the same function and protections as Parks.
  2. Monuments are made by presidential proclamation.
  3. Monuments can only be changed or removed by Congress. Only 3 small monuments have been abolished outright (7 others were converted to state protection).
  4. The President and Department of the Interior cannot remove monuments, but they can deny them funding and enforcement.
  5. Many famous national parks began as monuments, including Grand Canyon, Olympic, Petrified Forest, Zion, Bryce, Saguaro, Joshua Tree, and Capitol Reef.

If you are reading this before May 24, you can submit a comment on the monument review. Here is a copy of my comment (which was influenced by my first post on this blog) if you want inspiration.

I do not support the review of established monuments. I have several reasons, but I will focus on the economic because I hope they will be the most persuasive. The long term economic development of areas near national monuments will only be improved by the monuments, never decreased.

The land of the United States has been under development for hundreds of years. The Homestead Act (12 Stat. 392, 1862) offered land to anyone who could improve it, which led to a vast expansion of private land. I think the “twenty-dollar-bill test” is appropriate here. The name stems from the parable of an economist and his friend walking down the street. The friend spots a twenty dollar bill on the ground and points it out. The economist replies, “Couldn’t be a twenty, someone would have picked it up already.” That is, the lands being reserved for national monuments do not have great economic value, or else they would have been developed years ago.

There are many people whose economic well-being has been negatively impacted by national monuments. However, they often belong to unsustainable industries. One of the fundamental challenges to Devils Hole National Monument came from the Cappaert family in southern Nevada. The Cappaerts challenged the monument because the Department of the Interior asked them to stop pumping groundwater for their farm (Cappaert v. United States, 426 U.S. 128, 1976). If you've ever been to southern Nevada, you understand why this is confusing. It is the heart of the Mojave Desert. Water is crucially limited, and farming is probably the least efficient use of it. The Cappaerts were forced to reduce their groundwater pumping, and in turn the tourists, rangers, and scientists could keep bringing real, sustainable jobs to the barren area. Industries surrounding national monument land are often based on resource extraction (logging, mining, etc.). While they may bring money in the short term, resources are inevitably used up. The long-term prospects of tourism and the benefits of science on park land provide more hope for economic benefit than resource extraction.

In a hundred years, few will remember the geopolitical meanderings of this tumultuous year. Industries will change, people will argue, countries will split and reform. Economic gain in the United States will come more and more from high-level tech and finance. But in 2117, your great-grandson might stand at the top of Shay Mountain in the northern reaches of Bears Ears National Monument and silently thank you for conserving something for him. At the end of the day, conservation is not about hugging trees or killing industries; it is about ensuring that we do not deprive our great-grandchildren of the chance to decide what they want from their land.

This post will be quickly outdated as this moves forward, but for now, it is a good opportunity to learn about one of the most interesting conservation topics, the Antiquities Act. The Act was written in the early twentieth century, when lands in the west were rapidly being developed and pillaged. It allows the president broad power to declare a national monument on any federal land of historical or scientific interest.

I had the fortune to take a class in Conservation Law and Policy this semester. I wrote a paper about the Antiquities Act precisely because I was worried that this administration would attempt to dismantle national monuments. If you care about this issue, I think it is a good background for understanding it. There is a fair bit of legalese, but I did try to make it a fun read: VanWallendael_Statute. I welcome questions or comments about the paper and the topic in general.

Elk Ridge, Bears Ears National Monument. From the Bears Ears tribal coalition

Self-driving cars have an ejaculating sneaker fish problem

The autonomous, or self-driving car has gotten a ton of press recently, as Google’s cars log a million and a half road miles, Apple hints at a car, and Tesla’s autopilot polarizes users. Many are excited about this technology, but most people have reservations about handing over the wheel to a computer. In this essay, I would like to examine an interesting parallel between the business world and biology that I think is at the heart of one major problem with autonomous cars. This is the problem of sneakers.


Bluegill sunfish (Lepomis macrochirus) are masters of sneaking. Well, the males are, anyway. The bluegill is a pretty ordinary fish in most respects. It is native to eastern North America, inhabits lakes and ponds, and eats invertebrates in the water. However, it happens to be pretty freaky when it comes to reproduction. The males come in three types, or morphs. The first is the parental male. He is large and territorial, and courts females that visit his nest. Boring. The second is the sneaker. This smallest male rushes in while a parental is in the act of copulation and squirts ejaculate in an attempt to fertilize some of the eggs before swimming away. Rude, to say the least. The sneaker grows into the third morph, the satellite. At this point, he looks so much like a female that the dominant male lets him get close to the nest. When a real female comes to mate, the satellite interrupts the courting and fertilizes the eggs. Rude in the extreme.


We tend to think of biology as conforming to certain rules of engagement, but we must always remember that genes will persist as long as a strategy produces offspring. Sneaker bluegills require less food than dominant males, and need to expend less energy making nests. Unless dominant males evolve a way of preventing them, sneaking will remain a good evolutionary strategy. Sneakers have been found across a wide range of animals, from arthropods to birds and reptiles, highlighting how strongly evolution favors strategies that produce more offspring for less work.

How do ejaculating fish relate to autonomous cars? The business world has interesting parallels with biology due to the nature of the free market. In essence, both use ‘survival of the fittest’ to optimize outcomes. In biology, the trait that leads to the most offspring will increase in frequency. In business, the company that most appeals to consumers will be the most successful. Either way, organisms that fail to reproduce or companies that fail to sell products are lost, and the best suited persist.


To extend the analogy, we can imagine a fairly common business scenario that mirrors the bluegills' world. Pharmaceutical companies spend billions of dollars each year on developing new drugs, so in this scenario, they are the parental males. However, once their patents expire, generic brands can produce the same drugs much more cheaply, because they do not need the R&D and advertising investment. These are the satellite males. They are usually very successful, because they offer the same product at a lower price. You can even see mimicry, just like in the fish! Generic brands will use the same color packaging as name brands to convince consumers that their product is the same thing. Name brands usually persist because their investment attracts consumers, but generic brands can do just as well.

One of the hallmarks of biological systems is a distinctive lack of cooperation. "Nature, red in tooth and claw" is Tennyson's phrase, often used in discussions of competition. One reason that cooperation is rare is sneaking of a different sort. As soon as some organisms begin cooperating to achieve a goal, there are individuals that recognize that they can get the benefit of the goal without expending energy on the cooperation. Imagine a group of crows working together to attack a deer. With enough birds, they could probably kill the deer and feast for weeks. However, any birds that did not participate in the attack could benefit from the meat just as much as those that did the work. Moreover, the lazy birds would not have to expend the energy or take the risk of attacking a big animal. In the long run, the lazy birds would probably produce more offspring, making it unlikely for cooperation to evolve.

If corporations cooperated to sell products, they could certainly do better. However, any company that reduces prices will sell more products, so barring illegal activity, cooperation of that sort is impossible in capitalism. So we come back, by a very roundabout way, to the question of autonomous cars and the main point of this essay.

The autonomous car paradigm requires companies to cooperate and is therefore highly vulnerable to sneaking.

Autonomous cars promise us roads free of accidents, traffic, and all the pesky risks associated with human drivers. Proponents imagine a future where intersections are navigated efficiently, merges happen fluidly, and the user is delivered safely to his or her destination. Companies will be free to innovate in car interiors – to provide luxurious sleeping quarters or high-tech entertainment systems, and consumers will be happy because they don’t have to drive anymore.

Autonomous cars will be able to accomplish all of this because cars can sense each other and react to other cars far faster than humans can. With perfect knowledge of other cars' movements, cars can optimize traffic flow.

But then imagine that one company, say Hyundai, develops technology that will get its users to their destinations 5% faster than other cars. They would be like the very first sneaker fish, and they would be very successful. Suddenly, shopping for anything but the Hyundai would be like choosing the slower route on Google Maps. Ford, Volvo, BMW, and the rest would all have to follow suit, developing technology that makes their cars navigate intersections more efficiently, or perhaps allows manual override to break the speed limit, or runs a red light when no police are nearby, or even edges out other cars in traffic.

When cars are programmed to avoid collisions, it becomes very easy to get around them. Roads as a whole flow better with cooperation, but individual cars can go faster if they sneak. I predict that driving algorithm efficiency will be an important selling point for autonomous cars, and this will degrade their safety and effectiveness.

So does this mean that the future of autonomous cars will just be the same messy driving world we have now? Probably not. Consumers will choose faster cars until they start becoming unsafe. And legislation will undoubtedly prevent some sneaks from ever occurring. But barring centralized control of car software, there will be competition for speedier autonomous cars. Roads will not be as safe as they could possibly be, but they will still be way safer than they are now. And you will be able to get into your vehicle after a night on the town and say, "Car, I'm drunk, take me home." Perhaps that is the most important outcome in the end. But when in twenty years you start seeing ads for Chevrolets with 7.2% faster transit times (city/highway average), I just hope you see them for the ejaculating sneaker fish that they are.



The race for cheap epigenomics: epiGBS versus bsRADseq

The world of science is rife with tiny dramas that can completely envelop the worlds of a few people and have huge effects on science in years to come. Follow the publication trail of any good (or even bad) question in science and you will discover conflict over the best answers to these questions. Usually the conflict is constructive, but it can sometimes turn nasty. The main focus of this post is, as far as I know, a dramatic, but amiable conflict over the best way to do a particular sequencing technique. I’ll give some of the background for laypeople and dig into the science for anyone who is interested in the difference between EpiGBS and bsRADseq.


One of the most rapidly advancing fields in biology is DNA sequencing technology. It is becoming so fast and cheap that we will soon be more limited by the computing power to analyze sequences than by the capacity to generate them. In an earlier post, I discussed the development of Genotyping By Sequencing, or GBS. In short, it is a method that randomly samples DNA to get a vast amount of information about an organism without the cost and data problems of sequencing the whole genome. In 2016, this method was adapted to include information about epigenetics: heritable information outside the base-pair sequence itself. DNA can have methyl groups attached to it, which change the way organisms produce proteins. The sum of these methylation patterns is the methylome. The methylome is more easily changed than the genome, and can therefore provide additional information that might help us understand natural populations. Scientists know little about how large-scale methylation patterns impact natural populations, so this is an exciting advance. The conflict comes from the fact that two teams, one in Austria and one in the Netherlands, came up with different solutions within months of each other. Both were published in prestigious journals, but there are important differences between the techniques.

My Research

Being a grad student waiting for a new method to be published is an uncomfortable position to be in. I spoke to Dr. Christina Richards of University of South Florida about epigenetic GBS in 2014, and was hooked. For a year and a half, I collected my samples and waited for the methods to be published. Finally, last December, I couldn’t wait any longer. I had to defend my dissertation proposal without knowing the method. My committee was not so keen on committing to an unpublished method, so I had to scrap the epigenetics portion of my proposal. Of course, the papers were published less than a month after my defense. As Kurt Vonnegut would say, so it goes.

The Conflict

The paper I had been waiting for, from Dr. Thomas van Gurp of the Netherlands team, was not actually published first. They were beaten by the Austrians at the last moment. Dr. Emiliano Trucchi and the Austrians published their treatise in a special epigenetics issue of Molecular Ecology in late January 2016. The Ecological Epigenetics Facebook group (a most exclusive club) was abuzz with the news, and Dr. van Gurp even commented with his congratulations. But just days later, he gleefully posted his own publication in Nature Methods, a journal with a decidedly higher impact factor. The Austrians did not return the congratulations.

Trucchi and his team named their method bsRADseq, for bisulfite Restriction-site-Associated DNA sequencing. Van Gurp's team dubbed theirs epiGBS, for epigenetic Genotyping-By-Sequencing. Starkly different titles, but it turns out the methods are pretty similar. Essentially, restriction enzymes first cut up the sample's DNA. Next, the bisulfite treatment chemically converts nonmethylated cytosines to uracils. It is a very effective method, but it creates a challenge for deriving the original sequence: in the subsequent PCR step, uracils are copied as thymines, so the traces of bisulfite conversion are wiped out. The DNA is then sequenced on a high-throughput sequencer and analyzed with a bioinformatics platform.
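As a toy sketch of the bisulfite step (not either team's actual pipeline; the sequence and the methylated position are invented):

```python
def bisulfite_convert(seq, methylated_positions):
    """Return the post-PCR read: unmethylated C -> T, methylated C protected."""
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")  # converted to uracil, then amplified as thymine
        else:
            out.append(base)
    return "".join(out)

original = "ACGTCCGA"
read = bisulfite_convert(original, methylated_positions={4})
print(read)  # -> "ATGTCTGA"
```

The sketch also shows why the original sequence is hard to recover: every T in the final read could be either a true thymine or a converted cytosine.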

Each of the two methods solves the problem of the wiped out original sequence in a different way. It took me a few careful readings of the two papers to figure out the differences, so I figured that I would summarize them so that others don’t need to go through the same slog.

                       epiGBS                      bsRADseq
First author           van Gurp                    Trucchi
Reference genome       Not needed                  Built by sequencing twice
Restriction enzyme     PstI (others possible)      SbfI (others possible)
Paired-end sequencing  Paired required             Paired or single
Multiplexing           Yes                         Yes
Clustering software    Custom Python (epiGBS)      STACKS
Validation             Yes (Arabidopsis)           No
Number of loci         1,626                       1,710–3,180

This is only a small subset of the possible differences, but I think these are the most important aspects to highlight. In practice, the key difference is that epiGBS does not require a reference genome, but does require paired-end sequencing and a specialized bioinformatics pipeline. Through tricks of the PCR primers, the original strand of each read can be identified, and from there it is a computing problem to derive the original sequence from the two strands. bsRADseq instead sequences the same samples twice, once with bisulfite conversion and once without. This sidesteps the computing problem but requires twice the sequencing cost. However, bsRADseq data can be processed in the popular STACKS platform, whereas epiGBS data requires Dr. van Gurp's custom scripts.
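The sequence-twice strategy can be sketched as a comparison of the two reads of the same locus. These are hypothetical eight-base reads, not data from either paper:

```python
def call_methylation(untreated, treated):
    """Return positions of methylated (kept) and unmethylated (converted) Cs."""
    methylated, unmethylated = [], []
    for i, (u, t) in enumerate(zip(untreated, treated)):
        if u == "C":
            (methylated if t == "C" else unmethylated).append(i)
    return methylated, unmethylated

# Untreated read gives the true sequence; the bisulfite-treated read
# shows which cytosines were converted.
meth, unmeth = call_methylation("ACGTCCGA", "ATGTCTGA")
print(meth, unmeth)  # -> [4] [1, 5]
```

Any position that is C in the untreated read but T in the treated read was an unmethylated cytosine; a C in both reads was protected by methylation.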

I am not sure which method biologists will embrace. The simplicity of not needing to sequence twice seems to vouch for epiGBS, but the method will not likely be popular until the bioinformatics pipeline has been in use for a bit and had the kinks worked out.

Understanding DNA in the Age of Omics

Omics sounds a bit like Olmecs, who were a culture that flourished in Mesoamerica near the Gulf of Mexico between 1500 and 500 BCE (you can read about them here). Omics refers to the blossoming of fields within biology that attempt to characterize the totality of a particular class of molecules within an organism. The first and most influential Omic (or -omic) is genomics. Biologists who study genomics are attempting to characterize the diversity in structure and function of genes (DNA molecules) within their study species. Other Omics include proteomics (proteins), transcriptomics (transcribed RNA), lipidomics (lipids), and many more. The age of Omics represents a change in the way we study biology, driven by new technologies that allow us to capture and analyze vast amounts of data.

One of the problems with Omics is trying to wrap your brain around the vastness of the molecular world. We know that DNA consists of only four bases: A, C, G, and T; but going from those simple letters to a functioning animal is where it gets sticky. Here is how I think of DNA. If you wanted to tell someone how to build an animal, like, say, an echidna, how would you do it? If it were me, I would write a manual called How to Make an Echidna. If you said that you would make a gif like those little recipes that people love to share on Facebook and call it Make a Vegan Echidna Using These Six Easy Steps!, maybe this blog post is not for you. Okay, so an echidna manual. Most manuals we make contain a lot of numbered steps and diagrams. One problem we often run into is translation. You can't make an echidna if the instructions are in Cyrillic. Fortunately, all organisms on earth use the same language, so we don't have to worry about this problem. Diagrams would be handy, but biology hasn't figured that one out yet; instead, the instructions need to be many times longer to explicitly describe what the diagrams would show. So now our How to Make an Echidna manual is a very, very thick encyclopedia-looking book. It contains everything on how to build an echidna, from simple descriptions of how to build the proteins that make up its hair to complicated instructions on how to build and use the tools needed to make the hair, and even the tools to sense when more hair needs to grow. It's a long manual. It turns out that having one book this size is a pain in the ass for cells to work with. Plants' and animals' (and other eukaryotes') solution to this problem is to package the manual into different volumes called chromosomes. Humans have 46 volumes to their manual, or genome; echidnas have 63 in the male and 64 in the female (don't ask me why, but you can read here).
Now that we have our manual, we have a problem. Not every echidna is the same, which is actually a good thing for echidnas as a species. How did we get different varieties of echidnas? One way is mutation. The little proteins that act like nanoscopic monks, relentlessly copying each page of the manual for making baby echidnas, are very good at their jobs, but they are not perfect. Roughly once in every ten billion letters copied, they make a mistake and a mutation occurs. Over a very long period, these mutations can result in differing manuals between organisms. Sometimes a mutation can produce a page of the manual that contains instructions on how to move itself to a different part of the manual. These pages are known as transposons, or "jumping genes". Regardless of how these variations arise, they are important for the study of any organism: by studying differences between genomes, scientists are beginning to learn a lot about how species' DNA works.
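As a toy illustration of how copying errors pile up into differing manuals, here is a simulation with the error rate exaggerated enormously (1 in 50 instead of roughly 1 in ten billion) so that mutations show up in a 100-letter "page":

```python
import random

def copy_with_errors(seq, rate, rng):
    """Copy a sequence, occasionally substituting a random different base."""
    bases = "ACGT"
    out = []
    for base in seq:
        if rng.random() < rate:
            out.append(rng.choice([b for b in bases if b != base]))  # mutate
        else:
            out.append(base)
    return "".join(out)

rng = random.Random(42)
genome = "ACGT" * 25          # a 100-base "manual page"
descendant = genome
for _ in range(20):           # 20 generations of copying
    descendant = copy_with_errors(descendant, rate=1 / 50, rng=rng)

differences = sum(a != b for a, b in zip(genome, descendant))
print(differences)            # the manuals have diverged
```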

What does this actually look like? My research is concerned with determining differences between the genomes of a plant in different areas of the continent. I can use these differences to learn things about how my plant is evolving and how it has spread through different regions. The answers are all there in the manuals, if you read them right.

How can we figure out these differences in manuals? The easiest way would be to read all of the manuals of all of the plants I want to study, cover to cover. I could do this, but only if I had a few million dollars to throw at the problem. The best compromise that currently exists is called Genotyping By Sequencing (GBS). Essentially, this method samples millions of sentences from the manual of each sample using DNA cutters called restriction enzymes; because the enzymes cut at the same recognition sites in every sample, the same sentences can be compared between samples. Due to the way sequencing technology works, it is cheap and fast to sequence these sentences, even if there are millions of them. Next, I can line the sentences up next to each other and pick out the single-letter differences, called Single Nucleotide Polymorphisms (SNPs). From there, I can do analyses like, "Hey! That plant from New York has 156 differences from the Maine one, but only 25 differences from the New Jersey one! The New York plant is more closely related to the Jersey one." Or, "Hey! None of the plants from Virginia have any differences! They must be reproducing asexually." The actual analyses are more complicated, but that's how I imagine them in my brain.
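The "line up the sentences and count the differing letters" step might look like this as a toy sketch. The sequences and sample names are invented, and real GBS analysis relies on dedicated alignment and SNP-calling software:

```python
def snp_differences(seq_a, seq_b):
    """Count positions where two aligned sequences disagree (candidate SNPs)."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Toy aligned fragments from three hypothetical samples
samples = {
    "New York":   "ACGTACGTAC",
    "New Jersey": "ACGTACGTAA",   # 1 difference from New York
    "Maine":      "TCGAACGTCC",   # 3 differences from New York
}

ny = samples["New York"]
for name, seq in samples.items():
    if name != "New York":
        print(name, snp_differences(ny, seq))
# -> New Jersey 1, Maine 3: the New York plant is closer to the Jersey one
```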

A few years down the road, sequencing and computing will likely get to the point where we can easily read and analyze whole genomes, but for now, GBS is the best way to investigate large scale differences in populations of plants and animals. From a small DNA sample from a few hundred individuals, I could tell you how long ago the first Golden Retriever was bred. I could tell you whether or not corn in Iowa farms is genetically different from that in Kansas farms. With the DNA recovered from feces, I could tell you how many generations are left before Snow Leopards go extinct. Or I could tell you how building more roads is breaking apart populations of echidnas. The age of Omics is about an unprecedented ability to understand the living world around us by learning the right way to read the manual that each and every organism carries around within it, the genome.


For more facts about echidnas (some of which may or may not actually be true)

Image credits
Kitchenbowl Raises $1M To Help Users Create Easy-To-Follow, GIF-Heavy Recipes

Growing a beard to prevent forest fires

The North American west is burning. Millions of acres of forest and grassland have burned, and the fires threaten homes. NPR has a nice summary of some abnormal fires in Washington. But what is the cause? The finger is most often pointed at climate change (or just "the drought" by skeptics), but invasive beetles, poor forest management strategies, and lightning also get some credit. So what is most to blame for the increase in wildfires? We can't blame lightning, because that hasn't changed much. A recent paper shot down the idea that forests killed by beetles actually result in more fires. This leaves climate change and management strategies as the most likely culprits.

I was lucky enough to work with a fire prevention crew in the stunning Ponderosa Pine forests above Lake Tahoe for one summer in 2012. With the Nevada Conservation Corps, I strapped on a chainsaw, grew a beard and some flannels, and hiked into the hilly terrain to thin out dead trees and smaller firs. You can see me here "expertly" walking down a dangerously leaning tree. The objective of the summer was to reduce the fuel load: not to prevent all fires, but to prevent crown fires (fires that reach the tops of trees). Before it was settled, the Tahoe region relied on small fires called ground fires about every seven years to reduce fuel buildup and understory growth. The region fits into the "Understory fires every 0-34 years" category on this fascinating map. In the early 20th century, the entire basin was logged, and the fire prevention regime began. Dead trees built up. Firs that would normally be killed by ground fires grew to compete with the tall, fire-resistant Jeffrey and Ponderosa pines. This is the perfect recipe for a stand-destroying crown fire: the fire starts in the dry, dead tinder on the ground, gains heat from the copious dead logs, then catches the sappy, medium-height firs, which give the flames a ladder to reach the crowns of the pines. Thick, layered bark protects most pines in this region from fire, but once the fire is in the crown, all it takes is a breeze to kill thousands of acres of trees. We were tasked with preventing that from happening by cutting out those "ladder" trees.

Management like this is effective, but there are many millions of acres of forest in the American west. We were tasked to Tahoe because it is an area with money to pay to protect its forests for tourism and to protect the expensive homes in the area. The forests most at risk have little money and less support. We can prioritize high-risk areas, but that is where climate change comes in. The map above was drawn for a climate that is changing. I am not one to blame every drought or even blizzard on climate change. Extreme weather events have been happening since the Big Bang, but we now know for certain that they will become more frequent. As our weather regime shifts, it seems increasingly likely that we can expect more forest fires. And don't forget, forest fires release carbon dioxide on a massive scale. The American and Canadian governments need more money to spend on fire prevention so that they do not need to spend millions on excessively dangerous wildland firefighting. Please support programs like the Nevada Conservation Corps, programs that teach young people about conservation and cost a minuscule fraction of the cost of fighting a fire. And remember, climate change is real, climate change is expensive, and climate change is coming to your town.

Learn more about wildfires: Forest Service Wildfire Guide

Recent wildfires: NPR
Video credit: Daniel Burch a.k.a. BURCH!

Why you should hate this flower

If you are suffering through this dreary spring in a deciduous environment, you might recognize the flower I am featuring today. It is the lesser celandine (Ranunculus ficaria), a close relative of the buttercup. The lesser celandine has quite a history, ecologically and artistically, so I will try to feature both sides of the story.

You should hate this flower because it is an invasive species! Originating in Europe, it has become established in the Northeastern US and Canada. You can find it in the spring coating the banks of invaded streams and rivers (like my local Bronx River), forming yellow and green blankets, sprouting up and blooming before most of the natives have had a chance to get started. A sleepy pollinator, like a honeybee, who makes her way to the river looking for a nice variety of spring flowers to wake her hive from its winter slumber will find only celandines for her nectar meal. It's like going to Chipotle and the only thing they offer to put in your tortilla is chicken nuggets. Imagine how disappointed you would be!

Though we may treat it like nasty chicken nuggets, the celandine is not inherently bad, of course. In its native English habitat, the celandine has predators and competitors to keep it in check, and it is known as a much-loved herald of spring, not a despised invader. In fact, the celandine has not one but THREE poems dedicated to it by one of the most celebrated poets in the English language, William Wordsworth. In chronological order: To the Small Celandine, To the Same Flower, and The Small Celandine. Fortunately, the poems are much more creative and interesting than the titles. Wordsworth is known for his use of daffodils (I Wandered Lonely as a Cloud), but his treatment of celandines is considerable. Being among the first flowers to emerge in spring makes them a convenient symbol for the triumph of life after the bleakness of winter. C.S. Lewis uses celandines similarly, notably in the crucial passage in The Lion, the Witch and the Wardrobe when Aslan brings spring back to Narnia. In The Lord of the Rings, celandines are among the last flowers that Sam and Frodo see before they leave Gondor and begin to cross into Mordor. In that section, though, the celandines are white and "closed for sleep", perhaps signifying that hope is gone for the hobbits. Or perhaps signifying that it is dusk. I was always bad at deconstructionism. In any case, I think it is very interesting that the trait that made the celandine so popular with writers is the very thing that has made it so disliked by conservationists. To writers, it is a brave pioneer showing its cheery face to winter's back; to conservationists, it is an opportunistic sneaker that is a pain in the ass to get rid of.

What happens when human aesthetics conflict with environmental integrity? Is beauty a good enough reason to cause environmental degradation? Often the answer is yes. Offhand, I can think of a few examples.

  1. Beaches. Healthy beaches are oft-changing, strewn with biological jetsam, and overall not the sterile sunbathing experience many desire. So beach managers clean up the dead plants and animals and bring in tons and tons of sand to supplement the shores.
  2. Lawns. Modern lawns are biological deserts: monocultures of castrated grass with few invertebrates or competitors to speak of. But perhaps the human eye enjoys this uniformity, because lawns retain their popularity.
  3. Starlings. The introduction of the common starling (Sturnus vulgaris) is another popular tale of an invasive species and its literary history. In 1890, the American Acclimatization Society introduced the starling to New York as part of its effort to bring every bird in Shakespeare's works to America. It is now considered a damaging invasive species, as it spreads invasive plants, damages crops, and competes with native birds for nesting sites.
  4. Ornamentals. Many of the worst invasive plants were originally brought in as ornamentals. My study species, Japanese knotweed (Reynoutria japonica) was thought to be an attractive garden plant, though that thought was quickly tempered when gardeners saw how hard it was to eliminate.

So how do we deal with invasive species? How do we reconcile the competition between human desires and what we know is better for the environment? Perhaps it is something as small as dealing with some smelly kelp on our beaches every now and then. Perhaps it is something as big as changing human aesthetics to favor diversity over uniformity, creating art that celebrates the disorder that is often healthy for nature.

On Wordsworth’s memorial plaque in a church in Grasmere, England, there is an inscription:

ANNO 1851.

Below the inscription there is a carving of Wordsworth's head and his favorite flowers: the daffodil, the snowdrop, the violet, and the celandine. Unfortunately, the person who carved the plaque depicted the greater celandine (Chelidonium majus), of which Wordsworth had never written a word, instead of the lesser. Perhaps the lesson here is that, more than anything, we need to understand more about our natural world no matter what we choose to do with it. So perhaps you should not hate this little flower. But you should know about it.