When the 1906 earthquake struck San Francisco, most of the city’s buildings were made of wood (much of it redwood harvested from the once-vast stands of coastal redwood that grew in Northern California). This did not bode well for San Franciscans, because immediately after the earthquake, a series of fires spread quickly across the city, razing nearly every wooden structure that had withstood the temblor.
The Bekins building survived because it was made of a relatively new material that had largely been ignored (and vigorously opposed) in California. That material is reinforced concrete.
Concrete has great compressive strength: it can withstand high pressure without cracking. But it lacks tensile strength, meaning it cannot bend or stretch without breaking. Throughout the late 1800s, various builders tried to strengthen concrete with metal, mostly iron. With the advent of steel, which was becoming increasingly cheap to manufacture, and with a new technique of twisting the metal so that the wet concrete would bond to it more tightly, a new era of construction was born.
In the years before the 1906 earthquake, the use of concrete was resisted by the legions of bricklayers, masons and powerful builders’ unions that saw in the material a threat to their survival. Others called the material ugly and not worthy of a great city like San Francisco.
One trade publication at the time wrote: “a city of the dull grayness of concrete would defy all laws of beauty. Concrete does not lend itself architecturally to anything that appeals to the eye. Let us pause a moment before we transform our city into such hideousness as has been suggested by concrete engineers and others interested in its introduction.”
The resistance against concrete was formidable enough that the material was not used widely in the city. Even after the earthquake, it took a while for people to grasp its value. Despite the overwhelming evidence that this new building material could dramatically help a city not only withstand an earthquake but fire as well, San Francisco building codes still forbade the use of concrete in high, load-bearing walls.
It wasn’t until two years later, in a contentious meeting of the San Francisco Board of Supervisors, that the city changed its building codes to allow the widespread use of reinforced concrete. By 1910, the city had issued permits for 132 new reinforced concrete buildings. The science of building advanced hugely in the wake of the disaster.
Today, almost every tall building in the world makes use of steel-reinforced concrete. The survival of the Bekins building was transformational not only for the city of San Francisco; in many ways, it heralded a watershed moment in the history of architecture, construction, and the planet’s cities.
At Caltech, Clair Patterson’s relentless determination to understand the health impacts of atmospheric lead changed the world for the better.
It started by asking one of the biggest questions of them all: how old is the earth?
One might think that we’ve known the answer to this question for a long time, but the truth is that a definitive age for our planet was not established until 1953, and it happened right here in California.
Some of the earliest estimates of the earth’s age were derived from the Bible. Centuries ago, religious scholars did some simple math, synthesizing a number of passages of Biblical scripture, and calculated that roughly 6,000 years had elapsed from the creation described in Genesis to their own day. That must have seemed like a really long time to people back then.
Of course, once science got involved, the estimated age changed dramatically, but even into the 18th century, people’s sense of geologic time was still on human scales, largely incapable of comprehending an age in the billions of years. In 1779, the Comte de Buffon tried to obtain a value for the age of Earth using an experiment: he created a small globe that resembled Earth in composition and then measured its rate of cooling. His conclusion: Earth was about 75,000 years old.
But in 1907, scientists developed the technique of radiometric dating, which compares the amount of uranium in a rock with the amount of lead, the end product of uranium’s radioactive decay. The higher the ratio of lead to remaining uranium, the older the rock. Using this technique in 1913, British geologist Arthur Holmes put the Earth’s age at about 1.6 billion years, and in 1947, he pushed the age to about 3.4 billion years. Not bad. That was the (mostly) accepted figure when geochemist Clair Patterson arrived at the California Institute of Technology in Pasadena from the University of Chicago in 1952. (Radiometric dating remains today the predominant way geologists measure the age of rocks.)
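The arithmetic behind the basic uranium-lead method is simple exponential decay. The sketch below is a simplified illustration only, not Patterson’s far more refined lead-isotope technique; the function and variable names are my own, and it assumes all the lead in the sample came from uranium decay:

```python
import math

# Half-life of uranium-238, the parent isotope, in years
U238_HALF_LIFE = 4.468e9

def radiometric_age(parent_atoms, daughter_atoms, half_life):
    """Solve daughter/parent = e^(lambda * t) - 1 for the age t,
    where lambda = ln(2) / half_life is the decay constant."""
    decay_constant = math.log(2) / half_life
    return math.log(1 + daughter_atoms / parent_atoms) / decay_constant

# A rock holding equal numbers of lead and remaining uranium atoms
# is exactly one half-life old.
age = radiometric_age(parent_atoms=1.0, daughter_atoms=1.0,
                      half_life=U238_HALF_LIFE)
print(f"{age:.3e} years")
```

The more lead relative to uranium, the larger the logarithm and the older the rock, which is exactly the intuition described above.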
By employing a much more precise methodology, and using samples from the Canyon Diablo meteorite, Patterson was able to place the creation of the solar system, and its planetary bodies such as the earth, at around 4.6 billion years. (It is assumed that the meteorite formed at the same time as the rest of the solar system, including Earth). Subsequent studies have confirmed this number and it remains the accepted age of our planet.
Patterson’s discovery and the techniques he developed to extract and measure lead isotopes led one Caltech colleague to call his efforts “one of the most remarkable achievements in the whole field of geochemistry.”
But Patterson was not done.
In the course of his work on lead isotopes, Patterson began to realize that lead was far more prevalent in the environment than people imagined. In the experiments he was doing at Caltech, lead was everywhere.
“There was lead there that didn’t belong there,” Patterson recalled in a Caltech oral history. “More than there was supposed to be. Where did it come from?”
Patterson was flummoxed by the large amounts of environmental lead he was seeing in his experiments. It seemed to be everywhere: in the water and air, and in people’s hair, skin and blood. Figuring out why this was the case took him the rest of his career.
He found it so hard to get reliable measurements for his earth’s age experiments that he built one of the first scientific “clean rooms”, now an indispensable part of many scientific disciplines, and a precursor to the ultra-clean semiconductor fabrication plants (so-called “fabs”) where microprocessor chips are made. In fact, at that time, Patterson’s lab was the cleanest laboratory in the world.
To better understand this puzzle, Patterson turned to the oceans, and what he found astonished him. He knew that if he compared the lead levels in shallow and deep water, he could determine how oceanic lead had changed over time. In his experiments, he discovered that the ocean’s oldest columns of water, down deep, held little lead, but towards the surface, where younger water circulates, lead levels were some 20 times higher.
Then, going back millions of years, he analyzed microscopic plant and animal life from deep ocean sediments and discovered that they contained one-tenth to one-hundredth the amount of lead found in their modern counterparts around the globe.
He decided to look in places far from industrial centers, drilling into the ice of Greenland and Antarctica, where layers laid down year after year would let him see clearly how much lead had been in the environment long ago. He was able to show a dramatic increase in environmental lead beginning with the start of lead smelting in Greek and Roman times. Historians long ago documented the vast amounts of lead that were mined in Rome. Lead pipes carried water from town to town and supplied Roman homes, baths and fountains. Many Romans knew of lead’s dangers, but little was done. Rome, we all know, collapsed. Jean David C. Boulakia, writing in the American Journal of Archaeology, said: “The uses of lead were so extensive that lead poisoning, plumbism, has sometimes been given as one of the causes of the degeneracy of Roman citizens. Perhaps, after contributing to the rise of the Empire, lead helped to precipitate its fall.”
In his Greenland work, Patterson’s data showed a “200- or 300-fold increase” in lead from the 1700s to the present day; and, most astonishing, the largest concentrations occurred only in the last three decades. Were we, like the Romans, perhaps on the brink of an environmental calamity that could hasten the end of our civilization? Not if Patterson could help it.
That may be far too grandiose and speculative, but there was no doubting that there was far more lead in the modern world, and that it had appeared only recently. But why? And how?
In a Eureka moment, Patterson realized that the time frame of atmospheric lead’s rise he was seeing in his samples seemed to correlate perfectly with the advent of the automobile, and, more specifically, with the advent of leaded gasoline.
Leaded gas became a thing in the 1920s. Previously, car engines were plagued by a loud knocking sound, caused when pockets of the air-fuel mixture exploded prematurely inside the cylinders. The effect also dramatically reduced the engine’s efficiency. Automobile companies, seeking to get rid of the noise, discovered that by adding tetraethyl lead to gasoline, they could stop the knocking sound, and so-called Ethyl gasoline was born. “Fill her up with Ethyl,” people used to say when pulling up to the pump.
Despite what the Romans may have known about lead, it was still an immensely popular material. It was widely used in plumbing well into the 20th century as well as in paints and various industrial products. But there was little action taken to remove lead from our daily lives. The lead in a pipe or wall paint is one thing (hey, don’t eat it!), but pervasive lead in our air and water is something different.
After World War I, every household wanted a car and auto sales began to explode. Cars were perhaps the most practical invention of the early 20th century. They changed everything: roads, cities, work life and travel. And no one wanted their cars to make that infernal racket. So the lead additive industry boomed, too. By the 1960s, leaded gasoline accounted for 90% of all fuel sold worldwide.
But there were signs even then that something was wrong with lead.
A New York Times story from 1924 documented how one man was killed and another driven insane by inhaling gases released in the production of tetraethyl lead at the Bayway plant of the Standard Oil Company at Elizabeth, N.J. Many more cases of lead poisoning were documented in ensuing years, with studies showing that lead exposure causes not only physical illness but also serious mental problems and lower IQs. No one, however, was drawing the connection between all the lead being pumped into the air by automobiles and the potential health impacts. Patterson saw the connection.
When Patterson published his findings in 1963, he was met with both applause and derision. The billion-dollar oil and gas industry fought his ideas vigorously, trying to impugn his methods and his character. They even tried to pay him off to study something else. But it soon became apparent that Patterson was right. Patterson and other health officials realized that if nothing was done, the result could be a global health crisis that could end up causing millions of human deaths. Perhaps the decline of civilization itself.
Patterson was called before Congress to testify on his findings. While his arguments gained little traction there, they caught the attention of the nascent environmental movement in America, which had largely come into being as a result of Rachel Carson’s explosive 1962 book Silent Spring, which documented the decline of birds and other wildlife caused by the spraying of DDT for mosquito control. People were now alert to poisons in the environment, and they’d come to realize that some of the industrial giants that formed the foundation of our economy were also seriously harming the planet’s health.
Patterson was unrelenting in making his case, but he still faced serious opposition from the Ethyl companies and from Detroit. The government took half-hearted measures to address the problem. The EPA suggested reducing lead in gasoline step by step, to 60 to 65 percent by 1977. This enraged industry, but it also angered Patterson, who felt that wasn’t nearly enough. Industry sued, and the case went to the courts. Meanwhile, Patterson continued his research, collecting samples around Yosemite that showed definitively that the large rise in atmospheric lead was new and was coming from the cities (in this case, nearby San Francisco and Los Angeles). He analyzed human remains from Egyptian mummies and Peruvian graves and found they contained far less lead than modern bones, nearly 600 times less.
Years would pass with more hearings and more experiments, and the question of whether the EPA should regulate leaded gas more heavily went to the U.S. Court of Appeals. The EPA won, 5-4. “Man’s ability to alter his environment,” the court ruled, “has developed far more rapidly than his ability to foresee with certainty the effects of his alterations.”
The Clean Air Act of 1970 initiated the development of national air-quality standards, including emission controls on cars.
In 1976, the EPA’s new rules went into effect and the results were almost immediate: environmental lead plummeted. The numbers continued to plummet as lead was further banned as a gasoline additive and from other products like canned seafood (lead was used as a sealant). Amazingly, there was still tremendous denial within American industry.
Although the use of leaded gas declined dramatically beginning with the Clean Air Act, it wasn’t until 1986, when the EPA called for a near ban of leaded gasoline, that we seemed to finally be close to ridding ourselves of the scourge of atmospheric lead. With the amendment of the Clean Air Act four years later, it became unlawful for leaded gasoline to be sold at all at service stations beginning December 31, 1995. Patterson died just three weeks earlier at the age of 73.
Clair Patterson is a name that few people know today, yet his work not only changed our understanding of the earth itself, but also likely saved millions of lives. When Patterson was finally accepted into the National Academy of Sciences in 1987, Barclay Kamb, a Caltech colleague, summed up his career this way: “His thinking and imagination are so far ahead of the times that he has often gone misunderstood and unappreciated for years, until his colleagues finally caught up and realized he was right.”
Clair Patterson is one of the most unsung of the great 20th-century scientists, and his name deserves to be better known.