Not all Science begins as clearly correct, nor does all non-science begin as clearly wrong. What happens to discoveries and theories over time that might move them from the borderlands of science (the "fringes") into or out of Science? What factors and institutions might be involved in keeping real science in the fringes or keeping non-science in the mainstream? We'll take a look at several case studies.
Case Study 1: Do the Continents Move?
Do the continents move over time?
- First suggested in 16th Century based on coastlines
- A common alternative model: the continental masses remain in the same spot laterally, but move vertically:
- Ancient ocean floor uplifted to form land; what is now sea floor may have been once above water
Some data to explain if the continents did not move:
- Shared sets of animal and plant fossils found on continents now separated by wide oceans
- Matching mountain ranges, geological formations across modern oceans
- Patterns of ancient glacial movement did not match modern climate zones
It was discovered in the late 19th Century that the sea floor is flat and everywhere composed of dense volcanic rock: very different from the continents, with their mixed rock types and much lower average density. Thus, the ocean floor does NOT represent simply submerged versions of today's continents, and the submergence/emergence model of past geography was clearly rejected. Additionally, it was discovered that when mapped out, earthquakes and volcanoes tend to follow particular tracks along the margins of some continents, in the middle of oceans, and in other patterns that called for some explanatory theory.
Continental Drift: Theory proposed by Alfred Wegener, German geophysicist and glaciologist, in 1915. His model: the light continental masses move over the dense layer of oceanic crust (by analogy to the motion of light glacial ice over the bedrock below). Volcanoes, mountain building, and earthquakes are caused by crumpling of the continental masses as they move along. In the distant past the continents were united, but subsequently some force broke them apart and has been moving them ever since.
Resistance to continental drift was strong in the US, Canada, and the UK (although the idea was more widely accepted by Southern Hemisphere geologists). In part, northern resistance arose because Wegener failed to propose a causal mechanism that could be well-verified (not that their own stabilist model had a verified causal mechanism, either!). But there were ad hominem components to the rejection, too: in part post-war Germanophobia, and in part cross-disciplinary "snobbery". At a 1926 meeting of the American Association of Petroleum Geologists, the majority of the talks were strongly against Continental Drift. From this point on, continental drift became a fringe subject among Northern Hemisphere geologists.
Sea-Floor Spreading: In the 1940s and 1950s some geologists (notably Arthur Holmes and Harry Hess) proposed a mechanism to move continents: the continents did not move OVER the oceanic crust, but were carried along with it as the sea floor itself was recycled. In the post-WWII era, additional discoveries concerning the depth of earthquakes and the age of oceanic crust confirmed sea-floor spreading.
Plate Tectonics: models of continental drift and sea-floor spreading were combined by John Tuzo Wilson and colleagues to form plate tectonics.
- Earth's surface is composed of numerous rigid lithospheric plates
- Plates themselves carry thick continental and/or thin oceanic crustal rock
- New material generated at divergent boundaries (mid-ocean ridges in sea, rift valleys on land)
- Plates slide past each other at transform boundaries
- Oceanic crust is lost under other ocean crust or continents at subduction zones (site of deep-sea trenches, earthquakes, volcanoes, etc.)
- Two continental masses meet at collisional boundaries
Most of geology occurs by the interactions between plates at their various boundaries:
(Figure: plate boundaries, from the online USGS pamphlet "This Dynamic Earth".)
Plate velocities predicted by the theory were confirmed by GPS studies in the 1990s.
Lessons from Plate Tectonics: Initial resistance to mobile continents was at least in part due to scientific critiques, but also to ad hominem argumentation. When the evidence became overwhelming, the profession of science was forced to accept it. (Also worth noting: there was not really any vested interest (financial, philosophical, etc.) for or against plate tectonics.)
Case Study 2: Vaccines
Since ancient times, some people have noticed that 1) people who are exposed to certain diseases seem to be immune to similar but much-more-deadly diseases and 2) by exposing people in advance to the weaker disease, you might therefore give them a better chance of surviving the serious one. This practice made its way into Western medical science through the work of Edward Jenner in the 1790s. In his particular case, he showed that milkmaids and others who had been exposed to cowpox were immune to smallpox, one of the most deadly diseases ravaging the world at the time.
Jenner used the procedure of injecting material derived from a cowpox pustule into a healthy person, who then got a (non-serious) cowpox infection. Many people (even in the medical community) railed against this idea as reckless at best, and practically witchcraft (given that it was in part derived from folk practices) at worst. Nevertheless, the success of the procedure was demonstrated, and from the 18th Century onward vaccinations have become the second-most beneficial public health procedure worldwide, eliminating or taming many once-deadly diseases.
(For those wondering, the MOST beneficial public health procedure is something so commonplace we often forget it too was once radical: waste management, particularly of bodily wastes, particularly separating these from drinking water. Yeah, gross...)
Case Study 3: How Do Medical Substances Affect the Body?
Homeopathy started as a branch of medicine invented by Samuel Hahnemann (1755-1843). The name means "similar suffering" (from the Greek homoios + pathos). Its premise was called the Law of Similars: similia similibus curantur ("like cures like"). This law postulated that the best way to treat diseases is with extremely small amounts of substances that produce similar symptoms. In one context, this could be thought of as a precursor to the idea of vaccination (a small amount of the pathogen or a close relative is injected into the body, causing an immune response that increases resistance to future encounters with the serious pathogen).
However, homeopathy is something different: it is a method of treating diseases and their symptoms rather than a preventive. If you develop a fever, take a small amount of bee venom extract, since bee stings produce very similar symptoms to fevers. Hahnemann and his followers created a vast list of substances to use in treating various symptoms.
Homeopathy was contrasted with "allopathy" ("other suffering"), in which symptoms are treated with medicines that counteract those symptoms rather than duplicate them. The two can be compared and contrasted:
- Both derived from earlier herbalist traditions
- Homeopathy involved treating symptoms with substances that produced the same symptoms, assuming that the body will take care of the origin of the problem; allopathy treated symptoms with substances that reduced their effects, eventually eliminating the pathogen
- Homeopathy assumed that the smaller the dose the more effective it was; in allopathy dosage was not so simple, but in general the larger the dose the larger the effect
- In the early 19th Century, given the poor state of knowledge of allopathic medicines and the worse state of other medical procedures (anesthesia, surgery, etc.), homeopathy was probably at least as effective as (if not demonstrably less dangerous than!) allopathy
However, allopathy rather than homeopathy is the origin of modern medicine. Why is that? Let's look at homeopathy's premise with regards to dosages.
Homeopathic "cures" are delivered at dilutions of "nX" or "nC" (where "n" is an integer).
- For X:
- 1X: dissolve 1 part of substance in 10 parts of inert substance, mix
- 2X: make 1X solution, take 1 part of that in 10 parts of inert substance, mix
- Industry standard is 30X: 30 successive dilutions of 1 part in 10
- For C:
- As above, but 1 part of substance into 100 parts of inert
- Industry standards include 200C: 200 successive dilutions of 1 part into 100
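The X and C schemes above are just successive multiplications, so the overall dilution is easy to compute. A minimal Python sketch (the function name is ours, not a standard one):

```python
# Total dilution after n successive homeopathic dilution steps.
# scale=10 for "X" potencies, scale=100 for "C" potencies.
def dilution_factor(n, scale):
    """Return how many parts of inert carrier hold 1 part of active substance."""
    return scale ** n

print(dilution_factor(30, 10))    # 30X -> 1 part in 10^30
print(dilution_factor(200, 100))  # 200C -> 1 part in 10^400
```

Note that an nC preparation equals a 2nX one: each 1-in-100 step is two 1-in-10 steps, which is why 12C and 24X mark the same limit below.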
At the time no one really understood the size of the atom nor the nature of compounds, so there was no real understanding of the limits of dilution. This changed due to the work of Lorenzo Romano Amedeo Carlo Avogadro, conte di Quaregna e di Cerreto (1776-1856). He was the first to recognize the distinction between molecules and atoms (1811), and his work pointed towards a method of determining the number of molecules in a given amount of substance. A rough estimate was calculated by chemists in 1860; the modern value (6.022 x 10^23 molecules/mole) was established by the early 20th Century. (NOTE: Avogadro NEVER knew Avogadro's Number!!)
Let's see how homeopathic dilutions stand with regards to Avogadro's number:
- 30X means 1 molecule active per 10^30 inert
- Need to drink 7874 gallons to get 1 molecule active
- 200C means 1 molecule active per 10^400 inert
- But there are only about 10^80 atoms in the known universe!!
- So you would have to drink many, many, many, many universes' worth of liquid to guarantee a molecule of the active ingredient!
- Actual limit of dilution is around 24X (12C)
In other words, homeopathic "cures" are nothing but the inert material!! They are just water, or sugar pills! There is no "there" there!
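These back-of-envelope numbers can be checked directly. A sketch assuming the inert carrier is water (constants are approximate, and the function name is ours; slightly different assumptions give slightly different gallon counts):

```python
import math

AVOGADRO = 6.022e23        # molecules per mole
WATER_MOLAR_MASS = 18.015  # grams per mole
LITERS_PER_GALLON = 3.785

def gallons_for_one_molecule(n):
    """Gallons of water needed to expect a single molecule of active
    ingredient after n successive 1-in-10 ("X") dilutions."""
    inert_molecules = 10.0 ** n
    grams = inert_molecules / AVOGADRO * WATER_MOLAR_MASS
    return grams / 1000.0 / LITERS_PER_GALLON  # water is ~1 g/mL

print(round(gallons_for_one_molecule(30)))  # ~7,900 gallons at 30X

# The Avogadro limit: one mole's worth of active ingredient runs out
# after about log10(6.022e23) ~ 24 tenfold dilutions, i.e. 24X or 12C.
print(round(math.log10(AVOGADRO)))  # 24
```

The same arithmetic shows why 200C is absurd: 10^400 inert molecules is some 320 orders of magnitude more matter than the observable universe contains.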
Homeopathy was thus rejected by Science. It started as a possible scientific theory, comparably reasonable to its alternative, but it moved out of Science as Chemistry itself developed. Furthermore, it has failed time and again in double-blind studies. Homeopaths have tried to develop explanatory models for their work, which get more and more ludicrous: that water retains a "memory" of the healing agent (why that, and not everything else those water molecules ever encountered?); that it works by nanotechnology, somehow; and so on.
However, while pushed beyond the fringes by Science, it still has mainstream acceptance. Since it doesn't produce any side effects (or, indeed, effects!) it is less scary than real medicine. It is widely practiced in Europe, and it has been protected under US FDA regulations thanks to congressional support generation after generation. It is a large industry--not as large as "allopathic" pharma, of course, but a for-profit industry nevertheless. So there are social, political, and economic forces that keep it within a form of "mainstream" long after Science rejected it.
But what if homeopathy REALLY became mainstream...?
Case Study 4: Biological Racialism and Scientific Racism
Anyone with experience with Western culture is familiar with the "modern" American concept of Race: that is, that a race is "a local geographic human population distinguished as a more or less distinct group of genetically transmitted characteristics." In practice, we generally recognize that there are only a small number of distinct races: enough to count on your fingers, perhaps few enough to count on the fingers of one hand.
What many people do not know is that this concept is historically recent, is not universally recognized, and is not supported by biological anthropology (the study of geographic human variation), genetics, or history.
The English word "race" is a cognate of similar words in German, Italian, Spanish & Portuguese. Its first recorded use in English is from 1570, where it meant "the offspring or posterity of a person; a set of children or descendants." The second definition (from 1580) was for the opposite end of human diversity: "one of the great divisions of living creatures, Mankind." In this context there was only one race, "the human race" or "the race of men." It is only in 1774 that the word in English achieved its contemporary usage: "one of the great divisions of mankind, having certain physical peculiarities in common."
WAIT! 1774!! Didn't people notice that people from different parts of the world looked different before this? Of course they did. But what they didn't do is subdivide all of humanity into a small number of categories. That isn't to say they were egalitarian, because almost universally they were not. The Egyptians were Egyptian-supremacists, the Romans Roman-supremacists, the Han Han-supremacists, etc. (On rare occasions you would get a social critic who looked at some other society which they idealized as "noble savages" to shed light on the foibles of their own culture.) But the Egyptians didn't see a category that united them and the people of Punt and Libya as one group, the Hebrews and Canaanites and Babylonians as another, the Minoans and mainland Greeks as another, etc.; nor did Romans see that they, Gauls, Britons, Teutonic tribes, Greeks, and other peoples of Europa represented a single category from those of Asia or Africa; and so on.
It was in fact biological taxonomists like Linnaeus, Blumenbach, and others who started in the mid-1700s to categorize and catalogue the diversity of all living things. Of course, humans were part of that diversity, and so they categorized people into a small number (four, five, or more) subcategories, each with their own particular physical traits. As it turns out, biological anthropologists in the 1800s and early 1900s, as they collected more and more data, couldn't actually fit the diversity into just a few categories, and wound up proposing 30 or more races! You probably haven't heard of this, because modern racialists always seem to harken back to the three or four or five or so number of races...
By the 20th Century it was clear to those who studied the subject that there were no actually defensible boundaries in human diversity. Instead there is only continuous variation across regions. "Races" are attempts to force rigid structure onto a gradient. In some cases (particularly, North America), much of the population has come from a small number of spots in that continuity, which gives the false impression that there are a small finite number of distinct groups. But all inhabitants of the New World are relatively recent arrivals, from the First Americans who arrived 16,000 years ago or so to the post-Columbian colonization, Atlantic slave trade, and later immigration.
A consequence of all this is that we are still saddled with a heritage of racism (the belief that some races are superior to others, and the set of social systems that develop to support this). Even more widespread, though, is the acceptance of racialism: the idea that races as discussed above are real, natural entities rather than social conventions. This is an example of reification (literally, to "thing-ify"): the assumption that because a concept has been given a name, the named thing must exist in reality.
We still see on the fringes of science and policy (and sometimes not so much on the fringes, unfortunately) people who promote racist ideas cloaked in science. A classic example of this was the 1994 book The Bell Curve by psychologist R.J. Herrnstein & political scientist C. Murray. Their basic premises were:
- There are inherited differences in IQ (with Asians slightly higher than Caucasians, and Caucasians averaging 15 points higher than African-Americans, in concordance with some standard racist stereotyping)
- That certain social behaviors (tendency to commit crime; having children out of wedlock; being unemployed) correlated better with IQ than with socioeconomic status
As one can imagine, these claims were seized upon not only by racists but by those who objected, for whatever reason, to governmental spending on issues like public education. But very quickly Herrnstein and Murray's colleagues showed the errors in those claims:
- For instance, the spread of a trait around its population mean can be highly heritable without the value of that mean being genetically fixed. For a classic example, height in a human population shows strong heritability: within the population, who is taller than whom tracks ancestry closely. But the expressed value of the mean (and the spread around it) is largely contingent on environmental factors. So the same spread of genes in a population with a mainly grain-based diet will be expressed as a shorter average height than exactly the same distribution of genes when that population has a diet with more meat in it.
- Furthermore, it had been seen that measured IQ values in many places around the world tended to jump about 15 points when those societies adopted formal schooling (often as such schools were introduced to rural areas during industrialization or decolonization). Notably, this is the same difference seen between the values Herrnstein and Murray found for historically-underfunded inner city (primarily black) schools and better-funded suburban (primarily white) schools. So despite their claims to the contrary, the differences really did support the idea that more equitable funding of the inner city would bring up the scores.
- They had also ignored the studies that showed that black children from poorly-funded inner city communities who were adopted into affluent homes once again showed a jump in IQ scores, regardless of the ethnicity of the adoptive parents.
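The height example in the first bullet can be sketched numerically. In this toy model (all numbers invented for illustration), two populations share exactly the same distribution of "height genes", and only the dietary baseline differs:

```python
import random

random.seed(1)

# Identical genetic contributions in both populations (spread of ~5 cm).
genes = [random.gauss(0, 5) for _ in range(10_000)]

grain_diet = [160 + g for g in genes]  # environment sets a 160 cm baseline
meat_diet = [170 + g for g in genes]   # same genes, richer diet: 170 cm

mean = lambda xs: sum(xs) / len(xs)
# Within each group, who is taller than whom is set entirely by genes;
# the gap BETWEEN the groups is set entirely by environment.
print(round(mean(meat_diet) - mean(grain_diet)))  # 10
```

Heritability measured within either group would be perfect, yet the 10 cm between-group gap is 100% environmental: high within-group heritability tells you nothing about the cause of between-group differences.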
Closing Thoughts
As we can see, scientific ideas are not fixed in time. And (sadly) the factors which control their acceptance or rejection can be based on aspects outside of their evidentiary support. So there can be ideas which are fundamental science but are considered non-science by substantial sections of society (e.g., vaccinations, evolution, climate change); ideas which are pseudoscience, but considered scientific by large demographics (e.g., homeopathy, anti-vaccination ideas, biological racism); and ideas which are still on the fringes, awaiting observations and discoveries sufficiently decisive to make them mainstream science (e.g., dark matter, dark energy, string theory).