I take great pleasure in introducing you to a guest blogger, William M. Goodman, Ph.D. His degrees, from the University of Waterloo, Canada, were in Philosophy (including decision theory, computer modeling, ethics, and philosophy of education). Yet his love of joining theory to practice led to teaching, consulting, and researching in Statistics and Risk Management, in industry and in the Faculty of Business and IT, of Ontario Tech University. He is now retired, but remains an Adjunct Professor and active in research. His main hobby is listening to and playing music.
When originally preparing this reflection on Just the Facts, I had not anticipated that I’d present it initially in a discussion on critical thinking and argumentation. I was thinking only about … well … just facts. Yet, as several previous posts in this blog have made clear, what we know as “the facts” is very hard to disentangle from the questions we think to ask, and from the depth of our reflection and critical analysis.
This post’s title suggests, ironically, a link between just plain fact and argument: One can imagine a hardened detective, as in the old TV show “Dragnet”, leaning over an evasive and uncooperative suspect, or witness, and growling: “Stop prevaricating with fanciful details and embellishments. Just give me the facts.” This same cool perspective was foreshadowed, as far back as 1842, by the Chief Justice of Massachusetts. “Looking solely at the indictment,” he writes, in his decision in Commonwealth vs. Hunt, “disregarding the qualifying epithets, recitals, and immaterial allegations, and confining ourselves to facts…”, he goes on to overturn the decisions of lower courts—on the basis of eminently persuasive arguments. (Chief Justice Lemuel Shaw, Massachusetts Supreme Court, opinion, Commonwealth vs. Hunt, March 1842.)
Although at first glance they are the paradigms of straightforwardness, something about facts seems to invite perpetual controversy and dichotomizing (even before today’s “fake facts” debates!). Those who, like the 19th-century biologist Virchow, prefer facts contrast them with so-called “speculation”; the latter is the stuff on which Church and state moral prohibitions, and so on, are based. Then there are those like the British idealist philosopher Bradley, who actually like to speculate, and so oppose the noble Objects of Metaphysics to the more questionable “facts” of science. (F.H. Bradley, Appearance and Reality: A Metaphysical Essay, London: George Allen and Unwin, 1893.) Plato, much earlier, also thought that metaphysical objects (which he called “forms” or “ideas”) represent the most fundamental reality, and he was not particularly interested in facts, in their everyday sense. (In Plato’s Republic, for example, in Book III, sections 402-403, education is called on to pursue the forms, rather than chase after mere appearances.)
A modern variant of Virchow’s dichotomy was expressed by Carl Oglesby, a JFK-assassination buff. (Oglesby recapped many of his earlier lectures and publications on this topic in The JFK Assassination: The Facts and Theories, New York, N.Y.: Signet, 1992.) His longstanding research, he thought, addresses the facts of the case–by which he means the things that “really” happened, regardless of whether anyone is aware of them. He contrasts these with mere “theories”. These “connecting stories”, as he calls them, can be constructed with or without good faith or good judgement; and, at best, they just fill in the blanks when facts are unknown.
But a member of a congressional inquiry on that same assassination takes a different view. Facts, for him, are what “the best evidence of science supports”. (Though some might say such constructions are “theories”.) The relevant contradistinction, he holds, is between facts, as thus conceived, and mere “appearances”. (Examples, in his mind, of appearances include eyewitness reports, where these run counter to the putative “facts”, as compiled by the experts.) The alternative bifurcations on the topic are endless: “Facts vs. Fiction.” “Facts vs. Values.” “Facts vs. Concepts.” …To name just three.
Returning to our no-nonsense detective, notice that he did not say “The Truth. Just give me the Truth.” Simple falsehoods—if that’s what the suspect was indeed issuing—are only one way of obscuring relevant details. But they are a simple way. What is merely covered can, with appropriate skills, be uncovered. The difficulties compound when facts, near-facts, and fiction come pre-packaged in an integrated whole that must be untangled. On the fact vs. theory model, the detective is saying “Look. I’ll do the theorizing here, the connecting together. You just give me the raw materials for my story, not for yours.”
This leads to the first element of my own position. I start from a loose variant of the so-called facts vs. theories model. I hold theories in higher esteem, perhaps, than Oglesby, and have not so much confidence in facts—as rock bottom or “real” starting points. But whatever facts are, theories do seem to be stories for connecting them; and, significantly, do appear to inherit, or be limited by, any flaws in our knowledge of the requisite, underlying facts.
However, I suggest that the boundary between facts and theories is fluid and fuzzy. It’s as changeable as those darned “statues of Daedalus” which so annoyed Socrates in Plato’s dialogue Meno. (The legendary Greek craftsman Daedalus’s statues were said to be so life-like that people thought they had to be tied down to prevent them from running away!) Similarly, what is fact in one context is theory in another. For example, if I am surveying some property, the relations specified by Euclidean geometry and trigonometry are merely useful facts at my disposal. They give me confidence in using procedures like triangulation, for instance, to justify my conclusions. In a university Mathematics class, however, those same principles may just be theories–and not uncontested ones, at that.
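To make the surveying example concrete, here is a small sketch of triangulation at work (the numbers are my own toy values, not part of any real survey), treating the trigonometric relations as settled “facts” at our disposal:

```python
import math

def triangulate(baseline, angle_a_deg, angle_b_deg):
    """Distance from point A to a remote point C, given a measured
    baseline AB and the sighting angles at A and B (law of sines)."""
    angle_a = math.radians(angle_a_deg)
    angle_b = math.radians(angle_b_deg)
    angle_c = math.pi - angle_a - angle_b  # angles of a triangle sum to pi
    # Side AC is opposite angle B: AC / sin(B) = AB / sin(C)
    return baseline * math.sin(angle_b) / math.sin(angle_c)

# A 100 m baseline with 60-degree sighting angles at both ends puts the
# target 100 m away (the triangle is equilateral).
print(round(triangulate(100.0, 60.0, 60.0), 6))
```

The surveyor simply uses the law of sines; only in the mathematics classroom does its derivation itself come up for scrutiny.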
The same goes for questions of quarks, quasars, and extended qualities. What provides some order to the chaos of where facts begin is the context of critical argumentation within which any given fact/theory distinction is being embedded. It is just as critical to sort out the reasonable bounds for these contexts as to discover specific new facts or to invent new theories. Indeed, the first of these tasks is of a piece with the latter two. The process is dialogical, and continuing.
All this, I hope, will be clarified by considering a research challenge encountered in the 1990’s at Ontario Hydro, then the province’s power generation and transmission company. Joint studies were in progress to examine possible health effects (if any) of exposure to electromagnetic-field radiation (or “EMF” for short). To pool data and increase the power of the study, Ontario Hydro worked in tandem with Hydro Quebec and Électricité de France, plus universities in their jurisdictions.
The study’s distribution of research responsibilities followed a traditional model. The participating universities were mandated to draw the theoretical, scientific conclusions, and the electrical utilities “only” had to supply the facts.
It was intended to work like this: The university-based teams (epidemiologists, statisticians, and so on) had access to two lists: cases (that is, workers or retirees known to have contracted brain cancer or leukemia) and controls (workers or retirees who were healthy). For each of these individuals, available information would be linked about their career in the electrical industry, particularly the occupations they performed over the years, and at what sites.
What the universities were expecting from the utility companies was data to plug into these biographies. In other words, they sought to “translate” these work-historical details into EMF-dose histories for the cases and controls. Then (after some possible adjustments to handle confounding variables) the correlations between dose histories and disease risk would presumably be readily computable.
But here’s the catch, relating to the “facts” issue: Suppose Case 1, a male, has worked at the occupation “Maintenance Supervisor”. What data could a utility company supply the epidemiologists to establish the “facts” about this person’s dose accumulation? Is it the average dose of all supervisors, independently of what they supervise? Or how about the average dose for all maintenance workers, independently of their ranks in the companies? (The latter option would ignore that the day-to-day activities of people, and so their possible exposures, might vary with their ranks.) Do we maybe need a narrower category bin (e.g. “What type of maintenance?”)? Or a wider bin? And might not work location also affect a worker’s dose?
Regarding the latter point, measurements in some small generating stations have shown that even desk-based workers in those locations can have doses similar to those of electrical workers elsewhere—due to the physical proximity of the generating apparatus to the offices for desk jobs. So, is just looking up the average dose for all clerks a way of getting the facts about what a particular clerk may have actually experienced?
Issues like these perplexed the study’s collaborators. Responding to budget constraints, Hydro Quebec went the route of developing a finite, “a priori” list of the occupations in the industry; so, they just had to sample these groups to find each group’s “typical” values and “typical” variances. Ontario Hydro, on the other hand, decided to correlate each worker’s presumed dose with not just their occupation (similar to Quebec’s list) but also with an additional, site-related variable. The different teams’ design choices impacted their apparent study findings.
Besides the above issue, the participating utility companies faced other problems: Is a group’s “average” dose better represented by the sample’s geometric mean, or its arithmetic mean, or some other parameter? And what was the most “factual” approach to map 1990s’ parameters onto occupations people had in prior years—when, for example, the system load, design, and even the operating cycle, were different? In short, even if we know that individual “Case 1” held a specific series of jobs at known times, and at known sites, who can vouch for the facts about what he really experienced in terms of lifetime EMF dose?
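To illustrate why the choice of “average” is not an innocent one, here is a toy sketch with invented dose readings (not data from the actual study). Exposure data are often right-skewed, and for skewed samples the two means can diverge considerably:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean of the logs; defined only for positive readings
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical EMF dose readings (arbitrary units) for one occupation
# group -- mostly low, with one high reading, as exposure data often are.
doses = [0.2, 0.3, 0.3, 0.5, 4.0]
print(round(arithmetic_mean(doses), 3))  # pulled upward by the one high reading
print(round(geometric_mean(doses), 3))   # closer to the "typical" low readings
```

Which of the two is the “fact” about the group’s typical dose? The question has no answer independent of what the number will be used for.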
None of this is to suggest that, relative to accepted scientific norms, there was anything wrong with the study that was conducted. Its challenges are representative of fundamental problems for research in general. The study had excellent credentials: Its research protocols and stages were monitored by the Royal Society of Canada (whose report was published in 2000: https://rsc-src.ca/sites/default/files/Hydro-%20EMFreport.pdf), and the Institute for Risk Research at the University of Waterloo also reviewed the protocols.
Yet, suppose that additional time and funding—without limit—could be poured into following up this research. Would that ideal situation increase our confidence in the information that is gathered? Not necessarily; at least not in a linear sense. I think our confidence in relation to facts is something like our purported relation to needs in what psychologists and human resource people know as Abraham Maslow’s “Hierarchy of Needs”. Maslow posits a needs hierarchy ranging from rock-bottom physiological needs (for food, sleep, and so on) through safety and social needs, and up to ego (recognition) needs and, finally, self-actualization. Each higher-level need, it is said, is activated only when, and if, all its foundational needs are being satisfied. Otherwise, one is preoccupied with meeting just those earlier requirements.
Our relation to facts is analogous, as I’ll try to illustrate: An “ideal” EMF study might take the form of a super cohort study. That is, it would not start at the end of the hypothesized disease-development process, and peer back, blindly, to recapture 30 years of prior experience. Instead, it would start now. It would give everyone who works at a utility company an EMF dosimeter; and then it would patiently wait for the next 30 years, logging all measurements for all employees of all ranks at all sites during this time. At the end of this period, researchers could observe who has, and who has not, developed disease symptoms. Then, since all the employees’ dose records would be readily available, there would be no need to infer employees’ dose histories based on intermediate variables like occupation or work location.
From the perspective of the real-world EMF researchers who struggled in the 1990s to meet what could be called “low level” needs for facts, the imagined ideal data described above might have seemed like paradise. The actual data collection efforts, indeed, were aimed precisely to fill in the missing pieces that in the utopian scenario would not have been missing at all.
But if the imagined, 30-year cohort study really happened, a whole new level of needs would be identified, to yet again undermine our full confidence in the facts. For example, EMF dosimeters are usually worn on one body location per person, say, on the belt. Yet, for a variety of reasons, concerning location of source, shielding in the environment, etc., different parts of the body can be differentially exposed. So even given an exhaustive record of a worker’s waist-level dose, can we be sure (in the case of a study, for example, for brain-cancer causation) whether the same dose was experienced at workers’ head levels? And if the records pertain only to workers’ time at work, what about differential exposures of workers during non-work time? Could these factors confound the picture of the person’s true dose?
Lastly, 30 years of dose experience presents a long, complex picture: Periods of relatively consistent readings, interrupted with peaks and valleys; patterns which endured for just minutes or hours, and others lasting for weeks or years. If every individual data point over 30 years is treated as equally significant, any comparisons among people will be virtually impossible. But if grouping and summary statistics are allowed, which ones are more in accord with the real facts? We are tempted to answer: “Attend to just the patterns that matter, and ignore the inconsequential ones.” But hold on! We’re supposed to be just collecting the facts, in this scenario. We’re not supposed to know yet which facts will turn out to be causally, or otherwise, significant.
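A minimal sketch, again with invented numbers, of how the choice of summary statistic can reverse the apparent ranking of two workers’ exposures:

```python
# Two hypothetical 30-year dose series, coarsely sampled (toy values):
# worker A has a steady moderate exposure; worker B is mostly low-dose,
# except for one brief high peak.
worker_a = [1.0] * 10
worker_b = [0.2] * 9 + [5.0]

mean_a = sum(worker_a) / len(worker_a)
mean_b = sum(worker_b) / len(worker_b)
peak_a, peak_b = max(worker_a), max(worker_b)

print(mean_a > mean_b)  # True: by average dose, worker A looks more exposed
print(peak_b > peak_a)  # True: by peak dose, worker B does
```

Until we know whether averages or peaks matter causally, neither summary can claim to be “the facts” about exposure.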
Where all this is heading, in my view, is towards a position akin to that of Larry Haworth and Conrad Brunk, at University of Waterloo, and Brenda Lee in Montreal. Their focus has been on risk assessment projects, and their main concern has been with normative issues, but I believe the principles involved to be more general. In their book Value Assumptions in Risk Assessment: A Case Study of the Alachlor Controversy, they question the traditional model for risk assessment (RA). This model assumes that objective scientific research can first collect the unbiased facts about a substance—its nature and hazards; ‘only’ then need the findings be presented to politicians, interest groups, and the public for debate about the preferred alternative responses (such as bans, warning labels, and so on).
The traditional RA model, say the above authors, is impossible. At countless points in the fact gathering, there are blanks to be filled in with assumptions. The choices made at this point already reflect a set of values, no less than opinions expressed in public debates. A classic example: When testing the “inherent” risk of pesticides for those who spread them, do you base your measurements on the assumption (as the manufacturers do) that workers are wearing the required protective clothing? Or (as other groups insist) do you assume that some workers will not be wearing this clothing—and argue that their risk matters too? This choice has enormous implications for how data are collected, and what findings are reached.
The values issue, per se, is beyond the scope of this post. But I agree there are blanks to be filled in any fact-gathering process. If one is content—whether through ignorance, budget considerations, hidden agenda, or any other reason—to get the project off the ground at a certain point, then quanta which others may view as theories or values can be plugged in as “assumed facts”. If interest or necessity makes this impossible, then other levels of facts can be uncovered and supported by their own theories and assumptions.
Perhaps a Peirce-like convergence towards a scientific consensus is possible; I’m not sure. The EMF examples may at first seem to suggest this: Even though the facts after a 30-year, comprehensive cohort study would still be uncertain, one suspects that the statistical “error bars” around the parameters identified would be narrower than those associated with more limited studies. Could additional studies keep narrowing this variance indefinitely? … But then, what if a change of analytical paradigm intervenes? (Perhaps it is not average or accumulated dose that matters, after all, in the effect on health—but only the peaks and duration of transient EMF bursts. The dosimeters used for 30 years may simply not have recorded these bursts.) All this could disrupt the tidy appearance of convergence.
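The error-bar intuition can be sketched numerically. Assuming (purely for illustration) a fixed population standard deviation of dose readings, the standard error of a sample mean narrows only as the square root of the sample size:

```python
import math

def standard_error(sd, n):
    # The error bar around a sample mean shrinks as 1 / sqrt(n):
    # quadrupling the sample size only halves the uncertainty.
    return sd / math.sqrt(n)

sd = 2.0  # assumed population standard deviation (arbitrary units)
for n in [25, 100, 400]:
    print(n, standard_error(sd, n))
```

The narrowing is real but ever slower, and, as noted above, it offers no protection at all against a change of analytical paradigm: if the dosimeters never recorded transient bursts, no sample size recovers them.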
This post does not venture into claims about what final progress is possible. Its emphasis is that any fact-finding enterprise occurs within a specific argument- or inquiry-based context. Some facts will be examined quite closely, and made to give a good account for themselves. Other facts—or constructs or assumptions or whatever one chooses to call them—will be drawn from a common pool of accepted starting points. Either there’s no need, or no time, to argue about them just now. If, in the course of a debate (whether a public debate, or an internal, dialectic analysis) these assumptions are questioned, then the need-to-know focus has shifted, and the fact-gathering must be renewed.
This post also sidesteps many traditional questions of epistemology. Fact-finding, as mentioned, occurs in a context of already held beliefs or knowledge. There’s plenty of scope for the usual debates about what exactly constitutes, or justifies one’s belief systems. This post is neutral on the subject. So long as the matters are believed—or less strongly, so long as no one is presently questioning the beliefs—fresh inquiries into new facts can begin.
At any rate, that’s the facts as I see ‘em.