As a nuclear engineer, I use radiation physics codes quite a bit. We use this software to model radiation effects experiments, to perform reactor safety calculations, and to complete nuclear criticality safety analyses. The regulatory framework we operate in requires that we validate the software thoroughly. By validation, I mean comparing the computer modeling results against real-world measurements of the same quantities. In fact, the radiation transport and nuclear physics communities have performed many experiments using simple physical geometries and well-characterized material compositions, designed specifically so that our computer codes can model them exactly. These are known as benchmark experiments, and I have performed both reactor benchmark experiments and code calculations for comparisons (link).
These comparisons are quite rigorous and quantitative. For instance, our transport codes predict the neutron multiplication factor for benchmark experiments to within 1%. When we predict things like radiation doses and particles crossing certain boundaries, we generally don't do quite as well; those quantities are usually predicted to within 5-20%, depending on the complexity of the benchmark. These results allow us to use our software with confidence for problems that are similar in scope (same energy region, similar materials, etc.) to the benchmarks, because we have demonstrated that the software has the appropriate physics and material models.
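To make that concrete, here is a minimal sketch (in Python, with invented benchmark names and numbers) of the kind of quantitative comparison I mean: computing the C/E (calculated-over-experimental) ratio and the relative difference for the neutron multiplication factor, then checking it against the ~1% band.

```python
# Minimal sketch of a benchmark validation comparison.
# All benchmark names and numbers below are invented for illustration.

benchmarks = [
    # (name, calculated k_eff, measured k_eff, experimental uncertainty)
    ("bare-sphere-1", 0.99823, 1.00000, 0.0010),
    ("reflected-2",   1.00241, 1.00000, 0.0030),
]

for name, calc, meas, unc in benchmarks:
    ce = calc / meas                 # C/E ratio; 1.0 means perfect agreement
    rel_diff = (calc - meas) / meas  # signed relative difference
    within_band = abs(rel_diff) < 0.01  # the ~1% band quoted above
    print(f"{name}: C/E = {ce:.5f}, diff = {rel_diff:+.3%}, "
          f"exp. unc. = {unc:.4f}, within 1%: {within_band}")
```

The point is that the comparison produces numbers, against stated acceptance bands, for quantities that were physically measured.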
So, here is an excerpt from the abstract of an article: "Using quantitative proxy models of peat and bauxite formation, based upon modern analogues, predictions of the distribution of peats (coals) and bauxites for both the Late Jurassic (Kimmeridgian) and mid-Cretaceous (Cenomanian) have been made using a General Circulation Model (GCM)." After saying to myself, "That's a load of BS!", I decided to write down why this is such a poor way to do computer modeling and why it is NOT validation.
Here is a clause-by-clause translation of the abstract from BS-ese to English:
- "Using quantitative proxy models of peat and bauxite formation ..." --> I took numbers from somebody's (another graduate student in my research group) model of peat and bauxite formation. I won't tell you where the numbers came from or how that model was constructed because the graduate student is a computer science major and knows absolutely nothing about geology.
- "..., based upon modern analogues, ..." --> We really don't know how these formations develop, we can't observe their formation happening anywhere today, but my collaborator at another university has a guess at something that supports my argument in this paper. Most people will just read right past this clause anyway.
- "...predictions of the distribution of peats (coals) and bauxites for both the Late Jurassic (Kimmeridgian) and midCretaceous (Cenomanian) have been made using a General Circulation Model (GCM)." --> I know we use GCMs to model the atmosphere and oceans and make wild guesses about climate, but I am going to show you where to find coal and aluminum ore using these magic codes. If you bought the load in the first couple of clauses, I bet that I can sell you the even bigger load to finish the sentence.
I will definitely admit to a general bias against climate models because I am unsure how much of what they do is truly science as opposed to computer science. For the sake of discussion, let's say that it is science. Will someone please tell me the "world class" aspect of this paper? I KNOW that Sandia does world-class science and engineering (examples), so I am pretty sure that I would recognize when something is in that category. If a co-worker brought this to me at Sandia as part of our internal peer review, it would get a "Not Recommended for Publication" rating. Put another way, the article falls well below the minimum information (science quota) that I would recommend for publication and contributes to the decrease that I mentioned in For Starters. The sad thing is that the authors now have this article as a peer-reviewed publication (in the Journal of the Geological Society) on their CVs.
In summary, this article is NOT validation of any kind, because the results are never compared to something that we can physically measure. At best, it is a verification that the authors' software contains the same general physics models as another computer code that models coal and aluminum ore formation. This is a good example of what Steven Milloy calls "PlayStation Climatology."
To those of you who know me well enough to have pushed my global warming button, I hope that this post provides some insight into my cynicism about, and lack of concern over, the dire predictions that surface every few days. I don't want to put our collective economic futures in the hands of people who do "science" in this way. I am going to end with a note to Walt and Annie: I think this is the type of stuff you guys wanted me to write about. If it's not, leave me a comment telling me where I went astray, along with suggestions for other posts.
5 comments:
"I am unsure how much of what they do is truly science as opposed to computer science."
Please! Computer Science can be a perfectly rigorous topic (Turing, Shannon, Dijkstra, et al.). You probably mean 'playing with a spreadsheet'.
"I took numbers from somebody's (another graduate student in my research group) model of peat and bauxite formation. I won't tell you where the numbers came from or how that model was constructed"
They give two references to the models they use: Price et al. (1997) and Lottes & Ziegler (1994). Are you claiming that a graduate student updated those models and didn't document the updates? Who is the grad student?
"because the graduate student is a computer science major and knows absolutely nothing about geology."
How do you know this? Do you know the group members in question?
Beyond that, you haven't really made a point. Yeah, duh, when you can't control the environment the way you can in engineering experiments, your predictions aren't as reliable. That doesn't mean you can't formulate scientific hypotheses or test their predictions. Your rant basically amounts to, "We can't observe geological processes occurring in real time today, therefore we know nothing about geology," which is frankly absurd.
In particular, your second and third points are nothing but snark.
Now, in this case I agree that their model doesn't have good skill, given their results in Figs. 1 & 2. Their predictions range from "so-so" to "poor". But you have to look at the results to tell that, not just sneeringly dismiss their abstract.
"In summary, this article is NOT validation of any kind because the results are never compared to something that we can physically measure."
They compare predicted locations of peat and bauxite formation with field measurements of peat and bauxite distribution.
(The authors claim "good correspondence", but it doesn't look like it to me.)
PI,
No special knowledge of publication in academic environments is needed for the translation that I gave. Do you know for a fact that my translation is incorrect?
Nice try, but model-to-model comparison is specifically ruled out as validation. It is, AT BEST, a verification exercise (see "On the Role of Code Comparisons in Verification and Validation" by Trucano, Pilch, and Oberkampf, SAND2003-2752; contact me if you can't get the electronic report, as it is an unlimited-release document and has been cited in many journal articles).
They did NOT compare to the actual geologic distributions of peat and bauxite formations. They compared to someone's model of them. Also, they did a "vu-graph" comparison, as you allude to in your second comment.
What are the metrics that they selected for comparison? What were the criteria for acceptance? They say there is "good correspondence," but they give no metrics for judging that statement (and, for that matter, neither do you in your disagreement with their assessment). In both cases it is simply an opinion, with no way to scientifically evaluate the claim.
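To make the point concrete, here is a minimal sketch (with invented presence/absence data, and the Critical Success Index as just one possible choice of metric) of what a quantitative comparison and acceptance criterion could look like for this kind of prediction:

```python
# Hypothetical sketch of one possible acceptance metric for a gridded
# presence/absence prediction ("peat forms in this cell": yes/no).
# The data and threshold are invented; the Critical Success Index
# (threat score) is just one standard categorical skill metric.

predicted = [1, 1, 0, 0, 1, 0, 1, 0]  # model output per grid cell
observed  = [1, 0, 0, 1, 1, 0, 0, 0]  # field observations per grid cell

hits         = sum(1 for p, o in zip(predicted, observed) if p and o)
misses       = sum(1 for p, o in zip(predicted, observed) if not p and o)
false_alarms = sum(1 for p, o in zip(predicted, observed) if p and not o)

csi = hits / (hits + misses + false_alarms)  # 1.0 = perfect, 0.0 = no skill
print(f"CSI = {csi:.2f}")  # an acceptance criterion might be, say, CSI >= 0.6
```

With a stated metric and threshold like that, "good correspondence" becomes a testable claim instead of an opinion.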
You're right, though. My comments were snarky, but it is my blog, and I intended them to be that way. I don't feel that this type of "science" should be used to justify the confiscation of my hard-earned dollars in the form of carbon taxes.
--------
Dodgy,
There are tremendous things done in computer science every day. We rely on it to make our radiation transport codes solve problems that we couldn't even dream of solving 20 years ago.
I have had the problem of working with some electrical-circuit modeling code developers who would work on the computer science aspects of the problem (parallel implementation and generic equation solvers) rather than on the physics involved. Based on that experience, I have some deep-seated bad feelings about multi-physics code developers who delve into computer science. They are usually either good physicists or good computer scientists; it is actually quite difficult to find someone who does both well. For example, I write code that solves the problems I need to solve, but I would never claim to understand the deep aspects of the computer science algorithms for parallel implementation, etc.
The computer science is necessary, but if the physics being implemented is incorrect, it doesn't matter how efficiently the problem is solved.
--------
Thanks to you both for commenting.
Here is the link for the Trucano et al. reference in my previous comment:
http://www.osti.gov/bridge/servlets/purl/918244-tbRwVC/918244.PDF