
Thursday, October 6, 2011

Kudos to Princeton for an open-access policy

It seems that Princeton has adopted an open-access policy for the papers their faculty publish.
...each Faculty member hereby grants to The Trustees of Princeton University a nonexclusive, irrevocable, worldwide license to exercise any and all copyrights in his or her scholarly articles published in any medium, whether now known or later invented, provided the articles are not sold by the University for a profit, and to authorize others to do the same.... The University hereby authorizes each member of the faculty to exercise any and all copyrights in his or her scholarly articles that are subject to the terms and conditions of the grant set forth above.

The legalese is making my head spin, but I think they are saying that the university gets full access to all faculty publications, and that the university in turn grants all faculty full access to their own publications. As a programmer, I yearn to write it in a simpler way, and probably to drop the "for a profit" part. Still, the spirit is there. Anything published by a Princeton faculty member will not be hidden exclusively behind a paywall.

Hat tip to Andrew Appel, who emphasizes that this policy is compatible with ACM's paywall:
Most publishers in Computer Science (ACM, IEEE, Springer, Cambridge, Usenix, etc.) already have standard contracts that are compatible with open access. Open access doesn't prevent these publishers from having a pay wall, it allows other means of finding the same information.

This is true, but I find it too gentle on the ACM. The ACM is supposed to be the preeminent association of computer-science researchers in the United States. They would better serve their members, not to mention science, by making their articles open access. Charge the authors, not the readers.

Tuesday, August 23, 2011

A Scientist's Manifesto?

I was disheartened today to read so many glowing reviews of Robert Winston's "Scientist's Manifesto".

I think of science as a way to search for knowledge. It involves forming explanations, making predictions based on those explanations, and then testing whether those predictions hold true. Scientists make claims, and they fight for those claims using objective evidence from repeatable experiments.

Winston promotes an alternative view of science: that scientists are members of a special inner circle. They've gone to the right schools, they've gone through the right processes, and they've had their work reviewed by the right senior scientists. Essentially, they are priests of a Church of Science. His concern is then with how members of this church communicate with the outside.

If that sounds overblown, take a look at item one in the 14-item manifesto. It even uses the term "layperson":
We should try to communicate our work as effectively as possible, because ultimately it is done on behalf of society and because its adverse consequences may affect members of the society in which we all live. We need to strive for clarity not only when we make statements or publish work for scientific colleagues, but also in making our work intelligible to the average layperson. We may also reflect that learning to communicate more effectively may improve the quality of the science we do and make it more relevant to the problems we are attempting to solve.

Aside from its general thrust, I disagree with many of the individual items. For example, I think of scientists interested in a topic as conferring with each other through relatively specialized channels. Thus item three is odd to me:
The media, whether written, broadcast or web-based, play a key role in how the public learn about science. We need to share our work more effectively by being as clear, honest and intelligible as possible in our dealings with journalists. We also need to recognize that misusing the media by exaggerating the potential of what we are studying, or belittling the work of other scientists working in the field, can be detrimental to science.

Of course, it makes perfect sense if you think of science as received wisdom that is then propagated to the masses.

I also think of science as seeking objective truth. I can't really agree with the claim that it is relative:
We should reflect that science is not simply ‘the truth’ but merely a version of it. A scientific experiment may well ‘prove’ something, but a ‘proof’ may change with the passage of time as we gain better understanding.

I don't even think peer review is particularly scientific. The main purpose of peer review is to provide a mechanism for measuring the performance of academics. In some sense it measures how much other academics like you. Yet, item 8 in the manifesto claims that peer review is some sacred process that turns ordinary words into something special, much like the process behind a fatwa:
Scientists are regularly called upon to assess the work of other scientists or review their reports before publication. While such peer review is usually the best process for assessing the quality of scientific work, it can be abused....



I have an alternative suggestion to people who want the public to treat scientists with esteem. Stop thinking of yourself as a priest, evangelist, or lobbyist trying to propagate your ideas. Instead, remember what it is that's special about your scientific endeavors. Explain your evidence, and invite listeners to repeat crucial parts of the experiment themselves. Don't just tell people you are a scientist. Show them.

Wednesday, August 10, 2011

Schrag on updating the IRBs

Zachary Schrag has a PDF up on recent efforts to update IRBs. Count me in as vehemently in favor of two of the proposals that are apparently up for discussion.

First, there is the coverage of fields that simply don't have the human risks that medical research does:
Define some forms of scholarship as non-generalizable and therefore not subject to regulation. As noted above, the current regulations define research as work “designed to develop or contribute to generalizable knowledge.” Since the 1990s, some federal officials and universities have held that journalism, biography, and oral history do not meet this criterion and are therefore not subject to regulation. However, the boundaries of generalizability have proven hard to define, and historians have felt uncomfortable describing their work as something other than research.

I would add computer science to the list. A cleaner solution is as follows:
Accept that the Common Rule covers a broad range of scholarship, but carve exceptions for particular methods. Redefining “research” is not the only path to deregulation contemplated by the ANPRM, so a third possibility would be to accept Common Rule jurisdiction but limit its impact on particular methods.

Schrag's PDF gives limited attention to this option, but it seems the most straightforward to me. If a research project involves interviews, studies, or workplace observations, then it just shouldn't need ethics review. The potential harms are so minor that it should be fine to follow up on reports rather than to require ahead-of-time review.


Schrag also takes aim at exempt determinations:
Since the mid-1990s, the federal recommendation that investigators not be permitted to make the exemption determination, combined with the threat of federal sanction for incorrect determinations, has led institutions to insist that only IRB members or staff can determine a project to be exempt. Thus, “exempt” no longer means exempt, leaving researchers unhappy and IRBs overwhelmed with work.

Yes! What kind of absurd system declares a project exempt from review but then requires a review anyway?

Wednesday, May 25, 2011

Regehr on bounding the possible benefits of an idea

John Regehr posted a good thought on bounding the possible benefits of an idea before embarking on weeks or months of development:
A hammer I like to use when reviewing papers and PhD proposals is one that (lacking a good name) I call the “squeeze technique” and it applies to research that optimizes something. To squeeze an idea you ask:
  • How much of the benefit can be attained without the new idea?
  • If the new idea succeeds wildly, how much benefit can be attained?
  • How large is the gap between these two?

I am not sure how big of a deal this is in academia. If you are happy to work in 2nd-tier or lower schools, then you probably need to execute well rather than to choose good ideas. However, it's a very big deal if you want to produce a real improvement to computer science.

The first item is the KISS principle: keep it simple, stupid. Given that human resources are usually the most tightly constrained, simple solutions are very valuable. Often doing nothing at all will already work out reasonably well. Trickier: there is often a horribly crude solution to a problem that will work rather effectively. In such a case, be crude. There are better places to use your time.

The second item is sometimes called a speed of light bound, due to the speed of light being so impressively unbeatable. You ask yourself how much an idea could help even if you expend years of effort and everything goes perfectly. In many cases the maximum benefit is not that high, so you may as well save your effort. A common example is in speeding up a system. Unless you are working on a major bottleneck, any amount of speedup will not help very much.
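As a concrete illustration of the speed-of-light bound, here is a back-of-the-envelope calculation (my own sketch and my own numbers, not Regehr's). If the code you plan to optimize accounts for a fraction f of total running time, then even an infinitely fast replacement cannot improve the whole system by more than 1/(1-f).

```typescript
// Amdahl-style bound: the untouched (1 - f) fraction of the work still has to
// run, no matter how much the optimized part improves.
function maxOverallSpeedup(fractionOptimized: number): number {
  return 1 / (1 - fractionOptimized);
}

console.log(maxOverallSpeedup(0.10)); // ~1.11x at best if the target is 10% of runtime
console.log(maxOverallSpeedup(0.50)); // 2x at best if the target is half the runtime
```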

Thursday, May 19, 2011

IRBs under review

There are several interesting blog entries up at blog.bioethics.gov concerning the ongoing presidential review of Institutional Review Boards.

I liked this line:
“We pushed for an ethical reform of system, real oversight, and now we are left with this bureaucratic system, really a nitpicking monster,” Arras said, addressing Bayer. “And I am as stupefied as you are.”

I am not sure why this pattern would be stupefying. A great many things that people attempt to do don't work out as intended. IRBs are just one more for the list, albeit one that has lingered for decades.

I am not as sanguine as the reviewers about this conclusion on whether another "Guatemala" could happen:
“Of the many things that happened there, no, it could not happen again because of informed consent,” said Dafna Feinholz, chief of the Bioethics Section, Division of Ethics and Science and Technology, Sector for Social and Human Sciences, United Nations Educational, Scientific and Cultural Organization.

The idea is that since IRBs require informed consent of study participants, the Guatemala experiments could never again happen, because the study participants would know what is going on.

I hope so, but consider the following evil scenarios:
  • A wealthy autodidact negotiates directly with local authorities and runs the experiment on his own dime. No university is involved, so no IRB review even happens.
  • A university researcher learns about a disease outbreak in some part of the world. The researcher waits two years and then applies for a research grant to study the effects of the disease. Since the researcher did nothing for the first two years, there was nothing for the IRB to review.
  • Professor Muckety, the Hubert OldnDusty Chair at BigGiantName University, announces a grand new experiment that he expects will cure cancer. He invites all of the up-and-coming faculty in his area to take part, and there will be numerous papers and great acclaim for all the participants. The IRB at BigGiantName U. is stacked with faculty who are totally brainwashed into thinking the experiment is for the greater good. Will they really take a stand against the project?
Despite all the efforts of IRBs, I would not be so sure that an evil experiment couldn't happen again.

Whenever something goes wrong, there is a natural reaction for everyone to yell, "DO SOMETHING!" IRBs are the result of such an outcry. They are there to protect human subjects, but I don't believe they are very effective at that. I believe that the MucketyMucks largely breeze through the red tape doing whatever they like, and instead we are staffing a bunch of bureaucrats to check that the smaller players filed form T19-B in triplicate, double spaced and typed on a manual typewriter.

Carving out a large exempt category would be a big improvement on the current mess. Surveys, observations, and other experiments with minimal opportunity for harm shouldn't need prior review.

Tuesday, May 10, 2011

Externally useful computer science results

John Regehr asks what results in computer science would be directly useful outside the field. I particularly like his description of his motivation:
An idea I wanted to explore is that a piece of research is useless precisely when it has no transitive bearing on any externally relevant open problem.
A corollary of this rule is that the likelihood of a research program ever being externally useful is exponentially decreased by the number of fundamental challenges to the approach. Whenever I hear about a project relying on synchronous RPC, my mental estimate of likely external usability goes down tremendously. As well, there is the familiar case of literary deconstruction.

Regehr proceeds from here to speculate on what results in computer science would truly be useful. I like most of Regehr's list--go read it! I would quibble about artificial intelligence being directly useful; it would be better to be more specific. Is Watson an AI? It's not all that much like human intelligence, so perhaps it's not really AI, but it is a real tour de force of externally useful computer science.

One thing not on the list is better productivity for software developers, including tools, programming languages, and operating systems. When software developers get more done, more quickly, more reliably, anything that includes a computer can be built more quickly and cheaply.

Monday, April 25, 2011

Types are fundamentally good?

Once in a while, I encounter a broad-based claim that it's fundamentally unsound to doubt the superiority of statically typed programming languages. Bob Harper has recently posted just such a claim:
While reviewing some of the comments on my post about parallelism and concurrency, I noticed that the great fallacy about dynamic and static languages continues to hold people in its thrall. So, in the same “everything you know is wrong” spirit, let me try to set this straight: a dynamic language is a straightjacketed static language that affords less rather than more expressiveness.

Much of the rest of the post then tries to establish that there is something fundamentally better about statically typed languages, so much so that it's not even important to look at empirical evidence.

Such a broad-based claim would be easier to swallow if it weren't for a couple of observations standing so strongly against it. First, many large projects have successfully been written without a static checker. An example would be the Emacs text editor. Any fundamental attack on typeless heathens must account for the fact that many of them are not just enjoying themselves, but successfully delivering on the software they are writing. The effect of static types, whether it be positive or negative, is clearly not overwhelming. It won't tank a project if you choose poorly.

A second observation is that in some programming domains it looks like it would be miserable to have to deal with a static type checker. Examples would be spreadsheet formulas and the boot scripts for a Unix system. For such domains, programmers want to write something quickly and then have the program do its job. A commonality among these languages is that live data is often immediately available, so programmers can just as well run the code on live data as fuss around with a static checker.

Armed with these broad observations from the practice of programming and the design of practical programming languages, it's easy to find problems with the fundamental arguments Harper makes:
  • "Types" are static, and "classes" are dynamic. As I've written before, run-time types are not just sensible, but widely used in the discussion of languages. C++ literally has "run-time type information" (RTTI). JavaScript, a language with no static types, has a "typeof" operator whose return value names the run-time type. And so on. There are differences between run-time types and static types, but they're all still "types".
  • A dynamic language can be viewed as having a single static type for all values. While I agree with this, it's not a very useful point of view. In particular, I don't see what bearing this single "unitype" has on the regular old run-time types that dynamic languages support.
  • Static checkers have no real restrictions on expressiveness. This is far from the truth. There has been a steady stream of papers describing how to extend type checkers to check fairly straightforward code examples. In functional languages, one example is the GADTs needed to type check a simple AST interpreter. In object-oriented languages, the lowly flatten method on class List has posed difficulties, because it's an instance method on class List but it only applies if the list's contents are themselves lists (see the sketch after this list). More broadly, well-typed collection libraries have proven maddeningly complex, with each new solution bringing with it a new host of problems. All of these problems are laughably easy if you don't have to appease a static type checker.
  • Dynamic languages are slow. For some reason, performance always pops up when a computer person argues a position that they think is obvious. In this case, most practitioners would agree that dynamic languages are slower, but there are many cases where the performance is perfectly fine. For example, so long as Emacs can respond to a key press before I press the next key, who cares if it took 10 microseconds or 100 microseconds? For most practical software, there's a threshold beyond which further performance is completely pointless.
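To make the flatten point concrete, here is a rough TypeScript sketch. It is my own illustration, not Harper's example and not any particular library's API; the class and method names are made up. A plain method declaration cannot say "flatten exists only when the element type is itself a list," so the workaround is a special-purpose constraint on the type of this, which is exactly the kind of extra machinery the point above is complaining about.

```typescript
// A minimal, hypothetical List class showing why an instance-level flatten is
// awkward to type statically.
class List<T> {
  constructor(private items: T[]) {}

  toArray(): T[] {
    return [...this.items];
  }

  // flatten only makes sense when T is itself a List<U>. An ordinary method
  // signature cannot express that, so we constrain the type of `this`.
  flatten<U>(this: List<List<U>>): List<U> {
    const out: U[] = [];
    for (const inner of this.toArray()) {
      out.push(...inner.toArray());
    }
    return new List(out);
  }
}

const nested = new List([new List([1, 2]), new List([3])]);
const flat: List<number> = nested.flatten(); // fine: the elements are lists

// new List([1, 2, 3]).flatten();
// ^ compile error: `this` is a List<number>, not a List of Lists
```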

Overall, type checkers are wonderful tools. However, any meaningful discussion about just how they are wonderful, and to what extent, needs to do a couple of things. First, it needs some qualifiers about the nature of the programming task. Second, it needs to rest on at least a little bit of empirical data.

Wednesday, March 30, 2011

A computer science journal with open access

Since writing that economics has an online open-access journal, I've been informed that computer science has at least one: Logical Methods in Computer Science. Mea culpa.

Perhaps the ACM can follow their lead.

Monday, March 28, 2011

Economics now has an online journal

Economics joins the ranks of fields with an open-access online journal. All papers are free for download. They've even posted all the old papers back to 1970. Feel free to click through and browse around. There's no registration or charge.

The next best thing to open access is preprint archives, the most prominent of which is ArXiv (pronounced "archive"). ArXiv is infrastructure to upload papers that are usually also submitted to a journal or conference. I first heard of preprint archives as used among physics researchers. Physics is a natural enough field to kick this off, considering that physicists built the World Wide Web to host their papers. Physicists publish in journals that have long review and publication delays, on the order of 6-12 months, and they seem to have realized that a 6-12 month ping time is not good for a group conversation.

I applaud BPEA going open access, and I wish computer science proceedings would do the same thing. Currently, all the American conferences are published through the ACM digital library, which has a dizzying array of subscription plans designed to maximize profit. The current model in computer science is that CS research is IP for the ACM to sell for profit, much like a CD or a DVD. I would prefer a model where CS research is meant to advance science. Paywalls have a damping effect on discussion, and science without discussion isn't really science at all.

Wednesday, March 23, 2011

Robin Hanson against IRBs

Robin Hanson makes the case against Institutional Review Boards for research on human subjects:
IRBs seem a good example of concern signaling leading to over-reaction and over-regulation. It might make sense to have extra regulations on certain kinds of interactions, such as giving people diseases on purpose or having them torture others. But it makes little sense to have extra regulation on researchers just because they are researchers. That mainly gets in the way of innovation, of which we already have too little.

I agree with Robin. Mistreatment of fellow humans should certainly be stopped. However, why should academic researchers have to go before a board any time they want to interact with humans, just because they are researchers?

For the majority of legal responses that our society makes, the approach we take is that people act first and then, if there is wrongdoing, the legal system follows up. For example, you don't get interviewed before you buy a gallon of gas. You get interviewed after a house burned down with your car parked outside of it. You don't go before a board before you grade a stack of papers. You go before a board after it is rumored that you told people what other people's grades were. Prior review is stifling.

People who defend IRBs probably assume that they will apply a large dose of common sense about what is dangerous and what is not. For example, surely the IRBs for an area like computer science will simply green light research all day long. In practice, it seems they look for work to do to justify their budgets. Witness the treatment of "exempt" research, where IRBs that have the manpower to do so tend to require review even of "exempt" research projects.

I can only speculate why such a useless and harmful institution persists, but a big part of my guess is that Robin's signalling explanation is correct. If you are the president of a university, could you ever take a stand against IRBs? Such a stand would have the appearance of signalling that you are soft on protection of humans. I wish that people would pay less attention to signals and more attention to results. Pay less attention to how many institutes, regulations, and vice presidencies have been created, and pay more attention to exactly how a university is treating the people it draws data from.

Friday, February 25, 2011

External validation

Robin Hanson points to an article by Vladimir M about when you can trust the results of an academic field.
When looking for information about some area outside of one’s expertise, it is usually a good idea to first ask what academic scholarship has to say on the subject. In many areas, there is no need to look elsewhere for answers: respectable academic authors are the richest and most reliable source of information, and people claiming things completely outside the academic mainstream are almost certain to be crackpots.

Robin's angle is Bayesian. He argues that we should trust academics by default, because they are experts, but that we should adjust our level of belief if the field is ideologically driven:
However, let us accept for the sake of argument that all else equal in ideological fields intellectual progress is slower, and claims tend to be made with more overconfidence. What exactly would this imply for your beliefs about this area?

I have a rather different perspective, and several of the commenters seem to agree. I think of academia as a club of intellectuals, but not one that is always motivated by the search for objective truth. Many academics don't seem to be bothered with objective truth at all, and are more interested in being at the forefront of some movement. As one commenter points out, such academics are more like lawyers advocating a cause than experts seeking the truth.

A better way to find experts in a field is to look for external validation. Look for people and ideas that have stood some test that you don't have to be an expert to verify. You don't have to know much about artificial intelligence to know that IBM is good at it, because they've proven it with their Watson Jeopardy player.

Thursday, December 9, 2010

Published literature as fencing?

"Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated," wrote Felisa Wolfe-Simon of the NASA Astrobiology Institute. "The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner."
Felisa Wolfe-Simon is responding here to attacks on a paper she recently published. This is a widely held view, that science takes place on some higher plane of discourse. In this view, ordinary speech is not enough to move the discussion forward. You must go through the publication process just in order to state your counter-argument. Science then progresses by an exchange of one high-minded paper after another.

Hogwash. This romantic picture has no relation to science in the fields I am familiar with.

A killer mismatch between this picture and reality is that counter-arguments are not publishable. If someone publishes the results of a horribly botched experiment, it would serve science to dissect that experiment and show the problem. However, there aren't any peer-reviewed journals to publish it in. If you take the quoted stance seriously, then you must believe it's not proper to criticize published research at all.

A second mismatch is that, in the fields I am familiar with, nobody in the field learns a major new result through the publication process. When someone has a new idea, they talk to their colleagues about it. They speak at seminars and workshops. They write messages to mailing lists about it. They recruit students to work on it, and students post all over the place. Everyone knows what everyone is working on and the way they are doing it. Everyone knows the new ideas long before they have any evidence for them, and they learn about the new pieces of evidence pretty quickly as well.

Researchers debate all right, but not via publication. They email lists. They write each other. They give public speeches attacking each other's ideas. Others in the field do all of the same, and they are often more convincing due to being less invested in the conclusions.

In short, declining to participate in discussions outside the publication process is often presented as some sort of high ground. This is a backwards and dangerous notion. It means that you are not defending your ideas in the forums that convince the experts.

Friday, October 29, 2010

Scientific medicine

Thorfinn of Gene Expression has a great post up on the difficulty of generating knowledge, even in a relatively hard science like medicine:
Doctors believe in breaking fevers, though there is no evidence that helps. Flu shots also don’t seem to work. I’ve also mentioned how ulcers came to be declared a disease due to “stress”, when in fact they were clearly due to bacterial infection. Meanwhile, several large-scale tests of medicine use — from the RAND insurance study, or the 2003 Medicare Drug expansion — find minimal evidence that more medicine leads to better health.
[...]
I think our body of medical knowledge does illustrate how hard it can be to generate reliable knowledge, even in cases when we can easily run numerous experiments on a randomized basis.

The softer sciences envy the hard sciences. Their researchers envy how reliable the experimental results are in a physics or chemistry experiment. In the hard sciences, it's possible to run experiments where all of the relevant variables are controlled. Further, the models are simple enough that there isn't a host of alternative models that can explain any experiment. For example, if your theory is that the acceleration due to gravity is the same for all masses of objects, and your experiment is consistent with that theory, it's hard to come up with any simpler theory that would explain the same thing. "It doesn't matter" is already as simple as it gets.

I spent a lot of time with the Learning Sciences group at Georgia Tech. While they put an admirably high effort into careful experimental validation of their tools, methods, and theories, they were quite frank that the experimental data were hard to draw inferences from. They could describe a situation, but they couldn't reliably tell you the why of a situation.

The problem is that even with randomized trials, there are so many variables that it's hard to draw any strong conclusions. There is always a plausible explanation based on one of the uncontrolled variables. For learning sciences, a particularly troublesome variable is the presence of an education researcher in the process. Students seem to always do better when there's an experimenter present. Take away the experimenter, and the whole social dynamic changes, and that has a bigger effect than the particular tool. Seymour Papert's Mindstorms is a notorious example. Papert paints a beautiful picture of students learning deep things in his Logo-based classrooms, a picture that has inspired large numbers of educators. I highly recommend it to any would-be teacher. However, nobody can replicate exactly what he describes. It seems you need Papert, not just his tools, and Papert is darned hard to emulate.

All too often we focus on a small effect that is dwarfed by the other variables. The teacher, the software engineer, and the musician are more important than the tools. In how many other areas of knowledge have we fallen into this trap? We ask a question that seems obviously the one to ask--Logo, or Basic? Emacs, or vi? Yet, that question is framed so badly that we are doomed to failure no matter how good our experiments are. We end up comparing clarinets to marimbas, and from that starting point we'll never understand harmony and rhythm.

Saturday, June 5, 2010

Evidence from successful usage

One way to test an engineering technique is to see how projects that tried it have gone. If the project fails, you suspect the technique is bad. If the project succeeds, you suspect the technique is good. It's harder than it sounds to make use of such information, though. There are too few projects, and each one has many different peculiarities. It's unclear which peculiarities led to the success or the failure. In a word, these experiments are natural rather than controlled.

One kind of information does shine through from such experiments, however. While they are poor at comparing or quantifying the value of different techniques, they at least let us see which techniques are viable. A successful project requires that all of the techniques used are at least tolerable, because otherwise the project would have fallen apart. Therefore, whenever a project succeeds, all the techniques it used must at least be viable. Those techniques might not be good, but they must at least not be fatally bad.

This kind of claim is weak, but the evidence for it is very strong. Thus I'm surprised how often I run into knowledgeable people saying that this or that technique is so bad that it would ruin any project it was used on. The most common example is that people love to say dynamically typed languages are useless. In my mind, there are too many successful sites written in PHP or Ruby to believe such a claim.

Even one successful project tells us a technique is viable. What if there are none? This question doesn't come up very often. If a few people try a technique and it's a complete stinker, they tend to stop trying, and they tend to stop pushing it. Once in a while, though....

Once in a while there's something like synchronous RPC in a web browser. The technique certainly gets talked about. However, I've been asking around for a year or two now, and I have not yet found even one good web site that uses it. Unless and until that changes, I have to believe that synchronous RPC in the browser isn't even viable. It's beyond awkward. If you try it, you won't end up with a site you feel is launchable.
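For concreteness, here is roughly what synchronous RPC in a browser looks like, sketched with a made-up endpoint. The real mechanism is the third argument to open(): passing false makes the call blocking, so the page freezes until the server responds.

```typescript
// Synchronous XMLHttpRequest: the browser's UI thread blocks inside send()
// until the response arrives. The /api/greeting endpoint is hypothetical.
function fetchGreetingSync(name: string): string {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", `/api/greeting?name=${encodeURIComponent(name)}`, false); // false => synchronous
  xhr.send();
  if (xhr.status !== 200) {
    throw new Error(`request failed with status ${xhr.status}`);
  }
  return xhr.responseText;
}
```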

Monday, January 18, 2010

Food science

Food science is the best kind. Check out John Weathers' double-blind test of.... Earl Grey teas. You don't need a lab coat to be a scientist. Study something you love and try to find out the truth.

Tuesday, November 24, 2009

Players make poor referees

The recently leaked emails between Phil Jones and Michael Mann raise a number of issues about scientific progress. I'd like to address two of them.

As background, the emails are between major researchers and activists in the climate change debate. Here is a sample of what has observers excited:
In one e-mail, the center's director, Phil Jones, writes Pennsylvania State University's Michael E. Mann and questions whether the work of academics that question the link between human activities and global warming deserve to make it into the prestigious IPCC report, which represents the global consensus view on climate science.

"I can't see either of these papers being in the next IPCC report," Jones writes. "Kevin and I will keep them out somehow -- even if we have to redefine what the peer-review literature is!"


Here we see two people influential with the IPCC conspiring to eject papers that conflict with their preferred conclusions. As a result, we cannot trust the IPCC to give a balanced summary of the outstanding research, which undermines the very thing the IPCC claims to do.

What to make of it? What I'd like to emphasize in this post is that it's not bad, by itself, that Jones and Mann are taking sides. The problem is that they are trying to wear two hats, two hats that are particularly incompatible.

To make an analogy, think of scientific claims as sports teams. How do you find out whether a particular sports team is any good? Really, there's no other way than to field the team against other sports teams that are also believed to be good. No amount of bravado, no amount of popularity, is really going to convince an unbiased observer that the team is really good. Ultimately, it needs to play against good teams and win.

The tricky part is here: What counts as playing against a good team? To resolve this, sports have rules that are laid out to be as objective as possible, and they have referees adjudicate the games to make sure the rules are followed. Referees are monitored to make sure that they are applying the rules correctly and fairly, but since the rules are objective, this is a relatively straightforward task. The team players, meanwhile, can try a variety of strategies and techniques. It's hard to judge whether the strategies and techniques are good by themselves, but it's not hard at all to tell who won a fairly refereed sports game.

Bringing it back to science, if Jones and Mann are to be faulted, it's because they are claiming to act as referees even though they are actively taking sides. I don't know the particulars of how the IPCC is organized nor of what influence these two have in it, but it doesn't take a specialist to know that players make poor referees.

Friday, October 2, 2009

Science versus activism

The theories of scientific progress I have read involve attacking theories mercilessly and keeping the simplest ones that stand up. Thus, the rationale for ejecting Mitchell Taylor from the Polar Bear Specialist Group (PBSG) is detrimental to science:

I do believe, as do many PBSG members, that for the sake of polar bear conservation, views that run counter to human induced climate change are extremely unhelpful. [...] I too was not surprised by the members not endorsing an invitation.

Gee, I would think that, for the sake of polar bear conservation, it is important to learn the truth.

On further reading, however, I'm not sure the PBSG is a scientific organization. From skimming their web site, they sound more like a U.N. committee or an activist group. Such groups try to organize action, not to learn.