Monday, June 25, 2007

Physics or Social Justice

Almost a year ago I ran a piece entitled Math or Social Justice. That post described the trials and tribulations of someone with un-PC views trying to become a math teacher.

Recently I ran across a post from a UK physics teacher. His open letter shows what happens when the PC mindset invades the physics curriculum. The entire post is well worth the read, but here is a sample:
The Non-scientific:
Lastly, I present the final question on the January physics exam in its entirety:

Electricity can also be generated using renewable energy sources. Look at this information from a newspaper report.
  • The energy from burning bio-fuels, such as woodchip and straw, can be used to generate electricity.
  • Plants for bio-fuels use up carbon dioxide as they grow.
  • Farmers get grants to grow plants for bio-fuels.
  • Electricity generated from bio-fuels can be sold at a higher price than electricity generated from burning fossil fuels.
  • Growing plants for bio-fuels offers new opportunities for rural communities. 
Suggest why, apart from the declining reserves of fossil fuels, power companies should use more bio-fuels and less fossil fuels to generate electricity.

The only marks that a pupil can get are for saying:
  • Overall add no carbon dioxide to the environment
  • Power companies make more profit
  • Opportunity to grow new type of crop (growing plants in swamps)
  • More Jobs
None of this material is in the specification, nor can a pupil reliably deduce the answers from the given information. Physics isn’t a pedestrian subject about power companies and increasing their profits, or jobs in a rural community; it’s about far grander and broader ideas.

Conclusion:
My pupils complained that the exam did not test the material they were given to study, and they are largely correct. The information tested was not in the specification given to the teachers, nor in the approved resources suggested by the AQA board. When I asked AQA about the issues with their exam they told me to write a letter of complaint, and this I have done. But, rather than mail it to AQA to sit ignored on a desk, I am making it public in the hope that more attention can be brought to this problem.

At the high school level, science classes should lay a strong foundation so that students will later be able to reason about and research topics of importance. The Social Justice curriculum is much less concerned with laying a strong foundation in science than it is with co-opting the classroom discussion so that students come to the “right” conclusions NOW. In the case above, the process was so poorly thought out that the real agenda became obvious, but that may not always be the case. Educators should be wary of the intrusion of politics disguised as science (or religion disguised as science) into the classroom.

Thursday, November 30, 2006

AEI Conference on Black-White IQ Gap

Video, audio, slides, and relevant papers from the November 28, 2006, AEI conference on The Black-White IQ Gap: Is It Closing? Will It Ever Go Away? are available here.
For decades, the difference in the test scores of blacks and whites on the SAT, National Assessment of Educational Progress test, Armed Forces Qualification Test, and traditional IQ tests has been a vexed issue for American educational policy. Two of the leading scholars of this controversial topic, James R. Flynn of the University of Otago (New Zealand) and Charles Murray of AEI, will debate the causes of the difference, its implications, and recent trends. New studies of the subject by Professor Flynn and by Mr. Murray will be available for distribution at the session.

Rarely have I seen such a contentious issue discussed so civilly and scientifically. The conference left me with a lot of new information to think about.

Monday, November 27, 2006

Standards So Low a Caveman Could Meet Them

If 100 cavemen wanted to become high school mathematics teachers, how many could pass the licensure test? The answer appears below.

Teachers in core subject areas are required by the No Child Left Behind Act to prove they know the subject they are supposed to teach. NCLB gives broad guidelines as to what constitutes proof, but the details are left to the states. Most states require their new teachers to take a licensure test in the content area they plan to teach. Score above the state-defined cut-score on the appropriate licensure test and you have met your burden of proof.

How high to set these cut-scores is subject to debate. What is not debatable is that examinees with zero relevant content knowledge should not be able to pass, no matter how good their teaching skills. As Will Rogers put it: “You can’t teach what you don’t know, any more than you can come back from where you ain’t been.”

For secondary mathematics teachers, the Praxis II (10061) test is currently used by a majority of states for the purpose of proving sufficient mathematics content knowledge. The cut-scores vary widely.

I showed in a previous post that Colorado’s requirement was approximately equivalent to an examinee knowing 63% of the content on this high school level mathematics exam, whereas Arkansas’ standard is approximately equivalent to knowing just 20% of the content. Such extreme variation is already an indication that something is very wrong with how these state standards are set.

I say “approximately equivalent” because this equivalency assumes that the examinee takes the test only one time and has just average luck guessing on those questions he doesn’t know how to solve. However, in the real world, examinees who miss their state’s cut-off score can take the test an unlimited number of times. They are also encouraged to guess by a test format that does not penalize for incorrect answers. This situation makes it possible for examinees of much lower ability to (eventually) pass.

We can calculate the probability that an examinee with a given true ability level will pass in one or more attempts. The examinee’s true ability level is the percentage of questions he knows how to solve; this is the score he would get on a constructed-response exam, that is, an exam with no answer choices. On an exam with four answer choices per problem, like the Praxis II, an examinee will correctly answer this percentage of questions plus, with just average luck at guessing, a fourth of the remaining questions. However, some examinees will have above-average luck, as seen in the table below.

Probability of Passing the Praxis II in Arkansas

True Ability Level    Probability of Passing    Probability of Passing
                          in One Attempt           in Ten Attempts
        0%                     1.4%                      13%
        4%                     3.7%                      32%
        8%                     9.0%                      61%
       12%                    19.0%                      89%
       16%                    35.1%                      99%
       20%                    56.0%                    ≈100%
       24%                    76.9%                    ≈100%
       40%                   100.0%                     100%

Table 1. Probability of passing the mathematics licensure test in Arkansas for various true ability levels.
An examinee with a true ability level of 20% has a better than even chance of passing on the first attempt and is all but certain to pass in a few attempts. In this sense, the Arkansas standard is approximately equivalent to knowing 20% of the material (the 20% row of Table 1). This is an extraordinarily low standard given the content of this exam. (It is sometimes misreported as 40% because this standard requires correctly answering about 20 of the 50 questions. However, an examinee who knew how to solve just 10 problems would average another 10 correct by guessing on the remaining 40. He answered 40% correctly, but only knew how to solve 20%.)

However, with some luck, examinees with absolutely no relevant content knowledge can pass (the 0% row of Table 1). If 100 cavemen were to take this exam, up to ten times each, about 13 would pass. We are not talking about the brutish-looking but culturally sophisticated cavemen of the Geico commercials. We are talking about cavemen whose relevant content knowledge is limited to the ability to fill in exactly one circle per test question.
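The arithmetic behind Table 1 is easy to check. Here is a minimal sketch (my own Python, not anything from ETS) that reproduces the table up to rounding, assuming just what the text describes: a 50-question test, an Arkansas cut requiring about 20 correct, four choices per question, and independent attempts.

```python
# Minimal sketch reproducing Table 1. Assumptions (from the text):
# 50 questions, 20 correct needed to pass, 1-in-4 guessing on
# unknown questions, independent retakes. Function names are mine.
from math import comb

N_QUESTIONS = 50   # questions on the Praxis II (10061)
CUT = 20           # correct answers needed to pass in Arkansas
P_GUESS = 0.25     # four answer choices per question

def p_pass_once(true_ability):
    """Probability of passing in a single attempt, where true_ability is
    the fraction of questions the examinee actually knows how to solve."""
    known = round(true_ability * N_QUESTIONS)
    unknown = N_QUESTIONS - known
    needed = max(CUT - known, 0)
    # Binomial tail: at least `needed` lucky guesses among `unknown` questions.
    return sum(comb(unknown, k) * P_GUESS**k * (1 - P_GUESS)**(unknown - k)
               for k in range(needed, unknown + 1))

def p_pass_in(true_ability, attempts=10):
    """Probability of passing at least once in `attempts` independent tries."""
    return 1 - (1 - p_pass_once(true_ability)) ** attempts

for a in (0.00, 0.04, 0.08, 0.12, 0.16, 0.20, 0.24, 0.40):
    print(f"{a:4.0%}   {p_pass_once(a):6.1%}   {p_pass_in(a):6.1%}")
```

In particular, the caveman (0%) row is just the upper tail of a Binomial(50, 1/4) distribution evaluated at 20 or more correct.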

Now presumably such zero-content-knowledge examinees would never have graduated from college. Yet the fact that the standards are set this low suggests that some people of very low ability are managing to satisfy all the additional requirements and enter teaching.

Such extraordinarily low standards make a joke of NCLB’s highly qualified teacher requirements. They also make a joke out of teaching as a profession and are a slap in the face to all those teachers who could meet much higher standards.

Among the licensed professions, only teaching shows such enormous variation and such objectively low standards. (Even Colorado’s 63% would still be an ‘F’, or at best a ‘D’, if the Praxis II were graded like a final exam.) In contrast, architects, certified public accountants, professional engineers, land surveyors, physical therapists, and registered nurses are held to the same relatively high passing standards regardless of what state they are in.

How is it that these other professions can set arguably objective standards, while teachers cannot? The standards in other professions are set by professional societies. Their decisions are moderated by several concerns including the possibility of having members sued for malpractice.

For teachers, the standards are first set by a group of professionals, but their recommendations can be overridden by state educrats. The educrats are concerned largely with having an adequate supply of teachers. The entire process lacks any transparency, so we cannot tell the extent to which the educrats substituted their concerns for the professionals’ judgment about standards for teaching.

In shortage areas, like mathematics, low standards guarantee an adequate supply. It’s a lot less trouble for the educrats to simply lower standards than to proactively change incentives so that more academically able people might consider teaching math.

Something like NCLB’s requirement that teachers prove they have sufficient subject-matter content knowledge is clearly needed to prevent cavemen from teaching our kids math, but the Feds’ trust in the states to set these standards is not justified. Under NCLB, the perfect scorer and the lucky caveman are both indistinguishably “highly qualified.” Setting higher standards would force states to begin to face the elephant in the room: not enough is being done to attract mathematically talented people into teaching.

Monday, November 13, 2006

Reflection on the President’s Proposals

The president has proposed two new programs. One would train 70,000 high-school teachers to lead Advanced Placement courses in science and math. A second would bring 30,000 math and science professionals to teach in classrooms and give early help to struggling students.
... there was a specific concern about math and science scores. The President will build on the success of No Child Left Behind and propose -- to train 70,000 high school teachers to lead advance placement courses in math and science. We'll bring 30,000 math and science professionals to teach in classrooms and give early help to students who struggle in math so they have a better chance at good high-wage paying jobs. [Whitehouse Press Briefing]
The proposals themselves are a tacit admission that there continues to be something wrong with math and science education, despite the fact that the vast majority of math and science teachers are “highly qualified”. Calculus is part of the high school curriculum. A “highly qualified” mathematics teacher should be able to teach calculus without needing additional content training, yet that is where the training in this proposal seems to be targeted:
... provide incentives for current math, science and critical language teachers to complete content-focused preparation courses; [Expanding the Advanced Placement Incentive Program]
The second part of the proposal — putting math and science professionals in a classroom to help struggling students — presupposes that these professionals know how to help struggling students. Why should they? This is far more about pedagogy than it is about content knowledge. If their current teachers do not have the content knowledge to help them, why are they in the classroom?

Here’s a thought — wouldn’t it be better to have the professionals, who presumably understand the content, teach the Advanced Placement courses, and let the teachers, who presumably know how to help struggling students, help the struggling students?

A friend of mine recently resigned his teaching position at a local high school. Until this September he was a scientist (a physics Ph.D.) working in a research lab. He went the alternate route, passed his Praxis II tests by wide margins, and even had the benefit of some preservice training.

As in many American high schools, seniority plays a significant role in how teaching assignments are made. So rather than being assigned to teach high-level courses, where his superior content knowledge would have been a big plus, he was assigned to teach fairly low-level courses, where his lack of teacher training began to show. His students expected to be entertained more than taught. They did not expect to have to think. They began to rebel, and it was all downhill from there.

This school lost a potentially great teacher who was mis-assigned. Let’s hope the president doesn’t make the same mistake.

Saturday, August 19, 2006

Studies Prove

Thomas Sowell recently wrote a series of articles entitled “Studies Prove” (I, II and III). He gives examples, some from personal experience, of how stakeholders selectively use data that bolster their theories and suppress data that don’t. A few salient points:
It was a valuable experience so early in my career to learn that what "studies prove" is often whatever those who did the studies wanted to prove. ... it is a terminal case of naivete to put statistical studies under the control of the same government agencies whose policies are being studied.

Nor will it do any good to let those agencies farm out these studies to "independent" researchers in academia or think tanks because they will obviously farm them out to people whose track record virtually guarantees that they will reach the conclusions that the agency wants.
In part III, he discusses a study that “proved” the effectiveness of affirmative action policies at universities. However, the study’s authors would not release their raw data for scrutiny by others, including the distinguished Harvard professor Stephan Thernstrom, who has conducted some famous studies of his own. Prof. Sowell tells us of a similar experience he had:
Back in the 1970s, I tried to get statistical data from Harvard to test various claims about affirmative action. Derek Bok was then president of Harvard and he was the soul of graciousness, even praising a book on economics that I had written. But, in the end, I did not get to see one statistic.

During the same era I was also researching academically successful black schools. I flew across the country to try to get data on one school, talked with board of education officials, jumped through bureaucratic hoops -- and, after all this was done and the dust settled, I still did not get to see one statistic.

Why not? Think about it. Education officials have developed explanations for why they cannot educate black children. For me to write something publicizing outstanding academic results in this particular black school would be to open a political can of worms, leading people to ask why the other schools can't do the same.

Education bureaucrats decided to keep that can sealed.

Critics of affirmative action have long said that mismatching black students with colleges that they do not qualify for creates wholly needless academic failures among these students, who drop out or flunk out of colleges that they should never have been in, when most of them are fully qualified to succeed in other colleges.

Has the ending of preferential admissions in the University of California system and the University of Texas system led to a rise in the graduation rates of black students, as critics predicted? Who knows? These universities will not release those statistics. [Emphasis added]
One of the repeating themes of my posts is the plea to make as much data public as possible. For example, state boards of education and state colleges and universities have a wealth of data on how prospective teacher candidates perform on their licensure exams. Examination of this data could help explain why some states can set cut-scores 30 points higher (on a 100 point test) than others. But since this data might also be embarrassing as well as revealing, it is not available.

When I was soliciting data from the Educational Testing Service (ETS) for my investigations, it was made very clear that I could not have any disaggregated state level data. This restriction was a contractual obligation ETS had with the individual states that contracted for ETS’s services. Otherwise, ETS could hardly have been more gracious or cooperative.

Since policy advocacy often taints research, I have been interested to read the studies of an outfit that claims to be “non-aligned and non-partisan” — The Nonpartisan Education Review:
We are allied with neither side. We have no vested interest. Unlike the many allied education pundits and researchers who call themselves independent, we actually are. And, we prove it by criticizing both sides, though probably not nearly as much as they deserve.

The Nonpartisan Education Review’s purpose is to provide education information the public can trust.
One of their reports, which discussed how states cheat on student proficiency tests, was featured in my post History Lesson.

I found this article by Richard Phelps of particular interest. It serves as an introduction to the caveats of educational research. It begins:
Over a decade ago, I was assigned a research project on educational testing at my day job. I read the most prominent research literature on the topic, and I believed what I read. Then, I devoted almost two years to intense study of one subtopic. The project was blessed with ample resources. In the end, it revealed that the prevailing wisdom was not only wrong but the exact opposite of reality.
He then exhibits a long list of claims all of which are “either wrong or grossly misleading”.

So perhaps it shouldn’t have come as a big surprise to me that when states and the federal government want there to be an ample supply of “highly qualified” math and science teachers, their data will show that, abracadabra, they pop into existence, whether they really exist or not.

Thursday, August 10, 2006

The Highly Qualified Physical Science Teacher

What content knowledge is needed to be an effective science teacher? I began pondering this question when, by a quirk in NJ standards, I was required to take content knowledge tests in both physics and chemistry. Until 2004, NJ did not have separate chemistry and physics certifications. They only had physical science certification. This required knowledge of both chemistry and physics.

If one had trained to be such a combined physics/chemistry teacher then there would be no problem. However, NJ gets a substantial fraction of these science teachers through its alternate route programs. Typically such an alternate route candidate would have a background in chemistry or physics, but not both. Such was my case.

I know physics but not chemistry. I have advanced degrees in physics. My chemistry background consists of high school chemistry and one course in physical chemistry as a college freshman. That was more than 30 years ago. I have not had much contact with chemistry since. I have had no organic chem and no biochem — both of which are on the Praxis II test that NJ uses. In my opinion, I do not have the content knowledge necessary to teach high school chemistry (nor would I meet the current NJ requirement of at least 15 college credits).

If you have been following my earlier posts, you can guess how NJ got the physics people to pass chemistry and the chemistry people to pass physics. It just set very low standards. To earn physical science certification NJ required three tests. For physics, they used the one-hour Praxis II (10261) test of physics content knowledge, for chemistry the one-hour Praxis II (20241) test of chemistry content knowledge. [They also required a Praxis II test in General Science (10431) that includes biology.] The pre-2004 NJ cut-scores were a 119 for chemistry (19% of the scaled points possible) and a 113 for physics (13% of the scaled points possible).

How low these scores are was put into perspective for me by my own performance. I surpassed the chemistry cutoff by more than 60 points. This was my moment of enlightenment: something was seriously wrong if my level of chemistry knowledge was more than four times the “highly qualified” minimum.
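To make the scaled-score arithmetic explicit, here is a tiny sketch. It assumes, as the percentages quoted above imply, that Praxis II scaled scores run from 100 to 200, so a score earns (score - 100) of the 100 scaled points possible; the exact 60-point margin is just the lower bound stated above, and the function name is mine.

```python
# Minimal sketch of the scaled-score arithmetic, assuming the Praxis II
# scale runs from 100 to 200 (as the percentages in the text imply).
def pct_of_scaled_points(score, lo=100, hi=200):
    """Fraction of the available scaled points earned by a given score."""
    return (score - lo) / (hi - lo)

nj_chem_cut = 119            # pre-2004 NJ chemistry cut-score
my_score = nj_chem_cut + 60  # "more than 60 points" over the cutoff

print(f"cut-score: {pct_of_scaled_points(nj_chem_cut):.0%}")  # 19%
print(f"my score:  {pct_of_scaled_points(my_score):.0%}")     # 79%
ratio = pct_of_scaled_points(my_score) / pct_of_scaled_points(nj_chem_cut)
print(f"ratio:     {ratio:.1f}x")                             # 4.2x
```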

A majority of states use the two-hour versions of the Praxis II content knowledge tests (10265 and 20245). In chemistry, the cut-scores run from a high of 158 (Delaware) to a low of 135 (South Dakota). The 85th percentile score is 184. Assuming the one- and two-hour tests are comparable, it is comforting to know that at least 15% of the chemistry teachers know more chemistry than I do. Cut-scores for individual states can be found on the ETS website (here) or at each state’s Department of Education website.

In physics the high cut-score is 149 (Indiana), the low 126 (West Virginia). The 85th percentile score is a 177. Delaware sets a cut-score of 112 on the one-hour physics test, a truly abysmal standard. Utah requires these tests but sets no cut-score. My guess is that Utah will eventually set cut-scores at whatever level gives them an adequate supply of teachers. Objective standards, objective standards, we don’t need no stinkin’ objective standards.

Further analysis of these results is problematic. The Education Trust did not review the content of these exams, so what follows is entirely my own opinion. On the previously discussed math Praxis II, I thought a high score (above 180) was solid evidence of mastery. The physical science tests simply do not have content challenging enough for me to reach a similar conclusion. To score highly on the physics test, one only needed rote knowledge of a few formulas. Few of the questions tested concepts. One could have a high score and still have a poor conceptual understanding of the subject. Similarly, in chemistry I would not claim mastery of either rote knowledge or concepts and yet I had a high score.

Prior to NCLB and its “highly qualified” provisions, the minimal ability definition was a do no harm standard:
In all professional licensure assessments, minimum competency is referred to as the point where a candidate will “do no harm” in practicing the profession.
The post-NCLB era uses loftier language, “highly qualified”, but hasn’t actually raised the standards. In my opinion, on these tests, scores below 160 fail the “no harm” standard: essentially, these teachers have failed (scored below 60%) a test of fairly rudimentary rote subject knowledge. I suspect these low-scoring prospective teachers would also struggle on the SAT II or AP tests, yet we are saying they are “highly qualified” to help prepare our children to take these exams.

You should not have to take my word for it. It would be nice if old versions of these tests passed into the public domain. Without this level of transparency, the level of these tests remains largely hidden from public scrutiny. You can get some idea from the sample tests I linked to, but to really understand you need to see all the questions on a real test.

Before closing this topic, an appeal to anyone who can clarify the situation in Delaware. Delaware sets the lowest standard in physics, a 112 (on the one-hour test). It sets the highest standard in chemistry, a 158 (on the two-hour test). Delaware is transitioning its chemistry test from the one-hour to the two-hour version; on the one-hour test its cut-score was a very low 127. Is the two-hour test much easier than the one-hour test? If not, I do not understand these vastly different standards. Is DuPont laying off chemists, thereby providing a surplus of potential teachers? Please leave a comment if you know something.

Friday, August 04, 2006

Save the Data

In my previous posts I’ve presented evidence for how much (or really how little) mathematics our secondary math teachers need to know to be anointed “highly qualified”. The Reader’s Digest version: On the Praxis II, a test whose content is at the advanced high school level, teachers can gain “highly qualified” status even if they miss 50% of the questions. In some states the allowable miss rate climbs above 70%, all the way to 80%. If this test were graded like a typical high school exam, about 4 out of 5 of the prospective teachers would fail.

In this post I will look at a related question: “How Much Math Should Math Teachers Know?”; that is, what evidence is there for a correlation between teacher math knowledge and student math achievement? I touched on this topic briefly in my previous post. Let’s look at some details.

The bottom line here is that we don’t know. The research is largely uninformative. In a 2001 review of research entitled “Teacher Preparation Research: Current Knowledge, Gaps and Recommendations”, Wilson et al. state:
We reviewed no research that directly assessed prospective teachers’ subject matter knowledge and then evaluated the relationship between teacher subject matter preparation and student learning.
They reviewed no such studies, because no large-scale studies of this type existed. An opportunity was missed with the TIMSS study. In a previous post, I wondered why the TIMSS study didn’t also test the teachers. Such a study could have been quite informative. If it showed a significant difference in subject matter knowledge between U.S. teachers and teachers from countries with superior student results, then teacher preparation should get more attention. If not, then we can primarily look elsewhere for solutions. Both the magnitude of any differences in teacher knowledge and its possible correlation with student achievement would be of interest. When a very small study was done of Chinese versus U.S. elementary teachers, huge differences were found.

Studies of the effect of teachers’ math knowledge use indirect proxies for that knowledge. The typical proxies are based on the teacher’s exposure to college-level math: for example, did they have a math major or minor, or simply how many college math courses did they take? It is plausible that math majors would be better at high school level math than others; if so, these would be reasonable proxies.

The data says something different. My analysis of teacher testing results revealed the surprising fact that math and math education majors do not exhibit mastery of high school level math. Nor do they do any better than other technical majors on the Praxis II. That means the proxies are poor. The minimal or non-existent correlation shown by the studies Wilson reviewed is therefore entirely consistent with my teacher testing data, even if a strong correlation exists between teacher math mastery and student achievement.
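A toy simulation makes this attenuation argument concrete. All the numbers below are illustrative assumptions, not measurements: it posits a real effect of teacher mastery on student achievement, and a “math major” indicator only weakly tied to mastery, which is what the testing data suggest. The mastery-achievement correlation is sizeable, yet the major-achievement correlation, the only one the reviewed studies could measure, nearly vanishes.

```python
# Toy simulation of proxy attenuation. All effect sizes are
# illustrative assumptions chosen for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

mastery = rng.normal(size=n)                      # teachers' true math mastery
achievement = 0.5 * mastery + rng.normal(size=n)  # students gain from mastery

# Proxy: holding a math major, assumed only weakly related to mastery.
major = (0.1 * mastery + rng.normal(size=n)) > 0

print(np.corrcoef(mastery, achievement)[0, 1])    # ~0.45: real, sizeable link
print(np.corrcoef(major, achievement)[0, 1])      # ~0.03: the proxy hides it
```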

Wilson makes similar observations:
The research that does exist is limited and, in some cases, the results are contradictory. The conclusions of these few studies are provocative because they undermine the certainty often expressed about the strong link between college study of a subject matter area and teacher quality. ...

But, contrary to the popular belief that more study of subject matter (e.g., through an academic major) is always better, there is some indication from research that teachers do acquire subject matter knowledge from various sources, including subject-specific academic coursework (some kinds of subject-specific methods courses accomplish the goal). There is little definitive research on this question. Much more research needs to be done before strong conclusions can be drawn on the kinds or amount of subject matter preparation that best equip prospective teachers for classroom practice.

Some researchers have found serious problems with the typical subject matter knowledge of preservice teachers, even of those who have completed majors in academic disciplines. In mathematics, for example, while preservice teachers’ knowledge of procedures and rules may be sound, their reasoning skills and knowledge of concepts is often weak. Lacking full understanding of fundamental aspects of the subject matter impedes good teaching, especially given the high standards called for in current reforms. Research suggests that changes in teachers’ subject matter preparation may be needed, and that the solution is more complicated than simply requiring a major or more subject matter courses. [emphasis added]
Requiring a math or math education major, as some states do, is no guarantee of mathematical mastery. There is no control over the quality of the courses, or the reliability of the grades. There is no quantitative measure of how much was learned. Even if there were, it is debatable to what extent exposure to college-level course work correlates with mastery of high school level math. (In my study, math majors had a mean score that was essentially at the minimal ability level. This level is almost 40 points, on a 100-point scale, below what I would call mastery.) Teacher licensure tests could provide a more reliable, direct measurement of that mastery.

Without clear and convincing evidence, the interpretation of studies is subject to confirmation bias:
Confirmation bias refers to a type of selective thinking whereby one tends to notice and to look for what confirms one's beliefs, and to ignore, not look for, or undervalue the relevance of what contradicts one's beliefs.
Every human being operates with both knowledge and beliefs. Sometimes, however, we mistake our beliefs for knowledge.

I believe that a deep, grade-relevant understanding of mathematics is essential to great mathematics teaching. I don’t think you need a math major. I do believe you also need some knowledge of how to teach, how to manage a classroom, how to assess a student, and how to deal with parents and administrators. I believe it takes years to acquire the necessary math skill. I believe it would take only weeks to acquire the other skill set, at least the part that can be taught in a classroom, if the training were efficiently organized and you already had decent people skills. There were great math teachers before there were schools of education, but I have yet to meet a great math teacher who doesn’t know math. It also helps to have good textbooks and a rational curriculum.

As a scientist, I am willing to change my beliefs when presented with data. The relevant experiments are becoming easier to do, if only the data were preserved and made publicly accessible. A lot of educational research reminds me of the man looking for his lost keys under the lamppost because the light is better there. Education researchers use the data that is convenient, without sufficient attention to the relevance of that data to the questions they are trying to answer. I have some sympathy for both the man and the researchers. I would probably look first where the light is good; after all, maybe the keys are there. But when you cannot find them after a thorough search, it is time to look elsewhere.

Some 36 states now use the Praxis II to test prospective mathematics teachers. The questions on this exam go through an elaborate vetting process (see here, a 90-page PDF). Unfortunately, most of the richness of this data set is discarded. What is preserved is a pass/fail decision, the criteria for which vary enormously from state to state. That’s not good enough.

Save the data!