CQ Global Researcher

Did you know there's a CQ Global Researcher? Same great reporting as the CQ Researcher, but with an international perspective. The May issue (out next week) will be "The Future of the Middle East." Here's the introduction by Irwin Arieff.

Three major events reshaped the political landscape of the Middle East during a seven-week period beginning in late 2008. Israel launched a devastating 22-day assault on Gaza to halt Palestinian rocket fire, Israeli parliamentary elections revealed growing disenchantment with the peace process, and Barack Obama moved into the White House promising to try to help resolve the Arab-Israeli conflict after more than six decades of violence. Obama's pledge raised hopes in some quarters for a revival of peace talks — in limbo since the Gaza war began. But Israel's political shift to the right and deep, continuing Palestinian divisions raise the prospect of continued stalemate. Years of talks and several interim agreements have failed to convince either side that it can eventually get what it wants. Israelis, pursuing security, remain the target of militant attacks, while Palestinians — seeking a state of their own — remain under effective Israeli control.

Access the CQ Global Researcher Online [subscription required]. More info on subscriptions can be found here.

Recurrences of Flu in History from the CQ Researcher Archive

The following is an excerpt from the CQ Researcher Archives report on "Influenza Control", September 24, 1976. In the fall of 1976, the federal government sponsored a National Influenza Immunization Program that at the time was the most ambitious such effort in history. It was the first time a nation's government attempted to vaccinate virtually an entire population against a potential influenza epidemic.

Recurrences of Flu in History
Influenza apparently has plagued humankind throughout history. A flu-like epidemic swept through the Athenian army in 412 B.C., and Hippocrates described another epidemic in the same century that was probably influenza. The name influenza was coined by two Italian historians, Domenico and Pietro Buoninsegni, in 1580. They believed that fevers, sore throats, soreness and nausea were attributable to un'influenza—an influence—of the stars. The word influenza was introduced to the English language in the mid-1700s.

For many years influenza was thought to be passed along by the winds. It was not until the late 19th century that this idea was dispelled. In 1892 Richard Pfeiffer identified the influenza bacillus, a tiny organism of varied forms that lives on certain substances in blood. For a time, this bacillus was thought to be the causative agent of influenza. But that notion was proved wrong during the period of intensive research and experimentation that followed the worst outbreak of influenza in world history, in 1918.

Devastating Influenza Pandemic of 1918–19
The great influenza pandemic that swept the world in 1918–19 may have been the most virulent outbreak of disease in history, at least in terms of the swiftness of its devastation. It killed more than 20 million persons around the world, including some 550,000 in the United States—all within two years. [1] “Mathematically, had the epidemic continued its rate of acceleration, humanity would have been eradicated in a matter of months,” Joseph E. Persico wrote in American Heritage. [2] The pandemic killed about one out of every 100 people living in the world at that time. Influenza deaths outnumbered World War I fatalities by more than two to one.

The disease was known as the “Spanish flu” or the “Spanish lady” in the United States because, although waves of it swept many European countries during the final year of World War I, only in neutral, uncensored Spain did the newspapers report the full extent of the epidemic. No one knows for sure where the flu originated. Some scientists believe it may have started independently in Europe and Asia, or that Chinese workers carried it to Europe. Others say the best evidence points to the United States. At Ft. Riley, Kan., in March 1918, hundreds of soldiers came down with flu after a severe dust storm had whipped up clouds of dirt and smoke from burning piles of manure. Many of the soldiers later sailed for France, and the French and British armies soon were hit by influenza outbreaks. Crowded military camps, unsanitary wartime conditions and constant troop movements contributed to the spread of the disease.

The German army was soon stricken, as were inhabitants of such faraway places as India, Japan, Chile, Greenland, Alaska and Africa. Among the world leaders who fell ill were British Prime Minister David Lloyd George, King George V, French Premier Georges Clemenceau, German Premier Prince Max of Baden, Kaiser Wilhelm II, Gen. John J. Pershing and Navy Assistant Secretary Franklin D. Roosevelt. Roosevelt nearly died, and Clemenceau lost a son to the flu. Densely populated cities naturally were centers of contagion, but people in remote farmhouses, lumber camps and sheepherders' cabins also were afflicted.

There were three “waves” of the pandemic in the United States: it appeared in the spring of 1918, peaked in the early fall, and reappeared in the spring of 1919. The disease completely disrupted everyday life throughout the nation. Doctors were virtually helpless in treating the illness, and were forced to fall back on the most rudimentary public-health measures. In Washington, D.C., schools, theaters and bars were closed, public gatherings were prohibited, and federal employees went on staggered work shifts. In Boston, the stock exchange was open only half a day, sports events were canceled, and “churchless” Sundays were declared. In New York City, huge signs warned that public coughing or sneezing without a handkerchief was unlawful and punishable by fines or jail sentences. San Francisco was one of many cities that required people to wear face masks in public. Around the country hospitals overflowed, health agencies ran short of supplies and personnel, and coffins became scarce.

Efforts to Develop a Vaccine Against Flu
The influenza struck abruptly, without warning. One minute a person would feel fine and the next minute he or she would be weak and helpless. Headache, chills, fever, sore throat, and leg or back pains were among the early symptoms. Severe coughing and gasping for breath followed, with accompanying symptoms of swollen ankles, bloody urine and eye-muscle paralysis in some people. The lungs filled with a dense fluid, and showed signs of hemorrhaging, abscessing and swelling; sometimes they collapsed. Since the capillaries could not carry oxygen to the bloodstream, the actual cause of death was asphyxiation. Some patients succumbed to pneumonia after being weakened by the flu. But those who did not die were usually up and around within a week. Strangely, the disease hit the young and strong harder than the old or weak.

For the medical profession, the 1918–19 pandemic was a frustrating and sobering experience. That generation of physicians had developed the germ theory of disease and had found vaccines or other means of preventing typhoid fever, diphtheria, tetanus, meningitis, tuberculosis, malaria and yellow fever. But influenza was unaffected by known medications. No one knew what caused it, how it traveled or why it killed people. Various theories attributed the disease to chemical warfare gases, cosmic rays, coal dust, atmospheric stagnation, cats, dogs, fleas and dirty dishwater.

In September 1918, at the National Swine Breeders' show in Cedar Rapids, Iowa, a prophetic discovery was made—but few people realized it at the time. Many hogs at the show had been stricken by a debilitating disease, and Dr. J. S. Koen, a hog cholera inspector from the Bureau of Animal Husbandry, noted a similarity to human influenza. In his report to Washington, Koen called the sickness “hog flu,” but it was years before researchers found a connection between the human and animal illnesses.

The American virologist Richard E. Shope, who studied swine flu extensively, was the first to demonstrate, in 1931, that it was caused by a virus in conjunction with influenza bacteria. In 1933 influenza virus, type A, first was isolated by three British scientists, and in 1936 one of them, Patrick Laidlaw, suggested that swine flu was the cause of the 1918–19 pandemic. Shope supported Laidlaw's contention, and demonstrated that swine flu can survive in a latent form in hog worm parasites and can erupt suddenly for various reasons. During the 1930s, several scientists found that influenza viruses could be transmitted from humans to animals, and that animals infected with the virus developed an immunity to the disease. This discovery led to the development of flu vaccines.

In 1943, a U.S. Army commission on influenza conducted experiments with influenza virus vaccines grown on chick embryos in eggs. A test group of 6,250 vaccinated men experienced only one-fourth as much illness from type A influenza as a similar group of unvaccinated men. In 1945 the entire U.S. Army was vaccinated, and during a type B influenza epidemic that winter only 1 per cent of all the soldiers fell ill, compared to 10–13 per cent of the unvaccinated population.

Outbreaks of the Disease in Recent Years
The most severe influenza outbreak in recent years was the Asian flu of 1957–58. There were 45 million cases of Asian flu in the United States during the fall of 1957 alone, according to HEW estimates, and 70,000 persons died. From its point of origin in central China in February 1957, the Asian flu spread quickly to Hong Kong, Taiwan, Singapore and other nearby countries. By summer there were reports of influenza in Europe, the Middle East and the United States.

The West Coast was hit first in this country, followed by the South and then the East. In the fall the epidemic spread to the central and northern states, reaching its peak in late October. A second wave peaked in February 1958, but the number of cases was far fewer by then. Intensive efforts were made to prepare for the epidemic, through large-scale production of vaccines and nationwide cooperation of many agencies. A considerable amount of vaccine was available just before the fall outbreak and its usage is believed to have prevented much illness and death.

A decade later, there was a pandemic of the so-called Hong Kong flu, which started in that city in July 1968. It spread to the Far East, India and Australia by the early fall, and then to the United States. There were major outbreaks nationwide in the fall and winter of 1968–69, starting in Puerto Rico and Alaska in late September. California reported the first outbreak in the continental United States in late October, and large numbers of cases appeared in the eastern states in mid-November. By Christmas, the Hong Kong flu epidemic had touched all 50 states. The disease spread so swiftly that vaccination efforts were largely ineffectual. About 50 million cases were reported, 33,000 persons died, and the estimated cost to the nation was $3.8-billion in medical bills, sick leave and related costs.

[1] The Plague of Justinian, which began in 542 A.D., may have claimed 100 million lives, but it lasted for 50 years. The bubonic plague, or Black Death, of the 14th century, killed more than 60 million over a period of several years.

[2] “The Great Swine Flu Epidemic of 1918,” American Heritage, June 1976, p. 84.

To view the entire report, log in to CQ Researcher Plus Archives [subscription required]. The 1976 article is not currently available for individual sale, but you may be interested in a related report from 2006 on "Avian Flu".

Judicial Elections

Are races for judgeships bad for justice?

By Kenneth Jost, April 24, 2009

The United States is the only country in the world that requires most judges to face popular elections to gain or hold office. Today, as in the past, most judicial elections attract little attention. Over the past three decades, however, political parties and interest groups have spent millions of dollars on targeted races for state supreme courts in order to change the tribunals' political or ideological composition. Business groups succeeded in recent elections in West Virginia and Wisconsin in backing candidates who defeated incumbent justices and tilted the courts toward business interests. Defenders of judicial elections say they help make sure courts are accountable and responsive to the public. Critics say the special-interest funding and misleading campaign tactics of many judicial campaigns threaten the integrity of the justice system. Proposals for change, however, are making little headway. Meanwhile, the U.S. Supreme Court is considering whether to require judges to bow out of cases involving major campaign supporters.

The issues:

  • Should states take new steps to control campaign contributions or spending in judicial elections?
  • Should states adopt stricter rules for judges to recuse themselves?
  • Should states with judicial elections modify their rules for selecting judges?
To read the Overview of this week’s report, click here.
To view the entire report, log in to CQ Researcher Online [subscription required], or purchase the CQ Researcher PDF

    Judicial Elections: Overview from the April 24, 2009 CQ Researcher Report

    By Kenneth Jost

    Chief Justice Shirley Abrahamson of the Wisconsin Supreme Court is well known not only at home but also across the country as an advocate for judicial independence. But when Abrahamson learned she would face an opponent for reelection to a fourth 10-year term, she pulled out all the political stops.

    The 75-year-old Abrahamson hired a veteran political operative to head her campaign, collected endorsements from across the political spectrum and raised more than $1.3 million. “She came into the race prepared,” says Charles Franklin, a political science professor at the University of Wisconsin in Madison.

    Abrahamson had reason to take seriously the challenge by Randy Koschnick, an outspokenly conservative circuit court judge in Milwaukee. Just a year earlier, a conservative challenger had knocked off one of Abrahamson’s fellow liberals on the bench with a hard-hitting, multimillion-dollar campaign financed in part by the state’s business lobby. Michael J. Gableman’s election as justice in April 2008 gave conservatives a 4-3 majority on the Wisconsin court.

    One year later, however, Abrahamson’s political efforts paid off on April 7 with a 59 percent to 41 percent victory over Koschnick. “I ran a good race and kept it clean,” Abrahamson told Milwaukee’s Journal Sentinel afterward. But she added that her financial advantage over Koschnick – who spent only $180,000 – was critical. “That makes a big difference in how you can get your message out.”

    For most of the world, Abrahamson’s victory would not be as remarkable as the fact of the election itself. Except for Japan and Switzerland, the United States is the only country that requires judges to face popular election to gain or hold office. Even though federal judges serve life terms after nomination by the president and confirmation by the Senate, 39 out of 50 states use some form of election for judgeships either at the trial or appellate level or both. The elections vary from traditional partisan contests to nonpartisan races to so-called retention elections in which incumbent judges run without an opponent and remain in office unless a majority votes to remove them.

    Today as in the past, most judicial elections attract little attention. Most vacancies are initially filled by gubernatorial appointment, and virtually all incumbents remain in office whether they face “contestable” or retention elections.

    Over the past 30 years, however, judicial elections in a handful of states have become high-cost, bare-knuckle political battles. In particular, the U.S. Chamber of Commerce’s decision in 2000 to dive into state judicial politics in a big way has led to multimillion-dollar campaigns like Wisconsin’s 2008 contest that have succeeded – as in Wisconsin – in tilting some state supreme courts toward business interests on civil litigation and some other issues.

    The Chamber – which now generally avoids direct comment on judicial election issues – said at the time it wanted to counteract political influence in the judicial selection process by trial lawyers’ groups. Business groups like the Chamber, the National Association of Manufacturers and the American Tort Reform Association blame the plaintiffs’ bar for a history of favorable rulings on personal injury suits only recently being cut back in some states.

    The increasing cost and the deteriorating tone of judicial election campaigns worry many bar associations, traditional court reform organizations and liberal advocacy groups. “Over the last 20 years, and especially in the past seven years, we’re seeing a race to the bottom with respect to financing and campaigning in judicial elections,” says Seth Andersen, executive director of the American Judicature Society, a 95-year-old court reform organization that created the retention-election systems now used in 19 states.

    “Judicial elections are now posing the single, greatest threat to fair and impartial courts,” says Tommy Wells, a Birmingham, Ala., lawyer and president of the American Bar Association. The ABA along with state and local bar associations has been the major interest group supporting retention-election plans.

    Wells and others fear that campaign contributions from businesses and from lawyers with cases before courts are undermining public confidence in judges’ impartiality. “There’s a real fear that money alone can be tipping scales of justice,” says Bert Brandenburg, executive director of Justice at Stake, a Washington-based coalition of liberal-leaning legal advocacy groups.

    Judicial elections are strongly defended, however, by an assortment of Republican officials and leaders, business lobbies and conservative advocacy groups and experts. They emphasize that judges, especially state supreme court justices, have the power to make law in their respective jurisdictions – in some cases with no effective review by the federal judiciary.

    “Judges are making law, and it’s only appropriate for the people to choose judges,” says James Bopp, a lawyer in Terre Haute, Ind., who has represented Republican and anti-abortion groups among others in campaign-speech cases at the U.S. Supreme Court and in lower federal courts. “The whole idea of popular sovereignty supports judicial elections.”

    Bopp and others profess little concern about the impact of increased campaign costs and spending by businesses and other interest groups. “If you are going to elect your judges, then you pretty much have to allow much of the same trappings that you do for any other election,” says Sean Parnell, president of the Center for Competitive Politics, a Washington-based organization critical of campaign finance regulations.

    Michael DeBow, a law professor at Samford University’s Cumberland School of Law in Birmingham, Ala., and a member of the conservative-libertarian Federalist Society, says the public supports judicial elections despite concerns about the impact of contributions on judges’ decisions. “They don’t want to let go of judicial elections,” DeBow says.

    The debate over the impact of campaign contributions and spending is now pending at the U.S. Supreme Court. The justices are being asked to decide whether constitutional due process may require judges to step out of a case – in legal parlance, to recuse themselves – because of campaign contributions or spending by a party, lawyer or other individual with a stake in the outcome.

    The issue reached the justices in a case brought by the president of a now defunct coal company in West Virginia who says state supreme court justice Brent Benjamin should have recused himself from ruling on the $50 million award the company won against a rival coal business. Benjamin refused to recuse himself even though the president of the rival company had spent more than $3 million to help Benjamin during his successful campaign for the supreme court seat in 2004. Benjamin eventually cast a critical vote in the 3-2 decision in March 2008 overturning the award.

    Stricter standards on recusal are among the reforms the ABA, Justice at Stake and other public-interest groups are urging to try to counteract what they see as the negative effects on public confidence in the judiciary due to judicial elections. They also express interest in public financing of judicial campaigns – a system now on the books in two states, North Carolina and New Mexico.

    From the opposite perspective, judicial election supporters say stricter recusal standards may undermine elections by deterring campaign contributions or spending. They similarly argue that public campaign financing – accompanied by overall limits on candidates’ spending – will reduce the amount of information for voters in judicial contests.

    The arguments over judicial elections are drawing only limited attention from state legislators, who would have to be involved in making any changes in selection or election methods or campaign finance regulations. The general public also is largely unengaged on the issue. Indeed, despite public support for judicial elections in general, voter turnout is traditionally low in judgeship races. Wisconsin’s relatively high-profile supreme court race in April 2008 drew 830,000 voters – fewer than one-third of the nearly 3 million state voters in the presidential election in November.

    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase the CQ Researcher PDF

    What are you doing for Earth Day?

    Are you doing anything special to acknowledge Earth Day (April 22, 2009)? If so, we'd like to know about it. Tell us by leaving a comment and/or taking the poll on the blog.

    And yes, informing yourself by reading an environment-related CQ Researcher report counts! :)

    To get started, here are some ideas:
    1. Check out this excerpt from the overview section of January's report on Confronting Warming.
    2. Comment on the graphic on the right from the Dec 2008 issue on Reducing Your Carbon Footprint showing which U.S. metro areas have the highest and lowest carbon emissions.
    3. Calculate your carbon footprint.

    Wrongful Convictions

    Is overhaul of the criminal justice system needed?

    By Steve Weinberg, April 17, 2009

    Until March 2009, few Americans had heard of Ronald Cotton, who was convicted in North Carolina of raping a college student and served 11 years in prison before being exonerated by DNA testing. Now Cotton is a household name because of a book about his case and appearances on “60 Minutes” and NBC’s “Today” show. As recently as 10 years ago, the proposition that innocent men and women regularly end up in prison failed to find traction. Today, thanks to the power of DNA evidence, media coverage and the establishment of innocence projects, there is general acceptance that wrongful convictions indeed occur. Dozens of states have passed laws to prevent wrongful convictions and compensate those wrongly imprisoned. Defense attorneys and many academics say wrongful convictions are a recurrent problem requiring substantial changes in the criminal justice system, but prosecutors, police and other academics say mistaken convictions are such a small percentage of all cases that the system should mostly be left alone.

    The Issues:

    * Are wrongful convictions a serious problem?
    * Do errors by forensic laboratories contribute to wrongful convictions?
    * Would systemic reform reduce the number of wrongful convictions?

    To read the Overview of this week’s report, click here.
    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase a CQ Researcher PDF

    Wrongful Convictions: Overview from the April 17, 2009 CQ Researcher report

    By Steve Weinberg, April 17, 2009

    Darryl Burton walked out of a Missouri prison in 2008 after serving 24 years for a murder he did not commit. He had proclaimed his innocence from the day of his arrest in St. Louis. Sixteen years into his prison sentence, Burton's hopes for release rose when Centurion Ministries agreed to look into his case.

    The nonprofit organization in Princeton, N.J., was founded by James McCloskey, an ordained minister and former business executive who has spent the last 29 years investigating alleged wrongful convictions. Working with a paid staff of five and a dozen volunteers, McCloskey reviews thousands of inmates’ requests for assistance every year and selects the few his organization can afford to investigate. Entirely dependent on donations from private individuals, Centurion has played a major role in more than 40 exonerations.

    Nobody knows how many innocent men and women are serving prison terms for crimes they did not commit. There is no doubt, however, that since DNA testing became accepted as accurate some 15 years ago, 235 inmates have been freed because of the forensic technique, according to the Innocence Project, a national organization based in New York City.

    But testable DNA material shows up in only about 10 percent of crimes — mainly murder and rape — that lead to arrests. Moreover, in most jurisdictions, fewer than 10 percent of all crimes charged proceed all the way to trial. In cases with trial records, it is sometimes possible to determine later the innocence of a convicted defendant. But most inmates end up in prison by pleading guilty before trial, leaving a scant public record.

    In the 2006 decision by the U.S. Supreme Court in Kansas v. Marsh, Justice Antonin Scalia, writing a concurring opinion to the majority ruling, said the wrongful-conviction rate across the nation is minuscule. Scalia quoted approvingly from a New York Times op-ed by Joshua Marquis, the district attorney in Clatsop County (Astoria), Ore., and a director of the National District Attorneys Association. Marquis, citing what he considered a misguided study by a law professor, wrote, “Let’s give the professor the benefit of the doubt — let’s assume that he understated the number of innocents by roughly a factor of ten, that instead of 340 there were 4,000 people in prison who weren’t involved in the crime in any way. During that same 15 years, there were more than 15 million felony convictions across the country. That would make the error rate .027 percent, or, to put it another way, a success rate of 99.973 percent.”

    In fact, Scalia asserted, numerous cases labeled “exonerations” are nothing of the sort. Instead, they are primarily violations of defendants’ due-process rights. “Most are based on legal errors that have little or nothing to do with guilt. The studies cited by the dissent demonstrate nothing more.”

    One of the scholars mentioned critically by Scalia is Samuel R. Gross, a University of Michigan law professor. After studying Scalia’s opinion, Gross called the .027 percent error rate Scalia cited “absurd.” Gross noted that “almost everything we know about false convictions is based on exonerations in rape and murder cases, which account for only 2 percent of felony convictions. Within that important but limited sphere, we have learned a lot in the past 30 years; outside it, our ignorance is nearly complete.”

    Gross argues that cases involving a plea agreement — and thus no trial — frequently lead to undocumented wrongful convictions. Innocent individuals plead guilty, Gross says, because they worry an adverse jury verdict will result in a longer prison sentence than the deal offered by the prosecutor — or even the death penalty.

    A great deal more is at stake with wrongful convictions beyond simply the welfare of innocent individuals in prison. There is also the sobering reality that every time an innocent defendant is incarcerated, the actual murderer or rapist or armed robber might be at large, committing more crimes. Also at stake is public trust in the criminal justice system. Mistrust due to repeated wrongful convictions leads to decreased citizen cooperation with police and jurors who disbelieve prosecutors.

    Generalizations about the criminal justice system are difficult to make, because it is not really a unified system. Instead, arrests, pretrial negotiations and trials are decentralized. The United States is divided into more than 2,300 local criminal jurisdictions, each served by an elected or appointed prosecutor (most commonly known as a district attorney), judges and police agencies. Superimposed onto the local jurisdictions is the federal system, with at least one federal prosecutor (called a U.S. attorney) and federal judges in each state. Some jurisdictions have no documented wrongful convictions. Others have spawned multiple wrongful convictions.

    The National District Attorneys Association argues that wrongful convictions are episodic, not epidemic, and almost always arise from well-intentioned law enforcement work, not from incompetence or dishonesty. If pressed to place a number on wrongful convictions, district attorneys tend to say it’s less than 1 percent of all cases charged. Conversely, members of the National Association of Criminal Defense Lawyers say wrongful convictions are epidemic in multiple jurisdictions and frequently arise from incompetent or dishonest law enforcement personnel. If pressed, defense lawyers say the percentage of wrongful convictions is between 5 and 10 percent.

    For its part, the American Bar Association (ABA) acknowledges the reality of wrongful convictions. A report by the ABA’s Ad Hoc Innocence Committee to Ensure the Integrity of the Criminal Process offers numerous recommendations aimed at reducing wrongful convictions. The frequency of wrongful convictions “undermines the assumption that the criminal justice system sufficiently protects the innocent,” according to the report.

    Increased public awareness of wrongful convictions, like that of so many other social problems, has been generated by the news and entertainment media. The public has been bombarded by exoneree stories in recent years, including best-selling author Scott Turow’s novel Reversible Errors; the stage play “The Exonerated”; the celebrated documentary movie “The Thin Blue Line”; the Hollywood drama “Just Cause,” starring Sean Connery and Laurence Fishburne, plus, of course, “CSI” and numerous other television police procedurals.

    Indeed, some prosecutors and judges refer to the “CSI effect,” in which real-life juries acquit defendants because the forensic evidence police present fails to match the quality of the fictional evidence that TV police evidence technicians working in sophisticated labs uncover — all within an hour.

    The new awareness of wrongful convictions has led to numerous in-depth studies of the problem and a wide range of enacted and pending legislation in many states, from new funding for crime labs to compensation for wrongly convicted men and women.

    Since his release from prison in Missouri last year, Burton, like many exonerees, has attended occasional gatherings of other exonerees. Invariably, they exhibit forgiveness remarkable to behold. When they speak in anger, it is almost always because they say they have never received apologies from the police officers and prosecutors who wrongly sent them to prison, or they have trouble finding decent jobs, often because they lack job skills or potential employers wonder if they are truly innocent.

    Ronald Cotton was found innocent and released after nearly a dozen years in prison in North Carolina for a rape he didn’t commit. He forgives Jennifer Thompson-Cannino, the woman whose mistaken testimony convicted him, but still feels angry about the aftermath. In a book about his conviction and redemption, co-authored with her, Cotton explains: “All those years with bars and razor wire around me — you’re no better than a dog in a cage. After being locked up for so long, they just toss you out and expect you to deal with it. I had no money, and how could I explain on job applications where I had been for the last 11 years?”

    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase a CQ Researcher PDF.

    Korea's Roots

    The following is background from the April 11, 2003 CQ Researcher issue entitled "North Korean Crisis" by Mary Cooper:

    North Korea emerged from the ashes of World War II in 1945 to become one of the most enduring vestiges of the Cold War. But the culture and political ideology of the communist state are unique, owing as much to the Korean Peninsula's troubled history as to Cold War rivalry. Indeed, the authoritarian, paternalistic, isolationist and highly militaristic regime that rules North Korea today has its roots in Korea's troubled dealings over the millennia with its powerful neighbors — China, Russia and Japan. [17]

    For many years, Korea managed to ward off Western encroachment, which began in earnest with U.S. Navy Commodore Matthew C. Perry's opening of Japan to foreign trade in the mid-19th century. Indeed, Americans' first attempt to penetrate Korea's isolation ended badly. In 1866, when the U.S.S. General Sherman steamed up the Taedong River to the outskirts of Pyongyang, local inhabitants burned the ship and killed all its crew. North Korea's late leader, Kim Il Sung, claimed that his great-grandfather participated in that attack, now celebrated as a heroic victory against foreign invaders.

    Korea's isolation was short-lived. Japan annexed the peninsula in 1910 and turned it into a colony whose natural resources would help build the Japanese war machine. Korea's occupiers industrialized the peninsula, building factories, roads and hydroelectric dams and laying the foundations of later private industrial development in the south and state-controlled industry in the north.

    The colonial experience, which ended with Japan's defeat in World War II, left a lasting impression of national humiliation that would feed Korean aspirations for independence.

    Korean resentment of its colonial status fueled intermittent protests and insurrections that were brutally suppressed by Japanese administrators. Exiled to China and the Soviet Union, some of the dissidents, including Kim Il Sung, gained military training. After Japan annexed Manchuria in 1931, the rebel leaders returned to the region and led guerrilla actions against the Japanese occupation forces, which had a profound influence on North Korea's military and ideological development. Indeed, Kim and his resistance compatriots would occupy most leadership positions in North Korea for the next 50 years.

    Korean War

    Even before World War II ended, the United States and its allies began deliberating the future of Korea. At a meeting in Cairo, Egypt, in December 1943, they endorsed President Franklin D. Roosevelt's vague proposition that upon Japan's defeat Korea would become independent “in due course.” The Roosevelt administration also reversed traditional American non-involvement in Korean affairs by defining security on the peninsula as important to postwar Pacific — and therefore U.S. — security.

    On Aug. 11, 1945, War Department officials, without consulting Korean or Soviet officials, made the fateful decision to divide Korea into Soviet and U.S. zones separated along the 38th parallel. In early September, 25,000 American soldiers occupied southern Korea, ending the hated Japanese occupation of the peninsula. But they immediately faced opposition among Koreans who saw the U.S. presence as a continuation of colonialism and resented the notion that they were not ready for independence. Meanwhile, Soviet forces occupied Korea north of the 38th parallel and brought with them Kim Il Sung and other communist leaders who had left the country during the Japanese occupation.

    Soviet leader Josef Stalin had quietly accepted the partition of Korea, but U.S.-Soviet relations quickly chilled. Although Korea was home to one of the oldest communist movements in Asia, the United States saw the emergence of communist leanings in the South in late 1945 as evidence of a Soviet plan to dominate the entire peninsula.

    In 1947, President Harry S Truman called for the containment of communism within existing boundaries — the so-called Truman Doctrine. The U.S. won United Nations support for U.N.-supervised elections for all of Korea if the Soviet Union approved the plan. When it didn't, elections were held in the South in May 1948, resulting in the establishment of the Republic of Korea and the ascendance to power of Syngman Rhee, the first of several authoritarian leaders who would rule South Korea for the next three decades. [18]

    Kim, meanwhile, had emerged as the leader of the communist movement that consolidated power in the North and established a central government in February 1946. Over the next year, land and industries were nationalized and brought under a system of central planning along the Soviet model. Bolstered by his earlier activities as a nationalist guerrilla, Kim became highly popular, far more than the new leaders in the South, who were regarded by many Koreans as puppets of the American colonial occupiers.

    Kim strengthened his hold with the merger of communist parties in 1949 into the Korean Workers' Party, which dominated the new Democratic People's Republic of Korea (DPRK) from its founding on Sept. 9, 1948, three weeks after the Republic of Korea's formation.

    In contrast to Soviet-supported regimes in Eastern Europe, Kim's brand of communism was no mere copy of the Soviet model — partly because of Stalin's withdrawal of Soviet forces from Korea in 1948. Kim infused a singularly Korean theme into his communist system through the adoption of chuch'e ideology. Defined roughly as keeping foreigners at arm's length, chuch'e appealed to the traditional Korean ideals of self-reliance and independence. Kim put his doctrine into action in 1955, when he distanced his regime from the Soviet Union, and throughout his rule by subjecting North Koreans to continual political indoctrination.

    In 1949, Kim had himself named suryong, an old Korean word for “leader” that was modified to mean “great leader.” That year he began condemning South Korea as a puppet state.

    Although neither Seoul nor Pyongyang recognized the 38th parallel as a legitimate boundary, historians generally blame the North — and not the South — for the outbreak of the Korean War. Bolstered by some 100,000 war-trained forces and support from China and, to a lesser degree, the Soviet Union, North Korean forces invaded South Korea on June 25, 1950, and took control of all but a small corner of southeastern Korea around the port city of Pusan.

    In September, U.N. and South Korean forces led by U.S. Gen. Douglas MacArthur drove out the invaders. The war dragged on for another three years, costing the lives of some 800,000 Koreans on both sides of the parallel, 115,000 Chinese and 37,000 Americans and laying waste to much of the peninsula. An armistice was signed in the summer of 1953 recognizing the de facto division of Korea.

    Military Ambitions

    The war's conclusion 50 years ago, on July 27, 1953, came not with a peace treaty but with an armistice that merely suspended the hostilities and separated the two sides at the 38th parallel. To bolster South Korea's military forces, the United States retained a sizable military presence in South Korea, backed by naval forces in the Pacific and, ultimately, its superpower nuclear deterrent. Faced with such a formidable adversary, North Korea poured its resources into creating one of the most militarized societies on Earth — eventually building a million-man army equipped with some 11,000 artillery pieces.

    It was not long before the North sought to move beyond its conventional arsenal. As early as 1964, Pyongyang set up a nuclear-energy research complex at Yongbyon, where the Soviets built Korea's first nuclear reactor a year later. A plutonium-reprocessing plant and other support facilities appeared over the next two decades.

    Despite signing the Nuclear Non-Proliferation Treaty (NPT) in 1985 — which barred signatories without nuclear weapons from developing them — Pyongyang began, barely two years later, hindering the U.N. inspections of its nuclear facilities that were meant to ensure compliance with the treaty. Inspectors from the International Atomic Energy Agency (IAEA) did not gain access to North Korean nuclear facilities until May 1992, and their findings were inconclusive amid intelligence reports that North Korea was secretly continuing its nuclear program at clandestine sites.

    Besides pursuing a nuclear capability, North Korea also is believed to have developed biological and chemical weapons beginning in the early 1980s, even though in 1987 it acceded to the 1972 Biological and Toxin Weapons Convention banning pathogens for military uses. But, according to the Washington-based Nuclear Threat Initiative, North Korea produced weapons containing anthrax, botulinum toxin and plague. [19]

    The group also estimates that North Korea has 12 chemical-weapons plants producing some 4,500 tons of mustard, phosgene, sarin and other chemicals — and that annual production could reach 12,000 tons in case of war. Unlike the United States, North Korea never signed the 1993 Chemical Weapons Convention, which bans chemical weapons and provides for monitoring compliance, including intrusive inspections and allowances for sanctions and the use of force against violators. In addition, North Korea's thousands of artillery systems can deliver chemical weapons into the DMZ and Seoul. [20]

    Military experts say that since the 1970s North Korea has been developing missiles capable of reaching targets beyond the range of conventional artillery. By 1984, it had tested a ballistic missile based on Soviet Scud technology, and it has since produced several types of missiles, including 100 of the advanced, 800-mile-range Nodong. The even longer-range Taepodong-1 failed during a 1998 test launch, while the newer Taepodong-2, which potentially could reach the U.S. West Coast, is reportedly almost ready for testing.

    Although there is no evidence that North Korea has exported its weapons of mass destruction, it has sold its missile technology to several countries, including Egypt, Iran, Libya, Pakistan, Syria and Yemen.


    [17] Unless otherwise noted, material in this section is based on “North Korea — A Country Study,” Library of Congress, June 1993.

    [18] For more information on Korea's postwar history, see Selig S. Harrison, Korean Endgame (2002).

    [19] “North Korea Overview,” Nuclear Threat Initiative, January 2003.

    [20] For background, see Mary H. Cooper, “Chemical and Biological Weapons,” The CQ Researcher, Jan. 31, 1997, pp. 73-96.


    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase the CQ Researcher PDF.

    Business Bankruptcy

    Should the federal government provide financing to bankrupt companies?

    By Barbara Mantel, April 10, 2009

    Seven weeks after Lyondell Chemical filed for Chapter 11 in early January, the judge in the case approved an $8 billion “debtor-in-possession,” or DIP, loan to the Houston-based company, one of the largest bankruptcy loans in U.S. history. Such loans are “the fuel that keeps companies going through bankruptcy, allowing them to continue paying their suppliers and their employees as they try to become profitable again,” according to Thomson Financial News. Exit financing, another class of bankruptcy loans, is needed at the end of the process when the company is ready to emerge from Chapter 11.

    But despite the record-setting size of Lyondell’s loan, most companies are struggling to find bankruptcy financing, and the consequence is, essentially, death. “Without bankruptcy financing, you can neither keep the company alive long enough to fix it or to sell it,” says Jeffrey Wurst, a bankruptcy lawyer in Uniondale, N.Y. Experts estimate that as recently as a year and a half ago as many as 30 companies were vying to provide bankruptcy financing, and today there are fewer than five.

    Historically, financial institutions have been eager to provide the cash necessary for firms to survive the bankruptcy process because these loans are the first to be paid back, command high interest rates and fees and rarely default. “I get e-mails every day from lenders saying they are looking to make DIP loans,” says Wurst. But they can no longer find borrowers that look like a good risk, he says.

    Companies entering bankruptcy these days already have pledged so many of their assets as collateral for earlier loans that there’s nothing left to pledge to a new lender. Not only have companies put up their inventories, accounts receivable, equipment and plants as collateral, oftentimes they have put up intangibles like intellectual property as well, long before bankruptcy was even contemplated.

    Even when unencumbered assets can be found that could be pledged for a DIP loan, their value keeps dropping and is difficult to determine in this economic climate. “It’s comparatively easy to value an asset if the business is going to be running a year from now, but it’s hard to know that will be the case now,” says David Skeel, a professor of corporate law at the University of Pennsylvania Law School. “And it’s comparatively easy to value an asset if you know there is an active market for the assets of companies,” he adds, “but there aren’t liquid markets for much of anything right now.”

    With traditional providers of bankruptcy financing stepping back, companies in Chapter 11 have had to turn to their existing lenders. And the interest rates, fees and conditions those lenders are imposing are often onerous. “The interest rates for DIP financing are astronomical now,” says William Lenhart, national director of business restructuring services for BDO Consulting. “In addition, the post-petition financing may only be for 60, 90 or 120 days,” he adds, which often leaves no time to reorganize and forces the debtor into a sale or liquidation.

    Many times the DIP loan doesn’t actually include much fresh money. Lenders roll their pre-bankruptcy loans into the DIP financing, garnering higher interest rates and fees in the process. In addition, as the value of a debtor’s assets continues to decline, lenders will often restrict the amount the debtor can borrow even further. After Circuit City’s bankruptcy filing last November, it arranged DIP financing with a face value of $1.1 billion from its existing bank group. But Circuit City’s lawyers told the bankruptcy judge that when all was said and done, there would only be $50 million of fresh money available to the retailer, at a cost of $30 million in fees and expenses. A little more than three months after filing, Circuit City shut down, selling its assets and letting its 34,000 employees go.

    Some bankruptcy experts argue the credit markets are simply broken and that the federal government may need to step in. “My bottom line is that I would much rather have the government provide DIP financing in a bankruptcy than bail out an industry,” says Skeel. Providing DIP financing would be cheaper, he says, because “all a bankrupt company needs is enough cash to fund its operations; it doesn’t have to pay its general creditors.”
    Choosing which industries to help would be tricky. “The government would have to prioritize industries,” says Skeel, “based on which would have the most devastating consequences if companies filed for bankruptcy and could not find financing.” Edward Altman of New York University says that if General Motors ends up in Chapter 11, the federal government should step in and provide the necessary DIP financing, which he estimates could amount to as much as $50 billion.

    But Altman is not prescribing the same medicine for other companies. “There are not too many companies like GM,” he says. Rather than provide bankruptcy financing directly to other companies in Chapter 11, he recommends government arm-twisting of traditional lenders of DIP and exit loans — many of whom received bailout money themselves — to get back in the game.
    Or the government, he says, could set up a DIP fund and arm-twist the country’s largest banks and corporations, like General Electric, which have financial subsidiaries, to contribute a total of $40-$50 billion.

    On second thought, he says, the government could be a contributor as well. “Why not,” he asks, “if that would get it going, since DIP financing is usually a pretty good investment?”
    But other bankruptcy experts say this multibillion-dollar corner of the credit markets is not broken and needs no government intervention. “Bankruptcy financing is a way in which the market imposes discipline on businesses,” says Williams of the American Bankruptcy Institute. “If the market itself will not provide financing because it’s not confident that the business will generate enough revenue to cover the cost and return on that investment, then if the government provides that financing instead, it’s doing so for reasons that have nothing to do with economic reality.”

    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase the CQ Researcher PDF.

    Vermont legalizes same-sex marriage. What do you think?

    What do CQ Researcher readers think about today's announcements that Vermont legalized gay marriage and that the Washington D.C. Council voted to recognize same-sex marriages performed in other states?  Vote in the latest poll on the CQ Researcher blog.  

    For background, check out the September 2008 issue called "Gay Marriage Showdown" [subscription required] or purchase the PDF.  

    An excerpt from the issue is here on the blog.

    Obama Official Establishes Contact With Iran

    A brief exchange this week between presidential envoy Richard C. Holbrooke and Iranian diplomat Mohammad Mehdi Akhondzadeh marked the first face-to-face contact between the Obama administration and the government of Iran.

    The unplanned encounter took place at The Hague, Netherlands, during a pause at a conference devoted to Afghanistan. Although nothing of substance was discussed, the two promised to stay in touch, according to Secretary of State Hillary Rodham Clinton.

    The U.S. also handed the Iranian delegation a letter requesting its intercession in the case of three American citizens (private investigator Robert Levinson, freelance journalist Roxana Saberi and student Esha Momeni) being held in Iran.

    For background, see the following CQ Researcher report [subscription required]: "U.S. Policy on Iran" (Nov. 16, 2007). Or purchase the CQ Researcher PDF.

    Extreme Sports

    Are they too dangerous?

    By Marcia Clemmitt, April 3, 2009

    The wild world of so-called extreme sports ranges from motorcyclists executing double back flips to kayakers navigating deadly Class 5 rapids to mixed martial arts (MMA) — also known as “ultimate fighting” — where combatants use kicks, punches and stress holds. But many “extreme” athletes reject the label, arguing that the term marginalizes their sports as the sole province of adrenaline and violence junkies, when they actually require high degrees of skill. Now legislatures in New York and other states are considering bans on MMA. Proponents say the matches, legal at the pro level in 37 states, are safer than boxing and emphasize fighters' broad-based martial-arts training. But opponents argue that allowing such a wide variety of aggressive moves in a single fight is barbaric. However, skateboarders and other extreme athletes cite statistics showing that traditional sports such as boxing and football cause injuries and deaths at a higher rate than any of the extreme sports.

    The Issues:

    * Should “ultimate fighting” be banned?
    * Are extreme sports more dangerous than other sports?
    * Have media portrayals boosted action sports?

    To read the Overview of this week’s report, click here.
    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase a CQ Researcher PDF.

    Overview of the CQ Researcher Issue on "Extreme Sports"

    By Marcia Clemmitt, April 3, 2009

    Canadian teenager Dean Lewis’ biggest mistake may have been getting into the ring in Winnipeg with a more experienced fighter. Just 18, he had a lot to learn about the “extreme” sport of mixed martial arts (MMA), which allows combatants to use potentially deadly moves from kickboxing, jujitsu, sumo and other combat techniques. After a series of blows to his head and body, the young man collapsed in the ring with brain swelling and a severe concussion. As his lungs filled with blood, ringside doctors put a breathing tube down his throat; Lewis suffered several seizures on the way to the hospital.

    It was “the bloodiest fight I have ever seen live,” said Keith Grienke, who blogs about MMA at cageplay.com. An “illegal upkick to the nose” was the blow that ultimately felled Lewis, Grienke said.

    After recovering, Lewis said he wanted to start training again as soon as possible, but that isn’t going to happen. According to one of his trainers, Winnipeg MMA fighter Rodrigo Monduruca, Lewis “will never be able to fight again — ever.”

    MMA is the most controversial of the many so-called extreme sports that have vaulted onto the national stage in recent decades.

    While it is unquestionably the bloodiest, it is far from alone. Controversy also has dogged other extreme sports such as snowboarding, skateboarding, kayaking down waterfalls and BASE jumping — or parachuting from buildings, bridges and cliffs.

    Extreme sports are generally defined as individual rather than team-oriented activities that athletes essentially invent by coloring outside the lines of the traditional sport world, often by attempting extreme feats or performing in unusual venues.

    Critics argue that the sports are overly risky; that some, like skateboarding, damage property; and that many, like snowboarding, promote reckless, even thuggish, behavior. Moreover, they say spectators are attracted by the potential for severe injury and violence, and they scoff at the claim that the craving to watch “the bloodiest fight” is healthy. But many athletes say their events are mislabeled as “extreme,” preferring the term “action sports” for pursuits that they say are more about skills than thrills.

    “I . . . have a problem with it being a ‘sport’ just because someone defines it as a sport,” said Terri Mills, a planning commission member in West Valley City, Utah, where tighter MMA regulations are being considered out of concern that bouts may encourage brawls or other violence among spectators.

    MMA’s skyrocketing popularity means that if states don’t regulate it, illegal — and potentially far more dangerous — bouts will proliferate, says Bernie Profato, executive director of the Ohio Athletic Commission, which regulates MMA in the state. The public is drawn to many sports, including NASCAR, because of a craving to witness risk and violence, and unregulated fights are the dangerous ones, says Profato. Before Ohio regulated MMA, unregulated fights occurred all over the state, but since 2005, when MMA became legalized and regulated in Ohio, the state hasn’t had a single unregulated event, he says. Regulated events require certified ring doctors, ban certain tactics and take other precautions.

    Once considered a niche market, action sports are attracting growing interest from the biggest media names. The CBS television network raised eyebrows in 2008 when it announced plans to periodically broadcast MMA matches in prime time on Saturday nights. Also last year, NBC television expanded its action-sports broadcasts beyond competitions to include “lifestyle” coverage of top athletes, and ESPN launched a cluster of online sites to offer up-to-the-minute coverage of action sports from BMX to “freeskiing.”

    The public’s rising interest in action sports is fueled in part by advertisers and TV sports channels, which choose the most “extreme” images to sell products ranging from athletic shoes to soda.

    They focus on “the adrenaline rush because that’s what sells to the couch potato,” says Dale Stuart, a clinical psychologist in Torrance, Calif., who is a seven-time world champion in freestyle skydiving, in which gymnastics tricks are performed during free fall.

    “People are basically afraid of the unknown, and when they think about skydiving, for example, they think, ‘I’m going to be up there with no idea what to do,’ ” she says. They don’t realize that before jumping “divers spend a whole day with a coach,” learning what to do.

    While risk is certainly an element in sports like skydiving, practitioners are “usually success-oriented people, aware of the risks and making conscious, thoughtful decisions about what they do,” Stuart says.

    Action athletes generally do have “thrill-seeking” personalities, says Frank Farley, a professor of psychology at Philadelphia’s Temple University, who coined the term Type T to describe such people. But many Type Ts are the opposite of reckless, Farley says. “Those who live prepare” for their sports, he says. “They don’t want to die. They want the challenges, the creativity, the risky experiences. So they prepare.”

    Type Ts do push the envelope, not just in sports but in science and the arts, Farley says. And while some Type Ts push it in negative ways, such as by taking drugs or committing crimes, many others are society’s creators and innovators. “If we didn’t have these people, we’d be back in the cave,” Farley says.

    Snowboarding has furnished many an extreme image to marketers, but that’s far from the full picture, according to Holly Thorpe, a snowboarder and lecturer in sport and leisure studies at the University of Waikato in New Zealand. There are more than 18.5 million snowboarders worldwide, ranging in age from 5 to 75, making “the notion of snowboarding as extreme [seem] obtuse,” she wrote.

    Nevertheless, “with more than 75 percent of American snowboarders under the age of 24 and males constituting approximately 70 percent of all boarders, it is no wonder that stereotypes continue to abound,” Thorpe acknowledged.

    Action sports like freestyle motocross — in which motorcycle riders do jumps and acrobatics — and whitewater kayaking do pose dangers, participants acknowledge.

    While preparation is vital to most action sports, the situation will always throw unforeseen risk into the mix, such as a slippery surface, says Farley. Motorcycle stunt riders like Robbie Knievel — son of the legendary Evel Knievel — “practice over and over again, but at the moment of takeoff it’s always risky,” he says. “Very competent people die all the time doing things like climbing Mt. Everest.”

    Freestyle skiing — skiing off an icy ramp in order to perform airborne acrobatics — was banned as a competitive sport in the 1960s and ’70s because athletes falling from a five-story height while twisting and turning risked serious injury. But “this did not deter . . . athletes from training and competing unofficially” or prevent media coverage, said Kenneth P. Burres, a back surgeon in Montclair, Calif. Today, it’s an Olympic event, and standardized venues and equipment have reduced the injury rate, though high risk remains, he wrote.

    “The fact that they’re called ‘extreme’ sports means, if you make a mistake, you die,” says Ron Watters, an adjunct professor of outdoor education at Idaho State University. In extreme climbing, for example, “the edge that you walk between life and death is a knife edge,” he says. Casual spectators can be tempted into danger if they fail to realize that the people who make these highly entertaining DVDs of high-risk activities “did not just start doing the sport yesterday. They started out with teachers,” he says.

    While media images of many action sports depict rugged individualism, teamwork is actually the order of the day and helps make sports safer, says Jay Young, a West Virginia-based writer and rock climber who runs the Web site rockclimbing.com. “It’s very common for a person to fall off a rock but very uncommon for them to hit the ground” because most people climb in groups, literally tied together, he says.

    Progressing from marathon running to the even more extreme Ironman triathlon — a 2.4-mile swim, followed by a 112-mile bike ride and a 26.2-mile marathon — “I actually find myself healthier,” says Taneen L. Carvell, president of a Washington, D.C., marketing firm, who completed her first Ironman nine days after turning 40. The multiple skills required of an Ironman athlete demand that “you know your body” and cross-train in a more balanced way than runners often do, she says. “You’re out there for 12 or 13 hours, not three,” as in a marathon, so you must be thoroughly healthy, she says.

    “One of the nice things” about action sports like skateboarding “is that each kid can express himself at his own level,” without needing top skills just to stay in the game as a teenager, as is the case with many traditional sports like baseball, says John R. Ricciardi, Jr., president and founder of the New Jersey-based Action Sports Association, a nonprofit that aims to expand corporate, government and parental support for action sports. Most action sports “are healthy lifestyle habits” that one can pursue for decades, he says. Furthermore, “a lot of kids have problems at home, and these sports can be their salvation; my kids used the skateboard to work through their problems,” like their parents’ divorce, says Ricciardi.

    Skateboarding “does cause minor property damage,” but that’s far from its essence, says Ocean Howell, a former professional skateboarder who is a writer and a graduate student in architectural history at the University of California, Berkeley. “If you see a kid going down the street, you may think he’s a thuggy jerk, but if he’s jumping on and off the curbs on a skateboard, making it look smooth, then that kid has a tremendous work ethic,” Howell says. “That kid is practicing an art form, and has an interest in doing something right.”

    To view the entire report, log in to CQ Researcher Online [subscription required], or purchase a CQ Researcher PDF.