Saturday, January 29, 2011

A Berlin Wall Moment in the Middle East?

On December 17, 2010, Mohamed Bouazizi, a 26-year-old fruit seller in the Tunisian city of Sidi Bouzid, stood in front of the local governor’s office and set himself on fire after pouring a can of gasoline over his head. He died of his burns on January 4. While Bouazizi’s immolation was startling, his despair was not. News of his suicide spread quickly, mostly by Facebook, setting off a month of civic demonstrations by secular working- and middle-class Arabs protesting the country’s corruption and its dictator, Zine el-Abidine Ben Ali.

On January 14, Ben Ali, the leader of Tunisia for more than two decades, fled to Saudi Arabia with his wife by private jet, leaving fist-shaking mobs behind in the capital city, Tunis. The toppling of Ben Ali sparked hopes that the liberation of this North African country of 10 million would set off democracy movements in other countries of the region whose sclerotic dictatorships are every bit as oppressive as his was.

Was Tunisia the Berlin Wall moment for the Middle East?

The “statehood” of most countries in the Middle East was invented by drawing lines on a map after World War I to divide feuding nomadic and ancestral tribal lands. Tunisia, by contrast, is a state with ancient origins that predate the Roman Empire. Carthage, in what is now Tunisia, was founded around 800 B.C. – one of several stopover points along the Tunisian coast established by seafaring Phoenicians who put ashore every night. It became the strongest of the Punic (Phoenician) settlements and thus the capital of a North African empire.

"Africa" was originally a Roman term that meant Carthage (Tunisia) long before it meant anything else. Jutting out of North Africa into the Mediterranean across from Sicily, Carthage was strategically located to control all of the western Mediterranean and thus became a vast commercial power. As the power of Rome grew, the Carthaginians and Romans were drawn into three wars, known as the Punic Wars, which were fought during the century between 264 B.C. and 146 B.C. These ended with the defeat and destruction of Carthage.

When the dominance of Rome came to its end, Carthage-Tunisia remained the gateway to North Africa first under the Vandals in the 5th century, then the Byzantines, the medieval Arabs, followed by the Turks, and finally the French in the 19th century.

After the Roman general Scipio defeated Hannibal in 202 B.C. outside the modern-day city of Tunis, he dug a demarcation ditch, or fossa regia, that marked the extent of civilized territory in North Africa. It was reminiscent of the legendary Pillars of Hercules, whose inscription, Non Plus Ultra (nothing more beyond), served as a warning to sailors and navigators to go no further.

The fossa regia retains a similar relevance today. Its visible remains run southward from Tabarka on Tunisia’s northwestern coast and then turn directly eastward to Sfax, another Mediterranean port. The towns beyond the fossa regia have fewer Roman remains and today tend to be poorer and less developed, with historically higher rates of unemployment.

The town of Sidi Bouzid, where Bouazizi immolated himself, lies 130 miles south of ancient Carthage and just beyond Scipio’s fossa regia.

When Tunisia won independence from France in 1956, Habib Bourguiba became the Arab world’s version of Ataturk and the embodiment of the modern Tunisian state. He was the country’s first president, or more accurately, its fierce secular dictator for the next three decades. Though he was a Muslim, Bourguiba rejected militant Islam. He was an advocate for normalizing Arab-Israeli relations a decade before Anwar Sadat of Egypt went to Jerusalem. He gave women the right to vote, scrapped polygamy, and cracked down on wearing the veil. He devoted public resources to women’s education, birth control, and elementary education rather than to his personal aggrandizement or large expenditures on building projects and an army. In fact, Bourguiba made sure the army remained apolitical and small, a legacy that kept the recent overthrow from being bloodier.

In one of a series of internal crises, his interior minister, Zine el-Abidine Ben Ali, saw an opportunity to usurp power, and in 1987 he had Bourguiba declared too infirm to rule so that he could depose him in a bloodless coup. But Ben Ali was not the measure of the man he replaced. He lacked a vision for Tunisia other than to plunder it, little different from the regime of Egyptian leader Hosni Mubarak. Ben Ali’s primary goal was to maintain internal order, which he accomplished by killing and torturing Islamists and other dissidents.

In fairness to Ben Ali, it could be said that he presided over an economy that grew at an annual rate of about 5%, due largely to tourism. A middle class developed. Social progress during his 23-year dictatorship extended beyond the fossa regia – somewhat.

But in Ben Ali’s Tunisia, virtually all business activities were under the supervision of the regime. That included the vegetables young Bouazizi was trying to sell before he made himself a human torch. Business permits were sold to poor people to raise the revenue that income taxes and sales taxes failed to supply, and Bouazizi didn’t have one.

Although he had a college education, Bouazizi, like one in three college-educated Tunisians, couldn’t find a job. Selling fruits and vegetables illegally – i.e., without a permit – at least provided him some money for his widowed mother and family. But when the police confiscated his produce, they essentially denied him access to any livelihood. His suicide was his last act of freedom.

Early this month, Hillary Clinton lectured a group of Arab leaders, telling them that many Arabs on the street had “grown tired of corrupt institutions and a stagnant political order.” She didn’t reveal how she knew this but went on to say, “If leaders don’t offer a positive vision and give young people meaningful ways to contribute, others will fill this vacuum.”

Before its government collapsed, Tunisia was failing not because it neglected to “offer” its people opportunities, as Clinton so confidently suggested; it was failing because it denied them opportunities, as in the case of Bouazizi. Excluding poor people from Tunisia’s market economy because they didn’t have connections or the proper permits was the modus operandi of Ben Ali’s “shakedown” government, little different from a mob’s protection racket. If people got desperate enough, they would find the money – or steal it.

When WikiLeaks recently published thousands of embarrassing confidential cables from America’s ambassadors around the world, among them were several from Robert F. Godec, our ambassador to Tunisia. One report, now posted online, stated “Whether it’s cash, services, land, property, or yes, even your yacht, President Ben Ali’s family is rumored to covet it and reportedly gets what it wants.” Godec went on: “Corruption . . . is the problem everyone knows about, but no one can publicly acknowledge.”

The Marie Antoinette of Tunisia was the 74-year-old Ben Ali’s second wife, Leila Trabelsi, a gold-digging former hairdresser who is 20 years younger than her husband and seems to have been positioning herself to take over the family business. Allegedly, she stole a ton and a half of gold from the central bank, worth $56 million, and had it loaded on the private jet as she and Ben Ali were escaping to Riyadh last week.

Her greed knew no limits. She even introduced her children and relatives to her schemes, which had them living like millionaires (which they were) inside the fossa regia. They stole with abandon. Their avarice was so profane that even sycophants were criticizing the family, reportedly causing Ben Ali to call a family meeting and warn, “If you want money, at least do it with discretion.” During the 23 years Ben Ali ran the country, he and Trabelsi accumulated wealth for themselves estimated at $5.2 billion, reportedly deposited in French banks.

A WikiLeaks cable exposed the disbelief of Ambassador Godec after attending a dinner at the beachfront mansion of Leila Trabelsi’s son-in-law. Frozen yogurt had been flown in from St. Tropez, and the family pet was a tiger that consumed four live chickens and prime cuts of beef daily. Another cable described how one of Trabelsi’s relatives stole a $3 million yacht from a French businessman, had it repainted, and unashamedly anchored it just offshore from his beachfront house.

"Corruption in the inner circle is growing," Godec wrote to the US State Department. "Even average Tunisians are keenly aware of it. And the chorus of complaints is rising. Tunisians intensely dislike, even hate, Leila Trabelsi and her family . . . even those close to the government express dismay at her reported behavior."

As the excesses of the Ben Ali regime become public, we should be asking why our government has been backing people like Ben Ali and other corrupt leaders in the Middle East. In his State of the Union speech Tuesday night, after some undeserved breast-beating to take credit for the recent independence vote in south Sudan, Obama had this to say:

“And we saw that same desire to be free in Tunisia, where the will of the people proved more powerful than the writ of a dictator. And tonight, let us be clear: The United States of America stands with the people of Tunisia, and supports the democratic aspirations of all people. (Applause.)”

Yet before Ben Ali fled the country, Hillary Clinton was asked directly about the Tunisian street protests and America’s response. Her answer? "We can't take sides."

Here was a moment in history for Obama, whom some have called the smartest president in history, and Clinton, who has been called the smartest woman on earth, to seize what good fortune had dropped in their laps and say unequivocally that America supports self-determination, opposes extremism, and will not support corrupt governments and leaders. Doing so would have thrown support behind people throughout the Middle East and North Africa who could topple the regimes that are doing more to increase terrorism than a hundred bin Ladens.

Instead, Clinton and the president she represents chose to stay on the sidelines. That is, until it was safe to take sides, once Ben Ali was out of Tunisia.

I can’t imagine Ronald Reagan not taking the side of the East Germans who were pushing against the Iron Curtain and the Berlin Wall. It was Reagan, whom Obama says he admires, who said, “Mr. Gorbachev, tear down this wall!” – wasn’t it?

Bush 43 became an advocate of promoting democracy abroad after 9/11. That horrific event made it apparent that national security was as much about what went on inside of states as it was about the behavior of those states among the nations of the world.

Yet when Obama became president, he made it clear that he was cautious about promoting democracy throughout the world. He even went so far as to apologize to rogue states like Iran for America’s past meddling in their affairs. His approach has been to “appeal” to the leaders of Muslim countries and to “seek mutual understanding” rather than to support reform movements. He showed that intent only too well when he refused to voice support for the Iranians protesting the disputed 2009 reelection of Mahmoud Ahmadinejad. The absence of a strong American voice virtually assured that the arrested protesters would be held without trial, abused, and denied basic civil rights. Absent outside pressure, the Iranian mullahs and Muslim leaders like Egypt’s 82-year-old President Mubarak have become more oppressive.

We are entering a time when the citizens of an authoritarian regime have a weapon that offsets the power of a police state – social media. The American and French revolutions would not have been possible without the printing press and the political papers circulating in the late 1700s – the social media of their day. The Cold War was at least partly won because the West spent billions to broadcast the Voice of America and to smuggle photocopiers and fax machines into Soviet-bloc countries. As people networked and came to understand how fragile the Soviet regime was, economic failures could no longer be hidden and change became inevitable.

Today, social media empower angry activists to discover that other people are angry too, and to move those people to take action. Eighteen percent of Tunisians have a Facebook account, and even more have other ways to message each other. To say how much this helped them overthrow a dictator in a popular uprising would be speculation, but their protest quickly went viral despite Ben Ali’s attempt to unplug them. The fact that they were able to quickly use proxy servers to subvert his efforts suggests that they had been thinking about their communication networks for a long time.

The Tunisian uprising has now spread to Egypt, where protests calling for Mubarak’s ouster have been going on for five days, and to Yemen, where calls for the resignation of Ali Abdullah Saleh are in their third day. The demonstrations seem to have started among the Internet-savvy middle class, but since Arabic language cable television makes the Middle East a virtual community, protests are seen by a broader audience than social media reach, emboldening more people to get involved.

The feckless foreign policy promoted by the US State Department for decades supported dictators on the theory that they were the best defense against radical Islam. The fact that these dictators crushed all dissent, even reasonable outrage against their abuses and unequal access to economic opportunities, was ignored. It was just the cost of doing business with tyrants. Those chickens seem to be coming home to roost. "What started in Tunisia is not over," shouted a crowd on a Yemeni university campus. They are probably right. Whether the Middle East is on the cusp of a Berlin Wall moment or a 1979 Shah of Iran moment is the question.

A blogger in Tunisia seemed to have caught the spirit of the moment: “It actually happened in my lifetime,” he said. “An Arab nation woke up and said ‘enough.’”

It’s now up to Obama to decide if he wants to be a player or a spectator.

Saturday, January 22, 2011

Bertie and David

They should have had a close relationship, raised as they were under the domineering hand of an unloving father and failing to find nurture from their chilly mother. Yet they detested each other.

They were princes and brothers – two of the four sons of King George V. Despite the shortcomings of their parents, David grew into an attractive and capable young man – blond, athletic, self-assured. As the eldest son, he was born to be a king and he looked the part.

Bertie was the opposite. Because he was knock-kneed, his father compelled him to wear leg braces at the age of eight. Though he was naturally left-handed, his father insisted that he write with his right hand. He suffered from chronic stomach problems, no doubt stress-related. Perhaps also due to the psychological stresses of his childhood, he had a relentless stammer that infuriated his father and made him an object of sibling ridicule.

It didn’t take David long to discover how attractive he was to women. In 1916 two of the prince’s attendants packed him off to France and left him in the arms of a prostitute. The next year he spent three days in bed with a Parisian woman named Maggy.

Back in England David continued to have affairs – mostly with older women, mostly the daughters of dukes, and mostly married women. The notable names of his liaisons included Viscountess Coke, 12 years his senior, Lady Sybil Cadogan, Lady Cynthia Hamilton, Lady Diana Manners, Lady Rachel Cavendish and Lady Rosemary Leveson-Gower.

The prince’s addictive womanizing continued into the 1920s and 1930s and caused much hand-wringing by his father as well as by Prime Minister Stanley Baldwin and others concerned for his future as king. David’s private secretary believed that "for some hereditary or physiological reason his normal mental development stopped dead when he reached adolescence."

The king grew disgusted with his son’s affairs and, at the same time, was concerned about his fitness to be king. "After I am dead," he predicted, "the boy will ruin himself in 12 months." King George V could not have known how prescient his words would be.

In contrast to his brother, Bertie seemed to have no attraction to women. The royal photographer, Sir Cecil Beaton, would later record that Bertie “was a backward young man and the courtiers were beginning to worry.” Since he showed none of a young man’s usual interest in women, it was the custom of the time that “some trustworthy young woman could be chosen to initiate the young Prince into the rites of sex.”

Beaton believed that the chosen lady was the revue actress Phyllis Monkman, almost four years Bertie's senior, with whom Bertie was alleged to have dined privately in rooms in Half Moon Street, Mayfair. But months before, the 22-year-old prince had slipped away from the British Embassy in Paris and spent the night with an unnamed French girl – or so he claimed to his brother David.

Two years later, Prince Bertie became infatuated with a 19-year-old singer, Evelyn “Boo” Laye, who was appearing at the London Palladium. She was a remarkable beauty who would become one of Britain’s most celebrated stars. In her old age she would recall her first meeting with Bertie:

“He came backstage clutching the most beautiful bouquet and he paid me the loveliest compliment I have ever received. ‘Miss Laye,' he said, and he struggled to get the words out because of that cruel stammer, 'I would really like to invite you out to supper, but if I did that, there would be gossip and publicity. Your people wouldn't like that and neither would mine.'”

It never went beyond that with Bertie and “Boo.” Yet the future wife of the prince knew he carried a life-long crush on her, and when they attended her shows, the queen would elbow him teasingly and say, “Look, Bertie, here comes your girl friend.”

Learning of Bertie’s deep feelings for Evelyn Laye, the royal family realized how emotionally immature and vulnerable he was and decided it was time for him to marry. He was introduced to the Lady Elizabeth Bowes-Lyon, daughter of the Earl and Countess of Strathmore. Elizabeth was a descendant of King Robert the Bruce (Robert I of Scotland) and King Henry VII of England.

But Elizabeth had no interest in marrying into the Royal Family and had even less interest in Bertie because, at the time, she was in love with the prince’s womanizing equerry, Captain the Honorable James Stuart. It would take Bertie two-and-a-half years of patient courtship, several rejections, and the intervention of Queen Mary, who removed Stuart to the oilfields of Oklahoma, before Elizabeth finally consented to marriage.

Elizabeth’s influence over Bertie was almost hypnotic. Though he had continued to see Evelyn Laye regularly, when he and Elizabeth married, Laye was delighted with his choice. “He needed to marry a strong and confident wife,” she said in later years. “Thank God for him, and for the country, that he found the right girl.”

In 1930, King George gave David, the older brother, a country home. There he carried on a series of relationships with married women including textile heiress Freda Dudley Ward and Lady Furness. It was Furness who introduced Prince David to an American woman, Wallis Simpson, the woman who would change David’s life forever. Simpson was divorced from her first husband and at the time of her introduction to the prince, was still married to her second husband, Ernest Simpson, a half-British, half-American businessman. While Lady Furness was traveling abroad, Simpson became David’s mistress, ousting Lady Furness and Freda Ward from his life.

Despite their having been discovered in bed together by David’s staff, David denied to his father that he and Simpson were lovers. But it soon became evident that the Prince was under her sway. Simpson dominated David and was irreverent toward his royalty. He vacationed openly with her in Europe and began neglecting his official duties, causing his father to say of his sons and granddaughter Elizabeth ("Lilibet"): "I pray to God that my eldest son will never marry and have children, and that nothing will come between Bertie and Lilibet and the throne."

On January 20, 1936, King George V died, and David ascended the throne, taking the name King Edward VIII. Since he was not married and had no heirs, Bertie would be the heir presumptive to the throne until David had children. In November of that year, the new King invited Prime Minister Stanley Baldwin to Buckingham Palace to discuss his intention to marry Wallis Simpson. Baldwin told him that the English people would not accept the King’s marriage to a woman with two living husbands, not only because the Church of England opposed divorce, but also because, as King, he was the head of the Church and its protector. The king’s options were to leave Simpson, or marry her and cause a constitutional crisis, or abdicate.

King Edward had been a modern-day Esau throughout his adult life. So it was not surprising that on the night of December 11, 1936, in a broadcast to the nation and the Empire, he announced his decision to abdicate the throne, explaining, "I have found it impossible to carry the heavy burden of responsibility and to discharge my duties as king as I would wish to do without the help and support of the woman I love." He had reigned only 325 days, one of the shortest reigns in British and Commonwealth history. He was never crowned.

Next in line to the throne, Bertie would become king. With all of his psychological baggage – a self-confidence crushed by his father and a terrifying stammer – being King of the British Empire was a responsibility from which he shrank.

It is unlikely that Bertie, now King George VI, could have risen to the task without the strong hand of his wife, Elizabeth. She had seen him weep on his mother’s shoulder when he learned he would be king. She knew that a secret report had been written concerning Bertie’s fitness to rule. She also knew that, because of doubts about Bertie, a plan was contrived to make Queen Mary the Regent, setting the stage for Bertie’s youngest brother to become King. But the private Elizabeth was not the smiling, jocular public Elizabeth. She seethed with anger at Edward’s frivolous abdication. But once he was gone, she was determined that Bertie would be king.

Bertie himself knew that, because of the concerns about his qualities as a national leader, he was not a shoo-in to inherit the throne. That only compounded his humiliation. Then there was the emerging medium of radio, which would compel the new king to speak to his people far more often than public appearances alone ever had. And there would still be numerous public speeches. The thought of standing before those seen and unseen audiences, handicapped by a stutter, was terrifying.

The coronation of George VI took place on May 12, 1937, the date previously intended for Edward's coronation. Despite Bertie’s misgivings, the coronation was followed by a live radio broadcast that evening that was heard by tens of millions of people across the Empire. It proved a resounding success. He barely stumbled over his words. "The King's voice last night was strong and deep, resembling to a startling degree the voice of his father," reported The Star. "His words came through firmly, clearly – and without hesitation."

The success of the king’s speech was due largely to one man: a failed actor and self-taught Australian speech therapist 15 years the King’s senior, named Lionel Logue. Bertie first consulted him about his stammer in 1926; by the time of the coronation speech, Logue had been working with Bertie for over ten years. After his initial meeting with the royal patient, Logue had written an assessment in spidery handwriting on a small card which survives today:

“Mental: Quite Normal, has an acute nervous tension which has been brought on by the defect. Physical: well built, with good shoulders but waist line very flabby.”

His prescription for the future King was a mixture of breathing exercises and some fiendish tongue twisters, combined with a form of Freudian talking therapy – including some unkingly profanity.

A brewer’s son, Logue was an unorthodox choice to minister to a king. His technique grew out of his work with speech-impaired veterans of World War I in Australia. Because he had no medical or professional credentials, however, he was dismissed as a quack by the British medical establishment. Still, he helped his royal patient conquer his speech impediment and, more importantly, helped Bertie overcome his feelings of inferiority and inadequacy, making it possible for the young king to become a greater monarch than the modern kings who had preceded him. Together, he and Elizabeth would become the embodiment of English resolve during the darkest days of the Second World War.

After Bertie became King, Logue’s relationship with him came into its own. The two men became friends – as much as is possible between king and commoner – and remained so until Bertie’s death. Out of his gratitude to Logue, King George inducted him into the Royal Victorian Order, appointing him to be a Member (MVO) in 1937. Logue wore the MVO badge as he sat in the apse as the king delivered his coronation speech. Later, in 1944, the king elevated him to Commander (CVO) of the Victorian Order. The Order rewards personal service to the sovereign and admission to it is the personal gift of the monarch.

When the King spoke to the Empire on the evening of September 3, 1939, the day Britain declared war on Germany, Logue had rehearsed the speech with Bertie several times, striking out difficult words from the text. Logue was beside the king in the room from which the speech was broadcast at Buckingham Palace. The king began:

"In this grave hour, perhaps the most fateful in our history, I send to every household of my peoples, both at home and overseas, this message, spoken with the same depth of feeling for each one of you as if I were able to cross your threshold and speak to you myself."

As the red light faded on the broadcast lectern, Logue turned to him and said, "Congratulations on your first wartime speech." Bertie, relieved his ordeal was over, said simply: "I expect I will have to do a lot more."

The story of the king’s speech is delightfully told in a film of the same name that has been in theaters since December. My wife and I saw it last weekend, and I can say it hews closer to historical accuracy than most historical dramas. The King’s Speech has Colin Firth in the role of Bertie and Geoffrey Rush as Logue. I suspect both may win Oscars for their performances; I can’t think of a better way to spend a couple of leisurely hours than watching them perform their roles.

King George died in 1952 at the all-too-young age of 56. He had been a heavy smoker and developed lung cancer. Logue wrote Queen Elizabeth to offer his condolences. She was now beginning what would be half a century of life as a widow and the Queen Mother. She replied to Logue with gracious thanks:

“I know perhaps better than anyone just how much you helped the King, not only with his speech, but through that his whole life and outlook on life. I shall always be deeply grateful to you for all you did for him."

Saturday, January 15, 2011

Death in Arizona

Last Saturday afternoon I was putting the finishing touches on that week’s blog when the first fragments of a news story came over the Internet. There had been a shooting at an Arizona shopping center. Several people had been killed – perhaps a US Representative among them.

Over the next several hours, the pieces began to fall into place. The shooter was Jared Loughner, a 22-year-old described as a loner by the few people who knew him. At an event called "Congress on Your Corner," sponsored by Representative Gabrielle “Gabby” Giffords, an ordinary Saturday had brought together ordinary people, most of whom did not know each other. Loughner had indiscriminately wounded 14 of them, including the Congresswoman – his apparent target. Six others had been killed, among them a 9-year-old girl who wanted to meet a real politician, and US District Judge John M. Roll, a friend of Giffords, who had just attended Catholic mass and decided on the spur of the moment to stop by. Apart from a Giffords aide who was killed, the others killed and wounded had simply chosen to visit the event and were at the wrong place at the wrong time.

Yet, within two hours of the shooting, Paul Krugman of the New York Times felt compelled to launch a demagogic conspiracy theory by posting these words on his newspaper website:

“A Democratic Congresswoman has been shot in the head; another dozen were also shot.

“We don’t have proof yet that this was political, but the odds are that it was. She’s been the target of violence before. And for those wondering why a Blue Dog Democrat, the kind Republicans might be able to work with, might be a target, the answer is that she’s a Democrat who survived what was otherwise a GOP sweep in Arizona, precisely because the Republicans nominated a Tea Party activist. (Her father says that “the whole Tea Party” was her enemy.) And yes, she was on Sarah Palin’s infamous 'crosshairs' list.”

The image of Jared Loughner began to fill in. Here was a paranoid, delusional anti-Semitic racist whose YouTube rants about Adolf Hitler and Karl Marx’s Communist Manifesto were entwined with the occult and an obsession with how the government used grammar to control people’s minds. He was an abuser of alcohol and marijuana with reading interests that included Animal Farm, Peter Pan, and To Kill a Mockingbird, and he had a preoccupation with gold-standard currency. After a 2007 encounter with Giffords, in which Loughner had asked her a question and was apparently put down by her answer, he had stalked her and threatened her on at least one occasion.

Loughner had been thrown out of a local community college, where as a student he had frequently disrupted classes. He behaved bizarrely around fellow students, frightening most of them. One of his college classmates was so concerned about his instability that she sat by the classroom door so she could make a fast exit if he became erratic and dangerous. A former high school classmate remembers him as a left-wing kook, yet what is now known makes it doubtful that his disordered mind was capable of holding a coherent political ideology of the left or the right. The videos and writings he left behind reveal a man losing control of his mind.

As the picture of Jared Loughner began to emerge, Krugman could have admitted that he had gotten ahead of the story, and he should have retracted his earlier comments. Instead he published another screed on Sunday under the headline “A Climate of Hate,” in which he offered this curious logic:

“It’s true that the shooter in Arizona appears to have been mentally troubled. But that doesn’t mean that his act can or should be treated as an isolated event, having nothing to do with the national climate.

“Last spring Politico.com reported on a surge in threats against members of Congress, which were already up by 300 percent. A number of the people making those threats had a history of mental illness — but something about the current state of America has been causing far more disturbed people than before to act out their illness by threatening, or actually engaging in, political violence.”

Note Krugman’s use of the word “but.” In each of the two statements, he says something that is essentially factual, follows it with the word “but,” and then denies its relevance. Why then state the fact at all? Why not just come right out and state an unfounded and outrageously slanderous opinion that ties a violent act to a political ideology with which Krugman disagrees?

Christiane Amanpour, whose “This Week” program on ABC is plumbing new depths in the ratings, made a similar scurrilous connection by saying, “…But in fact the suspect in custody, 22 year old Jared Loughner, has no known ties to the tea party or any conservative group.” What if I were to say, “…but in fact Christiane Amanpour is not known to have committed tax fraud”? Am I not implying that there is smoke but the fire has yet to be discovered?

Not to miss his 15 minutes of fame, Pima County Sheriff Clarence Dupnik postured before the national television cameras with his psychoanalysis of the shooting. Blaming “the vitriol that comes out of certain mouths about tearing down the government,” Dupnik called Arizona “the Mecca for prejudice and bigotry” – evidently because it believes in enforcing immigration laws. Last spring he got an extra 15 minutes of fame by announcing that he would refuse to enforce the new Arizona immigration law and by flamboyantly accusing immigration enforcement supporters of racism.

When interviewing him, Fox reporter Megyn Kelly forced Dupnik to admit that he had absolutely no evidence of any connection between the allegedly vitriolic political rhetoric and the shooting. Pressed as to whether he had evidence to support the connection, he had to admit, “It’s just my opinion, period. I don’t have that information yet.”

Krugman, Amanpour, and Dupnik weren’t the only ones trying to make a connection where there wasn’t one. Much of the mainstream media blamed the pre-election rhetoric of Glenn Beck, Rush Limbaugh, and Sarah Palin for inflaming the passions of the kook class. Palin was especially singled out for having put bull’s eyes on a map of targeted districts she wanted to win for Republican candidates. Giffords’s district was one of them. Yet there have been no shootings of the other Democrat candidates who won in districts on Palin’s bull’s-eye map.

Not to be outdone when there’s blame to be heaped, Hillary Clinton, who is in the Middle East this week, piled on with this bit of moral equivalence: "We have extremists in our country," Clinton said. "A wonderful and incredibly brave young woman Congress member was just shot by extremists in our country. We have the same kinds of problems [as citizens of Middle Eastern countries]." “Just shot by extremists”? As in more than one?

Apart from the revulsion of watching a US Secretary of State join her country’s critics rather than stand up for its values (we already have a President who does a pretty good job of that), Clinton was suggesting to her audience in Dubai that there’s no real difference between a lone psychopath who shot a Congresswoman and others in Tucson and the scourge of Islamic terrorism around the world. Terror groups are led by perfectly sane leaders who believe they are acting out a religious mandate by killing innocent people (maybe that’s a form of insanity), but otherwise they and their followers are little different from organized criminals. The Tucson shooter, on the other hand, is a nut case who probably should have been institutionalized and on medication – hardly an extremist, as Clinton claimed.

Undoubtedly Congress will now spend the next several weeks on hearings for gun control, high capacity ammo clips, a reinstatement of the “Fairness Doctrine,” security for Congress that will rival the President’s, and criminalizing the placement of a bull’s eye on maps of Congressional districts. Not a scintilla of thought will be given to the constitutionality of these proposals. And after they have been milked for all their political content, Congress will hopefully get back to the People’s business, dealing with out of control spending and the economy.

So, what is to be learned from this tragedy and the opéra bouffe parading in the disguise of objective reporting?

The first lesson is that there is nothing to be learned from this about guns, public security, or politics in general, so long as we want to live in a free and democratic society. Lunatics strike without plan or expectation in the same way that tires go flat and lightning strikes. To suggest that anything can be done to prevent setting off a ticking human time bomb in an open society is absurd. Where would we start? Banning Peter Pan? How about Animal Farm? Or maybe we should expunge all historic references to Hitler?

Every time I go out in public, I see what I consider to be weird behavior. If the elimination of weirdness is the way to make society safer, a lot of people are going to be under surveillance – maybe you and me! I’m reminded of a statement by the 19th century utopian reformer, Robert Owen, who said to his business partner, "All the world is queer save thee and me, and even thou art a little queer."

There will always be more than a few Timothy McVeighs, Sirhan Sirhans, and Lee Harvey Oswalds with screwy grievances, and in a democratic society they can act on those grievances. Of course we could overreact and create another government agency like the “grasp and gape” TSA. But anyone dedicated to causing trouble will find a way to penetrate the level of security that free citizens are willing to tolerate, and I for one am opposed to giving up more of my freedom because of the actions of a few nut cases. In some respects we are already bordering on a police state.

Moreover, any call to cool "inflammatory" speech is a call to police all speech. In his rambling, incoherent interview after the shooting, Representative James Clyburn (D-SC) called for the reinstatement of the FCC “Fairness Doctrine,” which is little more than the suppression of the free speech rights of a talk radio ideology that has a more popular following than its opponents’. How about we cool down Florida Democrat Alan Grayson, who condemned his opponent in the recent election as a “religious fanatic” and called him “Taliban Dan Webster”? He also said Republicans wanted old people to die quickly. Is that the kind of “climate of hate” speech that should be regulated? However, there isn’t anyone, least of all in government, whom I would trust with that power. Smart people aren’t taken in by demagogues. And troubled people listen to their own inner voices, not those of others.

Look for the Arizona shooting to resurrect the old canard that guns kill people. This will be followed by calls to restrict gun sales and ammo clip capacity. But as we’ve heard gun rights advocates say ad nauseam for years, guns don’t kill people; people kill people. I’m good with keeping guns out of the hands of dangerous people. Define dangerous.

The second lesson is not to expect to find dots to connect in this tragedy, as so many are trying to do. People are looking for motives in a senseless act of violence. Because it is senseless, it has no cause. Who is to blame for this tragedy? Jared Loughner. The fact that others saw troubling things in his behavior does not make them culpable for what he did. Even his parents say they are surprised that he was capable of this unspeakable act.

If not who is to blame for this, then what is to blame? I’ve already addressed this. What is to blame is our culture. It is free, open, and tolerant. The spirit of America doesn’t permit people to be seized or monitored simply because they are creepy.

The third lesson: there is evil in this world. I saw one face of it when Loughner’s shaven-headed mug shot appeared, smirking, on my television Monday night. Violence of his type is usually random, not calculated. Loughner is obviously a mentally disturbed person who targeted Congresswoman Giffords because she was prominent in his mind. The others who were killed and wounded were, tragically, simply there when it happened.

Because I know there is evil in this world and because I believe it can strike randomly and without warning, I am always acutely aware of my surroundings when I am in public, and I’ve encouraged my children and wife to be likewise.

I have had two threats on my life, one anonymous and the other from someone I knew. In the latter case, I complained to the police and FBI and was told they could do nothing until the threat “materialized.”

A number of years ago a man just released from a mental hospital shot up the food court of a local shopping mall, killing one and wounding four. I had just been in that food court.

Because of my experiences and instances like the Arizona shooting I dislike being in crowds. I avoid public gatherings, like political rallies, that might attract an unstable person with a grievance. When I am in a crowd of people, I pay attention to everyone, especially watching facial expressions and body language. If I see a hint that something is out of the ordinary, I get out of that place quickly. If I’m in a restaurant, I sit so I can see who comes in the front door. When I go into an unfamiliar building, I look for ways to get out if necessary.

Because we live in an open society, we should be watchful – always.

There are two books I recommend that will make us all more alert to our surroundings and therefore safer: Amanda Ripley’s The Unthinkable: Who Survives When Disaster Strikes and Why and Gavin de Becker’s The Gift of Fear and Other Survival Signals that Protect Us from Violence.

A little paranoia could save your life.

Saturday, January 8, 2011

Our Amazingly Elastic Commerce Clause

In 1807 Robert Fulton built the first commercially successful steamboat. Because his partner, Robert Livingston, was a member of a prominent New York family, the pair was able to get the New York legislature to grant their company a lucrative monopoly to operate in New York waters. In time they licensed Aaron Ogden to operate under their monopoly, and by 1812 a fleet of six steamboats plied the waters over which New York claimed jurisdiction.

The success of the venture did not escape the notice of potential competitors, one of whom was Thomas Gibbons of New Jersey. Gibbons obtained a license from the US Congress under the Coasting Act of 1793 to put a competing steamboat on a run between New York City and Elizabethtown, New Jersey.

Ogden sued in the New York Chancery Court, contending that the states held power “concurrent” with that of the US Congress over interstate commerce, and asked that his monopoly be upheld. When that court complied and issued an injunction restricting Gibbons from operating in waters under New York’s jurisdiction, Gibbons appealed to the US Supreme Court.

The year was 1824. Gibbons v Ogden gave the Supreme Court its first case challenging the Commerce Clause provision of enumerated powers in Article I, Section 8 of the Constitution:

“[Congress shall have Power] … to regulate Commerce with foreign Nations, and among the several States, and with the Indian tribes;”

Gibbons' lawyer, Daniel Webster, (yes, that Daniel Webster) argued that Congress had “exclusive” (i.e. not concurrent) power over interstate commerce nationally under Article I, and that to conclude otherwise could result in contradictory, and thus confusing, federal and state regulations.

Under Chief Justice John Marshall, the Court had to decide the Article I meaning of “to regulate,” “commerce,” and “among the several states” in the context of this case as set forth in Clause 3 of the enumerated powers. After deliberating, the Court decided that the power “to regulate commerce” was exclusive to Congress. It further decided that “commerce” included not only the trade transaction but also navigation, which meant the steamboats as well as their passengers and cargoes were subject to federal regulation. And it decided that “among the several states” meant the right to regulate commerce when it crossed the border of one state and entered the interior of another.

The decision for Gibbons on these merits was a breathtaking expansion of Congressional power. It was also a breathtaking expansion of the exegetical power of the Supreme Court to divine the meaning of the Constitution, then 37 years old, when most of its authors were gone.

Today appeals to the Constitution are not argued in terms of the meaning or intent of the Constitution’s Framers or the language they used. Rather they are argued on the decisions of precedent cases – the perverted principle of stare decisis – whose justification as precedent is nowhere found in the Founders’ papers or letters. For all intents and purposes, precedent case decisions have replaced the Constitution as the standard for legality. And predictably, by interpreting interpretations of the Constitution rather than interpreting the Constitution itself, each generation, loosed from an absolute Constitutional reference, finds new rights and prohibitions never before discovered.

The original meaning and intent of the Framers is an issue, of course, only if our Courts and Congress profess commitment to a written Constitution and feel some compunction to be bound to its original meaning. Ghastly judicial interpretations of its Necessary and Proper provision and the Commerce Clause provision through the years suggest otherwise. There are four, and occasionally five, members of the Court who believe this country and its lawmakers are not lawfully bound to a vision held by the 55 white men who met in Philadelphia 224 summers ago.

Perhaps that explains why the outgoing 111th Congress distinguished itself by being unmatched as the most constitutionally illiterate lawmakers since the first Congress met in 1789. Public statements made by some of them suggest they couldn’t pass a high school civics class. In a television interview, one lawmaker justified the Constitutional authority for ObamaCare as the “good and necessary” rule. Having sworn to uphold the Constitution, their ignorance of it was both outrageous and embarrassing.

Perhaps that also explains why Representative Pete Stark (D-CA) could confidently say, "The federal government can do most anything in this country." Congress surely did so in the passage of ObamaCare with its constitutionally illegal mandate requiring every citizen to buy health insurance which complies with federal specifications. Since there was no debate on the mandate’s constitutionality under the Commerce Clause or the Necessary and Proper provisions, it’s apparent that constitutionality was not an issue to these “lawmakers.”

In his book of essays on political thought, Here the People Rule, author Edward Banfield laments that – perhaps inevitably – “[t]he ink was not yet dry on the Constitution when its revision began." Almost from its Constitutional inception, Congress began pushing the boundaries beyond the limited government the Framers thought they were giving the descendants of those who had lived under the tyranny of George III.

Banfield argues that the attempt to limit the power of those who govern now seems a flawed objective, doomed to failure from the very beginning:

“Nothing of importance can be done to stop the spread of federal power, let alone to restore something like the division of powers agreed upon by the framers of the Constitution. The reason lies in human nature: men cannot be relied upon voluntarily to abide by their agreements, including those upon which their political order depends. There is an antagonism, amounting to an incompatibility, between popular government – meaning government in accordance with the will of the people – and the maintenance of limits on the sphere of government.”

It is impossible today to return government, particularly federal government, to its original Constitutional design. To do so would require overturning the excesses of the New Deal, the Great Society, and the over-reaching network of federal regulation that invades the most intimate liberties of free society. But there is always hope, however Sisyphean that hope might seem, that a free society, in addition to its narcissistic “pursuit of happiness,” so passionately defended by Jefferson, will learn that it must also keep a watchful eye on the government which threatens its happiness – something “We the People” haven’t done well in the past 75 years. The Americans of the 1780s were so suspicious of government that they felt it an act of citizenship to be informed on where their government was taking them. Should the present generation be less informed? More trusting?

Should we “ordinary” citizens care what power the Constitution granted to Congress with the clause, "The Congress shall have Power... to regulate Commerce ... among the several States"? The ObamaCare mandate claims authority under this provision. But unlike other Commerce Clause cases, no part of ObamaCare is about regulating the commerce of the healthcare system. It is about fundamentally changing the delivery of and access to healthcare in America.

The Constitutional origin for the Commerce Clause is this: as long as the colonies were part of the British Empire, trade within the colonies and trade between the colonies and the rest of the world were regulated by London. With the revolution and subsequent independence, no governing American entity existed to fill the vacuum of government and regulation. The closest thing to government was found in the Articles of Confederation under which the Continental Congress had functioned before and during the war. The Continental Congress consisted of representatives from each of the colonies. There was no upper and lower house, no executive, and no judiciary.

The post-war absence of a central government caused the colonies to begin acting in their self-interests, which included, among other things, passing trade barriers between the states. Each barrier motivated retaliatory barriers to be passed by other states. Commercial gridlock prevailed, which caused the domestic and international economy of the colonies to begin failing. It also made the colonies vulnerable to economic (and military) exploitation from other nations.

These were the reasons a Constitutional Convention was called to meet in Philadelphia in the summer of 1787, and the main agenda item was to amend the Articles of Confederation so that states would stop treating each other as hostile nations. However, the Virginia delegation arrived in Philadelphia and began selling its plan to replace the Articles (sometimes referred to as the first Constitution) with a new Constitution and a totally different governance system. Because the colonies were jealous of their individual independence and were deeply suspicious of each other, this was no easy sell. The Constitutional Convention was forced to remain in session from May to September debating the differences in their vision for the proposed central government. Meetings were always secret, meaning none but state delegates were allowed in and no one was to record or disclose the proceedings. Of course, we now know that Madison kept detailed notes, which is the only way we know today who said what. And what we know is that considerable horse trading went on to protect the rights of the states from an intrusive federal government.

When the Constitutional Convention broke up in September 1787, the sales job to the states began. The Constitution was only a proposal at this point. Ratifications took place in state conventions that lasted for periods roughly correlated with each state’s resistance to the new government. The sale to the states was not easy to make. The pro-Constitution Federalist Papers and anti-Constitution Anti-Federalist Papers began publication in October 1787 – almost concurrently with the release of the Constitution to the states for ratification. A Bill of Rights limiting federal power and protecting individual liberty had to be tacked onto the Constitution in 1789 to grease the skids for its approval.

Nine of the 13 colonies had to ratify the Constitution for it to become the law of the land. If only nine ratified, there was danger that the remaining four would not join the union and would continue as independent colonies. Delaware was the first to ratify, on December 7, 1787, and Rhode Island was the 13th and last colony to ratify, on May 29, 1790. The ninth colony, New Hampshire, ratified on June 21, 1788 – almost a year after the Constitutional Convention ended. A colony became one of the United States when it ratified.

History makes clear that neither the Constitution nor the new Republic was welcomed with open arms. Those who did embrace them did so with caution, if not distrust. Knowing the temper of their fellow citizens, the Framers of the Constitution established the enumerated powers in the narrowest terms, if for no other reason than to get them accepted by the populace.

I haven’t the space in this posting to exegete each of the three significant terms in the Commerce Clause, but others have, notably Randy Barnett and Robert Bork. Their works can be found on the Internet. Suffice it, then, to say that it is clear the Framers used the word “commerce” to mean trade – buying and selling – and only trade. The problem they were trying to solve was to open trade between the states by preventing states from imposing anti-trade tariffs that were strangling the economy.

Here is what Madison said in Federalist 42 during the ratification period:

“The defect of power in the existing [Articles of] Confederacy to regulate the commerce between its several members, is … clearly pointed out by experience … it must be foreseen that ways would be found out to load the articles of import and export, during the passage through [state] jurisdiction, with duties which would fall on the makers … and the consumers … We may be assured by past experience, that such a practice would be introduced by future contrivances; and both by that and a common knowledge of human affairs, that it would nourish unceasing animosities, and not improbably terminate in serious interruptions of the public tranquility.”


That the Commerce Clause gave Congress the power to regulate "buying and selling" of goods in interstate commerce but not their manufacture and not agriculture is supported by other contemporary definitions of "commerce" as well as the purpose behind the Clause.

As Barnett has noted, the term "commerce" appears 63 times in the Federalist Papers, and in none of those appearances is it used unambiguously to refer to any activity beyond trade or exchange. It was never used to describe “any gainful activity,” as modern Supreme Court decisions have allowed. The Framers of the Constitution meant for the Commerce Clause to be limited to the buy/sell transaction, specifically excluding manufacturing, agriculture, and mining – and, in today’s economy, healthcare.

What did the framers mean by “regulate” in the context of regulating commerce? In the vernacular of that day it meant pretty much what it means today: “if you want to do something, here is how you must do it.” Thus it seems logical that if the Framers were asked, they would have said “regulate” also included the right to prohibit when the prohibited thing interfered with the ability to regulate.

The Framers gave Congress the authority to regulate only activity. It defies imagination that they ever intended Congress to have the power to regulate inactivity – i.e. fining a person (regulation) who fails (inactivity) to buy health insurance as is envisioned in the ObamaCare mandate.

Finally, what did the Framers mean by “among the several states” in the context of regulating commerce? Since the purpose of the Commerce Clause was to prevent the interference of trade between states, which had been impossible under the Articles of Confederation, it seems clear that trade would have to cross the boundary of one state and enter at least one other state to give Congress the authority to regulate it. It therefore does not grant Congress the authority to regulate trade that occurs only within a state – intrastate trade. Since almost all healthcare is bought and sold within a state, Congress does not have authority to regulate it under the Commerce Clause. Yet that is precisely what ObamaCare intends to do.

In the Progressive Era of this country, from the late 1880s through the 1920s, the courts hewed to a narrow view of the Commerce Clause. Beginning with the 1895 case of United States v E.C. Knight and up to the 1936 case of Carter v Carter Coal, the Supreme Court drew a distinction between "production" – such as manufacturing, agriculture, or mining – and “commerce,” the buying and selling of the things produced. These Courts never interpreted commerce as gainful activity and never allowed Congressional regulation of intrastate commerce.

This restricted the power of Congress to regulate the entire US economy. Not surprisingly, those advocating that Congress should be able to control anything, chief among them Franklin Delano Roosevelt, grew frustrated. Following his lead, liberals criticized the Court for failing to acknowledge that the meaning of the Constitution must evolve to meet changing circumstances. Roosevelt’s attempt to “pack the Court” by expanding it beyond nine members initially met with a disastrous backlash and political losses. But the convenient deaths of several Court members allowed him to fill the vacancies with men of his mind, and the radical expansion of the meaning of “commerce” began.

On the heels of the most constitutionally illiterate Congress, the 112th Congress adopted House rules this week that require every bill to cite the Constitutional provision authorizing it. No doubt most will cite the amazingly elastic Commerce Clause or the even more expandable General Welfare clause. But it’s a start. Maybe it will serve the House, if not ultimately the Senate, with the quaint reminder that the only basis for law is the Constitution as it was written.

Saturday, January 1, 2011

The Grinch That Stole the Internet

Clothed in the benign-sounding name of “net neutrality,” the Federal Communications Commission met on December 21 to issue new Internet regulations that will decidedly “de-neutralize” Internet management and put into the hands of political appointees the power to restrict free speech. This comes despite smack-downs in April by a federal appeals court, in the FCC's first effort to enforce “net neutrality” rules, and by the US Congress, which warned the agency that it was overstepping its authority. Then, to make sure the boneheads running the FCC had read their lips, 300 members of Congress recently sent the agency a letter opposing Internet regulation.

So, what does all of this say of FCC Chairman Julius Genachowski, a former law school chum of Barack Obama, and the other two Democrats on the five-member Commission? It says they are going to do pretty much what they want – the Constitution be damned! What a surprise after the hose job the Democrats gave the American public with ObamaCare and its unconstitutional requirement to force the citizens of a free country to buy a product or face punishment. But that’s a subject for another blog.

The statements of each FCC commissioner are posted on the agency’s web site.

Robert McDowell, one of the two Republican members, said this in his dissenting statement:

“Using these new rules as a weapon, politically favored companies will be able to pressure three political appointees to regulate their rivals to gain competitive advantages. Litigation will supplant innovation. Instead of investing in tomorrow’s technologies, precious capital will be diverted to pay lawyers’ fees. The era of Internet regulatory arbitrage has dawned.”

Meredith Baker, the other Republican member, said this in her statement:

“[This] is not a consumer-driven or engineering-focused decision. It is not motivated by a tangible competitive harm or market failure. The majority bypasses a market power analysis altogether, and acts on speculative harms alone. The majority is unable to identify a single ongoing practice of a single broadband provider that it finds problematic upon which to base this action. In the end, the Internet will be no more open tomorrow than it is today.”

Obama, a longtime advocate of “net neutrality,” is getting his way on Internet regulation through Genachowski, who has worked so closely under White House guidance that visitor logs show he met with Obama at least 11 times there.

Obama said the FCC's action will "help preserve the free and open nature of the Internet." But in a speech to graduates at Hampton University in Virginia, Obama complained that too much information is a threat to democracy. "With iPods and iPads and Xboxes and PlayStations – none of which I know how to work – information becomes a distraction, a diversion, a form of entertainment, rather than a means of emancipation," he whined. "All of this is not only putting new pressures on you; it is putting new pressures on our country and on our democracy."

Really? Information is a threat to democracy? You’ll recall that early this year during the healthcare debate Obama said we should turn off cable news. Too much information. Later, he dissed Fox News as the mouthpiece of the Republican Party, characterized the network as little more than talk radio, and urged other news organizations not to treat Fox News as a legitimate news station.

When her husband was embroiled in the Lewinsky sex scandal, Hillary said, “We are all going to have to rethink how we deal with [an uncensored Internet], because there are all these competing values … Without any kind of editing function or gate keeping function …” Yet she recently criticized China for its Internet censorship.

If there’s one thing that gives politicians of all stripes a bad case of heartburn, it’s a free and independent source of information. The Internet is a digital town hall, the paragon of free speech, where people can say whatever they want without the interference or permission of government. That is what has made it Political Enemy No. 1. The former gatekeepers of information – the mainstream media and the federal government – find Internet access to information a pain in the derrière and will do whatever they can get away with to regulate it.

Hey, I freely admit there’s a lot of nonsense on the Internet because anyone who wants to go to the trouble can write anything and post it online. Some would point to this blog as evidence of that. But there’s also a lot of valuable information on the Internet – the kind that used to take hours, if not days, to scrounge up in the prehistoric days of Dewey Decimal System-based libraries – which I can now find in minutes.

So what’s broken on the Internet that requires “net neutrality” and the FCC to fix it? The FCC claims the new rules will prevent potential future harms and could shape how Americans access and use the Internet years from now. But ISPs and web sites currently do a pretty good job of regulating Internet traffic, I think. There's no evidence the public is demanding new rules to protect it from the ISP, cable company, and telephone network abuses the FCC alleges could happen (but haven’t). In fact, since the FCC’s announcement of the new rules, a Rasmussen national telephone survey found that only 21% of “Likely U.S. Voters” want the FCC to regulate the Internet as it does radio and television. Fifty-four percent are opposed to such regulation.

Unelected bureaucrats at the FCC were the same troglodytes who forced everyone to buy clunky Ma Bell telephones and protected the company from innovative competitors for decades. This same FCC imposed un-free speech diversity using its "fairness doctrine" club against conservative talk radio. These same technology trolls took 20 years to approve cellular systems for market and embraced cable and satellite innovations with the same glacial zeal. This is who wants to manage the Internet?

Despite Al Gore’s fatuous claim to have taken the initiative in creating the Internet, the heterogeneous and diversified interests of the private sector that truly did create it have done quite well without the help or oversight of government, thank you very much. Pingdom, an Internet monitoring firm, estimates that 1.7 billion worldwide users sent an average of 247 billion e-mails a day in 2009. On Cyber Monday, November 29, Americans racked up more than $1 billion in online sales. Without government subsidies or federal specifications to guide them, Google invested its own resources to develop the Ngram Viewer, released on the 17th of this month, which puts the contents of 5,195,769 books spanning five centuries into a searchable online database. It was private capital that changed the way the world communicates using the Internet. (Meanwhile, the government-run USPS works less and costs more than ever.) It was private capital and a non-government Internet that destroyed the barriers to information sharing, expanded the frontiers of knowledge, created new forms of entertainment and commerce, and generated trillions of dollars in wealth.

Would an FCC-managed Internet have accomplished any of this? No. A bottom-up, unregulated, and lightly taxed market (you and me) made it possible for free speech, innovation, and competition to thrive on the Internet at affordable prices. The Internet poses an ideological, not a commercial, problem for government.

"Net neutrality" would essentially strip control and traffic management of broadband networks from the companies that deployed them and keep them running properly, and transfer much of that oversight to clueless but meddling online traffic cops. The FCC commissioners will write incomprehensible rules governing ISPs, bandwidth use, content, prices, and disclosure requirements on Internet speeds.

FCC Chairman Genachowski claims “net neutrality” is designed only to prevent the non-problem of big bad cable and phone companies blocking some web sites while favoring others, particularly their own, with higher speed and better quality. Yet somehow the Internet has managed thus far to succeed on its own, and bloggers have no complaints about getting their voices heard and providing forums for unfettered free expression.

Currently, broadband is defined as an information service, which means it doesn't suffer much FCC oversight. That’s the problem, you see. Genachowski's plan is to shift broadband into the same classification as telephone service, so broadband can enjoy more oversight by the agency. It would also bring broadband and Internet services under rules that were developed for the rotary phone in the 1930s. FCC Commissioner Robert McDowell, whose dissent is excerpted below, said that the rules for telephone services aren't appropriate for Internet service providers:

“The commission is seeking to impose 19th-century-style regulations designed for monopolies on competitive, dynamic and complex 21st-century Internet technologies.”

The FCC says it would not subject Internet service providers to the full brunt of regulation that would come with the new classification, instead choosing a "third way" that would apply only some of the rules. Yeah, sure. Like TSA’s grope and stare screening for terrorists.

What the FCC is attempting is complicated, irrelevant, and an unvarnished power grab. As I said above, the Internet poses an ideological threat, not a commercial one, to government. The alleged potential for commercial abuse by big cable and telephone providers exists only in the ideological imaginations of the three liberal FCC commissioners and their godfather in the White House. The claim that regulatory power is needed to ensure that consumers enjoy an "open Internet" is specious. With the growing number of broadband providers, is there a problem with “openness” on the Internet? Of course not! At the end of the day, the new “net neutrality” rules give government political appointees the authority to dictate how ISPs must handle the traffic that flows over ISP infrastructure. What’s “fair” or “open” about that?

But the FCC has censored speech before and can do so again. (Remember the “fairness doctrine”?) And once the FCC can regulate ISPs, those providers will become more compliant and more interested in keeping their censors happy. Remember also that the FCC protected Ma Bell from innovative competitors for decades. The agency could now use regulations to quash the nascent Internet video industry if Netflix, Apple TV, and Hulu.com start stealing customers from complaining cable and pay-TV services. Streaming video already consumes 20% of peak broadband traffic in this country, and one of the FCC rules deals with different pricing structures for different services instead of fixed prices.

I don’t have space in this blog post to write about the ideological foundations of the “net neutrality” movement and its acolytes. Maybe that’s something I’ll do next week. Its leftist tilt and funding come from billionaire George Soros and from left-wing think tanks and non-profits. It parades its radical, speech-censoring agenda as a crusade for “media justice.” If social justice thrives on the redistribution of wealth and economic rights, media justice thrives on the redistribution of speech and First Amendment rights.

The convocations of the universal broadband disciples are marked with Marxist-tinged rants about “disenfranchisement” and “empowerment.” Broadband access, they say, is a new “civil right.” Therefore, high-speed access must be made available to all Americans. Democrat handwringers bemoan the concentration of corporate media power. Not surprisingly, the “net neutrality” crowd considers its enemies to be the conservatives on talk radio, cable TV, and the web who have exposed its schemes. Conservatives are hate speech purveyors who must be managed, if not censored, in the brave new media world of “net neutrality.” The Media Justice Fund, which is funded by the Ford Foundation, is a major lobby for universal broadband, and describes the movement as “grounded in the belief that social and economic justice will not be realized without the equitable redistribution and control of media and communication technologies.” In other words, down with media capitalism.

The “net neutrality” rhetoric is almost theological. These new FCC rules should alarm anyone who believes safeguards are needed to protect First Amendment rights – most particularly from government abuse. If I were Julius Genachowski, I’d hold off on writing regulations just now. A new Republican House convenes in a couple of weeks. It will control the FCC’s purse strings – and its regulatory power.