Saturday, August 28, 2010

China Rising

An announcement was made last week that received surprisingly little editorial comment given its potential impact: China passed Japan to become the second largest economy in the world based on second quarter GDP and will almost surely best Japan for the full year. It is an unprecedented accomplishment for a developing economy to surpass a developed one. Moreover, China achieved it only three decades after Deng Xiaoping took the helm in 1978, two years after the death of Mao Zedong, and set a course for China to become a market economy.

If present trends continue, the Carnegie Endowment for International Peace predicts, China is on track to equal US GDP in 2035 and to double it by 2050. That’s a big “if,” because present trends have a nasty habit of not continuing. We need only recall the 1980s, when American Henny Pennies were predicting the sky was falling because Japan was taking over leadership of the global economy and America’s only economic salvation was to become more Japanese. Japan thereafter slipped into a 20-year stagnation after a disastrous, Obama-like government takeover of its economy, leaving the average Japanese poorer than the average citizen of Mississippi.

That China would overtake Japan in aggregate output has long been expected. It was inevitable that pundits would then begin speculating on whether China will overtake the US and how long it will take. However, recalling the Nipponphobia of 30 years ago, should America expect to lose its hegemony as world leader any time soon – if ever? The predictable spate of recent Sinophilic books – Martin Jacques’ 400-page tome extravagantly titled When China Rules the World: The End of the Western World and the Birth of a New Global Order, Reed Hundt’s In China’s Shadow, and Fareed Zakaria’s The Post-American World – says yes.

American business leaders know surprisingly little about China, and as Will Rogers once observed about common knowledge, “what they know ain’t so.” For example, it is widely thought that Chinese prisons are full of dissidents, when in fact the Chinese incarceration rate is 119 per 100,000 people against the US rate of 760 per 100,000 – the world’s highest. The mainstream media often report Chinese persecution of Christians, when in fact there are 80 million Christians in China and religious freedom is a constitutionally guaranteed right, as it is in the US. There is more economic freedom in China than in the US, which is strangling its economy with more taxes, regulations, and government spending. In 30 years, the government share of the Chinese economy shrank from 31% to 11%, whereas since Obama took office two years ago, the US debt-to-GDP ratio has grown from 40% to 63% and is projected by OMB to reach 90% by 2020.

With 1.3 billion people, China is the world’s most populous country, so its aggregate GDP statistic can be misleading. Per capita GDP is roughly $4,000 for China, against roughly $40,000 for Japan and $45,000 for the US. But China’s GDP has been growing at an eye-popping 9% per year for 30 years, doubling the economy every eight years and moving 400 million people out of poverty. Recent US GDP growth has been 1.6%.
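
A quick compounding check, using only the 9% growth figure above, confirms the doubling arithmetic:

\[ 1.09^{8} \approx 1.99, \qquad \frac{\ln 2}{\ln 1.09} \approx 8.0, \]

so an economy growing 9% a year doubles roughly every eight years – close to four doublings over three decades.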

While China exports more in a day today than it did in all of 1978, the year Deng Xiaoping announced his open-market policy, it must begin fueling future growth from its domestic market by employing more people and turning them into middle-class consumers. China is the world’s largest holder of foreign exchange reserves, currently sitting on over US$2 trillion. It cannot count on ever-expanding global demand fed by its trading partners’ trade deficits and adverse current account balances – particularly when the US and European economies are in a multi-year funk with high unemployment.

China’s domestic consumption has fallen from 55% of GDP in the late 1990s to 36% recently; long-term US domestic consumption has been 70% of GDP. Because China’s capitalism is corporate rather than private, the downward trend reflects a low share of household income and a high share of corporate income in GDP. Low wages fueled its export engine: the share of wages in GDP has been falling since the late 1990s, a decline mirrored almost exactly by the declining share of private consumption. However, as happened in Japan after its boom, the Chinese workforce will press for a larger share of the fruits of its labors, or else the Chinese government will begin experiencing widespread labor unrest. Having money to spend but few consumer products to buy, as happened in Russia after it became more market-oriented, will only add to social unrest. And China’s export-oriented economy cannot easily shift to a consumer-products economy, not only because of differences in production, but also because of marketing and market research, retail distribution, and after-market service support – skills that China has neither possessed nor needed in an export-oriented economy.

After 2016, China will face the steepest aging curve of any large population in history – not a happy prospect in a country whose healthcare system the World Health Organization ranks 188th out of 191 nations. As a consequence of its one-child policy, whose intent was to prevent widespread starvation, China not only reduced its future workforce but also skewed its gender mix, owing to the traditional Asian devaluation of females. Chinese males outnumber females 51.5% to 48.5%, while in the US females outnumber males 51% to 49%. That means there are roughly 40 million more potential Chinese husbands than wives and mothers. By 2050, 31% of China’s population will be over 60 years of age – i.e. 400 million elderly with no social security and few children to support them. The labor force is shrinking faster than the population. China is in a desperate race to get rich before it gets old and loses a large chunk of its workforce.

China’s surplus of males over females has fed its army, which helps absorb young Chinese men who will never find mates. Notwithstanding the U.S. Seventh Fleet’s presence in the South China Sea, China faces no significant land threat that requires an army. Since the 1980s, 100 million rural Chinese have moved to the cities. China needs the army to keep its population down on the farm. Without the army, the poor and rural population would rush into urban centers, causing China’s largely coastal cities to triple in size overnight. That, in turn, would cause these cities to collapse. Already, 93% of the population lives on 44% of the land along the country’s east coast.

The rush to the cities has contributed to urban pollution. The World Bank says that sixteen of the world’s twenty most polluted cities are Chinese. Over 70% of the water in China’s major rivers is already considered undrinkable. In a recent Forbes article, economist Joel Kotkin observes that if water is the “new oil,” China faces a thirsty future. “China’s freshwater reserves,” he says, “are about one-fifth per capita of those of the US and the U.S. has become more efficient in its water usage.” A less developed economy like China will face increasing demands from industrial and agricultural users, as well as from hundreds of millions of households that don’t now enjoy easy access to clean drinking water.

China builds one coal-fired power plant every week, and yet it is starved for energy to sustain its industrial development. It burns more coal – the source of two-thirds of its energy – than the US, Japan, and Europe combined, adding to its pollution. Long an exporter of coal, it is now a major importer. Moreover, it depends on imports for 60% of its oil consumption.

High energy consumption and costs, scarce water, and pollution are limiting China’s food production. It needs more dams to control flooding. Its pollution causes acid rain, which falls on a third of its agricultural land, reducing yields. China now depends on imports from the US and Canada to meet its demand for corn, and as its economy grows richer, the demand for high-protein foods – beef and pork – will outstrip its capacity to produce them domestically.

A country with no democratic tradition, China relied on the dictatorship of the Communist Party to set its course to a market economy. But government never makes complex decisions as effectively as open markets and competition do. The Soviet Union tried for 70 years, failed to allocate resources as efficiently as the US and other free-market economies, and collapsed. Today, for example, inflation in China is soaring, but the government-controlled interest rate, heavily influenced by crony capitalists and political insiders, isn’t being raised fast enough. The Chinese money supply has recently grown at a 16% annual rate while inflation during the same period jumped from 2% to over 8% – political suicide in an economy like ours. Yet the real money supply – nominal growth minus inflation – was rapidly decelerating, its growth falling from roughly 14% to 8%. This is one downside of not allowing financial markets to function freely.

Another basic problem plaguing Chinese capitalism is the lack of reliable information about the country’s economy. A high-level Chinese official attending a cocktail party in Beijing with a group of international bankers was asked whether his country was getting serious about transparency in its financial system. The official surprised the group with his forthright response: “Our model,” he said, “is that the best fishing is done in murky waters.”

Therefore, looking at the prospectus of any of the Chinese banks that have recently gone public, one will read in the “Risks” section page after page of descriptions such as: “Mr. Wang of our Hong Su branch was arrested for embezzling $2 million. Mr. Hu, the branch manager of our office in wherever, was arrested for stealing $10 million.” The prospectus acknowledges that the bank has no risk-management system and no liquidity-management system. Yet, having gone public, the bank enjoys a market cap of $100 billion. This is what happens in an environment of massive surplus capital under government capitalism.

As China becomes a major player in global society, America’s values and leadership are not lost on the governments of Southeast Asia, which embrace America’s qualities over China’s when deciding which to choose as an ally. Despite the recession that began in 2008, America’s global leadership has not been jeopardized, even as Japan’s economy, heavily influenced by the US in the post-war years, slipped to third place in aggregate output. We should reflect on why that happened to Japan, and we should return to the fiscal policies of the 1980s, which led this country out of the Carter recession. Instead, the Obama administration seems determined to follow Japan’s failed Keynesian policies.

As for China, it will struggle for decades to solve the problems it must solve to become a diversified developed economy. We should hope it succeeds. If China’s economy fails or substantially shrinks, the global economy will not escape the consequences. But contrary to the expansive speculations of China apologists, the US will not be overtaken in a qualitative sense by China’s economy.

However, American economic policy must abandon its current reckless anti-business posture. It has caused $2.5 trillion in private capital to park on the sidelines because American employers are loath to invest in an economy stalled by “stimulus” spending and stymied by uncertainty about the prospects of higher taxes, stricter rules, and more regulations. In a speech before a technical conference this week, Intel CEO Paul Otellini warned the Obama administration that unless it mended its ways, “the next big thing will not be invented here. Jobs will not be created here,” because it costs $1 billion more to build and equip a factory in the US, 90% of which is due to taxes and regulations that other countries don’t impose.

Saturday, August 21, 2010

Happy 75th Birthday!

Last Saturday – August 14 – was the 75th birthday of Social Security, a signature program of Franklin Delano Roosevelt’s New Deal, and arguably its most enduring and controversial.

Before packing for another grueling vacation, this time at Martha’s Vineyard, Obama celebrated the occasion by using his Saturday address to remind the great unwashed that his party remained the guardian of this government-engineered house of cards, whereas, he warned, the Republicans had the tumbrel waiting to carry the program to its privatized doom on Wall Street.

No sane person, however, would deny that Social Security is headed for doom under its own steam, but the day of reckoning – probably 2037 and beyond – is so far in the future that, what the heck, there’s no rush to worry about it now. That “let tomorrow take care of tomorrow” mentality has been the bane of the program throughout its history, preventing its flawed design from ever being corrected. Since its inception, feckless politicians have only patched Social Security, giving it a few more years of life in the short term but no real chance to survive on its own in the long term. Why haven’t taxpayers and future Social Security beneficiaries howled for major reform? Because the design of the program manufactures both its support and its intractability. Here’s how.

Social Security was a child of 1930s social mores and norms. Notwithstanding the difficult straits the Depression imposed on people, the mores of the time led them to refuse welfare handouts. Unlike today, people then accepted only what they felt they had earned, and that feature of society lay behind not only the design of the Social Security Act but also the New Deal work programs – the Civilian Conservation Corps (CCC), the Civil Works Administration (CWA), and the Works Progress Administration (WPA) – and their predecessor, the Federal Emergency Relief Administration (FERA), which grew out of relief efforts begun under the Hoover administration.

FDR wanted Social Security to be “a matter of earned right” and therefore morally acceptable by making it “contributory social insurance” rather than a welfare program funded out of general tax revenues. Its earned “rightness” came from a sense of personal responsibility to save for the future and from the private rather than public nature of the program – i.e. people worked, paid into an interest-bearing trust fund, and received back distributions from their “investment” in Social Security. It was not a handout.

Despite criticism about its regressive nature (initially everyone paid 1% of income, up to $3,000, into the system), FDR was a big-picture politician. Responding to a critic who questioned the economic soundness of the program, he said, “I guess you’re right on the economics, but those taxes were never a problem of economics. They are politics all the way through. We put those payroll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions … With those taxes in there, no damn politician can ever scrap my social security program.”

One insightful politician of the time, however, saw potential problems with granting the federal government the monopoly FDR sought. Senator Bennett Clark (D-MO) wondered whether private retirement pensions might not outperform a government-run plan, so he introduced an amendment allowing private employers to opt out of Social Security if the employer matched the government program in premiums and benefits and had the employee’s consent. The Clark Amendment was debated in the Senate and passed 51-35. FDR was furious and threatened to veto the Social Security bill if it reached him with the amendment attached. When the House passed its version of the bill without the Clark Amendment, FDR and his cronies used a parliamentary trick in the conference committee to work out a compromise without the amendment, promising to appoint a joint committee to study it and report back to Congress in 1936, the following year. That never happened. Clark was prescient, however: over the past 70 years, private pension plans have returned about 8% annually while Social Security benefits have returned less than 2%.
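
To see what that return gap implies over a working life, here is an illustrative compounding comparison – a single hypothetical dollar left to grow for 40 years at each rate, using only the 8% and 2% figures above:

\[ 1.08^{40} \approx 21.7 \qquad \text{versus} \qquad 1.02^{40} \approx 2.2, \]

i.e., the same dollar ends up roughly ten times larger at the private-pension return than at Social Security’s implied return.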

When deciding how to start a program that was to be funded entirely by contributions – FDR’s mandate – the government faced a dilemma: the first beneficiaries could not be receiving the return of their own contributions unless payouts were delayed for a decade or two, which would defeat the purpose of helping the elderly in 1935 and beyond. In a curious twist of political logic, therefore, the designers of Social Security decided that it didn’t matter whose contributions were paid out in current benefits, as long as all beneficiaries had contributed some amount to the system. And since in the early years of the program the number of contributors would exceed the number of beneficiaries, the excess contributions could be paid out in benefits rather than into the reserve, or trust fund, as initially planned. However, lacking a paid-in beginning reserve, the program would have to be subsidized from the start by general revenues, and even if it got into the black in later years, its designers foresaw deficits reappearing in 1965.

Even as the design of the program shifted from a “savings” program, in which John Jones got back John Jones’ contributions with interest, to a pay-as-you-go system in which current contributors paid current benefits, the initially planned payroll tax had to be increased – and increased sharply in following years – to keep the program solvent and to build a reserve. Taxes would begin in 1937 and rise on a schedule to their maximum in 1949; benefits would begin in 1940. This design had political logic: pleasure (benefits) came before pain (higher taxes later), and taxes would support contemporaneous benefits for at least a generation, perhaps until 1980, after which the issue of solvency would be left to another generation of voters and politicians.

Playing the present against the future figured importantly in the design of Social Security. Politicians looking for reelection votes had a built-in way to get them with this program. Benefits were delivered before an election; taxes were raised after elections. Increased benefits today were paid by today’s workers – not today’s beneficiaries – which is one reason the program is so endearing to the retired and those approaching retirement.

Over time, the program was manipulated to bring in additional taxpayers faster than benefits were increased, so that the added cost of benefits was spread over a larger taxpayer base. Moreover, when future benefits are rising relative to low current and past taxes, early entrants perceive (and receive) superior returns on what they paid in, whereas later entrants – even though their ratio of future benefits to current taxes is less favorable – have no consciousness of the bias toward early entrants (because they weren’t among them).

Those who were the earliest entrants into the Social Security system got something for nothing. Ida Fuller, a Vermont legal secretary, paid into the system a total of $24.75 from 1937 to 1940. When she retired in 1940 at 65, she began receiving a monthly Social Security check of $22.54, and by the time she died in 1975, she had received $22,888.92 – a payout of nearly $1,000 for every dollar she had paid in. Even those who paid taxes for their entire careers paid only enough to support retirees during those early years; they paid in less than the benefits they would receive – i.e. they too got something for nothing. Not to worry that Social Security was actuarially unsound because taxes paid did not cover benefits owed and no residual benefit was paid to a decedent’s estate; to a politician this was an advantage, not a problem. Giving away something for nothing was a way to get reelected. And after all, current politicians represented the current generation of voters, not a future generation.

In the early years of Social Security, its intergenerational design was a source of frequent criticism. Why should the current generation be allowed to compel future generations to bear a tax burden it did not, and likely would not, impose on itself? Why should future generations be made to pay – not for their own future benefits – but for those of the current retired generation? The designers of Social Security essentially shrugged off such questions and said they weren’t interested in people who weren’t yet born; let them take care of themselves.

The intergenerational financing of Social Security had other advantages. It made the system seem as if it were self-supporting and fiscally responsible. Retired voters assumed the benefits they were receiving resulted from taxes they had paid in the past. But throughout its 75-year history, the system has never balanced current taxes against the present value of current benefit costs. The current generation of retirees has never concerned itself with the cost of its benefits to future generations because it never thought of Social Security (and, more recently, Medicare) as converting part of its children’s wealth, if not its grandchildren’s, into its benefits.

It becomes apparent why FDR and the designers of Social Security weren’t interested in a mandated, self-directed private saving system like a 401(k) program. As FDR astutely observed, politics outweighs economics, and a 401(k)-type program has no political leverage. The current generation wouldn’t be able to get something for nothing – i.e. it wouldn’t be able to receive benefits it hadn’t yet saved. FDR needed a system in which participants appeared to be paying into a corpus for their future retirement when in fact that retirement income would come from a future generation of workers. Every retired generation, therefore, will always fight to protect the return on the taxes it paid, and politicians will not be inclined to resist an interest group as politically active as retirees. That is what makes Social Security reform so intractable.

Martha Derthick, a student of public policymaking, exposes the seduction of Social Security:

The program had a powerful appeal to self-interest – the self-interest of the taxpayer-voter, who got back far more in benefits than he paid in taxes, and the self-interest of the politician, who could, all at once, provide the current taxpayer-voter with these excess benefits, defer high tax rates to a future generation, and proclaim with a straight face the “fiscal soundness” of the program.

Indeed, this is the flaw in all government spending that isn’t paid for by the generation on which the money is spent. It (i) mortgages a future generation, and (ii) disconnects the current generation from the obscenities of reckless spending – like that of the Obama administration.

In the end, the Social Security program is a program for redistributing wealth, created by a generation that abhorred the redistribution of wealth and in fact hewed closely to the Jeffersonian ideal of self-sufficiency. The aim of the program’s designers was the opposite: to fight self-sufficiency and create dependency. Just as Medicare displaced private health insurance, Social Security sought to displace private retirement programs. Early on, a displacement of 50% was believed possible, but in the 1960s zealots like Commissioner of Social Security Robert M. Ball, an early player in the creation of Social Security, wanted it to displace all private retirement programs. If not politically then practically, Ball was a socialist who believed dependency on government benefits produced grateful voters – and if not grateful, at least voters who would play one politician against another, causing programs to expand and ushering in a welfare state that could not be dismantled by future generations of voters and politicians.

Is Social Security a Ponzi scheme? A Ponzi scheme transfers money from one group of contributors (late entrants) to another group of contributors (early entrants). No wealth is created, yet the early entrants expect a return on the money they contributed, so they get back much more than they gave. The organizer takes a significant part of the receipts and uses them for purposes other than paying the contributors. As long as new entrants contribute more than the exiting early entrants are paid, the scheme works. When exiting entrants are due more than new entrants are paying in, the scheme collapses.

Social Security works much the same way. New money is paid to old money. To pay for expanded benefits, more money is needed than old money paid in. Some portion of the receipts is swept out of the trust fund, used for other purposes, and replaced with government bonds – essentially IOUs (which is why raising taxes to increase reserves is silly). Ongoing viability is jeopardized when new entrants are insufficient to cover exiting entrants. (As a society gets richer, its birthrate falls: when the program started there were 16 workers for every beneficiary; today there are 3.3; in 2030 there will be 2.) If it walks like a duck and quacks like a duck, is it a duck? Sure is. Social Security is a Ponzi scheme.
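
The arithmetic behind those worker-to-beneficiary ratios can be made concrete with a minimal pay-as-you-go sketch in Python. The ratios are the ones cited above; the assumption that the average benefit equals 40% of the average wage is purely illustrative, not a figure from this post:

    # Pay-as-you-go arithmetic: the payroll tax needed for current workers
    # to fully fund current benefits rises as workers per beneficiary falls.
    def required_tax_rate(workers_per_beneficiary, benefit_share_of_wage=0.40):
        # benefit_share_of_wage: average benefit as a fraction of the average
        # wage (an illustrative assumption, not a figure from this post).
        return benefit_share_of_wage / workers_per_beneficiary

    for era, ratio in [("program start", 16.0), ("today", 3.3), ("2030", 2.0)]:
        print(f"{era}: {ratio} workers per beneficiary -> "
              f"{required_tax_rate(ratio):.1%} payroll tax required")

    # Prints 2.5% at the start, 12.1% today, and 20.0% in 2030: the same
    # benefit costs each worker eight times as much once the ratio falls
    # from 16 to 2.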

This year, for the first time since 1983, the government will pay out $41 billion more in benefits than it receives in Social Security tax revenue – a result of the high unemployment caused by the recession, likely to continue into next year, and a warning sign of things to come. From 2014 on, current benefits will exceed current taxes, triggering withdrawals from the trust fund, and assuming the government stands good on its IOUs, by 2037 the trust fund will be empty and Social Security receipts will cover only 75% of benefits. In addition to paying its IOUs, the government will have to make up the 25% shortfall out of general revenues – the same source used to pay those IOUs. In other words, the Treasury currently owes Social Security $2.5 trillion in IOUs, money originally taxed out of the incomes of working Americans, and to repay it, the government will have to tax working Americans again to raise the $2.5 trillion. Taxpayers get taxed twice to pay the same benefit! Why? Because their government purloined their savings account. What a country!

Is Social Security solvent? Only if those IOUs can be paid. And a recent Gallup poll showed that 60% of currently working Americans bet they can’t – i.e. they don’t expect to receive any benefits from Social Security when they retire.

People have been forced by law for 75 years to go along with this pay-as-you-go Ponzi scheme, promised that after years of the government taking their money and giving it to retirees, it will take other people’s money and give it to them when they retire. If or when there is too little money to pay the last people retiring after years of paying into the system, they will get hosed.

Bernie Madoff would be proud.

Saturday, August 14, 2010

The Mosque

In the seventh century an Arabian merchant and shepherd, Muhammad ibn Abdullah, claimed to have received revelations from God to restore monotheistic religion. The content of these revelations was memorized and recorded by his companions as the Koran. From this creed sprang the practice of Islam, which had spread across the Arabian Peninsula by the time of his death in 632.

Six years later Syria and Palestine fell to Islam. With the capital in Damascus, Muslim armies fanned eastward through Mesopotamia to India and Central Asia, and westward to the Nile and across North Africa. From North Africa in 711, Tariq ibn Ziyad, a Muslim general, crossed the Strait of Gibraltar with soldiers and horses in four borrowed boats. Once on European soil, he dispatched the four-boat fleet back to ferry the rest of his army, then assembled his 12,000 Muslims – whom history has called Moors – to conquer Spain, beginning a Moorish presence that would last nearly 800 years before the last Muslim kingdom was expelled by Ferdinand and Isabella.

South of Cadiz, the invaders met the hastily gathered forces of Spain’s Visigoth king, Roderic. “Before us is the enemy; behind us, the sea,” shouted General Tariq, drawing his scimitar. “We have only one choice: to win!” King Roderic was killed, and the Moors – North African converts to Islam led by Arabs from Damascus and Medina – moved on to capture Cordoba in 711. In 756, Cordoba became the capital of the independent Muslim emirate of al-Andalus, and later of a caliphate.

The Emir Abd ar-Rahman I converted the Christian Visigothic church of St. Vincent into the great Mosque (Mezquita in Spanish) of Cordoba, whose completion required two hundred years of construction. When finished, it was the most magnificent of the more than 1,000 mosques in Cordoba. The Mezquita held an original copy of the Koran and, allegedly, an arm bone of the prophet Mohammed, making it a significant Muslim pilgrimage site.

The Reconquista to expel the Moors was waged for nearly 800 years. By the thirteenth century, the sole remaining Moorish state was the Nasrid Kingdom of Granada, which was defeated in 1492, bringing the entire Iberian Peninsula under Christian rule and completing the Reconquista.

In 1236, Cordoba was recaptured from the Moors by King Ferdinand III of Castile and rejoined Christendom. The Christians initially left the architecture of the Mezquita as it was, simply consecrating it, dedicating it to the Virgin Mary, and using it as a place of Christian worship. The kings who followed, however, added further Christian features; in the 14th century a chapel was built in the center of the mosque. Today it stands as a Christian church known as the Catedral de Nuestra Señora de la Asunción.

The Cordoba Mosque, as this brief recap shows, was a significant monument to the westward conquests of Islam. Therefore, when it was recently announced that the New York City zoning authorities had cleared the way for a 152-year-old building to be demolished and replaced with a $100 million, 15-story Islamic cultural center and mosque only 600 feet from the former site of the World Trade Center towers, and that the project was named the Cordoba Initiative, the public reaction was predictable.

Opponents of the mosque are accused of being bigots and tea party wing nuts. Supporters are accused of being naïve enablers of a highly symbolic project – a deliberate effort by foreign investors and powers to show dominance near a spot hallowed by the mass murder of innocents. Here’s my take.

That a particular spot can take on a sacred quality is not unique to Ground Zero. One cannot stand on the Gettysburg battlefield, at the Pearl Harbor Memorial, or on the Normandy beaches and not be awed by what happened there. Building a mosque near Ground Zero is wrong, and it should be possible to protest it without being called a bigot or an Islamophobe.

Religious tolerance is a two-way street. Muslims don’t get this. There are about 2,000 mosques in the US and countless other houses of worship. There are no Christian churches or synagogues in Saudi Arabia, and non-Muslim worship is severely restricted in Iran, Egypt, and most other Muslim countries. So who’s intolerant?

Americans do not whip and stone women for adultery as Iran does, cut off the nose and ears of a woman for running away from an abusive marriage as the Taliban did in Afghanistan, or cut off the hand of a thief as is done in Saudi Arabia. So who’s intolerant?

There are 30 mosques in New York City; it’s not as if another is needed within two short blocks of ground hallowed by the slaughter of 3,000 innocents. What if a Serbian Orthodox church were proposed on the ground where 8,000 Muslims were killed in Srebrenica, or a Japanese group planned a Shinto shrine near the Pearl Harbor memorial, or a German cultural center were proposed within sight of the remains of the Auschwitz extermination camp? Would there be a reaction? Would it be justified? In fact, a Carmelite convent near Auschwitz caused such a cri de coeur among Jews who thought it “Christianized” the Holocaust that Pope John Paul II ordered it relocated – notwithstanding the good intentions of the nuns to pray for the souls of the Jews who died there.

Enter Michael Bloomberg. When it was revealed that the Times Square car bomber was Faisal Shahzad, a Pakistani, Bloomberg felt compelled to do his best imitation of an inner-city school principal and warn New Yorkers that there would be no toleration of retribution against Pakistanis or Muslims. No matter that there had not been a single incident of retributive violence against a Middle Easterner from the time of the 9/11 attack to the present.

So it wasn’t surprising that the pietistic Bloomberg, against the backdrop of the Statue of Liberty and surrounded by ministers of various faiths, rose in high dudgeon to pontificate about the decision to build the mosque 600 feet from Ground Zero and to exhort New Yorkers to be tolerant. Yet in this and other comments Bloomberg has made, it is apparent that he has gone beyond advocating tolerance and become a proponent of the mosque. “I happen to think this is a very appropriate place for somebody who wants to build a mosque, because it tells the world that America and New York City really believe in what we preach.” Who in this world, Mr. Mayor, needs the reassurance of which you speak?

Unfortunately for them, Mayor Bloomberg is no advocate for the congregants of Saint Nicholas Greek Orthodox Church, which stood directly across the street from the World Trade Center until the collapse of Tower 2 flattened it. Plans to rebuild the church two blocks from its original location were blocked by the New York Port Authority, which objected to its 24,000 sq. ft. footprint and to a traditional grand dome that, the Authority said, could not rise above the planned WTC memorial. Curiously, the Authority had no problem with the 15-story cultural center and mosque.

A recent Quinnipiac poll showed that New York City residents oppose building the mosque and Islamic cultural center two blocks from Ground Zero, 52% to 31%. A Rasmussen poll shows 54% of the nation opposed to its construction. One of the most surprising opponents is the liberal Anti-Defamation League, the Jewish civil rights group, which issued a statement saying that while the Muslim organizers have the right to build, the specific site is “counterproductive to the healing process.”

If the people behind the Ground Zero mosque were really interested in improving Islam’s image, they would build the planned mosque somewhere else; they were offered another location and turned it down. They would be respectful of the sensitive place Ground Zero holds in the American psyche; they aren’t. One has to wonder why. Either they are witless, or they know exactly what they are doing.

Daisy Khan, the wife of Feisal Abdul Rauf and a partner in the Cordoba Project, conceded in a recent NPR interview that Islam had been hijacked by the extremists, adding that “this center is going to create counter-momentum which will amplify the voices of the moderate Muslims.” Yet her husband refuses to acknowledge that Hamas is a terrorist organization. “The issue of terrorism is a very complex question,” Rauf said. But while he can’t quite bring himself to grapple with the complexities of blaming terrorists for being terrorists, he has no problem blaming America for being the target of terrorism: “I wouldn’t say that the United States deserved what happened [on 9/11], but the United States policies were an accessory to the crime that happened.”

Andrew McCarthy, a writer for National Review, has documented that Imam Feisal Rauf’s book, What’s Right with Islam is What’s Right with America, carries a very different title when published abroad: A Call to Prayer from the World Trade Center Rubble: Islamic Dawa in the Heart of America Post 9/11. “Dawa” means Islamic proselytizing – which means imposing sharia law in this country. That edition was published by the Muslim Brotherhood, sponsors of Islamic terrorism, particularly Hamas.

The Muslim Brotherhood, by the way, is not a nice organization. Its founder, Hasan al-Banna, said: “It is the nature of Islam to dominate, not to be dominated, to impose its law on all nations and to extend its power to the entire planet.” It believes in the radical application of jihad against America and Israel. In 1991, its American leadership prepared a mission statement calling for a grand jihad to destroy Western civilization from the inside so that Islam is victorious over all other religions. Feisal Rauf’s father was a member of this organization.

Feisal Rauf won’t reveal the source of the funding for the $100 million Cordoba Project, which is one concern of those opposed to it. In particular, they want to know whether Saudi money is involved, since Saudi Arabia is the seat of Wahhabi radicalism. Nina Shea of the Hudson Institute has shown how Wahhabi hate literature and educational materials have made their way into American mosques, encouraging Muslims to spill the blood of infidels and Jews. Ironically, if Saudi money is substantially involved, the same people who funded the 9/11 radicals who created Ground Zero will have helped build the mosque that outrages the families of their victims.

It is also troubling that one of Feisal Rauf’s partners, Sharif el-Gamal, who provided the $5 million to purchase the property for the Islamic cultural center and mosque, is in his 30s and was waiting tables in New York City restaurants a few years ago before becoming a multi-million-dollar real estate investor. He holds title to the property and says a yet-to-be-created nonprofit will control the Islamic center. Rauf is one of 23 directors; el-Gamal declines to reveal the names of the others, as well as that of the center’s executive director. Regardless of Rauf’s vision for the cultural center and mosque, he isn’t in charge; el-Gamal is, and one has to wonder how so young a man could come into so much money in so short a time unless it was given to him for a purpose.

America’s culture of religious toleration renders it unable to prevent the construction of mosques that become centers for preaching Islamic supremacy and fertile ground for producing future jihadists. The Washington Islamic Center in the nation’s capital distributed a tract calling for the death of apostates, homosexuals, and infidels (Americans). The Saudi-funded King Fahd Mosque near LAX in Los Angeles distributed radical literature and was where two of the 9/11 hijackers went when they arrived in America. The Al Farouq mosque in Brooklyn promoted jihad through literature and the sermons of Omar Abdel Rahman, the Blind Sheik, who was convicted of seditious conspiracy for planning the 1993 World Trade Center bombing. The Dar Al-Hijrah mosque in Falls Church, Virginia, was constructed with the help of the Saudi embassy and has a history of radical connections, including Anwar al-Awlaki, who helped radicalize Major Nidal Hasan, the Fort Hood shooter; Umar Farouk Abdulmutallab, the Nigerian “underwear” bomber; and six radicals who planned to “kill as many soldiers as possible” at Fort Dix several years ago.

The adherents to Islam know its history. They will understand the significance of building a mosque virtually on top of the site where 3,000 people died at the hands of Islamic radicals. They understand the symbolism of Cordoba.

Thomas Mann, the German novelist, wrote that “tolerance becomes a crime when applied to evil.” He was speaking of the rise of Nazism in Germany, but it remains good advice when applied to those who exploit American freedom to do us harm.

Friday, August 6, 2010

The Iran Dilemma – Part II

In the immediate aftermath of the 9/11 terrorist attack, US broadcasters showed images of reactions from around the world. In Saudi Arabia and Egypt – allegedly our allies – there was cheering in the streets that America had gotten its comeuppance. In Iran there was a candlelight memorial.

This is the paradox of our relationship with Iran. Their leaders aside, Iranians are more like Americans than any other Muslim population. The majority of Iranians – 70% of them – are under 30 years of age, are sympathetic to American values, and would indeed like to see those values reflected in their own society. They distinguish between the people of America and the policies of the American government. It was the American people who were attacked and killed on 9/11.

Since the Shah fell from power 30 years ago, US policy toward Iran has been to isolate the country and discredit its leaders in the eyes of Iranian society. This hasn’t worked, because several significant nations ignore American sanctions – among them Russia, China, and some of Europe. Moreover, sanctions play into the hands of Iran’s leaders, giving them an excuse to blame the country’s economic woes on the US.

But more important, America’s isolation of Iran is precisely what the country’s governing elite wants. As I described in last week’s blog, the more isolated a country is, the more stable it tends to be; Cuba and North Korea are extreme examples of isolated, stable societies. As countries become more open, as China is becoming, their people see how the rest of the world lives and realize that openness brings freedom and reform, because the government loses the power to control behavior. This can lead to instability and civil unrest if the government doesn’t release its grip on power and allow more civil liberties and a representative form of government to emerge.

Iran’s young population wants openness, access to Western culture, and the opportunity to adapt Western ideals to its own. Rather than making life more difficult for Iran’s citizens with sanctions, we should be doing the reverse, in hopes of undermining Iran’s conservatives. Obama should have spoken out, for example, against the violent repression the mullahs unleashed on Iranian demonstrators last year, letting the Green Movement know that Americans supported its struggle for freedom. Instead Obama punted, explaining his reluctance to “meddle” in the affairs of Iran because of what America did there more than a half-century ago.

When one of Iran’s most reliably independent pragmatists, Akbar Hashemi Rafsanjani, then the Iranian president, announced in 1995 that Iran had signed a $1 billion contract with Conoco to develop its offshore gas fields, the Clinton administration was caught off guard. It found itself in the embarrassing position of asking other nations to boycott Iran while an American company engaged in commercial arrangements with it.

Secretary of State Warren Christopher immediately attacked the Conoco deal, denouncing any transaction “inconsistent with the containment policy that we have carried forward” and adding that it put money into “the evil hand of Iran.” Although US oil firms are barred from buying Iranian crude oil, their foreign subsidiaries can serve as a channel to purchase crude and sell it abroad, as was planned in this case. Because Christopher’s old law firm had represented Conoco in its multi-year negotiations with Iran, he was compelled to recuse himself from further involvement. Sharing Christopher’s objections, however, Clinton announced that he would issue an executive order prohibiting the deal, and Conoco was forced to pull out.

The Conoco transaction would have been the first oil concession granted by Iran to a US firm since the revolution, finally offering a break in 16 years of relentless anti-American policy in Iran. Rafsanjani’s representatives said the selection of an American company was not based on Conoco’s superior technology or its financial package; rather, it was a political decision, specifically intended to signal a desire to improve relations with Washington. Later, in an interview with an American reporter, Rafsanjani confirmed the motivation for the deal:

“We invited an American firm and entered into a deal ... this was a message to the United States, which was not correctly understood. We had a lot of difficulty in this country by inviting an American company to come here with such a project because of public opinion.”

On the surface, here was a first step by Iran toward a foundation of mutual interests with America on which further reconciliation could be built. The Clinton administration either failed to see or chose to ignore the strategic possibilities the Conoco deal offered, and Clinton surprised Iran by turning it down.

When Bush became president, members of his administration argued for easing or removing sanctions. A review of relations with Iran was conducted, with the promise that Congress would be a partner in any sanctions reform. It was believed that reengagement with Iran would strengthen the hand of the pragmatists in countering the influence of the conservatives. However, 9/11 closed the door on this and future initiatives for rapprochement.

Then, in 2002, the Israelis intercepted an Iranian-backed boatload of weapons headed to Palestinian militants. Some believe the shipment was set up by operatives of the mullahs so that it would be discovered and provoke a response from the Bush administration. Whether or not that is true, it was disastrous for the reformers. In his State of the Union message later that year, Bush used the term “axis of evil” for the first time and included Iran in it. Iran’s conservatives went on the attack, citing Bush’s characterization of Iran as evidence that the reformers were being duped. Thus Bush gave the conservatives exactly the ammunition they needed to turn away from accommodation with the US.

The election of hardliner Mahmoud Ahmadinejad in 2005 poisoned US-Iran relations even further. Unlike Rafsanjani and Khatami, the two preceding presidents, Ahmadinejad is firmly in the conservatives’ camp and is unlikely to be open to any accommodation with the West. By denying the Holocaust and Israel’s right to exist as a country, he is openly provocative, and he has emerged as the spokesman for Iran’s nuclear ambitions, which resumed on his watch. Since his disputed reelection in June 2009, Ahmadinejad has had to contend with grassroots disaffection with his administration, and on August 4, 2010 there was an apparent attempt to assassinate him.

The development of nuclear energy is popular among the Iranian people; surveys have variously shown support from 80% to 90% of the population. The nuclear weapons program is less popular but is still supported by half the population. The fact that Iran’s regional neighbors – Pakistan, India, and Israel – have nuclear weapons makes it likely that Iran will continue pressing forward to join them. Even regime change, with replacement by leaders like Rafsanjani and Khatami, would not likely cause the nuclear program, and perhaps not even the weapons program, to be suspended.

This poses a dilemma in managing our relationship with Iran. Unlike North Korea and Israel, both nuclear but stable countries for different reasons, Iran is not stable, making it a dangerous owner of nuclear weapons. Sooner or later there will be a showdown between the opposition and the ruling elites. When that happens, Iran will be destabilized for an unknown period and may descend into chaos until a successor government takes control. Nuclear materials and technology could go in every direction. No outside agent, like the US, can make the showdown happen before the Iranian people want it to happen, but there are ways to avoid having nuclear weapons around during a protracted period of instability.

Here are two options.

First, the Obama administration has been clear that a military strike is on the table if other means fail. Predictably, a strike would play into the hands of Khamenei and Ahmadinejad, who would use it to try to silence the reformers and rally the people around the government – an effort that may or may not succeed. An air strike, however, would have limited success: even with “bunker buster” bombs, some facilities are buried safely deep underground, and all of them are widely scattered in anticipation of an air attack. If we attack, a retaliatory response against Israel is possible, because Iran’s leaders would surely allege Israeli collusion. Even if only some of Iran’s nuclear facilities were destroyed, however, Iran’s nuclear program could still be set back a number of years. It is also possible that the opposition could convince the Iranian people that the nuclear weapons program isn’t worth risking another attack, particularly if the US could credibly threaten to destroy any rebuilt weapons-making capacity.

If Obama decides to attack, he would not act precipitately; he would go to great lengths to get the UN behind him, and he would try to make it a multilateral effort, even though the US would do the heavy lifting. However, one need only recall Bush’s attempt to get the UN to authorize a strike against Iraq to understand how long it would take to win UN concurrence to attack Iran – if it could be won at all. The Europeans and the UK (now lacking Blair) would acknowledge the Iranian threat, drag their feet, advise patience, and opt for more sanctions. Unless Obama is willing to go it alone, without allies, he would be stuck waiting for the UN to resume relevance.

Equally important would be the willingness of the American people to open yet a third front, tired as they are of the slow pace of the Iraq and Afghanistan conflicts. The acquiescence of the countries of the Persian Gulf region would also be needed for overflight permission and staging support. After being bombed, Iran wouldn’t be a good neighbor for perhaps years, and we would need the neighborhood’s support before disturbing its peace.

The second option is similar to the first, except that its goal would be to avoid a military confrontation altogether – and a good poker face is needed to make it work. Obama would have to tell the Iranian leadership, in the most compellingly credible terms, that the US absolutely will not permit Iran to develop nuclear weapons and will attack without notice unless the weapons program is immediately, verifiably, and permanently terminated. Moreover, if an attack were launched, its sole purpose would be to destroy Iran’s nuclear program, not to achieve regime change, and military hostilities would end when that objective was accomplished. If, however, Iran retaliated against the US or any other country, the US would press the attack until Iran was no longer able to retaliate. In other words, military action would escalate only if Iran caused it to escalate. It is a conflict Iran cannot possibly win.

One hopes that sober minds would shape Iran’s reaction. But the threat is credible only if the US is willing to see it through and launch an attack on those terms. Whoever blinks first loses.