Saturday, November 30, 2013

What If …?

Over a dozen years ago Robert Cowley compiled an intriguing book of essays under the title What If? The essay authors – all outstanding historians of the day – were invited to compose a credible outcome, a “what if” alternative, for a pivotal moment in history if circumstances had taken a different, often minor, turn of events.

For example, in 1889 Buffalo Bill’s Wild West Show toured Europe featuring Annie Oakley’s famous shooting skills with her Colt .45. Her act concluded with an invitation for a gentleman in the audience to step into the arena and allow her to shoot off the ash from his cigar at a distance far enough away to make it interesting. It was all show because there were never any real takers. Her husband, Frank Butler, was planted in the audience as a stooge. He would bravely step forward with a Havana clenched in his teeth, allowing Annie to bring the crowd to its feet with her keen shot.

When the show made a stop in Berlin for a performance at the Charlottenburg Race Track, Annie offered her customary dare. In the audience was the young and showy Kaiser Wilhelm II, who immediately took her up on it, much to her shock and the horror of his security detail. As he stepped into the arena, the local police tried to intervene, but Wilhelm waved them off.

Annie couldn’t back out without losing credibility, so she paced off her usual distance and took aim. Sweating under the pressure of shooting at a crowned head of Europe and wishing she hadn’t consumed so much whiskey the night before, she nevertheless shot away the Kaiser’s cigar ash to the crowd’s wild jubilation.

Cowley asks: what if she had missed? There would have been no bellicose Kaiser Wilhelm II alive 25 years later to start a war in Europe. When World War I did in fact break out, Annie wrote the bombastic Kaiser and asked for a second shot. He never responded.

History offers many “what ifs.”

In the aftermath of World War I, the Treaty of Versailles was more punishment than peace treaty. It forced Germany to admit its “guilt” for the war as well as pay reparations for it. The Treaty’s major accomplishment was to invent Hitler.

Hitler rose to power in 1933 and obsessed over undoing the Treaty. He pursued this in a succession of trial provocations each intended to test the resolve of the former allies – primarily Britain and France. In 1934 he ordered the German home guard to arm for war and, the following year, reintroduced conscription – both flagrant violations of the Treaty. With no response forthcoming from the major world powers, Hitler was emboldened. In 1935 he began building tanks, planes, and submarines – further violations of Versailles. Still no intervention by England or France.

In 1936, Hitler ordered troops across the Rhine and created an armed threat in the demilitarized zone of the German Rhineland, the territory between the French border and the Rhine River – an unmistakable violation of the Versailles treaty. There his raw army recruits and 36,000 policemen faced nearly 100 French and Belgian divisions. France and Belgium were within their rights to cross the Maginot Line and confront the Germans. But the French and English heads of state said nothing and did nothing.

One of the Rhineland occupiers, General Heinz Guderian, said after World War II that if the French had responded to the provocation in 1936, “…we should have been sunk and Hitler would have fallen.” Another German officer confessed that the German General Staff considered Hitler’s move tantamount to a suicide mission:

I can tell you that for five days and five nights not one of us closed an eye. We knew that if the French marched, we were done. We had no fortifications, and no army to match the French. If the French had even mobilized, we should have been compelled to retire.

Hitler himself said:

The forty-eight hours after the march into the Rhineland were the most nerve-racking in my life. If the French had then marched into the Rhineland we would have had to withdraw with our tails between our legs, for the military resources at our disposal would have been wholly inadequate for even a moderate resistance.

The head of the French army, General Maurice Gamelin, believed that confronting the German occupiers of the Rhineland would be unpopular at home, costly, and likely to lead to a war requiring full mobilization of the French armed forces. He counseled against action.

In England, not only were there no anti-German Rhineland protests, there were peace demonstrations. One Member of Parliament observed that “the feeling in the House [of Commons] is terribly pro-German, which means afraid of war.” The Prime Minister during the 1936 crisis, Stanley Baldwin, had tears in his eyes when he admitted that British public opinion would not support military intervention in the Rhineland.

Winston Churchill, a backbench Conservative MP in 1936, was the British Jeremiah – one of the few voices raised against German rearmament and its threat to future peace. He wanted his country to reinforce a French challenge of the Rhineland occupation under the coordination of the League of Nations. The League of Nations, however, was as useless then as the United Nations is now and nothing happened. Churchill predicted the Rhineland would become a hinge allowing Germany to swing its forces through Belgium to attack France. And that’s precisely what happened in 1940.

A “what if” opportunity thus passed unexploited in 1936 that could have prevented World War II.

An Austrian German by birth, Hitler provoked the 1938 Anschluss crisis two years after the Rhineland showdown. It was part of his plan to unite all German speakers in one state. “People of the same blood should be in the same Reich,” he had written in his autobiography Mein Kampf. Hitler’s annexation of Austria was accomplished by threatening invasion so convincingly that the Austrian government resigned and Hitler’s armies were invited in.

German-Austrian union was forbidden by the Treaty of Versailles. Notwithstanding, England and France did not protest, did not mobilize, and did not defend the Treaty.

When an aggressor senses his opponent’s lack of resolve to confront him, he will act without restraint. And that’s precisely what Hitler did. He was now convinced that British Prime Minister Neville Chamberlain, Baldwin’s successor, and French Prime Minister Édouard Daladier were both appeasers – paralyzed by their nightmarish memories of World War I – and therefore loath to confront his militaristic aggressions.

His next move would exploit their fecklessness.

In 1938 Hitler demanded that Czechoslovakia cede the Sudetenland – the portion of Czechoslovakia along its northern, western, and southwestern borders that was populated by German speakers. If his demand were not accommodated, he would invade Czechoslovakia.

It was all bluff. At that time Czechoslovakia had one of the strongest armies in Europe. It was well trained and well equipped, thanks to the country’s armaments works – primarily those of Skoda, the Czech equivalent of Ford or GM. Moreover, its military industries were protected by military fortifications on the German and Austrian borders – all in the Sudetenland – which explains why Hitler wanted it.

Czechoslovakia probably could have won a fight with Germany given the poor quality of the German fighting force at the time. In fact, senior German generals were horrified by Hitler’s plans to invade, convinced they would ignite a world war that Germany would lose. So convinced were the German generals of a bad outcome that one of the earliest conspiracies to overthrow and arrest Hitler was plotted. Representatives were sent to meet with Chamberlain, telling him that the moment Hitler gave the invasion order he would be arrested, and imploring Chamberlain to intervene militarily in support of Czechoslovakia. Czechoslovakia had mutual defense treaties with France and Russia, and England was a defense partner of France. Poland too would likely have joined if France and England were in the fight.

But it was not to be. A meeting was held in Munich involving France, England, Germany, and Italy. Czechoslovakia was not invited. Chamberlain followed his appeasement instincts and the Munich Agreement was signed, transferring the Sudetenland to Germany. Betrayed by its treaty allies, Czechoslovakia conceded. Hitler agreed not to invade Czechoslovakia – an agreement that lasted six months, until German troops swallowed up the remainder of the country, now defenseless without its Sudeten armaments and fortresses.

The cowardly Chamberlain flew back to England and landed among the adulation of adoring pacifists. Deplaning, he waved the infamous Munich Agreement, assuring the crowd at the airport that he had won them “peace for our time.” That phrase has gone down in history as a verbal monument to the gullibility of naïve world leaders who appease international bullies expecting conciliation to preserve peace. Think Jimmy Carter and Barack Obama.

Chamberlain’s surrender to Hitler was criticized by some far-sighted British politicians, among them Winston Churchill who unleashed his rhetorical fury on the worthless document:

We are in the presence of a disaster of the first magnitude...we have sustained a defeat without a war, the consequences of which will travel far with us along our road... we have passed an awful milestone in our history, when the whole equilibrium of Europe has been deranged, and that the terrible words have for the time being been pronounced against the Western democracies: "Thou art weighed in the balance and found wanting." And do not suppose that this is the end. This is only the beginning of the reckoning.

Another “what if” moment was lost.

With the fall of Czechoslovakia, Hitler’s territorial ambitions became apparent even to the western fools of Munich. The English and French publicly assured Poland that they could be relied on to protect it against German aggression. But the time for red lines had passed. Germany could no longer be denied its long-sought destiny. Equipped with the armaments stolen through the Munich agreement, Germany was rolling like a juggernaut toward the Polish frontier, which bordered the northern Sudetenland acquisition. Germany now fielded the best-armed military force in Europe, ironically riding in panzer tanks that had been built for the Czechs by the Skoda works. England and France, Poland’s outfoxed treaty allies, could do no more than watch the swift massacre of their ally in the autumn of 1939.

Sixty million people would die before the German beast was slain in 1945.

History may not repeat itself, but as Mark Twain said, it does rhyme. What can we learn today from this disgraceful episode of serial incompetence? One could certainly argue it teaches that preemptive military action, however unpopular at the time, is usually a winning antidote to an unbridled aggressor. Hitler had made his ambitions abundantly clear in Mein Kampf, assuming any of the western leaders had bothered to read it. And the provocations he initiated in 1936, twice in 1938, and in 1939 were straight out of his playbook.

After World War II ended, Churchill, whose resolve had saved the British, was ousted from office. So much for the thanks of a grateful nation.

However, a relatively unknown school, Westminster College in the small Missouri town of Fulton (population 7,000), wished to bestow an honorary degree on the statesman who had led England through the war to its end. And because Churchill was indisputably the best rhetorician of the 20th century, he was invited to be the keynote speaker. The audience numbered over 40,000, and his speech, often referred to as the “Iron Curtain” speech, alerted the world to a coming “cold war.”

Churchill’s speech presaged many of his thoughts that would later appear in his seminal recollection of the Second World War, most especially its first volume entitled The Gathering Storm. Here is an excerpt from the Iron Curtain speech that verbalizes the many lost “what ifs”:

Up till the year 1933 or even 1935, Germany might have been saved from the awful fate which has overtaken her and we might all have been spared the miseries Hitler let loose upon mankind. There never was a war in all history easier to prevent by timely action than the one which has just desolated such great areas of the globe. It could have been prevented in my belief without the firing of a single shot, and Germany might be powerful, prosperous and honoured to-day; but no one would listen and one by one we were all sucked into the awful whirlpool. We surely must not let that happen again.

We surely must not let that happen again indeed.

Today we are once again confronted by the villainous face of evil. This time it’s Iran. Iranian leaders are as fanatical as Hitler and would be many times more dangerous armed with nuclear weapons. Iran’s mullahs are believers in the Twelfth Imam and hold the apocalyptic theology that bringing the world to an end will usher in the “second coming” of this imam. He will establish the Shia religion as the world religion and begin an unending era of peace. In short, a worldwide nuclear holocaust favors Iran’s theology.

Moreover, the west is once again afflicted with gullible leaders who believe appeasement is the answer to aggression. The recent “agreement” championed by Kerry and Obama does nothing to stop Iranian bomb-making. Worse, it deludes the American people into thinking that something short of military intervention will stop a nation intent on eliminating Israel, whose existence it won’t officially recognize. Israel will be our Poland.

Once again, our leaders believe that peace can be won without victory over disturbers of peace. We are standing aside while a replay of 1936, 1938, and 1939 passes before our eyes … while, as Mark Twain observed, history rhymes.

Once again the Left believes – and tries to convince the rest of us – that our choice is between war and peace when in fact it is, and always will be, a choice between fighting and surrender.

Saturday, November 23, 2013

How We Got Thanksgiving Day

In early September of 1620, 104 men, women, and children crowded aboard a leaky ship that was about 90 feet long and 26 feet wide amidships and set sail for the New World. The ship, named the Mayflower, would be at sea for 66 days before making landfall on the point of the fish hook we call Cape Cod, where it anchored near the location that would become Provincetown. It was well north of its intended destination of Virginia and therefore the passengers had no patent from the English crown to settle in this place.

The passengers continued living on board for a month while a few men first explored the Cape area. Finding curious mounds, the explorers punched holes in several, revealing some to be granaries for corn and beans and others to be graves, whose desecration didn’t endear the trespassers to the natives. A boat was built to explore the leeward shoreline of Cape Cod, and finding the natural harbor at modern-day Plymouth and a defensible hill above it, they decided to make their settlement there. With winter approaching, shelter had to be built before the majority of passengers could disembark.

The long ocean crossing and the additional month crammed aboard ship had done little to improve the passengers’ dispositions, a problem compounded by the fact that 44 of them were religious dissenters from the Church of England while 66 made the voyage as a business venture. The dissenters called themselves the “Saints” and called the others “Strangers” – hardly a good way to create unity. Despite having more differences than similarities, their survival depended on cooperation, of which there was little on board the ship. Therefore, William Bradford, who had emerged as the informal leader, recommended that before disembarking every passenger sign an agreement setting forth rules for self-government, which later came to be called the Mayflower Compact.

The first winter was ghastly. The settlers, who now called themselves Pilgrims, lost over half their number in three months. The dead were buried at night for fear that the surrounding Indians would learn their number was dwindling, which might encourage an attack. Unlike the Indians encountered on the Cape, however, the Pilgrims had settled among the peaceful Wampanoags. And in March the tribal chief, Massasoit, sent Samoset as his ambassador to the settlers because Samoset spoke English. He had providentially learned English from sailors who had fished the coast and briefly lived on land nearby. After his first encounter with the Pilgrims, Samoset returned with Tisquantum, known in history as Squanto, an Indian who had been kidnapped in 1614 by an English slave raider and sold in Málaga, Spain. There he had learned English from local friars, escaped slavery, and found his way back on an expedition ship headed to the New England coast in 1619 – the year before the Pilgrims arrived.

Since Squanto spoke better English than Samoset, he became the technical advisor to the Pilgrims, teaching them how to raise corn, where and how to catch fish, and how to make things needed for working and hunting. He showed them plants they could eat and plants the Indians used for medicinal purposes. Squanto was the reason that the settlement survived during its first two years.

The first year the Pilgrims farmed communally and nearly starved. William Bradford’s diary tells us he astutely learned from that failure and decided thereafter that each man should forsake communal farming and instead farm for his own family’s food needs. "This had very good success," Bradford wrote, "for it made all hands very industrious, so as much more corn was planted than otherwise would have been. By this time harvest was come, and instead of famine, now God gave them plenty, and the face of things was changed, to the rejoicing of the hearts of many." The Pilgrims’ experiment in socialism was a valuable lesson.

After taking in an abundant harvest in the fall of 1621, the Pilgrims invited Squanto, Samoset, Massasoit, and 90 other Wampanoag men to join them in a three-day celebration of their success. The festivities consisted of games and feasting – and a not-so-subtle display of Pilgrim musketry just in case the natives became unfriendly in the future. This celebration is recorded in history as the first Thanksgiving – which it wasn’t.

In fact, two years earlier, on December 4, 1619, a group of 38 English settlers had arrived at Berkeley Hundred, part of the Virginia Colony, in an area then known as Charles Cittie (sic). It was located about 20 miles upstream from Jamestown, the first permanent settlement of the Virginia Colony, which had been established in 1607. The Berkeley settlers celebrated the first known Thanksgiving in the New World. Their charter required that the day of arrival be observed yearly as a “day of thanksgiving” to God. On that first Thanksgiving day, December 4, Captain John Woodleaf presided over the service. The charter specified the thanksgiving service: “Wee ordaine that the day of our ships arrival at the place assigned for plantacon in the land of Virginia shall be yearly and perpetually keept holy as a day of thanksgiving to Almighty God.”

But not for long. Nine of the Berkeley settlers were killed in the Indian Massacre of 1622, which also wiped out a third of the population of the Virginia Colony. Berkeley and other outlying settlements were therefore abandoned as the colonists moved back to Jamestown and other more secure points. Thanksgiving was forgotten.

The first national celebration of Thanksgiving occurred in 1777. This was a one-time-only Thanksgiving in which the 13 colonies, rather than celebrating food and God’s providence, celebrated the Continental Army’s defeat of the British at Saratoga that October.

In 1789 President George Washington made the first presidential proclamation declaring Thanksgiving a national event. Under this proclamation it was to occur later that year on November 26. Some were opposed to it, particularly those in the south. They felt the hardships of a few Pilgrims did not warrant a national holiday and besides, such proclamations were excessively Yankee and Federalist – or so they thought.

John Adams, the second president, issued a Thanksgiving proclamation in 1798 enlisting the help of the Almighty not only against celestial evil but also in the more mundane battles of his administration. He seemed to be asking God to side with the Federalists against his struggles with the Jeffersonians. When he later revealed that the proclamation had been recommended by (gasp!) Presbyterians, it set off a firestorm that Adams, a devout Unitarian, was leading a movement to establish the Presbyterian Church as the national religion. Adams became the first one-term president – a fact he attributed to his proclamation.

In 1779, as Governor of Virginia, Thomas Jefferson decreed a day of “Public and solemn thanksgiving and prayer to Almighty God.” But as the third president, he opposed nationalizing Thanksgiving proclamations. Writing to a Reverend Samuel Miller, Jefferson said, “I consider the government of the United States as interdicted by the Constitution from intermeddling with religious institutions, their doctrines, discipline, or exercises …”

In 1817, New York became the first of several states to officially adopt an annual Thanksgiving holiday. Each state celebrated it on a different day, but the South didn’t embrace the tradition. Therefore, for almost 60 years following Jefferson’s presidency, Thanksgiving remained a non-event on the national scene with no advocate until Sarah Josepha Hale.

Hale was no shrinking violet. She raised $30,000 for the construction of the Bunker Hill monument in Boston and started the movement to preserve Washington’s Mount Vernon home for future generations. She was a fervent believer in God and the American Union, as well as being a fierce abolitionist. Hale had made it her business to advocate and get action on symbols that celebrated America and what today is known as American exceptionalism.

Notwithstanding Andy Warhol, Hale had more than 15 minutes of fame. She authored the words to “Mary Had a Little Lamb” and was the editor for two prominent women’s magazines of her day. Beginning in 1846 she had written editorials calling for a uniform national celebration of Thanksgiving, writing four presidents and dozens of congressmen to push her cause.

In October 1863, America was embroiled in the Civil War, in which the concept of “Union” was very much at issue. Hale tried again, writing to her fifth president, Abraham Lincoln.

Hale’s proposal found a place in Lincoln’s heart. The battle of Gettysburg had been fought three months earlier, and he was to travel to the battlefield the following month. He had been invited to be the clean-up batter after the keynote speaker, Edward Everett, who orated for two hours. Lincoln’s dedicatory address of fewer than 300 words would become his most famous utterance.

Touched by Hale’s pleas, Lincoln issued his Thanksgiving Proclamation on October 3, 1863, setting its observance on the last Thursday of November.

After Lincoln’s assassination, his successor, Andrew Johnson, ever the contrarian, issued an 1865 Proclamation to “observe the first Thursday of December next as a day of national thanksgiving to the Creator of the Universe …” Yet the next three proclamations of the quirky tailor from Greeneville, Tennessee returned Thanksgiving to the last Thursday in November.

Despite the presidential proclamations, states went their own ways. Southern governors often opted for inexplicable dates of observance or none at all. Oran Milo Roberts, governor of Texas in the early 1880s, refused to observe Thanksgiving in the Lone Star State, snorting, “It’s a damned Yankee institution anyway.” But the South eventually succumbed to observing it.

Then along came Franklin D. Roosevelt whose finagling with the date of Thanksgiving created a national uproar.

In 1939 there were five Thursdays in November and the last one was the 30th, leaving only three weeks and change before Christmas. This wadded the boxers of the presidents of Gimbel Brothers, Lord & Taylor, and other retailers concerned less with tradition than with sales in the waning years of the Great Depression. They asked Roosevelt to move Thanksgiving to the 23rd, allowing an additional week for shopping. Although I’ve never understood why Christmas shopping couldn’t start before Thanksgiving, Roosevelt acceded and the country went ballistic.

Polls showed that 60% of the public opposed the change in date. Republicans in Congress were affronted that Roosevelt, a Democrat, would change the precedent of Lincoln, a Republican.

New England, from which the Thanksgiving tradition sprang, put teeth in its resistance. The selectmen of Plymouth, Massachusetts informed Roosevelt in no uncertain terms, “It is a religious holiday and [you] have no right to change it for commercial reasons.” Massachusetts Governor Leverett Saltonstall harrumphed that “Thanksgiving is a day to give thanks to the Almighty and not for the inauguration of Christmas shopping.”

The Reverend Norman Vincent Peale was outraged, calling it “...questionable thinking and contrary to the meaning of Thanksgiving for the president of this great nation to tinker with the sacred religious day with the specious excuse that it will help Christmas sales. The next thing we may expect is Christmas to be shifted to May first to help the New York World’s Fair of 1940.”

Nor did all merchants favor the presidential rejiggering of the Thanksgiving date. One shopkeeper hung a sign in his window reading, “Do your shopping now. Who knows, tomorrow may be Christmas.”

Usually the states followed the federal government’s lead on Thanksgiving, but they never relinquished their right to set their state’s date for the holiday. Predictably 48 battles erupted.

Republicans had wit on their side in the national lampooning of Roosevelt. Republican Senator Styles Bridges of New Hampshire urged the President to abolish winter. The Republican mayor of Atlantic City recommended that Franklin Roosevelt’s holiday be renamed “Franksgiving,” while the Republican Attorney General of Oregon came up with this bit of doggerel:

                                          Thirty days hath September,
                                          April, June, and November;
                                          All the rest have thirty-one,
                                          Until we hear from Washington.

Twenty-three states celebrated Thanksgiving 1939 on November 23, and another 23 stood fast with November 30. Two states, Colorado and Texas, shrugged their shoulders and celebrated both days, with Texas offering an innovative reason – to avoid having to move the Texas versus Texas A&M football game. The 30th was labeled the Republicans’ Thanksgiving, while the 23rd became the Democrats’ Thanksgiving.

Roosevelt’s experiment in moving the Thanksgiving date to improve Christmas sales continued for two more years, although 1940 and 1941 had Novembers with four Thursdays. But the evidence was against the assumptions – more shopping days did not increase sales. Roosevelt conceded and agreed to move Thanksgiving back to the last Thursday in November.

Under public pressure, the US House of Representatives passed a joint resolution in October 1941 to put Thanksgiving on the traditional last Thursday beginning in 1942. However, when the resolution reached the Senate in December, the Senate converted the resolution to law and changed one word: “last” was amended to “fourth” so never again would Thanksgiving fall on the 29th or 30th of November. The states followed suit, although Texas held on to the last Thursday until 1956.
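
To see why that one-word change matters, here is a minimal sketch in Python – my own illustration, not anything from the Congressional record – that lists the fourth and last Thursdays of November for a few years. The fourth Thursday can never land later than the 28th, while the last Thursday falls on the 29th or 30th whenever the month has five Thursdays.

    import calendar

    def thursdays_in_november(year):
        """Return all Thursdays in November of the given year."""
        cal = calendar.Calendar()
        return [d for d in cal.itermonthdates(year, 11)
                if d.month == 11 and d.weekday() == calendar.THURSDAY]

    for year in range(1939, 1943):
        thursdays = thursdays_in_november(year)
        fourth, last = thursdays[3], thursdays[-1]
        print(f"{year}: fourth Thursday = Nov {fourth.day}, "
              f"last Thursday = Nov {last.day}")

For 1939 it prints a fourth Thursday of November 23 and a last Thursday of November 30 – the very two dates the country split over – while in four-Thursday years the two coincide.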

So on this Thanksgiving, and on all future Thanksgivings, let’s raise a drumstick in salute to Sarah Josepha Hale, who championed its observance, and to Franklin Roosevelt, who went on to convince Americans that he could “save” daylight and move an hour from the morning to the afternoon.

Now, that’s a nice trick!

Saturday, November 16, 2013

Read Fiction to Learn Business

When I was studying engineering in college, our mathematics courses were taught by the mathematics department, located next door in the university’s liberal arts college. Dutifully we schlepped to the math classroom several times a week to endure hours of mind-numbing blackboard lectures displaying various mathematical pyrotechnics that the professor manipulated to produce an “answer,” never sure why an answer was important in the first place.

It turned out that the mathematics professors weren’t sure why an answer was important either. As we got into the complexities of real engineering problems in fluid mechanics, kinematics, and electromagnetics, the vapor of calculus, differential equations, and vector analysis had long since blown away, and the mathematics had to be retaught in the context of real problems that couldn’t be solved unless certain mathematical tools were employed.

The futility of acquiring knowledge in a vacuum was driven home to me as an adult when I tried to teach my children how to tell time. “What time is it when the big hand is here and the little hand is there?” In fact, they learned to tell time when they went out to play with friends, watches strapped to their wrists, and were told to be home no later than 6:15 p.m. for dinner – or else. Like mathematics, learning to tell time succeeds as applied knowledge.

These two experiences came to mind as I’ve read a spate of reports in recent months bemoaning the decline of the humanities – literature, poetry, and the social sciences – as more students and college resources shift to science, technology, engineering, and mathematics (STEM) courses. One op-ed critic this past summer essentially declared good riddance: “literature has been turned into a bland, soulless competition for grades and status.”

As a former university professor of business, I’ll offer my two cents’ worth: literature, a relatively recent addition to college curricula, is studied as a contextless subject, not unlike the way I learned mathematics (badly at first) or attempted to teach time-telling to my children. There once was a time when all education was taught under the rubric of philosophy – i.e., as an integrated whole. That, after all, is the way the world’s knowledge exists. Then some education genius came along and said, “Hey, how about we split this up into separate courses of study, say, mathematics, science, history, literature, and …” Well, you get the point. But the world’s knowledge isn’t split up into the disciplines taught today. It’s still an integrated whole.

One of our companies is studying literary fiction and plays in a way I think gives literature a sensible place at the table. Their goal is to gain a greater understanding of themselves and others as human beings and to learn how people in literature struggle with complex problems. The literary fiction they are studying is not the popular genre of Tom Clancy, Clive Cussler, and Frederick Forsyth, whose flat characters are formulaic and whose predictable plots are designed to carry readers on exciting journeys that whipsaw their emotions. Popular fiction is entertainment. Literary fiction draws readers into its characters’ dilemmas and teaches moral lessons.

How could fiction be the basis for a serious study of human behavior? How could fictional predicaments equip everyone in the company I’ve mentioned to deliver a better customer experience – their ultimate aim? Because a make-believe story about make-believe people in a make-believe place is not make-believe. The reader’s suspension of disbelief (a term coined by Samuel Taylor Coleridge) causes a story and its characters to become real. It allows readers to participate vicariously in the choices fictional characters make without suffering the consequences they suffer in the story. Notwithstanding the popular aphorism, reading fiction is gain without pain.

Kazuo Ishiguro’s award-winning novel, The Remains of the Day, was recently read by our company and discussed as a case study in loyalty. Loyalty is normally a virtue sought in organizations, but Ishiguro’s principal character, Stevens, takes it to a fanatical extreme. As an English butler serving Lord Darlington, he never questions what he’s told to do – including firing two of Darlington Hall’s Jewish maids in the 1930s, essentially a death sentence since, without jobs, they were liable to be deported to Germany.

Stevens’ job as the loyal head of Lord Darlington’s household consumed so much of him that he had no emotional reserve left to understand and return the affection of Miss Kenton, the housekeeper of Darlington Hall. Near the end of his career, if not his life, Stevens realizes he has misspent his life and in “the remains of the day” will die in lonely remorse.

The novel, which went on to become a film with eight Academy Award nominations, is a warning to every busy executive who is “married” to his job and has nothing left for his family.

Our company also studied Antigone, the 2,500-year-old play by Sophocles, which pits two characters, Antigone and King Creon, against each other as they take unrelenting stands on principle. Both are inflexible ideologues who spurn the counsel of those with opposing views, and this leads Antigone and Creon to a predictable and tragic end. The occupant of the White House would have done well to read Antigone and understand its moral lessons.

The Secret Sharer is Joseph Conrad’s short story about a newly appointed ship captain. He possesses all of the technical skills his job demands and indeed all of the experience needed except “the novel experience of command.” His insecurities threaten to upend his new career as a leader. But during his first night onboard, he volunteers to take the anchor watch from 8 p.m. to 1 a.m. – unheard-of duty for a captain. Walking the deck during his watch, he is alone. He discovers that the rope ladder over the side of the ship has not been hauled in. When he pulls on it, he finds a mysterious stranger clinging to the ladder in the water. He allows the stranger to come aboard without alerting any member of the crew – a breach of procedure.

Thus begins a cat-and-mouse game as the captain, whose name is never given, hides the stranger’s presence from his crew. The stranger – Leggatt – swam over a mile from the nearby ship Sephora, where he was the first mate. During a storm at sea he had killed an insolent crew member for refusing an order to reef a foresail. The Sephora’s captain, an inflexible rulebook officer, had locked him in his room to await trial for the unwitnessed incident. Leggatt refused to submit to this kind of “justice” and escaped while his ship was at anchor.

Conrad uses Leggatt as a doppelgänger for the insecure new captain. Leggatt possesses all of the personal attributes the new captain lacks. The effort to keep Leggatt hidden from the crew forces the young captain to take risks that steel his backbone. When the wind lifts the sails, the rookie captain orders the anchor hoisted and undertakes a daring feat of seamanship, tacking close enough to land for Leggatt to swim ashore to freedom. The new captain’s technical skill allows the ship to catch a land wind in a maneuver that frightens his first mate into virtual paralysis. Asserting his authority by ordering the paralyzed first mate to take charge of the crew, the new captain finds himself. That act and his seamanship win the admiration of the crew.

The Secret Sharer is a case study of a new manager in a new role with a new team. No Harvard business case could teach the struggle as well.

People do not read fiction or watch films as observers. Rather they are drawn to participate in the story, making it reality. This has several benefits. It lets them experience how others deal with problems – how their dilemmas confuse them, engage them rationally and emotionally, challenge their values, and force them to balance competing issues. Reading fiction nurtures skills in observation, analysis, diagnosis, empathy, and self-reflection – capacities essential for good customer experiences, for caring about others, and for promoting good leadership practices. Fiction helps its readers to develop insights about people who are different from themselves. As they ponder what they might have done if confronted with a character’s situation, fiction helps its readers to gain insight about themselves as well.

Literary fiction, in contrast to popular fiction, focuses on the psychology of its characters and their interrelationships in the story. The authors of literary fiction reveal their characters’ minds only vaguely, leaving out important details. The omission requires the reader to fill in the gaps if the characters’ motives are to be understood. Literary fiction is rarely explicit about the internal dialog running inside each character’s mind, which consequently forces the reader to imagine it. This is the way the real world works.

Real world people are complex and multi-dimensional. Their experiences transform them. The same thing happens in literary fiction. Author/critic E. M. Forster calls such characters “round,” distinguishing them from the “flat” characters of popular fiction. The inner lives of round characters are only partially understood by the individuals themselves. Little wonder that readers also struggle to understand them. They can be confusing because they don’t match the reader’s expectations and prejudices about who they should be and how they should act.

Even our children can learn important life lessons from fiction. In his book, The Uses of Enchantment, author Bruno Bettelheim asserts that fairy tales help little children learn how others – often children themselves – work through their problems. To that end, G. K. Chesterton said, “Fairy tales do not tell children that dragons exist. Children already know that dragons exist. Fairy tales tell children dragons can be killed.” 

The dragons of the business world, however, do not appear as Grendel or as Humbaba, the hideous antagonist of the Epic of Gilgamesh. They appear as Enron, Tyco, Global Crossing, WorldCom, and Xerox. How did the leaders of these organizations allow such scandals to happen? Surely Enron CEO Jeff Skilling did not graduate in the Harvard MBA class of 1979 with the goal of spending years in prison. He was happily married, successful, had three young children, and probably a dog who wonders where he went.

Sociologist Robert Jackall explored how good people make bad decisions in his book, Moral Mazes. He notes that the managers interviewed in his research were not “evil” people in their everyday lives. But in the context of their jobs, they had developed a separate moral code, which Jackall calls the “fundamental rules of corporate life.” It was an altogether separate life – almost a form of non-pathological schizophrenia – needed to resolve the dissonance of their bipolar world.

It’s fair to ask what the study of fictional characters, their dilemmas, and their decisions has to do with the customer experience – the goal of our company’s study. I could argue there are at least three reasons.

First, a customer’s experience is emotional. To deliver it successfully, every member of our company must get into the characters of their customers – never an easy task because most of us lean toward some degree of narcissism. The Secret Garden by Frances Hodgson Burnett tells a story of Mary, a bratty little self-centered orphan girl sent to live in the English manor house of her uncle. She discovers a secret garden which was created by her uncle’s wife and locked when that aunt died ten years before. Discovering the key, she enters the overgrown garden and, as she begins transforming it, it transforms her.

Mary, the orphan girl, discovers the manor house hides another secret – a secret room in which her cousin, heretofore unknown to her, lives bedridden, the victim of a mysterious spinal ailment that is more psychological than real. Mary smuggles her cousin Colin into the secret garden, and he too is transformed by the garden and the outdoors. Convinced by Mary that his handicap is psychosomatic, he leaves his wheelchair permanently. The children run and play in the secret garden like any healthy children. When his neglectful father returns from traveling and mourning his wife’s death, his son’s newfound health gives the widowed father a reason to get on with his life. He is transformed and becomes a loving father, a role he has shirked for ten years.

A true customer experience can only be delivered by real people who believe in its power to transform its recipients. We’ve all been contaminated by the sour dispositions of some people, and we’ve all been lifted up by the sunny dispositions of others. For good or ill, we tend to pass on what we get from others. The Secret Garden uses the regenerative quality of an untended and overgrown garden as a symbol that all life is regenerative. No better argument can be made for the regenerative quality of a customer experience delivered by sensitive people who believe in its power. Even if there weren’t a scintilla of economic benefit in doing it, why wouldn’t we?

Second, in B2B businesses there isn’t “a customer” – there are multiple customers. Each is a different type of customer, and one size won’t fit them all because their needs are different. We can identify how the experience for one type of customer should differ from that of others. But can we identify how the experience of one individual should differ from that of another? That requires insight into individuals and their differences. It takes judgment to decide how much accommodation of their differences is justified. And it takes patience and sensitivity to deal with the complexities of people who are often unaware of how their behavior comes across to others. As Ethel Thayer counseled Billy Ray after the fire scene in the play and later film On Golden Pond:

You mustn’t let Norman upset you, Billy … He wasn’t yelling at you … he was yelling at life … he’s like an old lion … he has to remind himself he can still roar…

Billy, sometimes you have to look hard at a person and remember that he’s doing the best he can. He’s just trying to find his way, that’s all. Just like you.


You don’t find that kind of insight in business books.

The third reason is a general belief. I can’t be persuaded that a company of people steeped in a wide range of literary fiction could deliver anything less than a world-class experience to their customers. They would treat each other differently. They would relate better to the complexities of family and friendships. They would be more effective human beings. I don’t think this oversells the value of reading literary fiction. Fiction is a hothouse of human behavior told in terms of its context.

All of the laws, the lectures, and the sermonizing about the evils of racism will never have the power to change minds that a single reading of Harper Lee’s To Kill a Mockingbird has. Yet it is an obscure incident – Scout Finch has a bad day with a teacher at school and wants to stay home – that brings the wisdom of her father, Atticus, to bear on a seemingly trivial problem:

First of all, he said, if you can learn a simple trick, Scout, you'll get along a lot better with all kinds of folks. You never really understand a person until you consider things from his point of view – until you climb into his skin and walk around in it.

Lee’s entire story could be encapsulated in that summation.

Your sins may find you out, but that doesn’t seem to prevent people from trying to get away without their sins being discovered. In Dostoyevsky’s Crime and Punishment, Raskolnikov committed what seemed to be the perfect crime. His doppelgänger Svidrigailov gives Raskolnikov a glimpse of where his life is headed. Readers get the sense that Raskolnikov could have gotten away with the murder he committed, but his conscience gave him no peace and he voluntarily confessed his deed. A moral spark remained in his soul. He was sent to prison, content that redemption comes only through suffering. Too bad Jeff Skilling didn’t read this novel before becoming CEO of Enron. Too bad he didn’t read it in prison. He has yet to admit his guilt and will have no peace until he does.

I predict a bright future for fictional literature if it moves over to the right context: the classrooms of business (and perhaps other professions). Tens of thousands of business books are published every year. Only a relative handful of them are worth reading, and the scope of each is narrow.

In contrast there are hundreds of thousands of works of literary fiction. Most are worth reading and their scope is broad enough to serve multiple interests.

If I were asked to suggest a good business book for business leaders to read, I’d say read fiction.

Saturday, November 9, 2013

Why ObamaCare Will Fail

Hubris, the overestimation of one’s competence and ability, especially among those in positions of power, has sent mankind on many a fool’s errand and has been the cause of much anguish through the ages.

One of the earliest recorded instances of it is in Genesis 11. In the days following the biblical flood, people spoke a common language, allowing them to collaborate in joint ventures, such as the building of the great tower of Babel in modern day Iraq. “Come let us build ourselves a city and a tower with its top in the heavens,” they said, in order “to make a name for ourselves.” God observes their hubris – the desire to be like Him – and confuses their language so they can no longer communicate with each other; then He scatters them so that their construction project is left incomplete.

Farther down mankind’s timeline, Solomon, allegedly the wisest man who ever lived, warned that “pride goes before destruction, and a haughty spirit before a fall.” Sage advice. Hubris is accompanied by a willingness to take excessive risk. It was at the root of the Challenger disaster, the Bay of Pigs catastrophe, and, more recently, the Deepwater Horizon oil rig explosion.

When Barack Obama assumed the office of the presidency, our country was facing high unemployment, a meltdown of financial institutions, two foreign wars, a near-nuclear Iran, the misadventures of a tyrant in Korea whose sanity was questionable, and a fulminating conflict in Palestine. Yet despite all of these challenges, Obama and his minions in Congress chose to “reform” the American healthcare system, which represents one-sixth of the economy and was not a smoldering problem. We are left to guess his motivation in this risky undertaking, but one thing is certain: Obama is not burdened with excessive modesty. His self-image borders on messianic. One wonders whether this large-scale government reengineering, like the tower of Babel, was driven by the desire to “make a name for ourselves.”

However, even if it had the noblest motivations, ObamaCare is doomed to fail because of its sheer scale and risk. Here’s why.

The American healthcare system is, well, a system. A system by definition is an aggregation of interdependent parts, activities, or functions – very few of which are superfluous. Shut down one part, activity, or function and the system, or the subsystem of which it’s a part, will cease to function. In complex systems, a failure in one part can cascade throughout the system, causing failures in related subsystems.

In mid-July of 1977, for example, a New York City blackout occurred because a lightning strike at a substation tripped two circuit breakers. A loose locking nut in one breaker box together with a tardy restart cycle ensured that the breaker was not able to reengage and allow power to begin flowing over the lines again. This caused the loss of two more transmission lines, which caused the loss of power from the Indian Point nuclear power station, which caused two major transmission lines to become overloaded, which caused automatic circuit breakers to trip, which reduced power on the grid, which put the city in total darkness one hour after the lightning strike, which caused widespread looting and rioting.

Note the cause and effect linkages.

Such is the nature of systems. Their greatest strength – interconnectedness – is also their greatest weakness. Our inclination to think there is symmetry between causes and consequences – that disastrous system failures are caused by equally monstrous blunders – is usually wrong. The root cause is most often quite benign and accelerates to a catastrophic ending. A lightning strike on a remote box worth less than $25 caused hundreds of millions of dollars in riot and looting losses and damage almost 100 miles away.
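
Because the lesson here is how interdependence propagates failure, a small sketch may make it concrete. This is a toy model of my own, in Python – not the actual grid, and certainly not the healthcare system – in which a component fails whenever anything it depends on has failed:

    # Toy cascading-failure model: a component fails if anything it
    # depends on has already failed. The chain loosely mirrors the
    # 1977 sequence described above.
    depends_on = {
        "breaker": [],
        "transmission_lines": ["breaker"],
        "indian_point_power": ["transmission_lines"],
        "grid_capacity": ["indian_point_power"],
        "city_lights": ["grid_capacity"],
    }

    def cascade(initial_failure, depends_on):
        """Return every component that fails, directly or indirectly."""
        failed = {initial_failure}
        changed = True
        while changed:
            changed = False
            for component, deps in depends_on.items():
                if component not in failed and any(d in failed for d in deps):
                    failed.add(component)
                    changed = True
        return failed

    print(sorted(cascade("breaker", depends_on)))
    # Every component ends up in the failed set: one cheap part, total darkness.

The point is the asymmetry: a trivial root cause, a system-wide consequence. That property is precisely what makes tinkering with a vast, interconnected system so risky.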

Unlike many technological and organizational systems, the American healthcare system is not the product of a master design. It has evolved over many decades and continues to evolve. It is so vast that there is no person who understands how it works. There are people who understand how parts of the system work – relatively small parts. There are people who possess a global view of how the system works. But there is no one who possesses a ground-level understanding of how inputs are converted to outputs from end to end throughout the system. No one.

Into this unknown world of cause and consequence fools rush where angels fear to tread. Yet Obama and his Democrat lawmakers, academics, and policy wonks, with unbounded hubris, proposed to improve the effectiveness and reduce the cost of a system no one fully comprehends – a system with perhaps billions of micro-connections and work-arounds, most of them invisible to people working inside the system, let alone to people outside of it; a Pick-up Stix web of relationships whose equilibrium can be thrown into a tailspin of unintended consequences by small disruptions.

The flagships of the ObamaCare invasion will be over a hundred new government bureaucracies under the command of managers who will face implementation problems they have never confronted before. These are de novo bureaucracies with no legacy of precedent, whose operating procedures will have been composed by unrelated small armies of regulation writers who have never worked in the administrative environments their rules prescribe – each anthill of activity laboring independently of the other regulation-writing anthills, thus assuring there is no coherency in their collective work product. There will be, however, a bumper crop of unintended outcomes, some of which will require years to erect adequate organizational defenses against their recurrence. As has happened with Social Security, Medicare, and Medicaid, costs will exceed the most pessimistic CBO estimate, perhaps two-fold or more, jeopardizing the U.S. economy for decades, if not forever. The bureaucracy managers will fail, although there will be few objective standards to reveal how badly they are failing. Their failures will be due not so much to the fact that they have never dealt with the issues facing them as to the fact that no one has.

Orbiting any new government program with the scale and intrinsic risks of ObamaCare will be two potentially fatal threats. One is the naïve optimism that things will go as planned. They won’t. However, instead of launching initiatives as trial projects with the intent of adapting as new learning is acquired, as well-run business organizations do, they will be launched with a bureaucratic rule book whose effectiveness is believed to correlate with its weight. Immeasurable resources and time will be spent trying to make the system work as planned. In predictable bureaucratic behavior, breakdowns and bottlenecks will be “fixed” with patch upon patch, rule upon rule – repairing rather than replacing defective operations.

The second fatal threat is that ObamaCare is not customer-centric. It is procedure-centric. Customer satisfaction was never its goal. This is by design. In their arrogant hubris, Obama and his Democrat legislators assumed as an article of faith that government makes better decisions – certainly more rational ones – than the recipients and providers of healthcare services. The recess appointment of Donald Berwick in 2010 to head the Centers for Medicare and Medicaid Services and its $900 billion annual budget made that abundantly clear. Berwick is an academic technocrat who has publicly stated multiple times his lack of confidence in private enterprise solutions for healthcare delivery. Yet one need only look to public education, Amtrak, and the U.S. Postal Service to see how Procrustean government-designed and government-managed services are. These institutions have “survived” only because there are competitive alternatives to using them. The aim of ObamaCare is to eliminate competitive alternatives and have only a single payer.

The failure to make ObamaCare customer-centric could be its undoing. The absence of a feedback loop from the market and of alternative choices assures that healthcare services will be substandard. Americans, with their legacy of enjoying the best products and services in the world, may suffer this for a while, but not for long. Democratic society works because of the consent of the governed. People pay their taxes, follow society’s rules, and accept civil authority voluntarily. The few who don’t are manageable because they are few. This country has not had to deal with large-scale civil disobedience since the Civil War, but it would be foolish to think that civil disobedience is not a possibility if society believes its public institutions are not serving the interests of the majority. Hopefully society’s frustration with ObamaCare will be resolved at the ballot box, not in riots.

These criticisms of ObamaCare do not mean that the American healthcare system has no room for improvement. It does. But the system seems to work for about 85% of its users. Instead of focusing on the 15% for whom it doesn’t work well, the hubris of ObamaCare is its redesign of the entire system.

Why not take insurance, for example, and focus on improving it? Small-scale, highly focused interventions would produce improvements in a relatively short period of time. At the very least, new knowledge would be produced about what works and what doesn’t, and that new knowledge would lead to improvements. Such an approach is experimental, flexible, and adaptable. Notwithstanding Berwick’s lack of confidence in private enterprise, a private sector partnership would be critical to the success of the undertaking. Once insurance is “reformed,” perhaps unnecessary testing and treatment could be addressed next, followed by improvement initiatives confronting other failures of the healthcare delivery system.

This piecemeal approach has worked in improving business processes. It would work in improving the cost and quality of healthcare delivery. If performance improvement had been Obama’s aim, he would not have undertaken a large-scale, high-risk overhaul that has no chance of succeeding. He would have taken a more modest, less visible, and less risky approach.

The hubris of his claim that while he wasn’t the first president to try reforming the American healthcare system he intended to be the last revealed an aim that is ages old: “Come; let us make a name for ourselves.”

***

The preceding blog was posted in July 2010 – 40 months ago. I’ve re-posted it today not to show its clairvoyance, but to show that ObamaCare’s recent and well-publicized failures are, as the original post asserts, due to the arrogant belief that large-scale change can succeed despite the complexity of the problem it attacks if only the “right people” can be assembled as the change agents. A remarkably good piece of investigative reporting that appeared this past weekend reconfirms ObamaCare’s conceit.

One of its architects, Dr. Ezekiel Emanuel, who made a fool of himself on Chris Wallace’s Fox News Sunday program, wanted a project leader with proven expertise in business, insurance, and technology. Instead, Obama chose Nancy-Ann DeParle, a Clinton political hack with none of the expertise needed to lead a project like this.

“They were running the biggest start-up in the world, and they didn’t have anyone who had run a start-up, or even run a business,” David Cutler, a Harvard health economist and adviser on ObamaCare, observed. “It’s very hard to think of a situation where the people best at getting legislation passed are best at implementing it. They are a different set of skills,” Cutler said.

In 2008 voters gave Democrats the keys to the kingdom – a once-in-a-lifetime bullet-proof majority in both houses of Congress plus the White House, letting Democrats have a free hand to run the government without Republican interference. ObamaCare was the result. It was rammed through the legislative process without a single Republican vote. Many of the Democrats who voted for the health reform law have since lost their seats or opted to retire, victims of voter remorse. Among the retirees is Max Baucus, the Senate architect of ObamaCare. Three years after its passage, a majority of Americans oppose the law – and by double-digit margins in many polls. False promises and a bush-league launch of a key element in the healthcare takeover have pulled Obama’s approval down to 40% and pushed his disapproval figures up to 53% according to the latest Gallup poll. And for what?

For political gain. ObamaCare was never about improving healthcare delivery. It was about ideology – a scheme to lay the groundwork for a single payer health system, the Holy Grail of liberalism.  That was a prize worthy of Obama’s overreach … and his political aspiration.

“Come; let us make a name for ourselves.”

Saturday, November 2, 2013

Working Longer

The first of 78 million baby boomers began reaching age 65 a couple of years ago. Their impact on society will continue to be felt over the next two or three decades. Boomers are that cohort of people born between 1946 and 1964. Their parents had put their lives on hold to fight World War II, after fighting the Great Depression, and with the war over they returned to a “normal” life, which among other things meant getting married and starting a family. They succeeded at both more than any previous generation.

In the depression decade prior to the war, families produced an average of two children. But in the years following the war, family sizes jumped almost immediately to three and peaked at 3.8 children in the late 1950s. Average family size would not settle back to the pre-war level until the early 1970s. US population increased 44% during the 20-year span of the baby boom.

The baby boom became a veritable “pig in the python” as it has moved through society’s life stages to the present. When the firstborn boomers reached school age, they set off a school-building boom. They entered the workforce from the mid-1960s through the late 1980s, creating a boom in white-collar jobs and leading the transition from a manufacturing economy to a service economy and then to an information/knowledge economy.

As the “pig in the python” has begun to reach age 65, what’s next? Retirement? Don’t count on it.

The age of 65 as the milestone for retirement was conceived by Otto von Bismarck of Germany in the late 1800s, when old Otto was conniving to find a way to combat the German Socialist Party. He created a social security system to appeal to his country’s working class, but, being the ethically challenged politician that he was, Bismarck knew his program would cost very little. The average German worker of the time never lived to age 65, and the few who did lived only a year or two beyond it.

Franklin D. Roosevelt, one of Bismarck’s most ardent admirers, saw the political gimmickry in the German social security system and fobbed off the Social Security Act of 1935 on Americans whose life expectancy was then 61 years. Life expectancy began exceeding 65 with the end of WW II. Today it is about 78 and will soon be 80. The Census Bureau projects life expectancy to rise to 86 by 2075 and to 88 by the end of the century. One in every nine baby boomers (nine million of the 78 million people born between 1946 and 1964) will survive into their late 90s, and one in 26 (or three million) will reach 100.

Boomers were a rebellious bunch in their teen years, and they won’t go quietly into retirement. If anything, they will reinvent what retirement means. The notion of a golden age of leisure following a career of work is heavily glossed by mid-20th-century values, when work was physically demanding and about as intellectually stimulating as reading the Manhattan telephone directory. Boomers are better educated and healthier than their parents’ generation, and many will continue working well past age 65 – either in their current career or in a second one, which may be unpaid volunteer work.

Twenty years ago just one in ten people older than 65 was still working; today that figure has reached almost one in five – and it's continuing to grow. A study of people who retired and then returned to work found that over half were employed in full-time paid work five years later, and one in five worked more than 41 hours a week. Over one in ten men over 75 years of age in a recent study continued to work, whereas half of all women in that age cohort were still working.

This is good economic news. The over-65 population will grow from about 13% of the population today to over 20% in about 25 years. Remaining in the workforce boosts economic growth, reduces demand for public assistance by those who lack the resources to retire at 65, and increases income tax revenues. And while the evidence is only anecdotal, those who continue to work appear to have better physical and mental health than those with time on their hands, who are inclined to overeat, abuse alcohol, and die prematurely.

Delaying their application for Social Security means working seniors can increase the size of their future check by 8% for each year of delay. The credit stops accruing at age 70, after which there is nothing to gain from waiting longer. There aren’t many investments today that reliably pay 8% yields, so continuing to work has a double benefit – a wage income and a yield on deferred retirement income. It’s easy to see why those who have the health and disposition to do so defer retirement and, even then, many don’t make a complete exit from work.
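
For readers who want to check that arithmetic, here is a minimal sketch in Python. The $2,000 monthly benefit and the age-66 full retirement age are my own illustrative assumptions, not figures from this post or from the Social Security Administration’s tables; the sketch simply applies the 8%-per-year credit described above.

# A minimal sketch of the delayed-claiming arithmetic described above.
# Assumptions (illustrative only): a hypothetical $2,000 monthly benefit at a
# full retirement age of 66, and a credit of 8% of that base benefit for each
# full year claiming is postponed, with the credit ceasing to accrue at 70.

def delayed_benefit(base_monthly, claim_age, full_retirement_age=66, credit=0.08):
    """Monthly check if claiming at claim_age instead of full retirement age."""
    years_delayed = min(max(claim_age, full_retirement_age), 70) - full_retirement_age
    return base_monthly * (1 + credit * years_delayed)

for age in range(66, 71):
    print(f"Claim at {age}: ${delayed_benefit(2000.0, age):,.0f} per month")
# Claiming at 70 instead of 66 raises the check by 32% under these assumptions.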

With the percentage of over-65s growing while the percentage of prime-working-age adults (i.e., 25- to 54-year-olds) shows little growth, the US workforce can only grow by extending the date of retirement. This comes at an opportune time because the number of workers per retiree is positioned to drop from 4.5 to 3.0 by 2030 if people retire at 65. (The ratio was 160 when FDR foisted the Social Security shell game on gullible Americans, and it was still 42 at the end of WW II.) At ratios like those, Social Security taxes can stay low. The Social Security Ponzi scheme “works” as long as more money is paid in than is paid out, but the tax rate must rise as the ratio falls. The fact that a growing number of post-65s continue to work therefore helps the ratio.
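
The relationship between that ratio and the payroll tax is simple enough to sketch in a few lines of Python. The 40% “replacement rate” (the average benefit as a share of the average covered wage) is my own illustrative assumption, not a figure from the post; the point is only that the required tax rate moves inversely with the worker-to-retiree ratio.

# A back-of-the-envelope sketch of why a pay-as-you-go system's tax rate must
# rise as the worker-to-retiree ratio falls. The 40% replacement rate is an
# illustrative assumption, not an official Social Security figure.

def required_payroll_tax(workers_per_retiree, replacement_rate=0.40):
    """Tax rate at which current workers' contributions cover current benefits."""
    return replacement_rate / workers_per_retiree

for ratio in (160.0, 42.0, 4.5, 3.0):
    print(f"{ratio:>5} workers per retiree -> {required_payroll_tax(ratio):.1%} payroll tax")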

Labor force participation among older workers fell in the five decades following the enactment of the Social Security Act. But it began to grow in the late 1990s, helped by a shift in the perception of what “old age” meant. A recent survey reported 60% of the over-55s polled felt younger than their age. This positive attitude correlates with their income and job responsibility.

The idea that seniors who continue to work deprive younger workers of jobs is without merit. Employed older workers with deep experience are more likely to create jobs by facilitating business expansion than they are to produce a zero-sum outcome. Their knowledge makes others more productive, which produces jobs. Many in their post-retirement years also create businesses. Harland Sanders comes to mind.

Still, there are dark clouds on the retirement horizon for some. Nearly half of the workforce at age 50 will have to push back the retirement age they expected when they were 40; according to a recent study, they now know they must work an additional three years. Financial circumstances are the cause of most of these extensions.

For example, 40% of homeowners over 65 had mortgage debt in 2010, more than double the percentage two decades earlier. The refinancing boom prior to the 2008 Great Recession induced many to capitalize on Fed-driven low mortgage rates. Unfortunately, many chose cash-out refinancing instead of paying down mortgage balances and shortening mortgage terms. Some of the equity cash-out was needed to finance education loans for children, but some went to vacations and cars when the economic future looked bright. Now nearing retirement age, senior couples are stuck with underwater mortgages: balances that exceed the value of their homes.

Retirement savings were also battered in the Great Recession. Companies have abandoned defined-benefit pension plans in favor of 401(k)-type plans, which aren’t as generous. The Great Recession reduced defined-contribution plan values below the amount needed to support retirement without a radical lifestyle change. The fear that many facing retirement rightfully have is that they will outlive their assets and become a burden on their children or be forced into some form of public assistance. Consequently, older workers continue to work in order to rebuild retirement asset values, assuming continued employment is an option. For some it isn’t, and they must seek part-time work, often in multiple jobs.

The “age 65 retirement delusion” causes too many people to ignore the actuarial fact that men are living to 76 and women to 81. Longevity will continue to increase for both sexes because the natural limit of life may be well north of 90. People now in the workforce would do themselves a service to forget retiring at 65 unless they are unusually well off.

Yields on stock and bond funds have been squeezed by Fed policies so that the traditional “rules of thumb” about saving no longer apply. Not long ago, financial advisers told clients to save eight times their final year’s income (presumably the highest) for retirement. Today, advisers are more likely to say eleven times, and that’s probably too little. Fifteen or twenty times is more likely to become the norm, especially since no one knows what inflation beast Ben Bernanke’s reckless money-printing schemes may release.

But do the math. A person earning $100,000 in the last year of work would need to have saved $1.1 million, according to the “11 times” rule, before retiring. In years past, another rule of thumb was to expect yields of 4% to 5%. If those yields existed today, which they don’t, a retiree could withdraw the yield each year without encroaching on the saved corpus. Withdrawing 4% to 5% per year would pay out $44,000 to $55,000 per year – hardly a kingly amount (the median US income is $51,000). But in a zero-yield environment, the corpus is gone in 25 years at 4% and in 20 years at 5%. Yields above zero would cover part of the annual withdrawal, but until yields reach 4% or 5%, some portion of each year’s payout depletes the corpus. I don’t know of investment instruments today with 4% to 5% yields, nor do I know many people who could cut their lifestyle in half in retirement – i.e., from $100,000 to about $50,000.
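
For anyone who wants to run the numbers, here is a minimal Python sketch of that arithmetic. The $1.1 million corpus and the 4% and 5% withdrawal rates come from the paragraph above; the 2% intermediate yield is my own illustrative addition, and inflation and taxes are ignored.

# A minimal sketch of the withdrawal arithmetic above: an illustrative
# $1.1 million corpus (the "11 times" rule applied to a $100,000 final salary),
# fixed annual withdrawals of 4% or 5% of the starting corpus, and a constant
# yield. Inflation and taxes are ignored to keep the point simple.

def years_until_depleted(corpus, annual_withdrawal, annual_yield, cap=100):
    """Count the full end-of-year withdrawals the corpus can support (capped)."""
    years = 0
    while years < cap:
        corpus = corpus * (1 + annual_yield) - annual_withdrawal
        if corpus < 0:
            break
        years += 1
    return years

START = 1_100_000.0
for rate in (0.04, 0.05):
    w = START * rate                      # $44,000 or $55,000 per year
    zero = years_until_depleted(START, w, 0.00)
    some = years_until_depleted(START, w, 0.02)
    print(f"${w:,.0f}/yr: lasts {zero} years at a 0% yield, {some} years at 2%")
# At a yield equal to the withdrawal rate, the corpus is never drawn down.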

Blame the Fed for its profligate bond-buying stimulus that helped cause this retirement environment, and blame reckless government spending. But also blame the boomers themselves for saving too little toward retirement. The recent savings rate has been about 4.5¢ of every after-tax dollar – down from 12.5¢ in the early 1970s, when the rate began its almost 40-year decline.

While some continue to work because they must, others keep working for other reasons. At age 82, Warren Buffett is among the many who have worked well past the traditional retirement age. One wonders why, given his wealth, but he isn’t alone. One reason the rich get richer is that many of them don’t stop working. A recent survey of earners in income categories from $100,000 to $750,000 found that the highest earners were the most likely to keep going. The wealthiest are also the most likely to attribute their success to hard work. After a lifetime of hard work, why stop? So they keep working.

Others, earning considerably less income, keep working for the enjoyment of what they do. The old saw that if you do what you love you’ll never work a day in your life is true for many. Why stop doing something you’ve spent a professional lifetime learning how to do well? The converse is also true: why do something for a career that you can’t wait to get away from in retirement?

One of the most professionally rewarding careers is apparently university teaching. An amazing 81% of professors in a recent study cited job satisfaction as their reason for continuing their careers beyond 65, and with the 1994 elimination of mandatory retirement at age 70 in higher education, many contented professors have no plans to retire.

A dimension of the retirement issue that is often ignored is the brain drain it represents for companies and organizations that lose valued employees. For example, about half of the nurses in hospitals and elsewhere are over 55. As they retire, their “manpower” can be replaced – we hope – but their expertise and instincts can’t. A young nursing graduate will spend 35 years getting to the point where he or she can intuitively respond to patient needs, especially in specialized care like the neonatal unit, where the patients can’t answer questions and intuition may save lives.

Efforts to combat “brain drain” losses exist, but rarely as a well-conceived response to a strategic threat. Some companies redesign their work environments to induce valued employees to stay beyond retirement age and to discourage early retirement. The impending nurse shortage, for example, has led administrators to put nurses’ stations closer to patients. Trucking companies, already dealing with a shortage of drivers, especially long-haul drivers, are working with cab manufacturers to create more comfortable sleeping spaces; they are organizing driving teams and modifying work schedules. Flexible scheduling, part-time work, and telecommuting are becoming more commonplace as ways to accommodate workers whose skills a company wants to retain.

As a practical matter, the most important assets of businesses and other organizations walk out the front door every day. Leaders should be asking themselves what is being done to capture the institutional knowledge and industry know-how that resides in those mobile heads – particularly the heads of older workers, who rely on intuitive intelligence more than on job skills in their work.

The true knowledge people gain with age can’t be found in textbooks or corporate documents any more than Grandma can write down a recipe for a dish she has been honing and preparing instinctively for decades. Long after a key employee has departed for the golf links on earth or in heaven, there may be questions about where an ancient document is filed or how a complex procedure should be performed or reasoned out. Yet preserving institutional intelligence in a knowledge base is among the most neglected acts of corporate self-preservation. Even small organizations of 500 or fewer employees have a hard time knowing who among them knows what.

Social networking platforms, internal wikis and blogs, email archives, employee knowledge and expertise profiles, collaboration and sharing procedures, and internal chat rooms and forums are all attempts to snag floating corporate knowledge, but they are in their infancy and rarely a mission-critical priority in most organizations.

The key leaders of one of our companies recently spent several days off-site. Among other things they worked in small groups to give narrative to the company’s business strategy and, most importantly, to diagram the business model that would execute that strategy. Their work product was impressive and will, of course, be preserved.

What won’t be preserved is a description of the process, much of which was developed on the fly, that led to those outcomes. A preserved, detailed record of their process – the false assumptions, the blind alleys, the breakthroughs and failures, and the thought processes – would be a treasure map. It would enable future generations of leaders, as well as today’s leaders in our other companies, to be virtual eyewitnesses to their struggle. A record of the process is more important than a record of the product. Given an understandable description of the process, the product could be reproduced even if future business circumstances compel a very different product. But this engine can’t run in reverse: knowledge of the product won’t reproduce the process, which will soon be lost in the haze of time if not written down. Even the original participants in this meeting will have difficulty repeating their effort in a year or two.

Most of the boomers are still working. They converted the American economy from the manufacturing age to the service age and then to the information and knowledge age. They have worked differently, and longer, than any previous generation. They are changing what retirement means. But ultimately they must retire. The greatest transfer of wealth in the history of the country will pass from the boomers to their heirs. But they cannot pass on their intellectual assets as they will their physical assets.

Before they shuffle off this mortal coil, we must find a way to preserve what they have spent almost 80 million adulthoods learning.