Tuesday, January 31, 2023

Forgotten History- The U.S. Military’s Obsessive WWII Ice Cream Crusade

An army, Napoleon Bonaparte once noted, marches on its stomach. No matter how vast its ranks, advanced its weaponry, or brilliant its commanders, a military force full of hungry, malnourished troops is unlikely to be an effective one. The central role of food in combat effectiveness goes beyond merely supplying calories; there is no quicker way to sow demoralization or even outright mutiny than to supply one’s troops with low-quality food. Few countries have understood this better than the United States, which over the years has gone to great lengths to provide its fighting men and women with the best possible food no matter where they are deployed. Despite severe shortages of staple foodstuffs such as sugar, during the Second World War the U.S. Government made such comforts as chewing gum, coffee, chocolate, and tobacco an essential part of every serviceman’s rations. But few treats could match the morale-boosting power of ice cream. U.S. forces were addicted to the stuff, consuming nearly 135 million pounds of it in 1943 alone. So great was demand, in fact, that the U.S. Army even built miniature ice cream factories just behind the front lines, while the U.S. Navy spent $1 million on a floating ice cream factory capable of churning out over 500 gallons of cold, creamy goodness every single day.

America’s obsession with ice cream goes all the way back to its founding. George Washington spent the modern equivalent of $5,000 on ice cream in a single summer, while Thomas Jefferson returned from his time as U.S. ambassador to France with a hand-cranked churn and a handwritten recipe for vanilla ice cream. Over the next 100 years, ice cream became intimately associated with childhood, seaside vacations, and other comforting experiences – so much so that it became standard fare for patients convalescing in hospitals while, starting in 1921, immigrants arriving at New York’s Ellis Island were fed ice cream as part of their introduction to American life. But what really established ice cream as a cornerstone of the American experience was the 1919 passage of the 18th Amendment to the U.S. Constitution, which banned the manufacture and sale of alcohol in the country. Prohibition forced brewers such as Anheuser-Busch, Yuengling, and Stroh’s to switch to ice cream making in order to stay afloat, while those citizens uninterested in or unwilling to seek out speakeasies and other sources of illegal liquor turned to soda fountains and ice cream parlours to fill the social need previously provided by bars and saloons. The Navy had been hit even earlier in 1914, when General Order 99 banned liquor aboard all naval vessels. As a result, between 1916 and 1925, ice cream consumption in the United States skyrocketed by 55%; by the end of the decade, Americans were consuming nearly one million gallons every single day. Curiously, the 1929 Stock Market Crash and the onset of the Great Depression only drove up consumption, as ice cream was among the few comforting luxuries almost anyone could afford to indulge in. Indeed, the popular ice cream flavour Rocky Road, consisting of chocolate mixed with nuts and chunks of marshmallow, was introduced by ice cream manufacturer William Dreyer in 1929 as a metaphor for the hard times he and his fellow countrymen were experiencing.

When the Second World War broke out in September 1939, shortages of sugar, milk, vanilla, and other raw ingredients forced most Allied nations to ban or severely restrict the manufacture of ice cream and other sweet treats, with the British Government even going so far as to recommend carrots on sticks as a patriotic alternative – and for more on that, please check out our previous video How World War II Made Everybody Think Carrots Were Good for Their Eyes. When the United States joined the war in December 1941, it initially followed suit and declared ice cream a non-essential food. However, faced with vigorous lobbying by the International Association of Ice Cream Manufacturers and the National Dairy Council, the Government reversed its decision, even going so far as to place ice cream on its official chart of Seven Basic Foods – the precursor to the modern and hilariously bad for you “food pyramid” (as previously discussed in our video The Preposterous Pyramid). Meanwhile, the armed forces doubled down on America’s national obsession, adopting a morale-boosting policy of supplying its troops, whenever possible, with as much ice cream as they could eat.

The Navy was particularly keen on ice cream, with Secretary of the Navy James Forrestal assigning the dessert top priority after one of his assistants reported that:

“…ice cream in my opinion has been the most neglected of all the important morale factors.” 

Nearly every ship in the fleet larger than a destroyer was fitted with refrigeration and ice-cream making equipment, the sweet treat being served at all hours in the “gedunk bar” or canteen. The origin of the naval slang “gedunk” is disputed, with some historians theorizing that it is an onomatopoeia derived from the sound of food falling out of a vending machine. Others posit that it comes from the popular comic strip Harold Teen, or from a Chinese word meaning “place of idleness.” Whatever the case, the gedunk bar and the ice cream it served became a revered fixture of Navy life and an unlikely source of equality in an otherwise rigidly-structured system. According to one likely-apocryphal story, one day two newly-promoted ensigns aboard the battleship USS New Jersey, flagship of the Third Fleet, went down to the ship’s gedunk bar to get some ice cream. Finding the line unacceptably long, the ensigns decided to pull rank and skip ahead past all the ordinary sailors. The moment they reached the ice cream bar, a voice barked out demanding they get back in line. The ensigns wheeled around to reprimand the insubordinate sailor, only to find their challenger to be none other than Admiral William Halsey, commander of the fleet – waiting his turn in line like everyone else.

But perhaps the most dramatic demonstration of the Navy’s obsession with ice cream came on May 8, 1942 during the Battle of the Coral Sea. At around 11:30 AM, the aircraft carrier USS Lexington was hit by multiple bombs and torpedoes dropped by Japanese aircraft, setting off a series of explosions and fires and completely crippling the ship. At 5PM the ship’s captain, Frederick Sherman, gave the order to abandon ship. The crew obeyed – but not before breaking into and consuming the ship’s entire supply of ice cream, with some sailors filling their helmets and licking them clean before lowering themselves into the Pacific Ocean.

By 1945, demand for ice cream had reached such proportions that the Navy spent $1 million converting a concrete barge into a floating ice cream factory in order to supply the needs of ships too small to carry their own ice cream making equipment. Due to a shortage of metal, early in the war the Navy had contracted Pennsylvania-based firm McCloskey & Company to build twenty-four 6,000-ton concrete barges to transport supplies. The barges, which had no engines and had to be towed by tugboats, were unpopular with Navy crews, and most were fobbed off to the Army’s Transportation Corps. The unnamed ice cream barge, however, quickly became one of the most popular ships in the fleet. A marvel of wartime engineering, the barge could churn out 500 gallons of ice cream every day and had refrigerated storage capacity for nearly 2,000 gallons.

But the Navy was not the only service with a sweet tooth. Throughout the war, the Army’s Quartermaster Corps provided American troops with the machinery and ingredients to manufacture some 80 million gallons of ice cream every year; in 1943 alone it shipped out 135 million pounds of dehydrated ice cream mix to the front lines. Given a sufficient source of refrigeration, any soldier could combine the mix with water and standard-issue powdered milk to whip up a tasty frozen treat right on the firing line. But this was apparently not good enough for the Quartermaster Corps, who, in early 1945 as Allied troops were advancing through Germany, built dozens of miniature ice cream factories just behind the lines, allowing half-pint cartons to be brought right to the troops in their foxholes.

Even those without access to standard-issue ice cream found ingenious ways of improvising their own, with some soldiers reportedly mixing snow and melted chocolate bars in their helmets to create makeshift sorbet. But the undisputed masters of battlefield ice cream were pilots. In November 1944, after three months of savage fighting, American forces finally captured the South Pacific island of Peleliu from the Japanese. This was followed by a brief lull in the fighting which left the Marine Corps aviators stationed on the island with little to do. Hoping to raise the morale of his men, J. Hunter Reinburg, commander of a squadron of Vought F4U Corsair fighter-bombers, decided to turn his aircraft into a flying ice cream machine. Under his direction, maintenance crews cut an access hatch in the side of the aircraft’s fuel drop tank and suspended a metal ammunition can inside. Reinburg then filled the can with canned milk and cocoa powder and flew his aircraft to an altitude of 33,000 feet, where the freezing temperatures and the vibration of his engine would, he hoped, churn the mixture into ice cream. But when he landed 35 minutes later, the mixture was still a sticky liquid mess. Realizing that the drop tanks were too close to the aircraft’s hot engine, Reinburg moved the ammunition cans further outboard and tried again. While this time the milk-and-cocoa mixture froze, it still did not have the creamy consistency he was seeking. Reinburg eventually solved the problem by rigging the cans with small wind-driven propellers to churn the ice cream mixture directly as it froze. Army Air Force crews flying bombers like the B-17 Flying Fortress and B-24 Liberator over Europe independently came up with the same technique, though like Reinburg they, too, initially placed the ice cream mixture too close to the engines. Eventually the perfect location for ice cream making was found to be the tail gunner’s turret, which was simultaneously cold and turbulent enough to produce a perfectly smooth, frozen treat – provided, of course, the aircraft and her crew returned in one piece.

The American military policy of providing its troops with ice cream endured well past the end of the Second World War. The only time it was seriously challenged was during the Korean War, when legendary Marine General Lewis “Chesty” Puller complained that ice cream was “sissy food” and that American troops would be tougher if supplied with beer and whiskey instead. The Pentagon disagreed, and issued an official statement guaranteeing that soldiers would be served ice cream a minimum of three times a week.

While many factors contributed to final Allied victory during the Second World War, ice cream played a small but crucial role in maintaining the morale of American troops and reminding them of the home they were fighting for. So the next time you dig into a pint of Ben & Jerry’s, remember to take a moment and thank it for its valiant service.

If you liked this article, you might also enjoy our new popular podcast, The BrainFood Show (iTunes, Spotify, Google Play Music, Feed).

Expand for References

Siegel, Matt, How Ice Cream Helped America at War, The Atlantic, August 6, 2017, https://www.theatlantic.com/health/archive/2017/08/ice-cream-military/535980/


Funderburg, Anne, Chocolate, Strawberry, and Vanilla: A History of American Ice Cream, Popular Press, University of Wisconsin, https://ift.tt/9fGm2rJ


Hegranes, Emily, We All Scream for Ice Cream: World War II and America’s Sweet Tooth, U.S. Naval Institute, July 25, 2019, https://www.navalhistory.org/2019/07/25/we-all-scream-for-ice-cream-world-war-ii-and-americas-sweet-tooth


Stilwell, Blake, This is How WW2 Marines Made Ice Cream at 30,000 Feet, We Are the Mighty, June 12, 2021, https://ift.tt/AiqwybG


WWII Builders of Concrete Ships and Barges, March 2013, https://web.archive.org/web/20180924224714/http://shipbuildinghistory.com/shipyards/emergencylarge/wwtwoconcrete.htm


O’Brien, Caitlin, That Time the Navy Spent a Million Dollars on an Ice Cream Barge, Observation Post, July 21, 2021, https://www.militarytimes.com/off-duty/military-culture/2021/07/21/that-time-the-navy-spent-a-million-dollars-on-an-ice-cream-barge/


Meija, Paula, Why the U.S. Navy Once Had a Concrete Ice Cream Barge, Atlas Obscura, July 5, 2018, https://ift.tt/GoMDSqZ


Dunnigan, James & Nofi, Albert, Dirty Little Secrets of World War II, Perennial, New York, 1994


from Today I Found Out
by Gilles Messier - January 31, 2023 at 09:40AM

The Amazonian Arrow Poison that Revolutionized Medicine

Somewhere deep in the Amazon, a monkey leaps nimbly through the rainforest canopy, blissfully unaware that it is being hunted. Down below on the forest floor, the hunter lurks patiently in the shadows, stalking his prey, waiting for the perfect moment to strike. When the moment comes, the hunter raises his long bamboo blowgun, places it to his lips, and blows hard, launching a small poison-tipped dart into the canopy. The dart strikes home, causing the monkey to flee in panic. But the animal lasts only a few minutes before collapsing and plummeting to the ground, dead. The hunter returns to his village with his prey slung over his shoulder, having once again exploited the deadly power of the Amazon’s most infamous poison. Once the source of countless myths and legends among early European explorers, in more recent years this substance has made a surprising transformation from deadly arrow poison to miracle drug, helping to unlock many of the mysteries of the human nervous system and make modern surgery safer and more effective. This is the fascinating story of curare [“Koo-rah-ree”].

Curare is manufactured and used by dozens of indigenous tribes across Central and South America and the Caribbean, from the Island Caribs of the Lesser Antilles to the Macusi of Guyana and the Yagua of Colombia and Peru. The term “curare” refers not to a single substance but rather to a whole family of similar arrow poisons, and is derived from the Macusi word uirary meaning “it kills birds.” Over the centuries there have been numerous attempts by Western researchers to transliterate this word, leading to the substance being variously dubbed “Woorari”, “Woorara”, “Woorali”, “Ourari,” and finally “Curare”. In its traditional form, curare consists of a dark brown, sticky, resin-like paste which can easily adhere to the tips of arrows and blowgun darts. Due to the relative rarity of its ingredients and the significant time involved in its manufacture, curare is largely considered too valuable for use in warfare and is employed almost exclusively in the hunting of game. As most of the animals hunted using curare live high up in the dense rainforest canopy, great care is taken to maximize the potency of the poison so that prey animals cannot flee out of visual range before dropping to the ground. This potency is typically evaluated using animal tests – for example, by pricking a frog with a poison-tipped dart and counting the number of hops it makes before dying. For the most potent curare recipes, time to death ranges from 1-2 minutes for birds, 10 minutes for small mammals like monkeys, and up to 20 minutes for large mammals like tapirs. As you might imagine, the process for making curare is very complex and often a closely-guarded secret, allowing certain tribes to establish local monopolies and grow wealthy on its manufacture and trade.

While it is not known exactly when curare was first discovered – or by whom – the poison first came to the attention of the Western world in the 16th Century as European explorers began pushing deeper into the South American continent. One of the first descriptions of curare comes from Pietro Martir d’Anghiera – better known as Peter Martyr – an Italian-Spanish historian who in 1530 compiled numerous accounts of South American exploration in a book titled On the New World. In the book, d’Anghiera describes battles between Spanish conquistadores and indigenous tribes in which soldiers and horses were struck by poisoned arrows and appeared to become paralyzed before eventually dying. This detail would be the first hint as to how this mysterious poison actually worked. d’Anghiera goes on to describe how curare was exclusively manufactured by “old criminal women” who were locked up in cramped huts and forced to produce the poison from various toxic plants. When these women were overcome by the toxic fumes and passed out, the poison was deemed to be ready for use. In reality, however, this account turned out to be a complete fabrication, for as later explorers and anthropologists would discover, most South American tribes believe women to be too ritually unclean to participate in the sacred rite of curare production. Indeed, allowing a woman anywhere near the manufacturing process would diminish the magical powers and thus the potency of the poison.

Another early description of curare comes courtesy of English adventurer Sir Walter Raleigh, who in 1595 travelled up the Orinoco River in modern-day Venezuela in search of El Dorado, the legendary lost city of gold. But Raleigh’s account is no less fanciful than d’Anghiera’s. In his 1596 chronicle of the expedition, The Discovery of the Large Rich and Beautiful Empire of Guiana, Raleigh claims that garlic, salt, sugar, or tobacco are effective antidotes to curare poisoning – claims which are known to be laughably false.

Thanks to such rampant exaggeration and confabulation – not to mention extreme secrecy on the part of indigenous tribes – for nearly two centuries the true nature of curare remained a total mystery, the substance passing into myth and legend as one of the many exotic dangers that lurked in the Amazon jungle. It would not be until the 18th century that explorers and scientists began to shed some light on this mysterious poison.

One of the first Westerners to acquire and study a sample of curare was French explorer and mathematician Charles Marie de la Condamine. Perhaps most famous for teaming up with philosopher Voltaire to exploit a loophole in the French National Lottery in 1729, in 1735 Condamine was sent to South America on a surveying mission to measure the length of one degree of latitude. However, Condamine was more interested in finding and bringing back examples of the Cinchona tree, whose bark was the source of the antimalarial drug Quinine. To this end, he travelled widely across the continent for nearly a decade, making many valuable discoveries along the way. For example, he was the first European to encounter and describe caoutchouc or natural latex rubber, identified valuable platinum ores, and brought back seeds of several exotic plants including cacao, vanilla, and sarsaparilla. During his travels he also encountered an indigenous tribe called the Yameos, from whom he acquired a large sample of black curare resin. Upon returning to Europe, Condamine teamed up with Dutch physicians Herman Boerhaave and Gerard van Swieten and anatomist Bernhardus Albinus at the University of Leyden in the Netherlands to investigate the properties of this legendary poison. By injecting various doses into a cat, the four scientists were able to confirm that curare works by paralyzing the muscles. To their surprise, however, they discovered that the cat’s heart continued beating for up to two hours following its apparent death.

Today, we know that curare works by interfering with the function of motor neurons – that is, nerves which command the muscles to contract. At the junction between two nerves or between a nerve and a muscle or organ lies a tiny gap known as a synapse. When a nerve impulse reaches a synapse, it triggers the release of special signalling chemicals known as neurotransmitters, which cross the gap and bind to receptors on the other side, transmitting the impulse across the synapse. In motor neurons, the primary neurotransmitter is acetylcholine. Curare works by binding to the acetylcholine receptor sites, preventing the neurotransmitter molecules from transmitting nerve impulses across the synapse and resulting in the flaccid paralysis of the victim’s entire body. The first muscles to be affected are usually those of the eyes, meaning that the first signs of curare poisoning are fatigue, double vision, and difficulty keeping your eyes open. Next to go are your tongue and throat muscles, making it difficult for you to swallow or speak, followed by your limbs and all other voluntary muscles. But while this in itself sounds bad enough, unfortunately it gets worse. For you see, curare only works on the voluntary and semi-voluntary muscles, meaning that your heart will be unaffected and continue to beat. However, curare does paralyze the diaphragm and intercostal muscles responsible for breathing, meaning your death will be a slow and agonizing one by asphyxiation. Even more horrifying, curare has no effect on human consciousness, meaning you will be fully awake – but unable to move or even call out for help – as you slowly suffocate. Lovely. About the only good news is that, as we shall see, it is now possible to survive curare poisoning thanks to advances in medical technology.

In addition to discovering that curare does not affect the heart, Condamine and his colleagues also found that the poison is harmless when swallowed and must be injected through the skin in order to take effect. Indeed, Western explorers reported that Amazonian tribes readily ate the meat of animals killed with curare with no special preparation. In the 1850s, famous French physiologist Claude Bernard would conduct even more detailed experiments on curare’s effects and mechanism of action, establishing much of what we now know about the poison.

The Leyden team’s research was next taken up by English botanist Edward Bancroft, who in his travels throughout South America became the first European to discover the botanical source of curare and witness its manufacture. This process was also recorded by German naturalist Alexander von Humboldt, who travelled widely throughout the Americas between 1799 and 1804. While recipes vary from region to region, the main ingredient in curare is typically the vine Strychnos toxifera, first identified and classified by German explorers and brothers Robert and Richard Schomburgk in the 1850s. This, along with other plants including Chondrodendron tomentosum and Sciadotenia toxifera – and sometimes even ant or snake venom – are crushed and boiled in water for up to two days, until the residue boils down to a black, sticky substance resembling pitch or molasses. Then, slivers of the Cokarito palm are dipped into the mixture and used to smear it on the tips of arrows and blowgun darts. While the alkaloids responsible for curare’s toxicity mainly come from the Strychnos toxifera vine, the other ingredients act as adjuvants to increase its effectiveness – for example, by accelerating its absorption into the bloodstream or preventing the blood around the arrow or dart wound from clotting. In 1895, German pharmacologist Rudolf Böhm attempted to classify the different types of curare by the type of container they were stored in, identifying three main categories: tube curare, stored in sections of bamboo, pot curare, stored in terracotta pots, and calabash curare, stored in hollow gourds. This system, however, was later found to have little factual basis. In reality, different tribes simply have different recipes for curare and different containers in which they typically store the finished product, the two being unrelated to one another.

In 1811, English surgeon Benjamin Brodie repeated Condamine, Boerhaave, van Swieten, and Albinus’s experiments, confirming that curare paralyzes the breathing muscles while leaving the heart unaffected. Crucially, however, he postulated that if the victim’s lungs could be kept mechanically ventilated using bellows, then life could be sustained until the poison wore off, allowing them to make a complete recovery. This hypothesis was confirmed in 1825 by fellow Englishman Charles Waterton, in a series of now-classic experiments conducted on a female donkey. Waterton, who had made his name as a South American explorer by – among other things – wrestling an alligator and a boa constrictor, first applied a tourniquet to one of the donkey’s legs to cut off blood flow before injecting the leg with curare. The donkey suffered no ill effects. But when Waterton removed the tourniquet, allowing blood from the leg to recirculate into the donkey’s body, the animal collapsed and stopped breathing within minutes, confirming that curare does not act locally and must be carried throughout the body to be effective. Once the donkey had collapsed, Waterton proceeded to insert bellows into its windpipe and artificially ventilate the animal for two hours until the poison wore off. The donkey made a full recovery with no lingering effects and lived for another 25 years, having been put to pasture as reward for her medical contributions. Waterton might not have known it at the time, but a century later this remarkable achievement would form the basis of a revolution in surgical science.

Based on these experiments, Waterton began to wonder whether curare might have medical applications. Observing that the action of curare appeared to be the exact opposite of tetanus, rabies, and strychnine poisoning, he suggested it be used in the treatment of these afflictions. Waterton had a particular interest in rabies, having been bitten by a dog as a child. Unfortunately, he never had the chance to test these theories. The closest he got was in 1839 when one Mr. Isaac Phelps, a police inspector for the city of Nottingham, was bitten by a rabid dog he was attempting to rescue from a deep hole. The wound healed and for a while Phelps appeared well, but seven weeks later he began displaying the classic symptoms of rabies – then known as hydrophobia – and was admitted to hospital. One doctor, who had read of Waterton’s theories, sent for him at his home 80 kilometres away. Unfortunately, by the time Waterton arrived with his curare preparation, Phelps had already died. While Scottish physician George Harley would later prove that Waterton was correct about curare’s usefulness in treating tetanus and strychnine poisoning, the poison is unfortunately useless against rabies, which directly attacks the central nervous system.

It would be another 100 years before curare finally found a place in the pharmacopeia. In the meantime, however, the poison would play a vital role in solving a medical mystery and expanding our understanding of the human nervous system. In 1900, Austrian physiologist Jacob Pal was conducting experiments on digestive function using dogs. By this time, curare was commonly used to paralyze laboratory animals so that their automatic physiological responses to various drugs and treatments could be more easily observed. On one occasion, Pal injected a paralyzed dog with the toxin physostigmine, extracted from Physostigma venenosum – better known as the Calabar Bean. To his shock, the dog suddenly resumed breathing on its own. Pal had accidentally discovered the first known substance which could counter the effects of curare, unearthing a vital clue as to how the poison actually worked. This discovery in turn led to another dramatic medical breakthrough 35 years later. In 1935, Dr. Mary Walker at St. Alfege’s Hospital in Greenwich, England was studying a rare neurological disorder known as Myasthenia gravis, which causes profound and sustained muscle weakness and fatigue. After observing that patients with the disease were uniquely sensitive to small doses of curare, Walker hypothesized that the affliction was caused by a curare-like substance produced by the patient’s own body. Having read about Jacob Pal’s discovery three decades before, Walker decided to try injecting her patients with physostigmine. The results were dramatic – so dramatic, in fact, that the sudden and seemingly complete recovery of her patients is now widely known as the “Miracle at St. Alfege’s.” It is now known that Myasthenia gravis is not, in fact, caused by a natural curare-like substance but rather by an auto-immune disorder that attacks the acetylcholine receptors in the motor neuron synapses. Physostigmine, meanwhile, acts as a cholinesterase inhibitor, interfering with the enzymes that break down acetylcholine molecules after a nerve impulse is sent, effectively resetting the nerve for the next impulse. This increases the amount of neurotransmitters in the synapses, allowing the nerves to overcome the effects of the disease.

Nonetheless, Walker’s work helped confirm acetylcholine’s role in motor nerve transmission, which had been hypothesized by English pharmacologist Sir Henry Dale in 1914. After two decades of research, Dale, along with pharmacologist Otto Loewi, finally confirmed the function of acetylcholine, the two men sharing the 1936 Nobel Prize in Physiology or Medicine for their groundbreaking discoveries.

A year earlier, the composition of curare was finally worked out by English chemist Harold King, who determined that the most active toxins in the sticky mixture were the alkaloids curarine and tubocurarine. This attracted the attention of pharmaceutical company E.R. Squibb & Sons, who, intrigued by the potential medical uses for the poison, began purifying and selling small amounts of d-tubocurarine to medical researchers under the brand name intocostrin.

In 1939, curare finally saw its first medical application when American psychiatrist Abram Elting Bennett used it to paralyze patients undergoing metrazol-induced convulsive therapy. The precursor to electroconvulsive or shock therapy, this procedure was used to treat various psychiatric disorders including depression and schizophrenia and involved inducing seizures using a drug called Metrazol or Cardiazol. The problem was, the convulsions induced by this drug were so severe that patients often ended up injuring themselves. By paralyzing the patient with curare first, the procedure was rendered much safer – and for more on how this bizarre treatment came to be, please check out our previous video Who Invented Shock Therapy and Does it Actually Work?

But it was not until 1942 that curare would finally find its most prominent medical application. It was in that year that anesthesiologist Dr. Harold R. Griffith and resident Enid Johnson of Montreal’s Homeopathic Hospital first experimented with the use of intocostrin in surgery. While surgical anaesthesia had been around since 1846 and was well-established a century later, it still wasn’t a perfect technology. While early general anaesthetics like ether, chloroform, and cyclopropane quickly and efficiently rendered patients unconscious, they did not completely inhibit their automatic responses, meaning patients could still twitch and move about on the operating table, with sometimes tragic results. And while such movements could be eliminated by administering more anaesthetic, this risked inducing respiratory arrest and other deadly side effects – especially in patients with severe medical conditions or bad hearts. Griffith and Johnson’s solution was to administer intocostrin in combination with the anaesthetic, inducing complete paralysis with none of the side effects and allowing less anaesthetic to be used. As he later recalled in 1944:

“In June, 1939, Dr. L.H. Wright, of E.R. Squibb & Sons of New York, told me of this new work with curare and remarked how nice it would be if we could use some of it in anaesthesia to relax the muscles of our patients when they got a little too tense. I agreed that such an effect is often to be desired but was too horrified at the old poisonous reputation of curare to be seriously interested.  I met Dr. Wright again in October, 1941, and asked him how he was getting on with curare in anaesthesia.  He said he still thought the idea was sound, but that so far as he knew no one had tried it.  I thought I had better not pass up a good thing any longer, so Dr. Wright kindly sent me some ampoules of intocostrin and in January, 1942, we began using it in the operating room of the Homeopathic Hospital in Montreal.  We administered the drug intravenously to patients under general anaesthesia, and found that it acts quickly, producing in less than a minute a dramatic and complete relaxation of the skeletal muscles.  Even under the most favourable circumstances, and with every general anaesthetic agent, occasions do arise when it seems impossible to get the patient sufficiently relaxed to make abdominal exploration or to close a friable peritoneum.  To have a drug at hand which will give the patient at these critical moments complete relaxation, uniformly, quickly and harmlessly, has seemed to us a blessing to both surgeon and anaesthetist.”

So groundbreaking was this development that anesthesiologists often divide the history of surgery into “before Griffith” and “after Griffith”. Of course, the use of curare and its derivatives in this fashion also paralyses the patient’s lungs, requiring them to be artificially ventilated throughout the surgery. While this was initially done using a hand-pumped rubber bulb, in the late 1940s and early 1950s inventors like John Emerson and Forrest Bird developed mechanical ventilators which used electric or pneumatic motors to automatically breathe for the patient. These twin developments revolutionized surgery, allowing increasingly complex procedures to be performed more safely and with fewer side effects and postoperative complications. While curare derivatives like intocostrin have since been supplanted by safer but related paralytics like pancuronium, the basic procedure remains relatively unchanged since Griffith’s pioneering experiments. Like botox, digitalis, and other poisons-turned-medicines, curare has certainly come a long way from its deadly Amazonian origins.

Of course, as you might expect from a deadly poison, curare also has its darker uses. Indeed, pancuronium bromide is one of three drugs used in combination for medical euthanasia and lethal injection – the official method of execution in 28 U.S. states. The other two drugs are sodium thiopental to induce unconsciousness and potassium chloride to stop the heart. While this method is intended to be quick and painless, as previously mentioned, paralytics like pancuronium have no effect on consciousness, meaning that it is impossible to tell whether a person is conscious during the procedure. This has led to speculation that many prisoners may actually experience severe pain during their executions, especially if an inadequate dosage of barbiturates is administered. Such fears have led eleven U.S. states to switch to a single-drug lethal injection method wherein death is induced via an overdose of sodium thiopental.

And if any murder mystery fans in the audience are wondering if curare has ever been used in a crime, it most certainly has. Pancuronium was the lethal agent of choice of Efren Saldivar, a serial killer who is estimated to have killed up to 200 patients between 1988 and 1998 while working as a respiratory therapist at Adventist Medical Centre in Glendale, California. The drug was also used by four paramedics nicknamed the “Skin Hunters” to kill five elderly hospital patients in the Polish city of Lodz – patients whose personal details the killers sold to competing funeral homes. And finally, more than a century ago, curare was at the centre of a now long-forgotten assassination attempt. In 1916, a group of socialists and pacifists known as the Adullamites plotted to assassinate British Prime Minister David Lloyd George and Paymaster General Arthur Henderson in the hopes of ending the First World War. At the heart of the conspiracy was Alice Wheeldon, a women’s suffrage and anti-war activist who ran a second-hand clothes shop in Derby – along with her two daughters and her chemist son-in-law. But before the plot could be carried out, Wheeldon’s group was infiltrated by one ‘Alex Gordon’, an agent for British Military Intelligence. When Gordon informed his handlers of the plot, they sent another agent, Herbert Booth, to gather more information. Booth must have been one hell of an actor, for he soon ingratiated himself with the conspirators to the point that they selected him as Lloyd George’s assassin! Given an air pistol and pellets dipped in curare, Booth was instructed to hide out on Walton Heath Golf Course in Surrey, and there ambush the Prime Minister. Instead, Booth betrayed the plot to his handlers and the conspirators were arrested and charged with treason. Five were put on trial and three found guilty, with Alice Wheeldon being sentenced to ten years in prison and her son-in-law and daughter to seven and five years, respectively.
A potentially disastrous wartime assassination was thwarted, and Lloyd George doubtless breathed a great – and thankfully non-paralyzed – sigh of relief.

If you liked this article, you might also enjoy our new popular podcast, The BrainFood Show (iTunes, Spotify, Google Play Music, Feed), as well as:

The post The Amazonian Arrow Poison that Revolutionized Medicine appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - January 31, 2023 at 09:22AM

The Curious Case of the People With Split Brains

In late 1961, Drs. Philip Vogel and Joseph Bogen, neurosurgeons at the California College of Medicine in Los Angeles, were preparing to carry out a radical new procedure. The patients under their care suffered from severe epilepsy, which despite their doctors’ best efforts had resisted all attempts at conventional treatment. One such patient, a 48-year-old former paratrooper identified in medical records as W.J., had suffered a head injury during a combat jump in WWII and soon began experiencing frequent blackouts and convulsions, with one particularly severe episode in 1953 lasting three full days. Vogel and Bogen suspected that these seizures were amplified by rogue neural signals spreading from one side of the brain to the other, and hoped that their severity might be reduced by interrupting communication between the two. This procedure, known as a corpus callosotomy, involved severing the corpus callosum, the bundle of white matter that bridges the two hemispheres and allows signals to cross from one to the other.

Vogel and Bogen had every reason to believe the procedure would work, as experimental corpus callosotomies performed on cats and monkeys had produced few observable side effects. Psychologist Karl Lashley had even speculated that the corpus callosum served no greater purpose than to “keep the hemispheres from sagging.” And indeed, upon waking from their surgeries the epileptic patients seemed entirely normal, with one even quipping that he had a “splitting headache.” But as they recovered, it soon became apparent that something wasn’t quite right. The patients began to favour the right side of their bodies in everyday activities, and seemed oblivious to any stimulation coming from the left side. For example, if they bumped their left arm they would not notice, and if an object was placed in their left hand they would deny its existence. Intrigued by this strange behaviour, in 1962 psychologist Roger Sperry and his graduate student Michael Gazzaniga of the California Institute of Technology began a series of groundbreaking experiments to find out just what was going on inside the split-brain patients’ heads. What they discovered would change our understanding of the human brain forever.

It has long been known that the brain’s functions are not evenly distributed, with each hemisphere specializing in different tasks. For example, in 1861 French physician Paul Broca discovered that damage to a specific region of the left frontal lobe – a region now known as Broca’s Area – resulted in various forms of aphasia – impairments in the ability to speak or understand words. This and other observations led to the left hemisphere being recognized as the primary language centre of the brain. But until the experiments of Sperry and Gazzaniga, neurologists did not truly understand the full extent of this specialization. The split-brain patients provided a golden opportunity to study hemispheric specialization, for the surgery had essentially left them with two nearly independent brains. Contrary to Karl Lashley’s dismissive appraisal, the corpus callosum is in fact a highly sophisticated and essential part of the brain, containing some 200 million neural fibres capable of transmitting one billion bits of information per second. This is vital to the normal functioning of the brain, for due to a quirk of vertebrate evolution our nervous systems are contralateral, meaning that each hemisphere receives information largely from the opposite side of the body. For example, the optic nerves, which convey visual information from our eyes to our occipital lobes, partially cross over at a junction called the optic chiasm, meaning that information from the right half of the visual field is transmitted to the left hemisphere and vice-versa. Ordinarily this counter-intuitive arrangement works just fine, as the information is immediately shared with the other hemisphere via the corpus callosum. But in split-brain patients this channel of communication no longer exists, meaning that information transmitted to a particular hemisphere stays in that hemisphere. And this is where things start to get weird.

Sperry and Gazzaniga probed the patients’ hemispheres individually by stimulating the opposite side of the body – for example, by presenting an image to the right half of the visual field to stimulate the left hemisphere. In one early experiment, they flashed a series of lights across the patients’ field of view. When asked to report when they had seen a light, the patients only reported seeing lights flashing on the right. But when asked to point whenever they saw a light, they successfully reported seeing lights on both sides. Next, Sperry and Gazzaniga projected the word HEART such that the letters HE appeared in the patient’s left-hand field of vision and ART in their right-hand field of vision. When asked to report what they saw, the patients verbally responded “ART”; but when asked to point to the word they saw using their left hand, they pointed to HE. Similarly, if an object was placed in the patient’s right hand, they were easily able to name it, though when asked to point to an image of the same object using their left hand, they were unable to do so. When the sides were reversed, the patients could easily point to the object, but, much to their confusion, were unable to name it. This and similar experiments indicated that language processing abilities are almost entirely localized in the left hemisphere, while the right hemisphere specializes in visual perception tasks such as recognizing faces and emotions and spotting differences between objects. This verbal-perceptual divide is even baked in from birth, with most infants favouring the left side of their mouths when smiling and the right side when babbling.

But as is often the case in biology, Sperry and Gazzaniga soon discovered that things weren’t quite so clear-cut, and that the right hemisphere was a far more capable communicator than previously believed. For example, one patient, when presented with a picture of his girlfriend in his left visual field, was unable to speak her name, but was able to spell it out using Scrabble tiles. Sperry and Gazzaniga also found that while the left hemisphere excels at making straightforward word associations, the right hemisphere is better at recognizing subtler relationships and insinuations. For example, when the left hemisphere was presented with the word foot, it was better at picking out a related term like heel from a list of words. But when the right hemisphere was presented with two additional words, cry and glass, it more easily picked out the connecting word – in this case, cut.

But as strange as these discrepancies are, the experience of living with a split brain can sometimes be even more bizarre, with patients feeling as though they literally have two separate brains – brains which are often at odds with one another. For example, patients have reported doing up their shirt buttons with one hand only to have the other hand spontaneously unbutton them, or placing items in a shopping cart with one hand only for the other to place them back on the shelf. Many patients are even able to copy two different images using each of their hands, though given the right hemisphere’s greater spatial reasoning capabilities, the left hand is generally better at this task than the right. In rare cases this phenomenon can even take the form of “alien hand syndrome,” in which a patient’s hand appears to have a mind of its own and sometimes attempts to strangle its owner or others. This is also sometimes known as “Dr. Strangelove Syndrome” after Peter Sellers’ character in the 1964 Stanley Kubrick film who exhibits similar symptoms. Unfortunately there is no cure for the condition other than keeping the offending hand occupied with other tasks and restraining it at night to prevent injuries.

In such cases of independent limb movement, the offending limb is almost always the left one. This reflects what is perhaps Sperry and Gazzaniga’s greatest discovery: the executive dominance of the left hemisphere. Their experiments and later ones revealed that much of the right hemisphere’s reasoning and decision-making processes are entirely unconscious and must be mediated and interpreted by the dominant left hemisphere for us to become aware of them. When the connection between the hemispheres is severed, this mediation and interpretation function is lost, which is why the left-side limbs – controlled by the right hemisphere – are able to act without their owner’s conscious knowledge. This effect also applies to logical reasoning. For example, when one split-brain patient was shown a picture of a chicken foot in their right-hand visual field and a snowy field in their left-hand visual field and asked to choose the closest association from a list of words, they paired a chicken with the chicken foot and a shovel with the snowy field. However, when asked to rationalize why they had chosen the shovel, the patient responded “to clean out the chicken coop.” This indicated that the left hemisphere, which had seen only the chicken foot and knew nothing of the snowy field, had invented a plausible explanation for a choice actually made by the right hemisphere.

This extreme divide between the functions, capabilities, and even the “personalities” of the two hemispheres stunned the psychology community, and led Roger Sperry to conclude in 1974 that:

“…[each hemisphere is] indeed a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing, and emoting, all at a characteristically human level, and … both the left and the right hemisphere may be conscious simultaneously in different, even in mutually conflicting, mental experiences that run along in parallel.”

Sperry and Gazzaniga’s experiments revolutionized our understanding of how the brain organizes different perceptual and conceptual tasks, and for this work Sperry, along with David Hubel and Torsten Wiesel, was awarded the 1981 Nobel Prize in Physiology or Medicine. Sperry continued to study split-brain patients until his death in 1994, while Gazzaniga pursues this line of research to this day.

If all this talk of hemispheric specialization sounds a bit familiar, it may be due to the popular belief that people are predominantly “left” or “right-brained” depending on their particular cognitive strengths. The reasoning goes that since the left hemisphere specializes in language and logical reasoning and the right hemisphere in visual and spatial reasoning, those with a mathematical or scientific bent are predominantly “left-brained” while those with a more creative, artistic temperament are “right-brained.” But just like the notion that we only use 10% of our brains, the whole “left-brained”/“right-brained” dichotomy is nothing but a load of bunkum. Numerous studies using functional MRI have shown that both sides of the brain are used more or less equally regardless of the cognitive task being performed. And this makes sense, as supposedly “left-brain” fields like mathematics and science are also profoundly creative activities, while supposedly “right-brain” activities like art often require a great deal of analytical precision. So, sorry, but all those online quizzes promising to reveal your cognitive style are about as scientific as a horoscope.

As for the split-brain patients themselves, the vast majority reported a significant reduction in the frequency and severity of their seizures. While this improvement came at the cost of living with a pair of often uncooperative brains, most eventually learned to cope with this strange existence in a variety of fascinating ways. Sperry and Gazzaniga even observed one of these adaptations, which they called “cross-cuing,” in one of their experiments. This experiment involved flashing either a red or green light in the patient’s left-hand visual field and asking them to report which colour they had seen. If the patient answered correctly more often than the 50% of the time expected from pure guessing, this would indicate that the right hemisphere had at least some spoken language ability. Strangely, when the experimenters allowed the patient to make a second guess, their scores improved dramatically. After a while, Sperry and Gazzaniga realized that when the patient’s right hemisphere saw one colour but heard the patient say the other, it unconsciously caused the patient to frown. The patient’s left hemisphere then detected this frown and deduced that it had guessed wrong. In this manner, split-brain patients are able to use subtle physical cues to allow their right and left hemispheres to communicate to a limited extent. Studies of children born without a corpus callosum indicate that the hemispheres can also communicate via other means, meaning that split-brain patients’ brains may not be as disconnected as we once thought. To paraphrase Jeff Goldblum in Jurassic Park: the brain, uh, finds a way.


from Today I Found Out
by Gilles Messier - January 31, 2023 at 09:04AM

Force Z and the Death of the Battleship

On April 6, 1945, Imperial Japan launched Operation Ten-Go, a desperate last-ditch naval attack against the Allied fleet supporting the invasion of Okinawa. Supported by the light cruiser Yahagi and eight destroyers, the charge was led by the pride of the Imperial Japanese Navy, the mighty battleship Yamato. A quarter-kilometre long, displacing 65,000 tons, and armed with no fewer than nine 46-centimetre guns firing one-and-a-half-ton shells, Yamato was the largest and most powerful battleship ever built, considered by the Japanese high command to be a nigh-unstoppable weapon. But the glorious last ride of the Imperial Navy was not to be; before the task force could even reach Okinawa, it was set upon by over 400 warplanes launched from American aircraft carriers. Less than five hours after first contact, Yahagi, four destroyers, and even the mighty Yamato had been sent to the bottom. More than 4,000 Japanese sailors died in the engagement, for the loss of only 12 U.S. airmen. It was Japan’s last major naval operation of the war, and it marked the end of an era. The battleship, once the last word in naval firepower, no longer ruled the seas. In retrospect, the Japanese should have seen this coming, for they themselves had taught the British Royal Navy this same lesson nearly four years before. This is the tragic story of the sinking of Force Z.

After the First World War, the British Empire reached its greatest extent, encompassing more than 26% of the world’s land area and 23% of its population. However, by the 1930s what had long been Britain’s greatest strength was fast becoming its greatest liability. The Empire’s far-flung colonies could only be protected and held together by the Royal Navy, and with post-war budget cuts and the onset of the Great Depression even this mighty force found itself stretched dangerously thin. This made Britain’s overseas territories tempting targets for other up-and-coming imperial powers – including the Empire of Japan. Following their stunning victory over the Russian Empire in the 1904-1905 Russo-Japanese War, the Japanese had come to see themselves as the natural masters of Asia and had pursued a policy of aggressive territorial expansion known euphemistically as the “Greater East Asia Co-Prosperity Sphere”. In 1910 Japan annexed the Korean Peninsula, while in 1931 Japanese troops seized the Chinese province of Manchuria and established the puppet state of Manchukuo. This was followed by a full-scale invasion of China in July 1937, while in September 1940 the Japanese occupied the French colony of Indochina – today Vietnam, Cambodia, and Laos.

The latter development particularly rattled the British, for it placed Japanese forces within easy striking distance of Hong Kong, British Malaya, and the Dutch East Indies – colonies rich in the rubber, tin, and oil the Japanese needed to feed their imperial war machine. Anticipating such an invasion, the British had begun building a large naval base at Singapore in the early 1920s and developed the so-called “Singapore Strategy” to deter Japanese aggression. This strategy, developed from a series of war plans over 20 years, was to be carried out in three phases. At the onset of a Japanese attack, the British garrison would man and defend “Fortress Singapore”, holding out while the British Home Fleet sailed for the Far East via the Mediterranean, Suez Canal, and Ceylon – today Sri Lanka. On arrival, the fleet would retake Hong Kong and relieve Singapore before sailing on to blockade the Japanese Home Islands. Since the Royal Navy was still the most powerful naval force in the world, the British were confident the Japanese would not risk a direct confrontation and would quickly capitulate. The Singapore Strategy became the cornerstone of British defence strategy in the Far East, its effectiveness considered so assured that, as official naval historian Captain Stephen Roskill later wrote of this period:

“…the concept of the ‘Main Fleet to Singapore’ had, perhaps through constant repetition, assumed something of the inviolability of Holy Writ”.

But while the Singapore Strategy was impressive on paper, a combination of poor planning, budgetary limitations, political interference, and plain hubris led the British to make a series of major strategic blunders. For example, planners believed that the monsoon season would prevent Japanese forces from crossing the Gulf of Thailand from Indochina to Malaya until at least February 1942. Consequently, most of the aircraft defending the peninsula were diverted for service in the Middle East and Russia. Racist attitudes towards the Japanese also led the British to underestimate the capabilities of the Japanese armed forces, leaving Singapore, Hong Kong, and Malaya poorly garrisoned. As we shall see, this was to have disastrous consequences for the British Empire in the Far East.

There was also another major complicating factor: Nazi Germany. On September 3, 1939, the same day Britain declared war on Germany, the German navy launched a campaign of unrestricted submarine and surface warfare against British and Allied merchant shipping, hoping to starve the island nation into submission. Suddenly, nearly the entire Royal Navy was called upon to counter the Nazi threat, leaving few ships available to defend the Far East. In desperation, Britain called upon the U.S. Navy to contribute ships from its Pacific Fleet, based at Pearl Harbour in Hawaii. But the United States, which was still officially neutral in the conflict, was hesitant to deploy its ships in defence of British colonial interests, and decided instead to focus its efforts on the Atlantic theatre once it entered the war. The British therefore developed a strategy of replacement, whereby American ships deployed into the Atlantic would free up British ships for deployment to the Far East. The full build-up of this Eastern Squadron was to be completed within 80 days after the entry of Japan and the United States into the war, which was anticipated to occur sometime in late 1941 or early 1942.

Thankfully, U.S. President Franklin Roosevelt chose not to wait until the United States had officially entered the war to intervene in the Atlantic, meaning that by August 1941 there was sufficient U.S. naval presence in the Atlantic for an Eastern Squadron to be deployed ahead of the anticipated Japanese invasion. The question now became: which ships to send? While Prime Minister Winston Churchill favoured sending the most advanced King George V-class battleships, First Sea Lord Sir Dudley Pound and Commander-in-Chief of the Home Fleet Admiral Sir John Tovey disagreed, arguing that the class’s design made it unsuitable for operations in tropical climates. They also wished to keep the more powerful vessels in home waters to counter the German warships Tirpitz, Scharnhorst, and Gneisenau. In the end, however, circumstances decided the matter, for there were only six capital ships in a fit enough state to reach the Far East before the spring of 1942: the King George V-class battleship HMS Prince of Wales, the Renown-class battlecruiser HMS Repulse, and the four Revenge-class battleships HMS Revenge, Resolution, Royal Sovereign, and Ramillies.

HMS Prince of Wales was the Royal Navy’s newest and most advanced battleship. Launched in May 1939, she was still being fitted out when, on May 24, 1941, she was called out to face the German battleship Bismarck. With workmen from the Vickers engineering firm still aboard scrambling to get her radar and gun-laying systems online, she steamed into the Battle of the Denmark Strait, receiving seven direct hits including a 15-inch shell that ricocheted through her compass platform, killing everyone there except the Captain and a signalman. But she gave as good as she got, landing two crippling hits that contributed to the Bismarck’s eventual destruction three days later. After being repaired, in August 1941 Prince of Wales carried Prime Minister Churchill across the Atlantic to a secret conference with President Roosevelt, then in September was assigned to Force H, escorting supply convoys to the Mediterranean island of Malta.

HMS Repulse, on the other hand, was a much older ship. Launched in January 1916, she participated in the 1917 Battle of Heligoland Bight – her only action of the First World War – and in the 1930s escorted merchant ships during the Spanish Civil War. Due to lessons learned during the First World War, her armour and guns were upgraded in 1918 and again in 1934 and 1940. She was scheduled to receive a further upgrade to her antiaircraft batteries, but her assignment to the Far East task force resulted in this plan being abandoned, leaving Repulse vulnerable to air attack.

The Revenge-class battleships were even older, having been laid down in 1913 and 1914. Considered obsolete and no match for the latest Imperial Japanese Navy vessels, the battleships were instead assigned to the 3rd Battle Squadron based in Ceylon, arriving in September 1941.

Meanwhile, Prince of Wales, Repulse, and the escorting destroyers HMS Electra, Express, and Hesperus were organized into Force G, under the command of Admiral Sir Thomas Phillips. Prime Minister Churchill and his naval advisors continued to argue over the composition of the squadron, so in the end a compromise solution was reached. Force G was ordered to sail for Cape Town, South Africa, where it would anchor and await instructions. The Admiralty would then review the strategic situation and decide whether to send the ships on to Singapore or retain them for use in home waters. Repulse, which had just finished escorting a supply convoy around the Cape of Good Hope, was ordered to Ceylon to rendezvous with the rest of the force. While Churchill had recommended the force be escorted by an aircraft carrier, the only available ship, HMS Indomitable, had run aground in Jamaica and would not be ready to sail until November. So, with time being of the essence, Force G set off for Singapore without air cover, sailing from Greenock in Scotland on October 25, 1941. Though a far cry from the massive fleet called for by the original Singapore Strategy, Churchill was confident that the “smallest number of the best ships” would be more than enough to deter the Japanese.

Force G arrived in Cape Town on November 16. Admiral Phillips expected to remain in South Africa for at least seven days, a week of social events and media coverage having been planned for propaganda and morale-boosting purposes. But before the Prince of Wales even reached Cape Town, Phillips received orders to depart as soon as possible and rendezvous with Repulse in Ceylon. The anticipated review of Force G’s mission was, in fact, a ruse meant to deceive Winston Churchill; the Admiralty had already committed to carrying out the Singapore Strategy in full.

Meanwhile, Repulse was in Durban, South Africa, preparing to sail to Ceylon. There, her crew received a disturbing portent of things to come as South African Prime Minister Jan Smuts came aboard and delivered an address. While no official record of this speech survives, Able Seaman Ted Matthews later recalled the sobering tone:

“From the onset he shattered our conceptions of the Japanese military stating in clear terms that if hostilities erupted we weren’t going to be confronted by a race of inferiors. To the contrary he felt the Japs weren’t in the least concerned by the possibility of conflict with Britain. He also made it clear despite what we’d been told in the past that they possessed a fully modern airforce. Though the one comment that’s never left me were the fatalistic words he feared many of us wouldn’t be returning from this mission and he’d pray for our safety during the troubled times ahead. None of us could possibly have imagined the accuracy of this prophecy.”

Prince of Wales and her escorts remained in Cape Town for only two days, departing on the afternoon of November 18. On November 29, Force G reached Ceylon, where it was joined by Repulse and the destroyers HMS Encounter and HMS Jupiter. Hesperus had already departed before the force reached Cape Town. Admiral Phillips disembarked and flew on to Singapore and the Philippines to meet with Allied commanders, while the fleet sailed on without him, finally reaching Singapore on December 2. On arrival, the destroyers Encounter and Jupiter were found to be suffering from mechanical faults and were replaced by the First World War-vintage destroyers HMAS Vampire and HMS Tenedos. Prince of Wales then entered the dry dock at the Singapore naval base and underwent a thorough cleaning of her hull and boilers.

Then, less than a week later, the long-expected attack finally came.

The Japanese surprise attack on Pearl Harbor on the morning of December 7, 1941 was merely the opening move in a massive, coordinated series of invasions all across Southeast Asia. Indeed, before the Imperial Navy’s aircraft had even reached Hawaii, Japanese troops landed in Malaya, Hong Kong, the Dutch East Indies, the Philippines, Guam, and Wake Island. As these locations lay across the International Date Line from Hawaii, the attacks were recorded as taking place on December 8. That same day, Japanese aircraft based in Indochina bombed Singapore. Force G – now redesignated Force Z – fired on the attacking aircraft while sitting at anchor, but neither scored nor suffered any hits. Once news of the attack reached Britain, Admiral Phillips was ordered to weigh anchor and intercept Japanese invasion convoys steaming across the Gulf of Thailand. Phillips hesitated, for the only Allied aircraft available to protect the fleet were the 10 slow and outdated Brewster Buffalo fighters of No. 453 Squadron RAAF stationed at Sembawang. Admiral Tovey’s misgivings about the ship had also proven correct: the hot, humid climate of Singapore had rendered Prince of Wales’s gun control radars inoperative and degraded her anti-aircraft ammunition, while the ship’s lack of air conditioning had led to increased crew fatigue. Nonetheless, Phillips elected to proceed, believing that Japanese aircraft could not operate so far from land and that Prince of Wales was all but impervious to aerial attack. After all, at that point no capital ship had ever been sunk by aircraft on the open sea. It was a gamble which was to cost him dearly.

Force Z departed Singapore at 17:00 hours on December 8 and sailed north to intercept Japanese forces landing at Kota Bharu on the northeast coast of Malaya. The following day at around 14:00, the squadron was spotted by the Japanese submarine I-65, which shadowed it for five hours, reporting its position. This report soon reached the headquarters of the Japanese Navy’s 22nd Air Flotilla, which had just arrived at airfields in Indochina. At the time, the 22nd’s aircraft were loading up with bombs for an attack on Singapore, but upon receiving news of Force Z’s sailing they immediately switched over to torpedoes. However, by the time the aircraft were ready, the sun was beginning to set, and the attack was postponed until the following morning. Meanwhile, the Japanese 2nd Fleet was dispatched south from Indochina to intercept Force Z. While the two fleets never spotted each other, just before sunset Force Z was spotted by three seaplanes launched by Japanese convoy escorts. Realizing he had lost the element of surprise, Phillips abandoned his attack on Kota Bharu and turned back towards Singapore.

Just before 1:00 the following morning, Phillips received a radio message indicating that Japanese troops were landing at Kuantan on the east coast of Malaya. At around 7:00 hours Force Z reached the area and Prince of Wales launched a reconnaissance aircraft to investigate. When neither it nor the destroyer Express found anything, Phillips carried on towards Singapore. Little did he know, however, that he had been spotted by the submarine I-58, and that 34 Mitsubishi G3M “Nell” torpedo bombers and 51 G4M “Betty” high-level bombers of the 22nd Air Flotilla were on their way to intercept him.

The first ship the bombers spotted was not, however, Prince of Wales or Repulse, but the destroyer Tenedos, which had sailed for Singapore the day before to refuel and was now 300 kilometres south of Force Z. Mistaking the destroyer for a battleship, at 10:00 the aircraft dropped several armour-piercing bombs on the ship before realizing their mistake and breaking off. Minutes later a Japanese scout aircraft spotted Force Z and called the rest of the bomber force in on its position. The attack had begun.

The first wave of eight bombers attacked around 11:15, focusing exclusively on Repulse. The old battlecruiser proved surprisingly nimble, however, and managed to dodge most of the bombs, suffering only one minor hit to her seaplane hangar. In return, her anti-aircraft gunners damaged five of the attacking aircraft. The next wave arrived at 11:40, dropping eight torpedoes at the two ships. Only one struck home, hitting Prince of Wales’s outer port propeller shaft. The shaft, rotating at full speed, twisted and ripped through the bulkheads, sending 2,400 tons of water pouring into the ship’s engineering compartments. The explosion set off a chain of failures, jamming the ship’s steering, shorting out her electric generators, knocking out her bilge pumps, and preventing her electrically-driven antiaircraft guns from being trained. Fatally crippled, Prince of Wales steamed helplessly northwards and was struck by four more torpedoes and one bomb. Around 13:00, Phillips gave the order to abandon ship.

Meanwhile, Repulse fought on, successfully dodging 19 torpedoes and shooting down three aircraft. At 12:23, however, she, too, was struck by four torpedoes. The ship listed 65 degrees to port, hung on for several moments, then rolled over and sank. Less than an hour later at 13:18, Prince of Wales, pride of the Royal Navy, also capsized and slipped beneath the waves. From start to finish, the attack had taken little more than two hours.

Electra, Vampire, and Express moved in to rescue the survivors. In all, 840 British sailors lost their lives that day – 327 aboard Prince of Wales and 513 aboard Repulse, which had capsized before the order to abandon ship could be given. Of the senior officers, Admiral Phillips and Captain John Leach of the Prince of Wales chose to go down with their ship, while Captain William Tennant of the Repulse was among the survivors. Ironically, while it was Admiral Phillips’s brash decision to sail from Singapore which placed Force Z in the path of the Japanese bombers, it was his caution which sealed the squadron’s fate, for throughout the operation Phillips maintained complete radio silence so as not to give up his position. Indeed, it was not until an hour into the Japanese attack that the first radio signal was sent calling for air support. By this time, however, it was far too late, the aircraft of No. 453 Squadron arriving on the scene just as the Prince of Wales went under. As the squadron’s commander, Flight Lieutenant Tim Vigors later lamented:

“I reckon this must have been the last battle in which the Navy reckoned they could get along without the RAF. A pretty damned costly way of learning. Phillips had known that he was being shadowed the night before, and also at dawn that day. He did not call for air support. He was attacked and still did not call for help.”

Back in Britain, Prime Minister Churchill was woken in the middle of the night by news of the sinking, later writing:

“In all the war, I never received a more direct shock… As I turned over and twisted in bed the full horror of the news sank in upon me. There were no British or American ships in the Indian Ocean or the Pacific except the American survivors of Pearl Harbor, who were hastening back to California. Across this vast expanse of waters, Japan was supreme, and we everywhere were weak and naked.”

But the worst was yet to come. Hong Kong fell on December 25 after 17 days of fighting. In Malaya, invading Japanese troops moved with terrifying speed, aided by excellent British-built roads, the inexperience and disorganization of the defending garrison, and a deceptively simple piece of technology: the bicycle – and for more on that, please check out our previous video How Bicycles Caused the Downfall of the British Empire. By January 31, 1942, the defending Commonwealth troops had been pushed off the peninsula and retreated to the island of Singapore, blowing up the Johore causeway behind them. The 85,000-strong garrison fought on for another two weeks, finally surrendering on February 15. Together with those captured earlier in the campaign, some 140,000 British, Australian, New Zealand, and Indian troops marched into Japanese captivity – the single greatest defeat in British military history. Within months the Japanese Empire controlled a huge swath of Asia and the Pacific stretching from northern Manchuria to the Solomon Islands. It would not be until the Battle of Midway in June 1942 that the tide finally turned and the Allies began the long, bloody island-hopping campaign that would take them from Guadalcanal to Japan’s very doorstep.

By that time, however, the Allies had learned the hard lesson taught them by the tragic loss of Repulse and Prince of Wales. Never again would an Allied capital ship operate without a protective screen of aircraft. For as the sinking of the Yamato would later prove beyond a doubt, the age of the battleship was over. The seas now belonged to the aeroplane.

If you liked this article, you might also enjoy our new popular podcast, The BrainFood Show (iTunes, Spotify, Google Play Music, Feed), as well as:

References

Keegan, John (ed.), World War II: a Visual Encyclopedia, PRC Publishing Ltd, 1999


Klemen, L, “Seventy Minutes Before Pearl Harbor” – The Landing at Kota Bahru, Malaya, on December 7, 1941, The Netherlands East Indies 1941-1942, https://warfare.gq/dutcheastindies/kota_bharu.html


Garzke et al, Death of a Battleship: The Loss of HMS Prince of Wales, December 10, 1941, https://ift.tt/NTes5yE


Sinking of the HMS Repulse, History of Diving Museum, https://ift.tt/mVSnOUy

The post Force Z and the Death of the Battleship appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - January 31, 2023 at 09:02AM
Article provided by the producers of one of our Favorite YouTube Channels!


Friday, January 27, 2023

What’s Up With the Very Real ‘Doomsday Clock’?

On January 23, 2020, the Bulletin of the Atomic Scientists, a non-profit research and education organization based in Chicago, moved the hands on its Doomsday Clock forward to 100 seconds to midnight – the closest in its 73-year history. According to the Bulletin, this change reflects the growing threat posed by climate change, nuclear proliferation, and misinformation, and the increasing unwillingness of world leaders to respond to said threats. But just what is the Doomsday Clock, anyway? Where did it come from, how is it updated, and what can it tell us about the ever-changing risk of global catastrophe in the 20th and 21st Centuries?

The Doomsday Clock traces its origins back to 1945 and the aftermath of the atomic bombings of Hiroshima and Nagasaki. In that year, a group of Chicago scientists who had worked on the Manhattan Project, including metallurgist Hyman Goldsmith and biophysicist Eugene Rabinowitch, founded the Bulletin of the Atomic Scientists of Chicago, a monthly newsletter aimed at keeping the public informed of the emerging danger of nuclear weapons. Two years later, when Rabinowitch and Goldsmith decided to expand the newsletter into a proper magazine, they asked artist Martyl Langsdorf, wife of physicist and Bulletin member Alexander Langsdorf, to design the cover. At first Martyl considered drawing a giant letter “U” to represent Uranium, but after listening to conversations between other Bulletin scientists, she realized that the essence of the publication was not nuclear weapons themselves but the dire risk of global catastrophe they posed. Thus, according to the Bulletin’s website:

“She drew the hands of a clock ticking down to midnight. Like the countdown to an atomic bomb explosion, it suggested the destruction that awaited if no one took action to stop it.”

The Doomsday Clock debuted on the cover of the June 1947 issue of the Bulletin, with the hands set at seven minutes to midnight. Though this position originally had no particular meaning – Martyl admitting that she placed the hands for “aesthetic reasons” – it would nonetheless form the baseline for all future adjustments. The decision whether to move the hands – and how far – is made every January based on changes in technology and geopolitics over the previous year. Originally this decision was made by founding editor Eugene Rabinowitch himself, but after his death in 1973 the responsibility passed to the Bulletin’s Science and Security Board in consultation with its Board of Sponsors, which currently includes 13 Nobel laureates.

In the 74 years since its creation, the Doomsday Clock has been changed 24 times. The first change was made in 1949 in response to the Soviet Union detonating its first atomic bomb, an event which drastically changed the climate of the Cold War and led the Bulletin to move the clock to three minutes to midnight. Other events which pushed the clock closer to midnight include France and China developing nuclear weapons in the early 1960s, the escalation of the Vietnam War in 1968, and President Ronald Reagan’s administration pulling out of disarmament talks in the early 1980s; while events which pulled back the clock include the world’s scientists collaborating during the 1957-58 International Geophysical Year, the United States and Soviet Union signing the Partial Test Ban Treaty in 1963, and the fall of the Berlin Wall in 1989. Prior to 2020, the closest the clock had come to midnight was two minutes in 1953, when the United States and the Soviet Union tested their first thermonuclear weapons within six months of each other, while the furthest it had been was 17 minutes following the collapse of the Soviet Union in 1991. Strangely, the 1962 Cuban Missile Crisis – the closest the world has ever come to all-out nuclear war – had no effect on the clock, as the crisis was resolved long before the Bulletin could meet to discuss it. Moreover, the crisis resulted in major global policy changes – such as the creation of the famous Moscow-Washington Hotline – which made the world a significantly safer place.

Due to its simplicity and visceral immediacy, the Doomsday Clock quickly became an icon and an enduring symbol of the Cold War, inspiring countless works of popular art such as the Iron Maiden song “Two Minutes to Midnight” and the Alan Moore graphic novel Watchmen. And as the times have changed, so too has the Doomsday Clock. In 2007, designer Michael Bierut updated the Clock’s design to give it a more contemporary feel, while in 2009, when the Bulletin retired its print edition and became a digital-only publication, the Clock also made the transition, and now appears as a regularly-updated logo on the Bulletin’s website. In 2016 the Bulletin also commissioned a physical Doomsday Clock to hang in the lobby of its Chicago office, which attracts thousands of tourists every year.

Other changes have been more fundamental. While the Clock has long been associated with the threat of nuclear war, in more recent years the Bulletin has kept its eye on more current and emerging threats to civilization, including climate change, biotechnology, cyberwarfare, and even artificial intelligence. Indeed, the Bulletin’s rationale for moving the Clock to 100 seconds to midnight in 2020 – the closest in its entire history – was as follows:

“Humanity continues to face two simultaneous existential dangers—nuclear war and climate change—that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society’s ability to respond. The international security situation is dire, not just because these threats exist, but because world leaders have allowed the international political infrastructure for managing them to erode.”

Nonetheless, the threat of nuclear war continues to factor heavily into the Clock’s setting, as evidenced by its being set forward to 5 minutes in 2007 following nuclear weapons tests in North Korea and the resumption of uranium enrichment in Iran.

Yet despite the Doomsday Clock’s iconic status, it has faced considerable criticism over the years, with many questioning the validity of the Bulletin’s process for setting its hands and even the clock’s very value as an indicator of global risk. Much of this criticism has centred on the clock’s representation of risk, which some, such as Anders Sandberg of the Future of Humanity Institute at Oxford University, view as inherently flawed. According to Sandberg, the various risk factors measured by the Clock are fundamentally different and thus cannot be easily compared. They are also all manmade, meaning that:

“…the normal forms of probability estimate are not just inadequate, they are actively misleading. [The Clock is] not an exact measure and it’s also combining several things. It was perhaps much easier when they started, when it was just nuclear war, but since then we have gained other existential risks.”

But even when applied to nuclear warfare alone, says Sandberg, the Clock’s very design makes it less than useful as an indicator of risk, as its inexorable “countdown” model implies that global catastrophe is inevitable rather than something we can actively avoid. Furthermore, Sandberg argues that the clock’s fundamental mission – to remind humanity of how close it is to disaster – may in fact be counterproductive, stating:

“You can’t live your life at 3 minutes to midnight.”

This view is shared by Katherine Pandora, a history of science researcher at the University of Oklahoma, who argues:

“Having authorities state that an emergency is at hand is an effective way to gain someone’s attention and have them primed to take immediate action, which is the logic behind the clock’s minutes-to-midnight gambit. Asking successive generations of people to sustain a constant sense of emergency is a contradiction in terms. The unintended effects of this directive can impede a successful resolution of the issue at hand and undermine the working relationship between experts and nonexperts. I don’t think that using apocalyptic rhetoric helps us to do the hard work of discussing difficult and complicated issues in a democracy.”

Nonetheless, Pandora praises the efforts of the Bulletin of the Atomic Scientists to keep the public informed about emerging global threats, stating:

“It is the prodigious amount of research and analysis that ground the conclusions in the reports that the Bulletin of the Atomic Scientists issues that are the real tools for mobilizing discussion among all of us on critical issues.”

The Doomsday Clock has also received criticism from right-wing commentators, who accuse it of being, in the words of journalist John Merline, “little more than a Liberal angst meter.” These critics argue that despite founding editor Eugene Rabinowitch’s assertion that:

“The Bulletin’s clock is not a gauge to register the ups and downs of the international power struggle; it is intended to reflect basic changes in the level of continuous danger in which mankind lives in the nuclear age.”

… the clock’s movements are motivated merely by political ideology, moving closer to midnight during Republican administrations and farther away during Democratic ones. However, a cursory look at the clock’s history reveals this to be untrue, as the clock was backed off significantly under Richard Nixon, Ronald Reagan, and George H.W. Bush, and moved forward under Harry Truman, Lyndon Johnson, Bill Clinton, and Barack Obama. Other right-wing criticism has centred on the Bulletin’s 2017 Doomsday Clock statement in which it argued:

“Information monocultures, fake news, and the hacking and release of politically sensitive emails may have had an illegitimate impact on the US presidential election, threatening the fabric of democracy.”

This has led commentators to accuse the Bulletin of equating “fake news” with nuclear warfare as an existential risk to civilization.

But most criticisms, whether liberal or conservative, appear to miss the fundamental point of the Doomsday Clock. As the Bulletin states on its website:

“The Doomsday Clock is not a forecasting tool, and we are not predicting the future. Rather, we study events that have already occurred and existing trends. Our Science and Security Board tracks numbers and statistics—looking, for example, at the number and kinds of nuclear weapons in the world, the parts per million of carbon dioxide in the atmosphere, the degree of acidity in our oceans, and the rate of sea level rise. The board also takes account of leaders’ and citizens’ efforts to reduce dangers, and efforts by institutions—whether of governments, markets, or civil society organizations—to follow through on negotiated agreements.

“The Bulletin is a bit like a doctor making a diagnosis. We look at data, as physicians look at lab tests and x-rays, and also take harder-to-quantify factors into account, as physicians do when talking with patients and family members. We consider as many symptoms, measurements, and circumstances as we can. Then we come to a judgment that sums up what could happen if leaders and citizens don’t take action to treat the conditions.”

The Bulletin acknowledges that at its heart, the Doomsday Clock is – and has always been – a symbol, an easily digestible representation of global risk intended to spark discussion and spur action. And in response to accusations of political partisanship, the Bulletin offers a sobering reminder:

“Ensuring the survival of our societies and the human species is not a political agenda. Cooperating with other countries to achieve control of extremely dangerous technologies should not involve partisan politics. If scientists involved with the Bulletin are critical of current policies on nuclear weapons and climate change, it is because those policies increase the possibility of self-destruction.”


References

What is the Doomsday Clock?, Bulletin of the Atomic Scientists, https://thebulletin.org/doomsday-clock/

Mecklin, John, This is Your COVID Wake-Up Call: It is 100 Seconds to Midnight, Bulletin of the Atomic Scientists, https://thebulletin.org/doomsday-clock/current-time/


Doomsday Clock Moves Closest to Midnight in its 73-Year History, ABC News, January 23, 2020, https://www.abc.net.au/news/2020-01-24/doomsday-clock-moves-closest-to-midnight-in-73-year-history/11896294


Huffstutter, P.J, Doomsday Clock Moving Closer to Midnight? The Spokesman-Review, October 16, https://news.google.com/newspapers?nid=1314&dat=20061016&id=tGdWAAAAIBAJ&pg=5932,54244942006

Criss, Doug, Running the “Doomsday Clock” is a Full-Time Job. Really, CNN, January 26, 2018, https://www.cnn.com/2018/01/26/world/doomsday-clock-scientists-trnd/index.html


Benedict, Kennette, Science, Art, and the Legacy of Martyl, Bulletin of the Atomic Scientists, April 9, 2013, https://thebulletin.org/2013/04/science-art-and-the-legacy-of-martyl

Ukman, Jason, Doomsday Clock Ticks Closer to Midnight, The Washington Post, January 10, 2012, https://ift.tt/vhAmx72

Barasch, Alex, What the Doomsday Clock Doesn’t Tell Us, Slate, January 26, 2018, https://slate.com/technology/2018/01/what-the-doomsday-clock-doesnt-tell-us.html

Ghose, Tia, Is the Doomsday Clock Still Relevant? Live Science, 2016, https://www.livescience.com/53801-doomsday-clock-relevance.html


Hopper, Tristin, Why the Doomsday Clock is an Idiotic Indicator the World’s Media Should Ignore, National Post, January 25, 2018, https://nationalpost.com/news/world/why-the-doomsday-clock-is-an-idiotic-indicator-the-worlds-media-should-ignore


Merline, John, The Famed “Doomsday Clock” is Little More Than a Liberal Angst Meter, January 25, 2019, https://www.investors.com/politics/commentary/the-doomsday-clock-measures-liberal-angst-not-global-risk/

The post What’s Up With the Very Real ‘Doomsday Clock’? appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - January 27, 2023 at 12:13PM