Tuesday, July 27, 2021

Review: Chester's Poppers Cheddar Whirlz

This new snack from Chester was kind of a blast from the past, as we ate three snacks under the Chester's Poppers banner way back in 2014 — Pizza Waffle Rounds, BBQ Spirals and Cheddar Crunch Fries — but haven't seen those in years, nor any other new ones with that name, until this one appeared. ...

from Taquitos.net Snack Reviews
by July 27, 2021 at 02:23PM

Monday, July 26, 2021

Review: Lay's Doritos Cool Ranch Flavored

After many years of marketing their big snack brands largely independently, Frito-Lay is pushing some synergies lately, first with a (contrived) grudge match pitting Doritos against Cheetos for which is hotter, and now they've taken three of their non-Lay's brands (these, Cheetos and Funyuns) to create Lay's flavors based on them. ...

from Taquitos.net Snack Reviews
by July 26, 2021 at 08:32PM

Tuesday, July 20, 2021

Why Do We Call a Software Glitch a ‘Bug’?

“It’s not a bug, it’s a feature.” At one point or another we’ve all heard someone use this phrase or a variation thereof to sarcastically describe some malfunctioning piece of equipment or software. Indeed, the word “bug” has long been ubiquitous in the world of engineering and computer science, with “debugging” – the act of seeking out and correcting errors – being an accepted term of art. But why is this? How did an informal word for an insect become synonymous with a computer error or glitch?

According to the most often-repeated origin story, in 1947 technicians working on the Harvard Mark II or Aiken Relay Calculator – an early computer built for the US Navy – encountered an electrical fault, and upon opening the mechanism discovered that a moth had flown into the computer and shorted out one of its electrical relays. Thus the first computer bug was quite literally a bug, and the name stuck.

But while this incident does indeed seem to have occurred, it is almost certainly not the origin of the term, as the use of “bug” to mean an error or glitch predates the event by nearly a century.

The first recorded use of “bug” in this context comes from American inventor Thomas Edison, who in a March 3, 1878 letter to Western Union President William Orton wrote: “You were partly correct. I did find a “bug” in my apparatus, but it was not in the telephone proper. It was of the genus “callbellum”. The insect appears to find conditions for its existence in all call apparatus of telephones.”

The “callbellum” Edison refers to in the letter is not an actual genus of insect but rather an obscure Latin joke, “call” referring to a telephone call and bellum being the Latin word for “war” or “combat” – implying that Edison is engaged in a struggle with this particular hardware glitch. In a letter to Theodore Puskas written later that year, Edison more clearly defines his use of the word: “It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that “Bugs”—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.”

Where Edison himself got the term is not known, though one theory posits that it originated from a common problem plaguing telegraph systems. For almost 40 years after their introduction, electric telegraphs were limited to sending a single message at a time over a single wire. As the popularity of telegraphy rose through the mid-19th century, this limitation became a serious problem, as the only way to allow more messages to be sent was to install more telegraph wires – an increasingly inelegant and expensive solution. This led inventors around the world to seek out methods for transmitting multiple signals over a single wire – a practice now known as multiplexing. By the 1870s several inventors had succeeded in perfecting workable multiplex or “acoustic” telegraphs, which generally worked by encoding each individual signal at a particular acoustic frequency. This allowed multiple signals to be sent along a single telegraph wire, with only a receiver tuned to the sending frequency of a particular signal being able to extract that signal from among the others. Among the many inventors to develop multiplex telegraphs were Alexander Graham Bell and Elisha Gray, whose work on sending acoustic frequencies over telegraph wires would eventually lead them to discover the principles that would be used for the telephone.
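The frequency-division idea behind these acoustic telegraphs – each message keyed onto its own tone, sharing one wire, with each receiver responding only to its own frequency – can be sketched numerically. This is a toy model with assumed frequencies and a modern correlation detector, not a description of the period hardware:

```python
import math

# Toy model of frequency-division multiplexing: two telegraph channels share
# one wire, each keying its own tone on and off. A receiver "tuned" to one
# frequency recovers only that channel's signal. Frequencies, sample rate,
# and the correlation detector are illustrative assumptions.

RATE = 8000            # samples per second
F1, F2 = 400.0, 550.0  # assumed channel frequencies (Hz)

def tone(freq, key_down, n=800):
    """n samples of a tone; silence when the telegraph key is up."""
    return [math.sin(2 * math.pi * freq * t / RATE) if key_down else 0.0
            for t in range(n)]

def detect(signal, freq):
    """Correlate the shared-wire signal against a reference tone; a large
    score means this channel's key was down during the window."""
    ref = tone(freq, True, len(signal))
    score = sum(s * r for s, r in zip(signal, ref))
    return score > len(signal) / 4   # ~half the energy of a full-on tone

# Channel 1 sends "key down", channel 2 is silent; the wire carries the sum.
wire = [a + b for a, b in zip(tone(F1, True), tone(F2, False))]

print(detect(wire, F1))  # True  -- the receiver tuned to F1 hears its message
print(detect(wire, F2))  # False -- the receiver tuned to F2 hears silence
```

Because the two tones are (nearly) orthogonal over the sampling window, each receiver's correlation picks out only its own channel – the same principle, implemented acoustically with tuned reeds and relays in the 1870s.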

In any event, while these early multiplex telegraphs worked reasonably well, they had a tendency to generate phantom signals in the form of loud “clicks” that reminded many telegraph operators of the sound of an insect. Thomas Edison himself patented an electronic workaround to this problem in 1873, which he referred to as a “bug catcher” or “bug trap” – suggesting this phenomenon as a likely origin for the term.

Another hypothesis points to the word “bug” being derived from the Middle English bugge, meaning “a frightening thing” or “monster.” This root is also the source of the English words bogeyman, bugaboo, and bugbear – the latter originally referring to a malevolent spirit or hobgoblin but today used to mean a minor annoyance or pet peeve. Advocates for this hypothesis therefore posit that “bug” in this context was used in much the same manner as “gremlins,” the mythical goblins that WWII aircrews blamed for malfunctions aboard their aircraft.

Whatever the case, Edison’s frequent use of the term in his letters and notebooks led to it being widely repeated in the press, with a March 11, 1889 article in Pall Mall Gazette reporting: “Mr. Edison…had been up the two previous nights working on fixing ‘a bug’ in his phonograph—an expression for solving a difficulty, and implying that some imaginary insect has secreted itself inside and is causing all the trouble.”

The habit of Edison and his so-called “insomnia squad” of staying up all night to fix particularly stubborn technical problems was of particular fascination to the press, with Munsey’s Magazine reporting in 1916: “They worked like fiends when they [were] ‘fishing for a bug.’ That means that they are searching for some missing quality, quantity, or combination that will add something toward the perfect whole.”

The term was first formally standardized by engineer Thomas Sloane in his 1892 Standard Electrical Dictionary, which defined a “bug” as: “Any fault or trouble in the connections or working of electric apparatus.”

Three years later Funk and March’s Standard Dictionary of the English Language defined the term for the general public as: “A fault in the working of a quadruplex system or in any electrical apparatus.”

Thus by the early 20th century the term was well-established in engineering circles, and soon began making its way into everyday usage. One notable early appearance was in a 1931 advertisement for Baffle Ball – the world’s first commercially successful pinball machine – which proudly proclaimed “No bugs in this game.” Science fiction writer Isaac Asimov further popularized the term in his 1944 short story “Catch That Rabbit,” writing: “U.S. Robots had to get the bugs out of the multiple robots, and there were plenty of bugs, and there are always at least half a dozen bugs left for the field-testing.”

Despite being in use for over 70 years, it was not until the aforementioned moth incident in 1947 that the term “bug” would become inextricably associated with the field of computer science. The insect in question was discovered lodged in Relay #7 of the Harvard Mark II in the early morning hours of September 9. Later that day the night shift reported the incident to Navy Lieutenant Grace Hopper, a computing pioneer who would later go on to develop FLOW-MATIC, a direct ancestor of COBOL and among the very first high-level programming languages.

In any event, at 3:45 PM Hopper taped the slightly crispy moth into the computer’s logbook, gleefully noting beside it: “The first actual case of a bug being found.”

As British cybersecurity expert Graham Cluley notes, Grace Hopper’s whimsical logbook entry clearly indicates that the term “bug” was well-known at the time, but:

“…while it is certain that the Harvard Mark II operators did not coin the term ‘bug’, it has been suggested that the incident contributed to the widespread use and acceptance of the term within the computer software lexicon.”

The historic logbook page, complete with preserved moth, survives to this day in the collection of the Smithsonian’s National Museum of American History in Washington, DC, though it is not currently on public display. And in commemoration of the infamous incident, September 9 is celebrated by computer programmers around the world as “Tester’s Day” – reminding everyone of the vital role played by those who tirelessly hunt and slay the various glitches, bugs, gremlins, and ghosts in every machine.

If you liked this article, you might also enjoy our new popular podcast, The BrainFood Show (iTunes, Spotify, Google Play Music, Feed).

Bonus Fact

While we tend to think of software bugs as minor annoyances and inconveniences at worst, depending on what a piece of software is controlling, they can have serious real-life consequences. Among the most notable examples of this is the tragic case of the Therac-25, a computer-controlled cancer therapy machine produced by Atomic Energy of Canada Limited starting in 1982. The unit contained a high-energy linear electron accelerator which could either be aimed directly at the patient or at a retractable metal target, generating an x-ray beam that could reach tumours deeper inside the body. The machine could also be operated in “field light” mode, in which an ordinary light beam was used to line up the electron or x-ray beam on the patient.

While AECL had achieved a perfect safety record with its earlier Therac-6 and Therac-20 machines through the use of mechanical interlocks and other physical safety features, the Therac-25 dispensed with these entirely, its designers relying solely on the machine’s control software to ensure safety. Unfortunately, this software contained two serious bugs which soon resulted in tragedy. The first allowed the electron beam to be set to x-ray mode without the metal x-ray target being in place, while the second allowed the electron beam to be activated while the machine was in field light mode. In both cases, this resulted in patients being bombarded with an electron beam 100 times more powerful than intended. The initial effect was a powerful sensation of electric shock, which led one patient, Ray Cox, to leap from the table and run from the treatment room. Between 1985 and 1987 six patients in Canada and the United States received massive radiation overdoses, resulting in severe radiation burns, acute radiation poisoning, and – in the case of three of the patients – death.

A subsequent investigation revealed the truly shocking depths of AECL’s negligence in developing the Therac-25. Though the two lethal bugs had been reported during the control software’s development, because the software was copied directly from the earlier Therac-6 and Therac-20 – machines with perfect safety records – the reports were ultimately ignored.

Of course, the earlier machines relied on mechanical interlocks for safety and their software was written to reflect this, leaving the Therac-25 control software with almost no built-in failsafes and no way of communicating potentially lethal errors to the operator. Even more disturbingly, the software was never submitted for independent review and was not even tested in combination with the Therac-25 hardware until the machines themselves were installed in hospitals. Indeed, throughout the Therac-25’s development cycle little thought appears to have been given to the possibility of software error leading to dangerous malfunctions, with a Failure Modes Analysis conducted in 1983 focusing almost exclusively on potential hardware failures. Software failure is mentioned only once in the report, with the probability of the machine selecting the wrong beam energy given as 10⁻¹¹ and the probability of it selecting the wrong mode as 4×10⁻⁹ – with no justification given for either number. This absolute confidence in the software ultimately served to prolong the crisis. Following the first two overdose incidents in 1985, AECL was ordered by the FDA to investigate and submit a solution. Refusing to believe that the software could be to blame, AECL concluded that the issue lay with a microswitch used to control the positioning of the machine turntable, and in 1986 submitted this fix to the FDA. This, of course, did nothing to solve the problem, leading to three further overdoses before the actual cause was finally tracked down.

Once the fault was uncovered, the FDA declared the Therac-25 “defective” and ordered AECL to develop a full suite of corrective modifications. These were all implemented by the summer of 1987, but no sooner was the Therac-25 returned to service than another patient in Yakima, Washington, received a massive overdose, dying of radiation poisoning three months later. This incident was caused by yet another software error – a counter overflow – which caused the updated software to skip a critical safety step and withdraw the x-ray target from the electron beam. In the wake of the six incidents AECL was hit with multiple lawsuits by the families of the victims, all of which were settled out of court. Since then no further accidents have been reported, with the original Therac-25 units continuing to operate for nearly two decades without incident.
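The counter-overflow failure mode behind the Yakima accident can be illustrated with a minimal sketch. The real Therac-25 control program was PDP-11 assembly; the variable and function names below are hypothetical, but the mechanism is the same: a one-byte shared flag was incremented on every pass of a setup loop rather than set to a constant, so every 256th pass it wrapped around to zero and the safety check it guarded was silently skipped.

```python
# Minimal sketch (hypothetical names) of a one-byte counter-overflow bug of
# the kind that caused the final Therac-25 overdose: nonzero flag means
# "setup incomplete, run the safety check"; incrementing instead of setting
# makes the flag wrap to 0 every 256th pass, bypassing the check.

def run_passes(n_passes):
    """Simulate n_passes of the setup loop; return how many times the
    safety check was silently skipped."""
    flag = 0          # one-byte shared variable
    skipped = 0
    for _ in range(n_passes):
        flag = (flag + 1) % 256   # 8-bit increment: wraps at 256
        if flag != 0:
            pass                  # safety check runs: verify target in place
        else:
            skipped += 1          # flag reads 0 -> treated as "safe"
    return skipped

print(run_passes(255))   # 0 -- the check runs on every pass
print(run_passes(256))   # 1 -- the wrap silently skips one check
```

The fix is equally simple in principle: assign the flag a fixed nonzero value instead of incrementing it, so it can never wrap to the "safe" state by accident.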

The Therac-25 affair has become a seminal case study in safety and systems engineering, dramatically illustrating the dangers of blindly trusting pre-existing software and of not thoroughly testing hardware and software together as a complete system. It also serves as a stark reminder that in our modern, hyper-connected world, the effects of software are not limited to the inside of a computer; sometimes, they can slip out into the physical world – with devastating results.

Expand for References

McFadden, Christopher, The Origin of the Term ‘Computer Bug’, Interesting Engineering, June 12, 2020, https://interestingengineering.com/the-origin-of-the-term-computer-bug

Was the First Computer Bug A Real Insect? Lexico, https://www.lexico.com/explore/was-the-first-computer-bug-a-real-insect

Whyman, Amelia, The World’s First Computer Bug, Global App Testing, https://www.globalapptesting.com/blog/the-worlds-first-computer-bug-global-app-testing

Laskow, Sarah, Thomas Edison was an Early Adopter of the Word ‘Bug’, Atlas Obscura, March 16, 2018, https://ift.tt/2G08aFG

Magoun, Alexander and Israel, Paul, Did You Know? Edison Coined the Term “Bug”, IEEE Spectrum, August 1, 2013, https://spectrum.ieee.org/the-institute/ieee-history/did-you-know-edison-coined-the-term-bug

Leveson, Nancy and Turner, Clark, An Investigation of the Therac-25 Accidents, IEEE 1993, https://web.archive.org/web/20041128024227/http://www.cs.umd.edu/class/spring2003/cmsc838p/Misc/therac.pdf

Fabio, Adam, Killed by a Machine: the Therac-25, Hackaday, October 26, 2015, https://ift.tt/1PPIuc8

The post Why Do We Call a Software Glitch a ‘Bug’? appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - July 20, 2021 at 11:40PM
Article provided by the producers of one of our Favorite YouTube Channels!

That Time an Oregon Free-Love Cult Launched the Largest Bioterror Attack in US History

On September 18, 2001, one week after the 9/11 attacks, mysterious envelopes began appearing at the offices of major American news outlets including ABC, CBS, and NBC, as well as Democratic Senators Tom Daschle and Patrick Leahy. The envelopes contained a strange brown powder, which quickly caused those who came into contact with it to fall seriously ill. That powder was anthrax, a deadly biological weapon. By the time the FBI located and impounded all the envelopes, 22 people had contracted the disease, five of whom eventually died. Despite a 9-year investigation, the case has never definitively been solved, though the bulk of the FBI’s suspicion fell on Bruce Edwards Ivins, a vaccine expert at the bioweapons facility in Fort Detrick, Maryland, who committed suicide in 2008 before he could be questioned.

While the 2001 “Amerithrax” event is the most well-remembered bioterror attack in US history, it was not the first or even the largest. That dubious honour belongs to a largely forgotten incident in 1984 when a Hindu-inspired free-love cult called the Rajneeshees attempted to take over a small Oregon town by poisoning local salad bars with salmonella bacteria. It is a story truly stranger than fiction.

The Rajneesh movement was founded in 1970 by Rajneesh Chandra Mohan, an Indian philosophy professor and spiritualist better known as Bhagwan Shree Rajneesh or later simply as “Osho.” In 1974 Rajneesh founded an ashram, or commune, outside the Indian city of Poona, which soon began attracting thousands of mainly young, middle and upper-class followers from Europe and North America. His teachings, an eclectic mixture of Hinduism, Jainism, Buddhism, Taoism, Christianity, and even western psychotherapy and capitalism, denied the existence of God and promoted casual nudity and sexual freedom, placing him at odds with the more conservative Indian population. Nonetheless, the movement grew rapidly, and by the late 1970s Rajneesh had amassed over 200,000 followers in 600 meditation centres worldwide and enough personal wealth to maintain a fleet of 90 Rolls-Royces.

By the early 1980s, however, the Rajneeshees faced increasing pressure from the Indian government to leave, and in 1981 at the urging of his right-hand woman, Ma Anand Sheela – real name Sheela Silverman – Rajneesh moved his ashram to Montclair, New Jersey. After an extensive search for a larger territory in which to build his spiritualist utopia, Rajneesh purchased 65,000 acres of land called “The Big Muddy Ranch” outside the town of The Dalles in Wasco County, rural Oregon. More than 7000 followers would eventually settle in the new compound, which was incorporated later that year as Rajneeshpuram. The settlement quickly grew into a self-contained commune featuring its own communal farms, 4,200-foot airstrip, fire department, public transit system, sewage plant, and even zip code. The organizational structure of Rajneeshpuram was equally unusual. While Rajneesh was nominally in charge, upon arrival in Oregon he had taken a four-year vow of silence and rarely appeared in public outside his daily drive-throughs of the commune in his Rolls-Royce. Daily decision-making was thus left to Ma Anand Sheela and an inner circle of high-ranking women who became known as “Big Moms.” Ruthless against anyone who challenged their authority, the “Big Moms” became known among disaffected Rajneeshees as the “Dowager Duchesses.”

While the Rajneeshees initially enjoyed friendly relations with the residents of Wasco County, contributing some $35 million to the local economy, these relations soon soured as the group attempted to further expand Rajneeshpuram. Oregon zoning laws at the time placed severe restrictions on land use, and the Wasco County Commission, wary of the group’s growing population and political power, began denying them land-use permits and citing them for numerous building code violations. According to former Commission member Dan Eriksen, the Rajneeshees reacted violently to such challenges, threatening local government officials with libel suits and even death. The Commission’s fears were confirmed in early 1984 when the Rajneeshees took control of the nearby small town of Antelope by overwhelming its 75 residents in a local election. They then renamed the town “Rajneesh”, raised taxes, and carried out strange initiatives such as turning the town’s only business into a vegetarian restaurant called “Zorba the Buddha” and renaming the local recycling center the “Adolf Hitler Recycling Center.” No, really. Furthermore, this coup, along with the incorporation of Rajneeshpuram itself, gave the Rajneeshees the legal right not only to form their own police department, but also to patrol county roads and access State police training programs and even crime data networks. Rajneeshpuram thus organized a “Peace Force” of 60 officers who patrolled the roads around the commune in jeeps armed with machine guns.

Despite all this, however, attempts to expand the commune continued to be stymied by the Wasco County Commission. Furthermore, the U.S. Attorney’s Office in Portland had begun an investigation into the immigration status of many of the cult members and the legal status of Rajneeshpuram itself, threatening the commune’s very existence. Sheela and the other “Big Moms” thus realized that the only way for the commune to gain complete autonomy was to take control of the Commission itself. Fortuitously, two of the three seats on the Commission were coming up for reelection in November 1984, and so the Rajneeshees set to work trying to secure them. At first the cult attempted to find sympathetic politicians to run against the hostile commissioners, but when they failed to get enough signatures to get their preferred candidates on the ballot, they turned instead to straight-up voter fraud.

As the 15,000 registered voters in Wasco County outnumbered the Rajneeshees more than two-to-one, the cult initially planned to send members into The Dalles, the largest population centre in the county, under false names in order to vote twice. But this plan was quickly abandoned due to the high risk of discovery. Instead, the Rajneeshees launched a scheme called “Share-a-Home,” an ostensibly humanitarian venture in which some 2,300 homeless people from around the state were brought to Rajneeshpuram and given shelter and food on the condition that they vote for the Rajneeshee candidate in the upcoming election. However, on October 10 the Wasco County clerk countered this tactic by invoking an emergency rule requiring all new voters to appear in person at eligibility hearings and present their qualifications – including a minimum 20-day residency requirement to vote. The Rajneeshees filed an injunction, but this was quickly struck down. Meanwhile, the commune quickly discovered that housing and caring for more than 2,000 homeless people – many of whom were suffering from untreated mental illnesses – was rather more than they had bargained for, and there are reports of “guests” being blindfolded and forced to listen to hours of religious chanting or being drugged to keep them under control.

With their attempts to stuff the ballot box thwarted, the Rajneeshees turned to ever more drastic measures, even plotting to assassinate Charles Turner, the U.S. Attorney for Oregon, in Portland. Turner was stalked, firearms were purchased, and an assassin was even chosen, but the hit was never carried out. Another abortive plot involved crashing a small plane packed with explosives into the Wasco County courthouse. In the end, however, the Rajneeshees settled on an even more sinister option: biological warfare.

What would become the largest bioterror attack in US history was masterminded by Ma Anand Puja, a native of the Philippines who had worked as a nurse in California and Indonesia before moving to India in 1979 to join the Rajneeshees. Wielding power in the cult nearly equal to Ma Sheela, Ma Puja served as the Secretary and Treasurer of the Rajneesh Medical Corporation and the commune’s Pythagoras Clinic and Pharmacy. But she was far from the caring, benevolent nurse her responsibilities would suggest. According to one former cult member: “There was something about Puja that sent shivers of revulsion up and down my spine the moment I met her. There was nothing I could put my finger on beyond her phony, sickeningly sweet smile; it was years before she became widely-known as the Dr. Mengele of the [Rajneeshee] community, the alleged perpetrator of sadistic medical practices that verged on the criminal; my reaction to her seemed irrational [but] Sheela trusted her implicitly.”

Indeed, the mayor of Rajneeshpuram, David Knapp – then known as Swami Krishna Deva – later testified that: “[Sheela] had talked with [Rajneesh] about the plot to decrease voter turnout in The Dalles by making people sick. Sheela said that [Rajneesh] commented that it was best not to hurt people, but if a few died not to worry.”

In concocting the bioterror plan, Ma Puja reasoned that if the Rajneeshees couldn’t inflate their own voter numbers, they could suppress everyone else’s, and this she planned to do by infecting The Dalles’ water supply with bacteria, forcing large groups of voters to stay home on election day. To accomplish this, Ma Puja considered a number of different diseases, including Typhoid Fever, Tularemia, and Beaver Fever, before finally settling on Salmonella typhimurium. A common cause of food poisoning spread through poor food-preparation hygiene, Salmonella was perfect for the Rajneeshees’ purposes as it causes severe vomiting and diarrhea for 4-7 days but is very rarely fatal, killing only around 600 Americans every year. If successful, an attack would incapacitate much of the town on election day while being likely to be dismissed as a natural outbreak.

Ma Puja ordered cultures of Salmonella from a Seattle-based medical supply company called VWR Scientific, along with industrial incubators and freeze-driers in which to grow and store the cultured bacteria. As the Rajneeshee Medical Corporation was an accredited medical facility, acquiring this equipment was straightforward and attracted little suspicion. Ma Puja also ordered cultures of Typhoid, Tularemia, and Shigella Dysentery and reportedly expressed interest in cultivating and spreading HIV, but none of these other plans ever came to fruition.

The Salmonella bacteria were cultured and packaged in a secret lab at Rajneeshpuram, and by August were ready for small-scale field trials. On August 29, 1984, two members of the Wasco County Commission, Judge William Hulse and Ray Matthew, visited Rajneeshpuram on a fact-finding mission. During their visit the men were given glasses of water spiked with Salmonella, causing both to fall severely ill. Judge Hulse had to be hospitalized, and likely would have died without treatment. Whether this was intended to intimidate the Commission or simply to test the potency of the bacteria is unknown, but whatever the case soon after Ma Puja decided to move on to the next phase of testing. While selecting targets in The Dalles, she and other conspirators entered a local supermarket and contaminated some of the fresh produce by pouring Salmonella liquid over it. They also spread the agent on urinal handles and doorknobs in the Wasco County Courthouse. However, nobody reported falling ill from this attack.

Two attempts were also made to contaminate the town water supply, but on one occasion a police car arrived and scared off the conspirators, while on another they realized that they did not have enough Salmonella to effectively infect the entire town. Undeterred, Ma Puja suggested infecting the town with Giardia or “Beaver Fever” by trapping local beavers, pulverizing them, and pouring the remains into the water supply. For one reason or another, this plan was also never carried out. But in September 1984, five weeks before the election, Ma Puja and eleven others decided to carry out a full-scale dress rehearsal of their planned attack. Targeting the salad and salsa bars of 10 local restaurants, they poured Salmonella liquid from concealed plastic bags into the lettuce, salad dressing, salsa, coffee creamer, and any other communal food or condiment they could find.

The effects were dramatic. By September 24, more than 150 people had fallen violently ill with bloody diarrhea, nausea, vomiting, chills, and abdominal pain, with lab tests confirming infection with Salmonella. By the end of the month a total of 751 people would develop confirmed cases of salmonellosis, though as The Dalles lies on a major thoroughfare it is likely that many more were infected while passing through the town. The victims ranged in age from two days to 87 years old, with 45 patients requiring hospitalization. Miraculously, however, not one person died in the attack.

Yet despite these promising results, the attack did not have the effect the Rajneeshees had hoped for. Being the largest outbreak of food poisoning in the country that year, the attack attracted the attention of the Oregon public health authorities, who immediately launched an investigation. This increased scrutiny meant that the Rajneeshees were unable to launch a follow-up attack when election day finally rolled around. Furthermore, local voters, annoyed by the cult’s antics, showed up to the polls in record numbers and soundly defeated the Rajneeshee candidate, rendering the whole exercise moot. Incredibly, though many including Oregon Democratic Congressman James H. Weaver suspected that the Rajneeshees were responsible, the official Oregon Department of Health investigation concluded that the outbreak had been natural, caused by the restaurant workers’ poor hygiene.

And there the story might have ended. While Congressman Weaver continued to pressure the CDC to investigate the Rajneeshees and gave a speech in the House of Representatives accusing the cult of starting the outbreak, it would be a full year before the truth was finally revealed. On September 15, 1985, Rajneesh emerged from his four-year vow of silence to hold a press conference, in which he announced that 19 high-ranking cult members including Ma Sheela and Ma Puja had fled to Europe, and accused them of having planned and carried out numerous criminal acts including the Salmonella attack without his knowledge or consent. In response, Oregon Attorney General David B. Frohnmayer formed an emergency task force composed of Oregon State Police and FBI personnel and obtained search warrants for Rajneeshpuram. On October 2, 1985, 50 investigators raided the compound. According to Frohnmayer, they discovered evidence of extensive crimes perpetrated by the cult:

“The Rajneeshees committed the most significant crimes of their kind in the history of the United States … The largest single incident of fraudulent marriages, the most massive scheme of wiretapping and bugging, and the largest mass poisoning.”

The investigators also found evidence of previous bioterror attacks on a nursing home and medical centre, that Ma Sheela had tried to murder Rajneesh’s personal physician, and that Ma Puja had been involved in the death of Sheela’s first husband and the attempted assassination of Oregon politician James Comni in a Portland hospital.

Rajneesh fled Oregon by plane on October 27, 1985, only to be arrested when he landed in Charlotte, North Carolina and charged with 35 counts of deliberate violation of immigration law. He pleaded guilty to two counts, received a ten-year suspended sentence and a fine of $400,000, and was deported and barred from entering the United States for five years. Bhagwan Shree Rajneesh returned to India and died on January 19, 1990 at the age of 58, having never been prosecuted for the bioterror attack in The Dalles. Soon after, the Rajneeshpuram commune collapsed as disaffected members began leaving en masse to testify for the prosecution. Ma Sheela and Ma Puja were arrested in West Germany on October 28, 1985 and extradited to the United States, where they were charged with one count of attempted murder, two counts of assault, product tampering, wiretapping, and immigration offences. Ma Sheela and Ma Puja were given prison sentences of 55 and 42 years respectively, though both were released on good behaviour after serving only 29 months. Sheela later moved to Switzerland, where she ran two nursing homes.

The 1984 Salmonella attack on The Dalles has gone down as one of the most bizarre terrorist attacks in US history, and an unintentional demonstration of just how difficult it really is to commit voter fraud in America. But to Leslie Zaitz, the investigative reporter from The Oregonian newspaper who wrote the first detailed account of the attack, the real lesson of the Salmonella incident is how lax media coverage allowed the attack to go undetected for so long, and might have allowed further attacks to take place:

“If anything, the local news media were restrained and conservative in their coverage of the salmonella episode. There was nothing alarmist, nothing to trigger a public panic. More aggressive coverage perhaps would have heated up already tense community relations with the commune. Yet the benign treatment also gave the Rajneeshees comfort that they could get away with it. Fortunately, the commune collapsed before that could happen. But consider this: if they knew reporters were watching closely, would they have even tried?”

If you liked this article, you might also enjoy our new popular podcast, The BrainFood Show (iTunes, Spotify, Google Play Music, Feed), as well as:

Expand for References

Thompson, Christopher, The Bioterrorism Threat by Non-State Actors: Hype or Horror? Naval Postgraduate School. Monterey, California, December 2006, https://web.archive.org/web/20080229164603/http://www.ccc.nps.navy.mil/research/theses/thompson06.pdf

Carus, Seth, The Illicit Use of Biological Agents Since 1900, Center for Counterproliferation Research, February 2001, https://fas.org/irp/threat/cbw/carus.pdf

Grossman, Lawrence, The Story of a Truly Contaminated Election, Columbia Journalism Review, February 2001, https://web.archive.org/web/20081119154050/http://backissues.cjrarchives.org/year/01/1/grossman.asp

McCann, Joseph, Terrorism on American Soil, https://ift.tt/2RZhS2E

Bioterror’s First US Victims Offer Hope to a Nation, Taipei Times, October 21, 2001, https://ift.tt/3tVvhqA

Keyes, Scott, A Strange But True Tale of Voter Fraud and Bioterrorism, The Atlantic, June 10, 2014, https://www.theatlantic.com/politics/archive/2014/06/a-strange-but-true-tale-of-voter-fraud-and-bioterrorism/372445/

Thuras, Dylan, The Secret’s in the Sauce: Bioterror at the Salsa Bar, Atlas Obscura, January 9, 2014, https://ift.tt/16fxykV

The post That Time an Oregon Free-Love Cult Launched the Largest Bioterror Attack in US History appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - July 20, 2021 at 11:31PM
Article provided by the producers of one of our Favorite YouTube Channels!

‘Kaputnik’: America’s Disastrous First Attempt to Launch a Satellite

On July 20, 1969, astronaut Neil Armstrong stepped onto the lunar surface and uttered the immortal words “That’s one small step for man, one giant leap for mankind.” While five more Apollo crews would land on the moon over the next three years, for many that moment marked the triumphant end of the Space Race, which over the previous twelve years had pitted the United States’ scientific and industrial might against that of its arch-rival the Soviet Union. But while the Soviets never managed to match Apollo and launch their own manned lunar missions, the Space Race was not always so one-sided. Indeed, for the first several years of the Space Age the Soviets always seemed to be one step ahead, with the Americans constantly on the back foot and scrambling to keep up. And no single event epitomizes these desperate early days like Project Vanguard, the United States’ ill-fated first attempt to launch a satellite.

On October 4, 1957, the Soviet Union launched Sputnik 1, the world’s first artificial satellite, into low-earth orbit. Though little more than a 58-centimetre-diameter aluminium sphere with two radio transmitters and four antennas broadcasting a steady pulsing signal, the satellite was nonetheless a stunning technical achievement – and one which filled the Western world with a mounting sense of dread. For if the R-7 rocket that launched Sputnik could carry a satellite into orbit, it could also carry a nuclear warhead – and drop it on any point on the globe. The Cold War had just taken on a terrifying new dimension.

But while Sputnik is commonly remembered as having taken the United States completely by surprise and triggered a national panic, the truth of the matter is rather more complicated. In fact, upon hearing news of the Soviet satellite, U.S. President Dwight D. Eisenhower actually breathed a sigh of relief. Worried that a lack of reliable military intelligence would cause both superpowers to stockpile dangerous amounts of weapons, Eisenhower had proposed an ‘Open Skies’ policy whereby the United States would be allowed to conduct reconnaissance overflights of the Soviet Union and vice versa in order to keep an eye on each other’s military strength. The Soviets, however, flatly rejected the proposal, not least because at the time they possessed no means of overflying the continental United States. But with the question of whether a nation’s airspace extends beyond the atmosphere still unsettled, Eisenhower saw the launch of Sputnik – which passed over the United States several times a day – as the Soviets setting a precedent for open skies. This in turn encouraged the president to approve further overflights of the Soviet Union using the high-flying Lockheed U-2 spy plane. Unable to admit this ulterior motive, however, Eisenhower allowed himself to be portrayed by the media as an out-of-touch old man asleep at the wheel, as in a whimsical poem composed by Michigan Governor G. Mennen Williams:

Oh little Sputnik, flying high

With made-in-Moscow beep,

You tell the world it’s a Commie sky,

And Uncle Sam’s asleep

You say on fairway and on rough,

The Kremlin knows it all,

We hope our golfer knows enough

To get us on the ball

Also contrary to popular belief, the launch of Sputnik was not entirely unexpected, nor was the United States completely unprepared to answer the challenge. In fact, a U.S. satellite program had been in the works for several years, with one of the first, Project Orbiter, being proposed in 1954 by a small team from the Army Ballistic Missile Agency at Redstone Arsenal in Huntsville, Alabama. The leader of this team was none other than Dr. Wernher von Braun, who during the Second World War had led the development of the Nazi V-2 rocket, the world’s first operational ballistic missile, around 3,000 of which were launched against London and other Allied targets. Since the age of 16, von Braun had been obsessed with the dream of launching a satellite – and eventually humans – into space – so much so, in fact, that in 1944 he was arrested and imprisoned for two weeks by the Gestapo on the grounds that his work on the V-2 was focused more on his own spacefaring goals than the defence of the Third Reich. At the end of the war von Braun and many of his colleagues were captured by American forces, and under Operation Paperclip had their Nazi pasts expunged before being brought to the United States. There, building on their wartime experience, they developed the PGM-11 Redstone, essentially a larger, modernized V-2 and America’s first nuclear ballistic missile.

For Project Orbiter, von Braun proposed modifying a Redstone by elongating the propellant tanks and adding three additional solid-fuel rocket stages to create a vehicle he called Jupiter-C, which would theoretically be able to carry a small satellite into orbit. In order to come up with a suitable scientific payload for the satellite, von Braun asked his chief scientist, Ernst Stuhlinger, to find a “Nobel-level” scientist specializing in high-altitude physics. Stuhlinger immediately recommended Dr. James Van Allen of the University of Iowa, with whom he had worked on high-altitude cosmic ray research using captured V-2 rockets in the late 1940s. Using weather balloons and small research rockets, Van Allen had discovered unusually high concentrations of cosmic radiation at high altitudes, and theorized that charged particles from the sun were being trapped and concentrated by the earth’s magnetic field into large belts of radiation. But without some means of reaching above the earth’s atmosphere, he could not confirm his theory. So Van Allen readily agreed to von Braun’s proposal, and on January 26, 1956 at a symposium at the University of Michigan laid out the Army’s plan to develop and launch a small scientific satellite. Meanwhile, von Braun and his team developed the Jupiter-C under the cover of an Army program to test the re-entry characteristics of ballistic missile nosecones. The first test flight, using only two additional rocket stages, took place on November 16, 1956, the rocket reaching an altitude of 1,000 kilometres. Had the planned third stage been added it would have entered earth orbit, but von Braun was forbidden by the Pentagon to make the attempt.

But as luck would have it, world events had just provided the Army project with a legitimate reason for existing. In 1952, the International Council of Scientific Unions announced the International Geophysical Year or IGY, an 18-month period lasting from 1957 to 1958 during which teams from 67 countries would collaborate on experiments in meteorology, oceanography, seismology, cosmic rays, geomagnetism, and other earth sciences. On July 29, 1955, James C. Hagerty, President Eisenhower’s press secretary, announced that as part of the IGY, the United States would launch a small satellite into orbit. Four days later, at an astronautical conference in Copenhagen, Soviet scientist Leonid I. Sedov announced that the Soviet Union would also be orbiting a satellite in the “near future.” Nonetheless, Eisenhower emphasized that the U.S. satellite program was being undertaken in the spirit of international scientific cooperation and not as a competition with other nations. And in any case, few in the United States believed that the Soviet Union – viewed by many as a primitive backwater – actually possessed the technical know-how to deliver on their promise. So the U.S. program proceeded at a leisurely, unworried pace.

But by now von Braun was not alone in his bid to launch a satellite; the U.S. Navy was also developing its own competing program called Project Vanguard. The task of deciding who would make the attempt fell to the Ad Hoc Committee on Special Capabilities, led by U.S. Secretary of Defense Charles E. Wilson. Despite the Army already possessing a proven launcher, on September 9, 1955, the Committee announced it had chosen Project Vanguard over Project Orbiter to launch the United States’ first satellite. The decision was an entirely political one. Given the ostensibly peaceful civilian nature of the IGY satellite project, President Eisenhower believed that using an Army rocket would appear too aggressive and wished to, according to Dr. Van Allen: “…avoid revealing the propulsive capability of the United States [and] alarming foreign nations with the realization that a U.S. satellite was flying over their territories.”

Tying into Eisenhower’s desire for open skies, it was believed that the Soviets might not object to a civilian research satellite overflying their territory, setting a precedent for future military overflights. Plus there was the awkward fact that the Army’s Redstone rocket had been developed by, well, literal Nazis. By contrast, the Vanguard rocket, while developed by the Navy, had been assembled entirely from components of civilian rockets designed for peaceful research. On August 3, 1955, Project Orbiter was officially canceled. Undaunted, von Braun attempted to convince the Navy to use the Jupiter-C instead of its unproven Vanguard, even offering to write “Vanguard” in big letters on the side of the rocket. But the Navy turned him down, and von Braun contented himself with setting aside a complete Jupiter-C rocket in case it was ever needed, under the guise of performing an experiment on the long-term storage of missiles.

With Project Orbiter now on ice, Dr. Van Allen wasted no time in jumping ship to Project Vanguard and proposing his cosmic ray experiment for the Navy’s satellite. But so small was the Vanguard rocket’s maximum payload that there was no room for Van Allen’s radiation detector or any other scientific instruments. Instead, the spherical, 15-centimetre-diameter Vanguard satellite carried only two 108-MHz radio tracking transmitters powered by batteries and solar cells, as well as two thermometers to monitor the satellite’s internal temperature. The spacecraft’s diminutive size was widely mocked by the Soviets, with Premier Nikita Khrushchev referring to it as “the grapefruit satellite.”

In spite of rumours that the Soviets were making swift progress on their own satellite project, work on Vanguard carried on at a steady pace, with the first suborbital test of the rocket’s first stage, TV-0, taking place on December 8, 1956. This was followed by the two-stage TV-1 test on May 1, 1957. In June of 1957 the Soviet press announced the radio frequency on which their first satellite would broadcast its signals, but once again few in the United States paid much attention. Vanguard TV-2, the first suborbital test of all three rocket stages, was scheduled for September of that year, but technical problems resulted in significant delays.

Then, on October 4, 1957, while TV-2 was still on the launch pad, the Soviets announced that Sputnik 1 was in orbit. The news sent shockwaves through American society, with many wondering how the supposedly backwards Soviets could have accomplished such a stunning feat. One U.S. General, referring to von Braun and his team in Huntsville, supposedly exclaimed “we captured the wrong Germans!” Vanguard TV-2 was successfully launched on October 23, 1957, but this accomplishment was immediately eclipsed on November 3 when the Soviets launched yet another satellite, Sputnik 2, into orbit. But this time, the spacecraft had a passenger: a 3-year-old Moscow street dog named Laika – the first living creature to orbit the earth.

With American military and scientific prestige at an all-time low, the formerly peaceful and scientific Project Vanguard suddenly took on new urgency as the United States’ last hope of answering the Soviet threat. While the engineers had originally planned to use a dummy satellite for TV-3, the first all-up orbital flight, under intense pressure from the American press they reluctantly agreed to install the genuine flight article. Finally, on December 6, 1957, two months after the epoch-making launch of Sputnik 1, the countdown for Vanguard TV-3 began at Cape Canaveral in Florida. At 4:33 PM Greenwich Mean Time the countdown reached zero, the first stage booster ignited, and the pencil-like rocket roared off the launch pad.

Then, disaster. A mere two seconds after liftoff, the engines suddenly cut out. Then, in front of millions of Americans watching on live television, Vanguard TV-3 fell back onto the pad and exploded into a giant fireball. The tiny Vanguard satellite was thrown clear of the explosion, and in a scene witnesses described as “pathetic,” the satellite, lying bent and broken on the concrete, began transmitting its tracking signal as if it had successfully reached orbit.

The press had a field day with the disaster, headlines variously referring to Vanguard as “Flopnik,” “Dudnik,” “Oopsnik,” “Stayputnik,” and “Kaputnik.” The Soviet Union also joined in the mockery, with a Soviet delegate to the United Nations offering the United States financial aid from a fund reserved for “undeveloped countries.” The cause of the launch failure was never fully determined due to a lack of proper instrumentation, with engineers variously pointing to low fuel pressure or a loose fuel connection. But whatever the cause, the damage was done: the United States’ sense of technological superiority had been shattered. Yet America was not out of the game just yet, for thanks to Wernher von Braun’s foresight she still had an ace up her sleeve. In the wake of the Sputnik Crisis, on October 9, 1957, Secretary of Defense Wilson resigned and was replaced by Neil H. McElroy. One month later McElroy authorized Redstone Arsenal to revive Project Orbiter.

It was the moment von Braun and his team had been waiting for. The Jupiter-C – now known as Juno I – was pulled from storage and fitted with a third stage and a small 14 kg satellite called Explorer I, which among other scientific instruments carried Dr. James Van Allen’s cosmic ray detector. On January 31, 1958, Explorer I roared off the launch pad at Cape Canaveral and became the first U.S. satellite to enter orbit. While only the third satellite to be launched after Sputnik 1 and 2, Explorer I made up for its tardiness by becoming the first spacecraft to make a major scientific discovery in orbit. Readings from its onboard instruments confirmed the existence of large belts of trapped radiation girdling the earth, now known as the Van Allen Radiation Belts.

To the American public, however, the launch of Explorer I meant only one thing: America was finally back in the Space Race. But it would be a hard road to the stars, for on May 15, 1958 the Soviets launched Sputnik 3, a massive satellite nearly a hundred times heavier than Explorer I. This would be followed by a long string of Soviet space firsts, including the first spacecraft to reach the Moon, the first spacecraft to take pictures of the far side of the moon, the first animals to be safely recovered from orbit, the first man in space, the first planetary flyby, and the first man to make a spacewalk. It would not be until 1965 during Project Gemini that the Americans finally exceeded the Soviets in manned spaceflight capability – a lead they would carry all the way to the moon.

And despite being the poster child for America’s early failures in space exploration, the much-maligned Project Vanguard may have gotten the last laugh. On March 17, 1958, three months after the embarrassing TV-3 disaster, Vanguard I was successfully launched into orbit. It remains the oldest man-made object still in space, Sputnik 1 having decayed from orbit in January 1958, Sputnik 2 in April 1958, and Explorer I in March 1970. And while its radios stopped transmitting long ago, Vanguard I is still actively tracked by radar, the shape of its orbit used to map the earth’s gravitational field. It is expected to remain in orbit for another 1,000 years, a lonely relic of the heady and uncertain days at the dawn of the Space Age.


Bonus Facts:

Over the years, a number of common myths and misconceptions have grown up around Sputnik and the early days of the Space Race. For instance, after Sputnik was launched, many people claimed to be able to see the satellite with the naked eye as it passed overhead. However, given the small size of Sputnik this is unlikely; what people were actually seeing was the much larger upper stage of the R-7 rocket that launched Sputnik and which entered orbit alongside it.

Another commonly reported fact is that the steady beeping signal transmitted by Sputnik conveyed no telemetry information and was simply used to track the satellite. But this is not quite true; encoded in the pulses was data from a sensor which monitored the pressure inside the satellite’s body. This was essential, as a leak would have caused the satellite to fail. Soviet avionics were based on vacuum tubes rather than transistors and could not operate in a vacuum, so Sputnik’s body was filled with nitrogen gas and sealed shut. While somewhat bulky and crude compared to American transistorized systems, the Soviets would continue to use this unusual arrangement for years. For example, when Soviet cosmonaut Alexei Leonov became the first man to take a spacewalk on March 18, 1965, he had to exit through an inflatable airlock attached to the Voskhod 2 spacecraft’s hatch because the cabin could not be depressurized without causing the avionics to overheat. This in turn led to the mission being plagued with problems as both the airlock and Leonov’s spacesuit ballooned in the vacuum of space, preventing him from fitting back through the hatch. Leonov had to partially deflate his suit, risking the bends, before he was able to climb back aboard.

Another mission shrouded in myth and misconception is Sputnik 2 – especially regarding the ultimate fate of its passenger, the dog Laika. Following the success of Sputnik 1, the Soviet Politburo urged the space program’s chief designer, Sergei Korolev, to design and launch another satellite in time for the 40th anniversary of the Bolshevik Revolution in early November. This tight deadline left no time to design a recovery system, meaning that from the beginning Laika was destined to die in orbit. Indeed, the satellite’s designers even placed a poison pellet in Laika’s food dispenser to euthanize her at the conclusion of the mission. Early Soviet reports variously claimed that Laika had died of oxygen deprivation or been euthanized after four days in space, but more modern sources indicate that she died mere hours after reaching orbit, the victim of heat exhaustion caused by a faulty cabin thermostat. It would not be until the Sputnik 5 mission on August 20, 1960 that living creatures – the dogs Belka and Strelka – would be recovered safely from orbit.

But perhaps the greatest myth of the early Space Race is that the Soviets enjoyed success after success while the American program was plagued with endless mistakes and failures. However, this notion is largely the result of the very different way in which the American and Soviet space programs were organized. The establishment of NASA as a civilian government agency meant that the U.S. space program was from its inception an open and transparent undertaking. This meant, however, that both failures and successes would take place in full view of the public. The Soviet space program, by contrast, was run by the military and carried out under the strictest of secrecy, with few details being revealed even to the Soviet people. Missions were not announced until after they had launched, and cosmonauts’ names were not even revealed until they had reached orbit. To the outside world this gave the impression of an unbroken string of unqualified successes, when in reality the failed missions were simply not reported. And there were a lot of failed missions. For example, 23 of the 59 R-7 rocket launches conducted between May 1957 and February 1961 were unsuccessful – a failure rate of 39%, comparable to that of the closest American equivalent, the SM-65 Atlas missile. This is not to say that Soviet achievements in space were not impressive or important; only that they should be evaluated in the context of the secrecy and propaganda that pervaded the depths of the Cold War.

Expand for References

Swenson, Lloyd; Grimwood, James & Alexander, Charles, This New Ocean, NASA History Series, 1989, https://ift.tt/36N5oiQ

Ludwig, George, The First Explorer Satellites, October 9, 2004, https://ift.tt/3xU5eCl

Vanguard – a History, NASA History Series, https://history.nasa.gov/SP-4202/toc2.html

Ackmann, Martha, The Mercury 13: The True Story of Thirteen Women and the Dream of Space Flight, Random House, 2003

Berger, Eric, The First Creature in Space Was a Dog. She Died Miserably 60 Years Ago, Ars Technica, November 3, 2017, https://ift.tt/3rstZ6e

The post ‘Kaputnik’: America’s Disastrous First Attempt to Launch a Satellite appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - July 20, 2021 at 11:18PM

Thursday, July 15, 2021

Review: Ruffles Chili Cheese

When it comes to confusion over whether a new snack is really a new snack, Ruffles leads the league. ...

from Taquitos.net Snack Reviews
by July 15, 2021 at 07:22AM

Thursday, July 8, 2021

Review: Cornae Prime Corn Snack

This snack came in a package that looked like an instant ramen cup, but the pieces inside looked a lot like the bagged version of Cornae, with shapes that were kind of like Bugles, except flattened. ...

from Taquitos.net Snack Reviews
by July 08, 2021 at 11:46AM

Wednesday, June 16, 2021

That Time a Farmer was Given Ultimate Power Twice and Changed the World Forever By Walking Away Both Times

The subject of what a political leader in a democracy does after his term has ended and the merits of gracefully resigning from power has been on the news recently.

Enter the subject of today’s story, which takes place in ancient Rome at the dawn of the Republic era. The person in question was Cincinnatus, whose political ethics not only shaped the political life of generations to come, but became so linked with the essence of democratic thinking that the founders of the American nation dubbed Washington a modern Cincinnatus. So who was Cincinnatus, and what made him rather unique compared to the vast majority of his political-leader compatriots throughout history?

Lucius Quinctius Cincinnatus was born to the noble house Quinctii, possibly around 519 BC, during the last years of the Kingdom of Rome. This means he belonged to the first generation to be raised within the just recently established grand experiment that was the Roman Republic.

In the 460s BC, Rome was in turmoil, with the main issue being the representation of the plebeians in government – those of its citizens not born to noble families. At one of the violent clashes, one of the two serving consuls, Publius Valerius Publicola, was killed. Cincinnatus rose to his position as replacement via a system vaguely similar to how a vice president can replace the president in the United States.

Cincinnatus therefore served a term in the highest political office in Rome. Ultimately, however, rather than trying to cling to power like so many others, he chose to return to his private life. This was, to say the least, unusual for various reasons. For one thing, he did not step down because he was fed up with politics. Far from it: he was highly opinionated regarding the issues of his day, with a strong stance against the plebeian demands for constitutional changes that would allow them to circumscribe the decisions of the consuls.

Furthermore, he was in a very difficult financial situation because of a fine he had to pay on account of his son Caeso, who – after causing political turmoil and violence – left the city before the court had reached a sentence. In the end, Cincinnatus had to pay a rather large fine in his stead, for which he had to sell his estate and live on a small farm across the Tiber (possibly around the Trastevere region of Rome today). Thus, by stepping away he not only gave up incredible power, but returned not to the life of the wealthy noble he had been before his term in office, but to the life of a simple farmer.

While this all did nothing to advance his personal fortunes, his choice not to use his term as consul as means to broaden his political career, change his economic fortune or even to recall his son whom the republic had condemned, gained him the respect of his fellow Romans.

But the story of Cincinnatus was just beginning. Two years later, around 458 BC, Rome was once more in peril, as the army of the neighbouring Aequi broke through towards Rome, defeating one consular army while the other was far from the action.

To respond to this imminent threat, the senate decided to appoint a dictator, which at that time was a title granted by the senate to a person who would hold king-like powers for a fixed term of six months, after which power would be returned to the senate. This enabled the appointed dictator to act swiftly, without asking for permission or waiting for the conclusion of further – and often extended – senatorial debates.

Naturally the person chosen for this role had to be not only eminently capable, but also trusted to actually step away when the term was finished. Thus the senate chose Cincinnatus.

The historian Livy illustrates the scene. A group of senators approached the farm where Cincinnatus was working. He greeted them and asked if everything was in order. “It might turn out well for both you and your country,” they replied, and asked him to wear his senatorial toga before they spoke further. After he donned the garb of the office, they informed him of the senate’s mandate, hailed him as dictator and took him with them back to Rome.

Cincinnatus then got right to work mobilising the army, besieged the enemy at the Battle of Mount Algidus and returned victorious to Rome – all this in a span of two weeks.

After this huge success, all possible political exploits could have been available to him, especially as he was constitutionally allowed to stay in power for five and a half more months. Despite this, upon his return, he immediately abdicated and returned to his farm. The task at hand was complete, thus he saw no reason power shouldn’t be returned to the Senate.

Twice he could have used his position for his own gain, and twice he had not only chosen not to, but stepped away when his work was complete. But this isn’t the end of Cincinnatus’ tale.

Nineteen years later, in 439 BC, Cincinnatus was around 80 years old and once again asked to become dictator, this time to deal with inner political intrigue, as a certain Maelius was using his money to try to be crowned king – the ultimate threat against any republic. The episode ended with the death of the would-be king and again, his work done, Cincinnatus resigned after having served less than a month as dictator in this instance.

As you might expect from all of this, these practically unprecedented actions by a leader granted ultimate power made his name synonymous with civic virtue, humility, and modesty, and they continue to serve as an example of caring about the greater good.


To understand the importance of these actions one needs to zoom out and evaluate the time period in which they happened.

At the time, a republic was a novel occurrence in world history – to outsiders, not necessarily different from a weird type of oligarchy. Furthermore, except for some initial reactions from the Etruscans directly after the founding of the Republic, the system, in which the city governed itself, had not really been put to the test. It would have been completely understandable if, given the first opportunity, the city had turned back to a typical king-like government. A charismatic and incredibly popular leader like Cincinnatus could easily have been the catalyst for a return to the era of kings, had he been inclined to take power. Yet he chose not to, even after being granted ultimate authority twice.

This was crucial, as these events happened during the second generation of the Republic, and it was the deeds of the second and third generations after its founding that truly solidified belief in, and the generational tradition of, a system which would come to be one of the most influential in human history. One can easily see how, had Cincinnatus chosen to exploit his position and popularity as the vast majority of world leaders have done throughout history, history as we know it might have been vastly different.


Bonus Facts:

Cincinnatus as a role model had many imitators throughout time – some more successful than others.


Continuing with Rome, during the Late Republic the politician Sulla was, let’s say… controversial, to say the least. You know that retired authoritarian Navy SEAL commander from any movie? Well, multiply this by ten, add some crazy slaughtering frenzies and there you have it. However, in 79 BC, after restoring order to the Roman Republic, and having been dictator since 81 BC, he resigned.

His supporters liked to compare this to Cincinnatus, but it was a rather different situation, seeing as he did not step down to resume a simple life, but rather to write his memoirs in a fancy resort. Plutarch states that he retired to a life of luxury where “He consorted with actresses, harpists, and theatrical people, drinking with them on couches all day long.” So rather than stepping down to a simple life, it was more of a retirement package filled with partying and bliss, without the cares, intrigue, and dangers that come with being dictator of Rome.

In another contrast, his reforms did not ultimately make the impact he had hoped, and their results were completely overturned after his death, with the Empire being founded just a few decades later.


Another controversial Roman leader – now in the not-so-brand-new empire edition – is Diocletian. Ruling as emperor from 284 to 305 AD, Diocletian achieved what few did during the so-called ‘Crisis of the Third Century’: he not only survived long enough to establish political reforms, but actually managed to stabilize the empire for the time being. In 305, he did what no Roman emperor had done before: he abdicated voluntarily and retreated to his palace on the Dalmatian coast – now the historic core of Split in modern-day Croatia – where he famously tended to his vegetable gardens.

Diocletian’s tetrarchy – the splitting of imperial rule among four men – did not even outlast his retirement, collapsing into renewed chaos before his death in 311 AD, and in 308 he was asked to return to power to help fix it. To this he replied, “If you could show the cabbage that I planted with my own hands to your emperor, he definitely wouldn’t dare suggest that I replace the peace and happiness of this place with the storms of a never-satisfied greed.”

While at first this may seem like the perfect comparison to Cincinnatus, it should also be stated that the reason for his retirement was first and foremost Diocletian’s failing health and his wish to live out his last days peacefully rather than dealing with the political intrigue of the day. In fact, in contrast to Cincinnatus, Diocletian’s attitude can be seen more as abandoning the empire in a time of great need, something even the 80-year-old Cincinnatus was unwilling to do.

George Washington

Skipping ahead hundreds of years and a vast number of governing changes in the Old World, the American nation appeared on the world scene with momentum. One of its most peculiar characteristics was the idea of a blend of republic and democracy, with a small hint of dictator thrown in, all carefully balanced to try to produce a system of government combining the best of human governing systems while mitigating the downsides. Today it might seem trivial, but with very few exceptions – like, say, the Netherlands – Western countries at the time had a king as figurehead, with varying degrees of authority, even in cases where parliamentarism had had a long tradition, as was the case in England.

For many, this experiment of reviving a political system based on ancient Rome was seen as weird, even eccentric. One of the many concerns was the stability of the system. Would Washington – Commander in Chief of the Continental Army and someone vastly popular with the general public and politicians alike – step down after victory?

Well, no. No, of course he wouldn’t, he would become a king or something amounting to the same position, just using a different title and… what? He… he actually left office? But wasn’t he very popular?

Yes. Yes, he was. And paralleling Cincinnatus, he left office because he respected the constitution and the experiment that was this new form of government, a fact that demonstrated – among other qualities – civic virtue and modesty of character.

In a final appearance in uniform he gave a statement to Congress: “I consider it an indispensable duty to close this last solemn act of my official life, by commending the interests of our dearest country to the protection of Almighty God, and those who have the superintendence of them, to his holy keeping.”

It is difficult to imagine today, but stepping down after his presidential term was a sensation. Consider the counterexample of, say, Napoleon crowning himself emperor, or other personalities who would do anything to remain in power. Washington’s resignation was acclaimed at home and abroad, and showed a skeptical world that the new republic might just not degenerate into chaos or into something more familiar to the world at the time.

The parallels with Cincinnatus are obvious and were made even then. After the fact, a society of veterans of the American Revolutionary War, the ‘Society of the Cincinnati’ was founded, with the motto Omnia relinquit servare rempublicam (“He relinquished everything to save the republic”). The first major city to be founded after the war was then aptly named Cincinnati, which is the genitive case of Cincinnatus, meaning ‘belonging to / that of Cincinnatus’.

Expand for References

Chernow, Ron (2010). Washington: A Life

Bringmann, Klaus (2007). A History of the Roman Republic

The History of Rome podcast, part 7: https://thehistoryofrome.typepad.com/the_history_of_rome/page/5/

The post That Time a Farmer was Given Ultimate Power Twice and Changed the World Forever By Walking Away Both Times appeared first on Today I Found Out.

from Today I Found Out
by Nasser Ayash - June 16, 2021 at 12:00AM
Article provided by the producers of one of our Favorite YouTube Channels!

Is it pronounced “Jif” or “Gif”?

It is the single most profound question of the 21st Century, a debate which has dominated intellectual discourse for more than three decades. Some of the greatest minds and institutions in the world have weighed in on the issue, from top linguists and tech giants to the Oxford English Dictionary and even the President of the United States. Yet despite 30 years of fierce debate, controversy, and division, we are still no closer to a definitive answer: is it pronounced “gif” or “jif”?

On its face, the answer might seem rather straightforward. After all, the acronym G-I-F stands for Graphics Interchange Format. “Graphics” has a hard G, so G-I-F must be pronounced “ghif.” Case closed, right? Well, not quite. As is often the case, things aren’t nearly as simple as they might appear.

The Graphics Interchange Format was first introduced in June of 1987 by programmer Steve Wilhite of the online service provider Compuserve. The format’s ability to support short, looping animations made it extremely popular on the early internet, and this popularity would only grow over the next two decades, with the Oxford English Dictionary declaring it their ‘Word of the Year’ in 2012.

As its creator, Wilhite should be the first and final authority on the word’s pronunciation. So how does he think we should say it?


Yes, that’s right: despite all arguments to the contrary, the creator of everyone’s favourite embeddable animation format insists that it is pronounced with a soft G. According to Wilhite, the word is a deliberate reference to the popular peanut butter brand Jif; indeed, he and his colleagues were often heard quipping “choosy developers choose JIF” – a riff on the brand’s famous slogan “choosy mothers choose JIF.” And he has stuck to his guns ever since. When presented with a Lifetime Achievement Award at the 2013 Webby Awards, Wilhite used his 5-word acceptance speech – presented, naturally, in the form of an animation – to declare: “It’s pronounced ‘jif’, not ‘gif’.”

In a subsequent interview with the New York Times, Wilhite reiterated his stance: “The Oxford English Dictionary accepts both pronunciations. They are wrong. It is a soft ‘G,’ pronounced ‘jif.’ End of story.”

While the debate should have ended there, language is a strange and fickle thing, and despite Wilhite’s assertions a large segment of the population continues to insist that the hard-G pronunciation is, in fact, the correct one. In 2020 the programmer forum StackExchange conducted a survey of more than 64,000 developers in 200 countries, asking how they pronounce the acronym. A full 65% backed the hard G and 26% the soft G, with the remainder spelling out each individual letter – “G-I-F.” This agrees with a smaller survey of 1,000 Americans conducted by eBay Deals in 2014, in which the hard G beat the soft G 54% to 41%. However, as The Economist points out, people often base their pronunciation of new or unfamiliar words on that of similar existing words, and the prevalence of the hard or soft G varies widely from language to language. For example, Spanish and Finnish have almost no native soft-G words, while Arabic almost exclusively uses soft Gs. Those in countries that predominantly use hard Gs make up around 45% of the world’s population and around 79% of the StackExchange survey respondents. Nonetheless, even when these differences are corrected for, the hard G still narrowly beats out the soft G, 44% to 32%.

In the wake of Wilhite’s Webby Award acceptance speech, many prominent figures and organizations have publicly come out in favour of the hard-G pronunciation. In April 2013 the White House launched its Tumblr account with a graphic boldly announcing that its content would include “Animated GIFs (Hard G),” while during a 2014 meeting with Tumblr CEO David Karp, U.S. President Barack Obama threw his hat into the ring, declaring: “[It’s pronounced GIF.] I’m all on top of it. That is my official position. I’ve pondered it a long time.”

Many backers of the hard-G pronunciation, like web designer Dan Cederholm, focus on the pronunciation of the acronym’s component words, with Cederholm tweeting in 2013: “Graphics Interchange Format. Graphics. Not Jraphics. #GIF #hardg”

However, this argument ignores the many other instances in which the pronunciation of an acronym does not line up with that of its components. For example, while the A in “ATM” and “NATO” stand for “Automatic” and “Atlantic,” respectively, we do not pronounce them as “Awe-TM” or “Nah-tow.” Many also point out that there already exist words such as “jiffy” in which the same sound is produced using a J, but this too ignores exceptions such as the now-archaic spelling G-A-O-L for “jail.”

So if common sense and everyday usage can’t settle the debate, then how about the rules of the English language? As noted by the good folks at Daily Writing Tips, words in which the G is followed by an e, i, or y – like giant, gem, or gym – are more often than not pronounced with a soft G, while all others are pronounced with a hard G. According to this rule, then, “G-I-F” should be pronounced the way Steve Wilhite originally intended: as “jif.” However, there are many, many exceptions to this rule, such as gift, give, anger, or margarine. In an attempt to clear up the matter, in 2020 linguist Michael Dow of the University of Montreal surveyed all English words containing the letters “G-I,” grouping them according to pronunciation. The results seemed to indicate that the soft G is indeed more common, with about 65% of such words using that pronunciation rather than the hard G. However, one thing this argument misses is that many of these soft-G words, like elegiac, flibbertigibbet, and excogitate, are rarely used in everyday communication. When the actual frequency of each word’s use is corrected for, the numbers of commonly used hard- and soft-G words become about equal.
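For the curious, the rule of thumb described above is simple enough to express in a few lines of code. The following is a toy sketch (the function name and word lists are our own illustration, not from any linguistic source), and it also shows exactly where the rule breaks down:

```python
def predicted_soft_g(word: str) -> bool:
    """Rule of thumb: the first 'g' in a word is predicted soft
    (as in 'gem') if followed by e, i, or y; otherwise hard (as in 'gap')."""
    word = word.lower()
    i = word.find("g")
    return i != -1 and i + 1 < len(word) and word[i + 1] in "eiy"

# The rule gets the classic soft-G examples right...
for w in ["giant", "gem", "gym"]:
    assert predicted_soft_g(w)   # predicted soft, and actually soft

# ...but it also predicts 'soft' for well-known hard-G exceptions:
for w in ["gift", "give", "anger"]:
    assert predicted_soft_g(w)   # predicted soft, but actually hard

# ...and 'hard' for reverse exceptions like 'margarine' (soft G before 'a'):
assert not predicted_soft_g("margarine")
```

By this heuristic “gif” would indeed come out as “jif,” which is precisely why the exceptions matter so much to the debate.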

The fundamental problem with such rules-based approaches is that, unlike many other languages, English evolved rather chaotically, without the guidance of a central regulatory authority like the Académie Française. Consequently, English has little in the way of a consistent set of pronunciation rules, and the pronunciation of any given word depends largely on its specific etymology, common usage, or even the geographic region where it is spoken. Thus, as far as the gif/jif debate is concerned, the linguistic jury is still very much out.

But of course, it wouldn’t be America without a major corporation weighing in on the issue. On May 22, 2013, shortly after Steve Wilhite received his Webby Award, Jif brand peanut butter took to Twitter with a post reading simply: “It’s pronounced Jif® .”

Seven years later, the brand teamed up with gif website GIPHY to release a limited-edition peanut butter jar labeled “GIF” instead of “JIF.” In an interview with Business Insider, Christine Hoffman explained: “We think now is the time to declare, once and for all, that the word ‘Jif’ should be used exclusively in reference to our delicious peanut butter, and the clever, funny animated GIFs we all use and love should be pronounced with a hard ‘G.’”

Alex Chung, founder and CEO of Giphy, agreed, stating in a press release: “At Giphy, we know there’s only one ‘Jif’ and it’s peanut butter. If you’re a soft G, please visit Jif.com. If you’re a hard G, thank you, we know you’re right.”

Yet despite such efforts to force a consensus, the debate continues to rage and shows no signs of stopping anytime soon. While deferring to Steve Wilhite’s originally intended pronunciation might seem like the most logical solution, that just isn’t how language works – as John Simpson, Chief Editor of the Oxford English Dictionary, explains: “The pronunciation with a hard g is now very widespread and readily understood. A coiner effectively loses control of a word once it’s out there.”

As evidence, Simpson cites the example of “quark,” a type of subatomic particle. The word, derived from a passage in James Joyce’s 1939 novel Finnegans Wake, was coined in 1963 by physicist Murray Gell-Mann and originally rhymed with “Mark.” Over the years, however, the word evolved and is today pronounced more like “cork.”

Closer to the web, the creator of the world’s first wiki, WikiWikiWeb – Howard G. “Ward” Cunningham – also pronounced that word differently than most people do today. As for the inspiration for the name: during a trip to Hawaii, Cunningham was informed by an airport employee that he needed to take the “wiki wiki” bus between the airport’s terminals. Not understanding what the person was telling him, he inquired further and found out that “wiki” means “quick” in Hawaiian; repeating the word gives it additional emphasis, so “wiki wiki” means “very quick.”

Later, Cunningham was looking for a suitable name for his new web platform. He wanted something unique, as he wasn’t copying any existing medium, so something simple like how email was named after “mail” wouldn’t work. He eventually settled on wanting to call it something to the effect of “quick web,” modeled after Microsoft’s “QuickBASIC” name. But he didn’t like the sound of that, so he substituted the Hawaiian “wiki wiki” for “quick,” using the doubled form as it seemed to fit; as he stated, “…doublings in my application are formatting clues: double carriage return = new paragraph; double single quote = italic; double capitalized word = hyperlink.” The program was also extremely quick, so the “very quick” doubling worked in that sense as well.
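The three “doubling” clues Cunningham describes can be sketched as a toy renderer. To be clear, this is our own illustrative reconstruction in Python, not Cunningham’s actual code (which was a Perl CGI script), and the HTML it emits is simplified:

```python
import re

def render_wiki(text: str) -> str:
    """Toy renderer for Cunningham's three doubling clues:
    double carriage return = new paragraph, double single quote = italic,
    double capitalized word (CamelCase) = hyperlink."""
    html_parts = []
    for para in text.split("\n\n"):  # double carriage return = new paragraph
        # double single quote = italic
        para = re.sub(r"''(.+?)''", r"<i>\1</i>", para)
        # double capitalized word = hyperlink (e.g. WikiWikiWeb)
        para = re.sub(r"\b([A-Z][a-z]+[A-Z]\w*)\b",
                      r'<a href="/cgi/wiki?\1">\1</a>', para)
        html_parts.append(f"<p>{para}</p>")
    return "\n".join(html_parts)
```

For example, `render_wiki("I read ''Moby Dick'' on WikiWikiWeb.")` italicizes the title and turns the CamelCase word into a link, which is essentially how early wiki pages grew by naming not-yet-written pages.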

The shorter version of the name – calling a wiki just “wiki” instead of “Wiki Wiki” – came about because Cunningham’s first implementation of WikiWikiWeb named the original CGI script “wiki,” all lowercase and abbreviated in the standard Unix fashion. Thus, the first wiki URL was http://c2.com/cgi/wiki. People latched onto this and simply called it a “wiki” instead of a “Wiki Wiki.”

So how was “wiki” originally pronounced? “We-key,” rather than the way most today pronounce it, “wick-ee.” However, given the popularity of the mispronunciation, as with “gif” now being popularly pronounced differently than its creator intended, Cunningham and others have long since stopped trying to correct people.

Going back to gif vs. jif: in the end, the choice is entirely a matter of personal preference, and as many a linguist will tell you, how you use a word ultimately doesn’t matter as long as you are understood – and few are going to get confused on this one. But if you’d like to pronounce it the way its creator intended, go with jif, and if you’d like to follow the crowd like sheep, go with gif.


Expand for References

Locker, Melissa, Here’s a Timeline of the Debate About How to Pronounce GIF, Time Magazine, February 26, 2020, https://ift.tt/2VpDmW0

Biron, Bethany, Jif is Rolling Out a Limited-Edition Peanut Butter to Settle the Debate Over the Pronunciation of ‘GIF’ Once and For All, Business Insider, February 25, 2020, https://www.businessinsider.com/jif-campaign-settle-debate-pronunciation-of-gif-2020-2

Gross, Doug, It’s Settled! Creator Tells Us How to Pronounce ‘GIF,’ CNN Business, May 22, 2013, https://www.cnn.com/2013/05/22/tech/web/pronounce-gif/index.html

GIF Pronunciation: Why Hard (G) Logic Doesn’t Rule, Jemully Media, https://jemully.com/gif-pronunciation-hard-g-logic-doesnt-rule/

Nicks, Denver, WATCH: Obama Takes a Stand in the Great GIF Wars, Time, June 13, 2014, https://time.com/2871272/obama-tumblr-gif-wars/

McCulloch, Gretchen, Why the Pronunciation of GIF Really Can Go Either Way, WIRED, October 5, 2015,

Belanger, Lydia, How Do You Pronounce GIF? It Depends on Where You Live, Entrepreneur, June 20, 2017, https://www.entrepreneur.com/article/296674

Webb, Tiger, Is it Pronounced GIF or JIF? And Why Do We Care? ABC Radio National, August 9, 2018, https://www.abc.net.au/news/2018-08-10/is-it-pronounced-gif-or-jif/10102374

The post Is it pronounced “Jif” or “Gif”? appeared first on Today I Found Out.

from Today I Found Out
by Gilles Messier - June 16, 2021 at 08:45PM