Epidemiology, Infectious Disease, Uncategorized

Zombies!

Imagine for a moment that you are a lancet fluke (Dicrocoelium dendriticum), a tiny species of trematode flatworm with a very complicated life cycle. To get your offspring into the host they need (a sheep or other ruminant), you need not be proactive.

You lay a large cluster of eggs in the intestinal tract of your host animal, and your work is done. The embryonic eggs, dropped in clusters embedded in the feces of the host animal, are now outside in nature. This is where the journey of a thousand miles (so to speak) begins.

The adult produces both eggs and sperm and can self-fertilize, though this is probably rarely necessary, as tens of thousands of these tiny creatures are commonly found in a single sheep. As in many higher animals (including humans), the breeding interval depends on how often one fluke comes into contact with another; failing that, it can always self-fertilize.

Now on the outside, nature has provided a method for the egg to develop into a larva, and for that larva to get back into another sheep or other ruminant. This involves two other tiny creatures: a snail and an ant.

The embryonic eggs of the lancet are covered in a tough layer and are ingested, along with bits of fecal matter, by snails. In the digestive tract of the snail, the eggs hatch into larvae called miracidia. The larvae metamorphose into sporocysts (an asexual stage) and take up residence in the digestive gland of the snail. After a few months, these sporocysts produce cercariae (the final free-swimming larval form), which invade the snail's simple lung.


As more larvae develop and migrate into the snail's lung, it eventually fills, irritating the lung tissue and causing the snail to cough, expelling the larvae in balls of mucus. Hundreds of lancets may be held in a single mucus ball, and the snail's mucus keeps the larvae from drying out. The lancet has now traveled out of the sheep as an egg, and through a snail in its larval stage. Enter the ant.


Formica fusca is an ant found in the United States. These ants gather the mucus balls as a source of moisture; when they ingest the mucus, however, they also ingest dozens of lancets. The lancets bore holes in the ant's esophagus, preventing themselves from being swallowed, and secrete a material that seals the holes, keeping the ant alive: the parasite needs a healthy, mobile ant for the next leg of the journey. Most of the lancets migrate to the posterior portion of the abdomen and continue developing there, but one manipulates the subesophageal ganglion of its ant host. These ganglia control muscles, allowing the parasite to alter the behavior of the ant.

Instead of staying in the nest with her sisters (except for the drones, which die after mating, all ants are female), at sundown the infected ant leaves the nest, climbs a tall blade of grass, and clings there with her mandibles. If the ant is still alive at sunup, the lancet releases control, and the ant returns to its normal daily routine. Both the ant and the lancet would die if left exposed in the sunlight, so allowing the ant to return to its duties keeps them both alive. Temporarily.

Sooner or later, a grazing animal will come along and eat that blade of grass, ant and all. Once ingested, the ant quickly dissolves in the stomach of the sheep or other grazing animal. Not so the lancet. Upon reaching the duodenum (the part of the small intestine just beyond the stomach), the young lancets follow the scent of bile up the bile duct and into the liver. There they take up residence, feeding on the host's nutrients for the rest of their lives. After a few months in the liver, the lancets reach the adult stage and soon begin to produce eggs, and the entire process starts over again.

Humans can and do play host to lancets, and because the bodies of these parasites are long and narrow, infections are most frequently confined to the distal parts of the bile ducts. Because these infections reside in the biliary tree, symptoms are generally mild and often go unnoticed. In children, the infections can produce biliary colic and digestive disturbances, among them bloating and diarrhea. In extreme cases, the biliary epithelium may become inflamed, causing enlargement of the liver, or cirrhosis.

Reader questions, Uncategorized

Could we blow up the world with nuclear weapons?

Ever since the latest ruler of North Korea began testing missiles, the threat of detonation of nuclear devices has been more often on the minds of the public. Simply stated, as long as these weapons of annihilation exist, so too will the temptation to use them. Although military experts speak of the survivability of what they characterize as a limited exchange, they are talking about survivability in the short term. If one nation launches nuclear weapons, it is likely that others who possess such weapons will also use them. The likely result would be a full nuclear exchange, rather than the limited engagement politicians so calmly reference. That being said, the opinions rampant on Facebook that such a war would "shatter the Earth" are complete hokum. Still, a full nuclear exchange would alter the world we live in, in drastic ways.

While many films and books have tried to demonstrate the death and destruction that would result from the initial blast and resulting fallout of a full nuclear exchange, few have studied the likely effects on the ecosystem of our planet. Scientists who focus on the climatic changes throughout Earth’s history describe how a change of just a few degrees can have devastating, planet-wide effects. Some suggest that a nuclear exchange would prompt a “nuclear winter.” Dr. Carl Sagan and others introduced this idea in 1983, in the journal Science. In this theory, after the explosions of a nuclear exchange have stopped, the real devastation will be just beginning.

The spread of ash and smoke in the atmosphere from fires would partially obstruct sunlight, leading to a drop in global temperatures of as much as 5-10 degrees Celsius within a few months. Conservative models suggest that a change of even two degrees Celsius could unbalance the ecosystem, threatening the survival of some species. Such a change would drastically alter established weather patterns, bringing drought to some areas and monsoon conditions to others. The migration of species would be disrupted, and some would die off as a result of these climatic changes.

Major climatic upheaval from smoke and ash in our atmosphere is more than an idle guess; it is based on events that have occurred on our planet every 60-80 million years, the most recent around 66 million years ago.

In 1979, Dr. Walter Alvarez, distinguished professor of Earth and Planetary Science at the University of California, Berkeley, was sifting through sediments from Italy that contained significant amounts of a rare element known as iridium. Iridium is rare on Earth but found in large quantities in meteorites. Alvarez was finding it in sediments dating to the boundary between the Cretaceous and Tertiary periods, known to geologists as the K-T boundary. Alvarez's discovery and the research that followed lent support to an impact theory of the periodic mass extinctions.

Recently, two researchers have uncovered new evidence supporting the link between cyclical comet or asteroid showers and mass extinctions, including the one they believe killed the remaining dinosaurs some 66 million years ago. Michael Rampino, a geologist at New York University, and Ken Caldeira, an atmospheric scientist at the Carnegie Institution for Science, reviewed the mass extinctions in the geological record and found that roughly every 26 million years, huge impacts of space debris were accompanied by major extinctions.

Although we don't feel it, our planet is in constant movement. Not only is the Earth rotating at nearly 1,000 miles per hour (at the equator), it is orbiting the Sun at nearly 19 miles every second. Our solar system itself is in constant motion, traveling at over 138 miles per second, along with the stars closest to Earth, through a minor spiral arm of the Milky Way Galaxy called the Orion Spur. Our galaxy is so large that even at these speeds, it takes about 230 million years for us to make one orbit. And it doesn't stop there. The Milky Way itself is also moving, being drawn gravitationally toward the Andromeda Galaxy at roughly 70 miles every second; because the two galaxies are nearly 2.5 million light-years apart, even at that speed the merger will take about 4 billion years.
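
These figures are easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python; the ~26,000-light-year distance from the Sun to the galactic center is my assumption (the text does not state it), while the speed is the one quoted above:

```python
import math

# Back-of-the-envelope check of the galactic-orbit figure quoted above.
# Assumption (not stated in the text): the Sun orbits the galactic
# center at a radius of roughly 26,000 light-years, on a circular path.
MILES_PER_LIGHT_YEAR = 5.879e12
SECONDS_PER_YEAR = 3.156e7

orbit_radius_mi = 26_000 * MILES_PER_LIGHT_YEAR  # ~1.5e17 miles
speed_mi_per_s = 138                             # speed quoted in the text

circumference_mi = 2 * math.pi * orbit_radius_mi
period_years = circumference_mi / speed_mi_per_s / SECONDS_PER_YEAR

print(f"One orbit of the galaxy: ~{period_years / 1e6:.0f} million years")
# Prints roughly 220 million years, consistent with the ~230 million
# years quoted above.
```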

As our solar system slowly moves around the galaxy (slow being a relative term), we are brought into contact with other space debris: comets, asteroids, and other objects large and small. The result is that we are continually peppered with debris from space, although most of it is dust-sized. An estimated 40,000-60,000 objects heavier than 10 kilograms fall to Earth every year, most landing in the ocean. Now and then, larger objects fall to Earth. In 2013, Ukrainian astronomers located an object named 2013 TV135, which passed very near the Earth. The space rock was about a quarter-mile in diameter. Headlines surfaced on Google announcing that, had the object struck, it would have "blown up the Earth." Like much on Google, the facts are very different. Had the object struck, it would most certainly have caused a great deal of devastation.

A collision with 2013 TV135 would have released energy equivalent to 3,300 megatons of TNT, about the same as half of all the world's remaining nuclear weapons detonated at the same time, in the same place. The impact would have left a crater about 12 miles across and perhaps a quarter-mile deep. The energy released at the instant of impact would have vaporized about a hundred million cubic meters of rock, producing a significant earthquake. If the collision occurred in water, we could add a tsunami to the above description. Nasty, but certainly not enough to cause a mass extinction of life on Earth, let alone "shatter the planet," as the Facebook claims would have it.
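
The 3,300-megaton figure can be roughly reproduced from the rock's size using nothing more than kinetic energy, E = ½mv². A sketch, assuming a typical stony-asteroid density and impact speed (neither figure is from the text):

```python
import math

# Rough impact-energy estimate for a 2013 TV135-sized asteroid.
# Assumed values (typical for a stony asteroid; not from the text):
density_kg_m3 = 3000        # stony-asteroid bulk density
velocity_m_s = 17_000       # typical Earth-impact speed, ~17 km/s

diameter_m = 400            # ~1/4 mile, as in the text
radius_m = diameter_m / 2

mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m**3
energy_j = 0.5 * mass_kg * velocity_m_s**2     # kinetic energy, E = 1/2 m v^2

JOULES_PER_MEGATON_TNT = 4.184e15
print(f"~{energy_j / JOULES_PER_MEGATON_TNT:,.0f} megatons of TNT")
# Prints on the order of 3,000-4,000 megatons, in line with the
# ~3,300-megaton figure quoted above.
```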

The piece of rock and iron that struck Arizona about 50,000 years ago was only about 150 feet across, yet it left a crater a mile wide and nearly 600 feet deep, and vaporized nearly 175 million tons of rock. Although rare, large space objects do collide with the Earth. There have been at least ten known major impacts, each causing a massive loss of species. These are known as Extinction-Level Events, or ELEs.

The Earth has experienced at least ten major ELEs caused by impacts of debris from space. The largest impact crater so far discovered is the Vredefort Crater, formed some 2 billion years ago by an asteroid nearly 9 miles in diameter, which left a crater with a radius of 118 miles. Next in size, the Sudbury Basin Crater in what is today Northern Ontario is 81 miles in diameter and was formed 1.8 billion years ago by the impact of a large comet. The 56-mile Acraman Crater was formed about 580 million years ago when an asteroid struck what is today Australia. About 215 million years ago, the region that is today Quebec suffered a major impact, resulting in a crater 62 miles wide. The best known of these ancient impacts, the Chicxulub Crater, is over 90 miles (150 kilometers) wide, formed 66 million years ago by a chunk of rock and iron 6 miles wide that helped sound the death knell for all large life forms, including the dinosaurs.

Before the impact that led to their extinction, the dinosaurs had dominated Earth for more than 150 million years, the ruling animals at least on land. The few mammal species of the time were mouse-sized, nocturnal, and secretive, spending most of their time and energy evading the much larger dinosaurs. With the Chicxulub impact, almost all animals larger than about 50 pounds became extinct. Not all at once, and not overnight.

The conditions after the Chicxulub impact were extremely hostile to life. The Sun was blocked by the ash and debris thrown into the atmosphere. After six months or so, the skies cleared somewhat, and acid rain fell. Most plants withered and died, and the Earth's climate rapidly cooled. When the skies finally cleared, the enormous amounts of ice and snow that blanketed the Earth melted, putting a great deal of water vapor into the atmosphere and creating a greenhouse that rapidly heated the planet. This Cretaceous-Paleogene extinction event marks the K-T boundary. The climate disruption from the Chicxulub impact helped bring about the extinction of an estimated 75% of all species, including almost all of the dinosaurs.

Amazingly, some species not only survived, they thrived. The mammals, at first small and timid, moved into the niches left empty of larger animals, becoming the genetic ancestors of the mammals alive today, including us.

The oceans also experienced a significant loss of species, including top predators like the mosasaurs. These animals were lizards, not dinosaurs, and some were enormous, rivaling even the largest whales of today. All life in the ocean depends on a food chain that begins with plankton, and plankton requires sunlight. If sunlight is diminished for very long, plankton dies, and the food chain collapses. The collapse of the ocean food chain signaled the extinction of a vast number of species, including large fish, most sharks (although a few smaller species survived), several species of mollusks, and most species of plankton.

While almost all of the dinosaurs became extinct, one small group of bird-like dinosaurs, the maniraptorans, survived. It is thought that their beaks allowed them to feed on seeds, keeping them fed even though the plants had died; seeds survive conditions that kill off the plants, and any animal that can break open a seed can find nutrients. This handful of species gave rise to the birds we have today. A few species of crocodiles survived as well; crocodiles are the only land animals over 50 pounds known to have survived the die-off, and the crocodiles of today look very much like their fossilized ancestors from 80 million years ago.

Once the skies cleared and the planet warmed again, the survivors, most notably the mammals, were able to diversify and fill the niches left by the demise of larger animals. From a handful of small, mouse-sized species evolved new forms ranging from horses and pigs to bats and whales, to primates, including Homo sapiens.

Currently, the nations of the world maintain a stockpile of nuclear weapons. Although exact figures are secret, the Federation of American Scientists estimates there are around 20,000 nuclear warheads, nearly all of them Russian and American. While the explosive power of these weapons varies, the total yield of all weapons is thought to be about 10,000 megatons (the bombs dropped on Hiroshima and Nagasaki were about 0.015 megatons each); the energy released by the Chicxulub impact was equivalent to about 100 teratons of TNT, or roughly 2 million of the largest nuclear weapons ever built. So obviously, even if humanity became so self-destructive as to detonate all of its nuclear weapons, the world would keep on turning. About that….
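
The mismatch in scale is easier to appreciate as arithmetic. A minimal sketch using only the figures quoted above:

```python
# Comparing the combined nuclear arsenal with the Chicxulub impact,
# using the figures quoted in the text.
arsenal_mt = 10_000      # estimated total yield of all warheads, megatons
chicxulub_mt = 100e6     # 100 teratons of TNT = 100 million megatons
hiroshima_mt = 0.015     # approximate yield of the Hiroshima bomb

print(f"Chicxulub / full arsenal: {chicxulub_mt / arsenal_mt:,.0f}x")
# The impact released ~10,000 times the energy of every nuclear
# weapon on Earth detonated at once...
print(f"Arsenal in Hiroshima units: {arsenal_mt / hiroshima_mt:,.0f}")
# ...yet the arsenal still equals roughly 670,000 Hiroshima bombs.
```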

And much of humanity would go on living, at least for a while. Beyond the debris ejected into the atmosphere by so many detonations, the aerosolization of small particles would rapidly reduce the amount of heat reaching the Earth from the Sun, producing the nuclear winter physicists warn about, accompanied by massive environmental damage. Such an all-out nuclear exchange would also unleash pulses of electromagnetic energy that would destroy virtually every modern electronic device; national power grids and microchips alike would be useless. Assuming all weapons detonated on the surface, spaced out across the globe, 100 square kilometers of land would be obliterated, and nearly a quarter-million square kilometers of infrastructure would be leveled or incinerated.

The subsequent ionizing radiation in the atmosphere would rain down on the survivors of the original blasts, causing radiation sickness. Those in shored-up bunkers or far-flung remote locations who escaped both the blasts and the radiation poisoning would have to prepare for darkness. The soot produced by the detonations and the resulting fires of man-made structures would further pollute and darken the skies, plunging the world into darkness and halting photosynthesis. The total collapse of the ecosystem would follow soon after.

Any alien beings, if there are any close enough to listen, would hear a deafening silence a few months (or a few years) after our demise, since radio and television signals travel at the speed of light into the cosmos. Perhaps some advanced beings might investigate why, after nearly 100 years of noise (we really can't expect alien life to understand the CBS Evening News or The Lucy Show), everything had fallen silent.

If they were to grow curious and visit, they would be able to detect the radiation from orbit and know that a civilization that arrogantly believed itself to be enlightened, the very pinnacle of creation, had chosen, rather than putting aside its petty grievances, to self-destruct.


Epidemiology, Uncategorized

Being an Undiagnosed Dyslexic in 1960s American Public Education Is Not for the Meek

This is an excerpt from a talk I gave on my experience with dyslexia in the American public education system of the 1960s, for a learning disabilities podcast.

When I recall the most memorable events of my childhood education, one in particular comes immediately to mind. It was the late 1960s in small-town Maine, at our local public school, and I was standing at the chalkboard (we had chalkboards back then, with real chalk). I became acutely aware of the laughter behind me. I stood there at the teacher's demand, trying to figure out how to correctly punctuate and spell a short sentence, though today I cannot recall which one. I remember only the laughter and the look on my teacher's face.

As if a reprieve, the bell rang, and I could hear the rest of the 5th-grade class shuffling out to the waiting buses. I had been standing at that same board for nearly an hour, in fact, since the beginning of English class. I had been called to the board (despite being one of the few students who had not raised his hand) to correct the spelling and punctuation of a sentence the teacher had written there. I could not amend the words or the punctuation because, at that time, I could barely read. I was not sure of the spelling of any word save my name (my mother had taught me to spell it when I was five, and I had practiced it over and over again out loud, much to her dismay, I am sure). Apart from my name, I just could not spell anything. I would often reverse letters and numbers, and my math skills were far below my grade level. Had I not possessed an incredible memory, I would not have gotten as far as I had.

My friend Roy, whose last name began with the letter G, was always seated next to me in the small classes of the time. Roy was brilliant, or at least more intelligent than I was. We walked partway home from school together, as he lived one street over from me. As I mentioned, it was a tiny town. Roy knew of my limits and allowed me to peek at his work, enough to get by with a "D-" or perhaps, with some concentration, a "D."

On that rainy afternoon, as I stood at the chalkboard, Roy could not help me. No one could. Instead, they laughed. The teacher, whose name is not essential to this story, sat quietly at her desk and glared at me. After the rest of the class had exited the room, she explained how disgusted she was with me. She informed me that I was a stupid child, stubborn and willful.

As she explained it, she had hoped the embarrassment I felt at the chalkboard would snap me out of it. It was for my own good. It always seemed to be for my own good. Why was it, then, that it only created in me a hatred of education and educators? How could that torment and terror be for my good? It made no sense to me, but as had been said more than once, I was slow, and perhaps, as my 3rd-grade teacher had told my mother, retarded.

My experience, at least this particular one, took place many years before the term "learning disability" had entered the vernacular, at least in rural Maine. Today, I very much doubt that any teacher would behave in that manner; at the time, though, she was hardly the first or only person to explain to me in plain and simple terms that I was not a normal person. By junior high, it seemed apparent to my teachers that I was not deliberately trying to fail. But fail I did. I failed spelling, English, math; you name it, I flunked it. I remember receiving four F's on my report card my first term of seventh grade.

My father explained that I was expected to bring my grades up, and I was grounded for the ranking period. To be honest, this paled in comparison to the usual abuse we received at my father's hands, yet it would be impossible for me to accurately describe how it felt to be blamed for my poor performance. I tried to explain to anyone I thought might listen that I had tried as hard as I could but just could not understand what they asked of me. There was more than one discussion of holding me back to repeat the grade, and then again the next year. Nothing came of it apart from causing my mother anxiety. I struggled and cheated enough to bring my classes up to a D-. My math teacher felt sorry for me, as was evident in her grading over the next two semesters; she gave me every break she could. My spelling had still not improved, and how I managed to spell any word correctly was a complete enigma.

I continued to squeak by until tenth grade. After attempting to comprehend algebra while having yet to understand basic math, and after being ridiculed by my math teacher and having a book thrown in my face, I walked out of class, out of the school, and never went back. I very much doubt that anyone in the administration was sorry to see me go, as by the middle of ninth grade I had begun to make myself as big a nuisance as possible. I smoked cigarettes (often in school), I drank beer in the parking lot daily, I flooded bathrooms by stuffing toilets with paper towels, and I pulled fire alarms.

I had found a group of students, most of them 2-3 years older, who were both willing and able accomplices in mayhem. I attended school only to meet up with friends. By now, my parents had pretty much given up on me, as had my teachers. I knew education was not for me, and at age 16 I was free to get a job and enter the workforce. Of course, without a high school education, the options were narrow, and work as a laborer for a construction company was about all I could find. A year or so later, I managed, barely, to complete a General Equivalency Diploma, or GED.

The GED was supposed to be equal to a high school diploma, but I can tell you without hesitation that, at least back in the mid-1970s, it was not even close. And it seems this was common knowledge, because even with my GED the job market did not change. I managed to become a dedicated construction worker, though I changed jobs every 2-3 years out of boredom. While I managed to stay employed through most of the 1970s and 1980s, I did not enjoy the work and always hoped that somehow I could do more with my life.

Just three months before my twenty-eighth birthday, a scaffolding I had been working on collapsed, and a fall of some 20 feet landed me in the hospital. I had fractured my neck in two places, cracked several ribs, and lost much of the sensation in my left side. After two surgeries to replace vertebrae, with a plate and screws to hold it all in place, my doctor told me that it would be unrealistic to consider going back to work in the construction field.

I was a slow-witted 30-year-old with no job and no future. Because the injury had occurred at work, my company insurance policy had paid for the surgeries and my salary for the past two years. I was told I qualified for vocational rehabilitation, and I went to visit a professional rehabilitation counselor named Dianne. She was very friendly and seemed genuinely concerned for me. This compassion and concern were largely alien to me, and the effort she spent on my behalf will never be forgotten. She mentioned that I might make a good counselor, and I repeated this to the vocational psychologist she had arranged for me to visit. However, Dr. S. explained to me, somewhat matter-of-factly, that someone with my education and background was being unrealistic in wishing to attend a college of any kind.

I recounted the discussion I had had with Dianne about becoming a counselor. She had thought that, with help, I might be able to earn an associate degree in some field of social services or counseling. He explained that the desire to go to college must be matched by ability, and that I simply lacked that ability. He was the expert. I left the final meeting with Dr. S. angry, mostly at myself, at how unfair it was that I had such big thoughts but no intelligence. True, I had a steady income from disability, as Dr. S. had pointed out, but at just 30 years of age, my future looked bleak, at least in my opinion. Unlike in my childhood, though, I had someone who believed in me.

I had met and married Cathy just a year before my injury, and when I returned home from my last meeting with the vocational psychologist and explained what he had advised, she suggested I ignore the advice of the experts.

I began by applying to an associate degree program in human services at my local college. It was a struggle to be sure, but with a nearly eidetic memory and the aid of tutors, I finished the first semester with an average GPA. During spring break, while I studied for the upcoming term in the school library, I had the chance to chat with some students in the Special Education Program. It was then that I learned about learning disabilities, and one or two of the students suggested that I look into it. I arranged for testing through the school, and within a month or so I heard the word for the first time: dyslexia. They taught me a few tricks, such as speed reading (where you read only every other word) and computer programs, such as dictation software, that assist with writing. Math remained a problem until, in my forties, I stopped looking at it as "math" and saw it instead as a symbolic language. Then it made perfect sense.

After my first year in the associate degree program, I transferred to the four-year bachelor's program at the University of Maine at Farmington, graduating in just two years with a BA in Psychology. I went on to graduate school, earning an MSHS in Community Health Psychology, an M.Div. with a concentration in history and ancient languages, and a graduate degree in Family Therapy.

After working in the fields of community health psychology and behavioral psychology, I returned to graduate school, this time pursuing public health. I earned an MPH before entering a Ph.D. program in Public Health, where I concentrated in Epidemiology (graduating at the top of my class). Since completing my Ph.D., I have completed two postgraduate programs, in Clinical Research Administration and Applied Risk Management, and I am currently enrolled in an MHA program, where I am at the top of my class.

In the time since Dr. S. made his pronouncement, I have graduated at or near the top of my class nine times, and written and published five books on topics as diverse as evolutionary biology, religious history, and natural science. I have authored and coauthored dozens of journal articles on issues ranging from healthcare to neurology, psychology, and learning disabilities, and spoken at national and international conferences and symposiums on a host of topics.

I guess the moral of my story, if there indeed is one, must be that when people allow others, even the experts, to define them, they are already handicapped. From the age of five until the age of thirty, I knew I was different. What that difference was, I had to find out for myself. I can only imagine what my life would be like today had I followed the experts' advice.

Thank you.


Uncategorized

The Necessity of Moral Courage in Healthcare

Still image from police body-worn camera video of Nurse Alex Wubbels during an incident at University of Utah Hospital in Salt Lake City

This essay is a reflection on an incident at the University of Utah Hospital in Salt Lake City. A police detective showed up at the ER demanding a blood sample from a victim of a police-chase car accident, without consent, a warrant, or even a stated reason. The nurse was arrested for refusing to violate the law, the Constitution, and HIPAA.

As leaders in healthcare, we are forced to contend with complex ethical dilemmas. Clinicians, nurses, technicians, and managers practicing in the modern healthcare environment face increasingly difficult problems. In situations where our desire to act rightly may be obstructed by the inconsistent values and beliefs of our coworkers, patients, and the community, we must fall back on our moral courage to demand that right action is taken.

According to Murray (2007), understanding the importance of moral courage helps healthcare leaders demonstrate it when they face ethical challenges, and aids in the creation of an ethical environment. Moreover, a good understanding of our moral courage helps us face the inevitable ethical questions and guides us when doing the right thing is not easy or popular. To do so requires moral bravery and a firm conviction to do what is ethical.

Samuel Taylor Coleridge (1772-1834) wrote that moral courage is that which enables us to remain steadfast and resolute despite disapproval. It is, above all other things, insisting that what is right is done, in spite of the risk of ridicule or loss of position. When moral courage fails, the failure often ends up splashed across television news stories and the front pages of newspapers. It should not come as a surprise that such visible examples of moral cowardice have helped to fuel our cynical and divided society, even while the basic definition of moral courage has remained unchanged, a testament to both its necessity and its rarity.

Beauchamp and Childress (2001) point out that ethics in healthcare have enjoyed a significant level of continuity since the time of Hippocrates, further attesting to the moral necessity to treat the sick and injured, preserving the fundamental principles of autonomy, beneficence, and justice. A recent example of moral courage can be seen in the events surrounding the head nurse at the University of Utah Hospital's burn unit.

According to a September 1, 2017 Washington Post story and accompanying video, when a Salt Lake City police detective demanded to be allowed into the room of a badly injured and unconscious patient in order to take a blood sample, the nurse informed him that unless he had the patient's consent, a warrant for the blood sample, or the patient was under arrest (none of which applied), she could not allow him to collect one. This is not only hospital policy but constitutional law. The detective grew angry and threatened the nurse with arrest. She still refused, citing her responsibility to protect the patient. She was arrested, placed in handcuffs, shoved into a police car, and accused of interfering with an investigation.

Despite being assaulted by police (as is evident in the video), this nurse showed remarkable moral courage in protecting her patient against threats and coercion by outside agents. What is needed, one could say required, of healthcare administrators is a strong sense of right and wrong, one reflected in action as well as word. We should not be satisfied simply to give lip service to the principles of moral courage; we must practice it in our dealings with our peers, our patients, our staff, and perhaps most importantly, ourselves.

Moral courage is not easy, nor should it be; easy things do not require courage. Moral courage, then, is more than a quality: it is a practice.

Apoptosis, Chronic Disease, Epidemiology, Infectious Disease, preventative medicine, Reader questions, Uncategorized

The Best of Times, the Worst of Times


During the late teens and early twenties of the last century, the War to End All Wars (a misnomer if ever there was one) had ended, and the Spanish Flu (which originated in Kansas) had run its devastating course. Perhaps no period in American history saw such abrupt changes to society as the early 20th century.

A few more notable events include President Woodrow Wilson’s “Fourteen Points” speech, his plan to end war forever. The Fourteen Points were enthusiastically adopted by ambassadors worldwide and became the framework for the League of Nations. At its height, the League of Nations had 58 member states. It is notable that the United States never joined, as Americans had suffered civilian casualties in the war, and many citizens wanted to keep America out of European affairs.

On September 16, 1920, suspected Italian anarchists detonated an improvised explosive hidden in a horse-drawn cart on the busiest corner of Wall Street, killing nearly 40 bystanders and injuring over 100. This was the worst terrorist attack in American history until the Oklahoma City Bombing in 1995. This act of terrorism had repercussions that included a campaign to capture and deport suspected foreign radicals. Over the next few years, thousands of accused communists and anarchists across the country were arrested in raids. The man behind the raids, a young lawyer named J. Edgar Hoover, would become head of the Federal Bureau of Investigation.

1920 also marked the start of the influence of the "Lost Generation," American writers living in Europe following World War I. Books published during this period include Main Street, a critical examination of small-town America by Sinclair Lewis; This Side of Paradise, the debut novel of F. Scott Fitzgerald; and Flappers and Philosophers, Fitzgerald's first collection of fiction. Fitzgerald also introduced editors to the work of Ernest Hemingway, who would go on to have some success as well. These significant changes in thought were not limited to printed media.

In November of 1920, the first commercially licensed radio station began broadcasting live results of the presidential election. The real-time transmission of news was unprecedented. The world was captivated by the idea of instant news, and radios, "the talking box," became very popular; in 1922 alone, Americans bought over 100,000 radios. The next year, they purchased over half a million. By the mid-1920s, the number of commercial radio stations had grown to over 700, covering virtually every town in America. The Waltons, a popular dramatic series that ran through the 1970s and early 1980s, often showed the family gathered around the "wireless," as radios were sometimes called, listening to popular shows or the news.

While Americans reveled in the end of the war, delighting in the pages of Fitzgerald, Hemingway, and Langston Hughes, a more ominous player was entering the American consciousness. Although not a writer, it would nevertheless make an enduring mark on history.

The disease started with a high fever and severe headache, often leading to double vision and slowed physical responses. Often it progressed to a general lethargy and a need for constant sleep. In acute cases, this was followed by coma and death. The condition caused swelling in the hypothalamus, the part of the brain that controls sleep, and came to be known as Encephalitis Lethargica, for the lethargy its victims experienced.

Today, a century has passed, and it is still not understood where the condition came from or what caused it, although many epidemiologists and virologists agree that it was most probably viral. The first known case involved an unknown soldier from the Battle of Verdun in 1916. The man was originally hospitalized in Austria and later sent to Paris for examination. The doctors examining him were puzzled: he slept constantly, and even when awake did not seem fully conscious. Soon another 60 soldiers joined him. Despite examination and attempts at treatment, over half died of respiratory failure. Postmortems found the swelling in the hypothalamus.

Almost as quickly as it struck, Encephalitis Lethargica faded from history, even though some victims lived on, often in perpetual sleep. It was not until the 1960s, when neurologist and author Dr. Oliver Sacks discovered a group of patients living in a hospital in the Bronx, that the condition would once again become a topic of discussion. Working with these patients, Dr. Sacks found that most would respond to some form of stimulus. For example, several responded to hearing music, while others would catch a ball if it was tossed to them; they did not, however, throw the ball back or initiate any actions themselves. Dr. Sacks recounted the story of a patient in another part of the hospital who brought a poodle in. When the dog jumped up on a woman who had always been rigid and unmoving, she suddenly started talking about how she loved animals and laughed as she stroked the dog. Once the dog was removed, she returned to her rigid, frozen state.

Dr. Sacks originally believed the patients were suffering from some form of Parkinson's disease, a chronic and progressive movement disorder affecting nearly one million people in the United States. The exact cause of Parkinson's disease is unknown, and there is presently no cure; however, there are some treatment options, including medication and surgery, that help manage the symptoms: tremors in the hands, arms, and face; slowness of movement; rigidity of limbs; and impaired balance and coordination. What is known about Parkinson's disease is that it involves the malfunction and death of neurons in an area of the brain known as the substantia nigra. One function of the neurons in this area is the production and release of dopamine, a chemical messenger used to communicate with the part of the brain responsible for movement and coordination. As the disease progresses, dopamine production decreases, leaving the person unable to control his or her movements.

Dr. Sacks began treating the Encephalitis Lethargica patients with the then-experimental drug levodopa, or L-dopa, a precursor of dopamine and the other neurotransmitters known collectively as catecholamines. L-dopa increases the concentration of dopamine and was found in the late 1960s to be an effective treatment for some symptoms of Parkinson's disease and other dopamine-responsive conditions. Using L-dopa, Dr. Sacks was able to temporarily revive some of his patients. Most became ambulatory and talkative; most also suffered Parkinson's-like side effects, but this was preferable to the frozen state they had been trapped in for decades. Some asked to be taken off the medication because they preferred a trance existence to having woken up decades later. But even those who wished to continue the treatment became tolerant of the drug and returned to their frozen condition.

Dr. Sacks's work with the Encephalitis Lethargica patients is chronicled in the film Awakenings, with Robin Williams.

The cause of Encephalitis Lethargica remains largely unknown, despite its having afflicted an estimated 5 million people worldwide, roughly a third of whom died. It has not been diagnosed since the end of the epidemic in 1927. This does not, however, mean it has disappeared from human history. Few things do.

For an interesting look into the amazing neurological writing and work of Dr. Oliver Sacks, see his books The Man Who Mistook His Wife for a Hat, Awakenings, and Seeing Voices.


antibiotics, Epidemiology, Infectious Disease, preventative medicine, Uncategorized, vaccines

The (Not So) Spanish Flu, and How It Became the Deadliest Epidemic in Modern Times


I had a little bird,

its name was Enza.

I opened the window,

and in flew Enza.

     ~Children’s rhyme of 1917

In early March 1918, a mess cook at an Army base in Kansas reported to the infirmary complaining of sore throat, headache, and fever. After checking him over, the doctor could find no cause for alarm and returned him to duty. By lunchtime the infirmary was filled with soldiers complaining of similar symptoms, and by the end of the month the number of sick soldiers had grown beyond the capacity of the base hospital, and a makeshift infirmary was created in an airplane hangar. By the end of that first month, 38 men had died. Seasonal influenza was routinely deadliest for the very young and the very old; this influenza was killing healthy young men in their twenties. The soldiers were suffering from what would become known as the Spanish Flu. The worst place to have an infectious disease outbreak is anywhere tens of thousands of people are crowded together, as they were in the training camps where soldiers prepared to go to war. The disease would spread to other training camps and eventually overseas.

Laura Spinney, in her book Pale Rider: The Spanish Flu of 1918 and How It Changed the World, called the 1918 influenza epidemic "the greatest wave of death since the Black Death." A bit dramatic perhaps, but nevertheless accurate. What made this epidemic even deadlier, bringing groups of infected and uninfected people together and greatly helping the disease spread, was the denial by those in power that anything was wrong.

In 1917, California Senator Hiram Johnson, an isolationist Progressive-Party-member-turned-Republican, observed that the first casualty when war comes is truth. At the time, Congress passed a measure, signed into law by President Woodrow Wilson, that made it punishable by up to 20 years in prison to "utter, print, write or publish any disloyal, profane, scurrilous, or abusive language about the government of the United States." This was a clear violation of the First Amendment. Yet because Spain was neutral, its press was not bound by such wartime censorship, so the pandemic it reported on freely became known as the Spanish Flu.

In Minneapolis, St. Paul, Chicago, Philadelphia, New York, San Francisco, and several other cities, parades and events considered "important to the war effort" were not cancelled, even though they brought great numbers of people together. When public health experts warned that such events (parades, rallies, and the like) would help spread the disease and demanded they be cancelled, they were reminded of the censorship laws and of the notion that "fear kills more than the disease."

The flu pandemic of 1918-1919 infected an estimated 500 million people worldwide and claimed the lives of between 50 and 100 million. More than 25 percent of the U.S. population became sick, and 675,000 Americans died during the pandemic, more than 10 times the number killed in Vietnam. The 1918 flu was first observed in Europe, the U.S., and parts of Asia before swiftly spreading around the world. By the time the disease had burned itself out, the total number of dead outnumbered those killed in both World Wars.
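
Those numbers imply a case-fatality rate far beyond that of ordinary influenza. A quick sketch of the arithmetic; the 1918 world-population figure of roughly 1.8 billion is my assumption, not a number from the text:

```python
# Implied rates from the 1918 pandemic figures quoted above.
infected = 500e6                       # estimated infections worldwide
deaths_low, deaths_high = 50e6, 100e6  # estimated deaths worldwide
world_pop_1918 = 1.8e9                 # assumed; not from the text

cfr_low = deaths_low / infected * 100
cfr_high = deaths_high / infected * 100
print(f"Case-fatality rate: {cfr_low:.0f}%-{cfr_high:.0f}%")
# Prints 10%-20%, versus roughly 0.1% for ordinary seasonal flu.
print(f"Share of humanity infected: {infected / world_pop_1918:.0%}")
# Prints about 28% of everyone alive at the time.
```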

Why so deadly? Although many people had been exposed to the H3N8 virus that had circulated in the human population for about a decade, the human virus picked up genetic material from a bird flu virus just before 1918, creating a novel virus. The new virus had surface proteins so different that the antibodies people's immune systems had already made were ineffective against it. The high fatality rate was brought about by a combination of factors: the refusal to warn the public about exposure, the refusal to allow newspapers to print stories about the epidemic, the novel makeup of the virus, the opportunistic bacterial infections that thrived in weakened immune systems, and a process known as hypercytokinemia, or cytokine storm. Cytokines are molecules that aid cell-to-cell communication in immune responses and trigger inflammation. It was the overreaction of the immune system (fever, inflammation, and the like) that made this influenza so lethal.

While most influenza viruses are most dangerous to children, the elderly, and those with compromised immune systems, the 1918 strain was deadliest for people in their 20s and 30s in good health, with robust immune systems. The Spanish influenza strain provoked a manic immune response, a potentially fatal reaction with highly elevated levels of various cytokines. In recreating the pandemic of 1918, medical researchers injected reconstructed 1918 influenza virus into mice and monkeys to try to understand why it was so lethal. The animals' immune systems responded so violently that their lungs filled with blood and fluid, essentially drowning them. Scientists deduced that what made the Spanish flu so deadly was that it turned the body's own immune system against itself, flooding the lungs with fluid and destroying the lining of the respiratory system, which made it much easier for bacteria to infect the lungs. In this case, the healthier you were, the more violent the immune response to the virus.

In the end, the 1918 influenza pandemic was the product of a combination of factors: a novel virus, an official disregard for honesty meant to avoid damaging morale and the war effort, and a cowardly press that refused to challenge the veil of secrecy that was the government's propaganda machine. Let us hope the government and those at the highest levels of power will treat the next influenza epidemic more honestly and openly.

Could it ever happen again? Could a novel influenza virus again become epidemic, then pandemic, killing millions? According to an article in The Lancet, if a flu pandemic like that of 1918-1919 were to break out today, it would likely kill 60-80 million people (more than the total number who die in a single year from all other causes combined). The estimate stems from a new tally of flu deaths from 1918 to 1920 in different countries, which varied widely. To gauge the potential threat from the H5N1 avian influenza currently circulating among birds in Asia and Africa, the researchers used the toll of the most severe previous pandemic, that of 1918, as a benchmark.

Worried? You should be. If anything, the new estimates may be optimistic, according to epidemiologist Neil Ferguson of London's Imperial College, in an editorial published in The Lancet. High incomes may not protect rich countries as much as some writers have suggested; in the 1918 pandemic, being young, fit, and healthy was no protection. Public health researchers and epidemiologists warn that it is not a matter of if the next influenza pandemic strikes, but when.

Meanwhile, national governments slash public health funding and the funds needed for health research.


antibiotics, Epidemiology, Genetics, Infectious Disease, preventative medicine, Uncategorized

The Eradication of Smallpox and the Helper T-Cells


In May of 1980, the World Health Organization (WHO) pronounced that, after two centuries of effort, the fight against smallpox had ended: there were no known cases of the disease anywhere on the planet. Other infectious diseases have been driven to the brink of extinction only to return, but smallpox remains (thus far, anyway) the only human communicable disease to be eradicated, and few have been so deadly.

Most people are familiar with smallpox, if at all, from history classes or films about the conquest of the Americas. But smallpox was not an invention of the Spanish conquistadors; it was something to which they had naturally grown resistant.

Medical anthropologists believe that the disease began to infect humans around the time of the first agricultural settlements of the Old World. Despite such a long history, little evidence of it exists before 1570 BCE, when it appeared in Egypt's New Kingdom. Many historians believe that smallpox caused both the Plague of Athens in 430 BCE and the Antonine Plague of 165-180 CE, which killed upwards of 7 million people, including the Emperor Marcus Aurelius, and hastened the fall of the Roman Empire.

Smallpox made its way to France sometime in the early 700s. A clergyman writing at the time described the unmistakable symptoms as "a violent fever followed by the appearance of pustules." He observed that if the person lived long enough for the pustules to develop scabs, the person survived. By the time the disease reached the rest of Europe, it had already spread across Africa and Asia. Smallpox was not a disease of the poor, or the aged, or the young; it was an equal-opportunity killer. In the Old World, smallpox killed approximately 30% of those who contracted it, while many more were left disfigured or blind. As devastating as smallpox was in the Old World, it was far more destructive in the New World. One significant reason for the difference lay in the immune systems of the two populations.

Helper T-cells are among the most important cells of our adaptive immune system and play a part in almost all of our adaptive immune responses. Helper T-cells activate cytotoxic T-cells, which target and kill invading organisms; they also activate B cells, which secrete antibodies, and macrophages, which ingest and destroy microbes. A relatively recent discovery is that the American natives possessed a different variant of the helper T-cell than Europeans. Whereas Europeans carried an immune response that had developed over thousands of years of fighting off bacteria and viruses, the peoples of the Americas had developed an immune system attuned to the daily threat of parasitic infection. While their T-cells were better at recognizing and combating invading parasites, they did not recognize many of the organisms the Europeans had adapted to and brought with them.

The population of the Americas in the pre-Columbian era is estimated to have been between 25 and 60 million people. Of those populations, approximately 95% died as a result of European diseases. At the same time, the Europeans did not face the same risks as the American natives and were spared the bulk of the infectious diseases of the Americas, with one or two significant exceptions. The reason the Europeans were resistant to so many possible infections stems from the fact that they had been caretakers of domesticated animals for several thousand years and had adapted to many of the common diseases found in the animals they used for food; adapted, but certainly not immune.

The American natives did not possess the same domesticated animals; cattle, pigs, and horses were absent from the Americas. While the Spanish and Portuguese explorers met with some resistance from the natives, the Incas and the Triple Alliance (the Aztecs) had largely succumbed to smallpox by the time the conquerors arrived. Historians now calculate that the indigenous populations of both American continents were reduced by about 90% by the introduction of smallpox. The great armies that the conquistadors faced were already badly weakened by disease. This lesson was not lost on military leaders who would follow: Lord Jeffrey Amherst, the commander-in-chief of British forces in North America during the French and Indian War, advocated handing out smallpox-infected blankets to his native foes.

This helps explain how a group of fewer than 200 men, over half of whom were on foot, managed to defeat an empire that was at the time among the largest in the world, with a reported standing army of over 70,000.

Today, thanks to the efforts of public health practitioners, medical researchers, and physicians, smallpox is, so far as we know, relegated to a bygone era. In fact, if you were born after 1972, you would not have received a smallpox vaccine. Still, what would happen if smallpox were re-introduced into American society today? The Variola major virus that causes smallpox killed a third of those it infected, and it was so virulent that it claimed the lives of over 300 million people in the 20th century alone. Although estimates vary, the total number of people killed by smallpox may exceed 2 billion. Could an infectious disease so deadly ever make a comeback? If it did, there would likely be two possible sources: intentional release or unintentional release.

An intentional release would be the release of the virus into a population by a terrorist or group. An unintentional release could come through the thawing of frozen virus. One Siberian town lost 40% of its population to smallpox in the 1890s; the victims were buried in the upper layers of permafrost along a river whose banks have begun to erode under floodwaters from a warming climate. Russian scientists are also concerned that the graves of anthrax-infected cattle can be found across Russia, including in areas where the ground has thawed two feet deeper than normal. Recently, the thawing of one anthrax-killed animal claimed over 100 reindeer and hospitalized 13 people living in the area. Scientists speculated that the reindeer, weakened by the high temperatures, ate the thawed remains of an infected carcass that had been frozen for many years, and from there the infection passed to the herders.

Regardless of the source, the Centers for Disease Control and Prevention (CDC)'s Strategic National Stockpile is the nation's largest supply of life-saving pharmaceuticals for use in a public health emergency severe enough to cause local supplies to run out. The stockpile ensures the right medicines and supplies are available when and where they are needed to save lives, and this includes, you may be relieved to know, a reported 400 million doses of smallpox vaccine.