iraq-mercury-poisoning-august-1971

Iraq Mercury Poisoning – August 1971

A shipment of treated seed grain sent from North America to Iraq was wrongly consumed as food because the warnings were printed in a language the recipients could not read, leading to widespread mercury poisoning.

A large shipment of seed grain was sent from North America to Iraq in August of 1971. The vice president of Iraq at that time was Saddam Hussein, who had come to that position when the Baath Party gained power in 1968. Ten years later he became president of the country and remained in that position until overthrown by the U.S. invasion of 2003. As vice president in 1971, he was in charge of military as well as all other government operations.

To preserve the grain shipment from damage by insects or rot, it had been treated with a mercury fungicide, harmless if the grain is planted as seed but poisonous if eaten. The grain was sprayed with a red dye to indicate the presence of poison, and the Iraqi government warned people not to eat any of it. For various reasons many Iraqis ignored the warnings, with the result that hundreds died and many thousands more were seriously injured.

Iraq, one of the first places in the world to grow wheat, had long experienced extended periods of low rainfall that ruined its wheat harvests. In 1971 the country was still mainly an agricultural economy; oil had not yet reached the proportions that give it such economic power today.

The year 1971 was one of those very dry years and the country decided to change to a new strain of wheat that would be more resistant to climatic shifts. Mexico, also a country of low rainfall, had developed wheat of this kind so Iraq ordered a large quantity of it in the late summer of 1971. A huge shipment of more than 70,000 tons was delivered to the port of Al Basra in southern Iraq and from there it was distributed throughout the country.

The grain was treated with an organic mercury compound, a fungicide to protect against rot or attacks from insects. This treatment is harmless if the grain is used for seed but poisonous if eaten. The sacks of grain were marked against consumption but, unfortunately, the lettering was in Spanish. The grain was sprayed with a red dye as an additional warning but this, like the Spanish words, carried no significance for the Iraqi workers.

All who handled the grain shipments were warned of the dangers of consumption and this warning was also relayed to all districts of the country receiving deliveries. The quantities allocated to different parts of the country were in keeping with the amounts used for seed in previous years. Nevertheless, it soon became quite clear that substantial quantities of the grain had been taken into homes and were being baked into bread or fed to animals.

Symptoms of trouble were not evident for some weeks but when they did appear they were catastrophic, the worst ever recorded for this type of poisoning in terms of numbers killed and injured. The epidemic was so great that the government appealed for medical help from European countries. At that early stage, no one knew exactly what had happened. When medical teams arrived they quickly diagnosed the epidemic as due to mercury poisoning.

The situation was difficult to monitor as grain had been shipped to several places inland and there was concern that people might panic. Radio blackouts were enforced to keep the information from spreading in order to allow the government time to take control of the situation.

Mercury poisoning is not a new problem. It was well known in England in the nineteenth century among workers in the hat-making industry. A mercury-based product was in use at that time to stiffen the brims of beaver-skin hats. As they worked from day to day, hatters inhaled the mercury fumes and in time suffered brain damage, in some cases to such an extent that they became insane. Lewis Carroll, the author of Alice in Wonderland, drew on this reputation when he named one of the strange characters in that book the "Mad Hatter."

In Canada, in the Province of Ontario, between 1962 and 1970, a chemical company dumped tons of mercury into the English–Wabigoon River, a river that was a source of food for the local Indian population of Grassy Narrows. The mercury found its way into the food chain and the fish became toxic. The people of Grassy Narrows regularly ate fish from the river and soon began to show the symptoms of mercury poisoning. The Canadian government took immediate action to help the people of Grassy Narrows and to stop the pollution but the damage had already been done.

In Japan, thousands were affected when a major petrochemical company dumped tons of mercury compounds into the Minamata River over a long period of time beginning in the early 1930s. Nothing was done about this disaster for twenty years despite the evidence of serious health problems in the neighboring town, where fish from the Minamata River was a big part of the local diet. Unusually large numbers of mentally retarded children, along with equally disproportionate numbers of young people with other illnesses, were noted but nothing was done about it. Autopsies on those who died revealed significant loss of brain cells.

The whole tragedy became known as the Minamata Disease. Some indication of the level of mercury contamination can be gained by comparing the average amounts found in ocean fish with those found in fish caught in the Minamata River: three parts per million in the ocean and fifty parts per million in Minamata. A level of nine parts per million is usually considered very dangerous to health.
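Put as simple ratios, derived directly from the figures just quoted:

$$\frac{50\ \text{ppm (Minamata)}}{3\ \text{ppm (ocean)}} \approx 17, \qquad \frac{50\ \text{ppm (Minamata)}}{9\ \text{ppm (danger level)}} \approx 5.5.$$

Fish from Minamata thus carried roughly seventeen times the mercury of ocean fish and more than five times the level usually considered very dangerous.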

Some people in Iraq knew that the grain was poisonous but thought that if they washed away the red dye they would get rid of the poison. They did this and made bread from the grain. Some seed was fed to animals, and when these animals were killed for food their meat contained significant quantities of mercury, so the amount of mercury people had already absorbed from the bread was further increased.

The compounding of the problem through the food chain did not end with the animals. When the Iraqi authorities were finally able to convince their people that the whole shipment was poisonous, grain that they did not intend to use for seed was thrown into the Tigris River. Once again, as happened with the animals, fish ate what was thrown into the river and when fish were caught from that same river, additional quantities of mercury were ingested.

The consumption of bread made from the treated grain was the main source of poisoning. All the recorded cases occurred in rural areas where bread was made at home. In the bigger cities, where bread was prepared commercially from government-inspected flour, there were no cases of contamination.

Within three months of the initial outbreak the number of cases peaked, with hundreds arriving in hospitals daily. Males and females of all ages were affected, with the largest number coming from those under the age of nine; there were equal numbers of males and females. Death rates were highest among the elderly and the very young. To minimize the destructive effects of the poison, a type of resin was given to patients orally to hasten the elimination of mercury; the longer the poison stayed in the body, the worse the results became.

First signs of poisoning were numbness in the fingers, toes, and other extremities of the body. This was followed by unsteadiness of gait and, where the quantities ingested were substantial, loss of coordination to the point that the person was unable to walk. Eyes were frequently affected, with difficulties ranging from blurred vision to blindness. Slurred speech and loss of hearing were present in many cases. In all of these instances the symptoms pointed to damage in the brain. Fatalities were the result of failure of the central nervous system. There was little evidence of damage to the cardiovascular or digestive systems. Those who were severely poisoned died in spite of the medical treatment they received.

There were numerous recoveries among those who had received lesser doses of mercury. Among them were people who had been bedridden because they could not walk; they eventually learned to walk again but never regained full control of their body movements. Partial sight was recovered in a few cases where there had been blindness initially. The most persistent symptoms were in the extremities: even when improvements took place in other damaged organs, the sensation of pins and needles along peripheral nerves persisted. Hair samples provided dramatic evidence of the speed with which mercury was absorbed into the body after eating contaminated bread. Absorption was very rapid, and the mercury began to leave the body only slowly once consumption of the bread stopped.

The mercury from treated grain can enter the human body orally, by inhalation, or simply by skin contact. Oral contamination, the main mode of exposure, came from contaminated bread, from meat and other animal products obtained from livestock that had consumed treated grain, from foodstuffs stored in sacks that had previously held the treated grain, and from game birds and fish that had eaten the treated wheat. Unborn children of mothers who ate the contaminated bread received greater concentrations of mercury than their mothers, indicating that the unborn were the most severely affected group of all. In one newborn the mercury concentration was three times that of the mother.

The wheat from Mexico was to be used as seed for the following year, and the amounts allocated to the various regions of the country matched the quantities seeded in the previous year. Analyses of the amounts diverted in the rural areas to preparing bread showed that only a tiny portion of the total was used up in this way. Each of the 6,000 people admitted to hospitals had consumed, on average, no more than about five pounds of grain. That total amounts to only one pound of grain from every six hundred pounds stored for seed, a negligible amount but a dramatic demonstration of the destructive power of mercury when it is wrongly used.

In summary, wheat was purchased by Iraq for use as seed but too many things went wrong. The lessons from Japan’s Minamata tragedy were well known and this should have alerted authorities to the dangers of mercury poisoning. The final outcomes of that disaster were published only nine years earlier and they showed that methylmercury was the source of the poison, the same substance that was used to protect the Iraqi shipment. Careful supervision of deliveries and more intensive efforts to alert people to the dangers of consumption could have prevented the disaster. The official figures for casualties listed 6,500 as being admitted to hospitals with severe problems. In total five hundred deaths were recorded but unofficial sources put the numbers very much higher.


thalidomide-drug-tragedy-october-1-1957

Thalidomide Drug Tragedy – October 1, 1957

Thalidomide was launched by the German company Chemie Grünenthal as a drug to help pregnant women. Before it was withdrawn three years later, thousands of babies had been born with severe physical deformities.

Thalidomide was launched in Germany on October 1, 1957. It was an effective hypnotic drug capable of inducing drowsiness and sleep, so it was introduced as a sleeping pill and was also prescribed to calm the symptoms of morning sickness and nausea in pregnancy. It was introduced into general worldwide use in 1958 but, before it was withdrawn in 1960, thousands of babies had been born with severe deformities.

Chemie Grünenthal launched thalidomide as a drug that was nontoxic, with no side effects, and completely safe for use by pregnant women as a sedative. However, soon after its launch, reports emerged that thalidomide had caused peripheral neuritis. Peripheral neuritis does not itself point to reproductive damage, but many scientists would take such an assault on the nervous system as grounds for concern.

One scientist did: Dr. Frances Oldham Kelsey of the U.S. Food and Drug Administration (FDA). As a result, the United States had only seventeen deformities, compared with an average of over two hundred for the forty-six countries affected. By the time the FDA was persuaded to give conditional permission for thalidomide, two and a half years later, warning flags were going up all around.

Dr. Kelsey was born and educated in Canada; she took bachelor's and master's degrees in pharmacology before moving to the United States, where she married another pharmacologist. The couple then went to work in Washington, D.C. In 1960, Kelsey was working at the FDA when she was given the job of assessing the new drug, thalidomide. She had read that thalidomide caused some peripheral neuritis in patients and she was also concerned about the lack of evidence about its safety.

There was strong pressure on the FDA to approve the drug because of its widespread use throughout the world. However, Kelsey refused to approve it without more information. Even when additional data arrived she refused again. By that time the news of the drug’s terrible consequences was out and Kelsey was honored as the person who saved the United States from the disasters that overran many other countries. She was still doing research for the FDA in the year 2001 at the age of eighty-seven.

Grünenthal's statements about thalidomide hid problems that were far worse than the danger of peripheral neuritis. Not one of the claims the company made in October 1957 was true for pregnant women. Animal studies did show that the drug was nontoxic in some instances but, unbelievably for a drug that was being recommended for use by pregnant women, there were no tests on pregnant animals.

Such tests as were done on animals are, in any case, inconclusive. Doctors and scientists have long warned that similarity to the human condition in animal experimentation may be a coincidence and can never be applied to humans until the experiment is scientifically carried out on a human population.

Warning flags may have been going up in 1960 but government agencies failed again and again to take action. In Japan, mothers were allowed to take the drug for a whole year after it had been withdrawn in other nations. Several countries failed to take action for periods of time ranging from three to ten months. Britain did not withdraw the drug until November of 1961.

To some extent these failures were due to a tragic assumption that the drug had been thoroughly tested before reaching the market and that the tragedies were due to some mistake that no one could have foreseen. Assumptions of this kind led the British distributor, The Distillers Company, to reject claims for compensation for years. Ultimately, The Distillers Company paid a final settlement of $15,000.00 for each deformed child. The knowledge and procedures that could have prevented every one of these tragedies were well known.

One example of how Grünenthal distorted information to promote the sale of thalidomide illustrates these points. Dr. Blasiu, a doctor who was paid by the company to test the use of thalidomide with three hundred adults in a nursing home, reported back that no side effects were observed. Grünenthal then used the content of this report in a letter to 40,000 doctors, assuring them of the safety of the drug for pregnant women.

There was absolutely nothing in Dr. Blasiu's report on the subject of the drug's use with pregnant women. When he was contacted later, Blasiu was shocked to learn how his report had been used. He added that it was his basic rule never to give sleeping pills or tranquilizers to pregnant women. In addition to this example of deliberate distortion, there were numerous omissions: tests that could and should have been carried out to ensure the safety of the drug were never done.

Beginning approximately a year after it became widely used in 1958, baby after baby was born with the same type of problem—limbs or fingers missing. When one child enters the world with a deformity as serious as this, the news spreads quickly and many questions are asked. When a number appear, all very similar, the cry reverberates around the world. It took only a few weeks of simple research to discover the common element in the parents of the babies.

Each had taken thalidomide, or Distaval as it was called in Britain, in the early phase of pregnancy to induce sleep, at the very stage when limb buds begin to form in the fetus. Before it was banned in 1960, more than 12,000 babies had been born with severe deformities. They would be known thereafter as the thalidomide children.

Reactions from parents ranged from desperation to passivity. Some who were still pregnant and knew the likely outcome thought of abortion; they were convinced that their child did not have a worthwhile future. A few parents with new babies contemplated mercy killing their children for the same reason. Better judgment prevailed with all of these as a variety of community and national organizations stepped in to provide assistance.

The problems they faced were immense. Missing limbs and fingers were the most obvious disability. Some children were born with one arm complete while the other was just a small stump. Other children had both arms missing and were left with only a small protrusion at each shoulder. These more obvious visible limitations were only the beginning. Some babies were deaf or blind and some had cleft palates. Others had no hips or ears and there were also internal troubles such as poorly-developed lungs.

One-third of all the affected babies were born dead. Many died later while in their teens. Operation after operation was carried out on quite a number to rectify damage. Where children had flippers rather than arms, parents often chose to have them amputated and replaced with artificial arms. As these children grew and mixed with other children they had to cope with the pain of being stared at or, worse still, being avoided.

One successful agency dedicated to coping with disabled children was Chailey Heritage Craft School and Hospital. It was located in the countryside south of London and consisted of several buildings and a hospital all working as a single unit to prepare the children for life. Chailey took the babies soon after birth and cared for them until they were sixteen unless parents preferred to take them home before then.

The fact that thalidomide children formed only a minority of the two hundred at Chailey allowed them to get acquainted with a larger world and minimized to some extent the stigma that had become associated with the word thalidomide. Chailey had wards, classrooms, living quarters, a workshop with all kinds of mechanical aids, and specialists in speech as well as physical and occupational therapy. A baby with no limbs was first placed in a cylindrical piece of molded plastic with a seat at the base.

The trunk of the body was thus held upright and the baby was placed alongside a low table holding all kinds of things such as toys, blocks, and clay to explore. The next stage of experience was a low platform on wheels, and the child was soon able to propel itself along just by movements of the trunk. This provided access to the range of objects and children around the room. For a majority of them, intellectual abilities had not been impaired.

Around the age of two, when children want to walk, Chailey insisted that they try even though the staff knew there would be falls and accidents and tears. The alternative, the staff felt, was a vegetable-like existence, and that was not acceptable. A rubber-like hat was put on the youngster to cushion falls, and rigid struts on rockers were fitted to whatever form of limb the child had on its feet; with these the children were able to maneuver by a series of jerks.

For those whose parents were able or disposed to provide them, this was the beginning of using artificial legs. Some who did get artificial legs employed an intermediate step in their development. They had an object on wheels ahead of them that they pushed rather like the practice of older people using walkers.

As the children reached an age of responsibility they were introduced to an entire range of objects and tools, all specially designed for their unique needs. Simple devices like lengths of wire to get hold of objects and pieces of plastic shaped to make it easier to feed oneself were introduced to begin the long road toward independence. There were devices to help them get in and out of a bath, and toilets carried a spray bidet.

The classes at school were small, never more than twelve, and again there were special aids that would never be available in the average school. Teaching machines, tape recorders, and typewriters were on hand, all operated remotely using a variety of hand and mouth devices. It is easy to imagine today, with the hindsight of thirty years, the huge difference it would have made if computers had been available for them in the 1960s. Even with all their limitations they were able to write letters to their parents, either by hand, slowly and tediously, or with a typewriter.

By the year 2001, most of the more than 12,000 thalidomide children worldwide were still alive. Those who had entered the workplace in the 1980s and 1990s found that the stigma remained in new forms. There were jobs they could do and there were those that were beyond them. Often an employer wants people who can do a variety of tasks. The tough road they had already traveled, learning to eat and do all the other things of daily life, continued to challenge them.

Some married and had normal children. Many found satisfying careers and rewarding relationships in niche occupations, writing their stories or training dogs. One worked with an automobile company as its liaison officer for persons with disabilities. Another who had neither legs nor arms produced and marketed a talking machine for the blind.

In many countries the victims were able to sue their governments or the distributing agencies for financial compensation and they won in most cases. Evidence that the United States delayed approving the drug for good reason was a legal help in fighting these cases. In the early 1990s, one frightening claim emerged and received wide publicity before it was quashed. It purported to show that scientists in research labs found thalidomide-like deformities in rats in a second generation.

That meant, if it were true, that the devastating drug not only caused problems for the first pregnancies in which it was used but also for all future pregnancies. In other words, the drug had changed the genes and forever afterward abnormalities would keep recurring. The research on which this finding was based was subsequently totally discounted. Fortunately, not all the news about thalidomide is bad. A new use received a lot of attention in the last few years of the twentieth century.

Multiple myeloma is a cancer that has resisted the best efforts of scientists in their search for a cure. The chances of a patient living on with this disease are not good. Chemotherapy is normally used as the best method available but even its success rate is very poor. After reports that thalidomide might be a cure for this cancer, extensive studies were conducted at the University of Arkansas for Medical Sciences in Little Rock.

Support for pursuing this research came from the experiences of New England doctors who used thalidomide in gradually increased doses with patients who had myeloma cancer. Under this regime, one-third of eighty-four patients showed substantial improvement within a year. In 1999 and 2000, the researchers at Little Rock saw some success. Over a period of a year the cancer in two patients went into complete remission while six others showed considerable decline.

The keys to success seemed to lie in the dosage rates and the length of time the patient had had the cancer. One specialist calls it the first new class of drugs in thirty-five years for this particular cancer. Some tests are being conducted using a combination of chemotherapy and thalidomide. One reason for optimism lies in the fundamental difference between the two treatments. The former destroys cancerous cells in bone marrow but it kills healthy cells as well.

The latter inhibits the formation of the new blood vessels that are essential to tumor growth. Given the history of thalidomide and the behavior of the company that launched the drug in 1957, specialists are slow to make a final recommendation on this new use of the drug. Early in 2001, the Food and Drug Administration had to write a letter of reprimand to Celgene, the company now marketing thalidomide, for misleading advertising regarding its possible value as a cure for cancer. In the light of the drug's earlier history the company certainly needed to be reminded of the dangers implicit in errors of this kind.


world-wide-flu-pandemic-1918

World-wide Flu Pandemic – 1918–1919

The disease spread from places where there were high concentrations of soldiers. It spread all over the world and took the lives of thirty million people.

In the last year of World War I, the world was hit with a flu pandemic caused by the H1N1 virus. It would prove to be the deadliest disease outbreak in human history, far worse in the total number of fatalities than the Black Death scourges of the Middle Ages or the total number killed in World War I, counting both military and civilian casualties. In less than six months of 1918, thirty million people worldwide died from this flu, and most of those who died were young.

Many insisted that the total number of deaths was far greater than thirty million. Even worse than the number of deaths was the speed with which it spread and the terrible nature of the illness that people experienced. Common symptoms were severe coughing accompanied by bleeding from various sites and skin turning blue because the lungs could not take in enough oxygen. For large numbers of sufferers, the painful end came within a few hours as their lungs filled with fluid and they suffocated.

Wherever people are crowded together, especially in the terrible conditions of World War I, in frontline trenches or on troop ships from the United States, conditions are ideal for the spread of a flu virus. Because the outbreak occurred at a critical point of the war, when it seemed for a time that German forces might succeed, strict secrecy about the spread of the disease was maintained by American and allied governments. The German government was equally secretive and the post-war publication of documents from that time revealed that the German army was being decimated by flu outbreaks. It was because of this secrecy that the name “Spanish Flu” came to be its popular name.

Spain was not involved in the war, so its newspapers reported freely on the country's experience of the flu and, because Spain was the first to report on it, everyone assumed that the disease had originated there. Reports of the spread of the disease came in from all over the world. It traveled as quickly as a bushfire to the countries of Europe, to the United States, and to Asia. It is likely that the total number of deaths in Asia, given the high concentrations of people in many regions, was as great as the total number of deaths in Europe and North America combined.

In the United States there were half a million deaths from the flu pandemic, all within a period of six months. Most of these were recorded as deaths from pneumonia, the outcome that in most cases led to death. Another half million soldiers, serving both in Europe and in the United States, were hospitalized, and a large number of these died, as many as had died in combat during the whole period of the war. In Britain 200,000 died from the same pandemic flu and, as happened everywhere else, they all died within the same months of 1918 and the beginning of 1919. The course of the disease, just as had been the experience with diseases in earlier centuries, was always the same: because the strain of the destructive virus was new, and people therefore had no existing immunity with which to counter it, death rates were high.

As immunity was built up among all the sufferers, the death rate decreased and finally reached the low level that is recognized as normal for any flu. In places around the world where population density was very low, and where because of their isolation there had never been exposures to the kind of flu we experience every year, the impacts were devastating. Whole Inuit villages in Northern Canada were wiped out, and on the Island of Samoa in the South Pacific a quarter of the population died.

By October, across the United States, as the numbers of dead mounted, a near panic atmosphere appeared. Theaters, movies, bars, and a host of other places were closed down. Even schools, churches, and public campaigns to raise money for the war were shut down. Laws were introduced that required everyone to wear a face mask in public. Stiff fines were imposed on those who sneezed or coughed in public without covering their mouths with handkerchiefs. With our current knowledge of viruses we can conclude that these precautions would not have stopped the spread of the 1918 flu but it was all that seemed possible to people of that time.

The Catholic Charities of Philadelphia hired several horse-drawn wagons with which they searched alleys and tenements for the bodies of abandoned victims. The city morgue had no space for the bodies so they had to be placed in mass graves, and without caskets too since Philadelphia, like many cities, had run out of them. Police departments suddenly found their work greatly eased. Crime had dropped to half what it was a few months earlier. The problem was quite different for the generals in charge of the army. This was a time of conscription, not the volunteer army of today, and those in charge of the draft and preliminary training before going overseas had to decide what to do.

Provost Marshal General Crowder, in charge of the draft and training, was aware that crowded conditions in training camps were a recipe for the rapid spread of the flu. He knew what had happened earlier in 1918 at Camp Funston in Kansas, with the members of the Tenth Division, when soldiers in large numbers fell sick and many of them died. The officer in charge on that occasion wrote to the governor of Kansas, telling him that he had 1,440 admissions to hospital a day, that is to say about one a minute, and asking him to realize the strain that all of this was putting on his nursing and medical staff. General Crowder also knew that several other camps had, more recently, experienced a rash of flu infections.

Following consultations with both President Wilson and the army leaders in Europe, Crowder decided to defer the training of 142,000 registrants who were due to begin on October 7, and wait for the end of the flu pandemic before bringing them to camp for training. Settling the question of training camps was easier than the one that President Wilson now faced. The army in Europe wanted more troops in order to capitalize on the successes they were experiencing but medical authorities urged him to stop the mass shipment of troops until the flu abated. Death rates on troop ships were running higher than the numbers that died in battle. Wilson accepted the advice of Army Chief of Staff General March, and decided not to suspend troop shipments. Fortunately, the end of the war came in less than a month after the President’s decision.

In 1918 little was known about viruses and almost nothing was available to provide an adequate cure. Almost eighty years later, with concerns mounting that another flu pandemic could hit the world, scientists set about recreating the 1918 virus so that it could be tested on lab animals to measure its strength. First they had to find the body of someone who had died of the 1918 flu. They found one in Alaska that had been frozen in the Arctic permafrost soon after death, so scientists were able to extract samples of lung tissue from it. The overlapping gene sequences recovered from this sample were pieced together to give the full genome sequence, and it was at that point that scientists became fairly certain that some ancestor of the virus had originally infected birds and had moved from there into humans. A report from 1918, which was only investigated after 1998, confirmed this conviction.
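The idea of piecing overlapping fragments into one longer sequence can be illustrated with a toy example. The sketch below is only a rough illustration of the principle, not the method the researchers actually used; the fragment data and the greedy-merge routine are invented for this purpose.

```python
# A minimal sketch of overlap-based assembly: repeatedly merge the pair of
# fragments that share the longest suffix/prefix overlap. The reads below are
# invented; real genome assembly uses far more sophisticated methods.

def overlap_length(left: str, right: str, min_overlap: int = 3) -> int:
    """Length of the longest suffix of `left` that is also a prefix of `right`."""
    max_len = min(len(left), len(right))
    for size in range(max_len, min_overlap - 1, -1):
        if left.endswith(right[:size]):
            return size
    return 0

def greedy_assemble(fragments: list[str]) -> str:
    """Greedily merge the best-overlapping pair until one sequence remains."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap size, left index, right index)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    size = overlap_length(a, b)
                    if size > best[0]:
                        best = (size, i, j)
        size, i, j = best
        if size == 0:              # no overlaps left: just join the remainder
            return "".join(frags)
        merged = frags[i] + frags[j][size:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

if __name__ == "__main__":
    # Hypothetical short reads covering one stretch of sequence.
    reads = ["ATGGCTAAG", "CTAAGGTTC", "GGTTCAATG"]
    print(greedy_assemble(reads))  # -> ATGGCTAAGGTTCAATG
```

Running the example merges the three short reads into the single seventeen-letter sequence, which is the essence of how overlapping fragments can be stitched into a longer consensus.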

A veterinarian who was studying hog cholera in Iowa at that time discovered that the epidemic he was dealing with in pigs had symptoms identical to those he observed in the victims of the pandemic flu. He said in his report that whatever was causing the flu in pigs must be very similar to the flu affecting humans. Today, with all our understanding of viruses, experts are convinced that the pandemic flu of 1918 was indeed a mutation of one that had been resident in pigs. When the recently reconstructed 1918 virus was tested in mice it was found to be extremely virulent, creating some 40,000 times more virus particles in a lung than ordinary flu does. All the mice that were tested with the 1918 virus died within six days. Samples of the virus are now stored in a secure vault at the Centers for Disease Control and Prevention in Atlanta, Georgia, but fears exist over the risk of it getting into terrorist hands.

In 2005, the United Nations General Assembly called for immediate international mobilization against an avian flu that had already transferred into humans and, by early 2005, killed sixty-one people in Southeast Asia. There were fears that this virus could become a pandemic and be as destructive as the flu of 1918, a similar bird-to-human virus and the cause of the deaths of thirty million people worldwide. This new bird-to-human virus could be worse than the 1918 flu because, while there is much greater knowledge of how to cope with it, there is at the same time far greater and more frequent travel around the world. Finding a vaccine for a new type of virus, one that might change as it moves from bird to human, and then having it available in huge quantities at short notice is a huge challenge.

An examination of what happened in Central Africa in the past few years illustrates the problem. Within a year, an outbreak of the deadly Ebola virus took the lives of more than five hundred people in the Congo. The source of the virus was unknown for some time until, in December of 2005, a team of scientists found the virus in three species of fruit bats in the Congo. These bats were part of the human food chain in the Congo and it seemed likely that transmission to humans occurred in this way. The possibility of a similar bird or bat to human transmission in Asia has raised concern everywhere, especially since the 1918 virus, though unidentified at the time, was itself a transfer from birds to humans.

The United States is not the only country that reconstituted the 1918 virus. The director and staff of the U.S. Federation of American Scientists’ Working Group on Biological Weapons are far from being satisfied with the level of security presently provided in Atlanta. They say that the risk of theft by a disgruntled, disturbed, or extremist lab employee at the facility is so great that it comes close to being inevitable. They have proposed raising the level of security to the highest possible. In 2003, they point out, a SARS virus escaped accidentally from a lab in Singapore and a year later there were two escapes of the same virus from labs in Beijing.

The avian flu virus was found in thousands of birds in Asia and in smaller numbers in many other countries, presumably carried worldwide by migrating birds. The detailed gene sequences of this virus are well known. It is defined as the virus H5N1 and it is one that has never before been experienced by humans except for those who died from it in Asia. There are therefore no antibodies in humans that could fight off infections from this virus, as is the case year by year when more familiar strains of flu viruses appear.

Dr. Anthony Fauci of the U.S. National Institutes of Health is the United States watchdog for tracking the behavior of the H5N1 virus, monitoring it regularly as samples are collected from time to time, to check any mutations that develop and to give warning if any evidence of transfer from human to human is found. To date this has not happened. Soon after the first human death from H5N1, Fauci's lab developed a vaccine that would be able to protect humans. It had to be tested on mice before it could safely be administered to humans, and Fauci discovered that the dosage needed to protect the test animals was far greater than that required for traditional flu strains.

Thus the various difficulties associated with responding to the United Nations appeal remained: how to produce enough vaccine in a very short time once the mutation capable of creating a pandemic appears, how to cope with breakdowns in social organizations and institutions if large numbers of people die, and how to equip and protect adequately the caregivers in hospitals so that the damage can be minimized.


london-black-death-plague-1665

London Black Death Plague – England – 1665 AD

The Asian Bubonic Plague, known as the Black Death, hits London.

No one knows why the bubonic plague, or Black Death as it came to be known in England, broke out in eastern Siberia in the 1300s and spread westward. There was very little knowledge, at that time, of the ways by which diseases are carried from place to place, so many of the efforts to get rid of them were ineffective. In later years it was discovered that infected fleas were the carriers. They passed on the disease to rats and when the rats died the fleas attacked humans. In 1347, a ship sailing from a Black Sea port to Messina in Italy arrived at its destination with every person on board dead. It appears that the last survivors were able to bring the ship into port before they too died. The port authorities in Italy, as soon as they saw what had happened, had the ship towed out of the harbor, but their action was too late to stop the spread of the disease. Most of the people of Messina were already infected and, from this city, the disease spread quickly across Europe and across the English Channel, reaching London a year later, in 1348, where it killed close to half of the city's people. Within the following three centuries London suffered several different epidemics but these, and even the experience of 1348, were relatively benign compared with the violence of the outbreak of Black Death that swept across London in 1665.

The name by which the bubonic plague came to be known was related to the formation of black boils in the armpits, neck, and groin of infected people, which were caused by dried blood accumulating under the skin after internal bleeding. People first experienced the bacterium of Black Death as chills, fever, vomiting, and diarrhea. Frequently, the disease spread to the lungs and almost always in these cases the victims died soon afterward. The name pneumonic plague was given to these cases. In all victims the disease spread easily from person to person through the air and, in the vast majority of instances, death ensued. London’s population in 1665 was half a million; it was the biggest city in Europe. The first victim of the Black Death was diagnosed late in 1664 but it was in May of the following year that significant numbers of infections were being observed. By June, in the wake of a heat wave, more than seven thousand lives were being claimed by the Black Death every week. Those who could leave the city as the wave of death swept over it did so. The king and his retinue left. So did many of the clergy and nobility. The biggest surprise and the one that everyone condemned was the departure of the president of the Royal College of Physicians. All who were unable to leave the city, the vast majority, had to cope as best they could.

The usual practice of burying the dead in what was known as consecrated ground, that is to say, the cemetery on the grounds of the church, had to be abandoned as the number of dead mounted. Plague pits came into use to cope with the problem. As many as 100,000 lives were lost before winter killed the fleas and the epidemic began to taper off. The peak total of deaths came late in September, 1665, an interesting parallel with the time frame of the 1918 flu pandemic, which lasted for a similar stretch of time. Presumably, those who are afflicted with such diseases develop antibodies after a time and fewer and fewer people then succumb. Doctors of the time could provide no explanation for the sickness, and most of them were afraid to offer treatment. In an attempt to keep from being infected, the few physicians who did risk exposure wore leather masks with glass eyes and a long beak filled with herbs and spices that were thought to ward off the illness. So terrified were the authorities that if even one person in a household showed plague-like symptoms, a forty-day quarantine, marked by a red cross on the main door, was imposed on the whole home. In many cases, it was a virtual death sentence for everyone living in the home because the black rat, the usual carrier of the disease, was an old inhabitant of London's homes. When these rats died from the disease the fleas used people as their hosts.

Daniel Defoe, who was a youngster in 1665, later wrote extensively of the effects of the Black Death. He described London as a city abandoned to despair, a place where every home and every street was a prison. One area near the center of the city that had no buildings on it became a mass grave where the dead were dumped unceremoniously and covered with loose soil. Every day, thousands of bodies were brought to this spot in what were described as dead carts. Farther out from the center of the city, as the disease spread, a burial pit was dug, forty by sixteen feet and twenty feet in depth, and this served as a mass grave. Defoe stressed the eerie silence everywhere. There was no traffic except for the dead carts. Anyone who risked going outside always walked in the middle of the street, at a distance from any building and as far away as possible from any other person. London's economic success, as evident in its huge population of half a million, led to overcrowding and neglect of hygiene, both conditions that encouraged the spread of diseases. Rat-infested slums that lacked running water added to the risk of infection. Paradoxically, the worst set of circumstances for those who showed initial symptoms was the five pest houses outside the city to which these people were sent. The unavoidable close contact with other patients made for easy transfer of the bacteria through breathing.

There have always been epidemics and outbreaks of sicknesses in London. This particular outbreak, the worst of all, had a predecessor in 1348, as has already been mentioned, which seemed to be the worst ever in its own time. A thousand years before the events of 1665 there was an earlier outburst of what must have been similar to the Black Death but was described differently at that time. Between the years 1550 and 1600 there were five severe attacks of Black Death, the last of which killed 30,000 Londoners. There were good reasons for these catastrophic experiences of diseases. Very little was known about public hygiene and open sewers were the norm. Homes were small and so tightly packed together that bacteria quickly moved around from person to person. Furthermore, London had for a long time been the center of national life and the place where there were opportunities in business and professional work. It had the biggest population of any English city. People kept arriving, and living quarters became more and more crowded in every part of the city.

There was a side effect from the frequency of diseases: the growth of what might be called healers. Charts were produced and circulated to show how saints' days or astrological predictions related to the healing powers, or otherwise, of different herbs. All kinds of superstitions were embraced, even the one about being cured if you touched the hand of a dead man. For centuries, the priests of the church were the doctors, until the pope forbade them from drawing blood in any way. After that all kinds of lay doctors multiplied. Once a person managed to secure widespread publicity as a healer, large numbers of people accepted his cures without questioning them. The atmosphere of fear about new waves of disease was so great that the strangest type of cure was accepted. William Samson, a healer, practiced his art near the gates of Bartholomew's Hospital, a much-respected institution. Because of the location, his proposed remedies were readily accepted at the price he asked. Samson was something of a psychologist and could point to people whom he claimed to have cured.

Before the Black Death had run its course an unexpected "cure" appeared in a rural setting in September of 1665. A tailor received a parcel of cloth from London that also contained some plague-infected fleas. Four days later the tailor was dead and, by the end of the month, five others had died. Everyone had heard of the tragedy in London and panic set in after the deaths of the five. The whole community gathered together and arranged to have their village quarantined to prevent the disease spreading throughout the region. It seemed like suicide yet, a year later, when the first outsiders entered the village, they found that most of the residents were alive and healthy. How did so many live through the attack of a disease that had been consistently taking the lives of almost all those infected? It is here that two extraordinary stories from 1665 emerged, stories that affect life today. The first relates to Isaac Newton, the famous scientist, who was studying at Cambridge when the Black Death began to reach that city. His mother took him home to rural Lincolnshire for two years and it was during that time of enforced isolation that he did much of the groundwork for his Principia, the Mathematical Principles of Natural Philosophy, often regarded as one of the greatest scientific works of all time.

The second story relates to the survivors of the Black Death. In London, as well as in the village where the tailor received the cloth with the fleas, there were accounts of people who survived the Black Death in spite of close contact with family members who had been infected and died. Elizabeth Hancock was one of these. In 1665, she had buried her six children and her husband within a single week but never became ill. The village gravedigger, who had close contact with hundreds of dead bodies, also survived. Were these people somehow immune to the Black Death? In the last few years, as concern mounted over the possibility of a flu pandemic reaching North America, Dr. Stephen O'Brien of the National Institutes of Health in Washington, DC, decided to investigate these accounts of seventeenth-century survival. He searched for descendants of the villagers among whom a number of infected people had clearly survived the disease. This was not easy, since a dozen or so generations of families had come and gone over the intervening centuries. He finally succeeded and collected their DNA samples.

Dr. O'Brien had already been working with HIV patients and had discovered in 1996 that a modified form of a particular gene in these patients, one known as CCR5 and commonly described as Delta 32, prevents HIV from entering human cells and infecting the body. Based on this finding, and convinced that the way in which Delta 32 protects the body from infection might apply to other diseases, he took DNA samples from the surviving relatives of the lucky ones of 1665. As he examined them he made two startling discoveries based on both his work with HIV patients and the experiences of the surviving relatives. One copy of the mutation enables people to survive although they get very sick. Two copies, that is to say one gene from each of two parents, ensure that an individual will suffer no infection of any kind. Delta 32 has not been found in parts of Asia or Africa or other areas where bubonic plague or Black Death did not occur, so this, for Dr. O'Brien, raised an interesting question: did some natural event create this mutation so that some would survive? It has been said that a destructive bacterium or virus does not want to destroy all of its hosts, so that it can continue to infect others later. Was this what happened in the case of Delta 32?
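The distinction between one copy and two copies follows ordinary Mendelian arithmetic. As a simple illustration (standard inheritance probabilities, not a figure from O'Brien's study), if both parents each carry a single copy of Delta 32, the expected proportions among their children are:

$$P(\text{two copies}) = \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4}, \qquad P(\text{one copy}) = 2\cdot\tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{2}, \qquad P(\text{no copy}) = \tfrac{1}{4}.$$

On this reckoning, only about one child in four of two carrier parents would enjoy the full protection that two copies confer, which helps explain why even in a surviving village most people still fell sick.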


black-death-plague-constantinople-542

Black Death Plague – Constantinople – Byzantine Empire – 542 AD

Bubonic Plague from Asia arrived via Africa at Constantinople in 542

In 542, Roman Emperor Justinian was actively rebuilding the empire from its new headquarters in Constantinople, often referred to as the Byzantine Empire since there was so much Greek influence there. The old western part, with its center in Rome, had been taken over by Vandals and other barbarian peoples. Through a series of military victories, Justinian's forces had been able to recapture much of Italy and had also been successful on other fronts. It was in the midst of these successes that Constantinople was ravaged by the first recorded pandemic of the Black Death. It reached Justinian's capital from Egypt, probably carried by rats in ships. Historians have estimated that close to half the population of Constantinople died from the plague during its four or five months of active infections. The number of soldiers left for Justinian's campaigns was completely inadequate, so he had to step back from defending or further extending the historic frontiers.

Procopius, a historian living in Constantinople at the time, vividly described the plague and its effects. He pointed out that often, in the first day of infection, nothing very serious was evident but, on the second day, a bubonic swelling developed in the groin, armpits, or on the thighs and mental problems began to appear. Some went into a deep coma while others became delirious. Death came quickly to many while others lived for several days. When small black pustules appeared in the skin, the infected individual usually died within a day. Another symptom, vomiting blood, almost always led to death within a few hours. The physicians of that time tried a variety of cures but the results were baffling: again and again, patients they fully expected to recover died, while others who seemed hopeless lived on far beyond the period of the pandemic.

In the sixth century there was no significant understanding of bacteria and their role in the spread of diseases, and nothing was yet known anywhere about genes and their critical influence in determining who survived and who did not. These are the reasons for the perplexity experienced by the physicians when they tried their best to save the sick and the results were disappointing. It was the same centuries later when the Black Death that overtook Constantinople in 542 swept over London in 1665. Many people in London, such as gravediggers, who were constantly exposed to infected bodies, stayed quite healthy while others who had just a single exposure to the infection died within two days. U.S. researchers who investigated this puzzle in the late 1990s solved it: those who had a particular gene variant, commonly known as Delta 32, did not catch the disease if they inherited it from both parents. If they received the gene from only one parent, they got sick but they recovered.

The same gene in HIV patients is now known to be the reason for them escaping the consequences of that particular infection. Procopius went on to describe the disease in Constantinople by showing how it affected pregnant women. Here, as in the general population, death came to both mother and baby but, in a few instances, either a mother died at the time of childbirth while the child survived or else a child died and the mother survived. It seems likely that in these rare cases the Delta 32 gene had been inherited from one parent. One common cause of death that seemed to have escaped the attention of Procopius was inflammation of the lungs, usually followed by spitting blood. Death followed quickly in these cases. Overall, the 542 pandemic ran its course over a time period of four or five months, a common sequence in other places at different times. At its peak, 10,000 died daily. The disposal of dead bodies overwhelmed the whole city. At first, friends and relatives attended to burials but very soon, with bodies being left unattended in streets and homes, huge pits were dug for mass disposal of the dead. Even these arrangements were inadequate so, with more and more bodies piling up, men removed the tops of the towers on the city walls and threw bodies into the spaces inside the wall.

The great leaders of the Roman Empire saw the whole inhabited world as their domain of responsibility, yet when Justinian became emperor there had not been any additions to the empire's traditional territory since its acme in the second and third centuries. A glance at a map of the empire in the second century and then one in the sixth reveals the enormous amount of shrinking that had taken place in the intervening four hundred years. Justinian was determined to change this condition and push back the existing frontiers to encompass as much as possible of the known world. The Greek city of Byzantium became the new Rome in the year 330. It was named Constantinople in honor of Emperor Constantine, who had legalized Christianity and made it the favored religion of the Roman Empire. Justinian ruled this eastern empire from 527 to 565 and, in the first half of that period, he set about restoring the size and prestige of the former empire. In many ways his reign therefore represented a preservation of the Roman past.

There was an unbroken tradition in Roman law that had continued from the earliest days of the empire into the sixth century. Justinian felt that the preservation and renewal of these laws was as important as the recovery of former territory and he set about getting the work done. It was an immense task, one that was to last far beyond the life of the Byzantine Empire and serve in later centuries as the basis of European jurisprudence. The work was begun in 528 when Justinian appointed ten jurists to compile a new codification of the statute law and it was completed a year later. The next task was even bigger, the preparation of a summary of jurisprudence from the great Roman lawyers of the second and third centuries. This involved the reading of two thousand manuscript books, assessing the key matters of content, and reducing the total amount of material to one fifth of the original. All this took three years of hard work. Justinian had a reputation of being a very hard worker and he inspired these writers by his own example. His staff often used to find him busy at work in the middle of the night.

Once the work of codification and summary of jurisprudence had been completed, no further commentary on the law was permitted. The code and the summary, or digest as it was called, now represented the whole of the law. Any new legislative acts were referred to as novels; they usually dealt with issues in ecclesiastical and public affairs. One very long novel dealt with Christian marriage law. It was a sign of the times, particularly the changed times that accompanied the move of the empire's capital from Rome to Constantinople, that all of these novels were written in Greek, not Latin. Furthermore, Justinian knew that many Roman laws had never been popular in the Greek east, so local preferences frequently replaced old Roman laws. Hellenic traditions affecting family, inheritance, and dowry, for example, appeared in the new legislation. In addition, the power of the father, traditional in old Roman thought, was now considerably weakened. Christian influences too appeared in much of the newer legislation. There was a desire to make law more humane, an emphasis that came from Justinian's interest in the idea of a love of humanity, and it was expressed in laws protecting the weak against the strong, favoring the slave against the master, debtor against creditor, and wife against husband. These improvements may seem small today but they represented a huge advance from the days of the old Roman Empire.

Justinian's role in the Black Death pandemic needs to be examined because it was he who greatly extended the activities of the empire into Africa, the place that was the source of the Black Death. His first moves were directed at recapturing some of the lost lands of the west. His armies invaded the Vandal and other kingdoms, one after another, in a series of bitter wars from the 530s onward, and in all of these he achieved considerable success. He made the Germanic kings servants of the eastern empire but there remained the difficult problem of religious purity. Justinian was devoted to the Nicene Creed, the orthodox creed adopted under Constantine, but the Germanic kings were practicing and preaching a form of Christianity considered heretical by the established church. The Vandals were the most zealous and were quick to seize orthodox churches in order to convert them to their own form of worship. The Vandals were so few in number that they resorted to terror in order to control their subjects, so their kingdom became a police state in which orthodox Christians were stripped of property rights, and frequently of freedom and even of life. When a delegation of orthodox Christians from Africa appealed to Justinian to fulfill his role as defender of the faith, he decided that the time had come to bring Africa back under the control of the empire. The immediate incentive for attacking the hundred-year-old Vandal kingdom in Africa was soundly based. Their king, Hilderic, had fostered good relations with the orthodox Christians. Exiled bishops had been recalled and churches reopened, but in 530 he was deposed by his cousin Gelimer and, from his prison, Hilderic appealed to Justinian. Even so, Justinian was uncertain about taking action because an earlier expedition had led to disaster. Finally, after much deliberation, Justinian went ahead with the invasion of Africa, convinced that the restoration of true Christianity justified it. The expedition set sail in 533 under the command of Belisarius.

The field army numbered 18,000 men, 10,000 of them infantry and 5,000 cavalry, with the remainder made up of other troops. In Sicily, Belisarius got the welcome news that Gelimer was unaware of the offensive and had sent 5,000 men and 120 ships under his brother Tata to put down a rebellion in Sardinia. The expedition from Constantinople landed in Tunisia, and the army marched along the coast toward Carthage while the fleet accompanied it offshore. Gelimer's reaction was to put Hilderic to death and then march out to resist the invasion. His tactics were poor, perhaps due to inadequate planning, and he was routed. Belisarius marched on and took possession of Carthage. Gelimer fled westward and joined his troops who had been recalled from Sardinia, but within a few months he suffered another defeat near Carthage. Gelimer hid for a time with local tribesmen but finally surrendered. Belisarius went back to Constantinople with his captives and booty, and Justinian arranged a victory celebration for him when he arrived, somewhat like the old Roman celebrations that followed successful military campaigns. About two thousand of the captives were conscripted into the army of Justinian.

Quite apart from his military successes and defense of traditional Christianity, Justinian achieved fame because of his extensive building program. The outstanding illustration of his work, one that still survives in the Istanbul of today, is the Hagia Sophia. An earlier church stood on the site, built in 360 by Constantius, son of the Emperor Constantine who had liberated the Christian faith from centuries of persecution. This earlier church was known as the Great Church. In 404 it was destroyed by mobs and, later, in 415, rebuilt. It too fell victim to a rampaging mob of heretics in 532. The emperor Justinian, firm defender of orthodoxy, made short work of the howling heretics and ordered that construction begin on a brand new basilica. Construction lasted from 532 to 537, when the new church was consecrated. Architecturally the grand basilica represented a major revolution in church construction. It had a huge dome, and supporting its weight demanded new skills and new materials; nothing on this scale had ever previously been attempted. There were no steel beams available at this time, so the dome had to be supported by massive pillars and walls. The church itself measured 260 by 270 feet, the dome rose 210 feet above the floor, and the overall diameter of the dome was 110 feet.

There was some awareness of the danger of earthquakes at the time, but everyone was convinced that the massive structures employed would meet any threat. They were wrong. Parts of the church and dome were subsequently destroyed in an earthquake, and large buttresses had to be added to the supports. In 1204, Roman Catholic crusaders attacked and sacked Constantinople and Hagia Sophia, leaving behind a lasting legacy of bitterness among Eastern Christians. For more than 1,000 years Holy Wisdom had served as the cathedral church of the Patriarch of Constantinople as well as the church of the Byzantine court, but that function came to an end in 1453, when the Ottoman Turkish Sultan seized the Imperial City and converted Hagia Sophia into his mosque. Today, Justinian's dreams of restoring the greatness of the old Roman Empire are long forgotten, but the magnificent Church of the Holy Wisdom, which is the translation of the words Hagia Sophia, is still admired. It remains a major tourist attraction and dominates the skyline of the modern city. Such was its stability over the centuries that, during the earthquake that struck Istanbul in 1999, the safest place for people was considered to be the Hagia Sophia. It is the mother church of all Eastern Christians of the Byzantine liturgical tradition, both Orthodox and Greek Catholic.

The reign of Justinian proved to be a major factor in the history of late antiquity. Paganism finally lost out, and the Nicene Creed that Constantine had established in the fourth century was almost universally recognized. From a military viewpoint, it marked the last time that the Roman Empire could go on the offensive with any hope of success. Africa and many other areas had been recovered. When Justinian died, the frontiers he had secured were still intact, but it was the extent of the old empire he had won back, and the greatly expanded trade with the rest of the known world that accompanied it, that led to the pandemic which destroyed so much of Constantinople and cut short all further military campaigns. Justinian had not created the disease, but he created the pandemic, which followed the movements of men and goods across his greatly expanded empire. Without the empire and its huge shipments of grain and cloth from Africa, it is difficult to imagine how the First Pandemic could ever have hit Constantinople at such an early date.

Istanbul is Turkey's largest city and its cultural and economic center. It is located on the Bosphorus Strait in the northwest of the country and encompasses the natural harbor known as the Golden Horn. Istanbul extends along both the European (Thrace) and the Asian (Anatolia) sides of the Bosphorus and is thereby the only metropolis in the world that lies on two continents. Its 2000 census population was 8,803,468 (city proper) and 10,018,735 (province), making it, by some counts, one of the largest cities in Europe. The census bureau estimate for July 20, 2005, is 11,322,000 for Istanbul province, which is generally considered the metropolitan area, making it one of the twenty largest metropolitan areas in the world. Istanbul is located at 41° N 28° E and is the capital of Istanbul Province. The city was formerly known as Constantinople; the name Istanbul had already been in popular use for five centuries before it became official in 1930. With a history stretching back some three thousand years, it is considered one of the oldest still existing cities of the world. Istanbul has been chosen as the European Capital of Culture for 2010. It is sometimes called the “City on Seven Hills” because the historic peninsula, the oldest part of the city, was built on seven hills and is represented by seven mosques, one at the top of each hill.


preventing-bioterrorism

Preventing Bioterrorism

The American Society for Microbiology (ASM) works with the U.S. Congress on issues related to the adequacy of federal law governing dangerous biological agents. It is the largest single life science society in the world, with a membership of 42,000, and represents a broad spectrum of sub-disciplines, including medical microbiology, applied and environmental microbiology, virology, immunology, and clinical and public health microbiology. The Society's mission is to enhance microbiology worldwide to gain a better understanding of basic life processes and to promote the application of this knowledge for improved health and for economic and environmental well-being.

It has a long history of bringing scientific, educational and technical expertise to bear on the safe study, handling and exchange of pathogenic microorganisms. The exchange of scientific information, including microbial strains and cultures, among scientists is absolutely essential to progress in all areas of research in microbiology.

It understands the unique nature of microbiology laboratories, the need for safety precautions in research with infectious agents and the absolute necessity for maintaining the highest qualifications for trained laboratory personnel. It conducts education and training programs, as well as publication of material related to shipping and handling of human pathogens. Through its Public and Scientific Affairs Board, it provides advice to government agencies and to Congress concerning technical and policy issues arising from control of biological weapons. The Society’s Task Force on Biological Weapons Control assists the government on scientific issues related to the verification of the Biological Weapons Convention (BWC).

It is acutely aware of the threat posed by the possible misuse of microbial agents as weapons of terror. Concerns that bioterrorists will acquire and misuse microorganisms as weapons have resulted in stricter controls on the possession, transfer, and use of biological agents to restrict access to only legitimate and qualified institutions, laboratories, and scientists. Over the past ten years, the ASM has worked with the Department of Health and Human Services (DHHS), the Centers for Disease Control and Prevention (CDC), the Department of Agriculture (USDA), and Congress to develop and establish legislation and regulations that are based on the key principle of ensuring protection of public safety without encumbering legitimate scientific and medical research or clinical and diagnostic medicine for the diagnosis and treatment of infectious diseases. The ASM has been an advocate of placing responsibility for the safe transfer of select agents at the level of individual institutions supported by government oversight and monitoring to minimize risks without inhibiting scientific research.

It notes that national security efforts to control biological weapons require that the United States increase biodefense and public health capabilities at the same time that it tries to develop safeguards to prevent the misuse of biological agents to harm the public health. Limiting the threat of bioterrorism includes reducing access to biological agents that might be used as weapons; however, combating infectious diseases and increasing medical preparedness against bioterrorism necessitates increasing biodefense, biomedical, and other life sciences research, including work on the same “threat” agents that could be used as biological weapons. As safeguards are developed, we must ensure that biomedical research, public health, and clinical diagnostic activities are not inhibited or we risk jeopardizing the public’s health and welfare.

Congress already has established a legal and regulatory framework to prevent the illegitimate use of toxins and infectious agents, outlawing virtually every step that would be necessary for the production and use of biological weapons. In doing so it has balanced assuring the availability of materials to the scientific and medical community for legitimate research purposes with preventing access to these agents for bioterrorism. For instance, the 1989 Biological Weapons Act authorizes the government to apply for a warrant to seize any biological agent, toxin, or delivery system that has no apparent justification for peaceful purposes, but exempts agents used for prophylactic, protective, or other peaceful purposes. Prosecution under this statute requires the government to prove that an individual did not intend to use the biological agents or toxins in a peaceful manner. The law also enables federal officials to intervene rapidly in cases of suspected violations, thereby decreasing the likelihood of bioterrorism while protecting legitimate scientific endeavors, such as biomedical research and diagnosis of infectious diseases.

The Antiterrorism and Effective Death Penalty Act of 1996 (the Act) broadens penalties for development of biological weapons and illegitimate uses of microorganisms to spread disease. ASM testified before the 104th Congress with respect to the control of the transfer of select agents that “have the potential to pose a severe threat to public health and safety” and contributed to the passage of Section 511(d) of the Act. The Act was intended to protect the dual public interests of safety and free and open scientific research through promulgation of rules that would implement a program of registration of institutions engaging in the transfer of select agents. The transport of clinical specimens for diagnostic and verification purposes is exempt, although isolates of agents from clinical specimens must be destroyed or sent to an approved repository after diagnostic procedures are completed. The CDC is responsible for controlling shipment of those pathogens and toxins that are determined to be most likely for potential misuse as biological weapons. The ASM believes the CDC regulatory controls provide a sound approach to safeguarding select agents from inappropriate use and should serve as a worldwide model for regulating shipment of these agents.

In her April 22, 1998, testimony before the Senate Judiciary Subcommittee on Technology, Terrorism and Government Information and the Select Committee on Intelligence, Attorney General Janet Reno stated that “mere possession of a biological agent is not a crime under federal law unless there is proof of its intended use as a weapon, notwithstanding the existence of factors, such as lack of scientific training, felony record, or mental instability, which raise significant questions concerning the individual’s ultimate reason for possessing the agent.” She, like other law enforcement officials, is troubled by the fact that someone can possess a biological agent that could be used as a weapon and not be in violation of a law unless one can establish intent. It is our understanding that the Department of Justice and other federal agencies have reviewed federal criminal statutes that could be expanded to make possession of certain biological agents illegal.

The ASM agrees that enhancing security and safety is a critical necessity when bioterrorism poses a credible threat to society. However, proposals intended to promote safety should not pose a threat to biomedical or other life sciences research and clinical diagnostic activities that are essential for public health. Unintended consequences could stifle the free exchange of microbial cultures among members of the scientific community and could even drive some microbiologists away from important areas of research. Ironically, extreme control measures to prevent bioterrorism, instead of enhancing global security, could prove detrimental to that goal if scientists can no longer obtain authenticated cultures. A key point is that natural infectious diseases are a greater threat than bioterrorism. Infectious diseases remain the major cause of death in the world, responsible for seventeen million deaths each year. Microbiologists and other researchers depend upon obtaining authenticated reference cultures as they work to reduce the incidence of and deaths due to infectious diseases. Dealing with the threatened misuse of microorganisms, therefore, will require thoughtful consideration and careful balancing of three compelling public policy interests.

We must acknowledge the terrible reality of terrorism within the United States and abroad, of both foreign and domestic origin. We cannot discount the possibility that, as unfathomable as it may be to the civilized mind, terrorism may take the form of bioterrorism. Most certainly, therefore, the government and scientific communities are duty bound to take every reasonable precaution to minimize any risk of terrorist use of microorganisms. The ASM is taking a proactive role in this regard. Even as we strive to prevent bioterrorism, we must candidly recognize that no set of regulations can provide absolute assurance that no act of bioterrorism will ever occur. Therefore, as we strive to prevent such acts, we also have a duty to pursue research and public health improvements aimed at developing the most effective possible responses to acts of biological terror. Research and public health responses related to effectively combating an act of terror are a critical component of the public policy response to the threat that exists.

While the possibility of a future act of biological terrorism is a terrible threat with which we must and will deal, the scourge of infectious diseases is a terrible reality that daily takes the lives of thousands of Americans and tens of thousands around the world. Infectious diseases are now the third leading cause of death in the United States. Research on the prevention and treatment of such diseases is critical to the well being of our entire population. In responding to the threat of terror, therefore, we must minimize any adverse impact upon vital clinical and diagnostic research related to infectious diseases.

Congress and federal agencies have appreciated these competing considerations and have sought to minimize interference with research through such measures as recognizing appropriate exemptions in regulating the handling of pathogenic microorganisms. As we have stated, past legislation has recognized the need for balancing these concerns. We know that such balancing will continue, and the ASM is committed to providing all available assistance in achieving balanced and effective responses to the threat to the public welfare.

The ASM supports making it more difficult for bioterrorists to acquire agents that could be used as biological weapons and making it easier for law enforcement officials to apprehend and prosecute those who would misuse microorganisms and the science of microbiology. Its code of conduct specifies that microorganisms and the science of microbiology should be used only for purposes that benefit humankind, and bioterrorism is certainly inimical to the aims of the Society and its members. The ASM established its Task Force on Biological Weapons to assist the government and the scientific and biomedical communities in taking responsible actions that would lower the risks of biological warfare and bioterrorism.

It supports measures to prohibit possession of listed biological agents or listed toxins unless they are held for legitimate purposes and maintained under appropriate biosafety conditions. Accordingly, it supports extending the current regulations implemented by the CDC to oversee the shipment of listed agents to include possession of cultures of those agents.

Although the ASM will not offer specific proposals today, we do think it will be useful to outline certain basic principles that we believe should be considered. Governmental responsibility for establishing, implementing, and monitoring programs related to biosafety should remain with the DHHS and CDC for human health and the USDA for animal and plant health. The CDC possesses institutional knowledge and expertise related to issues of biosafety and the designation, transportation, storage, and use of select agents. The CDC is well qualified to balance the real need for biosafety regulation with the critical need for scientific research, especially clinical and diagnostic research for the prevention, treatment, and cure of infectious diseases.

The CDC’s responsibilities should include the duty to continue to establish and periodically revise the list of select agents and, in accord with proper administrative procedures, to promulgate any additional regulatory measures related to registration of facilities, establishment of biosafety requirements, institution of requirements for safe transportation, handling, storage, usage, and disposal of select agents, and the auditing, monitoring, and inspection of registered facilities. The CDC should notify the Department of Justice about any concerns that it may have about institutions that possess select agents. Congress and the Administration must recognize that any expansion of existing regulations will require additional financial and other resources for the CDC. Based on surveys that the ASM has performed, it is estimated that approximately 300 institutions possess select agents. Approximately half of those institutions are currently registered with the CDC pursuant to existing law. Registration of an additional 150 institutions, therefore, would impose additional expense and resource burdens upon the CDC that should be recognized and funded to ensure the timely and complete fulfillment of the CDC’s critical mission.

Congress, the CDC, and any other relevant governmental agencies must maintain their focus on the legitimate, important, and fundamental issues related to biosafety. In this regard, biosafety initiatives should be directed toward, and focused on, institutions that utilize select agents for scientific purposes, regardless of whether such institutions are in the academic, commercial, or governmental sectors. As in other areas concerning biological, chemical, and radiological safety, the focus for ensuring safety should be on the institution. The institution, rather than any individual scientist, should be responsible for registering possession and maintaining the proper biosafety conditions for storage and usage of the agent.

In this context, ASM supports registration with the CDC of every institution that possesses and retains viable cultures (preserved and actively growing) of select agents along with the concomitant duty to follow all regulatory requirements related to such possession and usage. Institutions and individuals, thus, would be prohibited from possessing cultures of select agents unless the agents are maintained under appropriate biosafety conditions.

The DHHS/CDC, acting in cooperation with the scientific and biomedical communities and with public notice and input, should establish the rules and provide for governmental monitoring. However, the registered institution must be responsible for assuring compliance with mandatory procedures and for assuring fully appropriate biosafety mechanisms, including appointment of a responsible official to oversee institutional compliance with biosafety requirements.

These institutional responsibilities include assuring safety through proper procedures and equipment and through training of personnel. Thus, the institution would bear the responsibility for training employees regarding the biosafety requirements, including the absolute necessity for following those requirements, and such duties as reporting isolation of select agents or any breach in a biosafety protocol.

As institutions comply with appropriate safeguards, scientists may undertake their research with knowledge of clear procedures and with assurance that compliance with such procedures will fulfill all governmental requirements related to select agents. The institutions would be required to maintain records of authorized users and to ensure that they are properly trained as is currently the case for work with radioisotopes. Intentional removal of select agents from a registered facility would subject the individual to criminal sanctions.

Congress and the CDC must balance the public interests of minimizing the threat of bioterrorism and assuring vigorous scientific research, especially research relating to clinical and diagnostic methods and to protecting the nation’s food supply. We must recognize that we are dealing with naturally occurring organisms that cause natural diseases. The focus should be on cultures of biological agents and on quantities of toxins on the CDC select agent list, in order to address the problem of an individual who may unknowingly pick up a dead deer mouse carrying Hantavirus, a handful of soil containing Bacillus anthracis, or a jar of honey containing Clostridium botulinum, or who may contract an infectious disease caused by one of the select agents, and who could otherwise be in technical violation of a law prohibiting possession. Because microorganisms, including listed agents, are invisible and widely distributed, there is no way of knowing what one might possess unless the organisms are cultured or sophisticated molecular diagnostic procedures are used.

The CDC, working with the scientific community, should develop a comprehensive definition of a culture of a biological agent that would include microorganisms growing in artificial media, animal cells, and preserved viable materials from such cultures, which are the materials of concern.

Congress should recognize that the need to deal with the threat of biological terrorism will be an ongoing duty for the indefinite future and will continually require balancing competing considerations as discussed in our earlier testimony. Therefore, Congress, acting through the DHHS and CDC, should provide for continuing consultation with the scientific and biomedical communities regarding the substance and procedures of regulations governing select agents. The CDC should be empowered to act swiftly to adjust definitions, substantive duties, and procedural requirements to the inevitable changes resulting from scientific research. ASM is committed to working with Congress and the DHHS and CDC to protect against threats of terrorism while engaging in vigorous research for the betterment of humankind.