Part 2: Process and Politics
Busting A Myth
I was actually going to bring this up at the end of Part 1, but it fits just as well here. A myth/rumor/conspiracy theory exists concerning mefloquine and the CIA’s attempt to create a line of super-soldiers using mind control. Let me address this right now.
Like most myths, this one has some basis in real-life events; you just need to see those events in their proper perspective. It is true that mefloquine was one of the drugs tested by the CIA during its MK Ultra project. There is documented evidence of this, and I have no doubt it happened. That governments undertake dubious projects such as this should come as a surprise to nobody by now.
However (and a big however at that), there is more that you need to consider. Mefloquine was one of thousands of drugs tested in that program, and there was a good reason for it to be tested. Like other malaria drugs, mefloquine belongs to a family of drugs known as quinolines. Another anti-malarial, chloroquine, is a quinoline as well, and they are all derivatives of quinine, the toxic extract first known for its anti-malarial properties.
These are not to be confused with the quinolones, a family that includes antibiotics, most notably fluoroquinolones such as ciprofloxacin. These drugs are also known for their potential side effects, and care must be taken when using them. Although not known to cause serious psychiatric issues, fluoroquinolones can cause central nervous system reactions such as headache, dizziness, or insomnia.
There is nothing significant in the fact that mefloquine was tested in this program, which, by the way, ended in 1973. It was tested because other drugs with a similar makeup were known to have mind-altering side effects. To be on the safe side, I spent some time looking into the possibility that there might be some truth to the rumor. I could find no credible evidence that would warrant further investigation. This myth is just that, a myth, and it has been busted.
The Business and Politics of Pharmaceuticals
In the 1980s, the regulation of prescription drugs had become a topic of great discussion in a number of circles. There was a growing call to reduce the time that it was taking for drugs to hit the market, and an interesting mix of parties was making this call.
Naturally, the pharmaceutical industry had a vested interest in this. With billions of dollars at stake, the sooner a company can bring a new drug to market, the sooner it can begin to recoup the development costs for that drug, and hopefully make a profit before the patent on the drug expires. Pharmaceutical companies spend billions of dollars every year on the research and development of new therapies, so being able to recoup these costs helps to ensure that this research can continue into the future.
Within the medical community, there is a desire to eliminate the bureaucratic processes that delay the availability of potentially life-saving drugs. Many doctors have had patients who would benefit from a new therapy that is awaiting regulatory approval. It can often be a long and drawn-out process, and patients can die waiting for a new drug to make its way through the bureaucracy.
Any doctor will probably tell you that they have faced a conundrum or two in their career, and many of their patients face the same conundrums. One of the biggest relates to medication. There is no such thing as a “perfect medicine”: every drug has side effects. The problem emerges when the side effects of a medication end up being worse than the disease it is meant to treat. Cancer patients on chemotherapy can attest to this firsthand.
It is something that I am also all too aware of, my mother having been diagnosed with Rheumatoid Arthritis in the past year. It was recommended she take methotrexate, a drug also commonly used in the chemotherapy regimens of people battling various forms of cancer. Given the trade-off, we are looking into other alternatives, which is what many people would do.
When it comes to many scientists, however, the thing that usually matters most is the efficacy of the drug: can it cure the disease effectively within a specified period of time? When the early results of a trial or experiment show that a compound can do what it was meant to do, and do it well, there is then a push to proceed with further studies to confirm the findings. That someone may be harmed or may die from the compound has not yet entered into the equation at this point.
As further studies are done, the rates of success in curing the disease often overshadow any adverse effects that might be experienced. There can be a tendency to dismiss such effects as coincidental in order to portray the results in a more positive light.
1985: An Important Year
Lariam was approved for use in Switzerland in 1985 by the Interkantonale Kontrollstelle für Heilmittel (IKS), the regulatory body responsible for the approval of prescription drugs in Switzerland at the time. An excerpt from a paper published in the 1986 Annual Review of Public Health reads:
In Switzerland, the IKS is authorized to assess new therapeutic agents prior to marketing and to notify the cantons (local government units responsible for health care) of its appraisal regarding composition, advertising, and price, as well as its decision concerning approval or denial of authorization for sale (20). Specific requirements for registration, including data on safety and efficacy, are included in regulations formulated in 1955, 1963, and 1972. Since 1973, the IKS has also been empowered to control the manufacturing of pharmaceutical agents. The Swiss system of drug approval is notable for its simplicity and lack of detailed specifications and requirements. A Swiss university pharmacologist has noted that the high level of cooperation between the IKS and the industry is a remarkable feature. (Ann. Rev. Public Health. 1986. 7:217-35)
That same year was also an important year for the drug in the United States. That year, the FDA recommended mefloquine be approved for use in the United States. The FDA Medical Officer making the recommendation was Dr. Celia Maxwell. Reports published at the time seem to indicate that Dr. Maxwell was very enthusiastic about mefloquine, almost to the point of appearing to be a cheerleader for it.
These reports also reveal that despite her apparent opinion of mefloquine, Dr. Maxwell had never taken the drug herself, citing “multiple drug sensitivities”. She currently holds the position of Associate Dean of Research at Howard University in Washington, D.C. I have sent emails to Dr. Maxwell asking if she would be willing to discuss this matter with me, but to date have received no response. I’ll keep trying, and if I do get a response, I will post an immediate update.
1985 was a difficult year for the FDA. First, Congress was applying pressure to reduce the amount of time it took to get a drug to market, and it focused its attention on the regulator to get it to act. Congressional officials were being courted by the powerful pharmaceutical lobby, and with millions of dollars’ worth of potential campaign contributions on the line, they acted swiftly.
At the same time, the FDA was embroiled in a corruption scandal, after payoffs from generic drug companies to FDA employees were uncovered. The scandal appears to have been limited to generic drugs, and there is no indication that Dr. Maxwell’s recommendation was given in exchange for compensation from any party. Still, it was a major shake-up in the culture of the FDA.
Throughout the 1980s there was an ever-increasing number of studies on mefloquine. Most involved the interaction of mefloquine with other drugs, took place in locations around the world, and had varying numbers of participants. Again, there were some reports of severe adverse events involving psychological symptoms, but the studies were overwhelmingly supportive of mefloquine.
Understanding the Process: Clinical Trials
Because clinical trials play an important role in this story, and in the approval process for the medications you take, I thought I’d walk you through the process to help put things into perspective.
There are no hard and fast rules governing clinical trials; nothing specific as to the required number of participants in a study, or other things of that nature. Clinical trials, in theory, are designed to achieve the best possible result in a best-case circumstance. Many times, the circumstances are far from the best case, particularly when it comes to the number of subjects (people) available to participate in a study.
Typically, there are five phases to a clinical trial, though only four are usually referred to. The first is the phase zero clinical trial. These trials differ from the others in that they are exploratory studies used to speed up the approval process of a new drug, and they are usually done on only a few patients. Many times these involve investigational drugs and are conducted in exceptional circumstances.
I’m going to save myself some grief and just copy and paste this next section on human clinical trials from centerwatch.com.
Phase I studies assess the safety of a drug or device. This initial phase of testing, which can take several months to complete, usually includes a small number of healthy volunteers (20 to 100), who are generally paid for participating in the study. The study is designed to determine the effects of the drug or device on humans including how it is absorbed, metabolized, and excreted. This phase also investigates the side effects that occur as dosage levels are increased. About 70% of experimental drugs pass this phase of testing.
Phase II studies test the efficacy of a drug or device. This second phase of testing can last from several months to two years, and involves up to several hundred patients. Most phase II studies are randomized trials where one group of patients receives the experimental drug, while a second “control” group receives a standard treatment or placebo. Often these studies are “blinded” which means that neither the patients nor the researchers know who has received the experimental drug. This allows investigators to provide the pharmaceutical company and the FDA with comparative information about the relative safety and effectiveness of the new drug. About one-third of experimental drugs successfully complete both Phase I and Phase II studies.
Phase III studies involve randomized and blind testing in several hundred to several thousand patients. This large-scale testing, which can last several years, provides the pharmaceutical company and the FDA with a more thorough understanding of the effectiveness of the drug or device, the benefits and the range of possible adverse reactions. 70% to 90% of drugs that enter Phase III studies successfully complete this phase of testing. Once Phase III is complete, a pharmaceutical company can request FDA approval for marketing the drug.
Phase IV studies, often called Post Marketing Surveillance Trials, are conducted after a drug or device has been approved for consumer sale. Pharmaceutical companies have several objectives at this stage: (1) to compare a drug with other drugs already in the market; (2) to monitor a drug’s long-term effectiveness and impact on a patient’s quality of life; and (3) to determine the cost-effectiveness of a drug therapy relative to other traditional and new therapies. Phase IV studies can result in a drug or device being taken off the market or restrictions of use could be placed on the product depending on the findings in the study.
It’s clear that it would be extremely difficult to perform the perfectly ideal clinical trial; that is just the way things are. A degree of risk comes with every drug on the market, and we take that risk with every dose of a new medication. It ultimately comes down to the question of whether or not the reward is worth the risk. Sometimes it is, and tragically, sometimes it isn’t.
This is a question many would be asking after 1989, the year Lariam was approved for use in the United States by the Food and Drug Administration. Unfortunately, many of them would get an answer they didn’t want.
In part 3, the reports of adverse events start to come in, and the drug is implicated in a murder.