Bioethics Discussion Blog: April 2008

Monday, April 28, 2008

Conflict of Interest?: The Researcher has the Disease under Study

I have a simple question to put to my visitors: should a researcher who has the very disease whose therapy is under investigation be a major or significant investigator in the study, or should the researcher recuse him/herself and simply become a patient-subject of the study?

There is past and current concern in the medical literature about the conflict of interest that may be held by an investigator who is working for, or in any way given financial or other support by, a pharmaceutical company that will produce the drug under investigation. The concern is that there may be actual bias in the design or conduct of the study toward the benefit of the pharmaceutical company, with lesser concern for the science involved. Even if there is no intentional bias, others may perceive it as a possibility, and that perception will affect how they view the results of the research study.

Now, with regard to the investigator him- or herself: if the investigator were a patient with the disease, regardless of the financial support of the study, would a conflict of interest exist that might affect the study and its results? If so, how would the conflict manifest itself? An analogous question might be: why are patients who are subjects of a drug study not given an opportunity to make suggestions in the design and development of the study?

I would welcome comments from my visitors on this aspect of conflict of interest in research. By the way, for those who want to read about another kind of conflict of interest, specifically where the investigators in a study of the cause of autism are, or appear to be, associated with activist organizations or organizations holding a particular point of view, I posted that thread on November 21, 2007. ..Maurice.

Saturday, April 26, 2008

Is There Certitude in Medicine?

Just in case there is anyone out there who believes that, with all the current technology, there is certitude in the practice of medicine, I wrote the following crude poem to express my understanding. Any comments to the contrary would be appreciated; comments in poem format might be interesting. ..Maurice.

Is There Certitude in Medicine?
By Maurice Bernstein, M.D.

As a doctor, I know
What I should know
But is that knowing enough?

As a doctor, I see
What the disease might be
But there is uncertainty

Since I wonder do I really know
That what I see, is what I know?
My patient wants to know

Do I know what I see?
What the disease might be?
Should I tell the uncertainty?

Unlike mathematics which has a unique conclusion
Unlike physics which points to a principled conclusion
Medicine may provide only a confused conclusion

Truly, medicine is still rather crude
Surely, let’s tell that there is a lack of certitude
But to the patient, would that honestly be understood?

Sunday, April 20, 2008

Patients Volunteering for Research and the “Therapeutic Misconception”

A patient volunteers to participate in a drug research project related to the patient’s own illness. The question is: why would the patient want to participate? An obvious answer might be that the patient is motivated by the possibility that the new drug will benefit recovery or management of the illness better than the drugs the patient has been prescribed. The patient may believe that he or she will, in the study, have access to the new drug. This thinking and believing really represents a misconception. It is based on the patient’s conception, which may be fostered by inadequate or misleading information about the study, that participation in a clinical trial will bring sufficient benefit to the patient’s illness to trump any of the risks of the study. This is the “therapeutic misconception”, a term coined by Roth, Appelbaum and Lidz some 25 years ago regarding patient involvement in clinical research.

What is the misconception? Well, first, the patient may consider him/herself a patient in the study. Many studies are designed not to treat the volunteer with the illness as a patient but to consider the volunteer as an experimental subject. Yes, in the United States there are governmental oversight mechanisms in clinical research attempting to protect the volunteers as human subjects, but not necessarily as patients. Being attended to as a patient demands that the professional keep only the best interests of that patient in mind in the decision making. In clinical research, by contrast, the goal is to attend to the best interests of the research study. For example, toxic effects of drugs on the volunteers are generally monitored, and if the risk to the volunteers, through statistical monitoring, becomes greater than initially anticipated, the study is stopped. One would agree with that kind of interest in the study subject. But sometimes, if the beneficial results of the drug study are strongly positive, and this is also monitored, the study may be stopped to conserve costs, perhaps to the detriment of the patient whose illness was improving with the drug under study. There, unlike in usual medical practice, benefit for the individual patient is set aside. A complicating issue in this conflict of benefits is when the patient’s own physician is also a member of the research team (a topic which I covered on this blog in February 2005 as “Wearing Two Hats: Clinician and Researcher”). How can the physician resolve the conflicting responsibilities?

Another misconception is that the patient will receive a drug in the study which could be to the patient’s benefit. The basis of a true randomized controlled clinical study is not to study the known but the unknown and undetermined. The usual first approach to study the unknown is to discover whether a drug has benefit for a patient as compared with no drug (a placebo pill). If there are already drugs available to treat the illness, then another unknown is whether the new drug’s benefit is better than the established drugs’ benefit. This can be tested by a study comparing the beneficial effects of the new drug against an established drug. In the first case, the study is performed by providing the new drug to one group of randomly selected patients and providing a second group of randomly selected patients the placebo. The second group will not be treated with an active medication. If it turns out that the new drug is ineffective, the first group will get no benefit either. The risks include the possible absence of an effective drug in the first group, and the fact that both groups may not be getting benefit from an established drug for their illness during the study.

In the second case, one randomly selected group will get the new drug and the second randomly selected group will get the established drug. The first group may not be benefited by the new drug. The second group may miss the benefit of the new drug, if that drug is found better than the established drug. Again, the risk for the first group would be not getting benefit from an established drug if the new drug is less beneficial; the second group would miss the opportunity, if the new drug is more effective, to take the new drug.

So what does this all mean? It means that volunteers must realize that a clinical study is not a treatment; it is an experiment. The volunteers should be aware that they are not patients, they are experimental subjects. They should know that they will be randomly assigned, will not know to which group they have been assigned, and may not be given a drug that is an effective treatment for their illness, or may be given one of lesser or no greater benefit than established drugs for their illness. They should know that during the study, the pill they are taking may be inert or less effective than the drug they had been taking previously. Finally, there is no guarantee for any participant that an effective drug found by the study will be accessible or available for treatment after the study is finished.

What all this also means is that all volunteer patients should personally make certain they obtain and understand, as part of the informed consent, the full nature of the study they are considering entering, the role of their own physician in the study, the risks they are undertaking and the hoped-for later benefits, and understand that their participation really should represent their own altruism: a sacrifice of themselves for others’ (and hopefully their own) later benefit. This “conceptual clarity” on the part of the patient at the outset should remove the possibility of the therapeutic misconception. ..Maurice.

Wednesday, April 16, 2008

In Healthcare, Words, Words, Words: Do We Understand Them?

I attended a lecture today given by Tina Castanares, M.D. for the Providence Healthcare System on the medical/social issues regarding access to care for Latino immigrants. Many of the immigrants know no English or understand English only poorly. She suggested that physicians and other healthcare providers who care for these immigrants should be cautioned not to take for granted that the patient will understand many words that are casually spoken to them as professionals discuss the immigrant's care and disposition.

What struck me was that, besides Latino immigrants, I am not sure that English-speaking non-immigrants, born and living all their lives in the United States, really know what these same words fully mean and represent. By the way, many of the words are readily used in other countries. I decided to create this thread to find out what words that are commonly used in healthcare mean to my U.S. and foreign English-reading visitors. Let me know which words are unknown, ill-defined, mis-defined or misunderstood, and of which particular words professionals talking with patients should be made aware that the patient may not understand them. Good communication is an essential component of healthcare. In terms of potential benefit to the patient, good communication may be as valuable as the drug prescribed as therapy.

Here are the words as Dr. Castanares listed them:

"hospice, palliative care,comfort care,futile care,risks of care,ubiquity of medical error, assisted living,adult foster home, medical social workers, discharge planning,home health,gatekeepers,primary care, care coordination,case management, withdrawal or withholding of life support, patient's rights, ethics, advance directive,POLST,healthcare representative, decisionmaking capacity,informed consent,HIPAA, privacy, confidentiality, records release, intensive care, resusitation"

Words, words, words… but do we really understand them? ..Maurice.

Addendum 8-4-2008: You may be interested in reading another thread about the new medical care words that patients must learn including a link to a quiz testing your knowledge.

Sunday, April 13, 2008

50 Years of Medical Practice: Changes, Benefits, Costs, Dilemmas

Last evening I attended the medical alumni reunion representing 50 years since we all graduated from the UCLA School of Medicine. Obviously, the first thing we all noted was that we were getting older and the graduation pictures on our badges were only a memory of how we looked and behaved then.
I am sure that most of us must also have had a moment of reflection about how the practice of medicine has changed over the intervening 50 years. I certainly did, and I decided to take more than a few moments to write some comments about this on my blog.

For medicine and medical care, it has been a changing and challenging 50 years in many ways and through many events: in the realm of basic science, in the applications of science, in the development and general use of new tools for diagnosis and treatment, and, specifically in the United States, in the way that medical care is provided (or not provided) to the public. During the past 50 years, we have gone from the development of the cardiac pacemaker to open heart surgery and heart transplantation. We have gone from vaccines for polio and mumps and the global eradication of smallpox to fighting a new pandemic, HIV/AIDS. In the United States, Medicare and Medicaid didn’t exist when I graduated in 1958 but became part of the practice of medicine when passed by Congress in 1965, and a couple of decades later the treatment of patients migrated from full control by physicians to the “employment” of physicians in what has been known in the U.S. as HMOs (health maintenance organizations). There is so much to show as a timeline regarding innovations and changes in medical practice between 1958 and 2008 that I can’t and won’t list them all here. However, for those who would like to systematically go over the changes throughout the interval between those years, here is a link to a Medical History Timeline for the clinical and some social events.

Tom Mayo, a lawyer, teacher and ethicist, has provided me with a timeline which he uses as a teaching tool for his health law class. It covers the social, legal, economic and political changes which affected healthcare, starting in 1700 and continuing on to detail what was going on in our past 50 years.

From my point of view as a physician, but also as a hospital ethics committee representative, a teacher of first- and second-year medical students and the writer of a bioethics blog, it is in the area of medical ethics and professionalism that much has changed along with the other clinical and governmental policy changes. Can you believe that when I graduated medical school there was no generally accepted concept of patient autonomy? Patient autonomy came as medical consumerism became an underlying concept within the doctor-patient relationship. When I graduated, paternalistic direction by the physician was the basis of the decisions applied to the patient. There was no such thing as a patient being allowed to decide that life-supportive treatment be withheld or discontinued. This all came later, when the potential for sustaining life without curing the condition or returning the patient to their desired quality of life was made available by the development of the ICU (intensive care unit) environment, with CPR (cardio-pulmonary resuscitation) responses, modern ventilators, hemodialysis machines, pacemakers and a host of other supportive yet not necessarily curative treatments.

Hospital ethics committees were created and began to become available to help sort out the ethical dilemmas that were beginning to appear. They began as babies with severe birth defects, ordinarily fatally ill, were found, through technology, able to be kept alive for varying periods of time; as it became unclear whether terminating life support represented killing; and as conflicts began to occur between the decisions of patients or their family surrogates and the healthcare team, such as continuing life support in an otherwise terminal cancer patient or refusing life-saving blood transfusions. With the acceptance of patient autonomy, the attention to the issue of informed consent and the introduction of advance directives led to more such conflicts, which ethics committees began to handle.

My entry into the 1950s world of medicine included my understanding of death and of the circumstances under which I would pronounce a patient dead: the presence of an unresponsive patient who was not breathing and who had no heartbeat or pulse. Later, with the advent of organ transplantation and the need for more organs to transplant into an increasingly larger population of potential recipients, the criteria for pronouncing death changed. The heart could still be beating, but if the patient met the neurologic criteria for whole brain death, including the brain stem, with no spontaneous breathing, the patient could be pronounced dead and the organs procured. This additional definition required ethics committees and others to establish ethical as well as clinical guidelines for incorporation into the protocol for selecting the donor candidate. Such ethics committee protocol writing, consultation and supervision was also needed for procurement of organs from terminal patients who wished to have their life support ended: the patient was observed without support until the heart had stopped for 5 minutes, then pronounced dead and the organs quickly procured.

I could go on and on regarding the changes in medical practice in the past 50 years. For more about the development of medical ethics in these years, read the chapter “History of Bioethics as Discipline and Discourse” by Albert R. Jonsen, page 3, in “Bioethics” by Jecker, Jonsen and Pearlman and available as a Google book reproduction with only a couple pages missing from the chapter.

There have been many changes, many benefits, much cost and, with the changes, dilemmas. Have these changes given those of us living in the United States the best in medical care? Not with the millions of people in the United States who have no medical insurance and may not be able to take advantage of all the benefits of the technical successes of this last half century. Nor is this an isolated issue for the United States alone. In fact, the reader might want to read about the upcoming April 15th 2008 Public Broadcasting System television presentation of FRONTLINE titled “Sick Around the World”. You can get further information at this link and, if you are reading this blog after that date, you should be able to view the entire program online at that same link.

I would be very much interested in reading from my older visitors who remember what medical practice and care was like in the 1950s and earlier, what changes they have noticed personally in their medical care experience as a patient in current times and what needs yet to be improved. (No names please.) ..Maurice.

Thursday, April 10, 2008

Ethics of the Donation and Reproductive Use of Frozen Embryos

I received today in the snailmail Volume 1, Number 1 of the "Embryo Connection", a publication of the National Embryo Donation Center, which received a 2-year federal grant from the United States Dept. of Health and Human Services "to provide an evidence-based assessment of embryo donation and adoption." The results of their beginning database can be found at their website:

To my recollection, I don't think we have previously discussed on this blog the ethical issues involved in the utilization of the frozen excess embryos of IVF, donated by one couple to be nurtured through pregnancy to birth by another couple, with the resultant child parented by that second couple. (IVF, in vitro fertilization, is, as you may know, the fertilization of the eggs by the sperm of a couple, allowing embryonic forms to develop; some of them may be implanted into the uterus of a woman for development and birth, while the excess embryos are stored in a frozen state for later use or to be discarded.)

Some of the ethical and legal issues I see are:

Is the use of frozen embryos analogous to organ donation by a living or deceased donor for the benefit of the recipient?

Beyond the donor couple giving permission to use the embryo, should there be monetary compensation for that use?

Should the donor couple have any say in the final outcome of the child after birth, such as the recipient couple deciding later to give the child up for adoption?

How much medical and genetic information should the receiving couple obtain about the donor couple before decision for embryo use?

Should both couples share experience with the born child?

When the child is born to the second couple, who is the legal mother? Is legal adoption of the child necessary?

Does the recipient couple have the responsibility to inform the child, at some age, that the child was originally a frozen embryo created by IVF by another couple?

Is preserving excess embryos for later implantation and birth really the right thing to do in a world with increasing population and with poverty, hunger and poor health care for some throughout the world, including the United States?

Any more that I missed? And what do you think the answers should be regarding the ones I thought of above? ..Maurice.

Sunday, April 06, 2008

The Fifth Vital Sign: Fetish or A Functional Parameter?

I would like to extend the discussion of the December 23, 2006 thread on pain and the relief of pain.

As many of my visitors may know, in the United States there has been an addition to the list of physical examination “vital signs”, which formerly included only the patient’s temperature, the respiratory rate, the heart or pulse rate and the blood pressure. Because of concern that patients were not being asked about nor monitored for pain, within the past decade pain was added as a required element of the vital signs. Patients are to be asked about the intensity of any pain at the time of the examination, expressed as a number from 0 to 10, where 0 is no pain, 1 the most minimal pain and 10 the most severe pain the patient can imagine.

It had been hoped that, with physicians and nurses alerted by the patient’s numerical expression of their current pain, the information would encourage appropriate initiation of treatment to reduce the pain or, if treatment was already in progress, evaluation of its efficacy and adjustments in therapy if it was not adequately effective, the goal being the professional duty to attempt to relieve the patient’s pain and suffering.

So, we teach our medical students, nursing students, physicians and nurses about the “fifth vital sign” and never to forget to take it. The question is how consistently this requirement has been followed by healthcare providers, and also whether the inclusion of pain in the vital signs has made any difference in the quality of pain management. I can’t find much in the way of studies on this topic beyond a 2006 publication of a study at a Veterans Affairs medical center which showed that “the routine documentation of pain levels, even with system-wide support and broad-based provider education, was ineffective in improving the quality-of-care.”

The question is whether requiring that pain be asked about and quantified as the fifth vital sign represents simply a kind of routine clinical fetish (an irrational or abnormal fixation or preoccupation) or a rational and functional parameter which can and does promote benefit for the patient.

Without naming names in your commentary, I would most like to read from my visitors whether their doctors or nurses routinely ask them whether they have any pain and, if present, instruct them to formally quantify (0-10) their pain when the other vital signs are taken, or independently at other times. And, if so, was some action taken based on the information provided, such as the initiation of pain therapy, a change of therapy or the ordering of additional diagnostic testing regarding the origin of the pain? ..Maurice.

Saturday, April 05, 2008

Getting The Eggs for Stem Cell Research: The Risks and the Ethics

The need for human eggs to be used in stem cell research has posed an ethical challenge regarding how to treat those women who allow their eggs to be removed for research purposes. Eggs have been obtained after hormonal stimulation for a number of years now as part of the in vitro fertilization process, which enables a woman to have her eggs fertilized and then implanted in her uterus to attempt a pregnancy, or to have the eggs donated to another family for reproductive purposes. In the case of current stem cell research, the eggs would be used not to produce a pregnancy but to create the stem cells and the tissues under study. Therefore, the woman’s participation would yield neither immediate personal benefit nor any immediate benefit for another family. Presumably, the woman’s participation would be motivated by altruism and, most likely, some financial benefit. The questions include whether this is sufficient to overcome the potential risks to the woman’s health that come from this donation. What are the risks?

The risks include ovarian hyperstimulation syndrome (OHSS), which is related to the hormones used to stimulate the ovary to produce more eggs. Between 0.3 and 5%, or perhaps up to 10%, of women who undergo ovarian stimulation to procure eggs experience severe OHSS, which can cause pain and occasionally leads to hospitalization, renal failure, potential future infertility, and even death. Later and long-term effects of hyperstimulation on fertility and hormone-dependent cancers are also being considered. The incidence in healthy short-term donors is felt to be less. Rare operative complications can occur from retrieval of the eggs from the ovary. Finally, there is the possibility that psychological issues may appear with regard to the screening, the procurement process and the period after procurement.

And what is the benefit? Again, there is no immediate benefit to the healthy woman. In later studies, the eggs may be obtained from women who themselves or family members are ill from a genetic abnormality or who have the abnormality without symptoms. Studies with these eggs may lead to a more immediate benefit to the donor or their family.

The question also arises whether those women who donate, who are not ill but who undergo a surgically invasive procedure, should be treated as patients or as research subjects. The confusion in defining them arises because clinical research subjects are usually treated with a drug or procedure whose outcome is unknown; their participation is experimental. The process for which the egg donors are at risk is not itself an experiment: the outcome of ovarian stimulation and egg procurement is known. Indeed, the experimental part of the research is that carried out with the procured eggs. There is also the question of whether the women should be considered analogous to living organ donors. The difference, of course, is that the procured organ is used immediately for the benefit of a specific patient, while the eggs obtained from these women are not of immediate reproductive or therapeutic benefit to anyone. Why are these differences important? Well, it has to do with the ethics of balancing the risks versus the benefits to the woman, and with whether protocols that guide researchers in the procurement of the eggs should therefore be made more specific to the egg donation process itself, and perhaps stricter than protocols for other clinical research.

In this regard, the California Institute for Regenerative Medicine (CIRM), the institution given responsibility for the oversight and distribution of California funds for stem cell research, commissioned the Institute of Medicine and the National Research Council to form the Committee on Assessing the Medical Risks of Human Oocyte [egg] Donation for Stem Cell Research in September 2006. The committee organized a workshop and prepared a workshop report on the current state of knowledge of the medical risks of human oocyte donation for stem cell research, and CIRM then produced draft guidelines for the procurement and follow-up process.

CIRM indicated the guidelines are intended to provide institutional review boards and research oversight committees with a set of criteria to evaluate clinical protocols. If you read the summary, you will get a fuller picture of what is involved in the risks to the women who donate their eggs for research and what special precautions are felt necessary to be taken.

The purpose of this thread is to report what our California state stem cell funding organization is currently considering as important in egg procurement for stem cell research. But I would also like to get the views of my visitors regarding the whole concept of egg procurement, not for an attempt to reproduce a human being, but for research into curing diseases or injury and into preventing the appearance of genetic diseases. From what you read, do you think that the benefit is worth the risks? Do you think that more women should be encouraged to participate? If so, should the motivation to participate, despite the very slight to mild risk, be one of altruism (a sacrifice for the benefit of mankind) or one of anticipated financial benefit? Would you find this motivation acceptable if most women participated for the cash? If you could participate, would you? ..Maurice.

Tuesday, April 01, 2008

Bush's War and the Smallpox Scare: Science vs Policy

The PBS production “Frontline” recently presented a two-day documentary titled “Bush’s War”, which included factual details of what led up to the war in Iraq. It was clear from the documentary that the U.S. administration was attempting to provide the American public with a rationale, and to encourage popular support, for going to war with Iraq. Claims of weapons of mass destruction and biologic warfare preparations by Iraq were attributed to supposedly valid sources but, of course, no such preparations were subsequently found and the sources were shown to be faulty.

What was not revealed in the “Frontline” story was the Bush administration’s apparent attempt to further encourage public support for war by implying that Iraq had become a potential threat to cause a fatal smallpox epidemic within the United States, and that the threat was sufficient to begin a mass smallpox vaccination program for the American public. This was against professional scientific evidence and advice given to the leader in public health matters, the U.S. Centers for Disease Control and Prevention (CDC), which operated the program. These facts were developed through an investigation by the Institute of Medicine (IOM)*, with its publication in 2005. The entire story of the IOM investigation was summarized by Matthew K. Wynia in the March/April 2006 issue of the American Journal of Bioethics. Here are some excerpts from Dr. Wynia’s article “Risk and Trust in Public Health: A Cautionary Tale”.

Indeed, according to the IOM, the vaccination program the Administration created, and the CDC endorsed, was "an unprecedented departure" from routine vaccination policy making, and there is "little to suggest that scientific and public health reasoning that typically characterized public health policies was a priority in this case".

According to the investigation, it appears that the CDC was pressured by the administration to participate in the vaccination program.

In fact, the smallpox vaccination program was created just as, "the administration was beginning to build a case for war against Iraq." Dr. Julie Gerberding, the Director of the CDC, in October 2002 drew the connection between vaccination and war planning when she explained the decision not to follow earlier advice from ACIP [Advisory Committee on Immunization Practices] by noting that no new "imminent" smallpox threat existed but, "we are in the process of considering war on our enemies. The context has changed a bit". This contextual change was apparently a very strong influence, because, the IOM reiterates, "There was no apparent public health reasoning behind the decision to offer the vaccine to the public" and indeed, to do so, was "contrary to the basic precepts of public health ethics, which focus on a fair and reasonable balance of risks and benefits among individuals and for the population as a whole."

The IOM criticizes government leaders for not providing any clear rationale, but suggests that a rationale could be inferred from statements by the President and other administration leaders. The IOM notes, for example, that the President, in explaining the vaccination program on December 13, 2002, stated that “we believe that regimes hostile to the United States may possess this dangerous virus”. The report further notes that press accounts claimed "two unnamed U.S government officials ... revealed that the federal government had information about Iraq's possession of smallpox virus" and that the federal government had "named Iraq as one of the nations suspected of possessing smallpox”.

The vaccination program never met the President’s goal of an initial 500,000 civilians vaccinated (only 38,004 were vaccinated, with 100 adverse reactions, 2 permanent disabilities and 2 deaths). After April 2003 (“mission accomplished”), with no admission by the administration of a lowered threat, the vaccination program was essentially abolished, presumably because, as Dr. Wynia writes, “public levels of fear were no longer needed to support the invasion."

What this story is all about is grossly unethical behavior by our government, as revealed by the Institute of Medicine investigation, in promoting a vaccination program not based on scientific evidence or scientific advice but to gain public support to begin a war. What was gained was public acceptance of a pre-emptive strike on Iraq. What was lost? Well, as Dr. Wynia emphasizes in his article, it could well be the future trust by the public and healthcare community in the CDC. Will the CDC present to the public the science of an issue or be pressured to follow the orders of another administration? ..Maurice.

*The Institute of Medicine serves as adviser to the nation to improve health. Established in 1970 under the charter of the National Academy of Sciences, the Institute of Medicine provides independent, objective, evidence-based advice to policymakers, health professionals, the private sector, and the public. The mission of the Institute of Medicine embraces the health of people everywhere.
Reference: IOM (Institute of Medicine). 2005. Committee on Smallpox Vaccination Program Implementation, Board on Health Promotion and Disease Prevention. A. Baciu, A. P. Anason, K. Stratton and B. Strom, eds. “The Smallpox Vaccination Program: Public Health in an Age of Terrorism.” Washington, DC: National Academies Press.