Which Ethics For Science and Public Policy?


B. Institutional decision making

But how does this relate to decision making on the organizational or social levels? Briefly, we need to consider the correct understanding of the "common good". Besides being an individual, "man" is, according to Aristotle, also a "social animal" (Aristotle, "Politica", in McKeon, 1941, p. 1129), and therefore requires relationships with "others" in order to flourish. But the mistake should not be made of taking the "common good" to be anything other than the equivalent of what is objectively and commonly good for each individual human being as human. Organizations and societies are not their own raison d'être. They should exist fundamentally to foster or further on a social level the ability of each individual human being to flourish. In other words, there is really no such thing as a big individual particular substance called a "Committee" or a "Society" which is walking down the street with its own Good. It is not a thing or an individual itself - but only a concept or a term which we use to name or designate a collection of real individual human beings who make it up. And it is in their role of fostering individuals (Aristotle, "Ethica Nicomachea", in McKeon, 1941, p. 946) that societies or institutions take their place in an even larger Circle (as will be indicated below).

In sum, viable ethical theories cannot contain principles such as autonomy, beneficence and justice which become "separated" or "split" from each other, or from the real, live, integrated whole human being - or from his or her critical relationship with society and the environment. In fact, an individual and social ethics would be more realistic and objective if based on all of these ingredients. Unfractured and whole human beings who happen to be autonomous (and who are therefore ethically responsible and accountable for their decisions and actions) would ultimately make decisions based on what they can know is objectively true or good or beneficent for themselves as human persons, for others who are human persons (whether autonomous or not), and for their commonly shared environment - i.e., according to what Aristotle called "right reason" (i.e., not simply pure, unadulterated, isolated "reason") (Aristotle, "Ethica Nicomachea", in McKeon, 1941, pp. 1035-1036; McInerny, 1982; Fagothey, 1963). And in so doing they act justly, and should be fostered by collective decision making which is also in accord with those legitimate human goals which human beings hold in common as objectively good.

Application: The Role of Ethics in Science and Public Policy

If scientists are really serious about "self-regulation" in order to prevent the spread of scientific fraud in contemporary science, such "self-regulation" will succeed only to the extent that individual scientists identify for themselves those elements of a viable individual and social ethics - an ethics which ensures the integrity of the scientist who produces the "data", as well as of those social institutions which so deeply impact the scientific enterprise itself. What, then, is the role that such an ethics could play in science and public policy?

A. The ethical scientist

Individual "ethics" - accurately understood - does not immediately determine the accuracy of scientific data, statistical probability, computer analysis, the peer-review, grant funding, government regulatory or Congressional decisions. But it does immediately affect the integrity of the "man" who is running these machines, and his decisions - i.e., the person behind the starched white lab coat, as Kornberg might have put it! If the "man" or his decisions are factually wrong - or his or her conception of "ethics" is wrong - - - well, "a small error in the beginning ......". It is often easy to loose sight of the fact that first and foremost the "man" is a human being and a moral agent; and then he or she is a scientist, peer-reviewer, granting agent, government official, journalist or Congressman. Indeed, this is precisely why individual ethical responsibility and accountability accrues all along the entire length of the Great-Chain-of-Decision-Making-Circle. Just as important to understand is that when the individual is a part of a "collective" policy decision making process, the impact of the individual's decisions and actions on others - both positive and negative - are greatly multiplied.

To the individual scientist falls the critically important job of starting off the entire chain of decision making with the correct experimental design - which requires correct information about his or her particular field of science. Thus the scientist's first ethical duty (because of the expertise possessed) is to be academically competent in the knowledge and information of his or her field. If the scientist is incompetent about the correct empirical facts of the objective physical world within his or her purview, this incompetence becomes disastrous for everyone and everything that follows. Autonomy for everyone who uses the incorrect data is immediately precluded; harm is caused instead of beneficence or justice. Therefore, in terms of the correctness of the experimental design, or the correctness of the interpretations or the accuracy of the data, the scientist him or herself bears responsibility for his or her own scientific competence. In this regard, no one has a greater ethical responsibility in the Great Chain than the individual scientist.

But deliberating about the design and analyzing the data can often be seriously affected by outside pressures as well. For example, clinical researchers have expressed concerns that several pharmaceutical companies - all of which are simultaneously supporting them on grants - pressure them to design the protocol (or interpret the data) in such a way as to make their own drugs or devices look better experimentally than they really are. Or - does this scientist or clinician own stock in the drug companies, biogenetic companies, or scientific or ethical software companies with which he or she deals - and therefore "fudge" the data for the "company's" benefit, which is thus to his or her benefit as well? Or, is a favor owed to the colleague in Neurosurgery - who chairs the university grants program, or who also owns stock?

The list of conflicts of interest affecting the scientist's decision making is endless - our point being that even the very design of the protocol - not to mention the production or the interpretation of the data - can be considerably influenced by various pressures which tantalizingly or threateningly weaken the scientist's resistance, and lead not only to incorrect, misapplied or even fraudulent data - but to the very breakdown of his or her character, his or her own integrity as a human being, his or her very commitment to objective truth. And this in turn can become habit-forming, easier to do each time the occasion arises anew. Aristotle's "reversal" kicks in - the result, bad data. No longer devoted to unraveling the ever-fascinating mysteries of nature, the scientist instead becomes a mere, groveling pawn in the hands of corrupt "others" - corrupted and used, instead of respected and admired.

Individual scientists themselves need to identify and effectively resist these corrupting outside pressures. Every scientist is a moral human agent him or herself, responsible for his or her own decisions and actions - and should be respected as such by the "others". Better for a scientist to refuse complicity than to allow him or herself to be corrupted. A scientist's integrity should come to be recognized by the scientific community and political institutions as more important than racking up publications - volumes of articles and books, millions of dollars in grants, medals, honors and accolades in boundless quantities, or even Nobel Prizes - especially if all of these are really built on a house of cards that will eventually fall, dragging him or her down with it, and causing real harm to all of those "others" along the way as well.

"Harm" is often not considered by the bench scientist. But a scientist is not an island unto him or herself; he or she does not work in a vacuum - as some would often like to believe. Particularly today, when a scientist's work is applied in myriads of ways, he or she can no longer glory in the absolute "freedom of inquiry" in which we all were educationally drenched. "Absolute Freedom of Inquiry" is as mythical as "Absolute Autonomy", "Pure Beneficence", or "Perfect Justice" are in bioethics - they are all really myths or fictions - and sometimes actually only rationalizations. There are limits or bounds to what any one can do - and that includes a scientist. If a scientist's work is going to be applied to potentially millions of innocent human beings or to our shared environment, then this cherished "vacuum" of absolute solitude evaporates - and the scientist does bear moral responsibility of harm or injury done to those "others" because of his or her scientific incompetence, or freely choosing to succumb to corrupt institutional pressures. This is particularly true of the clinical researcher, who's incompetence and moral cracks are born by his or her very vulnerable human patients whose good, or so it is espoused, is primarily the goal.

B. Ethical institutions

On the "collective" level, such "institutions" also do not immediately determine the accuracy of the information or data. But as an integral part of the Great Chain-Circle, they do immediately cause such excessive and misdirected influences and pressures that, again, these influences and pressures in turn can seriously compromise the integrity of the individual scientist and the "others" with whom he or she are affiliated, eventually causing similar harm along the entire length of the Great Chain-Circle, calling into question the integrity of the institution and indeed of the entire scientific enterprise itself. "Collectively" this is perhaps even more problematic, given the hundreds and thousands of individuals whom these institutions directly influence and pressure on a daily basis.

Rather than fostering the common good, institutions can in fact abuse as well as use the scientist - through politics, committees, or organizations; through unrealistic or overburdening regulations; or through the media. These outside pressures also affect the scientist's decision making processes. For example, inappropriate or inordinate pressures might sometimes be placed on the scientist to conform to supposedly "standard" or currently popular, politically correct, or politically motivated "scientific" frameworks of reference or explanation - even if those basic "frameworks" might be objectively false. The "politicization" of science seems to be running at an all-time high.

Similarly, unimaginative and stubborn resistance to innovation is one thing; blocking it because a committee, organization, industry or political party with power and influence does not want the old "status quo" to be demonstrated wrong is ethically reprehensible. It corrupts the scientist as both human being and scientist by bringing sometimes unbearable pressures to bear on him or her. It also corrupts the other co-committee members, who are pressured into compromising themselves as moral human agents in concert with each other. And it often corrupts the institution itself. Such pressures or influences require nothing less than the rejection or suppression of the true facts about nature and the substitution of incorrect facts in their place - which facts in turn are used as the false starting point of the Great Chain-Circle.

Or again, a scientist can be abused by oppressive and costly governmental over-regulating, which crushes creativity and unnecessarily bogs down the entire scientific enterprise itself. Nor should undue pressures influence the scientist to produce "perfectly auditable" data, which must sometimes be made to conform artificially to a "financial set" theory. How many times in the history of science have major breakthroughs occurred because of creative and innovative approaches or explanations, or because of simple errors which were honestly recorded and admitted - and which turned out to be true, even though they threw the T-test off?

Nor should scientists be abused by pressures from a media which can hype up the public with unrealistic caricatures, supposedly promising new "miracle" cures or treatments; or which misconstrues or misinterprets a scientific "controversy" - again misinforming the public, sometimes even breaking confidentiality and destroying the reputations and careers of scientists - in order to get a "great story" out first.

On a higher "collective" level, research institutions are causing numerous pressures and challenges among themselves which at times are conflicting. For example, there are great pressures on universities to cash in on their discoveries (Sugawara, 1993); yet at the same time there are serious concerns about the presence of a conflict of interest (Shamoo, 1992A, 1993A). There are pressures on the FDA to accelerate drug approvals, e.g., for AIDS patients; yet the FDA must continue to be concerned with the safety and efficacy of these drugs (Stolley and Lasky, 1992). There are pressures from universities on funding agencies to use merit in funding research projects; yet universities themselves exercise the power of pork barrel to achieve an increasing amount of funding, bypassing all peer-review systems (Marshall, 1992).

Finally, consider the effect on the integrity of decision making of the current "fad" of "consensus ethics". Probably due initially to fear of legal liability, many group or institutional decisions are now often based on what the majority agrees upon - the "consensus" of the group. This is not necessarily an ethical judgment or decision, and it is unfortunately sometimes used only as a means of diluting the moral (or legal) responsibilities of the members. And given that such institutional policies are themselves often the targets of outside pressures, their construction and promulgation by others often serve merely to provide a psychological cover of semi-anonymity for many of those individuals taking part in such collective decision making processes.

It is worth considering that "majority" or "consensus" decisions - even on the national scale - have at times been simply wrong and unethical. Most of the time consensus seems to work - but not necessarily always. We need to question constantly whether the "collective" decision of the majority actually compromises the integrity of each of the individuals in the minority - or of the institution itself. Ethically, at least, each and every member of a committee or institution taking part in such a "consensus" bears individual moral responsibility and accountability for his or her own decision.

In addition to "consensus ethics", we are now seeing a move toward "consensus science". But how can we ensure that this "consensus science" is actually scientifically correct? The scientific establishment wants "consensus science" to be the only science accepted in the courtrooms (Marshall, 1993). These same groups complain bitterly about government guidelines to be used for ensuring the integrity of scientific data. Such guidelines, they complain, will stifle new and creative science, and by definition these guidelines are not approved by "consensus". Yet often there is simply no consensus within science itself on important and critical issues.

For example, there is the major ongoing debate about the very definition of "scientific misconduct" itself between the two most important agencies of the federal government which fund science - the NSF and the NIH. NSF would like to retain in the definition of "scientific misconduct" the broad clause: "or other serious deviation from accepted practices in proposing, carrying out, or reporting results". Yet NIH has dropped this section from its definition (Zurer, 1993), thus narrowing the definition as well as tying the hands of future investigative bodies. The claim that scientists are usually objective and unbiased would seem to run contrary to NIH's insistence on such a narrow definition of "scientific misconduct" because they fear potential abuse if the definition is "too broad". Such lack of objectivity within this scientific institution calls to mind Gerald Geison's comments on Pasteur's "scientific objectivity". "Pasteur's message for contemporary science" was to puncture the "hopelessly misleading" image of science as "simply objective and unprejudiced," a myth that scientists have perpetuated in order to advance their work and attain a "privileged status" (Russell, 1993). Will the scientists' call for "self-regulation" be ultimately a myth as well?

Conclusions

One last check on reality, then. There are no easy answers to the questions about government regulation vs. self-regulation. But if the overly-burdensome government regulations required to assure the integrity of scientific data - as well as of the public policies which are often based on that data - are so objectionable to scientists, then "self-regulation" is in order. Scientists can't have it both ways. Especially in view of the very real harm which scientific fraud can and does cause, it must be dealt with clearly, firmly and unambiguously - one way or the other.

Yet "self-regulation", invoked by the scientific community to assure scientific integrity, must be based on much more than what is "consensual", "efficient", "productive", or statistically valid. It involves more than considerations of protocol designs and accurate data. Cryptically or not, the wisdom of the empirical experience of the centuries is unmistakably clear: real "self-regulation" requires a consideration of the integrity of the individual scientist as a human being and as an individual decision maker and actor. It also involves consideration of the integrity of the many institutions whose decisions and actions heavily influence and supposedly foster the scientist both as an individual and as a member of a "collective". This requires a grounding in a theoretically and practically viable personal and social ethics, itself grounded in a realistic and objectively based philosophy. The only other viable alternative, we would argue, is governmental regulation.

There is more at stake, then, than the integrity of experimental designs, the integrity of information, or the integrity of the data. More fundamentally at stake is our integrity as human beings - whether scientist, clinical researcher, quality analyst or controller, peer-reviewer, university, industry or government funder, regulator, journalist or Congressman. The Great Chain does not just "start" with "information" - but with a human being - who produces, ensures, reviews, funds, regulates, reports on or governs - or, to complete the Great Circle, who is harmed or compromised not only by the information or data, but by others at their various levels of relationships, decisions and actions. Ultimately, it is the scientific enterprise itself, we would suggest, that is at stake.


References

*** An edited version of this paper was originally presented by Dr. Irving at the Third Conference on Research Policies and Quality Assurance, Baltimore MD, May 2, 1993.

  1. McKeon, Richard (1941) The Basic Works of Aristotle. New York: Random House.
  2. Beauchamp, Tom L. and Childress, James F. (1979) Principles of Biomedical Ethics. (1st ed.) New York: Oxford University Press, pp. 7-9, 20; ibid. (1989) (3rd ed.), p. 51.
  3. Beauchamp, Tom L. and Walters, LeRoy (1978) Contemporary Issues in Bioethics. (1st ed.) Belmont, CA: Wadsworth Publishing Company, Inc., pp. 3, 51; ibid. (1982) (2nd ed.), pp. 1-2, 23; ibid. (1989) (3rd ed.), pp. 2, 12, 24, 28.
  4. Bourke, Vernon J. (1951) Ethics. New York: The Macmillan Company, p. 192.
  5. Burns, Stephen J. (1991) Auditing high technology ventures. Internal Auditor 486:56-59.
  6. Copleston, Frederick (1962) A History of Philosophy. New York: Image Books, Vols. 1-9.
  7. Cottingham, John, Stoothoff, Robert and Murdoch, Dugald (trans.) (1984) The Philosophical Writings of Descartes. (Vol. 2) Meditations on First Philosophy. New York: Press Syndicate of the University of Cambridge, (Sixth Meditation) pp. 59-60.
  8. Crombie, A.C. (1959) Medieval and Early Modern Science. New York: Doubleday Anchor Books.
  9. Edwards, Paul (1967) The Encyclopedia of Philosophy. (Vol. 1, reprint edition 1972) New York: Collier Macmillan Publishers, pp. 352-353.
  10. Englehardt, H. Tristram (1985) The Foundations of Bioethics. New York: Oxford University Press, p. 111.
  11. Fagothey, Austin (1963) Right and Reason. (3rd edition) Saint Louis, MO: The C.V. Mosby Company, pp. 92ff, 101-113, 198.
  12. Finnis, John (1982) Natural Law and Natural Rights. Oxford: Clarendon Press, pp. 85-97, 134-156.
  13. Gilson, Etienne (1963) Being and Some Philosophers. Toronto: Pontifical Institute of Mediaeval Studies.
  14. Hamel, Ron P., DuBose, Edwin R. and O'Connell, Laurence J. (1994) A Matter of Principles? Ferment in U.S. Bioethics. Valley Forge, PA: Trinity Press International.
  15. Hare, R.M. (1988) When does potentiality count? A comment on Lockwood. Bioethics 2, 3:216, 218, 219.
  16. Irving, Dianne N. (1991) Philosophical and Scientific Analysis of the Nature of the Early Human Embryo. (Doctoral dissertation, Georgetown University: Washington D.C.).
  17. Irving, Dianne N. (1992) Science, philosophy, theology and altruism: the chorismos and the zygon. in Hans May, Meinfried Streignitz, Philip Hefner (eds.) Loccomer Protokolle. Rehburg-Loccum: Evangelische Akademie Loccum.
  18. Irving, Dianne N. (1993A) Scientific and philosophical expertise: an evaluation of the arguments on personhood. Linacre Quarterly 60, 1:18-46.
  19. Irving, Dianne N. (1993B) The impact of scientific 'misinformation' on other fields: philosophy, theology, biomedical ethics, public policy. Accountability in Research 2, 4:243-272.
  20. Irving, Dianne N. (1993C) Philosophical and scientific critiques of 'autonomy-based' ethics: toward a reconstruction of the 'whole person' as the proximate ground of ethics and community. (delivered to the Third International Bioethics Institute Conference, San Francisco, CA, April 16, 1993.)
  21. Jowett, B. (1937) The Dialogues of Plato. New York: Random House.
  22. Kischer, C. Ward (1993) Human development and reconsideration of ensoulment. Linacre Quarterly 60, 1:57-63.
  23. Klubertanz, George P. (1963) Introduction to the Philosophy of Being. New York: Meredith Publishing Co.
  24. Kornberg, Arthur (1992) Science is great, but scientists are still people. Science (Editorial) 257:859.
  25. Kuhn, Thomas (1970) The Structure of Scientific Revolutions (2nd Edition). Chicago: The University of Chicago Press, p. 7.
  26. Kuhse, Helga and Singer, Peter (1986) For sometimes letting - and helping - die. Law, Medicine and Health Care 3:4; also, Kuhse and Singer (1985) Should the Baby Live? The Problem of Handicapped Infants. Oxford University Press, p. 138.
  27. Lockwood, Michael (1988) Warnock versus Powell (and Harradine): when does potentiality count? Bioethics 2, 3:187-213.
  28. Marshall, Eliot (1992) George Brown cuts into academic pork. Science 258:22.
  29. Marshall, Eliot (1993) Supreme Court to weigh science. Science 259:588-590.
  30. McCormick, Richard A., S.J. (1974) Proxy consent in the experimentation situation. Perspectives in Biology and Medicine 18:127.
  31. McInerny, Ralph (1982) Ethica Thomistica. Washington, D.C.: The Catholic University of America Press, pp. 63-77.
  32. Moore, Keith L. (1982) The Developing Human. Philadelphia: W.B. Saunders Company, p. 1.
  33. Pellegrino, Edmund D. (1993) Ethics. Journal of the American Medical Association 270, 2:202-203.
  34. Robertson, John A. (1986) Extracorporeal embryos and the abortion debate. Journal of Contemporary Health Law and Policy 2:53.
  35. Russell, Cristine (1993) Louis Pasteur and questions of fraud. Washington Post Health. February 23, 1993, p. 7.
  36. Shamoo, Adil E., and Annau, Z. (1989) Data audit: historical perspectives. in Principles of Research Data Audit. ed. A.E. Shamoo. New York: Gordon and Breach, Science Publishers, Inc., Chapter 1, pp. 1-12.
  37. Shamoo, Adil E., and Davis (1990) The need for integration of data audit into research and development operations. Principles of Research Data Audit. A.E. Shamoo (ed.). New York: Gordon and Breach Science Publishers, Inc., (Chapter 3) pp. 22-38; also in Accountability in Research 1:119-128.
  38. Shamoo, Adil E. (1991) Policies and quality assurances in the pharmaceutical industry. Accountability in Research 1:273-284.
  39. Shamoo, Adil E. (1992A) Role of conflict of interest in scientific objectivity: a case of a Nobel Prize work. Accountability in Research 2:55-75.
  40. Shamoo, Adil E. (1992B) Introductory Remarks. Accountability in Research 2:i.
  41. Shamoo, Adil E. (1993A) Role of conflict of interest in public advisory councils. in Ethical Issues in Research. D. Cheney (ed.), Frederick, Maryland: University Publishing Group Inc, (Chapter 17) pp. 159-174.
  42. Shamoo, Adil E., and Irving, Dianne N. (1993B) The PSDA and the depressed elderly: intermittent competency revisited. Journal of Clinical Ethics 4, 1:74-80.
  43. Shamoo, Adil E., and Irving, Dianne N. (1993C) Accountability in research using persons with mental illness. Accountability in Research, August (present journal volume).
  44. Singer, Peter (1981) Taking life: abortion. in Practical Ethics. London: Cambridge University Press, pp. 122-123.
  45. Stolley, Paul D. and Lasky, Tamer (1992) Shortcuts in drug evaluation. Clinical Pharmacology Therapeutics 52:1-3.
  46. Sugawara, Sandra (1993) Cashing in on medical discoveries. The Washington Post/Business. January 4, 1993, p. 1.
  47. Tannenbaum, A.S., and Cook, R.A. (1978). Report on the Mentally Infirm: Appendix to Research Involving Those Institutionalized as Mentally Infirm. 1-2.
  48. Tooley, Michael (1974) Abortion and infanticide. in Marshall Cohen et al (eds.) The Rights and Wrongs of Abortion. New Jersey: Princeton University Press, pp. 59, 64.
  49. U.S. Codes of Federal Regulation (1989) 45 CFR 46.
  50. U.S. National Academy of Science (1989) On Being A Scientist. Washington, D.C.: National Academy Press, p. 9.
  51. Veatch, Henry B. (1974) Aristotle: A Contemporary Approach. Indiana: Indiana University Press.
  52. Vlastos, Gregory (1978) Plato: A Collection of Critical Essays. Indiana: University of Notre Dame Press.
  53. Wilhelmson, Frederick (1956) Man's Knowledge of Reality. New Jersey: Prentice-Hall, Inc.
  54. Zurer, Pamela S. (1993) Divisive dispute smolders over definition of scientific misconduct. Chemical and Engineering News. April 1993, 5:23-25.
