Friday, March 10, 2006

 

An Early Draft

First Incomplete Draft

A LONG, SLOW JOURNEY FROM MORAL DOGMATISM TOWARDS “ETHICAL MINDFULNESS” AND “CRITICAL THINKING” ABOUT MORAL ISSUES

A double-assignment
for 2004-5 and 2005-6

by

Jim Byrne, MA(Ed) Dip.CP.Psych.(Rus)

Professional Doctorate in Counselling
University of Manchester
Faculty of Education (ESI)


Contents Page
Summary
Preface
Foreword
1. Introduction
2. Literature Searches
3. Methodology
4. Distinguishing and Defining Ethics
5. Distinguishing and Defining “Mindfulness”
6. My Developmental Stages and Strategies
7. Learning to Reflect Critically upon Ethical Problems in Research Activities
8. A Supplementary List of Issues
9. Dr Bond’s Ethical Guidelines
10. Summary of My Learning
11. A Brief Detour through the Free Will versus Determinism Debate
12. Notes from Arnaud (2000), with Comments and Reflections
13. Review of Manchester University Coursenotes on Ethics
14. Conclusion

Appendices
A: Mahrer’s Critique and Colleagues’ Responses (Regarding the publication and reading of research results)
B: Overview of the Educative Components of Bond (2004b) – “The Road to Ethical Mindfulness?”
C: Moral Thinking as ‘Critical Thinking’ (Using Hare 1981 as a point of departure)

###








Summary
The summary will be derived, by contraction, from the Conclusion of this paper.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Preface
In some respects, this paper could be summarized as follows: I originally thought I had a research project idea that had no substantial ethical problems inherent in it. Then I realized that I was not actually thinking in terms of ethics, but rather in terms of how to avoid ethical difficulties: because I believed I already knew what was ‘right’ and ‘wrong’, intuitively. Gradually, through much reading and thinking, I realized there were actually eight distinct problems, or potential problems, with my research project design; and this has since increased to fourteen!

However, this description omits the most important part of the story. This involved my realization of a huge training need, or self-training need, on my part. This need dawned on me when I realized that I was incapable of thinking about ethics and morality at all. Then I realized that, if I am currently incapable of thinking critically about ethical dilemmas, and it is important to do so in order to be a “safe” researcher, then it is also important that this paper be about how to train myself to think ethically. And that necessarily means making the journey from having “ingrained intuitive hunches” about moral/ethical issues, and no more, to the position of being able to reflect critically upon moral intuitions and prima facie principles, using a range of critical thinking skills adequate to the task: (Hare, 1981: 155-156). I was unclear whether this might prove to be similar to, or related to, what Bond (2004a and b) had in mind when he talked about “ethical mindfulness”; that question will be addressed later.

Foreword
“Ethics codes cannot do our questioning, thinking, feeling, and responding for us”.

Pope, K. S., & Vasquez, M. J. T. (1998). Ethics in Therapy & Counseling, Second Edition. San Francisco: Jossey-Bass.

This document began its life as Appendix B to my (now abandoned/suspended) draft project plan, which was entitled: Planning a Qualitative Research Project in Counselling and Psychotherapy: Using a Heuristic Approach. I created that draft project plan on 11th April 2005, after abandoning my original research proposal. A few days into my second draft project plan, I realized that I could not continue to work on that new plan without clarifying my “ethical foundations” to the point of some kind of conclusion. Therefore, I switched to working on what was then Appendix B of that plan (and which is now this document). However, within a couple of weeks it became clear that there was so much work to be done on thinking about ethical considerations that, at the very least, a 10,000-word paper on ethics would be required (and this has now grown to an almost 20,000-word document [18,000 words in September 2005, with about 30 percent of the work still left to do!], and potentially a thirty- or forty-thousand-word document!). Consequently, I set out to consider some of the following questions:
· What is my stance on ethics in general, and “applied” professional ethics in particular?
· How is my stance on ethics impacted by my earliest, and subsequent, moral training?
· How can I develop a reasonable level of confidence in my ability to design a research project that would be clearly ethical?
· How can I become “ethically mindful”, rather than (merely) a dogmatic responder to emotive intuitions?
· And, can I become a “critical thinker” in the realm of ethical debate?



1. Introduction
“It is necessary to give careful consideration to ethical issues at all stages of the research process: planning, implementation and dissemination of results”.
McLeod (1994: 165)

I originally wrote a draft research proposal, commencing in November 2004, filled with confidence that I had a good research idea for my second year of doctoral study. That proposal had the following draft title:

A Consideration of the Placebo Effect and the Possibility of Identifying Some ‘Active Ingredients’ in the Stories of Six Counselling and Psychotherapy Clients: A qualitative phenomenological study.

Using Interpretative Phenomenological Analysis (IPA), I was going to interview six individuals who had had a positive therapeutic outcome from counselling and therapy, to see if I could identify some “active ingredients” in their counselling/therapy that could be said to have brought about the positive outcomes reported by these participants: (Smith, 1995, 1996, 1999, 2004; Wampold, 2001a, 2001b; Wampold, Ahn and Coleman, 2001). The individual respondents would come from a variety of backgrounds, in terms of the type of counselling and therapy undertaken (e.g. psychodynamic, person-centred, cognitive behavioural, etc.), and none of them would be former clients of mine. Therefore, these “active ingredients” would not be “therapy specific”, or “specific ingredients” as described in Wampold (2001b: 14), but rather “common factors” as defined by Wampold (2001b: 22). If I could show that my six respondents had indeed identified some kinds of “common elements”, or “active ingredients”, then the idea that they could have achieved a positive therapeutic outcome entirely or largely as a result of a placebo effect – as implied or asserted by Horgan (1999) and Erwin (1997) – could be seriously challenged and brought into question. (See also Griffin and Tyrrell 2004: 223).

However, when I finished my draft proposal, I realized that it was “a bit thin” on the subject of research ethics; in particular, I had “bolted” the ethics section on to the end of the proposal, because I “had to” consider this topic. My feeling at that time was that I “really” did not need to do anything on the subject of ethics in order to make my research project safe. I considered that it was about as benign as could be, and that it was just the formal requirement to include something “credible” on ethics that was concerning me. I mentioned to Clare Lennie that I could see that my ethics section was thin, and that it was not an integral part of the proposal, and she agreed with me that further work on ethics seemed to be called for. I therefore set out to find some way of improving my ethical stance on research issues in general, and on the ethical foundations of my research proposal in particular: not because I wanted to, but because I felt I “had to” in order to avoid a mere “fudge” of the topic for the sake of appearances. Soon afterwards, once I had begun my rethink, I came to appreciate that this is actually a very important and rewarding area of study. What follows is the beginning of an enquiry designed to promote the development of some kind of new, personal, ethical foundation, based upon thinking rather than intuitions (or, rather, thinking about intuitions and prima facie principles, rather than emotively acting upon intuitions alone): (Hare, 1981: 25-26 and 87-92).

2. Literature Searches
It is now 16th June 2005, and I realize that, although I have been reading around the subject of ethics in general, and ethics in counselling and therapy in particular, since sometime in March/April 2005, I still have not conducted a detailed or systematic literature search. Therefore, in this section, I will review my ad hoc literature search process, and outline a more complete literature review process, and some of the results of that process. I think this is important to reassure the reader that I have “considered all factors”.

In the first draft of my original proposal, in November/December 2004, I outlined the literature survey strategy linked to my research project idea, and I have done a lot of work on that strategy, including acquiring a broad range of documents which have been read to varying degrees, and filed away for later consultation and note-taking. Indeed, I estimate that I have more than two years’ work ahead of me, just to read and digest all of the sources I have so far identified. However, all of this searching occurred before I had begun to identify the need for a paper specifically on ethics and morality.

When I began to identify the need for a study of ethics, I realized that I did not have a viable grasp of any system of ethical thinking, and thus I wanted to develop an “advance organizer” of the field of moral philosophy: (Ausubel, 1968). Therefore, I began a search for overviews of moral philosophy on my own bookshelves, and in local and regional bookshops, and found and read several of these, and began to form a sense of the shape of the territory that I need to study. I also ran a number of internet searches, and found a few sources, such as Pope (1999), Arnaud (2000), Bond (2004) – of which I was already aware – plus Newman, Walker, and Gelfand (1999); Newman, Willard, Sinclair, & Kaloupek (2001), and a few other sources. I was also aware of West (2002) and McLeod (1994). These sources gave me a few ideas on the background to research ethics in professional activities in the modern world, plus an insight into some previously identified problems of ethics in research activities.

I then decided to review the history of western philosophy, to identify major moral theorists: and this led me to Hamlyn (1990); Kenny (1994); Billington (1993); Blackburn (2003); and others. These studies helped me to contextualize my readings of Bond (2004a, 2004b); McLeod (1994); de Bono (2005); Harrison-Barbet (1990); Chaffee (1997); Magee (2001); Mair (1999); and Kant (1790/1987). I also found other titles, from various sources, including bookshop searches (Waterstone’s in Leeds and Manchester), and library searches (in Halifax and Hebden Bridge), both in terms of physical books and books listed in bibliographies. These included: Sieber (1992); Sieber and DuBois (2004); Mauthner, Birch, Jessop and Miller (2002); Oliver (2003); and Soltis (1990). Some of these I have read, and some are awaiting reading.

I also consulted Pope (2005) on how to run a literature search on the internet, and I have implemented that strategy. When I checked at Amazon.com, I found 8757 titles on Research + Ethics + Counselling. I reorganized these titles in date order, from most recent downwards. This did not help, as a great number of titles were clearly irrelevant to my need. Therefore, I refined the search by adding the words “theory or guidance”. This reduced the field to 147 titles. And from this field, by reviewing about 25 percent of the titles, sorted in order of relevance, I was able to find: Lee-Treweek and Linkogle (2000); Jenkins (2002); and McLeod (1999). I then re-sorted the results in date order, most recent first. This produced: Emanuel et al (2004); Eckstein (2003); and Robson (2002). Most of these remain for me to read.

Later I ran a search on SwetsWise for psychology articles on “research ethics” from 2000 to 2005, and found 11 articles. I reviewed the abstract of each article. Only Vollmann (2000) proved particularly interesting; although I found a lead to Forsyth (1980) from Kleiser, Sivadas, Kellaris and Dahlstrom (2003). This seems like a potentially very fruitful source.

I also reviewed the journal entitled Ethics and Behavior – at http://www.leaonline.com/loi/eb;jsessionid=ndOn0gRwRLdg - and identified the following two articles: Smythe and Murray (2000), and Hadjistavropoulos and Smythe (2001). I have yet to order these titles.

Then I went to Google and conducted a further search. This search yielded the following sources: Forsyth (1993); Forsyth and Nye (1990); Forsyth, Nye and Kelley (1988); Forsyth (1985); Forsyth and Scott (1984); Forsyth and Pope (1984); Forsyth and Berger (1982); Forsyth (1981); Forsyth (1980); Schlenker and Forsyth (1977). I intend to consult each of these papers, and have already ordered most of them from the Inter-Library Loan Service (ILLS): (September 2005).

I also searched the main catalogue of the John Rylands Library, and found no response to my full search string. When I eliminated the <+counselling> delimiter, I got 66 titles. Searching through this list, I found much that was irrelevant to my enquiry; and eventually found:

Author: Homan, Roger
Title: The ethics of social research
Publisher: London: Longman, 1991

I decided to check out the reading lists mentioned in connection with Homan (1991), and found:

Author: Lee, Raymond M.
Title: Doing research on sensitive topics
Publisher: London: Sage, 1993

I then returned to the original list of 66 titles, and found:

Author: Kitchener, Karen S.
Title: Foundations of ethical practice, research, and teaching in psychology
Publisher: Mahwah, N.J.: Lawrence Erlbaum Associates, 2000

There were no other titles on this list that seemed relevant to my study. And again, most of these titles have been listed for ordering from the ILLS.

Finally, I consulted Hart (1998: 34-35) and confirmed that my literature review seems sufficiently comprehensive, relative to the flow charts of the literature search constructed by Hart. Indeed, so comprehensive is my literature review that I now have to admit that it will be impossible to read all of the identified sources in the academic year 2004-5. Thus I am obliged to rethink my research plan (beginning on 29th June), and may have to diversify into a study of ethical problems encountered by counsellors/therapists, in dealing with clients or in dealing with research issues, instead of my original idea for a study of what clients get from counselling and therapy, and what they believe to be the ‘active ingredients’ in their counselling/therapy experience.

Key learning points related to designing and implementing my research project:
B1: If I am not diligent in my literature search activities, I could overlook an ethically important book or paper, which could impair my ethical judgements about my research project design.
B2: There is so much literature to review that it may not be possible to get around to doing my original research with six participants!


3. Methodology
Today, 18th June 2005, I realized that it is important to clarify the methodology that I am using in this investigation of research ethics. This will necessarily include ontology, epistemology and the methods of enquiry: (Hart 1998: 51, and Bond 2004b: 15). It seems to me that the inclusion of this section marks a decisive stage in the development of my capacity to think about ethics.

Ontology: What is the ontological status of “moral behaviour”? This question is impacted by ‘modern ethics’, which analyzes the language used in ‘classical’ ethical theories, and classifies theories accordingly – into objectivist, subjectivist, emotivist, motivist, deontological and consequentialist. (Popkin and Stroll, 1993: 50-60). When we conclude that an ethical theory is “objectivist”, we note that the author is claiming that their theory represents expressions about “objective reality” rather than mere preferences, emotions or value judgements: (Hare 1981: 78-80). However, can we really ‘see’ a moral act? For example, when we see somebody paying for his or her purchases at the supermarket checkout, is that a moral act? Or when we see one of our neighbours kissing his wife in public? Or when we see somebody driving on the correct side of the road? What do all of these ‘moral acts’ have in common other than conformity with a set of rules? Or even an unconscious failure to deviate! Would it not be more accurate, then, to say that morality seems to be (conscious or unconscious) conformity with a set of rules? That would not make the visible behaviour “objectively moral” – since the soldiers, police officers and civil servants who shipped Jews to death camps in Germany and Poland were acting in conformity with a set of rules laid down by their government and society, but this was far from being moral in the minds of most reasonable people today. (“Reasonable people”, or “people who are widely considered to be reasonable”, or “people whose reasons are widely accepted”? Or people whom I consider to be reasonable!)

Kant’s transcendental idealism is one example of an “objectivist” theory of morality, in which it is claimed that the a priori categories of the mind make objective judgements about morality possible. (Hamlyn, 1990: 218-219). At the same time, Kant’s theory is also a consequentialist theory, in that he is frequently obliged to appeal to the social consequences (e.g. of lying, or breaking promises) to make the case for moral rectitude: (Blackburn, 2003: 100-107). However, objectivist theories of morality are subject to the same critique as are all claims to certain knowledge: that is, that all knowledge seems to be based on experience (according to the British Empiricists, especially Locke and Hume: [Aune 1995: 48]). Moreover, all experience is subjective – that is to say, all experience is processed by ‘a being’ (in our case, a human person) who is dependent upon their fallible senses, shaped by their experience, for the collection of sense data.

Kant tied himself in knots of self-contradiction in trying to resolve this conundrum. Firstly, in the Critique of Pure Reason, in an effort to save ‘God’ and ‘a priori knowledge’ from the British empiricists, he argued that God resides in the ‘noumenal realm’ (of things-in-themselves, which precede conception/cognition), beyond empirical investigation; and that experience is preceded by the ‘categories of the mind’, which comprise, as a minimum: space, time and causality. Thus knowledge is a product, not of pure experience, but of the experiencing-‘understanding’ of a person. (‘Understanding’ in Kant’s writings is roughly what we call ‘perception’ in modern cognitive psychology). In this way he saved a priori knowledge, in the form of the ‘categories’ or the ‘manifold of concepts’, while abandoning ‘pure reason’.

However, this produced a problem for religious morality, and Kant was castigated by his king for this, and so Kant was forced back to the ‘drawing board’, in the Critique of Practical Reason, in which he tried to maintain the absolute nature of human morality, by making it ‘unreasonable’ for people to behave in a clearly immoral manner; suggesting that people will accept the dictates of ‘reason’ – (it’s back again, in a fairly ‘pure’ form!) – in deciding on their actions in relation to others. However, it is now widely agreed that Kant failed in his attempt to establish an absolute basis for human morality, and so we are left with the (socially agreed) relative merits of this or that course of action, in a specific set of circumstances. And the definition of moral behaviour becomes conformity with a set of values, normally expressing one of about three or four different approaches to morality – the virtue-ethics approach of Aristotle; the deontological (or ‘duty-based’) approach of prescriptive religions, such as Christianity/Judaism/Islam, etc.; the deontological/consequentialist approach of Kant’s ‘universalization rule’; and the utilitarian approach of Bentham and Mill: (Kenny 1994: 107-192). It is probably no longer tenable, in most educational establishments, to use arguments about ‘God’s Law’, and so we are reduced to three types of argument for moral action.

My own view concerning what “exists”, which some would call “moral reality”, is that humans seem to have some innate tendencies and attitudes (or sentiments). The Christian tradition emphasized what it saw as innate sinfulness, and overlooked what seems to me to be innate goodness, which is emphasized by some humanists (Rogers, XXXX; Nelson-Jones, XXXX) and by Buddhists (Source, XXXX). My view is that both of these innate tendencies seem to exist in humans, and these I characterise as the negative and positive sides of “the human heart”, which is, of course, a metaphor rather than a biological description. This “heart” is what Hume called “passion” or “sentiment”: (Blackburn 2003: 95). (It is notoriously difficult to establish the innateness of tendencies such as altruism/empathy, but some progress was achieved by So-and-So, XXX [cited in one of my Rusland assignments]). Whether the negative or the positive side of the heart becomes prominent then depends enormously upon the kind of family an individual is born into, and this can also vary from situation to situation. The wider society also has an influence on the individual’s moral development. (And I have argued elsewhere that the individual also has some [small!] degree of free will in deciding which aspect of the “heart” to manifest; though I would not want to over-emphasize this element). Therefore, there seems to be a tendency towards morality (based on empathy and identification with fellow humans) inborn in humans, but there is also the potential for immorality (based on selfishness, anger, greed, and so on): (Keown 2005: 12-13).

Epistemology: Following on logically from my ontology above, it would seem that knowledge about “the good” comes to us first of all in the form of sentiments, of affection and sympathy. However, this lives right alongside self-centredness and anger/rage, or whatever we could call those attitudes in young children. And this is how morality originally got into human society – by being inborn right alongside immorality. Therefore, what Plato called “the good” probably began as “my good”, and was then socialized to “our good”. And Plato’s project was to represent the “our good” of the Athenian ruling class as somehow, objectively, “the good”, which is why his project proved so complex, difficult, and ultimately impossible!

My guess is that, given the need for order in the earliest human families and extended social groups, rules were constructed based upon the sentiments of sympathy and compassion, and the attitudes of fairness and revenge, to cite the most obvious motivational candidates. In addition, of course, politics with a small ‘p’ soon came on the scene, and morality then became not just about relations between individuals, but also about how to justify patent economic unfairness among social classes as a morally acceptable state of affairs. This, as indicated above, seems to have been Plato’s project in the Republic.

In the light of Hume’s Law – that you cannot derive an ought from an is – what is the legitimate source of ethical thinking, and how can ethical statements be justified as “knowledge”? Aristotle mentions that humans are political animals: (Blackburn 2003). This is an important part of understanding the shape of moral prescriptions and rules. He also established the doctrine of the Golden Mean, which says that we are behaving ethically (virtuously) when we steer a moderate course between two opposing extremes, each of which is a vice. The value of this system of Nicomachean ethics does not lie in its accuracy or usefulness, since it can easily be faulted by pointing to the fact that many vices and virtues have no such mean. For example, there is no middle way between lying and telling the truth. You either lie or tell the truth. The value of Aristotle’s observations about vices and virtues is that humans seem to be born, and become socially shaped, with propensities to behave in ways which show up as both vices and virtues. We could say that, originally, the vices are those behaviours which are seriously anti-social, and which threaten the peace, or even the survival, of the clan or tribe. In time, as more hierarchical power structures emerge, there is a tension between those behaviours which are anti-social per se, and those which are merely, or more obviously, opposed to the interests of the power elite: whether patriarchs, patricians, slave owners, feudal lords, or capitalists (or the bureaucrats in state-communist societies [as was!]).

Thus we can say that there is also a problem with Hume’s Law, in that, although we cannot LOGICALLY derive an ought from an is, we can certainly make such a derivation both emotionally, and later on, politically! For example, if I am a cave-dwelling tribe member, and I perceive that other clan members’ defecating inside of our cave is harmful to me and my offspring, I may emotionally decide that this is BAD behaviour, and that the appropriate sanction is to strike the next perpetrator over the head with a large bone! And whereas there is no formal logical connection between these two observations, there is an emotional logic which is undeniable for me. If other members of the tribe begin to copy my behaviour, then we have established a moral rule – ‘Thou shalt not crap in our cave’ – which may cause our clan to thrive, and to begin to take over the hillside, as other families die out from crap-borne diseases. Thus, the emotional logic of my action proves to have survival value, and my ‘morality’ begins to spread, while less pragmatically healthy moralities die out with their disease-ridden carriers. Moreover, everybody who leaves my cave system to defecate without any awareness of the virtue of their actions – protecting the health of the tribe members – is nevertheless acting morally in the normal sense of following the social rules. (Kant would not agree with this claim, but the Utilitarians and the Aristotelians would).

We could then say that moral principles are driven by human emotions, informal social agreements, and later by larger, more authoritarian human political systems. The ‘knowledge’ thus generated is probably related to the survival instincts of a large social group, and is constructed on the basis of the precise history of that group, including the emergence of power hierarchies, infighting, resistance, revolts, conformity, and so on. Thus moral theories, and moral knowledges, are not identical to scientific or philosophical knowledges, but are apparently special forms of emotional/political knowledges. They are therefore more open to variation from place to place, and from time to time, than even scientific or philosophical knowledges, which are nevertheless themselves also variable over time, and to a lesser extent over space (e.g. distinct paradigms in different universities). Moreover, all forms of knowledge, as Xenophanes pointed out in the 6th century BCE, are constructed by us: (Magee 2001: 16).

Methods of enquiry: This year’s study is primarily a philosophical enquiry, largely based upon my activities in “thinking on paper” and reading widely, combined with active reflection and the use of heuristic devices to structure my thinking process. It also involves a review of my personal history in terms of my moral education and subsequent learning experiences; and the review of a considerable body of literature from diverse sources. Since ethics seems to be both a critical-thinking process (superficially) and a cognitive-emotive-political-kinaesthetic process of appraisal/evaluation (at a deeper level), based on innate sentiments and social conditioning, this work also involves the development of my own personal commitments in terms of political and emotional position taking.

4. Distinguishing and Defining Ethics
"What is bad to you, do not do to others. That is the entire law; all the restis commentary."--Rabbi Hillel (30 B.C. - A.D. 10)

Rabbi Hillel’s statement is a good example of how to over-simplify ethics and morality to the point of misconception and uselessness. For example, if I am a person who is attracted to sado-masochistic behaviour, and I apply Rabbi Hillel’s rule, then I will expect you to accept my slaps and blows joyously. This seems to place a huge question mark over the so-called Golden Rule, which has been around since the time of Confucius. Let us therefore go back to the drawing board, and define ethics and morality from scratch.

Ethics is a branch of moral philosophy, according to McLeod (1994), and I learned my moral philosophy from the Catholic Church, from birth to the age of eighteen years. Thereafter, I learned further refinements from British secular culture, to the age of 22 years; then Marxist materialism from 22 to 29. And later, from my thirties onwards, I learned from Zen Buddhism, ‘est’[1], and Rational Emotive Behaviour Therapy, and a number of other philosophical and educational experiences. In all these cases, I placed my trust in the ethical thinking of others, and agreed to follow their prescriptions, and avoid their proscriptions. Although this is clearly “duty ethics”, or “deontological ethics” – (Billington 1993: 112-115; Blackburn 2003: 52) – it also notably seems to involve what we normally think of as ‘framing’ and ‘interpretation’ of social behaviours, in which “Interpretation of the simplest utterances depends therefore on acts of framing of which readers [and ‘hearers’ – JWB] are generally unaware, and are often beyond their control” (MacLachlan and Reid, 1994: 4). Therefore, the hallmark of my moral experience may have been “unconscious compliance”. I have normally assumed that I am “a moral person”, and that it is unnecessary for me to “actively think” about questions of ethics, since I can “feel” what is right and wrong: (the “emotivist” approach). And I think I probably mistook those feelings for thoughts. This seems to me to be at the opposite extreme from Tim Bond’s (2000, 2004a, 2004b) concept of “ethical mindfulness”. Ethical mindfulness, as far as I can determine (June 2005), seems to be a form of “reflective practice” with “moral and immoral behaviours” as its focus. Thus I’d better learn how to do that! (Dr Bond’s ethical code is based on trust building and trustworthiness, and takes account of risk management, relationship building, and research integrity. The trust building and trustworthiness aspects of this approach seem to be closely related to the ‘ethical living’ approach of person-centred theory. [Nelson-Jones, 2001: 96]). I will take a closer look at Bond (2004a, 2004b) near the end of this paper, and use his summary of key ethical issues to bring some (additional) structure to my reflections.

However, I had also better register the fact that, in the past, my ethical or moral codes were also the ethical or moral discourses of the communities in which I lived; or the communities to which I attached myself. “That is to say that the real” – (including ‘real ethical behaviour’: JWB) – “is actually the intersubjective meaning arrived at by a community in semiosis”: (Cobley and Jansz, 1999: 161; emphasis added, JWB). In other words, I was locked into a shared meaning with significant others, based on ideological-linguistic labels. Or as Vivien Burr would render it: “A discourse” – which must necessarily include ethical discourses – “refers to a set of meanings, metaphors, representations, images, stories, statements and so on that in some way together produce a particular version of events”. (Burr, 2003: 64). And versions of events (or “is’s”), and attitudes towards those events (including “ought’s”), are handed down by families and communities to their own members. Thus I’d better acknowledge that my ‘old’ approach to ethical beliefs and ethical engagement is explicable in terms of social constructionist thinking and the post-modern critique; and it would have been very difficult for me to have behaved other than the way I actually behaved!

The new situation is that I am an ethical rule-follower who (believes he) is being exhorted to engage in ethical mindfulness. This (it seems) inevitably stretches my entire ethical wiring to breaking point (in June 2005; though the wiring looks rather less damaged in September 2005, having gone back and re-read Bond 2004a and b)!

Three points are worth making at this point:
1. All moral systems seem to be constructed by humans from their innate predispositions and their social relations of power and care.
2. Being ethically mindful might amount to no more or less than asking myself, not what Rabbi Hillel asked – What are my preferences? – but rather: What are the preferences of those others with whom I associate? And how can I help others, and avoid harming them?
3. I am a small fish in a huge pond; and others are already in the position of specifying much of the ethical territory that I must traverse, plus most of the rules that I will be obliged to obey, whether or not I am “mindful”. This is what is implied by the existence of systems of professional ethics and imposed codes.

Key learning points related to designing and implementing my research project:
A1: How can I check that my experience has produced moral attitudes and ethical beliefs in me that are appropriate to counselling and therapy research?
A2: What are the present social influences that are driving my moral behaviour and ethical thinking, and are they appropriate and acceptable to the task of conducting counselling and therapy research?
A3: In what ways do I need to change?

5. Distinguishing and Defining “Mindfulness”
This is where we focus in on the apparent fact that “mindfulness” is easy to SAY, and not so easy to DO. What is it, what does it consist of, and how can we train ourselves to be “ethically mindful”?
Here is a brief definition of “mindful” from Chambers Dictionary:
“Mindful, adj., bearing in mind; taking thought or care; attentive; observant … mindfulness, n.” Chambers (2003: 943).

So mindful is an adjective, as in the phrase “a mindful attitude”, or “a mindful person”; while mindfulness is the noun form, as in “She exuded mindfulness”. But let us consider the sense in which Dr Bond uses it (Bond 2004a and b). When he talks about “ethical mindfulness”, is he asking us to:
1. Bear ethics in mind?
2. Take some thought about ethics?
3. Take care when considering ethical issues?
4. Be attentive to ethical problems or threats? And/Or:
5. Be observant in your work, and look out for ethical issues?

If one or all of these questions can be answered in the affirmative, then the next two questions for me have to be as follows:
1. How are counsellors and therapists (and researchers) to achieve this kind of mindfulness?
2. What concepts, principles, theories, skills or techniques are they to employ?

For answers, I have no alternative but to turn to Bond (2004a and b) to see what I may find.

Firstly, I eliminated Bond (2004a) on the grounds that it is a general introduction to the new Ethical guidelines for researching counselling and psychotherapy, which he has written for the British Association for Counselling and Psychotherapy. I then focused in on the guidelines themselves (Bond 2004b: 10-19).

I read Bond (2004b) thoroughly, several times, and identified those principles which are relevant to my research project (which is most, but not all, of those in the guidelines. For example, section 3.5 is not relevant to my work, as I do not propose to work with any of my own clients). I then classified each identified principle as either an educative or an imperative principle. I did this because of Dr Bond’s introductory comments about these guidelines, as follows:
“This guidance is designed to address two levels of ethical practice. Baseline issues that concern protecting public safety are indicated by the use of the imperatives, such as ‘must’ or ‘should’. Matters which are directed towards informing the practice of ethically conscientious practitioners working above a minimal level of safety are presented in a more educative voice in order to promote ethical mindfulness in all aspects of research”. (Bond 2004b: 10) (Emphasis added – JWB).

I wanted to do two things here:
1. To find out the approximate balance between the imperative and the educative components of these new guidelines; and:
2. To be able to separate out the “educative principles” to see what kind of “mindfulness” might be promoted by their contemplation.

What I found was that the parts of these guidelines that prove relevant to my study (which is most of them) contain twenty imperative principles and twenty-one educative principles (approximately). Thus, the balance between the directive and the educative is about fifty-fifty. This of course depends upon how the principles are classified, and by whom. Perhaps I used an ‘overly-generous’ system of classification, in that I did not (normally) include those principles containing the words Require, Required, or Responsibility in the imperative category: (except very occasionally, when other words combined with them to make them seem imperative). Thus others reading this set of guidelines might conclude that it is primarily directive and imperative, and only minimally permissive in its educative function.

Next I want to separate out the educative principles (as defined by me, using my liberal classification), and to reflect upon them, to determine what kind of “mindfulness” is being promoted here, and to see if I can achieve that kind of “mindfulness” in my research work. I will do that in Appendix ‘B’, and summarize the results here:

>>>>>>>>>>>>>>>>>>>>>>>>>>.

6. My Developmental Stages and Strategies
“It is reasonable to conclude that any research design will generate ethical dilemmas. The implication is not that research should be abandoned, but that every effort should be made to examine the effect that a study will have on all of the people who participate in it (Firth et al., 1986). Although the research literature contains many examples of solutions to ethical problems, it would be valuable if more researchers were to write about their experiences of these issues, thus allowing a wider awareness of good practice”.
McLeod (1994: 168-169)

Initially I did not consider that my research project idea contained any significant ethical dilemmas or difficulties, and so I was clearly not thinking along the lines indicated by McLeod (1994) above. For example, when I submitted my draft proposal to Clare Lennie on 17th March 2005, I clearly saw my proposal as quite benign, as follows:
“There is minimal potential harm that could occur to my research participants. The only major issues that I can identify are (i) breaches of confidentiality, which I have strategies to prevent, and (ii) re-stimulation of pre-therapy states of consciousness. This latter factor is less of a problem with clients who have had positive outcomes of therapy, it seems to me, and I will offer appropriate support and/or counselling to anybody experiencing negative responses to any of the questions in this schedule”.

However, at the same time, I was clear that this was not “good enough”, and said as much to Clare Lennie: inviting the advice to take ethics more seriously, and to try to find a more “solid foundation” of ethics to stand upon. Even if my ethical position was then inadequate, I was at least committed to going back to the beginning and trying to formulate some kind of ethical foundation – if Clare agreed with my perception - which would help me to think about the ethics of my research proposal in a credible and convincing, and therefore viable manner.

Since I clearly do not know how to think ethically, I had better define a development programme for myself, and put myself through it, so that I am up to the challenge of developing an ethically defensible research proposal. Here are some of the stages:

1. Review my personal history of moral education and development: Summarize what I learned.

2. Review the history of ethics in the philosophical traditions of West and East: Summarize what I learned from that section.

3. Identify one principal author to use as a point of departure in setting up my own approach to ethical thinking, and describe my approach: Review Hare 1981, and describe my agreements and disagreements with the author. (This is my main statement on how to think ethically. It involves asking questions…and what else?>>>>>). I will do this work in Appendix C, and summarize my results here:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

4. Develop a ‘personal training plan’ for my ethical training needs: This is John Chaffee’s critical thinking programme:

a. Make morality a priority;
b. Develop a clear moral code;
c. Adopt the Ethics of Justice;
d. Adopt the Ethics of Care;
e. Universalize my moral choices;
f. Treat people as ends, not means;
g. Accept responsibility for my moral choices;
h. Seek to promote human happiness;
i. Develop an informed moral intuition;
j. Discover the ‘natural law’ of human nature;
k. Choose to be a moral person.

5. Apply my learning to drawing up a brief and clear code of ethics for my research project: Use the little book on The Ethics of Research (Gregory 2003) as the main structure here, and add in whatever seems useful.

Assuming I now have a much clearer idea of how to think ethically, and how to be ethically mindful, I can begin to apply these ideas to the eight ethical dilemmas which I have identified within my research proposal; and I will present them next.

7. Learning to Reflect Critically upon Ethical Problems in Research Activities
Now that I have done a considerable amount of work on the background to ethical thinking, (16th June 2005), I want to review the notes I have taken (in my pocket notebook) on the growing number (eight) of ethical problems that I have identified with my research project idea. (This activity resulted in the presentation, below, of eight ethical problems with my research proposal). And, today, 11th October 2005, that number has increased to fourteen. My intention is to list all of these problems, and then to reflect upon them, and make some kind of analytical comment upon each of them.

1. If I were to draw my research participants’ attention to the ‘possibility’ of a ‘placebo effect’ having made some contribution to their therapeutic outcome, this could have the iatrogenic effect of causing them to lose faith in their therapy, which could cost them whatever placebo gain they made.
This was the first potential ethical problem that I identified concerning my research idea. Initially, my response was an apparent rationalization. It went like this: “Since I do not believe that the placebo effect is particularly significant in counselling and therapy, there is unlikely to be any real loss of therapeutic benefit. However, if I withhold the information about the nature of my study, participants will not know it involves some consideration of the placebo effect. Therefore, that knowledge cannot have an iatrogenic effect on them. I will inform them of the placebo consideration after the data analysis has been shared with them, during which they will almost certainly gain reinforcement of their belief in counselling and therapy. After that point, learning that some authorities believe that the benefits of counselling and therapy could be primarily placebo effects is unlikely to have any significant negative effect on the participants”. I no longer believe that this rationalization is acceptable, and I will discuss my new thinking later.
Later (Date?) Critical Reflection: >>>>>

2. It may be unethical to waste the time of research participants if we have good reason to believe that our research report or article(s) will never be read by our colleagues or peers.
In February 2005, I found and read Mair (1999), who quoted Mahrer (1998) as follows: “Psychotherapists can be proud of their researchers because the researchers do not really bother practitioners much. Every so often researchers grumble that practitioners pay little or no attention to their findings, and they are right. In general, the practice of psychotherapy is essentially undisturbed by whatever researchers do. The practice of psychotherapy would probably be insignificantly different if researchers had instead spent their time playing volleyball. Precious little, if anything, of what practitioners actually do was given to them by researchers…. It seems that researchers have an earned status of being essentially irrelevant, except that they help make the field look respectable”. (pp. 2-3). This seems to suggest that I could be wasting my time running my research study, since it could be that practitioners are not particularly interested in the topic I intend to investigate. And, whereas it is up to me if I wish to ‘waste my time’ while earning a doctoral degree, it seems to me to be unethical to waste the time of my potential research participants, who would be asked to give up a good deal of their time and energy to produce the data which I would then analyze.
My solution to this ethical problem was to decide to consult colleagues about whether or not they would read my paper, and whether or not they considered it ethical to run my study, given Mahrer’s argument. (See Annex 1 to this document). >>>Place conclusion of Annex 1 here once completed<<<. And include DATE!!!

3. It may be somewhat questionable practice to restimulate memories of pre-therapy unhappiness in research participants:
It occurred to me, when designing my research schedule of questions, that in responding to questions which ask the participant to recall what they felt like before counselling/therapy, some participants might experience mild restimulation of negative feelings (or emotional suffering). My idea of how to deal with such an eventuality, with any of my six respondents, was to offer counselling support myself, on the spot. However, Brannen and Collard (1982) say that “the researcher should avoid being drawn into the role of counsellor”: (McLeod, 1994: 83). This discussion is presented by McLeod in the context of his awareness of “the potential to re-stimulate painful memories or unresolved emotional conflicts”: (McLeod, 1994: 82). Brannen and Collard’s solution was to have an observer present with the interviewer, and they found that “the presence of the observer enabled (the respondents) to maintain a necessary sense of balance and containment in this highly emotional situation”: (McLeod, 1994: 83). However, I do not think it would be feasible for me to arrange an observer for my research project.
New critical reflection (Date???): >>>>>

4. It is almost indisputable that it would be unethical to restimulate memories of trauma in victims of trauma.
At the beginning of May 2005, I found a document (Pope, 1999) on the ethics of restimulating old memories of trauma, on the internet. This is it: Invited Guest Editorial: ‘The Ethics of Research Involving Memories of Trauma’, by Kenneth S. Pope (1999):
‘Informed consent and other ethical principles have long been fundamental to research involving human participants, as emphasized for example by the Nuremberg Code (2).

‘In the early 1990s, the topic became fiercely polarized. … Advocates (of ‘false memory syndrome’) claimed that they had discovered a new clinical syndrome whose diagnostic criteria were explicitly analogized to borderline personality disorder, and an epidemic of False Memory Syndrome was announced. …

‘Newman, Walker, and Gelfand's study … reminds us that not only are ethical considerations crucial to research in this area, but that they may have additional implications. Informed consent, for example, rests on participants' ability to understand adequately the effects a research project may have on them. But as this study … found, some participants who have experienced major trauma may not realistically anticipate the distress such research can cause: "Not surprisingly, individuals with histories of maltreatment, especially sexual maltreatment, were more likely to underestimate their level of upset from research participation on both questionnaires and interviews."
‘Studies that invite participants to remember trauma may, as this study suggests, themselves be traumatic. As Primo Levi … wrote: "the memory of a trauma suffered or inflicted is itself traumatic because recalling it is painful or at least disturbing" (p. 24).’

It is my belief that these considerations are relevant to my research proposal, in that my interviews could restimulate memories of trauma in my research participants. Therefore, it seems to me to be unethical to allow any such traumatized individuals to take part in my research project.
Later critical reflection (Date???): >>>>>>

5. McLeod (1994) draws attention to the fact that qualitative research can normally be expected to be “painful and distressing”. This results from the depth of relationship established between researcher and research respondent in qualitative research, and the encouragement given by the researcher for the respondent “…to write or talk openly and honestly about themselves”: (McLeod, 1994: 167).
I do not know how to deal with this problem! The only clue I have comes from Willig (2001: 79) where she cites Stake’s (1994) observation that: “Qualitative researchers are guests in the private spaces of the world. Their manners should be good and their code of ethics strict”. McLeod (1994) mentions that qualitative interviews call for detailed accounts of the experiences of clients. “What emerges for the informant may be painful and distressing, and it is the responsibility of the researcher to do everything possible to ensure the well-being of the person (e.g. Brannen and Collard, 1982)”: (McLeod, 1994: 167).
However, I have my doubts about proceeding with this kind of enquiry, and I am therefore (29th June) under pressure to focus on ethics instead.
Later critical reflection (Date???): >>>>>

6. There is an apparent contradiction, or an acute tension, between the apparent need to engage in some small degree of deception in connection with the purpose of some research projects, on the one hand, and the stressed importance of being trustworthy, and building a relationship of trust with the participant, on the other.
I had planned to withhold the information that I was conducting my research to try to invalidate the placebo explanation for positive outcomes in counselling and therapy, for two reasons. Firstly, to prevent harm to the participants’ placebo gains (if any); and secondly, to avoid creating the idea in the participants’ minds that I want them to look for ‘active ingredients’ other than a placebo type effect. However, Willig (2001: 18) specifies the rule that there should be “No deception”. “Deception of participants should be avoided altogether”. This seems quite clear, and very strong. She does specify one exception: “The only justification for deception is when there is no other way to answer the research question and the potential benefit of the research far exceeds any risk to the participants”. On the other hand, BPS (2000: 6) offered a different exception clause: “…the central concern (is) the reaction of participants when deception (is) revealed. If this (leads) to discomfort, anger or objections from the participants then the deception was inappropriate”. BPS (2000) also made a distinction “…between withholding some of the details of the hypothesis under test and deliberately falsely informing the participants of the purpose of the research”. I will have to consider how these ideas from Willig (2001) and BPS (2000) apply to my original research design, and then what to do about it.
Later critical reflection (Date???): >>>>>

7. If I don’t withhold some information about my goals in relation to invalidating the placebo explanation for positive outcomes in counselling and therapy, then the ‘demand characteristics’ of my study will probably induce participants to ‘find’ (or create) ‘active ingredients’. (This would affect not only the validity of my results, but also the integrity of my study).
Given that I believe the above statement (or believed it on 16th June!), it seems to me that it would be unethical for me to proceed with this study, without withholding some information, since I know that giving the participants all of the information about the research design will enable them to ‘help me’ to produce the result that I ‘want’!
Later critical reflection (Date???): >>>>>

8. If I believe that people engage in ‘just so’ stories about their lives, including their experiences of counselling and therapy, and that their stories are not some kind of ‘objective truth’, but I report their stories as if they represent anything other than their ‘psychological reality’, then I am acting unethically.
If this statement is true, then what is the value of my proposed research study? I am going to spend a couple of years collecting just-so stories, based on fallible memories, and to shape them into a just-so story of my own, and present it as a picture of the ‘psychological reality’ of six individuals who have been through counselling and therapy. What is the value of this kind of activity? I just do not know.

9. An additional problem, when we get beyond the interviews with research participants, concerns the (psychological) effects on participants and third parties of what we write in our reports. “Smythe and Murray (2000) discussed one pervasive risk factor in narrative research in terms of ‘the subtle and often unforeseeable consequences of writing about people’s lives’ (p. 321)”, according to Hadjistavropoulos and Smythe (2001: 165).
One of the main problems mentioned by these authors is that, because of the perceived authority of the researcher/writer as an interpreter of the experiences of the participants, some participants may have negative emotional experiences arising out of introjecting the interpretations and judgements of the researcher/writer. They cite a study by Josselson (1996) in which one of Josselson’s participants felt that one of her interpretations of him had “…put a definite constraint on his subsequent interactions with others”. Josselson went on to describe this outcome as the participant “…feeling that I had invaded him – and in a way that he had not felt about his analysis. I had captured him in a category that he could either explore or escape from, but it was a cell that bounded how he could think about himself”: (Hadjistavropoulos and Smythe, 2001: 165).
Hadjistavropoulos and Smythe (2001) commented further: “If the participant in this instance, himself a psychologist and psychoanalyst, was affected to this extent by the researcher’s authorized interpretations of his experience, the effect on the average person would likely be much greater. One should not underestimate the subtle risk factors entailed by this form of authoritative reinterpretation of experience”. (Page 165).
Later critical reflection (Date???): >>>>>

10. Hadjistavropoulos and Smythe (2001: 165) draw attention to the danger of individual participants, or third parties, in written narratives, being identified: “Qualitative research frequently involves the use of narratives derived from very small numbers of participants. Such narratives often appear verbatim in publicly accessible documents such as theses, dissertations, and published articles. … the increased risks of qualitative research are largely the result of the public availability of such narratives and of the possibility of identifying the participants as well as third parties mentioned in the narratives”.

11. Risks to unidentified depressed clients: Hadjistavropoulos and Smythe (2001: 167) point out that: “It is not difficult to imagine a situation in which a research participant who (possibly unbeknownst to the researcher) suffers from major depression and participates in a study that involves a narrative about his or her life’s experiences. In the course of the interview, a negative social experience could exacerbate the negative mood state of the already depressed participant. The relatively unstructured nature of qualitative research would allow for the possibility of topics of discussion that induce negative mood states. … Qualitative interview procedures in particular have the potential to re-evoke painful memories or emotional conflicts for participants both during the interview and afterwards …”

12. There are other similar but distinct problems, which are also addressed by Hadjistavropoulos and Smythe. For example: “…it is possible to encounter individuals suffering from undiagnosed conditions who engage in a variety of defensive strategies (e.g. avoidance coping) or are unaware that they have a serious psychiatric disorder. This unawareness is likely to exist for both the researcher (interviewer) and the participant. Extensive questioning could undermine these defensive strategies and cause decompensation by leaving the person to bear the full emotional brunt of the underlying maladjustment”. (Page 167).

13. There is also a problem with ethical controls, according to Hadjistavropoulos and Smythe (2001: 167-168), as follows: “In qualitative research, …, the exact nature of the questions often is undetermined, and risk may be more significant. If the qualitative researcher is intent to reach a certain depth of information or disclosure without specifying the questions to be asked, no independent ethical body can verify whether the questions posed are reasonable or appropriate”.

14. The final point made by Hadjistavropoulos and Smythe (2001: 168-170) concerns more general damage to third parties, which might have legal, and other, consequences. “Narrative discussions about third parties could be damaging to such persons who never consented to participate in the research. In terms of realizing an element of risk, the participants should be aware that such third parties may not only become angry at the participants, but also could sue them for defamation and perhaps even violation of privacy. Ethical and legal issues become intertwined in this example”. These authors go on to elaborate this point as follows: “In the small sample of theses we examined, for example, there were frequent references to family and marital difficulties; money and employment problems; dating and relationship issues; health problems; drug and alcohol abuse; verbal, physical and sexual abuse; unethical and illegal activities; professional misconduct; and more”. … “…the use of pseudonyms offers insufficient protection when the narrative information provided by participants is so detailed that the third parties involved would easily recognize themselves in the transcripts (or would be recognizable to others who know them)”. … “The main ethical problem stems from the fact that these individuals did not give consent to have stories about them circulated in this way (i.e., in a public document), and research participants generally are not authorized to give consent on behalf of the other individuals they may happen to mention in their narratives”.


Key learning points related to designing and implementing my research project:
F1:


8. A Supplementary List of Issues
Review McLeod (1994), and add in any issues that are not covered by Bond (2004b). Then review each issue, and critically reflect upon how this might affect my research.

At the end of this section, it is probably time to use Tim Bond’s structure to take another look at what is involved in thinking ethically about my work.

9. Dr Bond’s Mindfulness Structure
This will theoretically work like a kind of framework…

10. Summary of My Learning
What have I learned about ethical mindfulness, and how has it changed my approach to ethics in connection with my research project…?




XXX (To be incorporated, in summary form, above^^^) European Philosophy and Self Reflexivity
“The philosophy of the early modern period is no longer clerical. Medieval philosophers were, without exception, bishops, priests, monks, or friars…”
Kenny (1994: 109)

In European thought, after the fall of Athens and prior to Locke, Hume and Kant – that is to say, in the medieval and early modern periods – the nature of ethical thinking and moral behaviour was unproblematical. For the vast majority of people, the nature of ‘good behaviour’ was dictated by the Catholic Church, and later its Protestant offshoots. In the Catholic tradition, there were the Ten Commandments of God, and the Six Commandments of the Church. Taken together, these commandments were seen as largely sufficient to inform the ‘devout laity’, or ‘the masses’, of how to behave in any situation. There was a further refinement, the Catechism – a locally produced set of guidelines, written in the form of questions and answers, and issued by Archbishops at a metropolitan or regional level. This is the first major recognizable form of deontological ethics in the medieval world: meaning “the ethics of duty”, based, in this case, on the will of God. (Spade 1994: 55-106). In addition, of course, Christian morality and ethical theory continued into the early modern period, where it co-existed with newly emerging secular forms of ethical thinking.

After Kant had responded to Wolff and Hume, there was the delusion that an “absolute morality” remained, despite the absence of a verifiable and consultable God to specify what it consisted of. (Kenny, 1994: 190-192). Kant tried to maintain that the “categories” of the human mind “implied” or “demanded” moral behaviour – the Categorical Imperative. His interpretations and re-presentations of the Golden Rule – which had been around since the times of Confucius and Plato – were adopted by society, including the Catholic and Protestant Churches, but were ultimately based upon the exigencies of keeping a society functioning well. Therefore, although it was a deontological philosophy, it used many consequentialist arguments in its own defence. These were not consequentialist in the Utilitarian sense of promoting human happiness and avoiding human misery, but rather in terms of holding society together – the old Platonic obsession (which, of course, has its value!). However, Kant’s “universalizability” rule proved difficult to apply in many situations. (Kant’s universalizability, or universalization, rule was this: “Do not promote any maxim that you would not at the same time will to be a universal rule”). For example, a person with sadistic or masochistic tendencies might will the universalization of various forms of pain that most other individuals would strongly oppose. Thus, Kant failed to provide any easy answers to the question: “What ought we to do to be ‘good’?”

The vernacular form of the Golden Rule was this: “As you would that others should do to you, do you also to them in like manner”. Or, in other words, treat people as you would wish to be treated. Without constant recourse to the needs of a form of “social justice” and “fairness”, it was impossible for Kant to maintain that morality could outlive the inability of clerics, scientifically or rationally, to prove the existence of God. (Kant preserved God as a “subject of faith”, lacking all forms of empirical evidence as support: Kenny, 1994: 187-190). And Kant’s deontology could not solve problems of choosing between moral options: such as lying to a murderer in order to save the life of the murderer’s intended victim. The days of absolute morality were close to being over, and relativism was about to be instated (or perhaps re-instated) in European thought. (Sections of ancient Greek society had had a good working knowledge of the relativity of morality, especially from the Sophists, sceptics and cynics of ancient Athens: Clark, S.R.L. [2004]). However, cultural relativity of morality does not necessarily imply that human beings do not have some form of innate sense of morality, or moral sentiments (as well as immorality and amorality!). It only implies that, if such a sense of innate morality (and immorality and amorality) exists, then it takes many different social forms, depending upon the culture into which the individual is born, and the balance of power relations therein.

Kant had been critical of the Utilitarian philosophy of Bentham and Mill, which maintained that the highest principle of morality was the promotion of the greatest happiness of the greatest number. (Kenny, 1994: 343-345; and Hamlyn, 1990: 279). Bentham and Mill believed that it was possible to construct a calculus of human happiness, based on the effects of particular social or political actions, and this seemed to provide a basis for human morality for a time, before it became obvious that it was often very difficult to do the sums. For example, a surgeon who killed a hospital visitor in order to distribute the visitor’s organs among eight patients would be acting morally according to strict act utilitarianism: since he or she would be making eight people happy while simultaneously making only one person very unhappy! But most of us would not consider it moral for a surgeon to kill a visitor to hospital, no matter how many transplant patients were thus made deliriously happy.
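To make the difficulty with the sums explicit, here is a minimal, purely illustrative act-utilitarian tally of the surgeon example. The utility scores are hypothetical assumptions of my own, for illustration only, and are not drawn from Bentham or Mill: suppose each of the eight transplant patients gains 10 units of happiness, and the killed visitor loses 50 units. Then:

\[
\Delta U_{\text{total}} = 8 \times (+10) + 1 \times (-50) = +30 > 0
\]

On this naive tally the killing comes out as a net gain in happiness, which is precisely why the example is so troubling: the sums can be made to come out ‘right’ while the act remains, to almost everyone, plainly wrong.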

And, of course, the stimulus for the later work of Hegel, Nietzsche, Heidegger, Foucault, Lyotard, and Derrida came out of Kant’s work – especially the Critique of Pure Reason. These later philosophies, culminating in postmodernism, make it very difficult to promote any form of durable ethics, since all the ‘meta-narratives’ are apparently challenged and deconstructed, and there seem to be no permanent principles upon which to predicate ethical behaviour. All we seem to be left with are historically specific discourses about ethical options, compromised by being entangled with issues of power, and struggles for supremacy.

The other major school of ethical philosophy is “virtue ethics” which comes from Aristotle’s Nicomachean Ethics. Instead of duty, Aristotle starts from self-interest. He assumes that all humans desire to be happy, and that our best chance to be happy is to live in harmony with our society. Thus, he developed a kind of Greek form of the Buddhist Middle Way, in which he advocated the Golden Mean between extremes of behaviour. This, he assumed, would produce a balanced character who would always behave in a morally desirable way, practicing moderation in all things. Indeed, the Golden Mean can be a helpful guide – but it unfortunately breaks down all too frequently. What is the middle way between a lie and the truth? How does the Golden Mean help me to think ethically about my research project? (Magee 2001: 38-39).
Key learning points related to designing and implementing my research project:
C1: Which discourse(s) am I going to be guided by in constructing my research project?
C2: What are my ‘duties’ in relation to my research participants?
C3: What virtues am I going to model in my research work?
C4: How can I promote the greatest good of the greatest number of people in the design of my research project?

XXX (To be incorporated, in summarized form, into a section above) First-hand Experience
“And when He had passed out of the city He saw seated by the roadside a young man who was weeping.
“And he went towards him and touched the long locks of his hair and said to him, ‘Why are you weeping?’
“And the young man looked up and recognized Him and made answer, ‘But I was dead once and you raised me from the dead. What else should I do but weep?’”
Oscar Wilde (1948: 844)

The quote above nicely demonstrates the relativity of perceptions, including moral and ethical perceptions, in a somewhat shocking way.

I grew up in the Catholic tradition, in Southern Ireland, and was a devout follower of the Ten Commandments and the guidance given in the Catechism issued by the Archbishop of Dublin. I thus imbibed a particular set of moral guidelines, and absolute prescriptions and proscriptions, from the first beginnings of my encounter with my mother. These guidelines and rules might, however, have appealed to an innate sense of justice, of care, and of wanting to be trustworthy. If they did, then it seems important also to posit a “dark side of the human heart”, which each of us seems to share, at least potentially, and which promotes injustice, carelessness, laziness, greed, ambition, selfishness, ignorance and sloth.

Interestingly, when I was growing up in this devout Catholic environment, I was unaware that Christianity, in its earliest days, had been a virtual doomsday cult. (Explain: from Encyclopaedia Britannica). >>>>>

The Reformation failed in southern Ireland, leaving a seriously divided population on the island as a whole: the Orange-Green divide. In England, it produced a Protestant revision of earlier Catholic dogma from the sixteenth century onwards, and the suppression of the Catholics; and English culture became increasingly secularized during the twentieth century, especially from the end of the Second World War onwards. Thus, when I moved to England, on my own, in 1964, I walked into a very different morality from the one in which I had grown up. This was allegedly a more ‘free-thinking’ kind of culture, in which individuals were thought of as ‘making their own decisions’. Indeed, having a ‘personal relationship with God’, rather than one mediated by priests, was one of the main principles held to distinguish Protestantism from Catholicism. However, in effect, ‘British subjects’ were strongly shaped by Church of England schools, state television and radio stations, the ‘independent’ Radio Caroline (with the ‘radical DJs’ Tony Blackburn and John Peel!), and the ‘bourgeois’ press. One of the effects of my immigration was subtly to change my morality, especially in relation to the acceptability of sex before marriage; but for most practical purposes, I continued to be dominated by the Ten Commandments, even after I stopped going to Mass, which was almost immediately.

In 1969, I got involved in Marxist politics, and rejected ‘God’ and Christianity, and all forms of religion, on the grounds that they were “the opium of the people”. However, in effect, Marxist morality – founded on the belief that capitalism is exploitation of the working class, and that it is not only acceptable to overthrow the capitalist class in a violent revolution, but in fact ‘historically necessary’ – became overlaid upon, and grafted to, the Catholic morality of my childhood conditioning. That is to say, I “read” Marx as a set of “duties” that I was obliged to carry out in order to be a “good person”. I was not particularly driven by Utilitarianism, in that I did not often think of the happiness that would reign once the revolution had succeeded. That was a more remote thought than my sense of moral duty to do “the right thing”. (A similar sense of duty seems to have fuelled fascism/Nazism, and so it seems that “reasoning about duty” does not always produce conventionally good results! Or, as Hume claimed, the sole function of reason is to serve the passions!)

In 1973, I attended an Ethics class at Ruskin College, Oxford. The tutor, Paul Brodetsky, was quite radical – probably a left-wing social democrat with libertarian tendencies – and the ‘star pupil’ was a socialist-Marxist called Malcolm Saunders. Brodetsky and Saunders would debate fine points of the ethics of bourgeois culture in an animated manner, while the rest of the group looked on in silent … silence. I could not participate in this class – despite having read much of Marx and Engels – because I did not consider that morality was ‘up for discussion’. (I could not bring myself to read Hobbes, Locke, Rousseau, Bentham or Mill – since they were clearly “bourgeois ideologues”, and therefore “degenerate”!) There was nothing left to discuss about ‘degenerate bourgeois morals’! Morality was something that I ‘felt’; and I just ‘knew’ when I was ‘right’ and others were ‘wrong’! (Oh my ‘god’!) And I carried out my duty silently and obediently, as I had once walked to the altar rail to say my penance prayers. (Of course I had a big beard now, and knew all the phrases to utter at meetings!)

In 1976, I had a “transformative” experience whilst attending a meeting in the Communist Party bookshop in Oxford. With the benefit of hindsight, I think it was a kind of “transcendental” experience, in which time stood still, and the quality of my consciousness changed from one of great anger and political agitation to one of great peace and detachment! (Wow. I resigned from the party that very day and never went back!)

In 1977-79, I lived and worked in Bangladesh and Thailand, and was influenced to some extent by Islam (especially Sufism) and Buddhism. I also had American friends who had been through the Esalen Institute at Big Sur, in California, and who operated from person-centred and Gestalt principles of personal functioning; and I was influenced by their beliefs and attitudes. Thus my Catholic/Marxist conditioning was being more and more refined by exposure to other cultures. However, all these processes of change and refinement were unconscious!

In 1980, I met my current wife, who introduced me to Zen Buddhism, and I began to read Zen texts and to practice meditation. My morality was again subtly shifted, though I did not undertake a conscious reflection process upon the question: ‘What are my moral guidelines, and how do I make ethical decisions?’ (Indeed, in 1980, I tried to become self-reflective, and to write a document entitled ‘On First Looking Inside Myself’, which failed totally. I couldn’t find anything ‘inside’! So much for unaided introspection). I was so accustomed to holding moral/ethical guidelines implicitly, tacitly, and thus unconsciously, that I was not in the habit of thinking about them. And indeed, I am not sure – as I set out to write this document – that I know how to think about them today!
Key learning points related to designing and implementing my research project:
D1: I am a product of my experience – including my self-reflective experiences. Which experiences am I going to bring to the design of my research project?
D2: Do any of my personal experiences support the view that I am trustworthy? And how can I become optimally trustworthy in preparation for my research project?

11. A Brief Detour through the Free Will versus Determinism Debate
One of the major considerations about ethical behaviour is that “an ought implies a can”. If a person cannot do something, then they cannot be held responsible for not having done it. This raises the question of just how free humans are to make choices about moral issues. Am I free to make moral decisions about my research project, or am I constrained by my past experience, social environmental pressures, or other factors? I have already written 20,000 words on the subject of free will versus determinism, in December 2002, as an assignment in counselling psychology and psychotherapy. The question was this:
To what extent would you agree with the proposition that your life is totally determined by human evolutionary development and the environment?

My answer was as follows:

Preface
This may seem like a simple question, but it has caused me an enormous amount of thinking, research and creative intellectual effort. With the benefit of hindsight, now that I have answered the question fully, I can summarize my answer in just a few statements, as follows: I do not agree to any extent that my life is totally determined by evolutionary and environmental factors. I do however think that I have areas of freedom of expression and action, which vary in degree from situation to situation. The most eliminative materialistic biological scientist cannot (apparently) ever eliminate all potential “hiding places” for free will within the human psyche.[i] On the other hand, philosophy can never – it seems – demonstrate conclusively that free will exists.[ii] There are certain things I can control and certain things I can’t.[iii] I am, ultimately, constrained by my genotype and my environment – but I (the thinking, feeling, acting Jim) am substantially determined by myself, my will, my goals, my intellect, and my self-education, especially my self-training in creative thinking.[iv]

I would not want to exaggerate the degree of freedom I experience, but neither would I want to exaggerate the extent to which I am constrained. My degree of freedom varies enormously: from very little, in those situations where I am controlled by a simple reflex action (such as sneezing); to very considerable in areas of self-expression, such as thinking, communicating, artwork, and so on. Most of my other behaviours fall on a continuum somewhere between those two extremes. Given a choice between having to exaggerate the degree to which I am determined and the degree to which I am free, I would prefer to over-estimate my scope for freedom rather than to under-estimate it and thus miss an opportunity for self-actualization! This is based on my own creative insight into one of the paradoxes in Epictetus’s view that “there are certain things we can control and certain things we can’t”. In my experience, it often (or even normally) proves to be the case that we cannot tell in advance which things we will prove to be able to control and which will prove to be beyond our control! Therefore, it makes sense for me to assume that I am free to take any particular action for which I do not have substantial counter-indicative evidence. And I normally persist, persist, persist with any chosen course of action until such time as I have collected irrefutable evidence that this particular avenue or road is closed to me, at this point in time. (Of course, I do not persist in the sense of bashing my head against a brick wall. I persist intelligently; by moving along the wall, pushing and probing, in an effort to find a doorway to my goals).

Clearly this answer implies that I do believe that I have sufficient freedom of will and action to find some way to think my way through the ethical issues involved in my research proposal. There may be some constraints, based upon my previous social experience; and some of my emotional barriers may even be innate; but both of these seem far from insurmountable. Thus I can realistically seek to identify those things that I preferably ought to do in relation to my research proposal, since I believe I can discipline myself to do those things that I agree I preferably ought to do.
Key learning points related to designing and implementing my research project:
E1: I am apparently substantially free to re-think my approach to ethics, to my research design, and to implementation in the form of interviewing skills and writing up processes.
E2: The degree to which I am constrained to repeat the patterns of the past is not available to me as a clear insight. Thus I may make mistakes, and miss opportunities. Too bad! I am a fallible, error-prone human.
E3: However, some evidence (on 29th June) of my ability to think freely and to decide for myself is provided by the fact that I am now seriously considering dropping my original research proposal, and developing a proposal to study ethical dilemmas and difficulties faced by counsellors and therapists. I am considering making this change despite the fact that I am emotionally much more attracted to my original idea.


XXXX. Earlier Writing on Ethics (Incorporate this in Section 8, above, in summary form, extracting anything that duplicates Bond 2004b)
“We are now in a position to understand the true importance of the study of student misconceptions. … (These misconceptions) provide one clue as to how (their) schemata differ from the expert’s. We cannot effect scientific understanding without grasping the depth and tenacity of the student’s pre-existing knowledge”.
Carey (1986: 113)

Carey (1986) is here suggesting that to teach science to students it is important to grasp the strength of the hold that their pre-scientific distinctions have over them. Moreover, by analogy, in my own case, it is important to grasp the hold that my previous approach to ethics and morality has had, and still to some extent has, over my ability to “hear” new distinctions in this domain. Carey (1986) emphasizes that not only in the gap between teacher and student, but also in the gaps between different schools of thought among experts, the same words have subtly or even radically different meanings. Even such “simple” and “clear” words as “light”, “force”, “moment”, and so on, have different meanings in the Newtonian, Einsteinian and Quantum Mechanics schools of thought. How much more scope there must be for misunderstanding and confusion in the domain of languaging about ethics and human psychological processes, given the large number of schools of thought in counselling, psychotherapy, psychology and philosophy.

Having suggested earlier that I may not know how to think about ethical issues, I do now want to emphasize that I have some experience of writing about ethics in counselling and therapy. In May 2001, I wrote a PhD proposal for the University of Leeds, which was not accepted because of staff cuts in the Psychology Department, and project-cost considerations. In that proposal I included an appendix on ethics, (Appendix E), which I am now proposing to review to see how much of that work is applicable to my present study, and what changes I now want to make. (See below). My results will be reported later in this paper. The main thing I want to note about that appendix is that it was very broadly based upon the work of McLeod (1994) – plus some ideas from McLeod (2000) and Maglennon (1993) - which, for me, represented a kind of Catechism of Ethical Commandments! (I have recently – 26th April 2005 - gone back and re-read McLeod 1994, and noted the following points: McLeod’s ethics chapter was “tacked on at the end”! Only the concluding chapter followed it. The ethics chapter [along with the introduction] was one of the briefest chapters. It was just eleven pages in length, compared with an arithmetic mean of twenty pages for all other chapters, excluding the introduction. Of course, McLeod does mention ethical issues on page 36, and he emphasizes that it is important to keep ethical issues under review “(t)hroughout the planning stage of a study”. In addition to these points, I also noted a few extra points that I had not noted in 2001, and these will be dealt with below).

I was recently reminded of the concept of “ethical mindfulness” – Bond, 2000, 2004a, 2004b – which is very different from Ethical Commandments. For Bond, “Being ethical not only involves wrestling with the issues in a systematic and considered way but also taking personal ownership of the responsibility for acting ethically”. (Bond, 2000: 243; and University of Manchester, 2004: 7). This is a decisive move away from following – consciously or unconsciously – the prescriptions of others. As mentioned earlier, Bond (2004a and 2004b) identified trust building and trustworthiness as the core elements of ethical mindfulness, together with adequate attention to risk management, relationship building, and research integrity.

In order to pursue the concept of ethical mindfulness, I wanted to do some background reading, to try to find some concepts that I could begin to manipulate for myself. On the internet, I found a review of Thomson (1999) and Weston (1997), reviewed by Arnaud (2000). I will recapitulate my learning from that experience next.
Key learning points related to designing and implementing my research project:
G1:

12. Notes from Arnaud (2000), with Comments and Reflections
“Conscience would … appear to belong to the ‘reason’ rather than to ‘feeling’, though we may sometimes refer to conscience as the ‘moral sense’. … …we should probably be on safer ground if we thought of conscience as the capacity of a rational individual to recognize and be sensitive to the demands of the ‘moral imperative’ as articulated by a ‘consensus’. It might indeed be regarded as akin to a skill … which can be improved and refined through constant training and practice, so that it can become almost instinctive. This kind of approach … rules out an extreme individualistic view of conscience, in that the ‘inner voice’ has to be grounded in a shared and therefore ‘objective’ moral standard”.
Harrison-Barbet (1990: 186)

Harrison-Barbet (1990) is here helping to clarify that reasoning is a social process, in which the individual is a social being, locked into a consensus about what is the case and what ought to be done.

Arnaud (2000) reviews the following two books:
Thomson, A. (1999) Critical Reasoning in Ethics: a practical introduction. London: Routledge. [7]
Weston, A. (1997) A Practical Companion to Ethics. Oxford University Press. [7]
(The marker [7] indicates that these two books will be amongst the seventh order, each of eight books and/or papers, which I shall request through the national interlibrary loan service).

Arnaud explains that Thomson’s book is in the tradition of ‘critical thinking’. Her assumption is that ethical thinking implies the ability to think logically, and to arrive at improved ethical decisions by distinguishing clearly between good and bad reasons for actions. Thus, “good moral thinking involves the analysis of ethical arguments”. (Arnaud, 2000: 2). Knowing the form that an argument takes, and how an argument differs from other forms of discourse; and being able to spot an argument about ethical or moral content: these are the starting points of the task of critical reasoning about ethics. (This is the discipline of meta-ethics). The distinguishing feature of an ethical argument, according to Thomson, is the presence of absolute ‘shoulds’ and ‘oughts’. Arnaud points out that this differs from ethical theories based on the idea of ‘social contracts’: as in applied professional ethics. My own preference is for ethics based upon social agreements, laws and contracts, rather than the notion that there is an Absolute Truth available to humans concerning what is moral or ethical. Of course, in many if not most situations, it is quite clear what the moral course of action had better be, for the socially responsible individual: in terms of avoiding harm to others; preserving life; and so on. (Moreover, Bond, 2004a and 2004b, pages 9 and 10, is in no doubt that it is advisable to think of ethical issues of avoiding harm as ‘shoulds’ and ‘musts’). However, these obvious conclusions are normally based upon emotional agreements about common interests. Thus, they are social contracts, and not absolute imperatives of the Christian or Kantian varieties. In addition, while they do involve ‘critical thinking’, this is not an unemotive form of ‘thinking’, completely separate from the human emotional attitudes that prompt empathy, care and fairness.

Kant considered that “…morality is founded on reason” – Magee (2001: 137) – but for Thomson (1999) it is “…more art than science”. (Arnaud, 2000: 2). It is about recognizing and appraising arguments about moral options and ethical implications. Central to such arguments are “moral concepts”, such as: right, wrong, good, bad, should, ought, etc. Unfortunately for Thomson, Hume’s Law states that you can’t derive an “ought” from an “is”. That is to say, moral statements are not statements of facts, but rather of values, attitudes and preferences. Hume considered that the sole function of reason was to “serve the passions”. My own view is that there is a dialectical relationship between “reason” and “emotion”, and that they influence each other reciprocally. And I believe that it is out of this relationship of reason-emotion, as socially conditioned, and carrying certain innate tendencies, that moral preferences and ethical commitments emerge.

According to Thomson there are also moral principles, such as: “respect life”, “tell the truth”, “cause no harm”, “operate from integrity”, “help others”, and so on. However, there does not seem to be any absolute basis upon which these principles could be grounded. Kant agreed that appeals to God were not viable, since God is a subject of faith, and not an objective feature of the empirical world. It seems that the most obvious basis for such moral principles is a social agreement aimed at producing a stable society.
The two moral principles I have derived from Chaffee (1998) are: (i) the “ethic of Justice” and (ii) the “ethic of Care”. Hume considered that justice and rights are social creations, and include contracting and property rights, all of which are covered by the ethics of duty, or deontology. (Blackburn, 2003: 78). However, these are still just preferences and wishes, and not “absolute imperatives” or inevitabilities!

From Bond (2004b) I learned of the “ethic of Trust”. In trying to understand where this fits into the system of ethics described in the literature I have consulted, it seemed to me that this is a form of “virtue ethics”, which originated with Aristotle, and is based on the idea that people are motivated to pursue their own self-interest; and one such interest is behaving ethically, if they believe that doing so will provide them with a happier life (or eudaimonia). The ethic of trust could also be related to Kant’s universalization rule, which implies that, if I do not behave in a trustworthy manner, and this behaviour becomes universalized, then there will be nobody I can trust. Without trust, all social intercourse breaks down. Therefore, if I wish to be trusted enough to be included in a social group, then I’d better demonstrate that I am trustworthy myself. Furthermore, in the consequentialist philosophy of utilitarianism, it makes sense that if people enjoy the idea that significant others can be trusted, then my behaving in a trustworthy manner brings them pleasure, derived perhaps from a sense of security, predictability and a degree of certainty in an otherwise uncertain world.

Later I will consider the implications of these three principles – the ethic of justice, the ethic of care, and the ethic of trust – as well as Chaffee’s and Bond’s other guidelines, for my research project.

For the moment I want to argue that, if morality is founded on reason, as Kant claimed, then reason is founded on the human passions, as Hume claimed. Or, as Ellis (1962, 1994) more accurately claimed, thinking and feeling are not completely separate functions; they significantly overlap, and are in many respects essentially the same thing. Thus morality is a function of “perfinking” – or perceiving, feeling and thinking – all of which are shaped by innate tendencies under the influences of environmental factors, like family values, schooling, training, ‘good influences’, ‘bad influences’, and so on. Indeed, Harrison-Barbet (1990: 146) recognized this when he noted that Hume’s ethics “…is … lacking in any recognition that ethical decisions might perhaps be better regarded as being made by ‘the whole (wo)man’, an integrated personality in whom reason and the feelings are intimately fused”.

Weston’s Perspective
Unlike Thomson, Weston (1997) looks at ‘wider issues’ that stop individuals from behaving in an ‘ethically mindful’ manner. (Arnaud, 2000: 4). He is critical of dogmatism, rationalization and relativism, which he sees as fake forms of ethical mindfulness. I can agree with the first of these. Statements based on religious or political dogma are clearly not expressions of any kind of mindfulness; and certainly not ethical mindfulness.

Relativism, on the other hand, seems to me to be problematical in this connection. If I think about the relative merits of trying to save the life of one child as I flee from a burning school, or of staying inside to search for others, thus risking my own life and that of the one child that I have definitely found, I am clearly being ethically mindful. And even if I decide to settle for saving myself and that one child, rather than staying inside with the potential of saving several others, I am no less clearly engaging in ethical mindfulness, in that I am consciously weighing up the two sides of this particular moral argument. (However, I would want to emphasize that my conscious weighing up is not ‘pure reason’, or ‘rationality’. It is rather a form of cognitive-emotive-kinaesthetic processing, which can vary from cool, to warm, to hot. And in the midst of a school fire, it’s not likely to be particularly cool!) Others might find me wanting for not staying inside and looking for other children to rescue, but that is a matter of their judgement, and does not necessarily impinge on my conscience, since I have applied my own moral rules, admittedly under circumstances of great stress, in a way that made cognitive-emotive sense to me. If I work in a school, then I have a contractual duty of care towards the children; and I am bound by that practical professional ethic to do my best in the case of a school fire to care for all the children. It could be said that I am obliged by my contract to try to maximize the number of rescues I achieve, to promote the greatest good of the greatest number (of children, and their [relieved] parents). The fact that my duty is contractual rather than being based on ‘natural law’ does not alter the seriousness with which I execute my duties. And I may also be motivated by the ‘ethic of courage’, in that I know I will personally benefit from a reputation for courage, rather than a reputation for cowardice.

But when all is said and done, a bell will sound and I will go into ‘automatic mode’. There won’t be time to think. And I will do what I have been programmed to do, by myself, my society, my school, my contract and my ego-image. And much of the decision-making will be done unconsciously, by a quick wrestling match between my ‘id’ and my ‘superego’. And something similar, though less pressured, will happen when I finally sit down to plan my research project. It will “settle itself”, in the “basement” of my mind. Because there is no bell ringing in the background, and no thick black smoke, there will be time to write up my plan of action, to review it, and to get feedback on it. But it will “settle itself” on the basis of my perfinking in a highly influential social context.

Weston (1997) and Chaffee (1998)
Weston is opposed to relativism. However, his argument against relativism is his claim that relativists engage in “…denying that there is any such thing as thinking better or worse about an ethical issue”. (Arnaud, 2000: 4). A similar case is made by Chaffee (1998: 38-43). Chaffee talks about “the road to becoming a critical thinker”, and distinguishes between three “stages of knowing”. He is critical of the first two stages, and supportive of the third, as follows:

Stage 1: “The Garden of Eden”. This is authoritarian dogmatic thinking, including black and white thinking, based on an inherited view from authorities. (Note the lack of mention of emoting!)

Stage 2: “Anything Goes”. This stage is said to follow the rejection of the dogmatic, authoritarian approach, once individuals have spotted that authorities contradict each other, and cannot be relied upon to provide accurate and useful guidance. (This is the Sophistic or post-modern position). Chaffee includes in this stage the concept of relativism, which he characterizes as positing this view: “The truth is relative to any individual or situation, and there is no standard we can use to decide which beliefs make most sense”. (Chaffee 1998: 41; emphasis added).

Stage 3: “Thinking Critically”. This stage is seen by Chaffee as the synthesizing of stages 1 and 2. He maintains that “…some viewpoints are better than other viewpoints, not simply because authorities say so, but because there are compelling reasons to support these viewpoints”. (Chaffee 1998: 42; emphasis added).

I think there are two major errors in John Chaffee’s thinking. Firstly, he is probably overemphasizing the distinction between stages two and three. Many famous relativists, such as Hume and Wittgenstein, nevertheless argued for some particular perspective which they considered worth promoting. They did not behave as if all viewpoints were equally valid or equally appealing to them. And, secondly, Chaffee’s claim that some viewpoints ARE better than some other viewpoints seems to me to be unsupportable with raw data, since a widespread consultation on virtually any viewpoint will normally elicit those who support it and those who do not. The most that can be said about the value, or virtue, of any particular viewpoint seems to me to be this: “I think idea ‘x’ is better than that other idea”; “We think it is better than some other idea”; or “Our local agreement is that this viewpoint seems to us to be better than some other idea”. Universal agreement that any idea is absolutely better than any other idea seems, often or even normally, to be remote to the point of apparent impossibility: except in the most obvious cases covered by the Golden Rule – such as the preservation of life, the rule of law, and the avoidance of harm. And even in these ‘clear cut’ cases, we cannot always get universal agreement about what would constitute ‘harm’, or what would be a just law.

Chaffee also seems to refer to “compelling reasons” as if they were “things in themselves” (or noumenal realities), rather than socially agreed (phenomenal) beliefs about what seems to be so. My “compelling reasons” may be somebody else’s “poor logic”. However, that does not exempt me from the obligation of continuing to try to rationally (meaning reasonably) justify my morality, and refine my morality, in the light of my learning. But I’d better recognize and remind myself that morality, as it seems to me, is “my morality” (which is a subset of “our morality”): and not “the morality”! And that it depends as much on my “unconscious emoting”, and certain social discourses, as it does on my “critical thinking”.

Furthermore, I cannot go along with Chaffee (1998) because of the “reality” on the ground:
(a) The whole of science is fragmented into various “camps” (or paradigms);
(b) The entire religious world is fractured into various “faiths”;
(c) The political world is shattered into various “parties”, “factions”, “classes”, spread along a continuum from far left to far right; and:
(d) The entire social world is subdivided into cultures, with contrasting and conflicting beliefs and attitudes.

Social constructionism teaches that “…societies construct the ‘lenses’ through which their members interpret the world”: (Freedman and Coombs 1996: 16). And this, it seems to me, applies as much to ethics and morality as it does to politics and culture. Thus it seems the nature of knowledge is that it is constructed, by individuals and groups, in line with broad strands of social agreement; and that we are each born into a ready-made social morality, or relative lack of morality. Or, as the followers of Vico concluded: “reality is a human or social construction”: (Hamlyn, 1990: 215). Therefore, ‘ethical mindfulness’ among counsellors and therapists, although practiced by individuals in the form of ‘critical thinking about moral issues’, is likely to conform to broad strands of agreement, such as those established by the counselling and therapy accrediting organizations; the feminist critique; the critical psychology critique; the individual’s religious or moral beliefs; the various other ways in which the individual’s innate moral sense has been shaped by “good” and “bad” influences; and so on. And while many if not most counsellors and therapists clearly strive to be ethical in their work, some (small minority?) clearly do not do so; or fail in their attempts to do so.

Additional Arguments
Weston (1997) is also critical of substituting rule-following for ethical mindfulness as “rules are at best rough guides with exceptions”. (Arnaud 2000: 5). However, once I have engaged in sufficient reflection as an ethically mindful individual, I may well construct a set of rules to guide me through my research work; and more especially to communicate a kind of ‘contract’ to my research participants/co-constructors. Furthermore, I think it is perfectly valid within the field of applied professional ethics for counsellors and therapists to follow sets of rules set up by their accrediting organizations, including codes of research ethics. (Bond, 2004a, 2004b, BPS XXXX(date)). However, it is also important to be alert to exceptions to the rules, and to the possibility of unforeseen dangers. And this can only be done by working at the process of questioning and challenging ethical ideas – or becoming ethically mindful.

But ultimately, we’d better acknowledge that humans seem to operate mostly on ‘automatic pilot’. We do not get up in the morning and ask ourselves: ‘Which muscles shall I engage first, in order to get out of bed? On what part of the carpet shall I place my first foot? How shall I stand up in order not to fall over? Where shall I take my first step?’ Etc., etc. And when we get to the bathroom, we do not ask: ‘How shall I pick up my toothbrush?’ And so on, and so forth. We seem to be creatures that operate for the most part on automatic stimulus-response pairings, or pattern matching systems, which allow us to work quickly and efficiently, using our previous experience as an unconscious guide to action.

For most practical purposes, this is also how we operate morally. It is only when we find that our practical activities, or our moral actions, are not working for us that we then stop, think, question ourselves, and perhaps arrive at new decisions, which in time will become new habitual patterns. One such potential new habitual pattern, in my present context, would be to train myself to normally ask: “How can I be ethically mindful here?” “Am I being ethically mindful at the moment?” “Do I know the moral implications of this action?” But once these new patterns become set, I will be back to functioning more or less automatically, much or most of the time. Anything else would be so inefficient that I would have to give up all normal activity so that I could constantly monitor, or consciously watch, the ethical implications of every step I take, like a Buddhist brushing the pavement before him/herself during the “bug season”, in order to avoid stepping on a flying insect and thus taking its life. But even more than this, I would also have to consciously monitor every thought about every step, to make sure I am not letting myself off the hook. This would undoubtedly tip me into the ditch with the proverbial centipede, who was asked which leg came after which.

Weston (1997) argues against seeing ethical problems as dilemmas, with two equally undesirable options to choose between. He thinks we “…need to be more creative” in generating alternative ways of looking at the problem. (Arnaud 2000: 5-6). One way of doing this could be to use de Bono’s “six thinking hats” heuristic, and especially the “green hat”. (de Bono 1995).

Weston (1997) also advocates the development of “preventative ethics”. (Arnaud 2000: 6). In relation to my research project, this involves thinking through the potential problems, and making sure they are eliminated, resolved, minimized or managed.

Finally, Weston (1997) advocates a kind of mindfulness which is “open hearted”, and empathic about people’s “feelings and needs”. (Arnaud 2000: 7). How can I ensure that my heart is “open”, and that I am relating sensitively to my potential research participants/co-constructors? This relates to the “red hat” in de Bono’s system of thinking.
Key learning points related to designing and implementing my research project:
H1:

XXX. Revised Leeds University PhD Proposal (Incorporate this into section 8 above, at least those bits that are not already covered in Bond 2004b)
Appendix E: Ethical Issues in Counselling and Therapy
In order to save myself time in constructing a statement on research ethics, based on McLeod (1994), I re-read the whole of chapter 10; marked some sections that I had overlooked the first time around; and I will now simply rewrite Appendix E from my Leeds University PhD proposal, as follows:

Reviewing McLeod (1994)
In order to identify a comprehensive range of ethical issues that I had better take into account in designing my project, I reviewed chapter 10 of McLeod (1994) and identified at least eleven issues to be considered and discussed. Those eleven issues form the structure of this Appendix. I also reviewed McLeod (2000), and established that three important elements of ethical consideration in counselling research are: (1) Participant confidentiality; (2) Informed consent; and (3) Voluntary participation. Of course, there are at least three other key issues, as discussed in point 1 below; and there are also at least another two key problems discussed elsewhere. In point 2, below, I list the most important issues in counselling and therapy research, according to McLeod (1994); and in point 4 below, I list and comment upon his seven dilemmas. These various ways of classifying and dividing up the key issues, important points, dilemmas, and so on are but one possible arrangement, and do not amount to a definitive or non-overlapping system of classification.
Reviewing the Issues
The major issues that I want to consider are listed below, under the following headings:

1. The volume of literature on research ethics in therapy and counselling seems to be quite limited. But research ethics in general is derived from moral philosophy. Therefore it seems perfectly feasible for me to specify the kind of general moral philosophical issues that I'd better address. In addition to the three considerations emphasized in McLeod (2000), there are also considerations of (i) not causing harm to the client; (ii) promoting the well-being of the client; and (iii) respecting the right of the client to take responsibility for themselves. My methods of avoiding harm to participants will be discussed in detail later.

2. The three most important issues are:
Acknowledging the right of the participants to take responsibility for themselves;
Avoiding doing any harm to participants or their interests; and:
Ensuring that all participants are treated in ways that are demonstrably and arguably fair and just.

Of course, this does not mean that confidentiality, informed consent and the principle of voluntary participation will be ignored!

3. The three caveats are:
All research design seems to involve making value judgements and decisions, and the resulting design cannot ever be ethically neutral.

Decisions about ethics are likely to affect the quality of the outputs of the research project, even if only in minor ways.

McLeod (1994) considers that the ethical issues that arise during research projects follow a similar pattern to those that tend to arise during day to day counselling and therapy work; and will need to be looked out for and managed as the project unfolds.

4. McLeod (1994: 166) lists seven dilemmas that I had better address. They are as follows:
McLeod’s dilemmas are listed below, each followed by my solution or response.

(a) Innovative or experimental treatments may be harmful to clients.
My response: Strictly speaking, I am not proposing to engage in any form of treatment. I am going to interview six research respondents/co-constructors. I will study interview techniques to ensure that I do not engage in any harmful behaviour. There is one sense in which interviewing individuals could be harmful, and that is in re-stimulating traumatic experiences which preceded their counselling/therapy (Pope, 2005). This is discussed in point (d) below.

(b) How to avoid excluding people from therapy unless they take part in research.
My response: This will not arise. There is no offer to provide therapy linked to my research project.

(c) How to avoid infringing client confidentiality by allowing text-based or taped material to be witnessed by unauthorized others.
My response: Research questionnaires will be identified only by a numerical code, and will not contain the individual respondent’s name. (A purely illustrative sketch of this coding step appears after this list.) I will store all audio tapes and transcripts of interviews in a secure filing cabinet in my home office, and make sure they are kept locked away. I will either return tapes to the participants whose voices are recorded on them, or destroy the tapes after the transcripts have been completed and backed up. My computer files are protected from hackers by two secure firewalls: one provided by Microsoft and one by Symantec. I will offer each participant the option to be identified or not in the final report. If they choose to be anonymous, then I will make sure that their identity cannot be detected in any way from the final text.

(d) How to minimize the effect of research activities (e.g. reviewing sessions) triggering off painful memories or emotions.
My response: Six years ago, Kenneth Pope wrote an invited guest editorial on ‘The ethics of research involving memories of trauma’ (Pope, 1999), in which he said: “Newman, Walker and Gefland’s (1999) study … reminds us that not only are ethical considerations crucial to research in this area, but that they may have additional implications. Informed consent, for example, rests on participants’ ability to understand adequately the effects a research project may have on them. But as this study … found, some participants who have experienced major trauma may not realistically anticipate the distress such research can cause. ‘Not surprisingly, individuals with histories of maltreatment, especially sexual maltreatment, were more likely to underestimate their level of upset from research participation on both questionnaires and interviews’.” For these reasons, I will indicate that I am not willing to interview any individual who has any form of sexual abuse, any other form of serious abuse, or any other form of serious trauma in their background. If I exclude these individuals, and warn all potential participants that remembering what they felt like before they underwent counselling/therapy may have an upsetting effect upon them, then I think I will have discharged my responsibilities.

(e) How to prevent therapist-stress from harming the effectiveness of the project therapists.
My response: This will not arise, as there are no therapists involved in this research.

(f) How to avoid using archival material without the consent of clients.
My response: This will not arise, as I will only be using the transcripts of my interviews with the participants, and not case notes or other private material.

(g) How to avoid allowing unconscious urges to steer project activities towards the desired outcome.
My response: I am absolutely committed to being open-minded, and to accepting whatever emerges from immersing myself in the transcripts of these interviews. I would prefer to learn something that seems “wrong” than to “fix it” so that I “learn” what I already know/believe.
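As promised under dilemma (c), here is a minimal, purely illustrative sketch of how participant names might be replaced by numerical codes, with the code-to-name key stored separately from questionnaires and transcripts. It is written in Python simply for concreteness; the file name, the code format and the helper function are hypothetical assumptions of mine, for illustration only, and not part of any published research procedure.

# Purely illustrative sketch (hypothetical): assign each participant a random
# numerical code, so that questionnaires and transcripts carry only the code.
# The code-to-name key is written to a separate file, to be stored apart from
# the research materials (e.g. in a locked filing cabinet or on a secure drive).
import csv
import secrets


def assign_codes(names):
    """Return a dict mapping each participant name to a unique code like 'P4821'."""
    codes = {}
    used = set()
    for name in names:
        code = f"P{secrets.randbelow(9000) + 1000}"
        while code in used:
            code = f"P{secrets.randbelow(9000) + 1000}"
        used.add(code)
        codes[name] = code
    return codes


if __name__ == "__main__":
    participants = ["Participant One", "Participant Two"]  # placeholder names only
    key = assign_codes(participants)
    # The key file is kept separately from all questionnaires and transcripts.
    with open("participant_key.csv", "w", newline="") as key_file:
        writer = csv.writer(key_file)
        writer.writerow(["name", "code"])
        for name, code in key.items():
            writer.writerow([name, code])
    # From this point on, only the codes would appear on questionnaires,
    # audio-tape labels and transcripts.
    print(key)

The point of the sketch is simply the design choice it embodies: the link between names and codes exists in only one place, which can be physically secured, while every working document carries the code alone.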

5. It seems important to enquire what the project will feel like for the participants. McLeod (1994: 167) recommends that I ask myself: "What would this project experience feel like if I were one of the participants?" Whereas with my Leeds proposal I engaged in a visualization of what it would be like to be one of my own research participants, on this occasion I will actually:
· Arrange to be interviewed by a colleague, using my own research schedule (questionnaire);
· Arrange to interview a colleague, and to get feedback from them concerning how it felt being interviewed by me, and responding to the specific questions on the research schedule (questionnaire).

6. In the process of asking my participants the planned research questions, I had better make sure I am sensitive to the potentially painful nature of this activity for each participant. To develop sensitivity and competence in this area, as indicated above, I will review Rennie (1994), and Brannen and Collard (1982), and learn from their experience.

7. I will not be intruding in any way into therapy sessions. All six of my participants will have completed their counselling and therapy within the previous twelve weeks. All of them will also have experienced positive outcomes, as a requirement of participating in the research.

8. With regard to selection of participants, I will advertise for individuals who have had a “positive therapeutic outcome” from counselling and/or therapy sometime in the previous twelve weeks. I will seek to get a broad range of counselling traditions represented among the respondents. I am expecting to have some difficulty finding enough respondents, and so I have not planned any system for selecting individuals, other than their availability, their willingness to participate, their positive outcome, and their recency of completing counselling/therapy.

9. The next major issue is that of "informed consent". I will provide potential participants with a detailed project design, in a comprehensive word-processed document, one copy of which they will sign once they are happy that they understand the procedures, risks, potential gains, and their personal responsibility for their mental well-being. They will keep another copy for themselves. This document will follow the structure recommended by McLeod (1994: 169), as follows:

§ My name, office address and telephone number (and possibly the name of my supervisor);
§ A description of the goal and purpose of my study;
§ A description of the research design, processes, procedures, (excluding the truth ???? about my interest in rejecting the placebo explanation, for obvious reasons). This is now a research dilemma. I am now uncomfortable misleading my participants, since misleading them is not the action of a “trustworthy person”. Willig: see below: NO DECEPTION!!!
§ A statement of the risk of re-stimulated emotional pain arising from thinking about the answers to questions about how life was before counselling and therapy.
§ A statement of the support available in the event of re-stimulation of painful memories.
§ A detailed description of what will be expected of the participants, in terms of processes, type of participation, times, dates (or frequency), duration, and so on;
§ A statement of the participant's right to withdraw from the project at any time;
§ A description of the potential benefits to the client's personal development;
§ An outline of the steps taken to ensure confidentiality;
§ Information about the uses to which collected data will be put;
§ A name, address and contact telephone number for complaints about the researcher;
§ A name, address and contact telephone number for support with any emotional problems that get re-stimulated by the research questions, after the interviewer has departed; and:
§ Information about the process of debriefing to be held at the end of the project, so that lessons can be learned for the future about the design of similar projects.

10. With regard to debriefing, I will be informed by point 5 of the British Psychological Society's ethical principles for conducting research with human participants. (Maglennon, 1993: 180). In particular:

§ Although participants will have a pretty good idea of what the research project is about before they join, there will be a major gap in their understanding in relation to the attempt to eliminate placebo explanations for their positive outcome. Therefore, to complete their understanding of the nature of the research, I will explain the "invisible" or "opaque" aspects of the research design. I now have an ethical problem with this aspect, and am planning to change it. See Willig (2001: 18) – NO DECEPTION!!! NO DECEPTION!!!

§ I will conduct an exit interview with each participant to monitor for any unforeseen negative effects or misconceptions, with a view to rectifying them.

§ I will not use the fact of a final debriefing to justify including any unethical aspects in the research.

§ The debriefing plan will be seen as an essential commitment of the researcher to ensure that the participants do not leave the project without getting a face-to-face intervention designed to clear up all outstanding issues.

11. I will produce, and adhere to, a written policy guideline on each of the following key areas of ethical consideration:

(1) Enhancing the well-being of the participants;
(2) Avoiding harming the participants;
(3) Ensuring they are self-responsible;
(4) Promoting fair and just treatment;
(5) Ensuring confidentiality;
(6) Promoting informed consent;
(7) Emphasizing the right to withdraw without any explanation or notice;
(8) Ensuring voluntary participation; and:
(9) Providing final debriefing sessions, in groups or individually.

12. Four Key Ethical Considerations.
“Studies that invite participants to remember trauma may, as this study suggests, themselves be traumatic. As Primo Levi (…[1988]) wrote: ‘the memory of a trauma suffered or inflicted is itself traumatic because recalling it is painful or at least disturbing’”. (Pope, 1999)

McLeod (1994: 175) identifies four key questions which can help to identify potential ethical problems with counselling research. They are as follows:

“1. What harm might possibly occur to any of the participants in the study, or to those excluded from the research?” My answer, in relation to my own proposal, is this: There is minimal potential harm that could occur to my research participants. The only major issues that I can identify are (i) breaches of confidentiality, which I have strategies to prevent, and (ii) re-stimulation of pre-therapy states of consciousness. This latter factor is less of a problem with clients who have had positive outcomes of therapy, it seems to me, and I will offer appropriate support and/or counselling to anybody experiencing negative responses to any of the questions in this schedule. THIS NOW SEEMS TO ME TO BE A MAJOR ETHICAL ISSUE!

“2. What procedures can be established to minimize harm and also to respond appropriately to distress or needs stimulated by participation in the study?” I will anonymize the interview records from the first moment of the interview; and make sure that all audio tapes are locked away as soon as they have been transcribed; and that transcripts cannot be traced back to the person interviewed. I will also switch into counsellor mode in the event of any re-stimulation of negative or disturbing pre-therapy states of consciousness. EXCLUDE INDIVIDUALS WHO HAD COUNSELLING OR THERAPY FOR TRAUMATIC PROBLEMS, SUCH AS SEXUAL ABUSE OR OTHER SERIOUSLY DISTURBING PROBLEMS.

“3. How can the confidentiality of information gathered during the research be safeguarded and respected?” I will anonymize all records from the start of the first interview. All tapes will be locked away. Identities of respondents will be disguised. All transcripts will be checked with respondents before being published.

“4. What are the broader moral implications of the study, in terms of the ways that results will be used?” My intention is to help individuals who have no idea of the value of counselling and therapy to understand that there are significant gains to be made. I am also interested in defending counselling and therapy from its detractors – e.g. Erwin (1997) and Horgan (1999). The results will be used to educate the public regarding the “mechanisms of change” that may apply in counselling and therapy – as illustrated by six case studies. The results will also be of interest to the respondents, in that they will give each of them a context into which they can fit their own experiences – the context of five other case studies. If the results do not strongly refute the ideas of Erwin and Horgan, then that result will be honestly promulgated. If ‘active ingredients’ of counselling and therapy do not “emerge” from immersion in the data of this study, then my failure to find ‘active ingredients’ will be honestly reported.

Potential participants will be issued with a ‘briefing sheet’, as indicated above, and asked to sign an ‘Informed Consent Form’. The Informed Consent Form will be based, broadly, on the one outlined in McLeod (1994: 169).
Key learning points related to designing and implementing my research project:
I1:


XXX Additional Learning Points from McLeod (1994) (Incorporate anything that does not duplicate Bond 2004b in section 8 above)
Note the new “circled” points in Chapter 10 of McLeod, and write about them…
1. In-depth, qualitative interviews call for detailed accounts of the experiences of clients. “What emerges for the informant may be painful and distressing, and it is the responsibility of the researcher to do everything possible to ensure the well-being of the person”. (e.g. Brannen and Collard, 1982): McLeod, 1994: 167.
2. “… any research design will generate ethical dilemmas”. McLeod 1994: 168.
3. I will not hide behind “informed consent” to avoid dealing with any “negative consequences of participation”. McLeod 1994: 169-170.
4. Protecting confidentiality by “destroying notes and tapes after the completion of a study, or offering to return tapes to informants”. Ibid, 171.
5. I will ask my research participants “to read a (pre-submission) draft of (my study) so that he or she can make up his or her own mind about whether sufficient anonymity has been achieved, and if necessary make suggestions for further amendments” Ibid, 171. (By the same token, I will also offer participants the option to be identified in the study report, if that is what they wish!)
6. “It is … good practice to explain to informants at the start of a study, for example through informed consent procedures, the methods that will be employed to ensure confidentiality”. Ibid, 172.
7. “There is … an ethical responsibility on researchers not to spoil it for others, and to invest as much care and attention into negotiating the ending of a research project as they would into negotiating access at the start of a study”. Ibid, 174.
Key learning points related to designing and implementing my research project:
J1:

13. Review of Manchester University Coursenotes on Ethics
“Plato, Kant, Mill, Nietzsche, (and Sartre, and Ayer – JWB) and so on were undoubtedly men of high intelligence and possessed of considerable philosophical acumen. But the ethical systems they constructed in such detail and with moral insight differ radically from each other. They cannot all be correct, you may say. Perhaps none of their theories is correct. How can we tell? There does not seem to be any ‘meta-ethical’ criterion to which we can appeal”.
Harrison-Barbet (1990: 180)

Place results here…………………
Key learning points related to designing and implementing my research project:
K1:

XXX. Dr Bond’s Ethical Guidelines for Research in Counselling and Therapy (Incorporate in Section 9 above)…
“The arguments we are developing … are based on the premiss that ethics is not an absolute or immutable system but a dynamic day-to-day relationship and intercourse of human beings in society. It is as it were the oil that reduces the friction in the social machinery. Any consensus must be firm and flexible. It must be sufficiently firm to resist both the ‘free-play’ of subjectivism and the corrosion of what we may term ‘closed value systems’, both of which will in due course affect the efficiency of the ‘machine’.” (Emphasis added – JWB).
Harrison-Barbet (1990: 185).

Bond (2004b: 19) offers an elegant structure for consideration of key ethical issues that I had better think about during the development of my research proposal. Below I have reproduced the main, relevant elements of that structure as a framework for my own ethically mindful reflection on the details of my research proposal. All of the elements borrowed from Bond (2004b) are shown in double quotation marks “…”. Additional items have been reproduced from the BACP’s ‘Ethics for Counselling and Psychotherapy’ code (as revised); and these are shown in single quotation marks (‘…’).

“Ethical orientation”
· “Ensuring that the research is consistent with the requirements of trustworthiness in the practice of counselling and psychotherapy”. Bond (2004b).
o ‘Fidelity - honouring the trust placed in the practitioner’: (BACP[2]).
o ‘ethic of relationship’ (BACP2).
o ‘trustworthiness, boundaries, confidentiality, mutual respect’: (BACP2).

· ‘Personal moral qualities’
o ‘Empathy’
o ‘Sincerity’
o ‘Integrity’
o ‘Resilience’
o ‘Respect’
o ‘Humility’
o ‘Competence’
o ‘Fairness’
o ‘Wisdom’
o ‘Courage’ (BACP2)


“Risk”

“Relationships with research participants”

“Research integrity”

Key learning points related to designing and implementing my research project:
L1:


14. Conclusion
Pull out the main points of this paper and arrive at some kind of overall conclusion.


###


Update reference list:
REFERENCES
American Psychological Association, Committee for the Protection of Human Participants in Research. (1982). Ethical principles in the conduct of research with human participants. Washington, DC: American Psychological Association.
Arnaud, D. (2000) Two key texts in practical ethics: a comparative review. Practical Philosophy, 3.2: 38-43. Available online: www.practical-philosphy.org.uk/Volume3Articles/TwoKeyTextsInPracticalEthics. Downloaded: 13th January 2005.
Aune, B. (c1970) Rationalism, Empiricism, and Pragmatism: an introduction. New York: McGraw-Hill.
Ausubel, D. (1968) Educational Psychology: a cognitive view. New York: Holt, Rinehart and Winston Inc.
Badiou, A. (2001) Ethics: An essay on the understanding of evil (originally published 1998, trans. P. Hallward). London: Verso.
Baker, R. (2002) Fragile Science: the reality behind the headlines. London: Pan Books.
Billington, R. (1993) Living Philosophy: an introduction to moral thought. Second edition. London: Routledge.
Blackburn, S. (2003) Ethics: A very short introduction. Oxford: Oxford University Press.
Bond, T. (2000) Standards and Ethics for Counselling in Action. Second edition. London: Sage.
Bond, T. (2004a) An introduction to the ethical guidelines for counselling and psychotherapy. Counselling and Psychotherapy Research, Vol.4, No.2 (October), pp 4-9.
Bond, T. (2004b) Ethical guidelines for researching counselling and psychotherapy. Counselling and Psychotherapy Research, Vol.4, No.2 (October), pp 10-19.
BPS (2000) Ethical principles for conducting research with human participants. In: The British Psychological Society Code of Conduct. Leicester: BPS.
Brannen, J. and Collard, J. (1982) Marriages in Trouble: the process of seeking help. London: Tavistock.
Brown, L.S. (1997) Ethics in psychology: cui bono? In: Fox, D. and Prilleltensky, I. (eds) (1997) Critical Psychology: an Introduction. London: Sage. (Brief notes taken).
Burr, V. (2003) Social Constructionism. Second edition. Routledge: Hove, East Sussex.
Cary, S. (1986) Cognitive science and science education. In Murphy, P. and Moon, B. (eds) Developments in Learning and Assessment. Kent: Hodder and Stoughton.
Chaffee, J. (1998) The Thinker's Way: 8 steps to a richer life. Boston: Little, Brown and Co.
Chambers (2003) The Chambers Dictionary. Edinburgh: Chambers.
Chastain, G., & Landrum, R. E. (Eds.). (1999). Protecting human subjects: Departmental subject pools and IRBs. Washington, DC: American Psychological Association.
Clark, S.R.L. (2004) Ancient philosophy. In Kenny, A. (ed.) The Oxford History of Western Philosophy. Oxford: Oxford University Press.
Cobley, P. and Jansz, L. (1999) Introducing Semiotics. Cambridge: Icon Books.
de Bono, E. (1995) Teach Yourself to Think. London: Viking/Penguin.
de Bono, E. (2005) The Six Value Medals. London: Vermilion.
Eckstein, S. (ed.) (2003) Manual for Research Ethics Committees: Centre of Medical Law and Ethics, King’s College London. Sixth edition. Cambridge: Cambridge University Press.
Edmonds, B. (2000) ‘Towards implementing free will’, at http://bruce.edmonds.name/tifw/tifw_1.html .
Ellis, A. (1962) Reason and Emotion in Psychotherapy. New York: Carol Publishing.
Ellis, A. (1994) Reason and Emotion in Psychotherapy. Revised and Updated. New York: Carol Publishing.
Emanuel, E.J., Crouch, R.A., Arras, J.D., and Moreno, J.D. (2004) Ethical and Regulatory Aspects of Clinical Research: readings and commentary. Baltimore, Maryland: The Johns Hopkins University Press. (Section 2 (26) is on the duty to exclude people at undue risk from research!)
Epictetus (1991) Enchiridion. New York: Prometheus Books.
Erwin, E. (1997) Philosophy and Psychotherapy: razing the troubles of the brain. London: Sage.
Firth, J., Shapiro D. and Parry, G. (1986) 'The impact of research on the practice of psychotherapy', British Journal of Psychotherapy, 2(3): 169-179.
Forsyth, D. R. (1980). A taxonomy of ethical ideologies. Journal of Personality and Social Psychology, 39, 175-184.
Forsyth, D. R. (1981). Moral judgment: The influence of ethical ideology. Personality and Social Psychology Bulletin, 7, 218-223.
Forsyth, D. R. (1985). Individual differences in information integration during moral judgment. Journal of Personality and Social Psychology, 49, 264-272.
Forsyth, D. R. (1993). Honorable intentions versus praiseworthy accomplishments: The impact of motives and outcomes on the moral self. Current Psychology, 12, 298-311.
Forsyth, D. R., & Berger, R. E. (1982). The effects of ethical ideology on moral behavior. Journal of Social Psychology, 117, 53-652.
Forsyth, D. R., & Nye, J. L. (1990). Personal moral philosophy and moral choice. Journal of Research in Personality, 24, 398-414.
Forsyth, D. R., Nye, J. L., & Kelley, K. N. (1988). Idealism, relativism, and the ethic of caring. Journal of Psychology, 122, 243-248.
Forsyth, D. R., & Pope, W. R. (1984). Ethical ideology and judgments of social psychological research: A multidimensional analysis. Journal of Personality and Social Psychology, 46, 1365-1375.
Forsyth, D. R., & Scott, W. (1984). Attributions and moral judgments: Kohlberg's stage theory as a taxonomy of moral attributions. Bulletin of the Psychonomic Society, 22, 321-323.
Freedman, J. and Combs, G. (1996) Narrative Therapy: the social construction of preferred realities. New York: WW Norton and Company.
Gregory, I. (2003) Ethics in Research. London: Continuum.
Griffin, J. and Tyrrell, I. (2004) Human Givens: A new approach to emotional health and clear thinking. Chalvington, East Sussex. Human Givens Publishing Limited.
Hadjistavropoulos, T. and Smythe, W.E. (2001) Elements of risk in qualitative research. Ethics and Behavior, 11:2, 163-174.
Hamlyn, D.W. (1990) The Penguin History of Western Philosophy. London: Penguin.
Hare, R.M. (1981) Moral Thinking: its levels, method and point. Oxford: Clarendon Press.
Harrison-Barbet, A. (1990) Mastering Philosophy. London: Macmillan.
Hart, C. (1998) Doing a Literature Review: releasing the social science research imagination. London: Sage Publications.
Heylighen, F. (1991) ‘A cognitive-systemic reconstruction of Maslow’s theory of self-actualization’, PESP, Free University of Brussels, Pleinlaan 2, B-1050 Brussels, Belgium.
Homan, R. (1991) The Ethics of Social Research. London: Longman.
Jamieson, D. (1993) Method and Moral Theory. In, Singer, P. (ed) A Companion to Ethics. Oxford: Blackwell.
Jenkins, P. (ed.) (2002) Legal Issues in Counselling and Psychotherapy. London: Sage.
Kant, I. (1790/1987) Critique of Judgement. Trans. Werner P. Pluhar. Indianapolis: Hackett Publishing Company.
Kenny, A. (ed) (1994) The Oxford History of Western Philosophy. Oxford: Oxford University Press.
Keown, D. (2005) Buddhist Ethics: a very short introduction. Oxford: Oxford University Press.
Kitchener, K.S. (2000) Foundations of Ethical Practice, Research and Teaching in Psychology. Mahwah, New Jersey: Lawrence Erlbaum Associates.
Kleiser, S.B., Sivadas, E., Kellaris, J.J., and Dahlstrom, R.F. (2003) Ethical ideologies: efficient assessment and influence on ethical judgements of marketing practices. Psychology and Marketing, 20:1, 1-21.
Korzybski, A. (1933/1990) Selections from Science and Sanity: an introduction to non-aristotelian systems and general semantics. Englewood, New Jersey: The International Non-Aristotelian Library Publishing Company.
Leaman, O. (2000) Eastern Philosophy: key readings. London: Routledge.
Lee, R.M. (1993) Doing Research on Sensitive Topics. London: Sage.
Lee-Treweek, G. and Linkogle, S. (eds.) (2000) Danger in the Field: Ethics and risk in social research. London: Routledge.
Levi, P. (1988) The Drowned and the Saved. New York: Vintage International.
Magee, B. (2001) The Story of Philosophy. London: Dorling Kindersley.
Maglennon, K.B. (1993) Essential Practical Psychology. London: Collins Educational.
Mauthner, M., Birch, M., Jessop, J. and Miller, T. (eds.) (2002) Ethics in Qualitative Research. London: Sage.
McLeod, J. (1994) Doing Counselling Research. London: Sage.
McLeod, J. (1999) Practitioner Research in Counselling. London: Sage Publications.
McLeod, J. (2000) Research issues in counselling and psychotherapy, in Palmer, S. (ed.) Introduction to Counselling and Psychotherapy: the essential guide. London: Sage.
Nierenberg, G.I. (1982) The Art of Creative Thinking. New York: Simon and Schuster.
Newman, E., Walker, E.A. and Gelfand, A. (1999) Assessing the ethical costs and benefits of trauma-focused research. General Hospital Psychiatry, 21:3, 187-196. Available online: http://kspope.com/ethics/editorial.php . Downloaded 10th May 2005.
Newman, E, Willard, T., Sinclair, R. & Kaloupek, D. (2001). The costs and benefits of research from the participants' view: The path to empirically informed research practice. Accountability in Research, 8, 27-47.
Oliver, P. (2003) The Student’s Guide to Research Ethics. Maidenhead: Open University Press.
Parker, I. (2005) Qualitative Psychology: introducing radical research. Milton Keynes: Open University.
Pope, K.S. (1999) Invited guest editorial: The ethics of research involving memories of trauma. General Hospital Psychiatry, 21:3.
Pope, K.S. (2005) Some suggestions for doing online literature searches. Available online: http://www.kspope.com/litsrch/index.php . Downloaded: 21 June 2005.
Pope, K. S., & Vasquez, M. J. T. (1998). Ethics in Therapy & Counseling, Second Edition. San Francisco: Jossey-Bass.
Popkin, R.S. and Stroll, A. (1993) Philosophy. Third edition. Oxford: Made Simple Books/Butterworth-Heinemann.
Rennie, D.L. (1994) 'Strategic choices in a qualitative approach to psychotherapy process research: a personal account', in Hoshmand, L. and J. Martin (eds.) Method Choice and Enquiry Process: lessons from programmatic research in therapeutic practice. Place?: Publisher?
Robson, C. (2002) Real World Research: a resource for social scientists and practitioner-researchers. Second edition. Oxford: Blackwell Publishers.
Schlenker, B. R., & Forsyth, D. R. (1977). On the ethics of psychological research. Journal of Experimental Social Psychology, 13, 369-396.
Sieber, J. E. (1992). Planning ethically responsible research. Newbury Park, CA: Sage.
Sieber, J.E. and DuBois (eds.) (2004) Using our Best Judgement in Conducting Human Research: a special issue of Ethics and Behavior. New Jersey: Lawrence Erlbaum Associates. (ISBN: 0-8058-9494-2). Book.
Smith, J.A. (1991) Conceiving selves: a case study of changing identities during the transition to motherhood. Journal of Language and Social Psychology, 10: 225-243.
Smith, J.A. (1995) Semi-structured interviewing and qualitative analysis. In J.A. Smith, R. Harré and L. Van Langenhove (eds.) Rethinking Methods in Psychology. London: Sage.
Smith, J.A. (1996) Beyond the divide between cognition and discourse: Using interpretative phenomenological analysis in health psychology. Psychology and Health, 11, 261-271.
Smith, J.A. (1999) Towards a relational self: social engagement during pregnancy and psychological preparation for motherhood. British Journal of Social Psychology, 38: 409-426.
Smith, J.A. (2004) Reflecting on the development of IPA and its contribution to qualitative research in psychology, Qualitative Research in Psychology, 1, 39-54.
Smythe, W.E. (2000) Owning the story: ethical considerations in narrative research. Ethics and Behavior, 10:4, 311-336.
Soltis, J.F. (1990) The ethics of qualitative research. In E.W. Eisner and A. Peshkin (eds.) Qualitative Inquiry in Education: the continuing debate. New York: Teachers College, Columbia University. pp247-257.
Spade, P.V. (1994) Medieval philosophy. In Kenny, A. (ed) The Oxford History of Western Philosophy. Oxford: Oxford University Press.
Thomson, A. (1999) Critical Reasoning in Ethics: a practical introduction. London: Routledge.
Vollmann, J. (2000) ‘Therapeutic’ versus ‘non-therapeutic’ research: a plausible differentiation in medical ethics? Ethik in der Medizin, 12:2, 65-74. (Don’t order, I have the abstract!!! SwetsWise1).
Wampold, B.E. (2001a) Contextualizing psychotherapy as a healing practice: culture, history and methods, Applied and Preventive Psychology 10: 69-86.
Wampold, B.E. (2001b) The Great Psychotherapy Debate: Model, methods, and findings. Mahwah, NJ: Lawrence Erlbaum.
Wampold, B.E., Ahn, H., and Coleman, H.K.L. (2001) Medical model as metaphor: Old habits die hard. Journal of Counselling Psychology, 48, 268-273.
Weston, A. (1997) A Practical Companion to Ethics. Oxford University Press.
Wilde, O. (1948) The doer of good. In The Works of Oscar Wilde. (Edited with an introduction by G.F. Maine). London: Collins.
Willig, C. (2001) Introducing Qualitative Research in Psychology: adventures in theory and method. Maidenhead: Open University.
Wood, D. (1988) How Children Think and Learn. Oxford: Blackwell.




###

5,446 words on 26th April 2005
5,566 words on 9th May 2005
5,738 words on 19th May 2005.
8,573 words on 26th May 2005
11,694 words on 9th June 2005
12,103 words on 16th June 2005
16,096 words on 28th June 2005.
20,775 words on 26th September 2005.
24,520 words on 29th September 2005.
25,171 words on 5th October 2005.
28,000 words on 10th October 2005.

Appendix ‘A’: Mahrer’s Critique and Colleagues’ Responses

Appendix ‘B’: Overview of the Educative Components of Bond (2004b) – The Road to Ethical Mindfulness
As explained in the main text, I have divided the relevant principles of the guidelines on research ethics - (Bond, 2004b) - into two groups: the imperative and the educative, using a quite liberal definition of the educative principles. I now want to sample the educative principles, and to analyze them for their structure and content, so I can make some decisions about what they imply for the concept of ethical mindfulness. Since there are 21 educative principles (in my classification), I propose to take principles 1, 11 and 21, and to analyze them. I may include others later, if necessary.

“It is part of the practitioner’s responsibility to be sufficiently trustworthy to enable constructive working relationships with clients. Trust requires a quality of relationship between service user and provider that is sufficient to withstand any challenges arising from inequality, difference, uncertainty and risk in their work together”. (Page 10).

“There is an ethical expectation of researchers that they actively seek opportunities to communicate any learning from research that is relevant to participants, practitioners, policy makers, academics and others with valid interest in the research”. (Page 15).

“Research integrity is strengthened by deploying strategies and procedures for responding to complaints promptly and fairly”. (Page 17).

Before going on to reflect upon these three selected principles, I want to mention that Bond (2004b) has provided what might be called an “orienting device” to help researchers to know “where” to look, and “what” to look for: namely to look at their own trustworthiness:
“The distinctive ethical dimension of counselling and psychotherapy practice is the trust placed by clients in practitioners. This trust is not only essential to achieving the client’s aspirations but also for the practitioners to establish the quality of relationship and interaction that makes the work possible”. (Page 10).

This seems to me to be quite a clever device, in that it rolls virtue ethics and consequentialism into a neat partnership, which is quite likely to motivate readers of these guidelines to strive for trustworthiness. This seems to be true, firstly, because many if not most people desire to think of themselves as being virtuous. I do not agree with Plato that nobody would knowingly do wrong, and that wrongdoing is the result solely of ignorance. This is quite clearly not true. However, what may be true is that many if not most people would like to have a reputation for being trustworthy, and these guidelines tell the researcher how to get that reputation – by behaving in a way that is patently trustworthy. The second motivational element is the consequentialism, which amounts to the claim that: If you behave in a trustworthy manner (which is the initiating action), you are much more likely to establish the kind of relationships which will get you to the point of successfully being able to complete your research (which is the desirable consequence). Many if not most researchers, anxious to complete their research successfully, are likely to be strongly affected by this promise of the self-efficacy of trustworthy behaviour. Thus, this central organizing principle of the new guidelines is surprisingly elegant in providing a virtue-ethics and consequentialist (or utilitarian) motivational commitment for what follows: which is a mixture of prescription and recommendation, or deontology and exhortations. The prescriptions, or imperatives, are unremarkable; and my focus is upon the so-called educative principles, which, it is claimed, will promote mindfulness.

So now, here is the crunch question: In what way or ways do the three non-imperative principles presented above promote mindfulness, in a way that is not also true of the following three imperative principles? (I have selected imperative principles 1, 10 and 20, because there is no imperative principle 21).

“Whenever unavoidable risks are identified, the researcher should consider, in consultation with appropriate others, whether it is ethically justifiable to carry the research forward and, if so, what safeguards are required”. (Page 11).

“Good practice requires: 1. A firm commitment to striving for fairness and honesty in the collection and analysis of all data and in how those findings are presented, as fundamental to the integrity of the research”. (Page 15).

“The competence of the researcher(s) to undertake the proposed research should be considered in the initial risk assessment”. (Page 16).

There is no obvious difference in terms of mindfulness promotion, in my view. Both the educative and the imperative principles refer to various problems that might arise with a research project, so in that sense they are both educative. The imperative ones not only alert you to those problems, but also insist that the corresponding safeguards be implemented. The educative ones alert you to the problems, and strongly influence you to address them, in order to be “a virtuous researcher” and to be “efficacious”, as mentioned in my discussion of the “orienting device” above.

In this sense, mindfulness is just “awareness”, and not a capacity to think; and certainly not a system of thinking, or a set of skills or techniques for thinking about ethics. Therefore, this is a code, a set of guidelines, and not an introduction to a system of mindfulness. And yet, I am strongly of the opinion that that is precisely what I require in order to be a good researcher – a system of critical thinking skills which will allow me to work out the implications of ethical dilemmas from first principles! There are no first principles in this set of guidelines, only derived, fully formed principles, set in concrete, some imperative and some strongly recommended.

The kind of mindfulness advocated by Bond (2004b) seems to me to be achievable by reading these guidelines repeatedly, and then relating them to our own research project ideas. The kind of mindfulness I want to try to develop would involve being able to think for myself from diverse sources of ideas, attitudes, principles, systems, and so on. And that is what I shall attempt to develop in Appendix ‘C’, and section 5 of this paper.


Appendix ‘C’: Moral Thinking as ‘Critical Thinking’ (Hare 1981)
Sometime in May or June 2005, I found Hare (1981) in the Oxfam bookshop in Bradford. I had spent weeks trying to figure out how to think about moral issues, and I was really floundering. Suddenly, here was a dusty old book right in front of me entitled: ‘Moral Thinking: Its levels, method and point’. I was drawn to the possibility of learning a method of moral thinking, not because I (consciously) thought that there is an absolute moral reality, or any absolutely certain moral knowledge, but because I wanted to get beyond simple rule following. (At least that was how it seemed to me at that time. I now wonder if there might not have been an unconscious desire for certainty). I had identified eight actual or potential problems with my research proposal, and I wanted to be able to think my way through those problems in a way that would help me to be reasonably sure that I was designing a safe and helpful proposal; and one I could defend from all potential detractors. Over the summer, I read this book by Hare, and reviewed my notes from it many times, and still I am not clear what the promised ‘method’ of moral thinking is. Therefore, in this appendix I want to do some thinking on paper about Hare (1981), so I can reach some kind of conclusion to this quest.

According to Hare:
“We want the moral philosopher to help us to do our moral thinking more rationally. If we say this, we presuppose that there is a rational way or method of going about answering moral questions; and this means that there are some canons or rules of moral thinking, to follow which is to think rationally. The moral philosopher asks what these canons are”. (Hare 1981: 1)

Hare twice eschews ontology, which he does not consider essential to moral philosophy. (Hare, 1981: 6). This is rather convenient for him, in that he has sidestepped the question of the ontological status of moral claims, such as: “This is a good act”. However, he can justify this step, since he does not consider that moral statements are statements of (descriptive) fact, but rather statements of prescriptive preferences, which have been universalized using a combination of utilitarianism and Kant’s deontology. (In other words: “…moral words have … a commendatory or condemnatory or in general prescriptive force which ordinary descriptive words lack…”: Hare, 1981: 71)

His reasoning goes something like this: Because I would prefer it if nobody harms my body, which I definitely do prefer, I am logically obliged to insist that this not happen. That is to say, it seems intuitively logical to me that other people MUST NOT harm my body. The same principle would apply to my property, possessions, and relatives, and so on. Moreover, since I feel this way, it seems to me that other humans would feel likewise, and I can collect evidence about this if I wish, and indeed can hardly avoid collecting such evidence. The next step is to fully identify with other individuals in my life who are subject to some threat, as if I might be in their position; and, by empathizing with them I will now want for them what they (implicitly) want for themselves: to escape from that threat. (This is the ethic of care – Chaffee, 1998: 330). Therefore, it would be wrong of me to inflict upon them anything that I would not wish them to inflict upon me. (And this is the ethic of justice – Chaffee, 1998: 228). From this we get to the stage of working out the ‘universal preferences’ of all individuals, as if we knew their minds, and to then prescribe that they MUST be treated in this way, and MUST NOT be treated in some other way. (Of course, this kind of reasoning often breaks down; as we do not all share the same preferences in practice. [However, there is probably a good deal of overlap in the preferences of most individuals in relation to certain major possibilities, such as assault, pain, pleasure, death, and so on. {Which is not to deny that there are very real cultural and sub-cultural, as well as some psychological, differences of taste}]).

However, the musts described in the previous paragraph are not the musts of modal logic: (Hare 1981: 23). They are the musts of logically derived prescriptions based originally on preferences and emotional commitments. In addition, just as prescriptions are not descriptions, moral rightness is not empirical correctness:
“Nowhere here can we find a way of checking the correctness of any moral opinions. To suppose that we can is to confuse moral philosophy with various kinds of empirical science such as anthropology and linguistics, as some moral philosophers appear to be doing…” (Hare 1981: 15).

In chapter 2, Hare deals with moral conflicts, or conflicts between two moral intuitions, such as I experience in relation to problems 6 and 7 in my problem list. That is to say, generically, if my solution to problem 7 is to proceed with action x, then I am in default of my preference expressed in problem 6, and vice versa. This is perhaps the strongest case that we can make for having a system of critical thinking about moral intuitions, which is what Hare’s method of moral thinking is essentially about. That is to say, if our innate tendencies and our earliest education result in us having moral intuitions that guide our actions for the rest of our lives, this may work perfectly well if we always and only have one intuition in any one situation. But what shall we do when we find ourselves experiencing conflicting intuitions, and thus conflicting action-tendencies, in a given situation? We clearly need to be able to evaluate the two intuitions, and choose between them on some basis. And the basis chosen by Hare (1981) is utilitarianism, supported, for very good reasons, by Kant’s universalization rule.

Hare (1981: 25) proposed a two-level system of moral thinking: the intuitive and the critical. He traces such a system back to Plato and Aristotle, and speaks approvingly of Plato’s distinction between “knowledge and right opinion”. This refers back to Plato’s analogy of the cave, and the shadows which normal humans take to be realities. Plato labelled the misperceptions of the cave dwellers as (mere) “right opinion”, based on imperfect senses. And he contrasted these distorted perceptions against the “true perceptions” of the rationalist philosopher, which he labelled as “knowledge”. I have to reject this view because, in Plato’s theory, true knowledge was derived entirely from introspective reasoning, and was not in any way “sullied” by experience, or what we would today call “empirical data”. This makes the two levels problematical for me.

However, if we detach this model from Platonic idealism, we can say that all humans seem to have intuitions, or ‘automatic thoughts’, and urges, when faced with what we call a ‘moral dilemma’. We have a sense of what would be the ‘right’ thing to do, and this seems to be strongly shaped, or even originally constructed, by our family group and society, with our participation. Moreover, we probably have some innate tendencies that get shaped in that socialization process. (In the case of counsellors and therapists, our intuitions also seem, often or mainly, to be influenced by professional codes of ethics. However, this is not always the case, otherwise we would not have the occasional scandals about counsellors and therapists exploiting or abusing a client that we regrettably do have).

So what then is Hare’s (1981) model of critical reflection upon moral intuitions? What are its main elements? And how is it to be applied? Can I apply it to my eight (or later, nine) ethical dilemmas?

Irritatingly, his next discussion point is not the critical thinking model at all, but rather experimenting with the idea of dispensing with that model, and simply juggling the two moral intuitions that are in conflict. In my case, his advice would amount to this. Let us say that my current position is: proceed with act 6 (which disallows act 7); and proceed with act 7 (which disallows act 6); and these are clearly in conflict. Hare (1981) suggests that I could identify which of these actions would be the more harmful, and then choose as follows: e.g. “Do not proceed with act x, unless this would allow the worse act y to occur”. This would be a strategy of minimizing harm, as opposed to doing no harm; and as such, it might not be acceptable to me. Thus, a one-level system of moral reasoning does not seem to be suitable for my purposes: (Hare 1981: 32-35). This amounts to a rejection of intuitionism.

When there is a question about conflicting intuitions, it needs to be settled by appeal to factors other than our intuitions, otherwise we are stuck in our conditioning, pure and simple.
“What will settle the question is a type of thinking which makes no appeal to intuitions other than linguistic. I stress that in this other kind of thinking, no moral intuitions of substance can be appealed to. It proceeds in accordance with canons established by philosophical logic and thus based on linguistic intuitions only. …
“Critical thinking consists in making a choice under constraints imposed by the logical properties of the moral concepts and by the non-moral facts, and by nothing else. …” (Hare 1981: 40).

Hare (1981: 42) goes on to present a brief outline of his method of critical thinking, as follows:
“What critical thinking has to do is to find a moral judgement which the thinker is prepared to make about this conflict-situation and is also prepared to make about all the other similar situations. Since these will include situations in which (s/he) occupies, respectively, the positions of all the other parties in the actual situation, no judgement will be acceptable to (him/her) which does not do the best, all in all, for all the parties”.

He is here invoking Kant’s universalization principle, which is one of the three formulations of Kant’s categorical imperative (and is sometimes loosely compared to the Golden Rule): Do not act on any principle that you would not at the same time have instituted as a universal rule, to which you would personally be subject.
“Thus the logical apparatus of universal prescriptivism, if we understand what we are saying when we make moral judgements, will lead us in critical thinking (without relying on any substantial moral intuitions) to make judgements which are the same as a careful act-utilitarian would make”. This is how “…the utilitarians and Kant get synthesized”.

In this way, Hare (1981: 42-43) reveals himself to be a utilitarian who has the categorical imperative as his safety net when confronted by those who would push him into the inferred immoralities of utilitarians who would seem to be obliged to murder one person to save the lives of two. When utilitarianism gets combined with the universalization principle, this challenge to utilitarianism falls away. Moreover, in the case of implementing professional codes of counselling and therapy research ethics, this amounts to saying: Try to promote the greatest good of the greatest number of individuals and/or groups; and harm nobody.

However, I also have decided to adopt the ethical guidelines outlined in Bond (2004b), which means that I’d better learn how to think as a rule-utilitarian, in relation to the educative principles, and as a prescriptivist (in relation to the imperative principles). Thus, I will treat the imperative principles in those guidelines as non-overridable; and the educative principles as overridable in a rule-utilitarian manner.

So Hare (1981: 87) is concerned with thinking about moral issues critically and rationally; and “…rationality is a quality of thought directed to the answering of questions…”. This is something I had already sensed intuitively, and I have been striving in this direction throughout this paper. I have been struggling to formulate questions which would help me to think my way from my problem with eight (now nine) ethical dilemmas to a solution which involves being able to make justifiable or defensible decisions with which I am emotionally and intellectually satisfied.

We now come to a major statement that requires some analysis and clarification. The two phrases that I particularly want to clarify are “imposed on us” and “to have a preference is to accept a prescription”:
“We shall see that the method of critical thinking which is imposed on us by the logical properties of the moral concepts requires us to pay attention to the satisfaction of the preferences of people (because moral judgements are prescriptive, and to have a preference is to accept a prescription); and to pay attention equally to the equal preferences of those affected (because moral principles have to be universal and therefore cannot pick out individuals)”.

Firstly, the idea that anything is “imposed on us” is a particularly opaque claim. There are, for example, amoralists: individuals who refuse to conform to anybody’s moral prescriptions; and thus it can be said that it is not easy (and certainly not effortless) to impose anything upon anybody in a totally reliable way. Nevertheless, we do have a number of psychological experiments – e.g. Milgram (XXXX) and Zimbardo (XXXX) – which imply that we humans are particularly biddable and conformist. But it cannot be “the logical properties” of anything that impose anything upon us, but rather the “[logical] beliefs” of our earliest carers, and our most enduring social milieu. Certainly, there seem to be “categories of the mind”, as claimed by Kant – inherent senses of space, time and causality. This view has been expanded and elaborated by Korzybski (1933) and Nierenberg (1982). These authors maintain that we have innate senses of structure, order and relations that we impose upon our sensate experiences of the world from the very beginning of our lives. This process precedes, or perhaps more accurately facilitates, Piaget’s sensorimotor stage of learning: (Wood, 1988).

Thus, I am unwilling to go along with Hare’s (1981) inference that there is a kind of “objective imperative” that we adopt particular ways of thinking. I do not believe that this is the case. Rather, there seems to be a “subjective sense of imperative” (derived from ‘sensibilities’, including empathy, and from formal logic) that we’d better adopt some particular pro-social and generally life-affirming behaviours, and exclude, as far as possible, the anti-social and the life-denying, otherwise we shall ourselves suffer the negative consequences of anti-social and life-denying attitudes. In other words, we have a sense that we had better pay attention to the preferences of others, otherwise they are unlikely to pay attention to our preferences. But you have only to look at the average school playground to find mountains of evidence that, in the absence of social controls, there is a strong tendency among many young humans to give not one fig about the preferences of others. It is only through the successful socialization of the positive urges of the human heart that we arrive at “the logical properties of the moral concepts”, and normally as dogma rather than logical reasoning, though it can be supported by formal logical processes. That is to say, most humans have a single-level moral thinking process: they have moral intuitions that, if we are lucky, they will normally follow. That is it for most people.

“To have a preference is to accept a prescription”, according to Hare, in the quote presented above. This is all too often and sadly true-for-individuals, as opposed to objectively true. Moreover, this seems to be the basis of most human disturbance. That is to say, people find they prefer X, and then they demand that they absolutely must get X, otherwise their life is awful, and hardly worth living. The X in this case can be long, red fingernails, a fast car, a job, a sex-love partner, or love and acceptance from others in general. However, is it true to say that, logically, and rationally, to have a preference is to accept a prescription? No, I do not believe it is. For example, I would prefer it if I looked like Michael Douglas, but would it be logical for me to turn this into a prescription: “I absolutely must look like Michael Douglas!”? Of course not. However, emotionally, most humans seem to be wired up in this way. Moreover, we can fault this view by considering the question: “Is there any evidence that my preferences are valid imperatives?” Certainly, this seems not to be the case in the modal logical sense. Therefore, there seems to be a job to do here in extricating “morally logical (and therefore pro-social) decisions” from “illogical moral (and therefore neurotic) decisions”.

And just at this moment – noon, on Thursday 29th September – I had the breakthrough that I have been working towards for several months. I appreciated the following insights, which are all interrelated:
1. Hare’s (1981) arguments, while interesting, are laboured, exaggerated and faulty. His promised “method” is shallow and bitty. And although it advocates the “universalization” principle from Kant, it is far from being a universal system of thought about morality. It is a specific ideology.
2. Hare (1981) is seeking what I was seeking – a method of reasoning about ethics and morality which is defensible, at the very least. However, we were probably both aiming beyond the defensible towards the “absolute”. Hard as it is for me to admit it, given that I am a relativist in relation to empirical enquiry, I seem to be driven by an absolutist urge towards certain knowledge in the moral domain. (This is probably a hangover from my absolutist moral education at the hands of the Catholic church in Ireland, including having the De La Salle Brothers as my teachers!)
3. It was this second point that made me realize that I was making the same mistake as Plato and Descartes (among the rationalists) and J.S. Mill and others (among the empiricists), in thinking that it might be possible to develop a method of reasoning which can generate certain knowledge. I had better learn to settle for viable knowledge about human moral tendencies, and viable approaches to moral philosophizing.
4. I had not previously realized that I was seeking certain knowledge, since I know (intellectually) that morality cannot even be equated to ‘scientific’ knowledge – which is itself limited, propositional, hypothetical, and subject to challenge and change.
5. I was clear that Hare (1981) was elaborating a system of meta-ethics called “universal prescriptivism”, and I wanted to understand how I could master and apply such a system, to make this paper defensible. I had not realized until this moment that this quest was at odds with my ontology and epistemology, as outlined above. We seem to have innate tendencies, or urges, towards empathy and indifference, altruism and selfishness, and fairness/justice and unfairness/injustice. Additionally, we construct our moral philosophies based on our social “thrownness”, group negotiations and political impositions, rather than pure logic or detached reason.
6. A moment ago, I realized that Hare (1981) is advocating a form of ‘foundationalism’ – Jamieson, 1993: 481-482 – which is akin to Descartes’ attempt to develop a ‘method’ of producing ‘certain knowledge’. Given my ontology and epistemology, I cannot go along with this idea. I also cannot be satisfied with ‘coherentism’ – Jamieson, 1993: 482. Both seek a defensible set of moral beliefs, as I do, but I am increasingly aware that I had better settle for ‘pragmatically defensible’ principles, based on viable approaches, rather than a foundation or coherence that puts my principles beyond challenge! (All forms of foundationalism and coherentism are susceptible to challenge and invalidation).

The first and most important pragmatic principle that I shall invoke is Heidegger’s concept of my ‘thrownness’: that is to say that I am thrown into a particular culture which already/always has a matrix of moralities/immoralities woven into it. Therefore, I inherit a system of morality in my encounter with my parents, teachers, and so on.

The second pragmatic principle that I shall use is that, by virtue of my work and my academic role, I am already/always under pressure to accept one or more of the ethical codes of the professional bodies that have produced research guidelines: (e.g. APA, CPA, BACP). And I choose to adopt Bond (2004b) as the core of my ethical code, as modified by my personal morality, and my learning from my wider reading during the construction of this assignment.

As to Hare’s (1981) method of critical thinking, it amounts to no more than an amalgam of utilitarianism and Kant’s universalization principle. And, as I shall mention later, it is but one of a range of moral ideologies. It will be interesting to see if this approach helps me to think more effectively about my eight (or nine) ethical dilemmas; and how it may be affected or changed by my consideration of other moral ideologies.

Two days after I wrote the section above about my insights into Hare (1981), the latest tranche of articles (and one book) on the subject of ethics/morality arrived from the Inter Library Loan Service (ILLS), and gave rise to these further considerations:
Forsyth (1980) presents four distinct ethical perspectives, or ideologies, one of which seems to be somewhat close to, but still distinct from, Hare (1981). This one ideology, which seems similar to Hare, is called “absolutism”, and uses “inviolate, universal moral principles to formulate moral judgements”: (Forsyth 1980: 175). Although Hare (1981) begins from the utilitarian end of his model, and proceeds to the deontological, the effect is to produce universal prescriptions, just like those of an absolutist in Forsyth’s (1980) taxonomy. The full set of four ideologies identified and investigated by Forsyth (1980) is as follows:
(a) Situationism: “which advocates a contextual analysis of morally questionable actions”.
(b) Absolutism: as above.
(c) Subjectivism: “which argues that moral judgements should depend primarily on one’s own personal values”. And:
(d) Exceptionism: “which admits that exceptions must sometimes be made to moral absolutes”. (Forsyth 1980: 175-176).

It is not that Hare (1981) was unaware of other ideologies; indeed, he explicitly argues against subjectivism, naturalism and amoralism in developing his own model of moral reasoning. It is just that, in attempting to elevate his own model above all others, he was in danger of misleading his readers. The specific misdirection could have been in conveying the illusion that humans mainly operate from “level one thinking” – or moral intuitions (which is probably normally correct) – and that virtually all humans could learn to operate out of “level two thinking”, or reasoning from “universal prescriptivism”, in choosing courses of action in difficult situations of conflicting principles. This would involve reasoning about consequences and intentions, but in a fundamentally absolutist manner – the imperative backstop. And this ignores the apparent fact that there are at least three other common ideologies, and that these are unlikely to disappear because Hare (1981) has advocated a single ideology.

So, at the very least, there seem to be four ethical ideologies, one of which, absolutism, seems close to Hare’s (1981) approach. In an earlier paper, in 1978, “Forsyth … presents evidence suggesting that the ethical ideology people adopt influences their moral judgements”. (Forsyth, 1980: 182). However, strangely enough, the ideology they adopt does not reliably predict their behaviour! (Forsyth, 1980: 182). Therefore, there are probably at least four or five ethical ideologies; and probably none of them significantly affects human behaviour, which seems to be more strongly controlled from the emotional/motivational level. (Source: XXXX).

It also may be that individuals’ ethical ideologies change from context to context. This position comes from my own experience of experimenting on myself. This came about because I have already used Forsyth’s Ethics Position Questionnaire, which is used to allocate individuals to one of his four ethical ideologies. I found this questionnaire on the internet during the month of June 2005, and completed it on the basis of three conditions:
CONTEXT AND ELABORATION
· Free (Free Condition): Meaning, nobody else will see my results, and I cannot therefore be judged (morally) by anybody.
· Academic (Responding to academic enquiry): Under ‘interrogation’ by the University of Manchester, which might be concerned about my ability to offer protection to my research participants?
· Professional (Responding to professional body enquiry): Under ‘interrogation’ by one of my professional associations, which might be ‘frightened’ by ‘too much relativism’?

Here is the questionnaire:

Ethics Position Questionnaire
Items from the EPQ were originally published in Forsyth, D. R. (1980). A taxonomy of ethical ideologies. Journal of Personality and Social Psychology, 39, 175-184. The original response scale used was a 9-point scale, although the version presented here uses only a 5-point scale. Idealism scores are calculated by summing responses from items 1 to 10. Relativism scores are calculated by summing responses from items 11 to 20. As this scale is used for research purposes primarily, normative data are not available.

Please indicate if you agree or disagree with the following items.

Each represents a commonly held opinion and there are no right or wrong answers. We are interested in your reaction to such matters of opinion.

Rate your reaction to each statement by writing a number to the right of each statement where:
1 = Disagree Strongly
2 = Disagree
3 = Neutral
4 = Agree
5 = Agree Strongly




MORAL PRINCIPLES AND RULES

My scores under the four conditions – (1) Free, (2) Academic, (3) Professional, (4) October 5th 2005 – are shown in square brackets after each item, in that order.

1. People should make certain that their actions never intentionally harm another even to a small degree. [5, 5, 5, 5]
2. Risks to another should never be tolerated, irrespective of how small the risks might be. [3, 1, 5, 3]
3. The existence of potential harm to others is always wrong, irrespective of the benefits to be gained. [1, 1, 5, 2]
4. One should never psychologically or physically harm another person. [5, 5, 5, 5]
5. One should not perform an action which might in any way threaten the dignity and welfare of another individual. [5, 4, 5, 1]
6. If an action could harm an innocent other, then it should not be done. [5, 4, 5, 4]
7. Deciding whether or not to perform an act by balancing the positive consequences of the act against the negative consequences of the act is immoral. [5, 1, 2, 2]
8. The dignity and welfare of the people should be the most important concern in any society. [5, 4, 5, 5]
9. It is never necessary to sacrifice the welfare of others. [2, 5, 5, 2]
10. Moral behaviours are actions that closely match ideals of the most "perfect" action. [4, 5, 5, 2]

Subtotal 1 (idealism, items 1-10)*: 40/50, 34/50, 47/50, 31/50

11. There are no ethical principles that are so important that they should be a part of any code of ethics. [1, 1, 1, 2]
12. What is ethical varies from one situation and society to another. [3, 4, 1, 3]
13. Moral standards should be seen as being individualistic; what one person considers to be moral may be judged to be immoral by another person. [4, 2, 1, 2]
14. Different types of morality cannot be compared as to "rightness." [4, 5, 1, 2]
15. Questions of what is ethical for everyone can never be resolved since what is moral or immoral is up to the individual. [5, 1, 1, 2]
16. Moral standards are simply personal rules that indicate how a person should behave, and are not to be applied in making judgments of others. [4, 1, 1, 2]
17. Ethical considerations in interpersonal relations are so complex that individuals should be allowed to formulate their own individual codes. [2, 1, 1, 5]
18. Rigidly codifying an ethical position that prevents certain types of actions could stand in the way of better human relations and adjustment. [2, 1, 1, 5]
19. No rule concerning lying can be formulated; whether a lie is permissible or not permissible totally depends upon the situation. [5, 5, 5, 5]
20. Whether a lie is judged to be moral or immoral depends upon the circumstances surrounding the action. [5, 5, 1, 4]

Subtotal 2 (relativism, items 11-20)*: 35/50, 26/50, 14/50, 32/50

Ratio of “idealism” to “relativism”*: 40:35, 34:26, 47:14, 31:32

* The subtotals and the ratio line do not form part of the standard questionnaire. They have been added here to facilitate the computation of my results.
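For completeness, the scoring arithmetic is simple enough to set out as a short program. The sketch below (written in Python purely for illustration; the function name and the script are my own, and do not come from Forsyth’s published materials) applies the rule quoted above – sum items 1 to 10 for idealism, and items 11 to 20 for relativism – to my “Free” condition responses.

```python
# A scoring sketch for the Ethics Position Questionnaire, following the rule
# described above: idealism = sum of responses to items 1-10; relativism = sum
# of responses to items 11-20 (on the 5-point version used here).

def score_epq(responses):
    """Return (idealism, relativism) subtotals for a list of 20 item responses."""
    if len(responses) != 20:
        raise ValueError("The EPQ has 20 items.")
    idealism = sum(responses[:10])    # items 1-10
    relativism = sum(responses[10:])  # items 11-20
    return idealism, relativism

# Example: my responses in the "Free" condition, taken from the table above.
free_condition = [5, 3, 1, 5, 5, 5, 5, 5, 2, 4,
                  1, 3, 4, 4, 5, 4, 2, 2, 5, 5]

idealism, relativism = score_epq(free_condition)
print(f"Idealism: {idealism}/50, Relativism: {relativism}/50, "
      f"ratio {idealism}:{relativism}")   # -> Idealism: 40/50, Relativism: 35/50
```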

Let us assume that this questionnaire provides reliable evidence concerning my ethical ideology – in four different contexts – and see what this “proves” about me. Firstly, it would seem from the Free-condition scores (40:35) that I have a slight preference, on average, for idealism over relativism – 40/75, or 53.3% to 46.7%. I was travelling on a train to Manchester, with no witnesses to my selections, when I marked this first set of responses. This score might seem to suggest that, in Forsyth’s (1980, 1993) terms, I am ideologically a Situationist: high on idealism and high on relativism. However, his definition, in Forsyth (1993: 299), does not seem to fit my self-perception: “Approach to Moral Judgement: Reject moral rules; ask if the action yielded the best possible outcome in the given situation”. The first part of this approach seems wrong for me, while the second seems broadly correct.

Secondly, I began to wonder whether I might mark this questionnaire differently if asked to do so by William West or Clare Lennie, who would supervise and mark my work. This result is shown in the Academic condition (34:26), which suggests that my preference for deontology/idealism is about 34/60, or 56.7% against 43.3%. This is not very different from my “free” decisions, although the precise ratings of individual items differ in 14 out of 20 cases. And again, looking at Forsyth (1993), this time it seems I might fit the category of “Exceptionist”: “Approach to Moral Judgement: Feel conformity to moral rules is desirable, but exceptions to these rules are often permissible”. Again, the first part of this approach seems broadly correct for me, but the second part does not fit so well. For example, I do not think I would want to allow any exception to the rule “Thou shalt not kill” – meaning that I would never deliberately decide, as a civilian, to kill an innocent civilian; which is a separate matter from the whole “justified war” argument. It is when we move away from the big issues of murder, rape, and so on that morality becomes much more difficult, and here I am in favour of agreeing a set of appropriate rules to suit the situation, by some kind of social agreement. Therefore, I am less an Exceptionist or Situationist than a social constructionist. And this implies that there are more ideologies than the four identified by Forsyth and colleagues.

Thirdly, I wondered whether I might answer these questions differently if the International Society of Professional Counsellors (ISPC) were interrogating me concerning my ethical commitments. This result is shown in the Professional condition (47:14), which suggests that my preference for deontology/idealism over relativism is about 47/61, or 77% to 23%! This is a significant increase over the previous two results, suggesting that I would be very cautious in answering these questions in a context in which I might be judged harshly, on my paper results, by individuals who do not know the “real me”. Moreover, being judged harshly by the ISPC could have serious professional consequences for my work and me. According to Forsyth (1993: 299), this result makes me an “Absolutist”: high on idealism and low on relativism. “Approach to Moral Judgement: Feel actions are moral provided they yield positive consequences through conformity to moral rules”. This is also the position of Hare (1981). And, strangely enough, I cannot fault this as a statement of my own position! However, if somebody accidentally produced a negative result while pursuing a moral course of action, I would make an exception and judge his or her behaviour to be moral, which would make me an Exceptionist. Moreover, I feel least affinity with the positions of the Situationist and the Subjectivist, both of whom reject moral rules.

Fourthly, I decided two days ago, on October 5th 2005, to check whether my preferences have changed significantly now that I have done a considerable amount of reading and thinking about ethical and moral issues. I knew that these results would be incorporated in this relatively public document. This result is shown in the fourth condition (31:32), and suggests that my preferences have changed slightly, so that I now seem to have a very slight preference for relativism over idealism/deontology, of about 32/63, or 50.8% to 49.2%. Thus, I seem to be almost equally balanced between deontological idealism and (cultural) relativism. And this, in Forsyth’s (1993: 299) view, makes me an Exceptionist; and I have found a way to come to terms with this approach to moral judgement, by seeing that “exceptions to these rules are often permissible” if it can be shown that some particular rule or rules are not directly relevant to the particular case in hand.
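Forsyth’s taxonomy, as I understand it, simply crosses high versus low idealism with high versus low relativism to yield the four labels: Situationist (high/high), Absolutist (high/low), Subjectivist (low/high) and Exceptionist (low/low). The sketch below (again in Python, and again my own illustration rather than Forsyth’s published scoring procedure) makes that quadrant logic explicit. Since no normative data are available for this 5-point version, I have simply assumed the scale midpoint as the cut-off between “high” and “low”; several of my scores sit close enough to that line that a different cut-off would assign them to a different quadrant.

```python
# A rough sketch of Forsyth's (1980) fourfold taxonomy, which crosses high/low
# idealism with high/low relativism. ASSUMPTION: because no normative data are
# available for this 5-point version, the scale midpoint (30 out of 50) is used
# here as an arbitrary, purely illustrative cut-off between "high" and "low".

def classify(idealism, relativism, cutoff=30):
    high_idealism = idealism > cutoff
    high_relativism = relativism > cutoff
    if high_idealism and high_relativism:
        return "Situationist"   # high idealism, high relativism
    if high_idealism:
        return "Absolutist"     # high idealism, low relativism
    if high_relativism:
        return "Subjectivist"   # low idealism, high relativism
    return "Exceptionist"       # low idealism, low relativism

# Two of my results, scored with this illustrative cut-off:
print(classify(40, 35))   # Free condition         -> Situationist
print(classify(47, 14))   # Professional condition -> Absolutist
```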

The next question, however, is this: is this questionnaire really measuring anything; and is it doing so accurately? I have some reservations. The first problem is that, in a minority of these 20 statements – items 2, 13, 15, 16 and 19 – the statement comes in two parts, and, while I might want to agree with only one part, endorsing the item necessarily commits me to the other part as well. The second problem is that some of the phrases are quite vague. For example: “harm another”; “risks”; “potential harm”; “perform an action”; “harm an innocent other”; “sacrifice the welfare of others”. Each of these phrases can be seen to represent a continuum, from the most minor scratch or momentary mental agitation to a fatal wound or severe psychological trauma. Where on that scale of seriousness was I focusing my awareness when I chose my answers? (I did not know then, and I do not know now!) Moreover, where were you (the reader) focusing on that same scale as you read my answers? (Did you know, or do you now know? And how does your location on that scale relate to where I was when I chose my answer? You cannot know!) This is a major problem in terms of interpreting what it “means” that I responded to the statements in one particular way; and an even greater problem given that I have responded to each statement in four different ways on four different occasions, with different audiences in mind on three of them.

Forsyth and colleagues have done a good deal of practical experimentation relating ethical ideologies to individual judgements and individual behaviours. Forsyth (1980: 182) indicates that Forsyth (1978) “…presents evidence suggesting that the ethical ideology people adopt influences their moral judgements”. Forsyth (1980: 182) also refers to evidence from Forsyth and Berger (1982) which “…indicates that ethical ideology does not predict moral behaviour”! Forsyth (1993) fairly convincingly demonstrated this in a laboratory experiment on individuals’ reactions to success or failure when working for themselves or for a noble charity.

Interestingly enough, three new papers arrived from the ILLS soon after I completed the previous section, on the Ethics Position Questionnaire. One of these – Forsyth and Nye (1990: 398-399) – supports my finding that my ethical position changes depending upon the context, and that this is predictable. This is how they summarize the research in this area:
“Contemporary analyses of moral phenomena have increasingly emphasized the impact of interpersonal processes on individuals’ thoughts, feelings, and actions in morally toned situations (Hogan & Emler, 1978; Hogan, Johnson & Emler, 1978; Waterman, 1988). Haan (1978; 1986; Haan, Aerts & Cooper, 1985), for example, argues that individuals’ moral behavior varies because interpersonal demands vary across situations. Haan feels that moral action is ‘informed and influenced by variations in context’ and by individuals’ ‘own strategies of problem solving’ when they confront a moral dilemma (Haan, 1986, p.1282). Similarly, Kurtines, by asking individuals to predict how they would behave in various social roles, found that individuals’ use of principled moral reasoning varied across these role settings (1984, 1986). His findings prompted him to conclude that ‘the most critical conceptual limitation of individualistic orientations is their inability to provide a theoretically meaningful account of the effects of situation-related variables on decision making’ (1986, p.790)”.

…End of appendix.

