He's pissing me off 'cause I don't understand him, and I have to write a paper on him and his view of immortality...
>>By me (Thursday, 10 Apr 2003 06:31)
I recently read an article that gave new life to a few of Hume's ideas. The article discussed a branch of "neuroethics" that attempts to locate the regions of the brain used when making "moral" decisions. Using hypothetical scenarios and MRI technology, scientists have succeeded in narrowing down the regions involved, and the results suggest a general pattern across all subjects. One of the primary issues in the article is the historical view that logical thinking and emotional experience are separate functions of the brain. What researchers and theorists are coming to realize is that rational functions in the brain begin with an emotional experience that guides our decision making. This is a point Hume tried to make in contrast to previous philosophers. Kant, perhaps our most prolific semi-recent moralist, held to the separation model and left all moral decisions to be settled by reason alone. He believed that moral truths were objective and universally agreeable. Hume argued that the separation was an illusion: rational thought merely appears objective, and was therefore typically believed to be so.
The article raises a few fascinating thoughts. First, it proposes an argument against innate "good" or "bad." If a thought can be located in the brain, and that location corresponds to electrical activity, then the only culprit for bad behavior is the electrical activity itself, hardly something you can rightly blame. This opens the door to a new view of morality which, to me, resembles Locke's "tabula rasa." Our emphasis must then go into shaping behavior patterns, both by studying the brain itself and by observing the behavior that results from controlled stimuli. We must also acknowledge how behavior is affected by our experience, especially with regard to emotional development. If the researchers are on the right path, we must treat emotional development as the foundation on which all moral decisions are made.

Secondly, the author discusses how we react differently to moral dilemmas depending on exactly what it is we are being asked to do. For instance, most subjects found it far easier to make moral decisions when they would not have to hurt anyone directly, whereas they hesitated and showed less stable brain patterns when the decision involved hurting somebody directly. Theorists attribute this difference in brain activity, between two situations with the same outcome but different means of reaching it, to our evolutionary development. They argue that through evolution humans have become hardwired to experience negative emotions when forced to potentially hurt another human being. Decisions involving modern ideas or technologies, like flipping a switch to save lives, are not hardwired into our brains and are therefore easier to make. So it is up to us to decide. Is it easier, in your mind, to flip a switch that diverts a runaway train from killing five people but will in the end kill one person, or to push one man in front of that train to stop it from killing the five?
Same result, different actions.
>>By Hume Ungus (Friday, 2 Apr 2004 20:53)