
Don't you think that if we confronted our tendencies to be selfish and self-righteous, we could get over them?

2007-01-06 00:33:33 · 6 answers · asked by Anonymous in Society & Culture Other - Society & Culture

6 answers

No way. The only way is to rid ourselves of ignorance.

2007-01-06 00:37:28 · answer #1 · answered by cool man 2 · 0 0

We are chickenhearted because we have the ability to reason out situations and the ability to keep ourselves from taking chances we should take, based on what we feel might not get ... Animals' instinct makes them fight for the strongest position; therefore the animal would be selfish/self-righteous as well ...

So I guess there's no getting over it ... live on.

2007-01-06 08:50:58 · answer #2 · answered by Chele 5 · 0 0

Music is the culprit. As much as I love it, music confuses clear thinking and creates havoc within rationality.
http://www.youtube.com/watch?v=c6cTbaBApM4
In a world without music, people would live as computers, without the knowledge of the individual and without the knowledge of status.

2007-01-06 08:37:07 · answer #3 · answered by another detroit bassist 5 · 0 0

Emotions are what make us human. Selfishness and self-righteousness are part of that.

2007-01-06 08:36:12 · answer #4 · answered by Anonymous · 0 0

No, check the questions, bud.
We could do that, but the population can't, so we help.

2007-01-06 08:39:39 · answer #5 · answered by Anonymous · 0 0

Well, you have asked a Nobel Prize question! You can't say only humans are chickenhearted emotional fools ... but yes, humans definitely have more of it. It is in the nature of a human being to create relations - positive or negative. Simultaneously, he is programmed to stay on the safer side along with his near and dear ones. These tendencies make him act sentimentally. You see, he is programmed to care (about relations and relatives).
Given below is a more scientific explanation:

Emotions play an extremely important role in human mental life – but it is not, on the face of it, clear whether this needs to be the case for AIs. Much of human emotional life is distinctly human in nature, clearly not portable to systems without humanlike bodies. Furthermore, many problems in human psychology and society are caused by emotions run amok in various ways – so in some respects it might seem desirable to create emotion-free AIs.



On the other hand, it might also be that emotions represent a critical part of mental process, and human emotions are merely one particular manifestation of a more general phenomenon – which must be manifested in some way in any mind. This is the perspective I’ll advocate here. I think the basic phenomenon of emotion is something that any mind must experience – and I will make a specific hypothesis regarding the grounding of this phenomenon in the dynamics of intelligent systems. Human emotions are then considered as an elaboration of the general “emotion” phenomenon in a peculiarly human way. There are a few universal emotions – including happiness, sadness and spiritual joy – which any intelligent system with finite computational resources is bound to experience, to an extent. And then there are many species-specific emotions, which in the case of humans include rage, joy, lust and other related feelings.





What Is Emotion?


Emotions have two aspects, which may be called hot versus cold (Mandler, 1975), or “conscious-experiential-flavor” versus “neural/cognitive structure-and-dynamics” – or, using my preferred vocabulary, qualia versus pattern. From some conceptual perspectives, the relation between the qualia aspect and the pattern aspect is problematic. I follow a philosophy in which qualia and patterns are aligned – each pattern comes along with a quale, which is more or less intense according to the “prominence” of the pattern (the degree of simplification that the pattern provides in its ground) (see Goertzel, 2004a). In this approach, the qualia and pattern aspects of emotion may be dealt with in a unified way.



So what is the general pattern of “emotion”? Dictionary definitions are not usually reliable for philosophical or scientific purposes, but in this case, a definition from dictionary.com is actually a reasonable place to start:



Emotion
A mental state that arises spontaneously rather than through conscious effort and is often accompanied by physiological changes; a feeling: the emotions of joy, sorrow, reverence, hate, and love.



One problem with this definition is its use of the mixed-up word “conscious.” I will replace this with the term “free will” which, in a recent essay, I have sought to define in a general, physiologically and computationally grounded way. Thus I arrive at a definition of an emotion as



Emotion
A mental state that does not arise through free will, and that is often accompanied by physiological changes



“Free will,” as I understand it (Goertzel, 2004b), is a complex sort of quale, consisting primarily of



the registration of an (internal or external) action in an intelligent system’s “virtual multiverse model,” roughly simultaneously with the execution of that action


This generally goes along with



the construction of causal models explaining what internal structures and dynamics caused the action


Sometimes, though, these two aspects are uncorrelated, giving the feeling of “I don’t know why I decided to do that.”



Mental states that do not arise through free will are mental states that:



Are registered in the virtual multiverse model only considerably after they have occurred, thus giving a feeling of “having spontaneously arisen”


This often goes along with



Arising through such a large-scale and complicated – or opaque – process that detailed causal modeling is difficult


But sometimes, these two aspects are uncorrelated, and one can rationally reconstruct why some spontaneous mind-state occurred, in a reasonably confident way.



What causes mental states to register in the brain’s virtual multiverse model in a delayed way? One cause might be that these mental states are ambiguous and difficult to understand, so that it takes the virtual multiverse modeler a long time to understand what’s going on – to figure out which branch has actually been traversed. Another might be that the state is correlated with physical processes that inhibit the virtual multiverse modeler’s normal “branch collapsing” activity – and that the branch-collapsing only proceeds a little later, once this inhibitory effect has diminished.



In the case of human emotions, the “accompaniment with physiological changes” mentioned in the above definition of emotion seems to be a key point. It seems that there’s a time lag between certain kinds of broadly-based physiological sensations in the human brain/body, and registration of these sensations in the human brain’s virtual multiverse modelers.



There are many reasons why this delay might occur. The phenomenon may be a combination of different factors, for instance:



Since, in a state of strong emotion, the virtual multiverse modeling system is receiving constant powerful inputs from parts of the brain/body over which it has almost no control, it doesn’t bother to carry out detailed modeling (this would be a waste of resources)
Emotions bollix the virtual multiverse modeler because they don’t exactly go forwards in time like physical movements do. Rather, each moment of an emotion helps us to interpret the previous moment as well as the following moment. What I’m feeling right now is far clearer in the light of what I’ll be feeling a moment from now. What this means is that, in the middle of an emotion, the virtual multiverse modeler doesn’t know how to branch. It can branch in a very general way – “I’m now happy, not sad” – but its detailed branching-activity is in flux.


And so, in regard to emotions, a flexibly superposed subjective multiverse is maintained, rather than a continually collapsed subjective universe that defines a single crisp path through the virtual multiverse. This helps explain both the beautiful and the confusing nature of emotions.



Regarding the second hypothesized factor, the obvious question is: Why do the broadly-based partly-physical sensations we humans call “emotions” have this strange relationship with time? This may be largely because they consist of various types of data coming in from various parts of the brain and body, with various time lags. A piece of sensation coming in from one part of the brain or body right now may have a different meaning depending on information about what’s going on in some other part of the brain or body – but this information may not be there yet. When information gathering and integration regarding a “distributed action pattern” requires this kind of temporally-diffused activity, then the tight connection between action and virtual-multiverse-model collapse that exists in other contexts doesn’t exist anymore. Ergo, no feeling of “free will” – rather, a feeling of things happening in oneself, without a correlated “decision process.” A strong emotion can make one feel “outside of time.”



Furthermore, while it’s easy to make up a high-level story as to what made one sad or happy or feel some other emotion, it’s not at all easy to make up a story regarding the details of an emotional experience. Usually, one just doesn’t know – because so many of the details of the emotional experience have to do with physiological dynamics that are opaque to the analytical brain (unless the analytical brain makes a huge, massively-effort-consuming push to become aware of these normally unconscious processes).



So we have arrived at a more specific, technical, “mechanistic” and hypothetical definition of emotion:



Emotion
A mental state marked by prominent internal temporal patterns that

are not controllable to any reasonable extent by the virtual multiverse modeling subsystem, or
have the property that their state at each time is far more easily interpretable by integration of past and future information.
Such patterns will often, though not always, involve complex and broad physiological changes.
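
To make this definition a bit more concrete, here is a minimal Python sketch of it as a predicate over patterns. The TemporalPattern fields, the thresholds and the is_emotion function are purely illustrative assumptions, not constructs from any existing system:

from dataclasses import dataclass

@dataclass
class TemporalPattern:
    # Illustrative stand-in for a "prominent internal temporal pattern" (all fields hypothetical).
    prominence: float           # degree of simplification the pattern provides in its ground
    controllability: float      # 0..1: how well the virtual multiverse modeling subsystem can steer it
    needs_future_context: bool  # is each moment far more interpretable given past and future information?

def is_emotion(p: TemporalPattern,
               prominence_threshold: float = 0.5,
               control_threshold: float = 0.2) -> bool:
    # Per the definition above: an emotion is a prominent pattern that is either
    # (a) not controllable to any reasonable extent, or
    # (b) interpretable mainly by integrating past and future information.
    if p.prominence < prominence_threshold:
        return False
    return p.controllability < control_threshold or p.needs_future_context

# A broad, hard-to-control physiological surge qualifies; a deliberate arm movement does not.
surge = TemporalPattern(prominence=0.9, controllability=0.05, needs_future_context=True)
reach = TemporalPattern(prominence=0.6, controllability=0.9, needs_future_context=False)
print(is_emotion(surge), is_emotion(reach))  # True False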





What does this mean regarding the potential experiencing of emotions by nonhuman minds? Clearly, in any case where there’s diverse and ambiguous information coming in from various hard-to-control parts of an intelligent system, one is not going to have the “usual” situation of virtual multiverse collapse. One is going to have a sensation of major patterns occurring inside one’s own mind, but without any “free will” type “decision” process going along with it. This is, in the most abstract sense, “emotion.” Emotions in this sense need not be correlated with physiological patterns, but it makes sense that they often will be.



Emotional Typology


Now we turn to the question of emotional typology. Humans experience a vast range of emotions. Will other types of minds experience completely different emotion-types, or is there some kind of general system-theoretic typology of emotions?



I think there will be a small amount of emotional commonality among various minds – certain very simple emotions have an abstract, mind-architecture-independent meaning. But the vast majority of human emotional nuance is tied to human physical embodiment and evolutionary history, and would not be emulated in an AI mind or a radically different biological species.



Any system that has a set of goals that remain constant over a period of time can experience an emotion I call “abstract happiness,” which is the emotion induced by an increasing amount of goal-achievement. On the other hand, it can also experience “abstract sadness,” i.e. the emotion induced by a decreasing amount of goal-achievement. These emotions can become quite complex, because organisms can have multiple goals, and at any one moment some goals may show increasing achievement while others show decreasing achievement.
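
As a rough illustration of this, one might encode abstract happiness and sadness as the weighted net change in goal-achievement across all of a system's goals. The sketch below is only one possible formalization; the goal names, weights and numbers are hypothetical:

from typing import Dict

def valence(prev: Dict[str, float],
            curr: Dict[str, float],
            weights: Dict[str, float]) -> float:
    # Weighted net change in achievement across all goals:
    # positive means abstract happiness, negative means abstract sadness.
    return sum(weights[g] * (curr[g] - prev[g]) for g in weights)

# Multiple goals can move in opposite directions at the same moment.
prev = {"find_food": 0.2, "stay_safe": 0.6}
curr = {"find_food": 0.7, "stay_safe": 0.5}
w = {"find_food": 1.0, "stay_safe": 2.0}
print(valence(prev, curr, w))  # 1.0*0.5 + 2.0*(-0.1), i.e. about 0.3: mildly "happy" overall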



Different flavors of happiness are then associated with different sorts of goals. For instance, there is the goal of increasing the amount of harmony between the system and the rest of the universe (defined as, say, the amount of similarity with, plus the amount of emergent pattern produced together with, the rest of the universe). What I call “spiritual joy” is the feeling of increase in inner/outer harmony – i.e., the feeling of increasing achievement of the “inner/outer harmony” goal.
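
As a crude illustration of this particular goal, harmony could be proxied by the similarity between an inner-state vector and an outer-world vector; cosine similarity is just one assumed stand-in here, and it ignores the "emergent pattern produced together" part of the definition. Spiritual joy is then the increase in that quantity from one moment to the next:

import math

def harmony(inner, outer):
    # Toy proxy for inner/outer harmony: cosine similarity between state vectors.
    dot = sum(a * b for a, b in zip(inner, outer))
    norm = math.sqrt(sum(a * a for a in inner)) * math.sqrt(sum(b * b for b in outer))
    return dot / norm if norm else 0.0

def spiritual_joy(inner_prev, outer_prev, inner_curr, outer_curr):
    # Spiritual joy as the increase in inner/outer harmony between two moments.
    return harmony(inner_curr, outer_curr) - harmony(inner_prev, outer_prev)

# Harmony rises from 0.0 to 1.0 between the two moments, so the joy signal is 1.0.
print(spiritual_joy([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 1.0]))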



But why should increasing goal-achievement cause emotion in the sense I’ve defined it above? There are two aspects to this:



Factors tied to human evolutionary history
Factors that are more based on information processing, and may apply beyond the human domain


Due to the existence of the second kind of factors, I suspect that happiness, sadness and spiritual joy are emotions with some universality. Due to the first kind, the specific flavor that these general emotions have in human beings is definitely peculiarly human in character.



In humans, achieving a goal like finding sex or finding a good place to sleep or killing prey or producing babies naturally induces broad and uncontrollable physiological changes. Achieving more abstract goals, in humans, tends to associatively bring forward patterns and processes associated with achieving these simpler primordial goals – thus activating broad patterns of physiological activity in ancient parts of the brain, and other parts of the body.



The evolutionary-history-bound nature of human emotions is well depicted in a snatch of dialogue from William Gibson’s novel Pattern Recognition (2003, p. 69) – a discourse by an advertising executive on the importance of humans’ odd cognitive architecture for his trade:





“It doesn’t feel so much like a leap of faith as something I know in my heart.” …

“The heart is a muscle,” Bigend corrects. “You ‘know’ in your limbic brain. The seat of instinct. The mammalian brain. Deeper, wider, beyond logic. That is where advertising works, not in the upstart cortex. What we think of as ‘mind’ is only a sort of jumped-up gland, piggybacking on the reptilian brainstem and the older, mammalian mind, but our culture tricks us into recognizing it as all of consciousness. The mammalian spreads continent-wide beneath it, mute and muscular, attending its ancient agenda. And makes us buy things.”



“… [A]ll truly viable advertising addresses that older, deeper mind, beyond language and logic.”





What of specific human emotions like lust, rage and fear? Clearly these exist because we have specific physiological response systems for dealing with specific situations. Fear activates flight-related subsystems; rage activates battle-related subsystems; lust activates sex-related subsystems. Each of these body subsystems, when activated, floods the brain with intensive and diverse and hard-to-process stimuli, which are beyond the control of “free will” related processes. Many of the responses of these body subsystems are fast – too fast for virtual multiverse modeling to deal with. They’re fast because primordially they had to be fast – you can’t always stop to ponder before running, attacking or mating.



Clearly, a large portion of human emotion has to do with the virtual multiverse modeler’s difficulties in modeling actions that come from the “older, deeper mammalian mind” and the yet more archaic reptilian brainstem. Yet, this kind of awkward fusion of old and new brains is not the sum total of emotion, human or otherwise. Let’s return to the notion of abstract happiness as emotion which accompanies goal-achievement. When a human achieves a goal, the mammalian cortex responds in much the same way as it responds to the achievement of goals like finding food, getting sex, escaping from an enemy, or winning a fight. But the induction of these mammalian circuits is not the only reason for the virtual multiverse modeler to get confused into relative inactivity. There is also the fact that when a goal is achieved, not by a specific localized action, but by a complex coordinated activity pattern among many system components, this activity pattern may well have the property of being hard to model by the virtual multiverse modeler subsystems. So, peculiarities of human evolution aside, it seems some kinds of goal achievement are more likely to cause emotion than others, purely on information-processing grounds.





AI Emotions


There’s no doubt that, unless an AI system is given a mammal-like motivational system, its emotional makeup will vastly differ from that of humans. An AI system won’t necessarily have strong emotions associated with battle, reproduction or flight. Conceivably it could have subsystems associated with these types of actions, but even so, it could be given a much greater ability to introspect into these subsystems than humans have in regard to their analogous subsystems.



Overall, my conclusion about AI emotions is that:



AI systems clearly will have emotions
Their emotions will include, at least, happiness and sadness and spiritual joy
Generally AI systems will probably experience less intense emotions than humans, because they can have more robust virtual multiverse modeling components, which are not so easily bollixed up – so they’ll less often have the experience of major non-free-will-related mental-state shifts
Experiencing less intense emotions does not imply experiencing less intense states of consciousness. Emotion is only one particular species of state-of-consciousness.
The specific emotions AI systems will experience will probably be quite different from those of humans, and will quite possibly vary widely among different AI systems
If you put an AI in a human-like body with the same sorts of needs as primordial humans, it would probably develop very similar emotions to the human ones


It’s interesting to consider these issues in terms of the specific structures and dynamics of the Novamente AI system (Goertzel et al, 2003). In this context, a specific prediction made by the present theory of emotions is that complex map dynamics will be more associated with emotions than other aspects of Novamente cognition. Complex map dynamics involve temporal patterns that are hard to control, and whose patterns are subtle enough that the present moment is much better understood once one knows the immediate future. One may infer from this a possible major feature of the difference between Novamente psychology and human psychology: the strongest emotions of a Novamente system may be associated with the most complexly unpredictable cognitions it has – rather than, as in humans, with phenomena that evoke the activities of powerful, primordial, opaque-to-cognition subsystems.



On the other hand, what can we say about emotions in the case of a hybrid human-computer intelligence architecture like the “global brain mindplex” posited in (Goertzel, 2003a)? In this case, it seems, the main source of difficult-to-model unpredictability in the mindplex’s mind will be the human component. Thus, the subjective experience of a global brain mindplex would likely be one of continually being swung around by strong emotions, corresponding to complex patterns of change in the human mass mind.

2007-01-06 08:39:11 · answer #6 · answered by Rohit S 2 · 0 0
