
This is only a small portion of what I have to know for my experimental psych test tomorrow. It's over 5 chapters so I was going to ask a question about a couple, but I figured I'd put them all since I don't really understand them.

1. confounding variable
2. content validity
3. construct validity
4. order effect
5. practice effect
6. instrumentation
7. pilot study
8. hawthorne effect
9. stratified random sampling
10. content analysis

2007-02-28 14:26:48 · 2 answers · asked by Anonymous in Social Science Psychology

2 answers

Confounding variable- A lurking variable (confounding factor or variable, or simply a confound or confounder) is an extraneous variable in a statistical or research model that affects the dependent variables in question but has either not been considered or has not been controlled for. The confounding variable can lead to a false conclusion that the dependent variables are in a causal relationship with the independent variable. Such a relation between two observed variables is termed a spurious relationship. An experiment that fails to take a confounding variable into account is said to have poor internal validity.

For example, ice cream consumption and murder rates are highly correlated. Now, does ice cream incite murder or does murder increase the demand for ice cream? Neither: they are joint effects of a common cause or lurking variable, namely, hot weather. Another look at the sample shows that it failed to account for the time of year, including the fact that both rates rise in the summertime.

In statistical experimental design, attempts are made to remove lurking variables such as the placebo effect from the experiment. Because we can never be certain that observational data are not hiding a lurking variable that influences both x and y, it is never safe to conclude that a linear model demonstrates a causal relationship with 100% certainty, no matter how strong the linear association.
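The ice cream/murder pattern described above is easy to reproduce in a small simulation (all numbers and variable names here are invented for illustration): a single lurking variable drives two otherwise unrelated outcomes, and the two outcomes come out strongly correlated even though neither causes the other.

```python
# Hypothetical sketch: hot weather (the confounder) drives both ice cream
# sales and murder rates, producing a spurious correlation between them.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
temperature = [random.uniform(0, 35) for _ in range(1000)]   # the lurking variable
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temperature]
murders = [0.5 * t + random.gauss(0, 5) for t in temperature]

# Strong positive correlation (roughly 0.7 with these made-up parameters),
# even though neither outcome causally influences the other.
print(round(pearson(ice_cream, murders), 2))
```

Regressing one outcome on the other without conditioning on temperature would suggest a causal link that does not exist, which is exactly the trap described above.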

There has been a lot of work on criteria for causality in science. A set of causal criteria was proposed by Austin Bradford Hill in a 1965 paper. Many working epidemiologists take these as a good place to start when considering confounding and causation.

Anecdotal evidence doesn't take account of confounders.

Content Validity- In psychometrics, content validity (also known as logical validity) refers to the extent to which a measure represents all facets of a given social concept. For example, a depression scale may lack content validity if it only assesses the affective dimension of depression but fails to take into account the behavioral dimension. An element of subjectivity exists in determining content validity, which requires a degree of agreement about what a particular personality trait such as extroversion represents. Disagreement about a personality trait will prevent a high content validity from being achieved.[1]

Content validity is related to face validity, though content validity should not be confused with face validity. The latter is not validity in the technical sense; it refers, not to what the test actually measures, but to what it appears superficially to measure. Face validity pertains to whether the test "looks valid" to the examinees who take it, the administrative personnel who decide on its use, and other technically untrained observers. Content validity requires more rigorous statistical tests than face validity, which only requires an intuitive judgement. Content validity is most often addressed in academic and vocational testing, where test items need to reflect the knowledge actually required for a given topic area (e.g., history) or job skill (e.g., accounting). In clinical settings, content validity refers to the correspondence between test items and the symptom content of a syndrome.

One widely used method of measuring content validity was developed by C. H. Lawshe. It is essentially a method for gauging agreement among raters or judges regarding how essential a particular item is. Lawshe (1975) proposed that each rater on the judging panel respond to the following question for each item: "Is the skill or knowledge measured by this item essential, useful but not essential, or not necessary to the performance of the construct?" According to Lawshe, if more than half the panelists indicate that an item is essential, that item has at least some content validity. Greater levels of content validity exist as larger numbers of panelists agree that a particular item is essential. Using these assumptions, Lawshe developed a formula termed the content validity ratio:

CVR = (ne - N/2) / (N/2)
where CVR = content validity ratio, ne = number of panelists indicating "essential", and N = total number of panelists.
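The ratio is simple enough to compute directly; here is a minimal sketch (the function name and panel sizes are made up for illustration):

```python
# Minimal sketch of Lawshe's content validity ratio (CVR) for a single item.
def content_validity_ratio(n_essential, n_panelists):
    """CVR = (ne - N/2) / (N/2); ranges from -1 (none say essential)
    to +1 (all say essential), with 0 at exactly half the panel."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Example: 8 of 10 panelists rate the item "essential".
print(content_validity_ratio(8, 10))  # 0.6
```

Per Lawshe's rule of thumb above, any CVR greater than 0 (more than half the panel saying "essential") indicates at least some content validity for that item.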

Construct validity- In social science and psychometrics, construct validity refers to whether a scale measures the unobservable social construct (such as "fluid intelligence") that it purports to measure. It is related to the theoretical ideas behind the personality trait under consideration; a non-existent concept in the physical sense may be suggested as a method of organising how personality can be viewed.[1] The unobservable idea of a unidimensional easier-to-harder dimension must be "constructed" in the words of human language and graphics.

A construct is not restricted to one set of observable indicators or attributes; it is common to a number of sets of indicators. Thus, "construct validity" can be evaluated by statistical methods that show whether or not a common factor can be shown to exist underlying several measurements using different observable indicators. This view of a construct rejects the operationist view that a construct is neither more nor less than the operations used to measure it.

Evaluation of construct validity requires examining the correlation of the measure being evaluated with variables that are known to be related to the construct purportedly measured by the instrument being evaluated or for which there are theoretical grounds for expecting it to be related (Campbell & Fiske, 1959). Correlations that fit the expected pattern contribute evidence of construct validity. Construct validity is a judgment based on the accumulation of correlations from numerous studies using the instrument being evaluated.

Order Effect- I did not find anything on this definition

Practice Effect- I did not find anything on this definition

But I did find this passage... it might help and it might not:

Affect theory- In psychology, affect is an emotion or subjectively experienced feeling. Affect theory is a branch of psychoanalysis that attempts to organize affects into discrete categories and connect each one with its typical response. So, for example, the affect of joy is observed through the reaction of smiling. These affects can be identified through immediate facial reactions that people have to a stimulus, typically well before they could process any real response to the stimulus.

Affect theory is attributed to Silvan Tomkins and is introduced in the first two volumes of his book Affect Imagery Consciousness (published in 1962 and 1963 respectively).

The nine affects
These are the nine affects, listed with a low/high intensity label for each affect and accompanied by its biological expression [1]:

Positive:

Enjoyment/Joy - smiling, lips wide and out
Interest/Excitement - eyebrows down, eyes tracking, eyes looking, closer listening
Neutral:

Surprise/Startle - eyebrows up, eyes blinking
Negative:

Anger/Rage - frowning, a clenched jaw, a red face
Disgust - the lower lip raised and protruded, head forward and down
Dissmell (reaction to bad smell) - upper lip raised, head pulled back
Distress/Anguish - crying, rhythmic sobbing, arched eyebrows, mouth lowered
Fear/Terror - a frozen stare, a pale face, coldness, sweat, erect hair
Shame/Humiliation - eyes lowered, the head down and averted, blushing

Implications

Prescriptive implications
The nine affects can be used as a blueprint for optimal mental health. According to Tomkins (1962), optimal mental health requires the maximization of positive affect and the minimization of negative affect. Affect should also be properly expressed so as to make the identification of affect possible (Nathanson 1997).

Affect theory can also be used as a blueprint for intimate relationships. Kelly (1996) describes relationships as agreements to mutually work toward maximizing positive affect and minimizing negative affect. Like the "optimal mental health" blueprint, this blueprint requires members of the relationship to express affect to one another in order to identify progress.

Descriptive implications
These blueprints can also describe natural and implicit goals. Nathanson (1997), for example, uses the affect blueprint to create a narrative for one of his patients:

I suspect that the reason he refuses to watch movies is the sturdy fear of enmeshment in the affect depicted on the screen; the affect mutualization for which most of us frequent the movie theater is only another source of discomfort for him.
and:

His refusal to risk the range of positive and negative affect associated with sexuality robs any possible relationship of one of its best opportunities to work on the first two rules of either the Kelly or the Tomkins blueprint. Thus, his problems with intimacy may be understood in one aspect as an overly substantial empathic wall, and in another aspect as a purely internal problem with the expression and management of his own affect.
Tomkins (1991) applies affect theory to religion noting that "Christianity became a powerful universal religion in part because of its more general solution to the problem of anger, violence, and suffering versus love, enjoyment, and peace." The implication is that the optimization of affect motivates the adoption of religion.

Affect theory is also referenced heavily in Tomkins's Script Theory.

Adoption of affect theory
Affect theory's use in psychoanalysis and therapy is limited, though it has gained widespread use in critical theory, particularly through the work of Eve Sedgwick and Lauren Berlant, who have written extensively about affect.



Instrumentation- Instrumentation is defined as "the art and science of measurement and control". Instrumentation can refer to the field in which instrument technicians and engineers work, or to the available methods of measurement and control and the instruments which facilitate them.

Pilot Study-

Hawthorne effect- The Hawthorne effect refers to the phenomenon that when people are observed in a study, their behavior or performance temporarily changes. Others have broadened the definition to mean that people’s behavior and performance change, following any new or increased attention. The term gets its name from a factory called the Hawthorne Works[1], where a series of experiments on factory workers were carried out between 1924 and 1932.

There were many types of experiments conducted on the employees, but the purpose of the original ones was to study the effect of lighting on workers’ productivity. When researchers found that productivity almost always increased after a change in illumination, no matter what the level of illumination was, a second set of experiments began, supervised by Harvard University professors Elton Mayo, Fritz Roethlisberger and William J. Dickson.

They experimented on other types of changes in the working environment, using a study group of five young women. Again, no matter the change in conditions, the women nearly always produced more. The researchers reported that they had accidentally found a way to increase productivity. The effect was an important milestone in industrial and organizational psychology and in organizational behavior. However, some researchers have questioned the validity of the effect because of the experiments’ design and faulty interpretations. (See Interpretations, criticisms, and conclusions below.)

The Hawthorne Experiments
Like the Hawthorne effect, the definition of the Hawthorne experiments also varies. Most industrial/occupational psychology and organizational behavior textbooks refer to the illumination studies, and usually to the relay assembly test room experiments and the bank wiring room experiments. Only occasionally are the rest of the studies mentioned.[2]


Illumination studies
The Hawthorne Works, located in Cicero, Illinois and just outside of Chicago, belonged to the Western Electric Company, and the studies were funded by the National Research Council of the National Academy of Sciences at the behest of General Electric, the largest manufacturer of light bulbs in the United States [3]. The purpose was to find the optimum level of lighting for productivity.

During two and a half years from 1924 to 1927, a series of illumination level studies was conducted [4]:

Study 1a: In the first experiment, there was no control group. The researchers experimented on three different departments; all showed an increase of productivity, whether illumination increased or decreased.
Study 1b: A control group had no change in lighting, while the experimental group got a sequence of increasing light levels. Both groups substantially increased production, and there was no difference between the groups. This naturally piqued the researchers' curiosity.
Study 1c: The researchers decided to see what would happen if they decreased lighting. The control group got stable illumination; the other got a sequence of decreasing levels. Surprisingly, both groups steadily increased production until finally the light in the experimental group got so low that the workers protested and production fell off.
Study 1d: This was conducted on two women only. Their production stayed constant under widely varying light levels. If the experimenter said brighter was better, they said they preferred the brighter light, and the brighter they believed it to be, the more they liked it; the same was true when he said dimmer was better. Even when they were deceived about a change, they said they preferred it. The researchers concluded that the women's preferences for lighting levels were entirely subjective: they preferred whatever they were told was good.
At this point, researchers realized that something else besides lighting was affecting productivity. They suspected that the supervision of the researchers had some effect, so they ended the illumination experiments in 1927.


Relay assembly experiments
The researchers wanted to identify how other variables could affect productivity. They chose two women as test subjects and asked them to choose four other workers to join the test group. Together the women worked in a separate room over the course of five years (1927-1932) assembling telephone relays.

Output was measured mechanically by counting how many finished relays each dropped down a chute. This measuring began in secret two weeks before moving the women to an experiment room and continued throughout the study. In the experiment room, they had a supervisor who discussed changes with them and at times used their suggestions. Then the researchers spent five years measuring how different variables impacted the group's and individuals' productivity. Some of the variables were:

changing the pay rules so that the group was paid for overall group production, not individual production
giving two 5-minute breaks (after a discussion with them on the best length of time), and then changing to two 10-minute breaks (not their preference). Productivity increased, but when they received six 5-minute rests, they disliked it and reduced output.
providing food during the breaks
shortening the day by 30 minutes (output went up); shortening it more (output per hour went up, but overall output decreased); returning to the earlier condition (where output peaked).
Changing a variable usually increased productivity, even if the variable was just a change back to the original condition. A common explanation is that the workers simply adapted to each new condition without knowing the objective of the experiment, and that they worked harder because they believed they were being evaluated individually on the effort they put into their work.

Researchers hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Mayo, was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment." (There was a second relay assembly test room study whose results were not as significant as the first experiment.)


Bank wiring room experiments
The purpose of the next study was to find out how payment incentives would affect group productivity. The surprising result was that they had no effect. Ironically, this contradicted the Hawthorne effect: although the workers were receiving special attention, it didn’t affect their behavior or productivity! However, the informal group dynamics studied were a new milestone in organizational behavior.

The study was conducted by Mayo and W. Lloyd Warner between 1931 and 1932 on a group of 14 men who put together telephone switching equipment. The researchers found that although the workers were paid according to individual productivity, productivity did not go up because the men were afraid that the company would lower the base rate. The men also formed cliques, ostracized coworkers, and created a social hierarchy that was only partly related to the difference in their jobs. The cliques served to control group members and to manage bosses; when bosses asked questions, clique members gave the same responses, even if they were untrue.


Mica splitting test room
In this study from 1928 to 1930, workers in the mica splitting room were paid by individual piece rate rather than by group incentives, while work environment conditions were changed to see how they affected productivity. The study lasted fourteen months, and productivity increased by fifteen percent.

Stratified random sampling- In statistics, stratified sampling is a method of sampling from a population.

When sub-populations vary considerably, it is advantageous to sample each subpopulation (stratum) independently. Stratification is the process of grouping members of the population into relatively homogeneous subgroups before sampling. The strata should be mutually exclusive: every element in the population must be assigned to only one stratum. The strata should also be collectively exhaustive: no population element can be excluded. Then random or systematic sampling is applied within each stratum. This often improves the representativeness of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population.

There are several possible strategies:

Proportionate allocation uses a sampling fraction in each of the strata that is proportional to that stratum's share of the total population. If the population consists of 60% in the male stratum and 40% in the female stratum, then a sample of five should contain three males and two females, reflecting this proportion.
Optimum allocation (or disproportionate allocation) - The sampling fraction in each stratum is proportional to the standard deviation of the variable within that stratum. Larger samples are taken in the strata with the greatest variability, to generate the least possible sampling variance.
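Proportionate allocation can be sketched in a few lines (the strata, their sizes, and the function name here are invented for illustration; note that rounding the per-stratum counts can make the total differ slightly from the requested sample size):

```python
# Sketch of proportionate allocation: each stratum contributes a sub-sample
# proportional to its share of the population, drawn at random within the stratum.
import random

def proportionate_sample(strata, total_n):
    """strata: dict mapping stratum name -> list of members.
    Returns a dict mapping stratum name -> randomly drawn sub-sample."""
    pop_size = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        k = round(total_n * len(members) / pop_size)  # stratum's proportional share
        sample[name] = random.sample(members, k)
    return sample

random.seed(1)
# 60% male / 40% female population, matching the example above.
strata = {"male": list(range(600)), "female": list(range(600, 1000))}
s = proportionate_sample(strata, total_n=5)
print({name: len(drawn) for name, drawn in s.items()})  # {'male': 3, 'female': 2}
```

For optimum allocation, the only change would be to weight each stratum's `k` by the within-stratum standard deviation of the study variable instead of (or in addition to) the stratum size.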
A real-world example of stratified sampling would be a US political survey. If the researcher wanted the respondents to reflect the diversity of the US population, they would specifically seek to include participants from groups defined by characteristics such as race or religion, in proportion to each group's share of the total population, as described above. A stratified survey could thus claim to be more representative of the US population than a survey using simple random sampling or systematic sampling.

Similarly, if population density varies greatly within a region, stratified sampling will ensure that estimates can be made with equal accuracy in different parts of the region, and that comparisons of sub-regions can be made with equal statistical power. For example, in Ontario a survey taken throughout the province might use a larger sampling fraction in the less populated north, since the disparity in population between north and south is so great that a sampling fraction based on the provincial sample as a whole might result in the collection of only a handful of data from the north.

Randomized stratification can also be used to improve population representativeness in a study.

Advantages
focuses on important subpopulations but ignores irrelevant ones
improves the accuracy of estimation
often more statistically efficient than simple random sampling for the same cost
sampling equal numbers from strata varying widely in size may be used to equate the statistical power of tests of differences between strata.

Disadvantages
can be difficult to select relevant stratification variables
not useful when there are no homogeneous subgroups
can be expensive
requires accurate information about the population, or introduces bias.

Content analysis- Content analysis (also called textual analysis) is a standard methodology in the social sciences for studying communication content. Earl Babbie defines it as "the study of recorded human communications, such as books, web sites, paintings and laws". Harold Lasswell formulated the core questions of content analysis: "Who says what, to whom, why, to what extent and with what effect?". Ole Holsti (1969) offers a broad definition of content analysis as "any technique for making inferences by objectively and systematically identifying specified characteristics of messages" (p. 14).

Description
Content analysis enables the researcher to include large amounts of textual information and systematically identify its properties, e.g. the frequencies of the most-used keywords (KWIC, "KeyWord In Context"), by detecting the more important structures of its communication content. Such textual information must be categorised according to a theoretical framework, which informs the data analysis and ultimately provides a meaningful reading of the content under scrutiny. David Robertson (1976:73-75), for example, created a coding frame for a comparison of modes of party competition between British and American parties. It was developed further in 1979 by the Manifesto Research Group, aiming at a comparative content-analytic approach to the policy positions of political parties. This classification scheme was also used by F. Carvalho [1] (2000) in a comparative analysis of the 1989 and 1994 Brazilian party broadcasts and manifestos.
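The keyword-frequency step mentioned above can be sketched as a minimal pass over a text (the sample text and the coding keywords are made up for illustration; real content analysis would tie the keyword list to a theoretical coding frame):

```python
# Minimal keyword-frequency pass, the simplest quantitative form of content analysis.
from collections import Counter
import re

def keyword_counts(text, keywords):
    """Count how often each coding keyword appears in the text,
    after lowercasing and splitting into word tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {kw: counts[kw] for kw in keywords}

# Hypothetical campaign-broadcast snippet and coding categories.
speech = "We promise jobs, jobs for all, and lower taxes. Jobs first, taxes later."
print(keyword_counts(speech, ["jobs", "taxes", "health"]))
# {'jobs': 3, 'taxes': 2, 'health': 0}
```

A KWIC tool would additionally record the words surrounding each hit, so the analyst can check that, say, "taxes" is being promised lower rather than higher before coding the theme.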

Since the 1980s, content analysis has become an increasingly important tool in the measurement of success in public relations (notably media relations) programs and the assessment of media profiles. In these circumstances, content analysis is an element of media evaluation or media analysis. In analyses of this type, data from content analysis is usually combined with media data (circulation, readership, number of viewers and listeners, frequency of publication).

The creation of coding frames is intrinsically related to a creative approach to variables that exert an influence over textual content. In political analysis, these variables could be political scandals, the impact of public opinion polls, sudden events in external politics, inflation, etc. Mimetic Convergence, created by F. Carvalho for the comparative analysis of electoral proclamations on free-to-air television, is an example of creative articulation of variables in content analysis. The methodology describes the construction of party identities during long-term party competitions on TV, from a dynamic perspective, governed by the logic of the contingent. This method aims to capture the contingent logic observed in electoral campaigns by focusing on the repetition and innovation of themes sustained in party broadcasts. According to the post-structuralist perspective from which electoral competition is analysed, party identities ('the real') cannot speak without mediation, because there is no natural centre fixing the meaning of a party structure; meaning depends instead on ad-hoc articulations. There is no empirical reality outside articulations of meaning. Reality is an outcome of power struggles that unify ideas of social structure as a result of contingent interventions. In Brazil, these contingent interventions have proven to be mimetic and convergent rather than divergent and polarised, being integral to the repetition of dichotomised worldviews.

Mimetic Convergence thus aims to show the process of fixation of meaning through discursive articulations that repeat, alter and subvert political issues that come into play. For this reason, parties are not taken as the pure expression of conflicts for the representation of interests (of different classes, religions, ethnic groups (see: Lipset & Rokkan 1967, Lijphart 1984) but attempts to recompose and re-articulate ideas of an absent totality around signifiers gaining positivity.

Content analysis has been criticised for being a positivist methodology, yet here is an example of a methodology used to organise a content analysis which is able to capture the logic of the contingent dominating the political field, enabling an analysis of the constitution of party identities from the theoretical perspective of deconstruction and theory of hegemony.

Every content analysis should depart from a hypothesis. The hypothesis of Mimetic Convergence supports the Downsian interpretation that, in general, rational voters converge toward uniform positions in most thematic dimensions. The hypothesis guiding the analysis of Mimetic Convergence between political parties' broadcasts is: 'public opinion polls on vote intention, published throughout campaigns on TV, will contribute to successive revisions of candidates' discourses.' Candidates re-orient their arguments and thematic selections in part by the signals sent by voters. One must also consider the interference of other kinds of input on electoral propaganda, such as internal and external political crises and the arbitrary interference of private interests in the dispute. Moments of internal crisis in disputes between candidates might result from the exhaustion of a certain strategy, and might consequently precipitate an inversion in the thematic flux.

As an evaluation approach, content analysis is considered to be quasi-evaluation because content analysis judgments need not be based on value statements. Instead, they can be based on knowledge. Such content analyses are not evaluations. On the other hand, when content analysis judgments are based on values, such studies are evaluations (Frisbie, 1986).

As demonstrated above, only a good scientific hypothesis can lead to the development of a methodology that allows empirical description, be it dynamic or static.


Uses of content analysis
Holsti (1969) groups fifteen uses of content analysis into three basic categories:

make inferences about the antecedents of a communication
describe and make inferences about characteristics of a communication
make inferences about the effects of a communication.
He also places these uses into the context of the basic communication paradigm.

The following table shows fifteen uses of content analysis in terms of their general purpose, element of the communication paradigm to which they apply, and the general question they are intended to answer.

2007-02-28 15:13:10 · answer #1 · answered by Lauren S 2 · 1 0

I don't know anything about psych... there are better ways to do research in a pinch.

http://en.wikipedia.org/wiki/Content_validity
that is just one of your terms, the search button is your friend!

2007-02-28 14:31:18 · answer #2 · answered by Jon 5 · 0 0
