I conducted a survey (n=30) in which people rated how much they liked a product on a scale (e.g. 1-5). I then took a subset of the respondents (e.g. n=12) who meet certain criteria (e.g. they are wealthy). For both the full sample (n=30) and the subset (n=12) I calculated the mean score and a confidence interval from the standard deviation (e.g. for n=30 the mean range is 2.9-3.3, and for n=12 it is 3.1-3.8). How do I test whether the two means are statistically different? Do I just check whether the mean ranges overlap, and if they do overlap, conclude that the means are not statistically different? And what if the ranges are 2.8-3.3 and 3.3-3.7: does touching at 3.3 count as an overlap?
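In case it helps, here is a minimal sketch of how I computed the mean ranges. The scores below are made-up placeholders standing in for my actual survey data, and the helper function name (mean_ci) is just for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 scores; placeholders only, not the real survey responses.
all_scores = np.array([3, 4, 2, 5, 3, 3, 4, 2, 3, 4, 1, 5, 3, 2, 4,
                       3, 3, 5, 2, 4, 3, 3, 2, 4, 3, 5, 3, 2, 4, 3])  # n = 30
wealthy_scores = all_scores[:12]  # placeholder subset; in reality chosen by the wealth criterion

def mean_ci(x, confidence=0.95):
    """Mean and t-based confidence interval computed from the sample standard deviation."""
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))          # standard error of the mean
    h = se * stats.t.ppf((1 + confidence) / 2, df=len(x) - 1)
    return m, m - h, m + h                        # mean, lower bound, upper bound

print(mean_ci(all_scores))       # e.g. roughly (3.1, 2.9, 3.3) for the full sample
print(mean_ci(wealthy_scores))   # e.g. roughly (3.4, 3.1, 3.8) for the subset
```

So my question is whether comparing these two intervals is a valid test, or whether I need a proper significance test on the two means.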
2006-12-08 03:00:38 · 2 answers · asked by tanselmino in Science & Mathematics ➔ Mathematics