
I conducted a survey (n=30) where people could give a score (e.g. on a scale of 1-5) on how much they liked a product. I then took a subset of this survey (e.g. n=12) who meet certain criteria (e.g. they are wealthy). For both the full n=30 and the n=12 subset I calculated the mean score and a confidence interval using the standard deviation (e.g. for n=30 the mean range is 2.9-3.3; for n=12 it's 3.1-3.8). How do I test whether the two means are statistically different? Is it enough to check whether the mean ranges overlap, and if they do overlap, does that make the means statistically indistinguishable? What if the ranges are 2.8-3.3 and 3.3 to 3.7, so that they touch at 3.3: does that count as an overlap?

2006-12-08 03:00:38 · 2 answers · asked by tanselmino 1 in Science & Mathematics Mathematics

2 answers

Two ways to do this: calculate confidence intervals and see if they overlap, or run the t-test mentioned in the other answer. Excel's Analysis ToolPak has built-in t-tests under Tools > Data Analysis > t-Test. Note that the "Paired Two Sample" option only applies when the same respondents appear in both groups; for two groups of different sizes, use "t-Test: Two-Sample Assuming Unequal Variances".

2006-12-08 03:19:36 · answer #1 · answered by formerly_bob 7 · 0 0
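The interval-overlap check from answer #1 can be sketched with the Python standard library. The scores below are hypothetical (the actual survey data isn't posted, only the summary ranges), and the critical values 2.045 and 2.201 are the two-sided 95% t quantiles for 29 and 11 degrees of freedom:

```python
import math
import statistics

def mean_ci(scores, t_crit):
    """95% CI for the mean: mean +/- t_crit * s / sqrt(n)."""
    n = len(scores)
    m = statistics.mean(scores)
    se = statistics.stdev(scores) / math.sqrt(n)
    return (m - t_crit * se, m + t_crit * se)

# Hypothetical 1-5 scores standing in for the survey data.
all_scores = [3, 4, 2, 3, 3, 4, 5, 2, 3, 3, 4, 3, 2, 3, 4,
              3, 3, 2, 4, 3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 4]   # n=30
subset = [4, 3, 4, 5, 3, 4, 3, 4, 3, 4, 3, 4]                 # n=12

lo1, hi1 = mean_ci(all_scores, 2.045)  # t(0.975, df=29)
lo2, hi2 = mean_ci(subset, 2.201)      # t(0.975, df=11)
overlap = lo2 <= hi1 and lo1 <= hi2
print(overlap)
```

One caution: overlapping 95% intervals do not prove the means are the same, and two means can differ significantly even when their intervals overlap slightly, so the t-test is the more reliable check.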

You can use a 2-sample t-test if you have a stats program like Minitab.

2006-12-08 03:09:56 · answer #2 · answered by Boatman 3 · 0 0
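One caveat for the questioner: a two-sample t-test assumes the two groups are independent, but the n=12 subset is part of the n=30 sample, so it is safer to compare the 12 wealthy respondents against the other 18. A minimal stdlib sketch of Welch's two-sample t statistic (again with hypothetical scores):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se2 = va / na + vb / nb
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

wealthy = [4, 3, 4, 5, 3, 4, 3, 4, 3, 4, 3, 4]                # subset, n=12
others = [3, 2, 3, 3, 2, 3, 4, 2, 3, 3, 2, 3, 3, 2, 4, 3, 3, 2]  # remaining 18

t, df = welch_t(wealthy, others)
print(round(t, 2), round(df, 1))
```

Compare the resulting t statistic against a t table at the computed degrees of freedom; Minitab or Excel's ToolPak will report the p-value directly.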
