== Clustering Survey Responses Based on Dichotomous Responses ==
I have a set of about ten questions that I would like to use to create groupings. The responses are all dichotomous (coded 1 or 2, where 1 and 2 represent differences in preference discovered through qualitative research).

The questions were provided in the form:

So far, I've looked at latent class analysis (challenging to interpret and not consistently reproducible), linear discriminant analysis, k-means clustering (not consistently reproducible), and multiple correspondence analysis (MCA provided the most interpretable results, but I'm unclear if or how one could classify respondents using the results).

What would be the most reasonable clustering method to use?

If you want to play around with my data, feel free to do so (n=799):

So, if I understand you correctly, you want to cluster 799 respondents on the basis of 10 nominal dichotomous variables? One way is to try hierarchical clustering. Recode each dichotomous variable into a pair of dummy (1 vs 0) variables and then compute the Dice (= Sørensen) coefficient between the 799 objects based on those 20 dummies. Then cluster; I'd recommend the complete linkage method (and in particular, avoid the Ward and centroid methods in your case). Hierarchical clustering is generally used for up to 300 or so objects, while you have more; but because your features are few and merely dichotomous, many objects can be expected to be identical, so the actual number of distinct entities to combine will not be large.

Besides hierarchical clustering, there are a number of other, more recent clustering techniques specially tailored for large numbers of objects, for example two-step clustering (this method uses a log-likelihood distance for nominal variables). It can take nominal variables. I analysed your data.
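The recipe above (recode to dummy pairs, Dice distance, complete linkage) can be sketched in Python with SciPy. The data here is a synthetic stand-in for the poster's file, since only the sample size (n=799) and the 1/2 coding are known:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in for the real survey file: 799 respondents,
# 10 dichotomous items coded 1 or 2.
rng = np.random.default_rng(0)
X = rng.integers(1, 3, size=(799, 10))

# Recode each item into a pair of 0/1 dummies (20 boolean columns).
dummies = np.hstack([X == 1, X == 2])

# Dice dissimilarity between respondents on the 20 dummies,
# then complete-linkage hierarchical clustering.
d = pdist(dummies, metric="dice")
Z = linkage(d, method="complete")

# Cut the dendrogram into (at most) 2 clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(labels))
```

SciPy's `"dice"` metric is the dissimilarity form of the Dice/Sørensen coefficient, so smaller values mean more similar response patterns; `fcluster` with `criterion="maxclust"` cuts the tree into the requested number of clusters.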
Hierarchical clustering as described above led to a 2- or 3-cluster solution (as suggested by the silhouette index and the cophenetic correlation, respectively). Two-step clustering with BIC-based automatic detection of the number of clusters gave a 2-cluster solution. When the 2-cluster solutions from the hierarchical and two-step methods were compared, however, they agreed on only 77% of objects, which was a bit disappointing and might point toward the 3-cluster solution as potentially better.

Have you tried a Rasch analysis to look at the item and respondent fits? You could have one or more "problem" items or respondents that are causing issues with your analysis. Respondent fits could show you where natural breaks fall between respondent clusters.

Update: having a quick look at your data in Winsteps, items B, F, and H seem to be very similar. Items D and E are similar. Items C and I look similar. Does this have any meaning to you? I have a person map as well as a text file; do you have somewhere I can send these to you? Caveat: this was a quick Rasch examination of the data and I haven't done any cleaning. However, convergence was quick (6 iterations). :)
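The two diagnostics mentioned, the silhouette index for candidate cuts and the cophenetic correlation of the dendrogram, can be sketched the same way (again on synthetic stand-in data, not the poster's actual file):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster, cophenet
from sklearn.metrics import silhouette_score

# Same synthetic stand-in data and clustering as before.
rng = np.random.default_rng(0)
X = rng.integers(1, 3, size=(799, 10))
dummies = np.hstack([X == 1, X == 2])
d = pdist(dummies, metric="dice")
Z = linkage(d, method="complete")

# Cophenetic correlation: how faithfully the dendrogram's merge heights
# reproduce the original pairwise distances.
coph_corr, _ = cophenet(Z, d)
print(f"cophenetic correlation = {coph_corr:.3f}")

# Silhouette index for the k=2 and k=3 cuts, computed on the
# precomputed Dice distance matrix.
D = squareform(d)
for k in (2, 3):
    labels = fcluster(Z, t=k, criterion="maxclust")
    if len(np.unique(labels)) > 1:  # guard against a degenerate cut
        s = silhouette_score(D, labels, metric="precomputed")
        print(f"k={k}: silhouette = {s:.3f}")
```

On the real data one would prefer the cut with the higher silhouette; the answer reports that the silhouette favoured 2 clusters while the cophenetic correlation favoured 3.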

Revision as of 18:01, 28 February 2014
