I’m struggling with a confirmatory factor analysis (CFA) for a psych test using Likert scale data. I’ve declared the items as ordered categorical and I’m using WLSMV estimation with theta parameterization. Here’s a simplified version of my code:
This results in an error about empty categories in one group. Curiously, a different grouping variable triggers the same message, yet in that case the CFA still runs.
Why does this discrepancy occur between groupings? How can I resolve it?
Yo GracefulDancer8, sounds like a real headache with that CFA! I’ve hit similar snags before. Might be worth checking how your data’s spread out in each group; sometimes one group has way fewer responses in some categories, you know? Maybe try collapsing some of those Likert categories together? Not ideal, but it could help. How big’s your sample anyway? Keep us posted!
Wow, sounds like you’re diving deep into some tricky CFA territory. I’ve run into similar headaches with ordinal data before. It’s a real pain when things work fine for the overall model but fall apart with grouping, right?
Have you considered taking a closer look at your data distribution across the different groups? Sometimes the culprit is hiding in plain sight—like one gender having way fewer responses in certain categories. Maybe try plotting the response frequencies for each item by group?
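If you’re working in R, a quick cross-tab per item does the trick (the data frame `dat`, the item names, and the grouping variable `gender` below are just placeholders for whatever is in your data):

```r
# Cross-tabulate each Likert item against the grouping variable;
# zeros in any cell flag the empty categories the estimator chokes on.
items <- c("item1", "item2", "item3")   # placeholder item names

for (it in items) {
  cat("\n---", it, "---\n")
  print(table(dat[[it]], dat[["gender"]], useNA = "ifany"))
}

# Same information visually: bar charts of response frequencies by group
library(ggplot2)
library(tidyr)

dat_long <- pivot_longer(dat, all_of(items),
                         names_to = "item", values_to = "response")
ggplot(dat_long, aes(x = factor(response))) +
  geom_bar() +
  facet_grid(gender ~ item) +
  labs(x = "Response category", y = "Count")
```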
Another thought—what about temporarily collapsing some of your Likert scale categories? I know it’s not ideal, but it might help pinpoint if sparse data is the root cause.
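If you want to try that, one low-effort way (again in R, with the same placeholder names) is to fold the sparse endpoints into their neighbours, e.g. 5 categories down to 3:

```r
# Collapse a 5-point scale to 3 points: 1-2 -> 1, 3 -> 2, 4-5 -> 3.
# Shift the break points to wherever your empty cells actually are.
items <- c("item1", "item2", "item3")   # placeholder item names

dat_collapsed <- dat
dat_collapsed[items] <- lapply(dat[items], function(x) {
  cut(as.numeric(x), breaks = c(0, 2, 3, 5),
      labels = 1:3, ordered_result = TRUE)
})

# Confirm the empty cells are gone before refitting
table(dat_collapsed$item1, dat_collapsed$gender)
```

Just keep in mind that the collapsed and uncollapsed models aren’t directly comparable, so treat this as a diagnostic rather than the final analysis.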
Oh, and just curious—how big is your sample size overall and in each group? Sometimes these issues crop up when we’re working with smaller samples.
Keep us posted on what you find out! This kind of problem-solving is what makes stats both frustrating and oddly fun, don’t you think?
I’ve noticed that problems with multi-group CFA on ordinal data are usually traceable to sparse responses in a few categories. With grouping, WLSMV estimates the thresholds separately in each group, so a category that is empty in just one group cannot have its threshold estimated there, even when the pooled frequencies look fine; different grouping variables carve the sample differently, so which cells end up empty (and how badly) changes from one grouping to the next, which would explain the discrepancy you’re seeing. It helps to cross-tabulate each item against the grouping variable to find categories with very few or zero observations. You could merge adjacent Likert categories to shore up the frequency counts. Alternatively, if adjusting the model or collecting more data isn’t feasible, a robust maximum likelihood estimator such as MLR may give more stable results.
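In case it helps, here’s roughly what the two setups look like in lavaan; I’m assuming lavaan since you mention WLSMV with theta parameterization, and the model syntax, item names, and grouping variable below are placeholders:

```r
library(lavaan)

# Placeholder one-factor model; substitute your actual syntax
model <- ' f1 =~ item1 + item2 + item3 '

# Ordinal setup as in the question: thresholds are estimated separately
# per group, so a category that is empty in any group is a problem.
fit_wlsmv <- cfa(model, data = dat,
                 ordered = c("item1", "item2", "item3"),
                 estimator = "WLSMV",
                 parameterization = "theta",
                 group = "gender")

# Robust ML alternative: MLR treats the items as continuous, so the
# 'ordered' argument is dropped and no per-group thresholds are needed.
fit_mlr <- cfa(model, data = dat,
               estimator = "MLR",
               group = "gender")

summary(fit_wlsmv, fit.measures = TRUE)
summary(fit_mlr, fit.measures = TRUE)
```

Whether MLR is defensible depends on your scale: with five or more reasonably symmetric categories it is often considered an acceptable approximation, whereas with very few or heavily skewed categories the categorical approach is usually preferred.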