Hey everyone, I’m scratching my head over some weird stuff happening with my factor analysis in R. I’ve been using the psych package for EFA and then lavaan for CFA, just to compare results. But something’s not adding up.
I know EFA and CFA are different beasts, but shouldn’t they give similar-ish results if I’m using the same factor structure? Here’s what’s bugging me:
- EFA gives me a nice 3-factor structure with all loadings below 1.
- I plug this into CFA, and bam! Standardized loadings over 1 and warnings about negative residual variance.
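Here’s a stripped-down version of what I’m running (the data frame mydata and the item names are placeholders for my actual scale):

```r
library(psych)
library(lavaan)

# EFA: 3 factors, oblique (oblimin) rotation, psych's defaults otherwise
efa_fit <- fa(mydata, nfactors = 3, rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0.3)

# CFA: each item assigned to its single highest-loading EFA factor
cfa_model <- '
  F1 =~ item1 + item2 + item3
  F2 =~ item4 + item5 + item6
  F3 =~ item7 + item8 + item9
'
cfa_fit <- cfa(cfa_model, data = mydata, std.lv = TRUE)
summary(cfa_fit, fit.measures = TRUE, standardized = TRUE)
```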
This is making me nervous about trusting the psych package results. I know that with oblique rotation, pattern loadings can legitimately go over 1 sometimes, so that part alone doesn’t scare me. But how do I figure out whether there’s really a negative residual variance when the EFA looks clean and the CFA complains?
Has anyone run into this before? Any tips on how to make sense of these conflicting results? I’m worried about reporting stuff that might not be solid. Help a confused researcher out!
hey mia, i’ve been there too. efa and cfa can act funny sometimes. try a different rotation in efa and check for outliers. also, standardized loadings >1 can be ok in cfa when factors are correlated, since they’re regression coefficients rather than correlations. keep tryin’, you’ll get there.
I’ve encountered similar issues in my research, and it can definitely be frustrating. The discrepancy you’re seeing between EFA and CFA results isn’t uncommon, especially with complex datasets.
One thing to consider is that EFA and CFA differ in more than just estimation. psych::fa() defaults to minimum residual (minres) extraction, while lavaan’s cfa() estimates with maximum likelihood by default. More importantly, CFA fixes every cross-loading to exactly zero, whereas EFA lets each item load a little on every factor; forcing those small cross-loadings to zero can inflate the remaining loadings and the factor correlations.
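For what it’s worth, one quick sanity check is to refit the EFA with ML so at least the estimator matches lavaan’s default (this assumes a data frame like the mydata in Mia’s sketch above):

```r
library(psych)
# Same 3-factor oblique EFA, but with maximum likelihood to match lavaan's default
efa_ml <- fa(mydata, nfactors = 3, rotate = "oblimin", fm = "ml")
print(efa_ml$loadings, cutoff = 0.3)
```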
For the standardized loadings over 1 in CFA: with correlated factors these are regression coefficients, not correlations, so values slightly above 1 aren’t automatically invalid. Combined with a negative residual variance, though, they usually point to multicollinearity among your factors. It’s worth examining the factor correlations and considering whether your factors are too highly related.
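You can pull the latent correlations straight from lavaan (assuming a fitted object called cfa_fit, as in the sketch above); correlations much above .85 are a common warning sign that two factors aren’t really distinct:

```r
library(lavaan)
# Correlation matrix of the latent factors from the fitted CFA
lavInspect(cfa_fit, "cor.lv")
```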
Regarding negative residual variances (Heywood cases): these can arise for various reasons, such as small sample size, model misspecification, or outliers. A Heywood case isn’t always a deal-breaker, but it does warrant careful investigation.
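To see exactly which indicator is the Heywood case, you can filter the parameter table (again assuming the fitted object is called cfa_fit):

```r
library(lavaan)
pe <- parameterEstimates(cfa_fit)
# Residual variances of observed variables: "~~" rows where lhs == rhs
resid <- subset(pe, op == "~~" & lhs == rhs & lhs %in% lavNames(cfa_fit, "ov"))
resid[resid$est < 0, ]  # any rows here are negative residual variances
```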
My suggestion would be to re-examine your data, check for outliers, and possibly try alternative CFA models. You might also consider using a different rotation method in your EFA to see if that aligns better with your CFA results.
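For the outlier check, a rough multivariate screen with Mahalanobis distance is one place to start (a sketch assuming complete numeric data in mydata):

```r
# Flag cases with unusually large Mahalanobis distances
d2 <- mahalanobis(mydata, colMeans(mydata), cov(mydata))
cutoff <- qchisq(0.999, df = ncol(mydata))
which(d2 > cutoff)  # row indices worth a closer look
```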
Remember, factor analysis is as much an art as it is a science. Don’t be afraid to iterate and refine your approach based on both statistical results and theoretical considerations.
Hey Mia, your situation sounds super intriguing!
I’ve dabbled in factor analysis too, and it can be a real head-scratcher sometimes.
Have you considered the sample size you’re working with? Sometimes smaller samples can lead to these weird discrepancies between EFA and CFA. What’s your N like?
Also, I’m curious about your data - are you dealing with any funky distributions or extreme scores? Those can sometimes throw a wrench in the works.
Oh, and here’s a thought - have you tried running a parallel analysis to confirm that 3-factor structure? It might give you some extra confidence in your EFA results.
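It’s basically a one-liner with psych (assuming the same mydata as above):

```r
library(psych)
# Parallel analysis: compares observed eigenvalues to those from random/resampled data
fa.parallel(mydata, fm = "ml", fa = "fa", n.iter = 100)
```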
Honestly, I think you’re on the right track by questioning these results. It’s way better to be cautious than to publish something iffy, right?
What if you tried a different software package for your CFA? Sometimes a fresh perspective (or algorithm) can shed new light on things.
Keep us posted on what you find out! This kind of puzzle is what makes stats so fun (and frustrating).