Discrepancies between EFA and CFA results in R: A cause for concern?

I’ve been working with the ‘psych’ package in R for Exploratory Factor Analysis (EFA). Out of curiosity, I decided to run a Confirmatory Factor Analysis (CFA) in lavaan based on the EFA results. But now I’m worried about the ‘psych’ package output.

I know EFA and CFA are different tools, but shouldn’t they give similar results if I use the same factor structure? Here’s what’s bugging me:

Sometimes I get a clean 3-factor structure from EFA with all loadings under 1. But when I plug that same structure into CFA, I get standardized loadings over 1 and warnings about negative residual variances, which makes the estimates seem unreliable.
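Here’s a stripped-down version of what I’m running (the data frame `df` and the item names are placeholders for my real data):

```r
library(psych)
library(lavaan)

# EFA: 3 factors, oblique rotation
efa_fit <- fa(df, nfactors = 3, rotate = "oblimin", fm = "minres")
print(efa_fit$loadings, cutoff = 0.3)

# CFA: each item assigned to its dominant EFA factor
# (the assignment below is illustrative; I use my actual pattern)
cfa_model <- '
  F1 =~ item1 + item2 + item3
  F2 =~ item4 + item5 + item6
  F3 =~ item7 + item8 + item9
'
cfa_fit <- cfa(cfa_model, data = df, std.lv = TRUE)
summary(cfa_fit, standardized = TRUE, fit.measures = TRUE)
```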

This makes me nervous about using the ‘psych’ results in my report. I’m using an oblique rotation, and I’ve read that oblique pattern loadings can legitimately exceed 1 as long as the residual variances stay positive; it’s negative residual variances (a Heywood case) that signal trouble. But how do I know what’s right when EFA says one thing and CFA says another?
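And this is how I’ve been checking both sides, reusing the fits from the snippet above:

```r
# EFA side: communalities should stay below 1 (uniquenesses above 0);
# if they do, oblique pattern loadings over 1 are not by themselves an error
efa_fit$communality
efa_fit$uniquenesses

# CFA side: list any negative residual variances (the Heywood cases)
pe <- parameterEstimates(cfa_fit)
subset(pe, op == "~~" & lhs == rhs & est < 0)
```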

Has anyone else run into this issue? Any thoughts on what might be causing these differences?

I’ve encountered similar issues in my own research. The discrepancy between EFA and CFA results can be concerning, but it is not uncommon. One structural reason: EFA lets every item load on every factor, while a typical CFA fixes all cross-loadings to zero. If your items have even modest cross-loadings in the EFA, forcing them to zero in the CFA pushes that variance into the remaining loadings, which is a common route to standardized loadings over 1 and negative residual variances. Differences are also amplified when sample sizes are small, estimation methods differ, or measurement error is large. It may be wise to review your data for outliers or non-normality, try alternative estimation methods, and consult the psychometrics literature or a specialist. While consistency is ideal, some divergence is expected given the two models’ different constraints.
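For example, a quick sensitivity check along those lines might look like this (a sketch reusing `df` and `cfa_model` from the original post; `mardia()` and `outlier()` are helpers in ‘psych’, and `MLR` is lavaan’s robust maximum likelihood estimator):

```r
# Screen for multivariate non-normality and outliers
mardia(df)         # Mardia's multivariate skewness and kurtosis tests
d2 <- outlier(df)  # Mahalanobis distances, plotted against chi-square quantiles

# Refit the CFA with a robust estimator and compare the loadings
cfa_robust <- cfa(cfa_model, data = df, std.lv = TRUE, estimator = "MLR")
summary(cfa_robust, standardized = TRUE)
```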

Hey Ethan85, that’s a really interesting problem you’ve run into! :open_mouth: I can totally see why you’re feeling a bit nervous about the discrepancies.

I’m curious, have you tried running the EFA with different rotation methods? Sometimes switching between oblique rotations like promax and oblimin can give slightly different results. It might be worth experimenting to see if that brings your EFA and CFA results closer together.
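Something like this would let you compare the rotations directly (a sketch reusing `df` from your post; `factor.congruence()` in ‘psych’ gives Tucker’s congruence coefficients between the two solutions):

```r
# Same 3-factor extraction, two different oblique rotations
efa_oblimin <- fa(df, nfactors = 3, rotate = "oblimin", fm = "minres")
efa_promax  <- fa(df, nfactors = 3, rotate = "promax",  fm = "minres")

# Values near 1 on the diagonal mean the solutions basically agree
factor.congruence(efa_oblimin, efa_promax)
```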

Also, I wonder if sample size could be playing a role here? CFA tends to be more sensitive to smaller samples. How many participants are you working with?

It’s great that you’re double-checking your results like this. Have you considered reaching out to your stats department or maybe posting on CrossValidated? They might have some more specific insights into the ‘psych’ package quirks.

Keep us posted on what you find out! I’m really interested to hear if you manage to reconcile the EFA and CFA results. Good luck with your analysis! :four_leaf_clover:

Yo Ethan, I’ve seen this before too. EFA and CFA can be tricky beasts. Have you tried adjusting your model specification in CFA? Sometimes adding or removing certain paths helps, or you can constrain the problem residual variance directly, see the sketch below. Also check your sample size: CFA usually needs more data. Don’t stress too much though, some difference is normal. Maybe chat with your supervisor or a stats guru for more specific advice?
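e.g. something like this (just a sketch reusing the model from the original post; `item7` is hypothetical, swap in whichever indicator goes negative):

```r
# Option 1: fix the offending residual variance to a small positive value
cfa_model_fixed <- '
  F1 =~ item1 + item2 + item3
  F2 =~ item4 + item5 + item6
  F3 =~ item7 + item8 + item9
  item7 ~~ 0.05*item7
'
cfa_fit_fixed <- cfa(cfa_model_fixed, data = df, std.lv = TRUE)

# Option 2: recent lavaan versions can bound variances at zero during estimation
cfa_fit_bounded <- cfa(cfa_model, data = df, std.lv = TRUE, bounds = "standard")
```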