I’m working on a structural equation model to examine latent variables in a Big Five personality dataset. I’m trying to replicate a study about common method variance inflating correlations between Big Five items.
Here’s a snippet of my model code:
library(lavaan)

bigFive_model <- '
Extraversion =~ E1 + E2 + E3 + E4 + E5 + E6 + E7 + E8
Agreeableness =~ A1 + A2 + A3 + A4 + A5 + A6 + A7 + A8
Neuroticism =~ N1 + N2 + N3 + N4 + N5 + N6 + N7 + N8
Openness =~ O1 + O2 + O3 + O4 + O5 + O6 + O7 + O8
Conscientiousness =~ C1 + C2 + C3 + C4 + C5 + C6 + C7 + C8
MethodFactor =~ E1 + E2 + E3 + E4 + A1 + A2 + A3 + A4 + N1 + N2 + N3 + N4 + O1 + O2 + O3 + O4 + C1 + C2 + C3 + C4
'
bigFive_cfa <- cfa(bigFive_model, data = personality_data, estimator = 'MLR')
I got a warning about the variance-covariance matrix not being positive definite. However, the summary displays good fit statistics. There are also some negative loadings on the method factor.
Should I be concerned about these issues? Can I trust the results, or is there an issue with my model? Any insights would be appreciated!
Hey there! I’ve been following this thread and it’s super interesting. 
I’m curious, have you tried running your model without the method factor? Sometimes adding that extra layer of complexity can throw things off, especially with personality data that’s already pretty interconnected.
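If it helps, here's roughly how I'd set up that comparison (untested sketch reusing your `bigFive_model`, `bigFive_cfa`, and `personality_data` names; `traitOnly_model` is just my naming):

```r
# Trait-only model: same five factors, no method factor
traitOnly_model <- '
Extraversion =~ E1 + E2 + E3 + E4 + E5 + E6 + E7 + E8
Agreeableness =~ A1 + A2 + A3 + A4 + A5 + A6 + A7 + A8
Neuroticism =~ N1 + N2 + N3 + N4 + N5 + N6 + N7 + N8
Openness =~ O1 + O2 + O3 + O4 + O5 + O6 + O7 + O8
Conscientiousness =~ C1 + C2 + C3 + C4 + C5 + C6 + C7 + C8
'
traitOnly_cfa <- cfa(traitOnly_model, data = personality_data, estimator = "MLR")

# The trait-only model is nested in yours (method loadings fixed to zero),
# so a chi-square difference test applies; with MLR, lavaan's anova()
# reports the scaled version automatically.
anova(traitOnly_cfa, bigFive_cfa)
fitMeasures(traitOnly_cfa, c("cfi.scaled", "rmsea.scaled", "srmr"))
```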
Also, I’m wondering about your sample size. In my experience, smaller samples can sometimes lead to funky results with these big models. How many participants did you have?
Oh, and those negative loadings on the method factor - that’s intriguing! Have you considered that maybe some items are behaving differently than expected? It might be worth looking at each item’s correlation with the others to see if any stand out.
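Something like this would let you see both at once (a sketch, assuming your fitted object is still `bigFive_cfa`; `method_items` is my own helper name):

```r
# Standardized loadings on the method factor: look for sign flips
std <- standardizedSolution(bigFive_cfa)
subset(std, lhs == "MethodFactor" & op == "=~")

# Raw intercorrelations among the items loading on the method factor
method_items <- c("E1","E2","E3","E4","A1","A2","A3","A4",
                  "N1","N2","N3","N4","O1","O2","O3","O4",
                  "C1","C2","C3","C4")
round(cor(personality_data[, method_items], use = "pairwise.complete.obs"), 2)
```

One thing worth ruling out: if any of those items are reverse-keyed and haven't been recoded, that alone can produce negative loadings.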
Don’t lose heart though! CFA can be a real puzzle sometimes. Maybe try a more exploratory approach first? Like running an EFA to see how the items naturally cluster before jumping into the confirmatory stuff.
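For the exploratory step, something along these lines (assumes the `psych` package is installed, and that your item columns are named E1-E8, A1-A8, etc. as in your model):

```r
library(psych)

# Grab the 40 Big Five items by name pattern
items <- personality_data[, grep("^[EANOC][1-8]$", names(personality_data))]

# Parallel analysis suggests how many factors to retain
fa.parallel(items, fa = "fa")

# Five-factor EFA with an oblique rotation (traits are allowed to correlate)
efa_fit <- fa(items, nfactors = 5, rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0.30)
```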
Keep us posted on what you find out! I’m really curious to see how this turns out. Good luck! 
hey, i’ve hit this issue before. it’s tricky, but don’t panic! first, check your data for outliers or coding errors; sometimes that’s the culprit. if that’s not it, maybe try simplifying your model? the method factor might be causing issues. good fit stats are nice, but with a non-positive definite matrix you have to be careful. keep digging!
Having dealt with similar issues in SEM, I understand the frustration that comes with getting a warning about a non-positive definite matrix despite promising fit statistics. In my experience, this indicates there could be underlying data quality issues or potential model misspecifications that require careful investigation. I advise you to review your data meticulously for any errors or outliers, and to consider whether the method factor is overly complex or mis-specified. Trying a simpler model without the method factor might help identify the root cause. Revisiting variable correlations could also reveal multicollinearity that contributes to the problem. It’s important to resolve these issues to ensure your results are reliable.
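Two concrete diagnostics along those lines (a sketch, assuming `bigFive_cfa`, `bigFive_model`, and `personality_data` from the original post; `orth_constraints` and `bigFive_cfa_orth` are my own names):

```r
# Which matrix is non-positive definite? Negative eigenvalues point to it.
eigen(lavInspect(bigFive_cfa, "cov.lv"), only.values = TRUE)$values  # latent covariances
eigen(lavInspect(bigFive_cfa, "theta"),  only.values = TRUE)$values  # residual (co)variances

# One possible mis-specification in bifactor-style models: the method factor
# is left free to correlate with the trait factors, which is often empirically
# underidentified. Constraining it orthogonal to the traits may resolve it:
orth_constraints <- '
MethodFactor ~~ 0*Extraversion + 0*Agreeableness + 0*Neuroticism + 0*Openness + 0*Conscientiousness
'
bigFive_cfa_orth <- cfa(paste(bigFive_model, orth_constraints),
                        data = personality_data, estimator = "MLR")
```

If the problem sits in the latent covariance matrix and disappears under the orthogonality constraints, that points to the method factor rather than the data.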