I'm running into an issue with my confirmatory factor analysis (CFA) in R using the lavaan package. The warning message says that some estimated observed-variable (ov) variances are negative. My model has 8 first-order factors and 1 second-order factor. Because my sample size is small (N = 78) and my data are non-normal, I opted for the MLR estimator.
Below is a simplified version of the model:
```R
model <- '
Factor1 =~ ItemA + ItemB + ItemC
Factor2 =~ ItemD + ItemE + ItemF
Factor3 =~ ItemG + ItemH + ItemI
SecondLevel =~ Factor1 + Factor2 + Factor3
'
result <- cfa(model, data=dataset, estimator="MLR", missing="FIML")
summary(result, fit.measures=TRUE, standardized=TRUE)
```
I'm looking for guidance on how to determine which variance is negative, and on potential steps to correct the issue. Any insights would be very helpful. Thanks!
hey OwenGalaxy, negative variances are a pain. try running `lavInspect(result, "est")` and checking the `$theta` part (the residual variances) to spot the culprits. sometimes dropping problematic items or merging factors can help. also, with only 78 cases, you might wanna simplify ur model a bit. good luck!
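something like this should flag them (assuming your fitted object is called `result`, as in the question):

```R
# residual (error) variances of the items sit on the diagonal of
# the theta matrix -- any negative entry is a Heywood case
est <- lavInspect(result, "est")
diag(est$theta)

# or filter the parameter table for negative variance estimates
pe <- parameterEstimates(result)
subset(pe, op == "~~" & lhs == rhs & est < 0)
```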
Hey Owen_Galaxy! Negative variances can be such a headache, right?
I’ve been there too with my CFA adventures.
Have you tried playing around with different estimation methods? Sometimes switching from MLR to WLSMV can work wonders, especially with non-normal data. It’s worth a shot!
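If your items are ordinal (e.g. Likert scales), the usual way to get WLSMV in lavaan is to declare them as ordered. A rough sketch, using the item names from your snippet:

```R
# declaring items as ordered makes lavaan default to the WLSMV estimator
result_wlsmv <- cfa(model, data = dataset,
                    ordered = c("ItemA", "ItemB", "ItemC",
                                "ItemD", "ItemE", "ItemF",
                                "ItemG", "ItemH", "ItemI"),
                    estimator = "WLSMV")
```

One caveat: FIML isn't available with WLSMV, so you'd need to handle missing data differently (e.g. `missing = "pairwise"`).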
Also, I’m curious - how did you decide on your factor structure? With a small sample size, maybe there’s room to simplify things a bit? Like, could some of those first-order factors be combined?
Oh, and don’t forget to check your modification indices! They might give you some clues on where the model’s struggling.
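For what it's worth, a quick way to pull out the biggest ones (assuming the fitted object is `result`):

```R
# modification indices, sorted largest first
mi <- modindices(result)
head(mi[order(-mi$mi), ], 10)
```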
Keep us posted on how it goes! CFA can be tricky, but it’s so satisfying when you finally crack it. 
Negative variances in CFA can be challenging, especially when dealing with small sample sizes and non-normal data. In my experience, the first step is to examine the output closely to identify which variances are negative or near zero. This often indicates that the model may be over-specified given the available data.
It might be beneficial to simplify the model, perhaps by fitting each first-order factor separately to verify that each measurement model behaves as expected. Bootstrap standard errors can sometimes mitigate issues with non-normal data. If necessary, constraining the offending variance to be positive (though not ideal) can help stabilize the estimation. Iterative adjustments, informed by both statistical diagnostics and theoretical considerations, are vital in resolving these issues.
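To make those last two suggestions concrete, here is a rough sketch. The choice of `ItemC` as the offending item is purely illustrative; substitute whichever variance your output flags as negative:

```R
library(lavaan)

# bootstrap standard errors; note that se = "bootstrap" pairs with
# plain ML rather than the robust MLR standard errors
result_boot <- cfa(model, data = dataset,
                   estimator = "ML", se = "bootstrap", bootstrap = 1000)

# constrain the offending residual variance to stay positive by
# labelling it and adding an inequality constraint in the syntax
model_con <- '
Factor1 =~ ItemA + ItemB + ItemC
Factor2 =~ ItemD + ItemE + ItemF
Factor3 =~ ItemG + ItemH + ItemI
SecondLevel =~ Factor1 + Factor2 + Factor3
ItemC ~~ v*ItemC
v > 0.001
'
result_con <- cfa(model_con, data = dataset, estimator = "MLR")
```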