Comparing CFA Models with DWLS Estimation in R: lavaan Package

Need help comparing CFA models using DWLS in R

I’m trying to figure out which CFA model works best for my data. I’m using the DWLS estimator in the lavaan package for ordinal data. I’ve set up two models:

  1. A 4-factor model (let’s call it modelA)
  2. A 2-factor model (let’s call it modelB)

Here’s a snippet of my code:

library(lavaan)

# Declaring the items as ordered triggers DWLS-based estimation;
# estimator = "DWLS" makes that explicit (the default with ordered data is WLSMV,
# i.e. DWLS with robust corrections)
modelA <- cfa(fourFactor, data = myData, estimator = "DWLS", ordered = c(
  "V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8",
  "V9", "V10", "V11", "V12", "V13", "V14", "V15", "V16"))

summary(modelA, fit.measures = TRUE)
When I try to compare these models using the anova function, I get an error:

anova(modelA, modelB)

The error says something about unconstrained parameter sets not being the same in the models.

Interestingly, when I use Maximum Likelihood estimation instead of DWLS, I can compare the models just fine. The ML output includes AIC and BIC values, which are missing in the DWLS output.

Any ideas on how to properly compare these DWLS-estimated CFA models? Thanks!

Hey Ava_Books! 🙂 Comparing CFA models with DWLS can be tricky, right? I’ve run into similar issues before.

Have you considered using the lavaan.survey package? It plays nice with DWLS and might give you more options for model comparison.

Another thought - what about looking at the CFI, TLI, and RMSEA values for each model separately? Sometimes that can give you a good idea of fit without formal comparison.

I’m curious, what made you choose DWLS over ML in the first place? Are your variables pretty skewed?

Oh, and have you tried the semTools package? It has some neat functions for comparing non-nested models that might work with DWLS.

Let me know if any of that helps or if you want to bounce around more ideas!

I’ve encountered this issue with DWLS estimation in lavaan as well. anova() on lavaan objects runs a chi-square difference test (via lavTestLRT), which requires the models to be nested — that’s what the error about parameter sets is telling you. On top of that, DWLS is not a likelihood-based estimator, so there is no likelihood to compare and no AIC or BIC, which is why those values show up only in your ML output.

For comparing DWLS models, I’d recommend focusing on fit indices like CFI, TLI, RMSEA, and SRMR. You can extract these from each model and compare them side by side. Conventional cutoffs are CFI/TLI above roughly .95 and RMSEA/SRMR below roughly .08; with DWLS/WLSMV fits, pay attention to the robust (“scaled”) versions lavaan reports alongside the plain ones.
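A minimal sketch of that manual comparison, assuming modelA and modelB are the fitted lavaan objects from your code above:

```r
library(lavaan)

# Pull the same indices from both fitted models and line them up.
# For DWLS/WLSMV fits you can also request scaled versions, e.g. "cfi.scaled".
indices <- c("cfi", "tli", "rmsea", "srmr")
round(rbind(modelA = fitMeasures(modelA, indices),
            modelB = fitMeasures(modelB, indices)), 3)
```

The rbind() trick just prints a two-row table so you can eyeball which model does better on each index.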

Another approach is to use the semTools package, specifically the compareFit() function. It can handle DWLS models and provides a table of fit indices for easy comparison.

If you need formal statistical tests, consider bootstrapping confidence intervals for the fit indices. This can give you a more robust comparison between your models without relying on likelihood-based methods.
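One way to sketch that with lavaan’s built-in bootstrapLavaan(), shown for one model — R = 500 is an arbitrary choice, and with ordinal data expect this to be slow and some replications to fail to converge:

```r
library(lavaan)

# Resample the data and recompute CFI and RMSEA on each bootstrap fit
boot_fit <- bootstrapLavaan(modelA, R = 500,
                            FUN = fitMeasures,
                            fit.measures = c("cfi", "rmsea"))

# 95% percentile intervals for each index (drop failed replications)
apply(boot_fit, 2, quantile, probs = c(0.025, 0.975), na.rm = TRUE)
```

Non-overlapping intervals for the two models would be reasonably strong evidence of a real difference in fit.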