CFA in R: QR decomposition of Hessian could not be computed (optimization convergence problem)

I’m working on a Confirmatory Factor Analysis (CFA) in R. The initial model worked fine, but I wanted to improve its fit. I removed the item with the lowest R² (External7), which seemed to help. However, when I tried to remove the next lowest item (Negative2), I encountered an error indicating that the QR decomposition of the Hessian couldn’t be computed and that optimization probably did not converge.

Here’s a simplified version of my code:

library(lavaan)

# Reproducible stand-in for my data: 10 items, 100 observations
set.seed(123)
data_matrix <- as.data.frame(matrix(rnorm(1000), ncol = 10))
colnames(data_matrix) <- paste0("V", 1:10)

# First run - works fine
model1 <- 'F1 =~ V1 + V2 + V3 + V4
           F2 =~ V5 + V6 + V7 + V8'
result1 <- cfa(model1, data = data_matrix)

# Remove V4 (lowest R²) - still works
model2 <- 'F1 =~ V1 + V2 + V3
           F2 =~ V5 + V6 + V7 + V8'
result2 <- cfa(model2, data = data_matrix)

# Remove V8 (next lowest R²) - fails
model3 <- 'F1 =~ V1 + V2 + V3
           F2 =~ V5 + V6 + V7'
result3 <- cfa(model3, data = data_matrix)  # warning: could not compute
                                            # QR decomposition of Hessian

Any ideas on why this might be happening or how to resolve the issue?

hey ryan, i’ve seen this issue before. the model can get unstable when you drop too many items. try checking for multicollinearity, adjusting the iteration limit or convergence criteria, or using a different estimator. if that doesn’t work, maybe rethink your model structure. good luck!
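To make the multicollinearity check concrete, here is a minimal base-R sketch: it looks at the eigenvalues of the correlation matrix of the remaining items. The simulated `items` matrix and the column names are placeholders for your real data; a near-zero smallest eigenvalue (or a huge condition number) means the item covariance matrix is close to singular, which is exactly the situation where the Hessian-based standard-error step can fail.

```r
# Hypothetical stand-in for the 7 items left after dropping V4
set.seed(1)
items <- matrix(rnorm(700), ncol = 7)
colnames(items) <- c("V1", "V2", "V3", "V5", "V6", "V7", "V8")

# Eigenvalues of the item correlation matrix
R <- cor(items)
ev <- eigen(R, symmetric = TRUE, only.values = TRUE)$values

# A near-zero minimum eigenvalue or a very large condition number
# signals near-multicollinearity among the items
min_eigenvalue   <- min(ev)
condition_number <- max(ev) / min(ev)
min_eigenvalue
condition_number
```

With well-behaved items both numbers are unremarkable; if `min_eigenvalue` is close to machine precision, two or more items are nearly redundant and the model is likely empirically underidentified.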

Hey there Ryan_Courageous! :face_with_monocle:

Ooh, CFA can be tricky sometimes, right? I’m super curious about your model now. Have you considered that removing items might be changing the overall structure of your factors?

What if we approach this from a different angle? Instead of just removing items, maybe we could look at modification indices or try some exploratory factor analysis first? It might give us some insights into why the model’s getting finicky.
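For the EFA side of that suggestion, base R’s `factanal()` (from the stats package) gives a quick first look at how the items actually group. The simulated matrix below is just a stand-in for the real item data; the point is the loading pattern, which can reveal whether a factor is left with too few strong indicators after pruning.

```r
# Hypothetical stand-in for the original 8 items
set.seed(42)
x <- matrix(rnorm(800), ncol = 8)
colnames(x) <- paste0("V", 1:8)

# Maximum-likelihood EFA with 2 factors, varimax rotation
efa <- factanal(x, factors = 2, rotation = "varimax")

# Suppress small loadings to make the pattern easier to read
print(efa$loadings, cutoff = 0.3)
```

If the EFA shows that the surviving items for one factor load weakly or cross-load, that factor may be empirically underidentified once more items are removed.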

Also, I’m wondering - how’s your sample size looking? Sometimes these issues pop up when we don’t have enough data to support the complexity of our model.
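The sample-size question can be made concrete with a quick parameter count. Assuming the usual lavaan-style scaling (first loading per factor fixed to 1), the 7-item, 2-factor model from the question has the free-parameter count sketched below; the 10:1 observations-per-parameter figure is only a common rule of thumb, not a hard requirement.

```r
# Free parameters for a 2-factor CFA with 7 indicators,
# first loading per factor fixed to 1 for scaling
p <- 7                            # indicators after dropping V4
n_loadings  <- p - 2              # free loadings (one fixed per factor)
n_residuals <- p                  # residual variances
n_factor    <- 2 + 1              # 2 factor variances + 1 covariance
free_params <- n_loadings + n_residuals + n_factor   # 15

sample_moments <- p * (p + 1) / 2                    # 28 unique moments
df <- sample_moments - free_params                   # 13 df: identified

N <- 100                          # rows in the example data_matrix
N / free_params                   # under the common 10:1 heuristic
```

So the example model is identified on paper, but with ~100 observations the ratio of cases to free parameters is thin, which is consistent with the convergence trouble appearing only after the model is pruned.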

Oh, and here’s a wild thought - what if we tried a Bayesian approach? It might handle the uncertainty better, especially if we’re dealing with a complex model.

What do you think? I’m really curious to hear more about your research and what you’re trying to model. Maybe chatting about it could spark some new ideas?