Hey folks, I’m stuck with a CFA in R. Every time I run it, I get this warning:
Warning message: In eval(expr, envir, enclos) : Could not compute QR decomposition of Hessian. Optimization probably did not converge.
My model is pretty big: 11 latent variables and lots of covariances between them. When I cut down the number of variables or drop the covariances, it runs fine. But with everything in, it just chokes.
I’ve noticed R eats up a lot of memory when I try to run this. My computer only has 2GB RAM. Could this be why it’s failing? Or am I missing something else?
Any ideas on how to get this working? Maybe there’s a way to make it use less memory? I really need to keep all these variables in my analysis. Help!
Oh wow, Emma_Brave, that’s quite a complex CFA you’re running there! 
I’m not surprised R is struggling with all those variables and covariances. Memory issues can definitely be a culprit, especially with only 2GB of RAM. Have you considered breaking your analysis into smaller chunks? Maybe you could run separate CFAs for groups of related latent variables?
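Just to make the "chunks" idea concrete, here's a rough lavaan sketch - the factor names, item names, and mydata are all placeholders, not your actual model:

library(lavaan)

# Fit groups of related factors together instead of all 11 at once (hypothetical names)
block1 <- '
  f1 =~ x1 + x2 + x3
  f2 =~ x4 + x5 + x6
'
block2 <- '
  f3 =~ x7 + x8 + x9
  f4 =~ x10 + x11 + x12
'
fit1 <- cfa(block1, data = mydata)   # the f1 ~~ f2 covariance is estimated by default
fit2 <- cfa(block2, data = mydata)
summary(fit1, fit.measures = TRUE)
summary(fit2, fit.measures = TRUE)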
Another thought - have you looked into other SEM packages like OpenMx? It sometimes copes better with large models. (lavaan.survey is really aimed at complex survey designs rather than big models, so it probably won't help here.)
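I haven't benchmarked it myself, but a minimal OpenMx CFA looks roughly like this - one hypothetical factor with placeholder variable names, just to show the shape of the code:

library(OpenMx)

manifests <- c("x1", "x2", "x3")   # placeholder indicators
latents   <- c("F1")

cfaModel <- mxModel("oneFactor", type = "RAM",
  manifestVars = manifests,
  latentVars   = latents,
  mxPath(from = "F1", to = manifests, free = TRUE, values = 0.8),  # loadings
  mxPath(from = manifests, arrows = 2, free = TRUE, values = 1),   # residual variances
  mxPath(from = "F1", arrows = 2, free = FALSE, values = 1),       # fix factor variance = 1 for identification
  mxData(observed = cov(mydata[, manifests]), type = "cov", numObs = nrow(mydata)))

fit <- mxRun(cfaModel)
summary(fit)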
Just curious - what kind of data are you working with that requires such an extensive CFA? Sounds like a really interesting project!
Oh, and have you tried increasing R’s memory limit? On Windows you can check and raise it with memory.limit() (note it was removed in R 4.2+), though it won’t help much with only 2GB of physical RAM.
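For reference, that would look something like this on an older Windows build of R:

memory.limit()             # current limit in MB
memory.limit(size = 4000)  # try to raise it to ~4GB (beyond physical RAM it just swaps to disk)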
Keep us posted on how it goes! I’d love to hear if you find a solution. 
Hey Emma, that’s a tough one. Have you tried running it on a more powerful machine? 2GB RAM is pretty low these days - maybe borrow a friend’s laptop or use a uni computer if you can? You could also check out the ‘blavaan’ package - it reuses lavaan’s model syntax with Bayesian estimation, which can sometimes help when ML estimation won’t converge, though don’t expect it to use less memory. Good luck!
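If you do try blavaan, the call is basically a drop-in for lavaan’s cfa() - model and mydata here are placeholders:

library(blavaan)
bfit <- bcfa(model, data = mydata)   # same model syntax as lavaan::cfa(); estimation is Bayesian MCMC
summary(bfit)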
I’ve encountered similar issues with complex CFAs in R. One approach that’s worked for me is using the ‘sem’ package instead of lavaan; in my experience it’s been lighter on memory for some large models.
Another strategy is to simplify your model initially. Start with a basic structure, then gradually add complexity. This can help identify where the memory issues start occurring.
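For example (factor/item names and mydata are placeholders), you could add one factor at a time and watch for the step where the warning reappears:

library(lavaan)

step1 <- '
  f1 =~ x1 + x2 + x3
  f2 =~ x4 + x5 + x6
'
fit1 <- cfa(step1, data = mydata)

# add one more factor and refit; the step where convergence breaks points to the culprit
step2 <- paste(step1, 'f3 =~ x7 + x8 + x9', sep = '\n')
fit2 <- cfa(step2, data = mydata)

summary(fit2, fit.measures = TRUE)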
If you’re set on using lavaan, try a different optimizer via its optim.method option (nlminb is the default); BFGS is sometimes more stable for complex models:
fit <- cfa(model, data = mydata, estimator = 'ML', optim.method = 'BFGS')
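Whichever optimizer you use, it’s worth confirming convergence explicitly before interpreting anything:

lavInspect(fit, "converged")   # FALSE means the warning above still applies
summary(fit, fit.measures = TRUE, standardized = TRUE)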
Lastly, consider using a cloud-based R environment like RStudio Cloud. It offers more computational power without upgrading your hardware.
Remember, model parsimony is crucial. Reassess if all 11 latent variables are truly necessary for your research question.