# Bayesian SEM (BSEM) Application and Example

Sounds good, right?

Bayesian priors allow the cross-loadings and residual covariances of SEMs to vary a small degree (i.e., exact zeros are replaced with approximate zeros via informative, small-variance priors) and to be evaluated (see Asparouhov, Muthén, & Morin, 2015; Muthén & Asparouhov, 2012). The researcher can thereby discover whether the zero cross-loadings of the hypothesized model likely hold in the “true” population model, given the data, and refine the model accordingly.

I recently had the opportunity to apply this technique to a bi-factor model for a new scale that had previously only been subjected to a traditional CFA. The model had strong theoretical support, but fit indices and some loadings were not supportive of the general factor as theorized. These conditions provide a perfect opportunity to use Bayesian CFA (BCFA) to refine the model for cross-validation.

MPlus 7 was used to specify and test a model in which cross-loadings were assigned normally distributed priors with means of 0 and variances of 0.01. The technique is not difficult for those familiar with CFA in MPlus. Working from a typical CFA, the researcher need only:

- Specify “Bayes” as your estimator (shown with option recommendations):

```
ANALYSIS:
  ESTIMATOR = BAYES;    !Uses two independent MCMC chains by default
  PROCESSORS = 2;       !To speed up computations if you have 2 processors
  FBITERATIONS = 15000; !Fixed number of Markov chain iterations
```

- Include all cross-loadings for a factor below the actual loadings (in our case, the specific factors in a bi-factor model).
- Thus, for a data set with 10 items (y1-y10) and two factors (f1 and f2), with y1-y3 loading on f1 and y4-y10 on f2, go from:

```
f1 BY y1-y3;
f2 BY y4-y10;
```

to

```
f1 BY y1-y3
      y4-y10; !Cross-loadings
f2 BY y4-y10
      y1-y3;  !Cross-loadings
```

- Next, assign labels to the cross-loadings so a Bayesian prior can be specified for each. To do so, expand the syntax with cross-loadings as follows (cross-loadings can be labeled however desired; I use “xlam” to refer to the lambdas [loadings] of the cross-loadings [“x” for cross]):

```
f1 BY y1-y3
      y4-y10 (f1xlam1-f1xlam7); !Cross-loadings (with assigned labels)
f2 BY y4-y10
      y1-y3 (f2xlam1-f2xlam3);  !Cross-loadings (with assigned labels)
```

(Note that in MPlus, labels in parentheses apply to the parameters on the same line, so the cross-loadings and their labels go on their own line.)

- Finally, model priors are specified by defining all the cross-loadings’ distribution, mean, and variance. A statement is simply added at the end of the “Model” command as follows (for cross-loading priors normally distributed with mean 0 and variances of 0.01):

```
MODEL PRIORS:
  f1xlam1-f2xlam3 ~ N(0, 0.01); !You can list across factors if the labels are ordered
```
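Putting the steps together, a complete input file for the hypothetical 10-item, two-factor example might look like the sketch below (the data file name is a placeholder, and the OUTPUT and PLOT options are illustrative additions for monitoring convergence and viewing prior/posterior distributions):

```
TITLE: BCFA with small-variance priors on cross-loadings (sketch);
DATA: FILE = mydata.dat; !Hypothetical data file
VARIABLE: NAMES = y1-y10;
ANALYSIS:
  ESTIMATOR = BAYES;
  PROCESSORS = 2;
  FBITERATIONS = 15000;
MODEL:
  f1 BY y1-y3
        y4-y10 (f1xlam1-f1xlam7); !Cross-loadings (labeled)
  f2 BY y4-y10
        y1-y3 (f2xlam1-f2xlam3);  !Cross-loadings (labeled)
MODEL PRIORS:
  f1xlam1-f2xlam3 ~ N(0, 0.01); !Small-variance priors
OUTPUT: TECH8;      !Iteration history for checking MCMC convergence
PLOT: TYPE = PLOT2; !Enables the prior/posterior distribution plots
```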

Naturally, identification and metric setting need to be addressed first (for example, MPlus fixes the first loading of a factor to 1 by default, but you may want to free that loading and instead fix the variance of each factor to 1). Still, hopefully this illustrates the ease with which cross-loadings can be assigned Bayesian priors. For those interested in assigning prior distributions to residual covariances, the concept is very similar; a full example is given in the Asparouhov, Muthén, and Morin (2015) article.
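For instance, to free the first loading of each factor and fix the factor variances to 1 instead, the model for the hypothetical y1-y10 example could be written as follows (a sketch; the asterisk frees a parameter that MPlus fixes by default):

```
f1 BY y1* y2-y3
      y4-y10 (f1xlam1-f1xlam7); !y1 loading freed
f2 BY y4* y5-y10
      y1-y3 (f2xlam1-f2xlam3);  !y4 loading freed
f1-f2@1; !Factor variances fixed to 1 to set the metric
```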

Now for the beauty (i.e., the application) of BCFA. Cross-loadings were assigned a normal prior distribution with mean 0 and variance 0.01. Were the actual loadings 0, given our data? The prior and posterior distributions for each loading can be viewed to check (choose “Plot” > “View Plots” in MPlus), or the credibility intervals for the cross-loadings can be examined in the MPlus output. Below is an example of a cross-loading for an item I analyzed:

First, examine the prior distribution (made up of 15,000 iterations):

This looks right: the mean is essentially 0 and the variance (standard deviation squared) is 0.01, as specified.

Now, did the distribution change given our data? Check the posterior distribution to find out (or the credibility intervals for the cross-loadings in the MPlus output):

It appears the distribution did change: given our data, the 95% credibility interval for the posterior distribution does not include 0. This suggests that this particular cross-loading should be freed.

This simple illustration is just one way BCFA (and BSEM) adds value for scale and model development. I refer you to the linked articles for more. Let me know if you’ve had a chance to apply the technique and what you learned.

**References**

Asparouhov, T., Muthén, B., & Morin, A. J. S. (2015). Bayesian structural equation modeling with cross-loadings and residual covariances: Comments on Stromeyer et al. *Journal of Management*. https://doi.org/10.1177/0149206315591075

Muthén, B., & Asparouhov, T. (2012). Bayesian structural equation modeling: A more flexible representation of substantive theory. *Psychological Methods*, *17*(3), 313–335. https://doi.org/10.1037/a0026802
