In a sense, yes, you are making a lot of things simpler by being Bayesian in this particular case. If you manage (given the prior and the likelihood) to sample the posterior, then inference based on the posterior samples is simple. If everything worked well, it all ends up being pretty straightforward. It's a frequent (no pun intended) occurrence that frequentist procedures for certain situations, such as repeated measures, multiple-rater multiple-case studies, small-sample confidence intervals, or inference with zero counts, get really complicated and need very case-specific solutions. In contrast, Bayesian solutions tend to look very similar across many different settings: just "write down the likelihood, specify a prior, get the posterior".
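To make that recipe concrete for one of the awkward cases above, here is a toy Stan sketch of binomial inference when the observed count may be zero (the flat Beta(1, 1) prior and the specific data below are purely illustrative choices, not the only reasonable ones):

```stan
// Toy model: binomial success probability with a flat Beta(1, 1) prior.
data {
  int<lower=0> n;                // number of trials
  int<lower=0, upper=n> y;       // number of successes (may be zero)
}
parameters {
  real<lower=0, upper=1> theta;  // success probability
}
model {
  theta ~ beta(1, 1);            // prior
  y ~ binomial(n, theta);        // likelihood
}
```

With y = 0 and n = 20 the posterior is a proper Beta(1, 21), so an interval for theta falls straight out of the posterior draws with no special-case construction (whereas, say, the Wald interval degenerates to [0, 0] at a zero count).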
Of course, it's not quite that simple. First, you need to pick priors in some way (with flat improper priors you need to check that the posterior even exists, and flat proper priors can have poor properties, too). Then, you need to be sure the typical MCMC machinery actually got you "good" samples of the posterior. Using NUTS via Stan, or something similar, nowadays comes with better diagnostics for this than we used to have. Still, the issues it flags can be hard to resolve, and there can still be cases where you don't notice issues at all. There are various well-known tricks that can help (e.g. centered vs. non-centered parameterization for hierarchical models, mapping things like a simplex to an unconstrained parameter space in a way that behaves well, dealing with discrete parameters by, say, integrating them out; two of these are sketched below), which you might argue are analogous to the complicated special-case frequentist procedures.
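For instance, the centered vs. non-centered trick is just a small rewrite of the model. Below is a generic Stan sketch of the non-centered form for a hierarchical normal model (the data layout and the weakly informative priors are illustrative assumptions): sampling standardized offsets and scaling them breaks the funnel-shaped dependence between the group-level scale and the group effects that NUTS otherwise struggles with when that scale is small.

```stan
// Non-centered parameterization of a hierarchical normal model.
// The centered form would declare theta[j] ~ normal(mu, tau) directly;
// here we sample standardized offsets and scale them instead.
data {
  int<lower=1> J;              // number of groups
  vector[J] y;                 // one observation per group
  vector<lower=0>[J] sigma;    // known observation noise per group
}
parameters {
  real mu;                     // population mean
  real<lower=0> tau;           // population scale
  vector[J] theta_raw;         // standardized group offsets
}
transformed parameters {
  vector[J] theta = mu + tau * theta_raw;  // implies theta ~ normal(mu, tau)
}
model {
  mu ~ normal(0, 5);           // weakly informative priors (illustrative)
  tau ~ normal(0, 5);          // half-normal via the lower=0 constraint
  theta_raw ~ std_normal();    // the non-centered part
  y ~ normal(theta, sigma);    // likelihood
}
```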
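Likewise, "integrating them out" for discrete parameters means summing the discrete cases out of the likelihood, since Stan's HMC only handles continuous parameters. Here is a minimal sketch for a two-component normal mixture with the per-observation component indicators marginalized via log_mix (the priors are again illustrative):

```stan
// Two-component normal mixture with the discrete component
// indicators summed out of the likelihood via log_mix.
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real<lower=0, upper=1> lambda;  // mixing proportion
  ordered[2] mu;                  // component means (ordered for identifiability)
  vector<lower=0>[2] sigma;       // component scales
}
model {
  lambda ~ beta(2, 2);            // illustrative priors
  mu ~ normal(0, 10);
  sigma ~ normal(0, 5);
  for (i in 1:N)
    target += log_mix(lambda,
                      normal_lpdf(y[i] | mu[1], sigma[1]),
                      normal_lpdf(y[i] | mu[2], sigma[2]));
}
```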