mastodon.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance open to everyone, mainly English- and French-speaking.

#bayesian

Replied in thread

@Posit

It's important to emphasize that "realistic-looking" data does *not* mean "realistic" data – especially high-dimensional data (unfortunately that post doesn't warn against this).

If one had an algorithm that generated realistic data for a given inference problem, it would mean that inference problem was already solved. So: for educational purposes, why not. But for validation-like purposes, use with utmost caution and at your own peril.
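A tiny sketch (my own illustration, not from the thread) of the gap being described: checking that an estimator recovers parameters from data you simulated only validates it against the simulator's own distribution, not against reality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from a model whose parameters we know...
true_mu = 2.0
data = rng.normal(true_mu, 1.0, size=1000)

# ...and "validate" an estimator by checking it recovers them.
mu_hat = data.mean()
assert abs(mu_hat - true_mu) < 0.1

# This only confirms the estimator works on the simulator's
# distribution; real data may differ in ways the simulator never
# encodes, especially in high dimensions.
```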

A Bayesian dog: analysing the canine compass

Do dogs orient themselves by the compass when doing their dirty business? It turns out they do! If you are curious how you can confirm this at home using a compass, Bayesian statistics, and a dog (dog not included), then read on.

habr.com/ru/articles/895332/

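A minimal sketch of how such a home experiment could be analysed. The bearings below are made up, and this simplifies the problem to "aligned with the north–south axis or not" with a Beta prior; the article itself may well use proper circular statistics.

```python
# Hypothetical compass bearings (degrees) of a dog's body axis
# during its "business"; made-up numbers for illustration.
bearings = [10, 355, 171, 185, 12, 95, 178, 8, 182, 350]

def near_ns(b, tol=30):
    """True if bearing b is within tol degrees of the N-S axis."""
    d = min(b % 180, 180 - (b % 180))  # angular distance to the axis
    return d <= tol

k = sum(near_ns(b) for b in bearings)
n = len(bearings)

# Beta(1, 1) prior on the probability of N-S alignment; under pure
# chance that probability would be 60/180 = 1/3.
alpha, beta = 1 + k, 1 + n - k
posterior_mean = alpha / (alpha + beta)
print(f"{k}/{n} aligned; posterior mean = {posterior_mean:.2f}")
```

With these toy numbers, 9 of 10 bearings fall near the axis and the posterior mean is well above the chance value of 1/3.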

Happy Birthday, Laplace! 🎂 🪐 🎓 One of the first to use Bayesian probability theory in the modern way!

"One sees in this essay that the theory of probabilities is basically only common sense reduced to a calculus. It makes one estimate accurately what right-minded people feel by a sort of instinct, often without being able to give a reason for it. It leaves nothing arbitrary in the choice of opinions and of making up one's mind, every time one is able, by this means, to determine the most advantageous choice. Thereby, it becomes the most happy supplement to ignorance and to the weakness of the human mind. If one considers the analytical methods to which this theory has given rise, the truth of the principles that serve as the groundwork, the subtle and delicate logic needed to use them in the solution of the problems, the public-benefit businesses that depend on it, and the extension that it has received and may still receive from its application to the most important questions of natural philosophy and the moral sciences; if one observes also that even in matters which cannot be handled by the calculus, it gives the best rough estimates to guide us in our judgements, and that it teaches us to guard ourselves from the illusions which often mislead us, one will see that there is no science at all more worthy of our consideration, and that it would be a most useful part of the system of public education."

*Philosophical Essay on Probabilities*, 1814 <doi.org/10.1007/978-1-4612-418>

After a long collaboration with @martinbiehl, @mc and @Nathaniel I’m excited to share the first of (hopefully) many outputs:
“A Bayesian Interpretation of the Internal Model Principle”
arxiv.org/abs/2503.00511.

This work combines ideas from control theory, applied #categorytheory and #Bayesian reasoning, with ramifications for #cognitive science, #AI/#ML, #ALife and biology to be further explored in the future.

In these fields we come across ideas of “models”, “internal models”, “world models”, and so on, but formal definitions are hard to find, and when one does find them, they are usually not general enough to cover all the aspects these different fields consider important.

In this work, we focus on two specific definitions of models, and show their connections. One is inspired by work in control theory, and one comes from Bayesian inference/filtering for cognitive science, AI and ALife, and is formalised with Markov categories.

In the first part, we review and reformulate the “internal model principle” from control theory (at least, one of its versions) in a more modern language heavily inspired by categorical systems theory (davidjaz.com/Papers/DynamicalB, github.com/mattecapu/categoric).
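As one concrete instance of the Bayesian filtering the paper formalises with Markov categories, here is a minimal discrete Bayes filter (my own toy example, not taken from the paper): a hidden state evolves through a transition kernel, and an internal belief is updated from noisy observations by a predict–update step.

```python
import numpy as np

# Toy hidden Markov model: 2 states, transition and observation kernels.
T = np.array([[0.9, 0.1],   # P(next state | current state)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # P(observation | state)
              [0.3, 0.7]])

def bayes_filter(belief, obs):
    """One predict-update step of discrete Bayesian filtering."""
    predicted = belief @ T           # push belief through the dynamics
    unnorm = predicted * O[:, obs]   # weight by the likelihood of obs
    return unnorm / unnorm.sum()     # normalise to a posterior

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1]:
    belief = bayes_filter(belief, obs)
print(belief)
```

Each step is a composition of a Markov kernel (prediction) with a Bayesian inversion (update), which is the kind of structure Markov categories make precise.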

A #bayesian blogpost by two of my undergraduate students! It's their report on learning Bayesian modeling by applying it to my lab's data.
alexholcombe.github.io/brms_ps
Summary: we learned to use brms. But had trouble when we added more than one or two factors to the model. Little idea why; haven't had time to tinker much with that.


Today in the December Adventure, we build an intuition around hierarchical Bayes models. Instead of coming in hot with opinionated hyperparameters, we apply a light touch and let the data speak. The domain is Big Ten football, not the Fediverse's most popular subject, but I hope you find the math interesting!

And I'm having a good time learning to plot in Python.

rossabaker.com/notes/december-
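A hedged sketch of the partial-pooling idea the post describes, with made-up team numbers (not real Big Ten data, and not the post's actual model): rather than hand-picking hyperparameters, an empirical-Bayes fit lets the data set the amount of shrinkage toward the league mean.

```python
import numpy as np

# Made-up per-team average point margins (not real Big Ten data).
team_means = np.array([12.0, -3.0, 7.0, -10.0, 1.0])
n_games = 12                   # games observed per team
obs_var = 14.0**2 / n_games    # sampling variance of each team mean

# Empirical-Bayes hyperparameters: estimated from the data instead
# of chosen by hand.
mu = team_means.mean()
tau2 = max(team_means.var() - obs_var, 1e-6)  # between-team variance

# Partial pooling: each estimate shrinks toward the league mean,
# more strongly when the between-team variance is small.
shrink = tau2 / (tau2 + obs_var)
pooled = mu + shrink * (team_means - mu)
print(pooled)
```

Extreme teams get pulled toward the middle, which is exactly the "light touch" a hierarchical prior provides.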


I got an email from the author promoting this benchmark comparison of #Julialang + StanBlocks + #Enzyme vs #Stan runtimes.

StanBlocks is a macro package for Julia that mimics the structure of a Stan program. This is the first I've heard about it.

A considerable number of these models, maybe even most, run faster in Julia than in Stan.

nsiccha.github.io/StanBlocks.j


A very interesting workshop on "Hierarchical models in preclinical research" finished today in Göttingen. This was a joint undertaking of the IBS-DR working groups "Non-clinical statistics" and "Bayes Methods", and included an extensive tutorial on #brms by Sebastian Weber and Lukas Widmer. Some of the material is available on the meeting website:

biometrische-gesellschaft.de/a