#bayesian


@highergeometer There's a theorem in probability theory that says that if you have the probability P(H|E1) of some hypothesis H given evidence E1, and the probability P(H|E2) of the same hypothesis given some other evidence E2, then the probability of the hypothesis given the conjunction ("and") of the two pieces of evidence can be anything between 0 and 1, inclusive:

P(H | E1 ∧ E2) ∈ [0, 1]

In other words, P(H|E1) and P(H|E2) alone don't determine it; more details must be given.

See T. Hailperin: "Probability logic and combining evidence" <doi.org/10.1080/01445340600616>.
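
As a small toy illustration of this indeterminacy (numbers of my own, not taken from Hailperin's paper), here are two joint distributions over (H, E1, E2) that agree on P(H|E1) and P(H|E2) but give different values for P(H | E1 ∧ E2):

```
# Toy numbers of my own, not from the cited paper.
lik <- function(e, h) ifelse(e == h, 0.8, 0.2)      # P(E = e | H = h)

grid <- expand.grid(H = 0:1, E1 = 0:1, E2 = 0:1)    # all 8 atoms

# A: E1 and E2 conditionally independent given H
pA <- with(grid, 0.5 * lik(E1, H) * lik(E2, H))
# B: E2 is an exact copy of E1, i.e. fully redundant evidence
pB <- with(grid, ifelse(E1 == E2, 0.5 * lik(E1, H), 0))

condH <- function(p, rows) sum(p[rows & grid$H == 1]) / sum(p[rows])

for (p in list(pA, pB)) cat(
  condH(p, grid$E1 == 1),                  # P(H|E1): 0.8 for both
  condH(p, grid$E2 == 1),                  # P(H|E2): 0.8 for both
  condH(p, grid$E1 == 1 & grid$E2 == 1),   # P(H|E1,E2): ~0.94 vs 0.8
  "\n")
```

Other dependence structures between E1 and E2 can push P(H | E1 ∧ E2) anywhere in [0, 1], which is the content of the theorem.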

In the present case it depends on the relationship between the two agents. If you give me your rolls and I trust them, I might add them to mine and update my probability for the next roll. Or I might not fully trust you, and therefore use them only conditional on some sentence T expressing that what you report is true; the result is that your rolls would be weighted less than mine. It also depends on my prior over the possible long-run frequency distributions. For instance, if my prior isn't exactly exchangeable, I might increase the probability of a "changepoint" or a different rolling technique between your rolls and mine (the way of rolling or tossing can give much more bias than the distribution of weight in a die or coin); see <doi.org/10.48550/arXiv.0710.37>. The possibilities are endless.

It seems unlikely to me that a product formula like the one you give would come up in some situation, but right now I can't exclude it.

I have an extremely naive question about probability. Suppose I have a possibly biased d6 and I roll it a bunch of times, and get a sample pmf
\[p_1\colon \underline{6}=\{1,2,3,4,5,6\} \to [0,1].\]
Then I give it to you and you do the same to get \(p_2\) based purely on your rolls. Then in a fit of madness we calculate the function \(p\colon \underline{6}\to [0,1]\) defined by
\[ p(x) = \frac{p_1(x)p_2(x)}{\sum_y p_1(y)p_2(y)},\]
which is also a pmf.
Does this represent anything? Something like a pooling of our information? A kind of update procedure? Something that violates dimensional analysis? Is it just junk?

A more sophisticated analysis might do something like assume we have a true distribution (remember: possibly biased) and see if the pmf \(p\) is "better" or not compared with \(p_1\) and \(p_2\). A more computational person might even simulate this. But what if we didn't know a true distribution here? Then it's more or less just multiplying a pair of what amount to priors on the same sample space. If we lost or forgot the data of the actual number of rolls, and only remembered the pmfs, we wouldn't be able to correctly combine our rolling experiments to get a single empirical pmf to compare the individual ones.
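
If someone wants to play with this numerically, here is a quick sketch (my own simulation setup, nothing canonical): it compares the normalized product p with the pmf obtained by pooling the raw counts, which indeed requires remembering the sample sizes, not just the two pmfs.

```
# Quick simulation sketch; the "true" pmf and sample sizes are arbitrary choices.
set.seed(1)
truth <- c(0.10, 0.10, 0.15, 0.15, 0.20, 0.30)    # a biased d6
n1 <- 50; n2 <- 500

pmf <- function(rolls) tabulate(rolls, nbins = 6) / length(rolls)
p1 <- pmf(sample(1:6, n1, replace = TRUE, prob = truth))
p2 <- pmf(sample(1:6, n2, replace = TRUE, prob = truth))

# normalized product, as in the question
# (note: any face unseen in either sample gets probability 0 here)
p_prod <- p1 * p2 / sum(p1 * p2)

# pooling the underlying counts, which needs n1 and n2, not just the pmfs
p_pool <- (n1 * p1 + n2 * p2) / (n1 + n2)

round(rbind(truth, p1, p2, p_prod, p_pool), 3)
```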

Version 0.3.1 of *inferno*, the R package for Bayesian nonparametric inference, is out!

<pglpm.github.io/inferno/>

This version brings the following improvements and new functions:

- Possibility of calculating the posterior probability of value ranges, such as Pr(Y ≤ y), besides point values such as Pr(Y = y). Also for subpopulations.
- New function to generate posterior samples for any set of variates. Also for subpopulations.
- Improved calculation of mutual information between variates.

I'd like to remind you that this package is especially suited to researchers with a frequentist background who'd like to try out Bayesian nonparametrics. The introductory vignette <pglpm.github.io/inferno/articl> provides a simple and intuitive guide to the ideas, functions, and calculations, with a concrete example. The package provides many useful tools and functions for subgroup/subpopulation studies.

The package is also suited to Bayesian researchers who'd like to do nonparametric analysis without worrying too much about the Monte Carlo coding and calculations that it often involves.

Feedback and questions much appreciated!

pglpm.github.io · Inference in R with Bayesian nonparametrics: Functions for Bayesian nonparametric population inference (also called exchangeable inference, or density inference). From a machine-learning perspective, they offer a model-free, uncertainty-quantified prediction algorithm.

Dear R community, I'd like to poll your opinions and ideas about the arguments of a possible R function:

Suppose you're working with the variates of some population; for instance the variates `species`, `island`, `bill_len`, `bill_dep`, `body_mass`, etc. of the `penguins` dataset <cran.r-project.org/package=bas>.

Suppose there's a package that allows you to calculate conditional probabilities of single or joint variates; for example

Pr( bill_len > 40, species = 'Adelie'  |  bill_dep < 16, body_mass = 4200)

and note in particular that this probability refers to intervals/tails ("bill_len > 40") as well as to point-values ("body_mass = 4200").

In fact the crucial point here is that with this function you can inquire about the probability of a point value, "=", or about a cumulative probability, ">" or "<", or mixtures thereof, as you please.

Now what would be the "best" way to specify this kind of choice as a function argument? Say you have the following two options:

**A: indicate the request of a cumulative probability in the variate name:**

```
Pr(
Y = list('bill_len>' = 40, species = 'Adelie'),
X = list('bill_dep<' = 16, body_mass = 4200)
)
```

**B: indicate the request of a cumulative probability in a separate function argument:**

```
Pr(
Y = list(bill_len = 40, species = 'Adelie'),
X = list(bill_dep = 16, body_mass = 4200),
tails = list(bill_len = '>', bill_dep = '<') # or +1, -1 instead of '>', '<'?
)
```
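
For what it's worth, here is a rough sketch of the kind of name parsing that option A implies behind the scenes, which option B gets for free from its separate `tails` argument. The helper `parse_spec` is purely hypothetical and not part of inferno or any other package:

```
# Hypothetical helper, only to illustrate the parsing that option A implies.
parse_spec <- function(spec) {
  nm <- names(spec)
  has_tail <- grepl('[<>]$', nm)               # does the name end in '<' or '>'?
  data.frame(
    variate = sub('[<>]$', '', nm),            # strip the operator from the name
    value   = I(unname(spec)),                 # keep mixed types (numeric/character)
    tail    = ifelse(has_tail, substring(nm, nchar(nm)), '=')
  )
}

# e.g. parse_spec(list('bill_len>' = 40, species = 'Adelie'))
# gives rows (bill_len, 40, '>') and (species, 'Adelie', '=')
```

A possible trade-off: option B keeps the variate names clean, so they can be checked directly against the data's column names, at the cost of repeating some names in `tails`.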

Any other ideas? Feel free to comment :) See <pglpm.github.io/inferno/refere> for a clearer idea about such a function.

Thank you so much for your help!

cran.r-project.org · basepenguins: Convert Files that Use 'palmerpenguins' to Work with 'datasets': From 'R' 4.5.0, the 'datasets' package includes the penguins and penguins_raw data sets popularised in the 'palmerpenguins' package. 'basepenguins' takes files that use the 'palmerpenguins' package and converts them to work with the versions from 'datasets' ('R' >= 4.5.0). It does this by removing calls to library(palmerpenguins) and making the necessary changes to column names. Additionally, it provides helper functions to define new file paths for saving the output and a directory of example files to experiment with.

New on the blog: Using Bayesian tools to be a better frequentist

Turns out that for negative binomial regression with small samples, standard frequentist tools fail to achieve their stated goals. Bayesian computation ends up providing better frequentist guarantees. Not sure this is a general phenomenon, just a specific example.

martinmodrak.cz/2025/07/09/usi

Sunken British superyacht Bayesian is raised from the seabed.

A superyacht that sank off the coast of the Italian island of Sicily last year has been raised from the seabed by a specialist salvage team.

Seven of the 22 people on board died in the sinking, including the vessel's owner, British tech tycoon Mike Lynch, and his 18-year-old daughter.

The cause of the sinking is still under investigation.

mediafaro.org/article/20250620

BBC · Sunken British superyacht Bayesian is raised from the seabed.
#Italy #UK #Bayesian

Interested in trying out *Bayesian nonparametrics* for your statistical research?

I'd be very grateful if people tried out this R package for Bayesian nonparametric population inference, called "inferno":

<pglpm.github.io/inferno/>

It is especially aimed at clinical and medical researchers, and allows for thorough statistical studies of subpopulations or subgroups.

Installation instructions are here: <pglpm.github.io/inferno/index.>.

A step-by-step tutorial, guiding you through an example analysis of a simple dataset, is here: <pglpm.github.io/inferno/articl>.

The package has already been tested and used in concrete research about Alzheimer's Disease, Parkinson's Disease, drug discovery, and applications to machine learning.

Feedback is very welcome. If you find the package useful, feel free to advertise it a little :)


aeon.co/essays/no-schrodingers

This is a pretty good article for showing how confused the interpretation of QM is. It's also a good article for understanding why I personally side with Bohm and Bell in thinking the pilot-wave theory is the one most reasonable to believe, because the pilot-wave theory has the following quality: it is a mapping from initial position at time t=0 to final position at time t=1. It's deterministic, but our knowledge of the initial condition is not.
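
For concreteness, here is the standard textbook form of that statement for a single spinless particle (my addition, not something from the Aeon article): the position \(Q\) follows a deterministic guidance equation driven by the wavefunction, and all the probability enters through the initial position, assumed distributed as \(|\psi|^2\):
\[
\frac{dQ(t)}{dt} = \frac{\hbar}{m}\,\mathrm{Im}\!\left[\frac{\nabla\psi(x,t)}{\psi(x,t)}\right]_{x=Q(t)}, \qquad Q(0) \sim |\psi(x,0)|^2 .
\]
Given \(\psi\) and \(Q(0)\), the whole trajectory is fixed; the probabilities only encode our ignorance of \(Q(0)\).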
#quantum #bohm #bayesian

Aeon · No, Schrödinger's cat is not alive and dead at the same time | Aeon Essays: The weird paradox of Schrödinger's cat has found a lasting popularity. What does it mean for the future of quantum physics?

British experts reveal: the luxury yacht "Bayesian" capsized off Sicily due to extreme winds 🌬️⚓. Despite its "unsinkable" reputation, it was unstable in 130 km/h winds. Seven dead, including owner Mike Lynch. Salvage paused after a fatal accident during diving operations 🤿⚠️. #Bayesian #YachtUnglück #Sizilien #MaritimeSafety #newz

n-tv.de/panorama/Britische-Exp

n-tv NACHRICHTEN · British experts name the reasons for the sinking of the luxury yacht "Bayesian"