The principle of maximum entropy asserts that when trying to determine an unknown probability distribution (for example, the distribution of possible results that occur when you toss a possibly unfair die), you should pick the distribution with maximum entropy consistent with your knowledge.

The goal of this post is to derive the principle of maximum entropy, in the special case of probability distributions over finite sets, from the principle of indifference: assign probability $\frac{1}{n}$ to each of $n$ possible outcomes if you have no additional knowledge. (The slogan in statistical mechanics is “all microstates are equally likely.”) We’ll do this by deriving an arguably more fundamental principle of maximum relative entropy using only Bayes’ theorem.

Suppose you have a set of hypotheses $H_i$ about something, exactly one of which can be true, and some prior probabilities $\mathbb{P}(H_i)$ that these hypotheses are true (which therefore sum to $1$). (Here is a simultaneous definition of both hypotheses and evidence: hypotheses are things that assert how likely or unlikely evidence is. That is, what it means to give evidence $E$ about some hypotheses is that there ought to be some conditional probabilities $\mathbb{P}(E \mid H_i)$, the likelihoods, describing how likely it is that you see the evidence $E$ conditional on the hypothesis $H_i$.)

Bayes’ theorem in this setting is then usually stated as follows: you should now have updated posterior probabilities $\mathbb{P}(H_i \mid E)$ that your hypotheses are true conditional on your evidence, and they should be given by

$$\mathbb{P}(H_i \mid E) = \frac{\mathbb{P}(E \mid H_i) \, \mathbb{P}(H_i)}{\mathbb{P}(E)}.$$

That is, each prior probability $\mathbb{P}(H_i)$ gets multiplied by $\frac{\mathbb{P}(E \mid H_i)}{\mathbb{P}(E)}$, which describes how much more likely $H_i$ thinks the evidence is than it is on average under the prior. You might be concerned that $\mathbb{P}(E)$ requires the introduction of extra information, but in fact it must be given by

$$\mathbb{P}(E) = \sum_i \mathbb{P}(E \mid H_i) \, \mathbb{P}(H_i)$$

by conditioning on each $H_i$ in turn, so it’s already determined by the priors and the likelihoods. (This is if the $H_i$ are parameterized by a discrete parameter; in general this sum should be replaced by an integral.)

In practice this statement of Bayes’ theorem seems to be annoyingly easy to forget, at least for me. The idea is to think of $\mathbb{P}(E)$ as just a normalization constant: the posterior probability is proportional to the prior probability times the likelihood,

$$\mathbb{P}(H_i \mid E) \propto \mathbb{P}(E \mid H_i) \, \mathbb{P}(H_i),$$

where the proportionality constant is uniquely determined by the requirement that the posterior probabilities sum to $1$. Intuitively: after seeing some evidence, your confidence in a hypothesis gets multiplied by how well the hypothesis predicted the evidence, then normalized. Now you can take your posteriors to be your new priors in preparation for seeing some more evidence. This is a Bayesian update.
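To make the mechanics concrete, here is a minimal numerical sketch of a single discrete update; the three hypotheses about a die and all of the numbers in it are invented purely for illustration.

```python
# A minimal sketch of a discrete Bayesian update: posterior ∝ prior × likelihood.
# Hypotheses, priors, and likelihoods are invented for illustration only.

priors = {"fair die": 0.90, "loaded toward 6": 0.09, "always 6": 0.01}

# Assumed likelihoods P(E | H_i) of the evidence E = "we rolled a 6".
likelihoods = {"fair die": 1 / 6, "loaded toward 6": 1 / 2, "always 6": 1.0}

# Unnormalized posterior weights: prior times likelihood.
weights = {h: priors[h] * likelihoods[h] for h in priors}

# P(E) is just the normalization constant, already determined by the numbers above.
p_evidence = sum(weights.values())

posteriors = {h: w / p_evidence for h, w in weights.items()}

print(round(p_evidence, 3))                             # 0.205
print({h: round(p, 3) for h, p in posteriors.items()})  # ≈ 0.732, 0.220, 0.049
```

Feeding these posteriors back in as the priors for the next observation is exactly the repeated update described above.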
Aside: measures up to scale and improper priors

This statement of Bayes’ theorem suggests a slight reformulation of what we mean by a probability measure: a probability measure is the same thing as a measure with finite nonzero total measure, up to scaling by positive reals. One reason to like this description is that it naturally incorporates improper priors, which correspond to prior probabilities with possibly infinite total measure, up to scaling by positive reals. For example, there’s an improper prior assigning measure $1$ to every positive integer, which allows us to talk about hypotheses indexed by the positive integers with a prior that makes all of them equally likely.

Improper priors may seem obviously bad because they don’t assign probabilities to things: in order to assign a probability you need to normalize by the total measure, which is infinite. However, with an improper prior it is still meaningful to make comparisons between probabilities: you can still meaningfully say that $\mathbb{P}(H_i)$ is larger than $\mathbb{P}(H_j)$, or exactly $c$ times as large, since such comparisons are invariant under scaling by positive reals. There’s a somewhat philosophical argument that when performing Bayesian reasoning, only comparisons between probabilities are meaningful anyway: in order to know the probability of something in an absolute sense, you would need to be absolutely sure you’ve written down every possible hypothesis (in order to ensure that exactly one of them is true). The point is that after a Bayesian update an improper prior may become proper again.
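As a small sketch of that last point, the snippet below uses the improper “weight $1$ on every positive integer” prior together with a hypothetical geometrically decaying likelihood of my own choosing (and a truncation to finitely many hypotheses, which is purely a computational artifact). The posterior comes out proper, and rescaling the prior weights by a positive constant leaves it unchanged.

```python
# Sketch: an improper prior (weight 1 on every positive integer) can yield a
# proper posterior, and only the prior up to positive scaling matters.
# The geometric likelihood is a hypothetical choice made up for this example.

N = 50  # truncation: code can only handle finitely many hypotheses H_1, ..., H_N

# Improper prior: every hypothesis H_n gets weight 1, so the total weight
# grows without bound as the truncation N is increased.
prior_weight = [1.0] * N

# Hypothetical likelihoods P(E | H_n) that decay fast enough to be summable.
likelihood = [0.5 ** n for n in range(1, N + 1)]

def posterior(prior):
    weights = [p * l for p, l in zip(prior, likelihood)]
    total = sum(weights)  # finite even though the prior never normalizes
    return [w / total for w in weights]

post = posterior(prior_weight)
print(sum(post))   # ≈ 1.0: the posterior is a genuine probability distribution
print(post[:3])    # ≈ [0.5, 0.25, 0.125]

# Scaling the improper prior by any positive constant gives the same posterior.
scaled = posterior([7.3 * w for w in prior_weight])
print(max(abs(a - b) for a, b in zip(post, scaled)))   # ≈ 0.0
```

The prior weights themselves never normalize, but ratios of weights, and hence the posterior, are perfectly well defined.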