Extracting information

How much information is extractable from a uniform distribution with one constraint?

Uniform distribution: all allowed values have equal probability. For example, the allowed values could be on the interval {$[a,b]$}.

One constraint: the allowed values satisfy a constraint, for example, they are on a geometric shape.
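One concrete reading of the question, offered only as a sketch: if "information" is taken as the reduction in differential entropy when we learn that a uniformly distributed point satisfies the constraint, then the extractable information is the logarithm of the ratio between the original support and the constrained support. The Python sketch below estimates this by Monte Carlo for a hypothetical constraint (points of a square that lie inside the unit circle, a geometric shape as above); both the choice of constraint and the entropy reading of "information" are assumptions, not taken from this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: uniform distribution on the square [-1, 1] x [-1, 1],
# with one constraint -- the point must lie inside the unit circle
# (a geometric shape, as suggested above).
a, b = -1.0, 1.0
n = 1_000_000
points = rng.uniform(a, b, size=(n, 2))

# Fraction of the uniform support that satisfies the constraint.
inside = (points ** 2).sum(axis=1) <= 1.0
fraction = inside.mean()

# Reading "information" as reduction in differential entropy: learning that
# the constraint holds shrinks the support by this fraction, removing
# -log(fraction) nats of uncertainty.
info_nats = -np.log(fraction)
print(f"fraction satisfying constraint ~ {fraction:.4f}")  # ~ pi/4
print(f"information extracted ~ {info_nats:.4f} nats")      # ~ log(4/pi) ~ 0.24
```

Under this reading the answer depends only on how much of the support the constraint removes, not on where in the support the constraint sits, since the distribution is uniform.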

Cramér-Rao lower bound

  • The precision of any unbiased estimator is at most the Fisher information.
  • Equivalently, the reciprocal of the Fisher information is a lower bound on the estimator's variance.

Suppose {$\theta$} is an unknown deterministic parameter that is to be estimated from {$n$} independent observations (measurements) of {$X$}, each drawn from a distribution with probability density function {$f(x;\theta)$}. The variance of any unbiased estimator {$\hat{\theta}$} of {$\theta$} is then bounded below by the reciprocal of the Fisher information {$I(\theta)$}:

{$\operatorname{var}({\hat {\theta }})\geq {\frac {1}{I(\theta )}}$}

where the Fisher information is {$I(\theta )=n\operatorname {E} _{X;\theta }\left[\left({\frac {\partial \ell (X;\theta )}{\partial \theta }}\right)^{2}\right]$}, {$ \ell (x;\theta )=\log(f(x;\theta ))$} is the natural logarithm of the likelihood function for a single sample {$x$}, and {$\operatorname {E} _{X;\theta }$} denotes the expected value with respect to the density {$f(x;\theta )$} of {$X$}.
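As a minimal numerical check of the bound, assuming a Bernoulli model that this page does not specify: for {$n$} independent Bernoulli({$\theta$}) observations the per-sample Fisher information is {$1/(\theta(1-\theta))$}, so the Cramér-Rao lower bound is {$\theta(1-\theta)/n$}. The sample mean is unbiased and, for this model, attains the bound, so its simulated variance should sit right at it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model (not from this page): n independent Bernoulli(theta) samples.
# Score for one sample x: d/dtheta log f(x; theta) = x/theta - (1 - x)/(1 - theta),
# so the per-sample Fisher information is 1 / (theta * (1 - theta)).
theta, n, trials = 0.3, 100, 20_000

fisher_per_sample = 1.0 / (theta * (1.0 - theta))
crlb = 1.0 / (n * fisher_per_sample)  # Cramer-Rao lower bound on the variance

# The sample mean is an unbiased estimator of theta; simulate its variance.
samples = rng.binomial(1, theta, size=(trials, n))
estimates = samples.mean(axis=1)

print(f"Cramer-Rao lower bound  : {crlb:.6f}")            # theta * (1 - theta) / n
print(f"variance of sample mean : {estimates.var():.6f}")  # close to the bound
```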

Fisher information is a way of measuring the amount of information that an observable random variable {$X$} carries about an unknown parameter {$\theta $} of a distribution that models {$X$}.

  • It is the variance of the score, where the score is the gradient of the log-likelihood function with respect to the parameter vector.
  • Equivalently, it is the expected value of the observed information, where the observed information is the negative of the second derivative (in the multiparameter case, the Hessian matrix) of the log-likelihood, that is, of the logarithm of the likelihood function. Both characterizations are checked numerically in the sketch after this list.
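A small sketch, assuming a normal model with known variance that this page does not specify, can confirm that the two characterizations agree: for {$X\sim N(\mu,\sigma^2)$} with unknown {$\mu$}, the score is {$(x-\mu)/\sigma^2$} and the second derivative of the log-likelihood is the constant {$-1/\sigma^2$}, so both the variance of the score and the negative expected second derivative equal {$1/\sigma^2$}.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model (not from this page): X ~ Normal(mu, sigma^2), sigma known,
# unknown parameter mu.
#   score:              d/dmu   log f(x; mu) = (x - mu) / sigma^2
#   second derivative:  d2/dmu2 log f(x; mu) = -1 / sigma^2   (constant)
mu, sigma, n = 2.0, 1.5, 1_000_000
x = rng.normal(mu, sigma, size=n)

score = (x - mu) / sigma**2
var_of_score = score.var()              # Fisher information as variance of the score
neg_expected_hessian = 1.0 / sigma**2   # Fisher information as expected observed information

print(f"variance of the score           : {var_of_score:.4f}")
print(f"negative expected 2nd derivative: {neg_expected_hessian:.4f}")
print(f"analytic Fisher information     : {1.0 / sigma**2:.4f}")  # 1 / sigma^2
```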

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component.
