Maximum (random variable)

This page is a stub, so it contains little or minimal information and is on a to-do list for being expanded. The message provided is:
    This page is currently little more than notes, beware. Alec (talk) 18:56, 26 November 2017 (UTC)

Definition notes

Let X_1, \ldots, X_n be i.i.d. random variables sampled from a distribution X, and additionally let M := \mathrm{Max}(X_1, \ldots, X_n) for short.

  • P[\mathrm{Max}(X_1,\ldots,X_n) \le x] = P[X_1 \le x] \cdot P[X_2 \le x \mid X_1 \le x] \cdots P[X_n \le x \mid X_1 \le x \cap \cdots \cap X_{n-1} \le x]
    = \prod_{i=1}^n P[X_i \le x] - provided the X_i are independent random variables
    = \prod_{i=1}^n P[X \le x] - as each X_i is identically distributed to X
    = (P[X \le x])^n - verified numerically below
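As a quick numerical sanity check of this product formula (a minimal sketch; taking X \sim \mathrm{Uniform}(0,1), so that P[X \le x] = x, is my illustrative choice and not part of the definition):

    import random

    n = 5            # number of i.i.d. samples per trial (illustrative)
    x = 0.7          # point at which to evaluate the CDF
    trials = 100_000

    # empirical P[Max(X_1, ..., X_n) <= x] for X ~ Uniform(0, 1)
    hits = sum(max(random.random() for _ in range(n)) <= x
               for _ in range(trials))
    print(hits / trials)  # ~ 0.168
    print(x ** n)         # (P[X <= x])^n = 0.7^5 = 0.16807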

We shall call this F'(x) := (P[X \le x])^n (and use F(x) := P[X \le x], as is usual for cumulative distribution functions). Caveat: do not confuse the primes (') for derivatives. Then:

  • the probability density function, f'(x) := \frac{d}{dx}[F'(x)]\big\vert_x [Note 1], is:
    • f'(x) = \frac{d}{dx}[(P[X \le x])^n]\big\vert_x
      = \frac{d}{dx}[(F(x))^n]\big\vert_x
      = n(F(x))^{n-1}f(x) by the chain rule, herein written nF(x)^{n-1}f(x) for simplicity.
    • so f'(x) = nF(x)^{n-1}f(x) - checked numerically below
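A numerical sketch of the chain-rule step (again taking X \sim \mathrm{Uniform}(0,1) purely for illustration), comparing a central finite difference of F'(x) = F(x)^n against nF(x)^{n-1}f(x):

    n, x, h = 5, 0.7, 1e-6

    F = lambda t: t      # CDF of Uniform(0, 1)
    f = lambda t: 1.0    # pdf of Uniform(0, 1)

    # d/dx[F(x)^n] by central finite difference, versus the chain-rule form
    numeric = (F(x + h) ** n - F(x - h) ** n) / (2 * h)
    exact = n * F(x) ** (n - 1) * f(x)
    print(numeric, exact)  # both ~ 1.2005 (= 5 * 0.7^4)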

Expectation of the maximum

  • E[M] := \int x f'(x)\,dx
    = n\int x f(x) F(x)^{n-1}\,dx
    - I wonder if we could use integration by parts or a good integration by substitution to clear this up (a numerical evaluation is sketched below)
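A rough numerical evaluation of this integral (a sketch only; X \sim \mathrm{Uniform}(0,1) and a plain midpoint Riemann sum are my choices, to avoid dependencies), compared against the closed form from the special case below:

    n, steps = 5, 100_000

    F = lambda t: t      # CDF of Uniform(0, 1), supported on [0, 1]
    f = lambda t: 1.0    # pdf of Uniform(0, 1)

    # midpoint Riemann sum of n * x * f(x) * F(x)^(n-1) over the support
    dx = 1.0 / steps
    E = sum(n * ((k + 0.5) * dx) * f((k + 0.5) * dx) * F((k + 0.5) * dx) ** (n - 1)
            for k in range(steps)) * dx
    print(E)  # ~ 0.83333 = n/(n+1), i.e. (nb + a)/(n + 1) with a = 0, b = 1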

Special cases

  • For X \sim \mathrm{Rect}([a,b]):
    • E[\mathrm{Max}(X_1,\ldots,X_n)] = \frac{nb+a}{n+1} (derivation sketched below)
      • This is actually simplified from the perhaps more useful a + \frac{n}{n+1}(b-a); recognising (b-a) as the width of the uniform distribution, we see that it slightly "underestimates" a + (b-a) = b. From this we can actually obtain a very useful unbiased estimator.
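The uniform case follows from the expectation integral above; a sketch of the computation, with F(x) = \frac{x-a}{b-a} and f(x) = \frac{1}{b-a} on [a,b], substituting u = x - a:

    E[M] = \int_a^b n\,x\,f(x)F(x)^{n-1}\,dx
         = \int_a^b n\,x\,\frac{(x-a)^{n-1}}{(b-a)^n}\,dx
         = \int_0^{b-a} n\,(u+a)\,\frac{u^{n-1}}{(b-a)^n}\,du
         = \frac{n(b-a)}{n+1} + a = \frac{nb+a}{n+1}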

a=0 case

Suppose that a = 0. Then, to find b, we could observe that E[X] = \frac{b}{2}, so 2x the average of our sample would have expectation b - this is indeed true, as spelt out below.
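Spelling that out (just linearity of expectation):

    E\left[2 \cdot \frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{2}{n}\sum_{i=1}^n E[X_i] = \frac{2}{n} \cdot n \cdot \frac{b}{2} = b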

However, note that in this case the maximum has expectation E[M] = \frac{n}{n+1}b.

  • Thus \frac{n+1}{n}E[M] = b, and so E\left[\frac{n+1}{n}M\right] = b

So defining M' := \frac{n+1}{n}M, we do obtain an unbiased estimator for b from our biased one (simulated below).
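A simulation sketch comparing the two unbiased estimators (the variable names and the choices b = 10, n = 10 are mine, purely for illustration):

    import random
    from statistics import mean, variance

    b, n, trials = 10.0, 10, 50_000

    max_est, avg_est = [], []
    for _ in range(trials):
        xs = [random.uniform(0, b) for _ in range(n)]
        max_est.append((n + 1) / n * max(xs))  # M' = ((n+1)/n) * Max(X_1,...,X_n)
        avg_est.append(2 * mean(xs))           # 2x the sample average

    # both sample means should be ~ b = 10; compare the sample variances
    print(mean(max_est), variance(max_est))  # ~ 10, ~ 0.83
    print(mean(avg_est), variance(avg_est))  # ~ 10, ~ 3.33 (= 4 * b^2 / (12n))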

It can be shown that for n \ge 8, M has lower variance (and is thus better) than the 2x-average estimator; they agree for n = 1. For 2 \le n \le 7, 2x the average has the lower variance, and is thus objectively better (or the same, from the n = 1 case) as an estimator for b when n \le 7. Warning: only the following is known. Alec (talk) 18:56, 26 November 2017 (UTC)

  • This is only true when comparing the variance of M (not M') to that of \frac{1}{n}\sum_{i=1}^n X_i; as we double the average, the variance would go up 4 times, making the difference even worse.
  • To obtain M' we multiply M by a constant (specifically \frac{n+1}{n}) that is only slightly bigger than 1 once n stops being tiny. This will increase the variance by a factor of this value squared, which will still be "slightly bigger than 1", so M' is a better estimator. It may, however, move the specific bound of "better for n \ge 8" further down - this needs to be calculated (a calculation is sketched after this list).
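The calculation asked for above, done exactly with rationals using the closed-form variances listed under "Specifically" below (scaling Var(M) by \left(\frac{n+1}{n}\right)^2 for M', and the average's variance by 4 for the doubled average); this arithmetic is mine, not from the original page:

    from fractions import Fraction as Fr

    for n in range(1, 12):
        var_M = Fr(n, n + 2) - Fr(n**2, (n + 1)**2)  # Var(M)/b^2 (see below)
        var_Mprime = Fr((n + 1)**2, n**2) * var_M    # Var(M')/b^2
        var_2avg = Fr(4, 12 * n)                     # Var(2 * average)/b^2
        print(n, var_Mprime, var_2avg, var_Mprime < var_2avg)
    # at n = 1 the two variances are equal (1/3 each, so it prints False for
    # the strict <); for every n >= 2 it prints True, i.e. once corrected,
    # M' beats the doubled average for all n >= 2, not just n >= 8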

This is independent of b


Specifically:

  • Var(M) = \left(\frac{n}{n+2} - \frac{n^2}{(n+1)^2}\right)b^2 - note this is for M, not M' (derivation sketched below), and
  • Var\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{b^2}{12n}
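A sketch of where the first formula comes from, in the a = 0 case where the density of M is f'(x) = nF(x)^{n-1}f(x) = \frac{n x^{n-1}}{b^n}:

    E[M^2] = \int_0^b x^2 \frac{n x^{n-1}}{b^n}\,dx = \frac{n}{n+2}b^2

    Var(M) = E[M^2] - (E[M])^2 = \frac{n}{n+2}b^2 - \left(\frac{n}{n+1}b\right)^2 = \left(\frac{n}{n+2} - \frac{n^2}{(n+1)^2}\right)b^2

The second formula is just Var(X)/n, with Var(X) = \frac{(b-a)^2}{12} = \frac{b^2}{12} for a uniform on [0, b].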

Notes

  1. The x inside the square brackets is bound to the x at the base of the \vert