UA MATH566 Statistical Theory: Summary of Concepts and Theorems
Part 1 Exponential Family
Tip 1: Form of Exponential Family
$$f(x|\eta) = h(x)\exp\left(\sum_{i=1}^{n} \eta_i T_i(x) - A(\eta)\right)$$
Tip 2: $\{T_i(X)\}$ are the complete minimal sufficient statistics of the exponential family
Tip 3:
$$E T_i(X) = \frac{\partial A(\eta)}{\partial \eta_i}, \qquad Var(T(X)) = A''(\eta)$$
Tip 4:
$$I(\eta) = Var(T(X)), \qquad I(\theta) = I(\eta(\theta))\left(\frac{\partial \eta}{\partial \theta}\right)^2$$
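Tips 3 and 4 can be sanity-checked numerically on a concrete family. A minimal sketch, assuming the Poisson($\lambda$) family (not part of the original notes; $\lambda = 3$ and the finite-difference step are illustrative choices), where $\eta = \log\lambda$, $T(x) = x$, $h(x) = 1/x!$, and $A(\eta) = e^{\eta}$:

```python
import math

# Poisson(lam) in natural form: eta = log(lam), T(x) = x, A(eta) = exp(eta)
def A(eta):
    return math.exp(eta)

lam = 3.0
eta = math.log(lam)
h = 1e-5  # finite-difference step (illustrative choice)

# Tip 3: E[T(X)] = dA/d(eta) and Var(T(X)) = d^2 A/d(eta)^2
mean_T = (A(eta + h) - A(eta - h)) / (2 * h)            # ~ lam
var_T = (A(eta + h) - 2 * A(eta) + A(eta - h)) / h**2   # ~ lam

# Tip 4: I(eta) = Var(T(X)) = lam, and I(lam) = I(eta) * (d eta/d lam)^2 = 1/lam
I_lam = var_T * (1.0 / lam) ** 2
print(mean_T, var_T, I_lam)
```

Both derivatives of $A$ recover the Poisson mean and variance $\lambda$, and the chain rule in Tip 4 gives the familiar $I(\lambda) = 1/\lambda$.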
Part 2 Sufficient Statistics and Complete Statistics
Tip 1: Neyman-Fisher Theorem: $T(X)$ is sufficient $\Leftrightarrow$ $f(x,\theta) = h(x)g(T(x),\theta)$
Tip 2: $T(X)$ is a minimal sufficient statistic $\Leftrightarrow$ $\frac{L(\theta|\mathbf{X})}{L(\theta|\mathbf{Y})}$ is independent of $\theta$ iff $T(X) = T(Y)$
Tip 3: $f(x,\theta)$ is complete iff $E_{\theta}[g(X)] = 0$ for all $\theta$ $\Rightarrow$ $g(x) = 0$ a.s. If $f(x,\theta)$ is complete, any statistic of a sample from $f(x,\theta)$ is complete. A statistic $T(X)$ is complete iff $E_{\theta}[h(T)] = 0$ for all $\theta$ $\Rightarrow$ $h(t) = 0$ a.s.
Tip 4: Basu's Theorem: a complete minimal sufficient statistic is independent of any ancillary statistic.
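Basu's theorem can be illustrated by simulation. A sketch, assuming the $N(\mu, 1)$ model with known variance ($\mu = 5$, the sample size, and the seed are illustrative): $\bar{X}$ is complete sufficient for $\mu$, the sample variance $S^2$ is ancillary, so the two are independent and their sample correlation should hover near zero.

```python
import random
import statistics

random.seed(0)
n, reps = 20, 4000
xbars, s2s = [], []
for _ in range(reps):
    x = [random.gauss(5.0, 1.0) for _ in range(n)]
    xbars.append(statistics.fmean(x))      # complete sufficient for the mean
    s2s.append(statistics.variance(x))     # ancillary when sigma is known

# sample correlation between Xbar and S^2 (near 0, as Basu predicts)
mx, ms = statistics.fmean(xbars), statistics.fmean(s2s)
cov = sum((a - mx) * (b - ms) for a, b in zip(xbars, s2s)) / (reps - 1)
r = cov / (statistics.stdev(xbars) * statistics.stdev(s2s))
print(round(r, 3))
```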
Part 3 C-R Inequality and Fisher Information
Tip 1: C-R inequality: if $\hat{g}(X)$ and $\hat{\theta}(X)$ are unbiased estimators of $g(\theta)$ and $\theta$, where $g(\theta)$ is differentiable, then
$$Var(\hat{\theta}) \ge I^{-1}(\theta), \qquad Var(\hat{g}(X)) \ge [g'(\theta)]^2 I^{-1}(\theta)$$
Tip 2: C-R inequality in the multivariate case: if $\hat{g}(X)$ and $\hat{\theta}(X)$ are unbiased estimators of $g(\theta)$ and $\theta$, where $g(\theta)$ is differentiable with Jacobian $Dg(\theta)$, then
$$Var(\hat{g}(X)) \ge Dg(\theta) I^{-1}(\theta) [Dg(\theta)]^T$$
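The bound is attained in nice cases. A simulation sketch, assuming the Bernoulli($p$) model ($p = 0.3$, $n = 50$, and the seed are illustrative): $I(p) = 1/(p(1-p))$, so the C-R bound for an unbiased estimator of $p$ from $n$ draws is $p(1-p)/n$, which $\bar{X}$ attains exactly.

```python
import random
import statistics

random.seed(1)
p, n, reps = 0.3, 50, 5000
I = 1.0 / (p * (1 - p))      # Fisher information of one Bernoulli(p) draw
cr_bound = 1.0 / (n * I)     # C-R lower bound = p(1-p)/n

# Monte Carlo variance of the unbiased estimator Xbar
xbars = [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]
var_hat = statistics.variance(xbars)
print(cr_bound, var_hat)     # the two agree: Xbar attains the bound
```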
Tip 3: Properties of Fisher information: under the usual regularity conditions, $I(\theta) = E\left[\left(\frac{\partial}{\partial\theta}\ln f(X,\theta)\right)^2\right] = -E\left[\frac{\partial^2}{\partial\theta^2}\ln f(X,\theta)\right]$, and the information of an iid sample of size $n$ is $nI(\theta)$.
Tip 4: Fisher information under reparametrization: to transform the parameter $\theta$ to $\xi$,
$$I(\xi) = [D_{\xi}\theta(\xi)]^T I(\theta) D_{\xi}\theta(\xi)$$
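In the scalar case the formula reduces to $I(\xi) = (d\theta/d\xi)^2 I(\theta)$. A sketch, assuming the Exponential model with rate $\theta$ ($\theta = 2$ is illustrative), reparametrized to the mean $\xi = 1/\theta$:

```python
# Exponential with rate theta: I(theta) = 1/theta^2
theta = 2.0
I_theta = 1.0 / theta**2

# reparametrize to the mean xi = 1/theta, so theta(xi) = 1/xi
xi = 1.0 / theta
d_theta_d_xi = -1.0 / xi**2          # derivative of theta(xi)
I_xi = d_theta_d_xi**2 * I_theta     # Tip 4 in the scalar case
print(I_xi, 1.0 / xi**2)             # both equal 1/xi^2
```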
Part 4 Point Estimation
Tip 1: Moment estimation equations
$$\overline{X^m} = \frac{1}{n} \sum_{i=1}^n X_i^m \approx \mu_m(\theta) = E_{\theta}[X^m], \qquad m = 1, 2, \cdots, d$$
Equating the sample moments to the theoretical moments and solving for $\theta$ gives the moment estimators, which are by construction functions of the sample moments; the Delta method then yields their approximate mean and variance.
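A simulation sketch of both steps, assuming the Exponential model with rate $\lambda$ ($\lambda = 2$, $n = 200$, and the seed are illustrative): the first-moment equation $\bar{X} = 1/\lambda$ gives the moment estimator $\hat{\lambda} = 1/\bar{X}$, and the Delta method with $g(\mu) = 1/\mu$ predicts $Var(\hat{\lambda}) \approx \lambda^2/n$.

```python
import random
import statistics

random.seed(2)
lam, n, reps = 2.0, 200, 4000
lam_hats = []
for _ in range(reps):
    x = [random.expovariate(lam) for _ in range(n)]
    lam_hats.append(1.0 / statistics.fmean(x))  # solve Xbar = 1/lam for lam

mc_var = statistics.variance(lam_hats)          # Monte Carlo variance
delta_var = lam**2 / n                          # Delta method: g'(mu)^2 * Var(Xbar)
print(mc_var, delta_var)
```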
Tip 2: Maximum likelihood estimation
$$\hat{\theta} = \arg\max_{\theta \in \Theta} \ln f(x_1, x_2, \cdots, x_n, \theta)$$
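A sketch of the maximization, assuming a Bernoulli($p$) sample (the grid search, $p = 0.35$, and $n = 500$ are illustrative; in this model the argmax has the closed form $\hat{p} = k/n$, which the search recovers):

```python
import math
import random

random.seed(3)
n = 500
x = [1 if random.random() < 0.35 else 0 for _ in range(n)]
k = sum(x)

def loglik(p):
    # log f(x_1, ..., x_n, p) for an iid Bernoulli(p) sample with k successes
    return k * math.log(p) + (n - k) * math.log(1 - p)

# crude grid search over (0, 1); the closed-form MLE is k/n
grid = [i / 10000 for i in range(1, 10000)]
p_hat = max(grid, key=loglik)
print(p_hat, k / n)
```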
Tip 3: Lehmann-Scheffe Theorem
Part 5 Hypothesis Testing
Tip 1: Likelihood ratio test, define likelihood ratio
$$\lambda(X) = \frac{\sup_{\theta = \theta_0} L(\theta|X)}{\sup_{\theta \in \{\theta_0, \theta_1\}} L(\theta|X)}$$
and construct reject region with this statistic
$$C^* = \{X : \lambda(X) \le c_{\alpha}\}$$
where $c_{\alpha}$ satisfies $P(\lambda(X) \le c_{\alpha}) = \alpha$ under $H_0$.
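A simulation sketch of the size calibration, assuming a Gaussian mean test with known $\sigma = 1$ and the full alternative $\Theta = \mathbb{R}$ rather than the two-point alternative above ($n = 30$ and the seed are illustrative): here $-2\log\lambda(X) = n(\bar{X} - \theta_0)^2$, which is $\chi^2_1$ under $H_0$, so its 0.95 quantile 3.841 corresponds to $c_{\alpha}$ with $\alpha = 0.05$.

```python
import random
import statistics

random.seed(4)
theta0, n, reps = 0.0, 30, 4000
crit = 3.841   # chi-square(1) 0.95 quantile, i.e. -2*log(c_alpha) for alpha = 0.05

rejections = 0
for _ in range(reps):
    xbar = statistics.fmean(random.gauss(theta0, 1.0) for _ in range(n))
    if n * (xbar - theta0) ** 2 > crit:   # equivalent to lambda(X) <= c_alpha
        rejections += 1

rate = rejections / reps
print(round(rate, 3))   # empirical type-I error, close to 0.05
```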
Tip 2: Neyman-Pearson Lemma: the likelihood ratio test is UMP for testing a simple null against a simple alternative (a two-point test).
Tip 3: Karlin-Rubin theorem: if the likelihood ratio is monotone in some statistic, the likelihood ratio test is UMP for one-sided tests.
Part 6 Confidence Interval
Tip 1: $\gamma$-level confidence interval: $P\{\theta \in \hat{C}(X)\} = \gamma$
Tip 2: Pivot method: if $Q(X,\theta)$ is a pivot, solve the following for $\theta$ to get a confidence interval:
$$P(l \le Q(X,\theta) \le u) = \gamma$$
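A coverage-check sketch, assuming the $N(\theta, 1)$ model ($\theta = 1.5$, $n = 25$, and the seed are illustrative): $Q(X,\theta) = \sqrt{n}(\bar{X} - \theta)$ is a standard normal pivot, and inverting $-1.96 \le Q \le 1.96$ gives the 0.95-level interval $\bar{X} \pm 1.96/\sqrt{n}$.

```python
import math
import random
import statistics

random.seed(5)
theta, n, reps, z = 1.5, 25, 4000, 1.96   # z: 0.975 standard normal quantile

covered = 0
for _ in range(reps):
    xbar = statistics.fmean(random.gauss(theta, 1.0) for _ in range(n))
    half = z / math.sqrt(n)   # invert -z <= sqrt(n)*(xbar - theta) <= z
    if xbar - half <= theta <= xbar + half:
        covered += 1

print(round(covered / reps, 3))   # empirical coverage, close to 0.95
```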
Tip 3: Optimal confidence interval
$$\min E[\hat{\theta}_u - \hat{\theta}_l] \quad \text{s.t.} \quad P[\hat{\theta}_l \le \theta \le \hat{\theta}_u] \ge \gamma$$
Tip 4: For $\Theta \subset \mathbb{R}$, a $\gamma$-level confidence interval is exactly the acceptance region of a level-$(1-\gamma)$ hypothesis test.