
Two-View Analysis

Consider the following two views:
$$\mathcal{R}_1 = \mathcal{H}_1 \mathcal{W}_1 + \mathcal{Z}_1$$
$$\mathcal{R}_2 = \mathcal{H}_2 \mathcal{W}_2 + \mathcal{Z}_2$$

  • $\mathcal{W}_1$ has no direct prior, but is conditionally dependent on $\mathcal{W}_2$ through their correlation.
  • $\mathcal{W}_2 \sim \mathcal{N}(\mu_{w_2}, \sigma_{w_2}^2)$, and its correlation with $\mathcal{W}_1$ is defined by
    $$\operatorname{Cov}(\mathcal{W}_1, \mathcal{W}_2) = K \sigma_{w_1} \sigma_{w_2}$$
  • $\mathcal{H}_1, \mathcal{H}_2$ are independent Rayleigh-distributed:
    $$\mathcal{H}_i \sim \text{Rayleigh}(\sigma_{h_i}), \quad i = 1, 2$$
  • $\mathcal{Z}_1, \mathcal{Z}_2$ are independent Gaussian noises:
    $$\mathcal{Z}_i \sim \mathcal{N}(0, \sigma_z^2), \quad i = 1, 2$$
The decoder receives $\mathcal{R}_1$ and $\mathcal{R}_2$. For the recovery of $\mathcal{W}_1$, $\mathcal{R}_2$ serves as side information: because $\mathcal{W}_1$ and $\mathcal{W}_2$ are correlated, $\mathcal{R}_2$ effectively provides a lossy prior on $\mathcal{W}_1$. The prior is lossy because the transmission of $\mathcal{R}_2$ is itself corrupted by noise. Now consider the case where the recovery of $\mathcal{W}_1$ has no perfect CSI, only estimated CSI, and derive the Bayesian posterior given $\mathcal{R}_1$ and $\mathcal{R}_2$.
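As a quick sanity check of the setup, here is a minimal simulation sketch of the two-view model; the numerical values of $K$, $\sigma_{w_1}$, $\sigma_{w_2}$, $\mu_{w_2}$, $\sigma_{h_i}$, and $\sigma_z$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal simulation of the two-view model; all parameter values are assumed.
rng = np.random.default_rng(0)

K, sigma_w1, sigma_w2, mu_w2 = 0.8, 1.0, 1.5, 0.5
sigma_h1, sigma_h2, sigma_z = 1.0, 1.0, 0.3
n = 10_000

# Jointly Gaussian (W1, W2) with Cov(W1, W2) = K * sigma_w1 * sigma_w2
mean = np.array([0.0, mu_w2])
cov = np.array([[sigma_w1**2, K * sigma_w1 * sigma_w2],
                [K * sigma_w1 * sigma_w2, sigma_w2**2]])
W = rng.multivariate_normal(mean, cov, size=n)
W1, W2 = W[:, 0], W[:, 1]

# Independent Rayleigh channels and Gaussian noise
H1 = rng.rayleigh(scale=sigma_h1, size=n)
H2 = rng.rayleigh(scale=sigma_h2, size=n)
Z1 = rng.normal(0.0, sigma_z, size=n)
Z2 = rng.normal(0.0, sigma_z, size=n)

R1 = H1 * W1 + Z1
R2 = H2 * W2 + Z2

print("empirical corr(W1, W2):", np.corrcoef(W1, W2)[0, 1])  # should be close to K
```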

Joint Likelihood Function
$$p(\mathcal{R}_1, \mathcal{R}_2 \mid \mathcal{W}_1, \mathcal{W}_2) = \int p(\mathcal{R}_1 \mid \mathcal{W}_1, \mathcal{H}_1)\, p(\mathcal{R}_2 \mid \mathcal{W}_2, \mathcal{H}_2)\, p(\mathcal{H}_1)\, p(\mathcal{H}_2)\, d\mathcal{H}_1\, d\mathcal{H}_2$$
Marginalizing Rayleigh Distributions
$$p(\mathcal{R}_i \mid \mathcal{W}_i) = \int \frac{h_i}{\sigma_{h_i}^2} \exp\left(-\frac{h_i^2}{2\sigma_{h_i}^2}\right) \frac{1}{\sqrt{2 \pi \sigma_z^2}} \exp\left\{-\frac{(\mathcal{R}_i - h_i \mathcal{W}_i)^2}{2 \sigma_z^2}\right\} dh_i$$
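This integral generally has no simple closed form, so one option is to evaluate it by numerical quadrature. A minimal sketch, assuming SciPy is available and using illustrative values for $r$, $w$, $\sigma_{h_i}$, and $\sigma_z$:

```python
import numpy as np
from scipy.integrate import quad

def marginal_likelihood(r, w, sigma_h, sigma_z):
    """Numerically evaluate p(R_i = r | W_i = w) by integrating the
    Rayleigh channel density against the Gaussian noise likelihood."""
    def integrand(h):
        rayleigh = (h / sigma_h**2) * np.exp(-h**2 / (2 * sigma_h**2))
        gaussian = np.exp(-(r - h * w)**2 / (2 * sigma_z**2)) / np.sqrt(2 * np.pi * sigma_z**2)
        return rayleigh * gaussian
    return quad(integrand, 0.0, np.inf)[0]

# Illustrative (assumed) values: r = 0.7, w = 1.0, sigma_h = 1.0, sigma_z = 0.3
print(marginal_likelihood(0.7, 1.0, 1.0, 0.3))
```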
Since $\mathcal{W}_1$ has no direct prior, we model it conditionally based on $\mathcal{W}_2$ as 【PRML, p.87】:
$$\mathcal{W}_1 \mid \mathcal{W}_2 \sim \mathcal{N}\!\left( K \frac{\sigma_{w_1}}{\sigma_{w_2}} (\mathcal{W}_2 - \mu_{w_2}),\; (1 - K^2) \sigma_{w_1}^2 \right)$$

$$\begin{bmatrix} \mathcal{W}_1 \\ \mathcal{W}_2 \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} 0 \\ \mu_{w_2} \end{bmatrix}, \begin{bmatrix} \sigma_{w_1}^2 & K \sigma_{w_1} \sigma_{w_2} \\ K \sigma_{w_1} \sigma_{w_2} & \sigma_{w_2}^2 \end{bmatrix} \right)$$
We want the conditional distribution $\mathcal{W}_1 \mid \mathcal{W}_2$.
By the properties of the multivariate Gaussian, the conditional distribution is again Gaussian, with mean and covariance as follows:
$$\mathbb{E}[\mathcal{W}_1 \mid \mathcal{W}_2] = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (\mathcal{W}_2 - \mu_2)$$
where $\mu_1 = 0$, $\mu_2 = \mu_{w_2}$, $\Sigma_{12} = K \sigma_{w_1} \sigma_{w_2}$, $\Sigma_{22} = \sigma_{w_2}^2$.
The conditional covariance is
$$\operatorname{Var}[\mathcal{W}_1 \mid \mathcal{W}_2] = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}$$
with $\Sigma_{11} = \sigma_{w_1}^2$, $\Sigma_{12} = K \sigma_{w_1} \sigma_{w_2}$, $\Sigma_{22} = \sigma_{w_2}^2$, so
$$\operatorname{Var}[\mathcal{W}_1 \mid \mathcal{W}_2] = \sigma_{w_1}^2 - \frac{(K \sigma_{w_1} \sigma_{w_2})^2}{\sigma_{w_2}^2} = \sigma_{w_1}^2 (1 - K^2)$$

$$\mathbb{E}[\mathcal{W}_1 \mid \mathcal{W}_2] = K \frac{\sigma_{w_1}}{\sigma_{w_2}} (\mathcal{W}_2 - \mu_{w_2})$$
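These two closed forms can be sanity-checked by Monte Carlo: sample the joint Gaussian, condition on a thin slice of $\mathcal{W}_2$, and compare empirical moments with the formulas. A sketch with assumed parameter values:

```python
import numpy as np

# Monte Carlo check of the conditional mean/variance formulas.
# Parameter values are illustrative assumptions, not taken from the text.
rng = np.random.default_rng(1)
K, sigma_w1, sigma_w2, mu_w2 = 0.8, 1.0, 1.5, 0.5

cov = np.array([[sigma_w1**2, K * sigma_w1 * sigma_w2],
                [K * sigma_w1 * sigma_w2, sigma_w2**2]])
W = rng.multivariate_normal([0.0, mu_w2], cov, size=2_000_000)
W1, W2 = W[:, 0], W[:, 1]

# Condition on a thin slice W2 ≈ w2_star and compare against the closed forms.
w2_star = 1.0
sel = np.abs(W2 - w2_star) < 0.01
print("empirical mean:", W1[sel].mean(),
      " formula:", K * sigma_w1 / sigma_w2 * (w2_star - mu_w2))
print("empirical var: ", W1[sel].var(),
      " formula:", (1 - K**2) * sigma_w1**2)
```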

Using Bayes’ theorem, we derive the posterior of $\mathcal{W}_1$:

  • $p(\mathcal{W}_1 \mid \mathcal{R}_1, \mathcal{R}_2) \propto p(\mathcal{R}_1 \mid \mathcal{W}_1)\, p(\mathcal{W}_1 \mid \mathcal{W}_2)\, p(\mathcal{W}_2 \mid \mathcal{R}_2)$
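One way to turn this proportionality into numbers is a grid evaluation that averages the conditional prior $p(\mathcal{W}_1 \mid \mathcal{W}_2)$ over the side-information posterior $p(\mathcal{W}_2 \mid \mathcal{R}_2) \propto p(\mathcal{R}_2 \mid \mathcal{W}_2)\, p(\mathcal{W}_2)$. The sketch below is only one possible reading of the factorization: the received values $r_1$, $r_2$ and all parameters are assumed for illustration, and the channel-marginalized likelihood is re-evaluated by quadrature as in the earlier sketch.

```python
import numpy as np
from scipy.integrate import quad, trapezoid

# Grid-based sketch of the posterior factorization above.
# All parameter values and the received samples r1, r2 are assumed.
K, sigma_w1, sigma_w2, mu_w2 = 0.8, 1.0, 1.5, 0.5
sigma_h1, sigma_h2, sigma_z = 1.0, 1.0, 0.3
r1, r2 = 0.7, 1.2  # assumed received values

def gauss(x, mu, var):
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def marginal_likelihood(r, w, sigma_h, sigma_z):
    # p(R_i = r | W_i = w) with the Rayleigh channel integrated out
    integrand = lambda h: (h / sigma_h**2) * np.exp(-h**2 / (2 * sigma_h**2)) * gauss(r, h * w, sigma_z**2)
    return quad(integrand, 0.0, np.inf)[0]

# p(W2 | R2) ∝ p(R2 | W2) p(W2), evaluated on a grid and normalized
w2_grid = np.linspace(mu_w2 - 5 * sigma_w2, mu_w2 + 5 * sigma_w2, 400)
post_w2 = np.array([marginal_likelihood(r2, w2, sigma_h2, sigma_z) for w2 in w2_grid])
post_w2 *= gauss(w2_grid, mu_w2, sigma_w2**2)
post_w2 /= trapezoid(post_w2, w2_grid)

# p(W1 | R1, R2) ∝ p(R1 | W1) * E_{W2|R2}[ p(W1 | W2) ]
w1_grid = np.linspace(-4 * sigma_w1, 4 * sigma_w1, 200)
post_w1 = np.empty_like(w1_grid)
for i, w1 in enumerate(w1_grid):
    cond_prior = gauss(w1, K * sigma_w1 / sigma_w2 * (w2_grid - mu_w2), (1 - K**2) * sigma_w1**2)
    post_w1[i] = marginal_likelihood(r1, w1, sigma_h1, sigma_z) * trapezoid(cond_prior * post_w2, w2_grid)
post_w1 /= trapezoid(post_w1, w1_grid)

print("posterior mean of W1 given (R1, R2):", trapezoid(w1_grid * post_w1, w1_grid))
```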