[Natural Language Processing] Computing the Entropy of a Language
1. Requirements
Using the given Chinese and English corpora, compute the entropy of English letters, English words, Chinese characters, and Chinese words. Compare the results with published figures, and consider how the entropy of Chinese affects Chinese text encoding and processing.
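All four measurements below estimate the same quantity, the unigram (zeroth-order) entropy of a symbol distribution, where the symbol is a letter, a word, or a character depending on the experiment. Writing p(x_i) for the relative frequency of symbol x_i in the corpus:

    H(X) = -\sum_i p(x_i) \log_2 p(x_i)    (bits per symbol)

The per-symbol values printed by the scripts below are the contributions -p(x_i)\log_2 p(x_i) to H(X), not the entropy of an individual symbol.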
2. Experiments
2.1 Entropy of the English corpus
1. Code
(1) Entropy of English letters
import math

# Count occurrences of each English letter (case-folded) and compute
# the unigram letter entropy of the corpus.
def calculate_letter_entropy(file_path):
    letter_count = {}
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            for char in line:
                if char.isalpha():
                    char = char.lower()
                    if 'a' <= char <= 'z':
                        letter_count[char] = letter_count.get(char, 0) + 1
    total_count = sum(letter_count.values())
    letter_prob = {k: v / total_count for k, v in letter_count.items()}
    # Per-letter contribution -p*log2(p) to the overall entropy
    letter_entropy = {letter: -prob * math.log2(prob)
                      for letter, prob in letter_prob.items()}
    overall_entropy = -sum(prob * math.log2(prob) for prob in letter_prob.values())
    return letter_entropy, overall_entropy

file_path = r'D:\[NLP]test work\实验语料库25\实验2、3\eng.txt'  # raw string avoids backslash escapes
letter_entropy, overall_entropy1 = calculate_letter_entropy(file_path)
print("Entropy contribution of each English letter:")
for letter, entropy_value in letter_entropy.items():
    print(f"{letter}: {entropy_value}")
print(f"Overall entropy of English letters: {overall_entropy1}")
(2) Entropy of English words
import math

# Count whitespace-separated tokens (case-folded) and compute the
# unigram word entropy of the corpus.
def calculate_word_entropy(file_path):
    word_count = {}
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            for word in line.strip().split():
                word = word.lower()
                word_count[word] = word_count.get(word, 0) + 1
    total_count = sum(word_count.values())
    word_prob = {k: v / total_count for k, v in word_count.items()}
    # Per-word contribution -p*log2(p) to the overall entropy
    word_entropy = {word: -prob * math.log2(prob)
                    for word, prob in word_prob.items()}
    overall_entropy = -sum(prob * math.log2(prob) for prob in word_prob.values())
    return word_entropy, overall_entropy

file_path = r'D:\[NLP]test work\实验语料库25\实验2、3\eng.txt'
word_entropy, overall_entropy2 = calculate_word_entropy(file_path)
print("Entropy contribution of each English word:")
for word, entropy_value in word_entropy.items():
    print(f"{word}: {entropy_value}")
print(f"Overall entropy of English words: {overall_entropy2}")
2. Results and analysis
(1) The entropy of English letters is shown in the figure below. Published figures put the unigram entropy of English letters at about 4.03 bits; this experiment yields about 4.16 bits. A small gap is expected, since letter frequencies vary with the corpus.
(2) The entropy of English words is shown in the figure below. Published figures put the entropy of English words at about 10 bits; this experiment yields about 9.96 bits.
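As a sanity check (a bound, not an experimental figure): unigram letter entropy can never exceed that of a uniform distribution over the 26 letters,

    H_max = \log_2 26 \approx 4.70 bits,

so the measured 4.16 bits lies plausibly below this ceiling.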
2.2 Entropy of the Chinese corpus
1. Code
(1) Entropy of Chinese characters
import math
from collections import Counter

# Count CJK characters in the Unicode range U+4E00..U+9FFF and compute
# the unigram entropy of Chinese characters in the corpus.
def calculate_char_entropy(file_path):
    char_count = Counter()
    # errors='ignore' silently drops bytes that are not valid GB2312;
    # the superset encoding 'gb18030' would drop fewer characters.
    with open(file_path, 'r', encoding='gb2312', errors='ignore') as file:
        for line in file:
            chinese_chars = [char for char in line if '\u4e00' <= char <= '\u9fff']
            char_count.update(chinese_chars)
    total_count = sum(char_count.values())
    char_prob = {char: count / total_count for char, count in char_count.items()}
    # Per-character contribution -p*log2(p) to the overall entropy
    char_entropy = {char: -prob * math.log2(prob) for char, prob in char_prob.items()}
    overall_entropy = -sum(prob * math.log2(prob) for prob in char_prob.values())
    return char_entropy, overall_entropy

file_path = r'D:\[NLP]test work\实验语料库25\实验2、3\chn.txt'
char_entropy, overall_entropy1 = calculate_char_entropy(file_path)
print("Entropy contribution of each Chinese character:")
for char, entropy_value in char_entropy.items():
    print(f"{char}: {entropy_value}")
print(f"Overall entropy of Chinese characters: {overall_entropy1}")
(2) Entropy of Chinese words
import math
from collections import Counter

# Compute the unigram entropy of Chinese words. The corpus is assumed to be
# pre-segmented, i.e. words are already separated by whitespace.
def calculate_word_entropy(file_path):
    word_count = Counter()
    with open(file_path, 'r', encoding='gb2312', errors='ignore') as file:
        text = file.read()
    words = text.split()
    word_count.update(words)
    total_word_count = len(words)
    word_prob = {word: count / total_word_count for word, count in word_count.items()}
    word_entropy = {word: -prob * math.log2(prob) for word, prob in word_prob.items()}
    # Summing the per-word contributions -p*log2(p) gives the overall entropy
    overall_entropy = sum(word_entropy.values())
    return word_entropy, overall_entropy

file_path = r'D:\[NLP]test work\实验语料库25\实验2、3\chn.txt'
word_entropy, overall_entropy = calculate_word_entropy(file_path)
for word, entropy_value in word_entropy.items():
    print(f"{word}: {entropy_value}")
print(f"Overall entropy of Chinese words: {overall_entropy}")
2. Results and analysis
(1) The entropy of Chinese characters is shown in the figure below. Published figures put the information entropy of Chinese characters at about 9.71 bits; this experiment yields about 9.50 bits.
(2) The entropy of Chinese words is shown in the figure below. Published figures put the entropy of Chinese words at about 11.46 bits, with an average word length of about 2.5 characters; this experiment yields about 10.77 bits.
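One way to relate the two Chinese results (an interpretation added here, not a figure from the sources): dividing word entropy by average word length estimates the bits per character when text is modeled word by word,

    10.77 \text{ bits/word} \div 2.5 \text{ chars/word} \approx 4.3 \text{ bits/char},

well below the 9.50 bits/char unigram estimate, because word-level modeling captures the strong dependencies between the characters inside a word.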
3. Summary
Information entropy reflects the complexity of a corpus's symbol distribution: the higher the entropy, the more diverse the language's structures and expressions. Comparing the experimental results, the entropy of Chinese is higher than that of English at both the character/letter level and the word level. One reason is inventory size: Chinese draws on thousands of common characters while English spells everything from 26 letters, so each letter carries far less information than a character. At the word level, Chinese has a rich vocabulary and flexible usage, where variation in word order and word choice produces many semantic distinctions, whereas English grammar is comparatively rigid and often offers fewer ways to express the same content. For encoding, entropy is a lower bound on the average number of bits per symbol in any lossless code: at roughly 9.5-9.7 bits per character, Chinese text needs on the order of 10 bits per character under a unigram model, which is consistent with the two bytes per character used by encodings such as GB2312.
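To make the coding implication concrete, the sketch below builds a Huffman code from a {symbol: probability} table (such as the char_prob dictionary from 2.2) and compares its average code length with the entropy; Huffman coding is guaranteed to average within 1 bit of the entropy. This is an added illustration, not part of the original experiment:

import heapq
import math

# Build a Huffman code for a {symbol: probability} table and report the
# average code length alongside the distribution's entropy.
def huffman_vs_entropy(prob):
    # Heap items: (probability, tie-breaker, {symbol: code-so-far});
    # the integer tie-breaker keeps dicts from ever being compared.
    heap = [(p, i, {sym: ''}) for i, (sym, p) in enumerate(prob.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)
        p2, _, codes2 = heapq.heappop(heap)
        # Merge the two least probable subtrees, prepending one code bit.
        merged = {s: '0' + c for s, c in codes1.items()}
        merged.update({s: '1' + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    _, _, codes = heap[0]
    avg_len = sum(prob[s] * len(c) for s, c in codes.items())
    entropy = -sum(p * math.log2(p) for p in prob.values())
    return entropy, avg_len

# Toy distribution for demonstration; in the experiment, char_prob from 2.2
# would be passed instead.
toy = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
H, L = huffman_vs_entropy(toy)
print(f"entropy = {H:.3f} bits, Huffman average length = {L:.3f} bits")

For this toy distribution the two values coincide at 1.75 bits because all probabilities are powers of 1/2; for real letter or character frequencies the Huffman average falls slightly above the entropy.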