CD-DNN-HMM Training Process

User contribution · 665 · 2022-05-30

We can train a CD-DNN-HMM with the embedded Viterbi algorithm; the main steps are summarized below.

A CD-DNN-HMM consists of three components: a deep neural network (dnn), a hidden Markov model (hmm), and a state prior probability distribution (prior). Because the CD-DNN-HMM system shares its phone-tying structure with a GMM-HMM system, the first step in training a CD-DNN-HMM is to train a GMM-HMM system on the training data. The DNN training labels are produced from this GMM-HMM system with the Viterbi algorithm, and label quality directly affects the DNN system's performance, so training a good GMM-HMM system as the initial model is very important.
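The paragraph above names the three components but does not spell out how they combine at decoding time. A common formulation (a minimal sketch with made-up numbers, not the author's code) converts the DNN's senone posterior into a scaled likelihood by dividing by the senone prior:

```python
import math

# The three CD-DNN-HMM parts, with toy values:
#   dnn   -> senone posteriors p(s|x) for one frame (assumed values)
#   prior -> senone priors p(s) estimated from training alignments
# (the hmm contributes transition probabilities, not shown here)
dnn_posterior = {0: 0.1, 1: 0.7, 2: 0.2}
prior = {0: 0.3, 1: 0.5, 2: 0.2}

def scaled_log_likelihood(s):
    # log p(x|s) = log p(s|x) - log p(s) + const; the constant log p(x)
    # is shared by all senones, so it can be dropped during decoding.
    return math.log(dnn_posterior[s]) - math.log(prior[s])

best = max(dnn_posterior, key=scaled_log_likelihood)
```

Here the division by the prior compensates for senone frequency in the training data, since the HMM expects likelihoods rather than posteriors.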

Once the GMM-HMM model hmm0 is trained, we can create a mapping from state names to senone IDs. Building this state-to-senone-ID mapping (stateToSenoneIDMap) is not trivial, because each logical triphone HMM is represented by one of a set of clustered physical triphone HMMs. In other words, several logical triphones may map to the same physical triphone, and each physical triphone has several (e.g., 3) tied states (senones).
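The mapping can be sketched as follows. This is a toy illustration, not Kaldi code: the triphone names, the logical-to-physical clustering table, and the per-state senone IDs are all made-up assumptions.

```python
# Hypothetical result of decision-tree clustering: logical -> physical.
# Two logical triphones fall into the same physical triphone.
logical_to_physical = {
    "a-b+c": "phys1",
    "x-b+c": "phys1",   # clustered together with a-b+c
    "a-b+d": "phys2",
}

# Hypothetical senone IDs for each physical triphone's 3 tied states.
physical_to_senones = {
    "phys1": [0, 1, 2],
    "phys2": [3, 4, 5],
}

# stateToSenoneIDMap: "logical_triphone.state_index" -> senone ID
stateToSenoneIDMap = {
    f"{logical}.{s}": physical_to_senones[physical][s]
    for logical, physical in logical_to_physical.items()
    for s in range(3)
}
```

Because `a-b+c` and `x-b+c` share the physical triphone `phys1`, state 1 of both logical triphones maps to the same senone ID.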

Using the trained GMM-HMM model hmm0, we can run the Viterbi algorithm on the training data to generate a state-level forced alignment. With stateToSenoneIDMap, the state names in the alignment are converted to senone IDs. We can then generate feature-to-senone-ID pairs (featureSenoneIDPairs) to train the DNN. The same featureSenoneIDPairs are also used to estimate the senone prior probabilities.
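The prior estimation step amounts to counting how often each senone ID appears among the aligned pairs. A minimal sketch, with made-up feature values and labels:

```python
from collections import Counter

# Toy feature/senone-ID pairs as produced by forced alignment
# (features shortened to 2 dims; labels are senone IDs; all values
# here are invented for illustration).
featureSenoneIDPairs = [
    ([0.1, 0.3], 0), ([0.2, 0.2], 0), ([0.5, 0.1], 1),
    ([0.4, 0.9], 1), ([0.6, 0.8], 1), ([0.7, 0.2], 2),
]

# Senone prior p(s): relative frequency of each senone ID in the
# alignment labels.
counts = Counter(label for _, label in featureSenoneIDPairs)
total = sum(counts.values())
prior = {s: c / total for s, c in counts.items()}
# priors here: 0 -> 2/6, 1 -> 3/6, 2 -> 1/6
```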

From hmm0 we can also generate a new hidden Markov model, hmm, with the same state transition probabilities as hmm0, for use in the DNN-HMM system. A simple approach is to replace each GMM in hmm0 (i.e., each senone's model) with a (fake) one-dimensional single Gaussian. The variance (or precision) of this Gaussian is irrelevant and can be set to any positive value (e.g., always 1), while its mean is set to the corresponding senone ID. With this trick, evaluating each senone's score becomes equivalent to a table lookup into the DNN output vector: the senone ID indexes the output entry (a log probability).
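The lookup trick can be sketched in a few lines. The DNN output values below are assumed for illustration; the point is that "evaluating" the fake Gaussian never computes a Gaussian density at all:

```python
import math

# DNN output for one frame: senone posteriors indexed by senone ID
# (made-up values).
dnn_posteriors = [0.1, 0.7, 0.2]

# Fake one-dimensional Gaussian per senone: the variance is irrelevant
# (set to 1); the mean stores the senone ID.
fake_gaussians = {s: {"mean": s, "var": 1.0} for s in range(3)}

def senone_log_score(senone_id, posteriors):
    """Evaluating the fake Gaussian reduces to a table lookup: its
    mean (= the senone ID) indexes the DNN output vector."""
    mean = fake_gaussians[senone_id]["mean"]
    return math.log(posteriors[mean])
```

Decoders built for GMM acoustic models can thus be reused unchanged: they "evaluate" these Gaussians, but the score actually comes from the DNN.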

[Figure: CD-DNN-HMM training process]

This process assumes that a CD-GMM-HMM exists and is used to generate the senone alignments; in that case the decision tree used to cluster the triphone states is also built during GMM-HMM training. This is not strictly necessary, however. If we want to remove the GMM-HMM steps in the figure entirely, we can segment each utterance uniformly (a so-called flat start) to build a single-Gaussian monophone model and use that segmentation as the training labels. This yields a monophone DNN-HMM, which we can then use to realign the utterances. A single Gaussian can then be estimated for each monophone, and a decision tree built in the conventional way. In fact, such GMM-free CD-DNN-HMMs can be trained successfully; see [GMM-free DNN training](http://bacchiani.net/resume/papers/ICASSP2014_3.pdf) for details.
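The flat-start segmentation mentioned above can be sketched as follows. The frame count and state names are made-up assumptions; the idea is simply to divide an utterance's frames evenly among the states of its transcript to get initial labels:

```python
def flat_start_alignment(num_frames, states):
    """Assign frames to states in equal-sized contiguous chunks,
    with no acoustic model involved."""
    per_state = num_frames / len(states)
    return [states[min(int(i / per_state), len(states) - 1)]
            for i in range(num_frames)]

# 9 frames, transcript expanded to 3 monophone HMM states.
alignment = flat_start_alignment(9, ["a1", "a2", "a3"])
# -> ['a1', 'a1', 'a1', 'a2', 'a2', 'a2', 'a3', 'a3', 'a3']
```

These uniform labels are only a starting point; realigning with the resulting monophone DNN-HMM quickly replaces them with better ones.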

Below is a CD-DNN-HMM training script (the Kaldi AISHELL recipe) that still includes the GMM stages:

```bash
#!/usr/bin/env bash

# Copyright 2017 Beijing Shell Shell Tech. Co. Ltd. (Authors: Hui Bu)
#           2017 Jiayu Du
#           2017 Xingyu Na
#           2017 Bengu Wu
#           2017 Hao Zheng
# Apache 2.0

# This is a shell script, but it's recommended that you run the commands one by
# one by copying and pasting into the shell.
# Caution: some of the graph creation steps use quite a bit of memory, so you
# should run this on a machine that has sufficient memory.

#data=/export/a05/xna/data
data=/home/data
data_url=www.openslr.org/resources/33

. ./cmd.sh

local/download_and_untar.sh $data $data_url data_aishell || exit 1;
local/download_and_untar.sh $data $data_url resource_aishell || exit 1;

# Lexicon Preparation,
local/aishell_prepare_dict.sh $data/resource_aishell || exit 1;

# Data Preparation,
local/aishell_data_prep.sh $data/data_aishell/wav $data/data_aishell/transcript || exit 1;

# Phone Sets, questions, L compilation
utils/prepare_lang.sh --position-dependent-phones false data/local/dict \
  "<SPOKEN_NOISE>" data/local/lang data/lang || exit 1;

# LM training
local/aishell_train_lms.sh || exit 1;

# G compilation, check LG composition
utils/format_lm.sh data/lang data/local/lm/3gram-mincount/lm_unpruned.gz \
  data/local/dict/lexicon.txt data/lang_test || exit 1;

# Now make MFCC plus pitch features.
# mfccdir should be some place with a largish disk where you
# want to store MFCC features.
mfccdir=mfcc
for x in train dev test; do
  steps/make_mfcc_pitch.sh --cmd "$train_cmd" --nj 10 data/$x exp/make_mfcc/$x $mfccdir || exit 1;
  steps/compute_cmvn_stats.sh data/$x exp/make_mfcc/$x $mfccdir || exit 1;
  utils/fix_data_dir.sh data/$x || exit 1;
done

steps/train_mono.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/mono || exit 1;

# Monophone decoding
utils/mkgraph.sh data/lang_test exp/mono exp/mono/graph || exit 1;
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/mono/graph data/dev exp/mono/decode_dev
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/mono/graph data/test exp/mono/decode_test

# Get alignments from monophone system.
steps/align_si.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/mono exp/mono_ali || exit 1;

# train tri1 [first triphone pass]
steps/train_deltas.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/mono_ali exp/tri1 || exit 1;

# decode tri1
utils/mkgraph.sh data/lang_test exp/tri1 exp/tri1/graph || exit 1;
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri1/graph data/dev exp/tri1/decode_dev
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri1/graph data/test exp/tri1/decode_test

# align tri1
steps/align_si.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri1 exp/tri1_ali || exit 1;

# train tri2 [delta+delta-deltas]
steps/train_deltas.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/tri1_ali exp/tri2 || exit 1;

# decode tri2
utils/mkgraph.sh data/lang_test exp/tri2 exp/tri2/graph
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri2/graph data/dev exp/tri2/decode_dev
steps/decode.sh --cmd "$decode_cmd" --config conf/decode.config --nj 10 \
  exp/tri2/graph data/test exp/tri2/decode_test

# train and decode tri2b [LDA+MLLT]
steps/align_si.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri2 exp/tri2_ali || exit 1;

# Train tri3a, which is LDA+MLLT,
steps/train_lda_mllt.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/tri2_ali exp/tri3a || exit 1;

utils/mkgraph.sh data/lang_test exp/tri3a exp/tri3a/graph || exit 1;
steps/decode.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri3a/graph data/dev exp/tri3a/decode_dev
steps/decode.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri3a/graph data/test exp/tri3a/decode_test

# From now, we start building a more serious system (with SAT), and we'll
# do the alignment with fMLLR.
steps/align_fmllr.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri3a exp/tri3a_ali || exit 1;

steps/train_sat.sh --cmd "$train_cmd" \
  2500 20000 data/train data/lang exp/tri3a_ali exp/tri4a || exit 1;

utils/mkgraph.sh data/lang_test exp/tri4a exp/tri4a/graph
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri4a/graph data/dev exp/tri4a/decode_dev
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri4a/graph data/test exp/tri4a/decode_test

steps/align_fmllr.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri4a exp/tri4a_ali

# Building a larger SAT system.
steps/train_sat.sh --cmd "$train_cmd" \
  3500 100000 data/train data/lang exp/tri4a_ali exp/tri5a || exit 1;

utils/mkgraph.sh data/lang_test exp/tri5a exp/tri5a/graph || exit 1;
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri5a/graph data/dev exp/tri5a/decode_dev || exit 1;
steps/decode_fmllr.sh --cmd "$decode_cmd" --nj 10 --config conf/decode.config \
  exp/tri5a/graph data/test exp/tri5a/decode_test || exit 1;

steps/align_fmllr.sh --cmd "$train_cmd" --nj 10 \
  data/train data/lang exp/tri5a exp/tri5a_ali || exit 1;

# nnet3
local/nnet3/run_tdnn.sh

# chain
local/chain/run_tdnn.sh

# getting results (see RESULTS file)
for x in exp/*/decode_test; do [ -d $x ] && grep WER $x/cer_* | utils/best_wer.sh; done 2>/dev/null
for x in exp/*/*/decode_test; do [ -d $x ] && grep WER $x/cer_* | utils/best_wer.sh; done 2>/dev/null

exit 0;
```
