Facebook Releases wav2letter, a Toolkit for End-to-End Automatic Speech Recognition

   <p><a href="/misc/goto?guid=4958992220058155744" title="非死book"><img alt="非死book发布wav2letter工具包,用于端到端自动语音识别" src="https://simg.open-open.com/show/7335b414f3b8f2f6a943cf004cf9ef11.gif" /></a></p>    <p>日前, 非死book 人工智能研究院发布 wav2letter 工具包,它是一个简单高效的端到端自动语音识别(ASR)系统,实现了 <a href="/misc/goto?guid=4959012148542671108" rel="nofollow">Wav2Letter: an End-to-End ConvNet-based Speech Recognition System</a> 和 <a href="/misc/goto?guid=4959012148654386520" rel="nofollow">Letter-Based Speech Recognition with Gated ConvNets</a> 这两篇论文中提出的架构。如果大家想现在就开始使用这个工具进行语音识别,非死book 提供 Librispeech 数据集的预训练模型。</p>    <p>以下为对系统的要求,以及这一工具的安装教程,雷锋网(公众号:雷锋网) AI 科技评论整理如下:</p>    <p><strong>安装要求:</strong></p>    <p>系统:MacOS 或 Linux</p>    <p>Torch:接下来会介绍安装教程</p>    <p>在 CPU 上训练:Intel MKL</p>    <p>在 GPU 上训练:英伟达 CUDA 工具包 (cuDNN v5.1 for CUDA 8.0)</p>    <p>音频文件读取:Libsndfile</p>    <p>标准语音特征:FFTW</p>    <p><strong>安装:</strong></p>    <p><strong>MKL</strong></p>    <p>如果想在 CPU 上进行训练,强烈建议安装 Intel MKL</p>    <p>执行如下代码更新 .bashrc file </p>    <blockquote>     <p># We assume Torch will be installed in $HOME/usr.</p>     <p># Change according to your needs.</p>     <p>export PATH=$HOME/usr/bin:$PATH</p>     <p># This is to detect MKL during compilation</p>     <p># but also to make sure it is found at runtime.</p>     <p>INTEL_DIR=/opt/intel/lib/intel64</p>     <p>MKL_DIR=/opt/intel/mkl/lib/intel64</p>     <p>MKL_INC_DIR=/opt/intel/mkl/include</p>     <p>if [ ! -d "$INTEL_DIR" ]; then</p>     <p>echo "$ warning: INTEL_DIR out of date"</p>     <p>fi</p>     <p>if [ ! -d "$MKL_DIR" ]; then</p>     <p>echo "$ warning: MKL_DIR out of date"</p>     <p>fi</p>     <p>if [ ! -d "$MKL_INC_DIR" ]; then</p>     <p>echo "$ warning: MKL_INC_DIR out of date"</p>     <p>fi</p>     <p># Make sure MKL can be found by Torch.</p>     <p>export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$INTEL_DIR:$MKL_DIR</p>     <p>export CMAKE_LIBRARY_PATH=$LD_LIBRARY_PATH</p>     <p>export CMAKE_INCLUDE_PATH=$CMAKE_INCLUDE_PATH:$MKL_INC_DIR</p>    </blockquote>    <p><strong>LuaJIT 和 LuaRocks</strong></p>    <p>执行如下代码可以在 $HOME/usr 下安装 LuaJIT 和 LuaRocks,如果你想要进行系统级安装,删掉代码中的 -DCMAKE_INSTALL_PREFIX=$HOME/usr 即可。</p>    <blockquote>     <p>git clone https://github.com/torch/luajit-rocks.git</p>     <p>cd luajit-rocks</p>     <p>mkdir build; cd build</p>     <p>cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/usr -DWITH_LUAJIT21=OFF</p>     <p>make -j 4</p>     <p>make install</p>     <p>cd ../..</p>    </blockquote>    <p>接下来,我们假定 luarocks 和 luajit 被安装在 $PATH 下,如果你把它们安装在 $HOME/usr 下了,可以执行 ~/usr/bin/luarocks 和 ~/usr/bin/luajit 这两段代码。</p>    <p><a href="/misc/goto?guid=4959012148760375651" rel="nofollow"><strong>KenLM 语言模型工具包</strong></a></p>    <p>如果你想采用 wav2letter decoder,需要安装 KenLM。</p>    <p>这里需要用到 <a href="/misc/goto?guid=4958197884341747362" rel="nofollow">Boost</a>:</p>    <blockquote>     <p># make sure boost is installed (with system/thread/test modules)</p>     <p># actual command might vary depending on your system</p>     <p>sudo apt-get install libboost-dev libboost-system-dev libboost-thread-dev libboost-test-dev</p>    </blockquote>    <p>Boost 安装之后就可以安装 KenLM 了:</p>    <blockquote>     <p>wget https://kheafield.com/code/kenlm.tar.gz</p>     <p>tar xfvz kenlm.tar.gzcd kenlm</p>     <p>mkdir build && cd build</p>     <p>cmake .. 
**KenLM language model toolkit**

If you want to use the wav2letter decoder, you need to install KenLM.

KenLM requires Boost:

```
# make sure boost is installed (with system/thread/test modules)
# actual command might vary depending on your system
sudo apt-get install libboost-dev libboost-system-dev libboost-thread-dev libboost-test-dev
```

Once Boost is installed, you can install KenLM:

```
wget https://kheafield.com/code/kenlm.tar.gz
tar xfvz kenlm.tar.gz
cd kenlm
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/usr -DCMAKE_POSITION_INDEPENDENT_CODE=ON
make -j 4
make install
cp -a lib/* ~/usr/lib # libs are not installed by default :(
cd ../..
```

**OpenMPI and TorchMPI**

If you plan to use multiple CPUs/GPUs (or multiple machines), you need to install OpenMPI and TorchMPI.

Disclaimer: we strongly encourage you to recompile OpenMPI yourself. OpenMPI binaries from standard distributions are built with inconsistent compilation flags, and getting these flags right is crucial for building and running TorchMPI successfully.

First install OpenMPI:

```
wget https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.2.tar.bz2
tar xfj openmpi-2.1.2.tar.bz2
cd openmpi-2.1.2; mkdir build; cd build
./configure --prefix=$HOME/usr --enable-mpi-cxx --enable-shared --with-slurm --enable-mpi-thread-multiple --enable-mpi-ext=affinity,cuda --with-cuda=/public/apps/cuda/9.0
make -j 20 all
make install
```

Note: openmpi-3.0.0.tar.bz2 also works, but you then need to drop --enable-mpi-thread-multiple.

Now you can install TorchMPI:

```
MPI_CXX_COMPILER=$HOME/usr/bin/mpicxx ~/usr/bin/luarocks install torchmpi
```

**Torch and other Torch packages**

```
luarocks install torch
luarocks install cudnn # for GPU support
luarocks install cunn  # for GPU support
```

**The wav2letter packages**

```
git clone https://github.com/facebookresearch/wav2letter.git
cd wav2letter
cd gtn && luarocks make rocks/gtn-scm-1.rockspec && cd ..
cd speech && luarocks make rocks/speech-scm-1.rockspec && cd ..
cd torchnet-optim && luarocks make rocks/torchnet-optim-scm-1.rockspec && cd ..
cd wav2letter && luarocks make rocks/wav2letter-scm-1.rockspec && cd ..
# Assuming here you got KenLM in $HOME/kenlm
# And only if you plan to use the decoder:
cd beamer && KENLM_INC=$HOME/kenlm luarocks make rocks/beamer-scm-1.rockspec && cd ..
```

**Training a wav2letter model**

**Data preprocessing**

The data folder contains scripts for preprocessing several datasets; for now, only scripts for LibriSpeech and TIMIT are provided.

Here is how to preprocess the LibriSpeech ASR corpus (the "repeat" comment is expanded into a loop in the sketch after this block):

```
wget http://www.openslr.org/resources/12/dev-clean.tar.gz
tar xfvz dev-clean.tar.gz
# repeat for train-clean-100, train-clean-360, train-other-500, dev-other, test-clean, test-other
luajit ~/wav2letter/data/librispeech/create.lua ~/LibriSpeech ~/librispeech-proc
luajit ~/wav2letter/data/utils/create-sz.lua librispeech-proc/train-clean-100 librispeech-proc/train-clean-360 librispeech-proc/train-other-500 librispeech-proc/dev-clean librispeech-proc/dev-other librispeech-proc/test-clean librispeech-proc/test-other
```
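The "repeat for ..." step above can be written as a loop. A minimal sketch, using only the subset names listed in the comment (note that the train-* archives are large downloads):

```
# Fetch and unpack every LibriSpeech subset used by the preprocessing step
for subset in dev-clean dev-other test-clean test-other \
              train-clean-100 train-clean-360 train-other-500; do
    wget http://www.openslr.org/resources/12/$subset.tar.gz
    tar xfvz $subset.tar.gz
done
```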
**Training**

```
mkdir experiments
luajit ~/wav2letter/train.lua --train -rundir ~/experiments -runname hello_librispeech -arch ~/wav2letter/arch/librispeech-glu-highdropout -lr 0.1 -lrcrit 0.0005 -gpu 1 -linseg 1 -linlr 0 -linlrcrit 0.005 -onorm target -nthread 6 -dictdir ~/librispeech-proc -datadir ~/librispeech-proc -train train-clean-100+train-clean-360+train-other-500 -valid dev-clean+dev-other -test test-clean+test-other -gpu 1 -sqnorm -mfsc -melfloor 1 -surround "" -replabel 2 -progress -wnorm -normclamp 0.2 -momentum 0.9 -weightdecay 1e-05
```

**Multi-GPU training**

Using OpenMPI:

```
mpirun -n 2 --bind-to none ~/TorchMPI/scripts/wrap.sh luajit ~/wav2letter/train.lua --train -mpi -gpu 1 ...
```

**Running the decoder (inference)**

Running the decoder requires a small amount of preprocessing.

First, create a letters dictionary that includes the special repetition letters used in wav2letter:

```
cat ~/librispeech-proc/letters.lst >> ~/librispeech-proc/letters-rep.lst && echo "1" >> ~/librispeech-proc/letters-rep.lst && echo "2" >> ~/librispeech-proc/letters-rep.lst
```

Next, fetch a language model and preprocess it. Here we use the pretrained LibriSpeech language model, but you can also train your own with KenLM. The preprocessing script may warn about mistranscribed words; this is not a big problem, as those words are rare.

```
wget http://www.openslr.org/resources/11/3-gram.pruned.3e-7.arpa.gz
luajit ~/wav2letter/data/utils/convert-arpa.lua ~/3-gram.pruned.3e-7.arpa.gz ~/3-gram.pruned.3e-7.arpa ~/dict.lst -preprocess ~/wav2letter/data/librispeech/preprocess.lua -r 2 -letters letters-rep.lst
```

Optional: use KenLM to convert the model to binary format, which makes it faster to load:

```
build_binary 3-gram.pruned.3e-7.arpa 3-gram.pruned.3e-7.bin
```

Now run test.lua to generate the emissions. The script below also reports the letter error rate (LER) and word error rate (WER).

```
luajit ~/wav2letter/test.lua ~/experiments/hello_librispeech/001_model_dev-clean.bin -progress -show -test dev-clean -save
```

Once the emissions are saved, run the decoder to compute the WER:

```
luajit ~/wav2letter/decode.lua ~/experiments/hello_librispeech dev-clean -show -letters ~/librispeech-proc/letters-rep.lst -words ~/dict.lst -lm ~/3-gram.pruned.3e-7.arpa -lmweight 3.1639 -beamsize 25000 -beamscore 40 -nthread 10 -smearing max -show
```

**Pretrained models:**

A fully trained LibriSpeech model is provided:

```
wget https://s3.amazonaws.com/wav2letter/models/librispeech-glu-highdropout.bin
```

Note: this model was trained within Facebook's infrastructure, so test.lua needs to be run with slightly different parameters:

```
luajit ~/wav2letter/test.lua ~/librispeech-glu-highdropout.bin -progress -show -test dev-clean -save -datadir ~/librispeech-proc/ -dictdir ~/librispeech-proc/ -gfsai
```

**Join the wav2letter community**

Facebook: https://www.facebook.com/groups/717232008481207/

Google group: https://groups.google.com/forum/#!forum/wav2letter-users

Source: Leiphone (雷锋网)