<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://lst.cls.ru.nl/clst-asr/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="https://lst.cls.ru.nl/clst-asr/feed.php">
        <title>CLST-ASR</title>
        <description></description>
        <link>https://lst.cls.ru.nl/clst-asr/</link>
        <image rdf:resource="https://lst.cls.ru.nl/clst-asr/lib/tpl/dokuwiki/images/favicon.ico" />
        <dc:date>2026-05-08T00:17:07+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=alex_asr&amp;rev=1491310536&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=distributed_computation&amp;rev=1430373325&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=fa-evaluation&amp;rev=1543316449&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=forced-aligner&amp;rev=1561014655&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=kaldi_asr_toolkit&amp;rev=1663963592&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=kaldi_on_ponyland&amp;rev=1620891506&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=language_model_settings&amp;rev=1430381626&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=language_modeling&amp;rev=1427812803&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=preprocessing&amp;rev=1429103973&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=quick_quality_check&amp;rev=1521729623&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=solved-bugs&amp;rev=1424766856&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=spraak_asr_toolkit&amp;rev=1442573032&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=start&amp;rev=1714028653&amp;do=diff"/>
                <rdf:li rdf:resource="https://lst.cls.ru.nl/clst-asr/doku.php?id=websockets&amp;rev=1620381064&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="https://lst.cls.ru.nl/clst-asr/lib/tpl/dokuwiki/images/favicon.ico">
        <title>CLST-ASR</title>
        <link>https://lst.cls.ru.nl/clst-asr/</link>
        <url>https://lst.cls.ru.nl/clst-asr/lib/tpl/dokuwiki/images/favicon.ico</url>
    </image>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=alex_asr&amp;rev=1491310536&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2017-04-04T14:55:36+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>alex_asr</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=alex_asr&amp;rev=1491310536&amp;do=diff</link>
        <description>Alex ASR

Url: &lt;https://github.com/UFAL-DSG/alex-asr&gt;
The Alex ASR software package is an incremental speech decoder built on the Kaldi ASR toolkit. It can be used with various types of GMM-HMM acoustic models as well as nnet2 models.

After cloning its Git repository, you can compile it as described on the GitHub page, or, if you&#039;d like to compile it with NVIDIA CUDA GPU support (to also support acoustic models trained using GPUs), do the following:</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=distributed_computation&amp;rev=1430373325&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-04-30T07:55:25+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>distributed_computation</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=distributed_computation&amp;rev=1430373325&amp;do=diff</link>
        <description>SPRAAK Distributed computing

SPRAAK distributes the computation for training and evaluation in several ways. See the corresponding sections below.

Training acoustic models

The computation required for training acoustic models can be distributed across multiple hosts. Please refer to SPRAAK&#039;s own page on how to do that:</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=fa-evaluation&amp;rev=1543316449&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2018-11-27T12:00:49+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>fa-evaluation</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=fa-evaluation&amp;rev=1543316449&amp;do=diff</link>
        <description>Evaluation of Forced aligners

This page contains information on forced alignment tools available at CLST and from third parties. They are compared and evaluated. This is not a formal evaluation, but rather a record of experiences and intuitions gained while working with them.</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=forced-aligner&amp;rev=1561014655&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2019-06-20T09:10:55+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>forced-aligner</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=forced-aligner&amp;rev=1561014655&amp;do=diff</link>
        <description>CLST ASR Forced Aligner

Authors: Linde Kuijpers (student assistant), Mario Ganzeboom (PhD 
student, m.ganzeboom@let.ru.nl), Xing Wei (PhD, X.Wei@let.ru.nl) 

Last changes in code: 14-04-2019 

Last changes in readme: 19-06-2019 

Current location: /vol/tensusers/xwei/clst-asr_forced-aligner/</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=kaldi_asr_toolkit&amp;rev=1663963592&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2022-09-23T22:06:32+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>kaldi_asr_toolkit</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=kaldi_asr_toolkit&amp;rev=1663963592&amp;do=diff</link>
        <description>Kaldi ASR Toolkit

Under this topic you can find information about the Kaldi ASR Toolkit, such as URLs and paths where to find it. Kaldi is a more recent ASR toolkit than SPRAAK. Like SPRAAK, it contains functionality to train various types of GMM-HMM acoustic models, but also various types of Deep Neural Networks (DNNs), the current standard in ASR. This page provides links to Kaldi&#039;s own documentation pages and tips &amp; tricks on how to use Kaldi in certain contexts.
Feel free to add expe…</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=kaldi_on_ponyland&amp;rev=1620891506&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-05-13T09:38:26+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>kaldi_on_ponyland</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=kaldi_on_ponyland&amp;rev=1620891506&amp;do=diff</link>
        <description>Kaldi (shared LaMachine) on Ponyland

Update May 2021: (Not recommended; see this link to configure your own LaMachine with Kaldi).

The Kaldi installation is maintained as part of LaMachine. In LaMachine2 it is located at /vol/customopt/lamachine.stable/opt/kaldi. 

Activation of the LaMachine environment (using</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=language_model_settings&amp;rev=1430381626&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-04-30T10:13:46+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>language_model_settings</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=language_model_settings&amp;rev=1430381626&amp;do=diff</link>
        <description>Language model settings

In SPRAAK you can use Finite State Grammars (FSGs) and N-gram language models. Each has a different set of settings when used in the recognition process. Below is an explanation of several settings as I used them:

The parameters can be set with the following command on the interactive SPRAAK command line (spr_cwr_main):</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=language_modeling&amp;rev=1427812803&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-03-31T16:40:03+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>language_modeling</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=language_modeling&amp;rev=1427812803&amp;do=diff</link>
        <description>Language modeling (for ASR)

This section provides links, tips, tricks and tutorials about language modeling for ASR. These are primarily relevant to the projects within the PI group LS-LT, but they could prove useful in other contexts too.

SRI LM Toolkit</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=preprocessing&amp;rev=1429103973&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-04-15T15:19:33+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>preprocessing</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=preprocessing&amp;rev=1429103973&amp;do=diff</link>
        <description>Preprocessing

SPRAAK comes with a range of filters and other signal-processing algorithms that can be used to preprocess the audio before sending it to a recogniser. These algorithms are used, for example, to extract the standard MFCC features from the audio on which recognition takes place.
See</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=quick_quality_check&amp;rev=1521729623&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2018-03-22T15:40:23+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>quick_quality_check</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=quick_quality_check&amp;rev=1521729623&amp;do=diff</link>
        <description>Quick Quality Check

For assessing the quality of a speech tool or speech corpus, you can use the SPEX Quick Quality Check (QQC):</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=solved-bugs&amp;rev=1424766856&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-02-24T09:34:16+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>solved-bugs</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=solved-bugs&amp;rev=1424766856&amp;do=diff</link>
        <description>Solved bugs

This page describes the bugs that were found and solved in the CLST-ASR framework from January 2015 onwards. It is set up to help solve possible future bugs and is meant as an archive.

DigLin

DigLin stands for DIGital Literacy INstructor and is a project researching CALL systems for low-literate persons: systems that facilitate their learning of a second language. The CLST-ASR framework was initially developed in this project and is used to provide feedback on the user&#039;s pr…</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=spraak_asr_toolkit&amp;rev=1442573032&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2015-09-18T12:43:52+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>spraak_asr_toolkit</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=spraak_asr_toolkit&amp;rev=1442573032&amp;do=diff</link>
        <description>SPRAAK ASR Toolkit

Under this topic you can find information about the SPRAAK ASR Toolkit, such as URLs and paths where to find it. This page provides links to SPRAAK&#039;s own documentation pages and tips &amp; tricks on how to use SPRAAK in certain contexts.
Feel free to add experiences which you feel are useful to others (i.e. to not &#039;reinvent the wheel&#039;).</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=start&amp;rev=1714028653&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-04-25T09:04:13+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>start</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=start&amp;rev=1714028653&amp;do=diff</link>
        <description>On this website, language and speech researchers of the Centre for Language Studies share algorithms, tips and tutorials related to their research. Read more about their research projects on the Radboud University website.

&lt;https://www.ru.nl/&gt;</description>
    </item>
    <item rdf:about="https://lst.cls.ru.nl/clst-asr/doku.php?id=websockets&amp;rev=1620381064&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2021-05-07T11:51:04+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>websockets</title>
        <link>https://lst.cls.ru.nl/clst-asr/doku.php?id=websockets&amp;rev=1620381064&amp;do=diff</link>
        <description>Use of WebSockets in CLST-ASR

HTML5 WebSockets are one of the ways used to communicate between the client and the ASR server; they are used to send and receive audio data for recognition and playback.
See the following URLs for information on the WebSocket specification:</description>
    </item>
</rdf:RDF>
