PERCEPT-R: An Open-Access American English Child/Clinical Speech Corpus Specialized for the Audio Classification of /ɹ/

Nina R. Benway, Jonathan L. Preston, Elaine Hitchcock, Asif Salekin, Harshit Sharma, Tara McAllister

Research output: Contribution to journal › Conference article › peer-review


Abstract

We present the PERCEPT-R corpus, a labeled corpus of child speakers of American English with typical speech and with residual speech sound disorders affecting rhotics. We demonstrate the utility of age- and gender-normalized formants extracted from PERCEPT-R by training support vector classifiers to predict ground-truth perceptual judgments of "rhotic" (i.e., dialect-typical) and clinically "derhotic" /ɹ/ for novel speakers (mean of participant-specific f-metrics = .83; SD = .18; N = 281).
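
The abstract describes a pipeline of speaker-normalized formant features feeding a support vector classifier that is evaluated on held-out (novel) speakers. The following is a minimal sketch of that idea, not the authors' released code: the feature file path, column names, normalization scheme (z-scoring within age/gender cells), and RBF kernel are all assumptions made for illustration, using scikit-learn and pandas.

```python
# Sketch: age/gender-normalized formants -> SVM -> per-speaker F1 on novel speakers.
# Column names and file path are hypothetical, not from the PERCEPT-R release.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical feature table: speaker, age, gender, f1/f2/f3 (Hz),
# label (1 = rhotic, 0 = derhotic).
df = pd.read_csv("percept_r_features.csv")  # placeholder path

# Normalize each formant within its age/gender cell (one plausible reading
# of "age- and gender-normalized formants").
formant_cols = ["f1", "f2", "f3"]
df[formant_cols] = df.groupby(["age", "gender"])[formant_cols].transform(
    lambda x: (x - x.mean()) / x.std()
)

X = df[formant_cols].to_numpy()
y = df["label"].to_numpy()
speakers = df["speaker"].to_numpy()

# Leave one speaker out at a time so every evaluation is on a novel speaker.
per_speaker_f1 = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speakers):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    per_speaker_f1.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean per-speaker F1 = {np.mean(per_speaker_f1):.2f} "
      f"(SD = {np.std(per_speaker_f1):.2f})")
```

Leave-one-speaker-out evaluation is used here only to mirror the "novel speakers" framing of the abstract; the corpus paper itself should be consulted for the actual partitioning and feature set.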

Original language: English
Pages (from-to): 3648-3652
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
DOIs
State: Published - 2022
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: 18 Sep 2022 – 22 Sep 2022

Keywords

  • /ɹ/
  • child speech
  • clinical speech
  • mispronunciation detection
  • open access dataset
