Human speech communicates not only linguistic information but also paralinguistic features, e.g., information about the identity and the arousal state of the sender. Comparable morphological and physiological constraints on vocal production in mammals suggest the existence of commonalities in encoding sender-identity and arousal across mammals. To explore this hypothesis, and to investigate whether specific acoustic parameters encode sender-identity while others encode arousal, we studied infants of the domestic cat (Felis silvestris catus). Kittens are an excellent model for analysing vocal correlates of sender-identity and arousal: they strongly depend on maternal care, so the acoustic conveyance of sender-identity and arousal may be important for their survival.
We recorded calls of 18 kittens in an experimentally induced separation paradigm, in which kittens were spatially separated from their mother and siblings. In the Low arousal condition, infants were merely separated, without any further manipulation; in the High arousal condition, infants were additionally handled by the experimenter. Multi-parametric sound analyses revealed that kitten isolation calls are individually distinct and differ between the Low and High arousal conditions. Our results suggest that source- and filter-related parameters are important for encoding sender-identity, whereas time-, source- and tonality-related parameters are important for encoding arousal.
Comparable findings in other mammalian lineages provide evidence for commonalities in non-verbal cues encoding sender-identity and arousal across mammals, comparable to paralinguistic cues in humans. This supports the establishment of general concepts for voice recognition and emotions in humans and animals.
Human speech and non-linguistic vocalisations convey paralinguistic cues encoding the physical characteristics of a speaker, termed here indexical cues (e.g., sex, age, body size, sender-identity), and the emotional state of a sender, termed here prosodic cues (e.g., emotional valence, arousal) (e.g.,[1–3]). Whereas the linguistic aspects of human speech are unique to humans, non-verbal cues comparable to paralinguistic cues have also been found in the vocalisations of animals of at least 11 mammalian orders (for indexical cues e.g., humans:[3, 4], non-human primates:[5, 6], Scandentia:; Artiodactyla:[8, 9]; Perissodactyla:[10, 11]; Carnivora:[12, 13]; Cetacea:; Chiroptera:[15, 16]; Rodentia:[17, 18]; Proboscidea:[19, 20]; Sirenia:; Hyracoidea:; for prosodic cues see reviews[23–25]). This suggests a pre-human origin of paralinguistic cues, due to homologies in the central nervous system and the mammalian vocal production system.
In mammals, vocal production is based on a highly conserved system. According to the source-filter theory of vocal production, the respiratory airstream from the lungs passes the larynx (= source), containing the vocal folds, followed by the supra-laryngeal vocal tract (= filter;[2, 26, 27]). Indexical cues are suggested to be related to the length, density and tension of the vocal folds (affecting the fundamental frequency of the sound signal) and to the length and shape of the supra-laryngeal vocal tract (affecting the formant pattern). Affect-induced physiological changes are suggested to be related to changes in the respiratory airstream (affecting amplitude, tempo and fundamental frequency[28, 30]), changes in the tonus of the laryngeal muscles controlling the tension of the vocal folds (causing disruptions and changes of the fundamental frequency[28, 30, 31]) and changes in the shape, length and filter properties of the supra-laryngeal vocal tract (affecting formant frequencies[1, 30]).
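To illustrate how the filter side of this account shapes the formant pattern, the classical uniform-tube idealisation (a textbook approximation assumed here for illustration, not a model of the cat vocal tract) predicts resonance (formant) frequencies of

```latex
F_n \approx \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, 3, \ldots
```

where c is the speed of sound in the vocal tract and L the vocal tract length: shortening the tract raises all formant frequencies, whereas changes at the source (vocal fold length, density and tension) act on the fundamental frequency instead.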
Studies in human and non-human mammals demonstrated that source- and/or filter-related acoustic parameters are important for encoding sender-identity (e.g.,[4, 6]), whereas time-, source- and tonality-related variations are associated with the arousal of the sender (e.g.,[23, 24, 32–34]). Furthermore, non-linear phenomena (NLP), irregular vibrations of the vocal folds (e.g., subharmonics, biphonation, frequency jumps), have become a focal point of acoustic research describing highly complex vocalisations (e.g.,[35–39]) and are common in human and non-human animals[35–37, 39–42]. However, their function is not yet clear[36, 39, 43]. On the one hand, it is argued that NLP could be important for individual recognition (e.g.,[36, 37, 39, 42]); on the other hand, that NLP convey information about the emotional state of the sender (e.g.,[37, 39]).
To explore the impact of certain acoustic parameters on encoding sender-identity and arousal in non-human mammals, it is important to study both aspects in the same individuals, using the same set of acoustic parameters and the same behavioural contexts. To date, only a few studies have investigated both aspects in the same individuals and behavioural contexts (bats:[44, 45]; primates:[30, 46]; elephants:; dogs:; tree shrews:), and to our knowledge only three studies are available for mammalian infants (elephants:; bats:; cattle:). To explore the role of, and potential commonalities among, acoustic parameters or sets of acoustic parameters encoding prosodic and indexical cues in mammalian infant vocalisations, further studies on infants of various mammalian taxa are needed.
In this study, we explored vocal cues encoding sender-identity (indexical cues) and arousal (prosodic cues) by investigating infant isolation calls of domestic cats. Cats are an important animal model in human hearing research due to similarities between their auditory system and that of humans (e.g.,[49, 50]). Adult females usually live communally in small social groups, whereas males live solitarily. Domestic cats are an altricial species, kittens being born blind and with their ears closed. During the first three weeks after birth, the visual and auditory skills of kittens as well as their locomotor and thermoregulatory abilities are limited[52–54], and kittens are completely dependent on their mother. Cats have an elaborate vocal repertoire[55–59]. Thus, infant vocalisations may play an important role for survival, signalling the kittens' emotional state and needs. Females give birth to one to ten infants per litter. Litters from different females may be reared in the same nest and thus may become mixed, which could make kin signatures essential for offspring recognition and offspring-directed maternal care. Previous studies have already shown that kittens produce isolation calls when isolated from their mother[55, 57–60], which evoke maternal behaviour. Context- and age-specific variations in the acoustic structure of kitten isolation calls have already been described, but only for a few acoustic parameters[58, 60], whereas to our knowledge no data on acoustically conveyed individual signatures in kitten isolation calls have been published.
The aim of this study was to investigate the following two hypotheses: (1) sender-identity is encoded in the acoustic structure of kitten isolation calls; (2) arousal is encoded in the acoustic structure of kitten isolation calls, and non-linear phenomena occur more often in High arousal than in Low arousal situations. Based on these results, we aimed to investigate which acoustic parameters or sets of acoustic parameters are important for encoding sender-identity and which for encoding arousal. Vocal correlates of arousal in non-human animals can be investigated at the behavioural level by measuring different levels of situational urgency within the same behavioural context and linking them to the corresponding vocal expression. Thus, we separated the kittens from their mother and siblings and exposed them to two sub-contexts assumed to vary in their level of arousal (Low arousal versus High arousal condition). To investigate our hypotheses, multi-parametric sound analyses were performed, measuring 3 time-, 4 source-, 12 filter- and 3 tonality-related parameters (Table 1). We will report that a set of source- and filter-related acoustic parameters is important for encoding sender-identity, whereas a set of time-, source- and tonality-related acoustic parameters is important for encoding arousal. By comparing our findings with data on other mammals, we will explore to what extent our results support the hypothesis of universal acoustic coding rules expressing indexical and prosodic cues in mammals, due to similar physiological and anatomical constraints in the peripheral vocal production system.
Description of measured acoustic parameters

Time-related parameters
Call duration [ms]: Time between the onset and the offset of a call.
Intercall interval (ICI) [ms]: Time between the offset of a call and the onset of the successive call.
Peak time [ms]: Time between the onset and the maximum amplitude of a call.

Source-related spectral parameters
MeanF0 [Hz]: Mean fundamental frequency of a call.
MinF0 [Hz]: Minimum fundamental frequency of a call.
MaxF0 [Hz]: Maximum fundamental frequency of a call.
SDF0 [Hz]: Standard deviation of the fundamental frequency of a call.

Filter-related spectral parameters
Peak [Hz]: Frequency with maximum energy over a call.
MeanF1 [Hz]: Mean frequency of the first formant of a call.
SDF1 [Hz]: Standard deviation of the first formant frequency of a call.
BWF1 [Hz]: Bandwidth of the first formant frequency of a call.
MeanF2 [Hz]: Mean frequency of the second formant of a call.
SDF2 [Hz]: Standard deviation of the second formant frequency of a call.
BWF2 [Hz]: Bandwidth of the second formant frequency of a call.
MeanF3 [Hz]: Mean frequency of the third formant of a call.
SDF3 [Hz]: Standard deviation of the third formant frequency of a call.
BWF3 [Hz]: Bandwidth of the third formant frequency of a call.
F2-F1 [Hz]: Difference between the mean of the second and the first formant frequency.
Consistency: Mean maximum correlation of power spectra of successive 25 ms time steps of a call.

Tonality-related parameters
Cepstral peak [V]: Value of the peak at the fundamental period of a cepstrum for the middle 10 ms of the call.
Voiced frames [%]: Percentage of voiced frames of a call.
MaxHNR [dB]: Maximum harmonic-to-noise ratio of a call.
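As an illustration of how source-related measures of this kind can be obtained, the following sketch estimates the fundamental frequency of a single analysis frame from its autocorrelation peak (a generic textbook method, not the analysis software used in this study; the function name and the 300–2000 Hz search range are assumptions):

```python
import numpy as np

def estimate_f0(frame, sr, fmin=300.0, fmax=2000.0):
    """Estimate the fundamental frequency (Hz) of one analysis frame
    from the autocorrelation peak within [fmin, fmax]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)          # shortest candidate period (samples)
    hi = int(sr / fmin)          # longest candidate period (samples)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return sr / lag

# Check on a synthetic 600 Hz tone (50 ms at 44.1 kHz, matching the
# recording rate reported in the Methods).
sr = 44100
t = np.arange(int(0.05 * sr)) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 600.0 * t), sr)
```

MeanF0, MinF0 and MaxF0 would then follow from applying such an estimator to the successive voiced frames of a call.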
We found no significant differences in the acoustic parameters between individuals that were initially exposed to the Low or the High arousal condition (Fisher Omnibus test: χ2≤55.55, df=44, p≥0.114 for both conditions). This suggests that the order in which the subjects were exposed to the two arousal conditions did not affect the acoustic parameters of their vocalisations. Therefore, both groups were pooled for further analysis.
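The Fisher Omnibus (combination) test used here aggregates the k independent per-parameter p-values into a single statistic, χ2 = −2 Σ ln p_i with df = 2k, which is then compared against a chi-square distribution; a minimal sketch (function name assumed):

```python
import math

def fisher_omnibus(p_values):
    """Fisher's method: combine k independent p-values into
    chi2 = -2 * sum(ln p_i) with df = 2k."""
    chi2 = -2.0 * sum(math.log(p) for p in p_values)
    df = 2 * len(p_values)
    return chi2, df

# With 22 per-parameter p-values, df = 2 * 22 = 44, matching the
# degrees of freedom reported in the text.
chi2, df = fisher_omnibus([0.5] * 22)
```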
For both arousal conditions the majority of time-, source-, filter- and tonality-related parameters showed significant differences between individuals (Fisher Omnibus test: χ2≥784.64, df=44, p<0.001; Table 2). Almost all time-related parameters differed significantly between individuals for both arousal conditions (High arousal: F(17)≥1.89, N=18, p≤0.022; Low arousal: F(15)≥2.69, N=16, p≤0.001, except ICI: F(15)=1.23, N=16, p=0.256). All measured source- and tonality-related parameters differed between individuals for both arousal conditions (Low arousal: F(15)≥2.57, N=16, p≤0.002; High arousal: F(17)≥1.96, N=18, p≤0.016). Almost all measured filter-related parameters differed between individuals for both arousal conditions (High arousal: F(17)≥1.90, N=18, p≤0.022; Low arousal: F(15)≥1.85, N=16, p≤0.033, except BWF2 and SDF3: F(15)≤1.73, N=16, p≥0.052). To investigate whether calls can be correctly classified to the respective individuals, we performed a discriminant function analysis (DFA) combined with a principal component analysis (PCA) for each arousal condition separately.
Results of the one-way ANOVA testing for differences between individuals for each acoustic parameter and arousal condition, and the correlation coefficients with the three most important PCs of the DFA; LOW = Low arousal condition; HIGH = High arousal condition; bold p-values represent a significant difference at p < 0.05; bold loading factors represent parameters with loading factors higher than 0.700 on the respective PC
For the Low arousal condition, a PCA based on the acoustic parameters extracted seven factors (PCs) with an eigenvalue higher than 1, explaining 71.95% of the variance (see Additional file 1). An independent DFA based on these seven PCs was able to assign 53.13% of the calls to the respective individual (cross-validation: 41.88%), which was significantly above chance level (6%; p<0.001). On an individual level, significantly more calls were correctly classified than expected by chance for 15 out of 16 subjects in the original classification and for 12 out of 16 subjects in the cross-validation (p≤0.019). The DFA calculated seven DFs, of which DF1, 2 and 3 explained 86.6% of the variation in the calls. DF1 correlated most strongly with PC1 (r=0.568), DF2 with PC6 (r=0.698) and DF3 with PC2 (r=−0.593). PC1 showed the highest loading factors for the source-related parameters MeanF0, MinF0 and MaxF0 (r≥0.751; Table 2) and for the filter-related parameter F2-F1 (r=−0.704). PC2 showed the highest loading factors for the filter-related parameters MeanF1 and SDF1 (r≥0.755). PC6 showed no loading factors above 0.700.
For the High arousal condition, a PCA based on the acoustic parameters extracted seven factors with an eigenvalue higher than 1, explaining 68.90% of the variance (see Additional file 1). An independent DFA based on these seven PCs was able to assign 63.33% of the calls to the respective individual (cross-validation: 47.78%), which was significantly above chance level (6%; binomial test: p<0.001). On an individual level, significantly more calls were correctly classified than expected by chance for all subjects in the original classification and for 16 out of 18 subjects in the cross-validation (p≤0.019). The DFA calculated seven DFs, of which DF1, 2 and 3 explained 82.9% of the variation in the calls. DF1 correlated most strongly with PC1 (r=−0.730), DF2 with PC2 (r=0.700) and DF3 with PC3 (r=0.706). PC1 showed the highest loading factors for the filter-related parameters Peak, MeanF2 and F2-F1 (r≥0.711; Table 2). PC2 showed the highest loading factors for the source-related parameters MeanF0 and MaxF0 (r≥0.810). PC3 showed no loading factor above 0.700 for any of the acoustic parameters.
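The classification pipeline described above (PCA with an eigenvalue > 1 retention rule, followed by a DFA on the retained PCs, evaluated by cross-validation) can be sketched as follows with scikit-learn; the stand-in data, group sizes and the leave-one-out scheme are assumptions for illustration, not the study's actual data or settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 8 calls from each of 6 kittens,
# 22 acoustic parameters per call; a per-kitten offset makes the
# calls individually distinct.
n_ids, n_calls, n_params = 6, 8, 22
y = np.repeat(np.arange(n_ids), n_calls)
X = rng.normal(size=(n_ids * n_calls, n_params))
X += 3.0 * rng.normal(size=(n_ids, n_params))[y]

# Step 1: PCA on z-scored parameters; retain PCs with eigenvalue > 1.
Xz = StandardScaler().fit_transform(X)
pca = PCA().fit(Xz)
retained = pca.explained_variance_ > 1.0
scores = pca.transform(Xz)[:, retained]

# Step 2: DFA (linear discriminant analysis) on the retained PCs,
# with leave-one-out cross-validated classification accuracy.
lda = LinearDiscriminantAnalysis()
cv_accuracy = cross_val_score(lda, scores, y, cv=LeaveOneOut()).mean()
chance = 1.0 / n_ids  # 6 individuals -> ~17% chance level
```

In the study itself, the resulting original and cross-validated classification rates were then tested against the chance level with a binomial test.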
Comparing the classification accuracy between the two arousal conditions showed no significant differences (original: t(15)=1.29, N=16, p=0.215; cross-validation: t(15)=0.426, N=16, p=0.676), demonstrating that the level of individual distinctiveness was similar for both arousal conditions. A crossed pDFA testing for differences between subjects while controlling for arousal level also revealed that calls could be classified to individuals significantly above chance (original: p=0.004; cross-validation: p=0.002).
A nested pDFA testing for differences between subjects while controlling for litter confirmed significant differences between individuals (original and cross-validation: p≤0.001 for both arousal conditions). This suggests that the individual differences cannot be explained by the varying number of kittens per litter, which could have allowed one litter to contribute more to the results than another.
We found almost no significant differences in the acoustic parameters between sexes and almost no significant correlations with body weight. Regarding sex, only BWF3 in the High arousal condition and only SDF2 and SDF3 in the Low arousal condition differed significantly between sexes (t(16)≥|2.45|, N=18, p≤0.026). Regarding body weight, a significant negative correlation with call duration and a significant positive correlation with the percentage of voiced frames were found, but only for the Low arousal condition (r≥|0.540|, N=18, p≤0.021). However, controlling for multiple testing using the Fisher Omnibus test showed that these differences could be explained by chance (sex: χ2=104.33, df=88, p=0.113; body weight: χ2=101.09, df=88, p=0.161). This indicates that individual differences cannot be explained by sex or body weight. Furthermore, the body weight of the kittens did not differ between sexes (t(16)=1.09, Nfemale=Nmale=9, p=0.292).
All in all, almost all measured acoustic parameters differed between individuals in both arousal conditions. However, the classification of individuals was mainly attributable to source- and filter-related parameters. The parameters that seemed most important for classification were similar across conditions, suggesting consistency across different arousal levels.
Arousal affected the acoustic structure of kitten isolation calls in time-, source-, filter- and tonality-related parameters (Fisher Omnibus test: χ2=175.55, df=44, p<0.001; Table 3). Two out of three time-related parameters differed significantly between arousal conditions: call duration was longer in the High than in the Low arousal condition, whereas ICI was shorter (t(17)≥|2.58|, N=18, p≤0.019; Figure 1). Peak time showed a tendency to be longer in the High than in the Low arousal condition (t(17)=−1.92, N=18, p=0.072). Three out of four source-related parameters differed significantly between conditions: MeanF0, MinF0 and MaxF0 were lower in the High than in the Low arousal condition (t(17)≥3.12, N=18, p≤0.006; Figure 1). Six out of 12 filter-related parameters differed significantly between conditions: Peak and MeanF1 were higher in the High than in the Low arousal condition, whereas SDF1, BWF1 and F2-F1 were lower (t(17)≥|2.13|, N=18, p≤0.048). Furthermore, consistency was lower in the High than in the Low arousal condition (t(17)=3.03, N=18, p=0.008). Two out of three tonality-related parameters differed significantly between arousal conditions: the percentage of voiced frames and MaxHNR were lower in the High than in the Low arousal condition (t(17)≥|2.51|, N=18, p=0.022; Figure 1).
Mean and standard deviation of the acoustic parameters for the Low and High arousal conditions, results of the dependent t-test comparing the two arousal levels for each acoustic parameter, and the correlation coefficients with PC1; bold p-values represent a significant difference; ↑ = value is higher in the High than in the Low arousal condition, ↓ = value is lower in the High than in the Low arousal condition; bold loading factors represent parameters with loading factors higher than 0.700 on the respective PC
Based on the means of the acoustic parameters for each individual and arousal condition, a PCA extracted six factors with an eigenvalue higher than 1, explaining 81.28% of the variance (see Additional file 1). An independent DFA based on these six PCs was able to assign 88.9% of the cases to the respective arousal condition (cross-validation: 80.06%), which was significantly above chance level (50%; for original and cross-validated classification: both conditions: binomial test: p<0.001; Low arousal: p=0.008; High arousal: p=0.031; Figure 2). PC1 showed the highest correlation with the discriminant function (r=0.709), whereas the other factors showed correlations lower than |0.219|. PC1 showed the highest loading factors for call duration (r=−0.756), MinF0 (r=0.785), MeanF0 (r=0.746) and the percentage of voiced frames (r=0.712; Figure 1).
Analysing non-linear phenomena, we detected NLP in 47.46% of the analysed calls, but the percentage of calls containing NLP did not differ significantly between the Low and the High arousal condition (meanLow=50.00%; meanHigh=45.00%; Z=−0.358, n=17, N=18, p=0.720). The most frequently observed NLP was chaos (33.61%, N=18), followed by frequency jumps (15.43%, N=14) and subharmonics (9.26%, N=8). We found no significant differences between the Low and the High arousal condition in the percentage of calls containing frequency jumps (meanLow=20.00%; meanHigh=10.56%; Z=−1.84, n=12, N=18, p=0.066) or chaos (meanLow=38.89%; meanHigh=28.89%; Z=−1.03, n=15, N=18, p=0.304). In contrast, subharmonics were observed only in the High and not in the Low arousal condition (meanLow=0.00%; meanHigh=18.33%; Z=−2.55, n=8, N=18, p=0.011).
Altogether, the arousal conditions differed in time-, source-, filter- and tonality-related parameters. However, the acoustic parameters with the highest loadings for classification were call duration, the percentage of voiced frames, and the mean and minimum fundamental frequency. Significantly more calls contained subharmonics in the High arousal condition, whereas the occurrence of the other NLP did not differ between the two arousal conditions.
The results clearly show that in kitten isolation calls sender-identity and arousal level are encoded by different combinations of acoustic parameters. Although univariate analyses showed that almost all kinds of acoustic parameters varied with both sender-identity and arousal, the DFA combined with PCA suggested that the impact of certain parameters differed. Sender-identity was mainly determined by a combination of source- and filter-related parameters, whereas arousal level was mainly determined by a combination of time-, source- and tonality-related parameters.
Kitten isolation calls differed between individuals in almost all acoustic parameters, independent of arousal condition, and could be correctly classified above chance level, supporting our hypothesis that sender-identity is encoded in the acoustic structure of kitten isolation calls. This cannot be explained by the fact that we used a varying number of kittens per litter, which could have allowed one litter to contribute more to the results than another: the pDFA controlling for litter also revealed differences in the acoustic structure between kittens.
Individual distinctiveness was found for both arousal conditions and was also confirmed when pooling both conditions using a pDFA. For both arousal conditions, almost the same source- and filter-related parameters (MeanF0, MaxF0, F2-F1) contributed most to the classification result, suggesting that individual differences are consistent across different arousal levels. This is in agreement with several studies showing that infant isolation calls contain individual signatures (e.g.,[16, 62–64]). It can be assumed that these variations in the acoustic structure of kitten isolation calls can be perceived by the mother, since Härtle demonstrated that mothers recognise their kittens by their voices. Individual signatures in infant isolation calls would thus allow mothers to discriminate their own infants from those of others, to direct their care-giving behaviour and thereby to increase their own fitness. This suggests that individual signatures in kitten isolation calls may be an important tool for kin selection.
We found no effect of sex on the acoustic structure of kitten isolation calls, in agreement with other studies on small-bodied animals (e.g., tree shrews:; pygmy marmosets:; bats:), whereas the majority of studies on large-bodied animals revealed sex-specific differences (see review on primates:). Ey and colleagues argued that these sex-specific differences were mainly driven by differences in body size due to sexual dimorphism. Since kittens at this age do not show such a sexual dimorphism in body weight, no differences in the acoustic structure of kitten isolation calls were expected. We also found no influence of body weight, which is likewise in agreement with the findings of other studies (e.g., see review on primates and additionally tree shrews:). Ey and colleagues argued that a relationship between body size and acoustic parameters is highly predictable when body size variation is large, but less predictable when variation is small. Thus, it could be argued that the variation in body weight among our kittens was not large enough to affect the acoustic structure of their isolation calls (mean=307.33 g; range: 246–370 g; SD=33.03). All in all, kitten isolation calls contain individual signatures, which cannot be explained by sex or body weight.
Our hypothesis that arousal is encoded in the acoustic parameters of kitten isolation calls was supported. Calls recorded in the High arousal condition were characterised by a longer call duration, a shorter intercall interval, a lower fundamental frequency, higher peak and first formant frequencies and lower tonality values than calls recorded in the Low arousal condition. This is partly in agreement with other studies in cats investigating whether the acoustic structure of isolation calls varies between contexts[58, 60]. Our results are in line with the findings of Haskins and of Romand and Ehret that call duration was shorter in low arousal contexts (isolation without manipulation) than in high arousal contexts comparable to our High arousal condition (namely a Restrain, a Picked-up and a Tail-pressing context). Regarding our finding that the fundamental frequency was lower in the High than in the Low arousal condition, our data are not in agreement with those of Haskins, who found no significant differences in fundamental frequency between the Isolation and the Restrain context. However, Romand and Ehret found that the fundamental frequency became significantly lower in the Tail-pressing context than in the Isolation context once kittens turned 32 days old.
Comparing our results with other animal taxa, we found that similar changes in the temporal parameters are reported for a variety of mammalian taxa and behavioural contexts (see reviews[23, 24]). Concerning source-related parameters, the results are mixed: the majority of studies found either an increase of fundamental frequency with increasing arousal or no effect (see reviews[23, 24]). Surprisingly, we found a decrease in fundamental frequency from the Low to the High arousal condition. As described above, Romand and Ehret also found a decrease in F0 in the Tail-pressing context (similar to our High arousal condition) compared to the Isolation context (similar to our Low arousal condition) in 32–46-day-old kittens. Furthermore, it was shown for grey mouse lemurs that during male-male interactions the start fundamental frequency of their calls was lower in contexts involving physical fights (assumed to reflect high arousal) than in contexts without physical contact (assumed to reflect low arousal).
For the filter-related parameters, we found an increase of the peak frequency and the frequency of the first formant from the Low to the High arousal condition. An increase in the frequency of filter-related parameters was also found for pigs, primates[30, 70] and tree shrews. An increase in the frequency of the first formant (= resonance frequency) was also found in pigs and chimpanzees. Furthermore, the decrease in consistency agrees with findings in tree shrews. The increase in peak and formant frequencies could be explained by a wider mouth opening, which results in a shorter effective vocal tract length. It could be argued that the changes we found in the acoustic parameters, especially the filter-related ones, were caused by the manipulation in the High arousal condition; that is, turning a kitten onto its back may change the length of its vocal tract. However, we did not systematically manipulate the head position, so the angle between the head and the chest could vary between kittens. Given this unsystematic variation of head position, it would be unlikely that the analysis of sender-identity favoured the same source- and filter-related parameters in both arousal conditions. Thus, we suggest that turning the kitten onto its back cannot account for the increase in filter-related parameters. Instead, we favour the assumption that mouth opening shortens the vocal tract, resulting in an increase of filter-related parameters, as already shown for cats by Shipley and colleagues. The decrease in tonality from the Low to the High arousal condition agrees with findings in other animals (e.g.,[7, 20, 37]). The decrease in tonality may go along with an increase in non-linear phenomena due to a loss of vocal control. However, we only found a difference between the Low and the High arousal condition in the percentage of calls containing subharmonics, not for NLP in general, chaos or frequency jumps.
Stoeger and colleagues found a positive correlation between the harmonic-to-noise ratio (HNR) and the duration of chaotic segments. Since we found a decrease in MaxHNR, it could be assumed that although the occurrence of NLP (percentage of calls) was the same, the proportion of NLP within a call differed. In the data set used for these analyses we could not always reliably determine when a chaotic segment started or ended. Therefore, further studies are needed to investigate the role and function of non-linear phenomena in kitten isolation calls.
Exposing animals to a situation assumed to induce a specific emotion and measuring the corresponding behavioural and physiological changes is a general approach in animal emotion research. Vocal correlates of arousal have been investigated by exposing subjects to different levels of situational urgency within the same behavioural context and analysing the acoustic parameters of their vocal expressions (e.g.,[7, 23, 30, 34, 44]). In this study, kittens were separated from their mother and siblings in both conditions. In the Low arousal condition they were left undisturbed, whereas in the High arousal condition they were additionally manipulated by the experimenter, which was assumed to induce a higher level of urgency/arousal. However, even if we assume that the general behavioural context and the emotional quality were fairly similar between the sub-contexts, we cannot rule out that the meaning/function of the vocalisations differs between sub-contexts. To clarify this, further studies are needed that expose kittens to different contexts assumed to vary in arousal and also in emotional quality and compare their responses.
All in all, we found that arousal-related changes of time- and tonality-related parameters in kitten isolation calls correspond with previous findings in other mammalian taxa.
In conclusion, our results showed that kitten isolation calls encode both sender-identity and arousal, and that different sets of parameters seem to be important: time-, source- and tonality-related parameters mainly encode arousal, whereas source- and filter-related parameters mainly encode sender-identity. Source-related parameters thus seem to be important for encoding both sender-identity and arousal. This suggests that, based on parameters of the fundamental frequency alone, we cannot differentiate between sender-identity and arousal. Instead, we argue that no single parameter alone codes for arousal or sender-identity (especially because almost all of them vary), but that certain sets or relations of parameters encode sender-identity or arousal. Playback studies manipulating specific acoustic parameters are therefore needed to verify which acoustic parameters are biologically important for recognising sender-identity and arousal.
Material & methods
Subjects and housing
We tested 18 mongrel kittens (9 males, 9 females) from 6 litters, aged 9 to 11 days, housed in the SPF (Specific Pathogen Free) breeding colony at Hannover Medical School. All kittens were reared by their mothers. The animal husbandry there complies with the recommendations for domestic cats given in Appendix A of the European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes (ETS No. 123) (http://conventions.coe.int/Treaty/EN/Treaties/PDF/123-Arev.pdf). Each mother and her kittens lived in one animal room (12.5 m2 to 20.6 m2) equipped with a wooden nest box, an infrared lamp as an additional heat source, bars for scratching and plastic items for playing. The cats were used to the daily routine of animal keepers entering the animal rooms and playing with or grooming them. All kittens were familiar with being handled by humans due to the daily weighing routine, and the mothers were used to the kittens being removed from the nest box for a short time. Furthermore, they had acoustic and olfactory contact with other cats. The mothers were fed daily with canned (Pet, De Haan Petfood, Nieuwkoop, the Netherlands) and dry cat food (SDS Pet Food, Special Diets Services, Witham, Essex, UK). Additionally, freshly killed rats were provided daily, together with milk or curd cheese. Water was available ad libitum. Animals were housed at a temperature of 22±2°C, a relative humidity of 55±5% and a light/dark cycle of 12:12 hours (lights on at 6:00 a.m.).
Experimental procedure and data recording
Experiments were performed in the animal room of the respective mother and her kittens. We conducted a separation paradigm in which each kitten was removed from its nest box and spatially separated from its mother and siblings. To induce two different levels of arousal in a kitten (the Low and the High arousal condition), kittens were exposed to two sub-contexts varying in the level of situational urgency. In the Low arousal condition a kitten was only spatially separated from its mother and siblings and left undisturbed by the experimenter (i.e., placed alone on the floor of the animal room), whereas in the High arousal condition a kitten was additionally manipulated by the experimenter, i.e., it was grasped, lifted off the ground and/or turned onto its back so that its legs had no contact with the ground. In the Low arousal condition kittens moved around slowly, whereas in the High arousal condition they struggled with their legs and tried to turn around. We therefore assume that the strong manipulation by the experimenter in the High arousal condition induced a higher level of urgency/arousal than the Low arousal condition, where kittens were left undisturbed.
Kittens were tested in one session, in which both conditions were performed in randomised order for 3 minutes each. After finishing a condition, kittens were reunited with their mother and siblings before the other condition was performed. The inter-condition interval depended on the number of siblings: we tested the kittens of one litter one after another in the first condition and, after all kittens had finished, tested them in the same order in the second condition. To avoid stress for the mother, she remained in the animal room but was prevented by the animal keeper from coming into contact with the kittens during the experimental trial (e.g., the keeper groomed or played with the mother).
Kitten vocal responses were recorded using a Sennheiser microphone (ME 67, Sennheiser, Wedemark, Germany; frequency range: 40 – 20,000 Hz) linked to a Marantz professional solid state recorder (PMD 660, Marantz, Osnabrück, Germany; sampling frequency: 44.1 kHz, 16 bit). Sound files were stored as wave files on a Compact Flash memory card (4 GB, SanDisk Corporation, Milpitas, CA, USA). The kittens’ behaviour was videotaped using a digital camcorder (Sony DR-TRV 22E-PAL, Tokyo, Japan).
Vocal recordings were visually inspected using spectrograms in the software Batsound PRO 3.31 (Pettersson Elektronik AB, Uppsala, Sweden). Isolation calls were characterised as tonal calls with a rise and fall in the fundamental frequency and peak intensity around the mid-point (Figure 3a). For each individual and each arousal condition we selected 10 calls of good quality with a minimum amplitude difference of 5% between the background noise and the maximum amplitude of the call. For the Low arousal condition we selected the first 10 calls of good quality. For the High arousal condition we selected the first 10 calls of good quality after turning the kitten onto its back (except for one kitten, which was only lifted up so that its legs had no contact with the ground). In total, we analysed 348 calls from 18 individuals; for two individuals only three and five calls, respectively, were available in the Low arousal condition.
We performed a multi-parametric sound analysis using the software Batsound PRO 3.31, SIGNAL 3.1 (Engineering Design, Berkeley, California, U.S.A.) and PRAAT (http://www.praat.org) combined with GSU PRAAT TOOLS. Batsound PRO was used to measure the call duration and intercall-interval manually in the oscillogram of the calls. Furthermore, we classified visually whether a call contained non-linear phenomena (NLPs) and which type of NLP was present. Following the classification of Riede and colleagues, we classified a call as containing NLPs if we could observe one or more of the following non-linear components: frequency jumps, subharmonics or chaos (Figure 3b-c). Frequency jumps were defined as abrupt upward or downward transitions of the fundamental frequency (F0). Subharmonics were defined as additional spectral components at integer fractions of the F0 (e.g., F0/2, F0/3). Chaos was defined as broad-band frequency components which could contain traces of harmonic elements. If a call contained none of these components, we classified it as a harmonic call (Figure 3a). To control for the reliability of the visual classification, a second person analysed all calls and we calculated the percentage of agreement between both raters. The raters agreed in 85.63% of the calls for NLPs in total, in 87.64% for frequency jumps, in 85.63% for chaos and in 95.98% for subharmonics. The software SIGNAL 3.1 was used to measure the peak frequency, the cepstral peak and the consistency using self-written macros. We calculated a power spectrum over the entire call to measure the peak frequency. To measure the cepstral peak, we calculated the cepstrum over the 10 ms in the middle of the call. The cepstrum is the spectrum of the log-magnitude spectrum of a signal (CEP command) and is used to study the periodicity of a time signal: it shows a cepstral peak at the periodicity of the signal (i.e., the harmonic interval).
Thus, a signal with a fundamental frequency of 100 Hz shows a cepstral peak at 10 ms (1/100 Hz=10 ms). The cepstral peak is higher for calls with a clear harmonic structure (high tonality) and a stable pitch. To measure the spectral consistency across the entire call, we measured the maximum correlation by correlating the power spectra of successive 10 ms time segments of the call with each other. The maximum correlation is the maximum value of the normalised cross-covariance function, which is a sequence of correlation values for successive intervals. The software PRAAT combined with GSU PRAAT TOOLS 1.9 (GSU -> quantify) was used to measure acoustic parameters related to the fundamental frequency, the formants and tonality. Using the sub-menu “quantify Amp and Dur”, we measured the Peaktime, i.e. the time between the onset and the maximum amplitude of the call. Using the sub-menu “quantify Source” (min pitch: 75 Hz; max pitch: 3000 Hz; time steps: 0.01 s), we measured the source-related parameters as well as the number of voiced frames (Voiced) and the maximum harmonic-to-noise ratio (MaxHNR). We used the pitch target segment to check and correct the data. Using the sub-menu “quantify formant” (number of formants: 4; max formant value: 20 kHz; time steps: 0.01 s; see Additional file 2), we measured the first, second and third formants. To estimate the number of formants expected in kitten isolation calls we used a formula according to Pfefferle and Fischer.
N = 2·L·fc/c + 1/2, rounded down to the nearest integer,

where N=number of formants, L=vocal tract length [m], c=speed of sound (350 m/s) and fc=cut-off frequency [Hz]. Carterette and colleagues reported the length of the vocal tract of young kittens (first week of life) as approximately 3.0 to 3.5 cm. As we tested kittens of 9–11 days, we used the maximum value of vocal tract length reported by Carterette and colleagues for estimating the number of formants (L=3.5 cm). Kitten isolation calls ranged up to a frequency of 20,000 Hz, which we used as the cut-off frequency. Furthermore, we calculated the distance between the means of the second and the first formant.
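The tube-model estimate can be evaluated directly. The following Python sketch is our own formalisation (not the authors' code) of the count of quarter-wavelength resonances of a uniform tube closed at one end, where formant n lies at (2n - 1)·c/(4L); with the values given above it reproduces the PRAAT setting of 4 formants.

```python
import math

def expected_formants(L, fc, c=350.0):
    """Expected number of formants below the cut-off frequency fc [Hz] for a
    uniform tube of length L [m] closed at one end; formant n lies at
    (2n - 1) * c / (4 * L), with c the speed of sound [m/s]."""
    return math.floor(2 * L * fc / c + 0.5)

# Values from the text: L = 3.5 cm, fc = 20,000 Hz, c = 350 m/s
print(expected_formants(0.035, 20000.0))  # -> 4 (formants at 2.5, 7.5, 12.5, 17.5 kHz)
```

With L = 3.5 cm the resonances fall at 2500, 7500, 12500 and 17500 Hz, and the fifth (22500 Hz) lies above the 20 kHz cut-off.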
In total, we measured 3 time-, 4 source-, 12 filter- and 3 tonality-related parameters. Detailed descriptions of the acoustic parameters are presented in Table 1.
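The cepstral-peak measure described above (a 100 Hz fundamental yields a peak at a quefrency of 10 ms) can be illustrated on a synthetic signal. The following Python sketch is our illustration, not the self-written SIGNAL macro used in the study.

```python
import numpy as np

fs = 44100                      # sampling rate (Hz), as in the recordings
f0 = 100.0                      # fundamental frequency of the synthetic signal
t = np.arange(0, 0.1, 1 / fs)   # 100 ms test segment

# Harmonic test signal: F0 plus two overtones, mimicking a tonal call
x = (np.sin(2 * np.pi * f0 * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# Real cepstrum: inverse FFT of the log-magnitude spectrum
spectrum = np.abs(np.fft.rfft(x))
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))

# Locate the cepstral peak in a plausible quefrency range (2-20 ms)
lo, hi = int(0.002 * fs), int(0.020 * fs)
peak_idx = lo + np.argmax(cepstrum[lo:hi])
peak_quefrency = peak_idx / fs
print(f"cepstral peak at {peak_quefrency * 1000:.1f} ms")  # ~10 ms = 1/100 Hz
```

A noisier or pitch-unstable signal spreads energy away from this peak, which is why the cepstral peak indexes tonality and pitch stability.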
Statistical analysis
To analyse whether the order in which subjects were exposed to the two conditions affects the acoustic parameters of their vocalisations, we performed independent t-tests and controlled for multiple testing by applying the Fisher Omnibus test, which combines multiple p-values.
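The Fisher Omnibus test pools k independent p-values into a single chi-square statistic with 2k degrees of freedom. A minimal Python sketch, using made-up p-values rather than those of the study:

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function for even df (df = 2m):
    P(X > x) = exp(-x/2) * sum_{i=0}^{m-1} (x/2)**i / i!"""
    m = df // 2
    term, total = 1.0, 1.0
    for i in range(1, m):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def fisher_omnibus(p_values):
    """Fisher's combined probability test: -2 * sum(ln p_i) is chi-square
    distributed with 2k degrees of freedom under the joint null hypothesis."""
    k = len(p_values)
    statistic = -2.0 * sum(math.log(p) for p in p_values)
    return statistic, chi2_sf_even_df(statistic, 2 * k)

# Hypothetical p-values from four independent t-tests
stat, p_combined = fisher_omnibus([0.04, 0.20, 0.65, 0.01])
print(f"chi2 = {stat:.2f} (df = 8), combined p = {p_combined:.4f}")
```

The degrees of freedom are always even here (2k), which permits the closed-form survival function above without a statistics library.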
To investigate sender-identity in kitten isolation calls, we conducted the following analysis for each condition separately. First, to investigate whether acoustic parameters differ statistically between individuals, we performed a one-way ANOVA; to control for multiple testing we applied the Fisher Omnibus test combining multiple p-values. Second, to investigate whether calls can be correctly assigned to the respective individuals, we performed a DFA combined with a PCA. Thus, we first performed a PCA, extracting PCs with an eigenvalue higher than 1 to reduce the number of parameters. We considered acoustic parameters with a loading higher than 0.700 on the respective PC as parameters with a strong impact on that factor. Based on these extracted PCs we calculated a DFA. In addition to the original DFA classification, we performed a cross-validation using the leave-one-out method. Furthermore, we investigated whether the number of correctly classified calls was significantly higher than expected by chance using a binomial test.
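For the binomial test of classification above chance, the chance level with g individuals is 1/g per call. This Python sketch uses hypothetical counts, not the study's DFA results, to show the computation:

```python
import math

def binom_sf(k, n, p):
    """Exact one-sided binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_calls, n_subjects = 180, 18     # hypothetical: 10 calls from each of 18 kittens
chance = 1.0 / n_subjects         # ~5.6% of calls correct expected by chance
correct = 90                      # hypothetical number of correctly classified calls

p_value = binom_sf(correct, n_calls, chance)
print(f"{correct}/{n_calls} correct (chance {chance:.1%}): p = {p_value:.3g}")
```

With these illustrative numbers the classification rate (50%) lies far above the ~5.6% chance level, so the one-sided p-value is vanishingly small.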
To investigate whether the level of individual distinctiveness varies between arousal conditions, we recalculated the DFA for the High arousal condition using the same subjects as for the Low arousal condition (N=16) and compared the percentage of correctly classified calls per subject between arousal conditions using a dependent t-test. To test the consistency of individual signatures across arousal levels, we pooled the data of both arousal conditions and performed a crossed permuted DFA (pDFA) using subject as test factor and arousal as control factor. Since subjects belonged to different litters and litter size differed, we also performed a nested pDFA using subject as test factor and litter as control factor.
To control for effects of sex and body weight on the acoustic structure of kitten isolation calls, we compared each acoustic parameter, as well as body weight, between male and female kittens using independent t-tests, and correlated body weight with the acoustic parameters using Pearson correlations.
To investigate whether arousal is encoded in kitten isolation calls, we first calculated the mean of each acoustic parameter for each individual and condition. Then we compared these means between the Low and High arousal conditions using dependent t-tests. To test whether arousal could be correctly classified based on the acoustic parameters of the isolation calls, we conducted a DFA based on the per-subject means of the acoustic parameters, analogous to the sender-identity analyses (see above).
To investigate the occurrence of non-linear phenomena, we calculated for each individual the percentage of calls containing NLPs (total), frequency jumps, chaos, or subharmonics. To investigate whether the occurrence of NLPs differed between conditions, we compared these percentages between conditions using a non-parametric test, the Wilcoxon signed-rank test, because these data were not normally distributed.
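The two paired comparisons described above (dependent t-tests on per-subject parameter means; the Wilcoxon signed-rank test on the NLP percentages) can be run with scipy. The values below are illustrative, not the study's data:

```python
from scipy import stats

# Hypothetical per-subject means of one acoustic parameter (e.g. call
# duration in s) in the Low and High arousal conditions, paired by subject
low  = [0.62, 0.55, 0.70, 0.58, 0.66, 0.61, 0.59, 0.64]
high = [0.71, 0.63, 0.74, 0.66, 0.78, 0.69, 0.70, 0.72]

t, p_t = stats.ttest_rel(low, high)   # dependent t-test (normally distributed data)
w, p_w = stats.wilcoxon(low, high)    # Wilcoxon signed-rank (non-normal data)
print(f"t = {t:.2f}, p = {p_t:.4f}; W = {w}, p = {p_w:.4f}")
```

Both tests use the per-subject differences, so each kitten serves as its own control across the two arousal conditions.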
All tests were performed using the statistical software SPSS 19, except the Fisher Omnibus test and the pDFA. The Fisher Omnibus test was calculated manually using Excel. The pDFA was performed using scripts written by R. Mundry (MPI for Evolutionary Anthropology, Leipzig, Germany), which run in the statistical software R (http://www.r-project.org/).
Abbreviations: PCA, principal component analysis; PC, principal component factor; DFA, discriminant function analysis.
The acoustic parameters described in Table 1 comprise the mean, minimum, maximum and standard deviation of the fundamental frequency; the mean frequency, standard deviation and bandwidth of each of the first, second and third formants; the difference between the second and first formant frequencies; the percentage of voiced frames; and the maximum harmonic-to-noise ratio.
We wish to thank Kristin Möller for her assistance during the data collection and Sönke v.d. Berg for preparing the figures, Frances Sherwood-Brock for polishing up the English, Roger Mundry for providing and adapting the pDFA scripts and Sabine Schmidt for critical comments on the manuscript.
Institute of Zoology, University of Veterinary Medicine Hannover
Institute for Laboratory Animal Science, Hannover Medical School
Scherer KR: Vocal affect expression: a review and a model for future research. Psychol Bull. 1986, 99: 143-165.
Lieberman P, Blumstein SE: Speech physiology, speech perception, and acoustic phonetics. 1988, Cambridge, MA: Cambridge University Press
Bachorowski JA, Owren MJ: Acoustic correlates of talker sex and individual talker identity are present in a short vowel segment produced in running speech. J Acoust Soc Am. 1999, 106 (2): 1054-1063. 10.1121/1.427115.
Leliveld LMC, Scheumann M, Zimmermann E: Acoustic correlates of individuality in the vocal repertoire of a nocturnal primate (Microcebus murinus). J Acoust Soc Am. 2011, 129 (4): 2278-2288. 10.1121/1.3559680.
Rendall D, Owren MJ, Rodman PS: The role of vocal tract filtering in identity cueing in rhesus monkey (Macaca mulatta) vocalizations. J Acoust Soc Am. 1998, 103 (1): 602-614. 10.1121/1.421104.
Schehka S, Zimmermann E: Acoustic features to arousal and identity in disturbance calls of tree shrews (Tupaia belangeri). Behav Brain Res. 2009, 203 (2): 223-231. 10.1016/j.bbr.2009.05.007.
Searby A, Jouventin P: Mother-lamb acoustic recognition in sheep: a frequency coding. Proc Biol Sci. 2003, 270 (1526): 1765-1771. 10.1098/rspb.2003.2442.
Volodin IA, Lapshina EN, Volodina EV, Frey R, Soldatova NV: Nasal and oral calls in juvenile goitred gazelles (Gazella subgutturosa) and their potential to encode sex and identity. Ethology. 2011, 117 (4): 294-308. 10.1111/j.1439-0310.2011.01874.x.
Lemasson A, Boutin A, Boivin S, Blois-Heulin C, Hausberger M: Horse (Equus caballus) whinnies: a source of social information. Anim Cogn. 2009, 12 (5): 693-704. 10.1007/s10071-009-0229-9.
Budde C, Klump GM: Vocal repertoire of the black rhino Diceros bicornis ssp and possibilities of individual identification. Mamm Biol. 2003, 68 (1): 42-47. 10.1078/1616-5047-00060.
Müller CA, Manser MB: Mutual recognition of pups and providers in the cooperatively breeding banded mongoose. Anim Behav. 2008, 75: 1683-1692. 10.1016/j.anbehav.2007.10.021.
Charlton BD, Zhihe Z, Snyder RJ: Vocal cues to identity and relatedness in giant pandas (Ailuropoda melanoleuca). J Acoust Soc Am. 2009, 126 (5): 2721-2732. 10.1121/1.3224720.
Nousek AE, Slater PJB, Wang C, Miller PJO: The influence of social affiliation on individual vocal signatures of northern resident killer whales (Orcinus orca). Biol Letters. 2006, 2: 481-484. 10.1098/rsbl.2006.0517.
Scherrer JA, Wilkinson GS: Evening bat isolation calls provide evidence for heritable signatures. Anim Behav. 1993, 46 (5): 847-860. 10.1006/anbe.1993.1270.
Gelfand DL, McCracken GF: Individual variation in the isolation calls of Mexican free-tailed bat pups (Tadarida brasiliensis mexicana). Anim Behav. 1986, 34: 1078-1086. 10.1016/S0003-3472(86)80167-1.
Randall JA, McCowan B, Collins KC, Hooper SL, Rogovin K: Alarm signals of the great gerbil: Acoustic variation by predator context, sex, age, individual, and family group. J Acoust Soc Am. 2005, 118 (4): 2706-2714. 10.1121/1.2031973.
Matrosova VA, Blumstein DT, Volodin IA, Volodina EV: The potential to encode sex, age, and individual identity in the alarm calls of three species of Marmotinae. Naturwissenschaften. 2011, 98 (3): 181-192. 10.1007/s00114-010-0757-9.
McComb K, Reby D, Baker L, Moss C, Sayialel S: Long-distance communication of acoustic cues to social identity in African elephants. Anim Behav. 2003, 65: 317-329. 10.1006/anbe.2003.2047.
Soltis J, Leong K, Savage A: African elephant vocal communication II: rumble variation reflects the individual identity and emotional state of callers. Anim Behav. 2005, 70: 589-599. 10.1016/j.anbehav.2004.11.016.
Sousa-Lima RS, Paglia AP, da Fonseca GAB: Signature information and individual recognition in the isolation calls of Amazonian manatees, Trichechus inunguis (Mammalia: Sirenia). Anim Behav. 2002, 63: 301-310. 10.1006/anbe.2001.1873.
Koren L, Geffen E: Individual identity is communicated through multiple pathways in male rock hyrax (Procavia capensis) songs. Behav Ecol Sociobiol. 2011, 65 (4): 675-684. 10.1007/s00265-010-1069-y.
Zimmermann E, Leliveld LMC, Schehka S: Towards the evolutionary roots of affective prosody in human acoustic communication: a comparative approach to mammalian voices. Evolution of emotional communication: from sound in nonhuman mammals to speech and music in man. Edited by: Altenmüller E, Schmidt S, Zimmermann E. Oxford: Oxford University Press, in press
Briefer EF: Vocal expression of emotions in mammals: mechanisms of production and evidence. J Zool. 2012, 10.1111/j.1469-7998.2012.00920.x.
Volodin IA, Volodina EV, Gogoleva SS, Doronina LO: Indicators of emotional arousal in vocal emissions of the humans and nonhuman mammals. Zh Obshch Biol. 2009, 70 (3): 210-224.
Fitch WT: The evolution of language. 2010, Cambridge: Cambridge University Press
Fant G: Acoustic theory of speech production. With calculations based on X-ray studies of Russian articulations. 1960, The Hague: Mouton & Co
Scherer KR: Vocal correlates of emotional arousal and affective disturbance. Handbook of Psychophysiology: Emotion and social behavior. Edited by: Wagner H, Manstead A. 1989, London: Wiley, 165-197.
Blumstein DT, Récapet C: The sound of arousal: The addition of novel non-linearities increases responsiveness in marmot alarm calls. Ethology. 2009, 115 (11): 1074-1081. 10.1111/j.1439-0310.2009.01691.x.
Wilden I, Herzel H, Peters G, Tembrock G: Subharmonics, biphonation, and deterministic chaos in mammal vocalization. Bioacoustics. 1998, 9: 171-196.
Mende W, Herzel H, Wermke K: Bifurcations and chaos in newborn infant cries. Phys Lett A. 1990, 145: 418-424.
Riede T, Arcadi AC, Owren MJ: Nonlinear acoustics in the pant hoots of common chimpanzees (Pan troglodytes): Vocalizing at the edge. J Acoust Soc Am. 2007, 121 (3): 1758-1767. 10.1121/1.2427115.
Volodina EV, Volodin IA, Isaeva IV, Unck C: Biphonation may function to enhance individual recognition in the dhole, Cuon alpinus. Ethology. 2006, 112 (8): 815-825. 10.1111/j.1439-0310.2006.01231.x.
Taylor AM, Reby D: The contribution of source-filter theory to mammal vocal communication research. J Zool. 2010, 280 (3): 221-236. 10.1111/j.1469-7998.2009.00661.x.
Bastian A, Schmidt S: Affect cues in vocalizations of the bat, Megaderma lyra, during agonistic interactions. J Acoust Soc Am. 2008, 124 (1): 598-608. 10.1121/1.2924123.
Camaclang AE, Hollis L, Barclay RMR: Variation in body temperature and isolation calls of juvenile big brown bats Eptesicus fuscus. Anim Behav. 2006, 71: 657-662. 10.1016/j.anbehav.2005.07.009.
Spillmann B, Dunkel LP, van Noordwijk MA, Amda RNA, Lameira AR, Wich SA, van Schaik CP: Acoustic properties of long calls given by flanged male orang-utans (Pongo pygmaeus wurmbii) reflect both individual identity and context. Ethology. 2010, 116 (5): 385-395. 10.1111/j.1439-0310.2010.01744.x.
Yin S, McCowan B: Barking in domestic dogs: context specificity and individual identification. Anim Behav. 2004, 68: 343-355. 10.1016/j.anbehav.2003.07.016.
Thomas TJ, Weary DM, Appleby MC: Newborn and 5-week-old calves vocalize in response to milk deprivation. Appl Anim Behav Sci. 2001, 74: 165-173. 10.1016/S0168-1591(01)00164-2.
Heid S, Hartmann R, Klinke R: A model for prelingual deafness, the congenitally deaf white cat - population statistics and degenerative changes. Hear Res. 1998, 115 (1–2): 101-112.
Kral A, Hartmann R, Tillein J, Heid S, Klinke R: Hearing after congenital deafness: central auditory plasticity and sensory deprivation. Cereb Cortex. 2002, 12 (8): 797-807. 10.1093/cercor/12.8.797.
Deag JM, Manning A, Lawrence CE: Factors influencing mother-kitten relationship. The domestic cat: The biology of its behaviour. Edited by: Turner DC, Bateson P. 2000, Cambridge: Cambridge University Press, 23-45.
Bateson P: Behavioural development in cats. The domestic cat: The biology of its behaviour. Edited by: Turner DC, Bateson P. 2000, Cambridge: Cambridge University Press, 9-22.
Levine MS, Hull CD, Buchwald NA: Development of motor activity in kittens. Dev Psychobiol. 1978, 13 (4): 357-371.
Jensen RA, Davis JL, Shnerson A: Early experience facilitates the development of temperature regulation in the cat. Dev Psychobiol. 1980, 13 (1): 1-6. 10.1002/dev.420130102.
Härtel R: Zur Struktur und Funktion akustischer Signale im Pflegesystem der Hauskatze (Felis catus L.). Biol Zbl. 1975, 94: 187-204.
Moelk M: Vocalizing in the house-cat; a phonetic and functional study. Am J Psychol. 1944, 57 (2): 184-205. 10.2307/1416947.
Brown KA, Buchwald JS, Johnson JR, Mikolich DJ: Vocalization in the cat and kitten. Dev Psychobiol. 1978, 11 (6): 559-570. 10.1002/dev.420110605.
Romand R, Ehret G: Development of sound production in normal, isolated and deafened kittens during the first postnatal months. Dev Psychobiol. 1984, 17: 629-649. 10.1002/dev.420170606.
Kiley-Worthington M: Animal language? Vocal communication of some ungulates, canids and felids. Acta Zool Fennica. 1984, 171: 83-88.
Haskins R: A causal analysis of kitten vocalization: an observational and experimental study. Anim Behav. 1979, 27: 726-736.
Haskins R: Effect of kitten vocalizations on maternal behavior. J Comp Physiol Psych. 1977, 91 (4): 830-838.
Phillips AV, Stirling I: Vocal individuality in mother and pup South American fur seals, Arctocephalus australis. Mar Mammal Sci. 2000, 16 (3): 592-616. 10.1111/j.1748-7692.2000.tb00954.x.
Hammerschmidt K, Todt D: Individual differences in vocalizations of young Barbary macaques (Macaca sylvanus) - a multi-parametric analysis to identify critical cues in acoustic signaling. Behaviour. 1995, 132: 381-399. 10.1163/156853995X00621.
Terrazas A, Serafin N, Hernández H, Nowak R, Poindron P: Early recognition of newborn goat kids by their mother: II. Auditory recognition and evidence of an individual acoustic signature in the neonate. Dev Psychobiol. 2003, 43 (4): 311-320. 10.1002/dev.10139.
de la Torre S, Snowdon CT: Dialects in pygmy marmosets? Population variation in call structure. Am J Primatol. 2009, 71 (4): 333-342. 10.1002/ajp.20657.
Kazial KA, Kenny TL, Burnett SC: Little brown bats (Myotis lucifugus) recognize individual identity of conspecifics using sonar calls. Ethology. 2008, 114 (5): 469-478. 10.1111/j.1439-0310.2008.01483.x.
Ey E, Pfefferle D, Fischer J: Do age- and sex-related variations reliably reflect body size in non-human primate vocalizations? A review. Primates. 2007, 48: 253-267. 10.1007/s10329-006-0033-y.
Dietz M, Zimmermann E: Does call structure in a nocturnal primate change with arousal? [Abstract]. Folia Primatol. 2004, 75 (S1): 370-371.
Hillmann E, Mayer C, Schön P-C, Puppe B, Schrader L: Vocalisation of domestic pigs (Sus scrofa domestica) as an indicator for their adaptation towards ambient temperatures. Appl Anim Behav Sci. 2004, 89: 195-206. 10.1016/j.applanim.2004.06.008.
Slocombe KE, Zuberbühler K: Food-associated calls in chimpanzees: responses to food types or food preferences? Anim Behav. 2006, 72: 989-999. 10.1016/j.anbehav.2006.01.030.
Düpjan S, Schön P-C, Puppe B, Tuchscherer A, Manteuffel G: Differential vocal response to physical and mental stressors in domestic pigs (Sus scrofa). Appl Anim Behav Sci. 2008, 114: 105-115. 10.1016/j.applanim.2007.12.005.
Shipley C, Carterette EC, Buchwald JS: The effects of articulation on the acoustical structure of feline vocalizations. J Acoust Soc Am. 1991, 89: 902-909. 10.1121/1.1894652.
Boersma P: Praat, a system for doing phonetics by computer. Glot International. 2001, 5 (9/10): 341-345.
Owren MJ: GSU Praat Tools: scripts for modifying and analyzing sounds using Praat acoustics software. Behav Res Methods. 2008, 40 (3): 822-829. 10.3758/BRM.40.3.822.
Pfefferle D, Fischer J: Sounds and size: identification of acoustic variables that reflect body size in hamadryas baboons, Papio hamadryas. Anim Behav. 2006, 72: 43-51. 10.1016/j.anbehav.2005.08.021.
Carterette EC, Shipley C, Buchwald JS: Linear prediction theory of vocalization in cat and kitten. Frontiers of speech communication research. Edited by: Lindblom B, Öhman S. 1979, London: Academic Press, 245-257.
Haccou P, Melis E: Statistical analysis of behavioural data. 1994, New York: Oxford University Press
Mundry R, Sommer C: Discriminant function analysis with nonindependent data: consequences and an alternative. Anim Behav. 2007, 74: 965-976. 10.1016/j.anbehav.2006.12.028.