Archived Comments for: Experiments in no-impact control of dingoes: comment on Allen et al. 2013

  1. Reply to the criticism by Johnson et al. (2014) on the report by Allen et al. (2013)

    Benjamin Allen, The University of Queensland

    26 February 2014

    Reply by Benjamin L. Allen, Lee R. Allen, Richard M. Engeman, Luke K-P. Leung

    Summary

    We recently reported summary results from a series of applied top-predator manipulation experiments conducted across Australia since the early 1990s (Allen et al. 2013, Frontiers in Zoology 10:39). These experiments permitted the highest level of inference achievable in open rangeland areas and collectively comprise the second-largest predator manipulation experiment conducted on any species anywhere in the world. Johnson et al. (2014) have criticised these experiments, claiming to identify several “serious problems” with our subsequent analyses which, they say, mean that “all inferences that Allen et al. draw from their results should be regarded as unreliable”. Here, we demonstrate that their claims are insignificant, being based on misrepresentations of our stated methods, results and conclusions. We clarify 10 issues they raise in order of their appearance.

    Introduction

    The ecological roles of top-predators can include the suppression of sympatric mesopredators and prey, which can have cascading effects on ecosystem structure [1]. Human impacts on top-predators, such as lethal control, might therefore have indirect consequences that may include mesopredator release [2, 3]. This topic is widely discussed in ecology, and was recently the subject of a series of applied manipulative experiments we summarised in Allen et al. [4]. In these experiments, we systematically monitored populations of Australian dingoes (Canis lupus dingo and hybrids), red foxes (Vulpes vulpes), feral cats (Felis catus) and goannas (Varanus spp.) over several years in places where lethal dingo control (broad-scale poison-baiting) was or was not applied. Predator populations were sampled in a standardised way over time using passive tracking indices (or PTI) within a manipulative experimental design that included pre- and post-baiting surveys in large paired baited and unbaited areas. Our primary interest was to determine whether or not contemporary dingo control practices resulted in a release of mesopredators over time. We analysed our data in a logical 3-Step approach using a variety of common procedures, reporting only the most informative summary results from all of the analyses we performed on these data.

    Johnson et al. [5] have criticised our analyses, claiming to identify several “serious problems” which, they say, mean that “all inferences that Allen et al. draw from their results should be regarded as unreliable”. Most of the concerns Johnson et al. [5] raise about our experiments appear to stem from their mistaken impression that it was our view that “baiting suppressed the activity of all medium-sized and large vertebrate predators”. But this is the exact opposite of what we concluded. In truth, the assertions of Johnson et al. [5] are the same as our own, that our data show that contemporary dingo control has little (if any) lasting effect on dingo, fox, cat or goanna populations. Nevertheless, Johnson et al. [5] argue that re-analyses of our data are required to substantiate this conclusion. In doing so, Johnson et al. [5] make many incorrect claims about our experiments. We address these claims below in order of their appearance.

    Responses to concerns raised by Johnson et al. [5]

    1.       The number of sites used and the duration of the study

    Johnson et al. [5] claimed that we “worked on six cattle stations” and that our baiting regimes were conducted in the baited treatment area “twice per year for two to three years”. This is incorrect, and suggests a misunderstanding of our methods.

    We stated that baiting regimes were conducted at various times throughout the year at different sites, and for over four years at some sites (Table 6 in Allen et al. [4]). We also reported the results from nine separate experimental sites (six of our own and three from Eldridge et al. [6]; where treatments were randomly assigned) and an additional three quasi-experimental sites each involving multiple properties (where treatments were not randomly assigned). Our interpretations and overall conclusions were drawn from all of the results obtained from all of these 12 sites, not just the six targeted by Johnson et al. [5].

    2.       Comprehension of the analytical logic

    Johnson et al. [5] primarily criticise the interim results of only Step 1 and Step 2 in our logical 3-Step analytical approach. In doing so, they mistake our interim results for our overall conclusions. Comprehension of the analytical logic is first required to appreciate the results of a given analysis in light of all the other analyses performed. To be clear, we approached the analysis in a 3-Step sequence, as follows:

    • Step 1 – Are there differences in overall mean predator PTI between treatment areas?
    • Step 2 – Are there short-term responses in predator PTI between pre- and post-baiting periods?
    • Step 3 – Are there negative relationships between dingo and mesopredator PTI trends over time in each treatment?

    Given an observed difference in overall mean PTI between treatment areas (of which there were several; Table 1 in [4]), Johnson et al. [5] quite rightly point out from these interim results of Step 1 that a variety of factors may explain these differences, which may not necessarily have been caused by dingo control. We agree, notwithstanding that treatments were randomly assigned and thus provide a powerful indication of what is common or not between sites [7]. But this was only Step 1, and we went on to actually measure the short-term responses of predators to control in the repeated pre- and post-control assessments of Step 2. Doing so suggested that short-term population changes did not always occur (Tables 3 and 4, Fig. 3 in [4]). Johnson et al. [5] agree, pointing out that such analyses should have detected suppression of predators negatively affected by baiting. Dingo, fox and cat PTI did indeed show short-term declines in baited areas (Table 4 in [4]), but similar changes in unbaited areas (for dingoes and cats, but not foxes) meant that repeated measures ANOVA failed to detect demonstrable time × treatment interactions (Table 3 in [4]; see additional comments below in Point 5). We stated in Allen et al. [4] that this result could be due to rapid reinvasion of baited areas or a true lack of baiting impact on populations. Johnson et al. [5] then used this interim result from only Step 2 to incorrectly claim that our overall conclusions were unsupported. However, consideration of all of our analyses revealed that lethal dingo control, as routinely applied, cannot be responsible for any observed population trends in mesopredator numbers (Step 3) because dingo populations were not always reduced by baiting (Step 2) and observed differences in mean PTI between treatments (Step 1) may be influenced by factors other than baiting. Thus, our overall conclusion (that dingo control-induced mesopredator releases did not occur) still holds both in spite of and because of the issues raised by Johnson et al. [5].
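
    To make the logic of these three steps concrete, the minimal sketch below uses entirely hypothetical PTI values; for brevity, simple two-sample and paired t-tests and a Pearson correlation stand in for the repeated measures ANOVA and trend analyses actually reported in Allen et al. [4], so it illustrates the reasoning rather than reproduces our analysis.

```python
# A minimal sketch (hypothetical PTI values only) of the 3-Step analytical logic.
# Simpler two-sample and paired t-tests and a Pearson correlation stand in here
# for the repeated measures ANOVA and trend analyses reported in Allen et al. [4].
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
surveys = 14  # hypothetical number of standardised surveys per treatment area

# Invented dingo and fox PTI values from paired baited and unbaited areas.
dingo_baited   = rng.gamma(2.0, 0.05, surveys)
dingo_unbaited = rng.gamma(2.0, 0.06, surveys)
fox_baited     = rng.gamma(1.5, 0.02, surveys)

# Step 1: do overall mean PTI values differ between treatment areas?
_, p1 = stats.ttest_ind(dingo_baited, dingo_unbaited)
print(f"Step 1 (dingo means, baited vs unbaited): p = {p1:.3f}")

# Step 2: are there short-term pre- vs post-baiting responses?
pre = rng.gamma(2.0, 0.05, 8)             # invented pre-baiting PTI per event
post = pre * rng.uniform(0.3, 1.2, 8)     # variable per-event knock-down
_, p2 = stats.ttest_rel(pre, post)
print(f"Step 2 (pre vs post baiting, dingoes): p = {p2:.3f}")

# Step 3: are dingo and mesopredator PTI trends negatively related over time?
r, p3 = stats.pearsonr(dingo_baited, fox_baited)
print(f"Step 3 (dingo vs fox PTI trend, baited area): r = {r:.2f}, p = {p3:.3f}")

# A control-induced mesopredator release requires baiting to suppress dingoes
# (Step 2) AND mesopredator trends to rise as dingo trends fall (Step 3);
# a Step 1 difference alone cannot attribute cause.
```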

    3.       Pre- and post-baiting comparisons

    Johnson et al. [5] claim to identify a “serious problem” with our experiments because we “did not make” before-after baiting comparisons. This is self-evidently untrue, because later in their comment Johnson et al. [5] go on to critique the many before-after comparisons we did indeed make (Tables 3 and 4, Fig. 3 in [4]; see also Point 5 below).

    As mentioned above, Step 2 in our analytical approach was solely focused on comparing predator indices obtained before baiting to predator indices obtained shortly after baiting. Demonstrating dingo control-induced releases of mesopredators (Step 3) first requires demonstrating some actual effect of baiting on dingo populations. Unlike our experiments, which did this in Step 2, most other studies purporting to show the responses of dingoes and sympatric predators to dingo control do not do this, but rather stop at Step 1, including the studies of Brook et al. [8], Letnic et al. [9], Wallach et al. [10] and many others [11]. Despite what they might claim or how others might regard them, studies of such inferentially limited experimental designs have no capacity whatsoever to identify control-induced changes in predator indices. At best, they can only demonstrate differences between the two areas without identifying the causes of those differences [7], just as Johnson et al. [5] go to great lengths to point out when they criticise our interim results from the treatment comparisons we made in Step 1. Manipulative experimental designs that do measure the actual effects of baiting on predators (such as Allen et al. [4] or Eldridge et al. [6]) have a far greater capacity to identify baiting-induced mesopredator releases if they occur. This is indisputable.

    4.       Provision of raw data

    Johnson et al. [5] requested our raw data, but then claimed that we “were unwilling to provide it.” This statement does not reflect the true circumstances of their request.

    Johnson et al. [5] requested a copy of the data used to generate Fig. 5 (in [4]) “for the purpose only of running those quantile regressions” associated with that figure (Personal email communication from Chris Johnson to Ben Allen, Rick Engeman, Lee Allen and Luke Leung; 15th July 2013). Notwithstanding the inability of such analyses to generate results that would change or improve our conclusions (as we discussed at length in [4]), we responded that we had already undertaken or were in the process of undertaking several other analyses (including quantile regressions), and that “the additional analyses you're enquiring about either didn’t yield anything 'significant' or are already well on their way to publication” (Personal email communication from Ben Allen to Chris Johnson, Rick Engeman, Lee Allen and Luke Leung; 22nd July 2013). Nevertheless, we added that we were indeed willing to freely provide a copy of our raw data after a simple data-sharing agreement had been developed. This was necessary because associated research was still in progress, and the intellectual property (data) was also jointly owned by several collaborating government and non-government organisations. We also personally offered to progress their data request at an upcoming meeting to be held later that week. Our response also addressed the other points we raise here. The reply we received: “Don’t worry about pushing for data-sharing. My feeling is that re-analysis might not change the results all that much” (Personal email communication from Chris Johnson to Ben Allen, Rick Engeman, Lee Allen and Luke Leung; 23rd July 2013). Consequently, we did not progress their request and did not provide the data.

    5.       Short-term responses of predators to baiting

    Johnson et al. [5] claimed that our own analyses refute the conclusion “that baiting produced rapid reductions in dingo activity, but that dingo activity recovered towards pre-baiting levels, presumably due to immigration.” This claim is evidently only speculative, but addressing it requires a deeper understanding of our experiments.

    In this part of our analyses (Step 2), we compared pre-baiting predator indices to post-baiting predator indices. To do this we pooled the results from several baiting episodes conducted in different seasons and years, separately for each site and species. Thus, we presented only summary data in Allen et al. [4]. However, when individual baiting events are viewed separately, it is clear that baiting programs produced variable responses by predators. In some cases, dingo activity was eliminated completely by baiting; in others, dingo activity increased substantially [12-14]. This variability is captured in the 95% confidence intervals shown in Fig. 3 of Allen et al. [4] and is the primary reason why overall ANOVA results showed that baiting does not always produce detectable short-term declines in activity for canid species known to be killed by baiting – especially when our post-baiting surveys were conducted up to four months after baiting (perhaps allowing sufficient time for reinvasion). Seasonal variation in the magnitude of knock-downs is commonplace for many vertebrate pests subject to lethal control, including dingoes, foxes and cats [7, 15, 16]. Hence, when averaged over several baiting events (as we did in Step 2 of Allen et al. [4]), it should come as no surprise that baiting did not appear to consistently reduce dingo or fox activity [17]. We concede that our conclusion on this point might not be very clear from the summary data we presented and that greater detail might have been useful for supporting this conclusion. But we stress that caution must be exercised when attempting to re-interpret summary data from others’ work. Misinterpreting summary data on this exact topic has previously led to unsupported criticisms of others’ work by Johnson and Ritchie [18], criticisms that were subsequently corrected by the original authors [19, 20].
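
    To illustrate the pooling point with invented numbers only (not our data): when responses to individual baiting events vary widely, the pooled average change can sit near zero even though several events clearly knocked down activity.

```python
# Illustration with invented numbers: when per-event responses vary widely,
# the pooled (averaged) pre- to post-baiting change can be close to zero
# even though several individual baiting events produced large knock-downs.
import numpy as np

# Hypothetical percentage changes in dingo PTI across individual baiting events.
per_event_change = np.array([-90.0, -70.0, -15.0, 10.0, 45.0, 90.0])

print("Per-event change (%):", per_event_change)
print("Mean change (%):", per_event_change.mean())       # close to zero
print("SD of change (%):", per_event_change.std(ddof=1)) # very wide spread
# Wide 95% confidence intervals (as in Fig. 3 of [4]) reflect exactly this
# variability, which pooling across seasons and years necessarily averages over.
```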

    6.       Normality and the ecological meaning of zero values

    Johnson et al. [5] claim that because few predators were detected in some surveys (producing low or zero predator PTI values) our data are skewed and should have first been transformed or normalised, and “for this reason alone, all inferences that Allen et al. draw from their results should be regarded as unreliable”. This is misleading, and ignores the most important ecological interpretation of our experiments.

    It is true that few predators were detected during some surveys [4], and that the data for some predators were sometimes skewed or not normally distributed. That the data are ‘approximately normally distributed’ is one of the assumptions of repeated measures ANOVA. However, repeated measures ANOVA is very robust to deviations from normality, with non-normal distributions seldom affecting the overall outcomes or interpretations [21-23]. Thus, re-analysing our data after the transformations suggested by Johnson et al. [5] is highly unlikely to change our conclusions, as Johnson et al. [5] acknowledge in their personal correspondence (see Point 4 above). Violating the assumption of normality is essentially a non-issue for large sample sizes due to the Central Limit Theorem [24], and “the relevant question is not whether ANOVA assumptions are met exactly, but rather whether the plausible violations of the assumptions have serious consequences on the validity of the probability statements” associated with the analyses ([21] pg. 237). For relatively small sample sizes, such as ours, severe deviation from normality can lower p values, increasing the probability of Type I errors (false positives). We used repeated measures ANOVA to assess mean overall differences in predator PTI between treatments (Step 1) and short-term responses of predators to baiting (Step 2) [4]. Preliminary Anderson-Darling tests [25] showed that the assumption of normality was met in some cases but not others (B. Allen, unpublished data). Thus, to argue (as did Johnson et al. [5]) that our results are seriously affected by violating the assumption of normality is to argue that demonstrable differences in (1) mean predator PTI between treatments and (2) short-term baiting-induced predator suppression or releases are even less likely than what we report. This criticism does not diminish, but rather strengthens, our overall conclusion that baiting could not be responsible for observed predator PTI trends.
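
    For readers unfamiliar with the preliminary checks mentioned above, the sketch below shows the general form of an Anderson-Darling normality screen on hypothetical index values; the log(x+1) transformation shown is a generic example, not a prescription for these data.

```python
# Sketch of the kind of preliminary Anderson-Darling normality screen described,
# applied to invented, right-skewed index values and a generic log(x+1) transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pti = rng.exponential(scale=0.05, size=30)   # hypothetical right-skewed PTI values

for label, x in [("raw", pti), ("log(x+1)", np.log1p(pti))]:
    result = stats.anderson(x, dist="norm")
    # Compare the A2 statistic against the critical value at the 5% level.
    idx = int(np.argmin(np.abs(result.significance_level - 5.0)))
    crit = result.critical_values[idx]
    verdict = "consistent with normality" if result.statistic < crit else "non-normal"
    print(f"{label}: A2 = {result.statistic:.3f}, 5% critical = {crit:.3f} -> {verdict}")
```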

    These results of our conservative analytical approach are intuitive, and are easily recognisable simply by viewing the predator PTI trends (including all the zero values) in Fig. 2 or Fig. 6 of Allen et al. [4]. This was also acknowledged by Johnson et al. [5], who stated: “just looking at the data I also get the feeling that re-analysis with more appropriate methods wouldn’t make much difference to the conclusions” (Personal email communication from Chris Johnson to Ben Allen, Rick Engeman, Lee Allen and Luke Leung; 23rd July 2013). Data manipulation or transformation cannot make such non-diverging or parallel population trendlines appear to diverge, nor can it fabricate mesopredator releases by making predators that were present in low abundance appear to have been abundant. Johnson et al. [5] have thus ignored the most salient ecological interpretation of our experiments: mesopredators were present at the beginning of our experiments in both treatments (often in low numbers), and mesopredators were still present at the end of our experiments in similar numbers despite years of baiting (i.e. no baiting-induced mesopredator release).

    7.       Supporting speculation with invalid comparisons of indices between species

    Johnson et al. [5] claimed that fox indices were “consistently low at all of [our] sites” and “were typically 5-10% of those measured for dingoes”. Johnson et al. [5] then appeal to the snap-shot study of Letnic et al. [9] – which was conducted over a few days only at one of our sites, Quinyambie, just prior to commencement of our experiment at that site – to support their speculations that dingoes might indeed have been suppressing foxes at that site. These claims are invalid and should be disregarded.

    With regard to comparing dingo and fox indices, every good textbook on the subject indicates that contrasting the size of index values between species is completely invalid (e.g. [26-29]; but see also [30, 31]). Why Johnson et al. [5] and others (see [31]) continue to do this is unknown. Furthermore, our results from Quinyambie were also obtained from 14 standardised surveys undertaken within a manipulative experimental design, conducted in each season of the year over several successive years, and capturing both above- and below-average environmental conditions, and are therefore based on several orders of magnitude more and better data than that available in the snap-shot study of Letnic et al. [9]. Our results (from Step 3) showed that dingo and fox PTI trends were negatively correlated in the baited area of Quinyambie only, but were not correlated in the unbaited area (Table 5 in Allen et al. [4]). Moreover (with data that were normally distributed for both predators), results from Step 1 showed that both fox and dingo indices in the unbaited area were approximately double those in the adjacent baited area, demonstrating that mean fox activity was much greater in areas where mean dingo activity was also much greater (Table 1 in Allen et al. [4]). Given these results, and the quantity and inferential quality of the data underpinning them, we conclude that dingoes and foxes had positive or neutral relationships under normal conditions in unbaited areas (as found in many other long-term and/or manipulative studies [6, 32, 33]), but that baiting might somehow alter this relationship in ways that facilitate negative dingo-fox relationships. We also cannot understand why Johnson et al. [5] would criticise our manipulative experiments for measuring but finding no treatment effect, while appealing to contrary results from snap-shot correlative studies that did not even attempt to measure such baiting effects or provide any data on the timing, frequency, quantity or scale of baiting – studies that instead just assume that observed differences between “baited” and “unbaited” areas are a consequence of baiting (e.g. [8-10]). Such research communication practices hamper on-ground efforts to conserve or control dingoes and should be omitted from discussions about dingo management [19, 20, 34].
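
    A simple numerical illustration of why such between-species comparisons are invalid (the detection rates below are invented, not estimates from our sites): species differ in how readily they are detected on sand plots, so the same underlying abundance can produce very different index values.

```python
# Illustration with hypothetical numbers: identical true densities can yield very
# different track-based index values when species differ in detectability, so the
# relative *size* of indices between species says nothing about suppression.
true_density = {"dingo": 1.0, "fox": 1.0}     # identical invented densities
detectability = {"dingo": 0.60, "fox": 0.06}  # invented per-species detection rates

index = {sp: true_density[sp] * detectability[sp] for sp in true_density}
print(index)  # {'dingo': 0.6, 'fox': 0.06}
# The fox index here is 10% of the dingo index despite equal true densities:
# a low fox:dingo index ratio is not, by itself, evidence of fox suppression.
```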

    8.       Sensitivity of PTI methodology to changes in predator activity

    Johnson et al. [5] claimed that the predator sampling methodology we used (sand-plot tracking) is “relatively insensitive to changes in activity of cats and goannas” and that this is a “problem in claiming responses to baiting by mesopredators”. While insensitivity of indices would be problematic, our study demonstrated no such insensitivity.

    Almost all available studies comparing population trends of dingoes with cats and other mesopredators are derived from sand-plot tracking data, including the studies of Letnic et al. [9], Wallach et al. [10], Johnson and VanDerWal [35], Kennedy et al. [36], Brawata and Neeman [37] and many others [11]. Do Johnson and colleagues’ [5] assertions about the utility of sand-plots mean that all these studies should be automatically discarded as insensitive and problematic too? Passive tracking indices can be an incredibly powerful and sensitive technique for monitoring cats, goannas and a variety of other terrestrial fauna when applied correctly [26, 30, 38]. A closed-form variance estimate is also available to quantify the sensitivity or precision of such data ([39]; but see [30]). Although space considerations necessitated omitting these estimates from our account, calculated variance estimates were typically very tight and supportive of our overall conclusions (and very loose variance estimates would have strengthened our overall conclusions even further). For example, the 95% confidence limits for any given index value for cats during the entire 5-year sampling at Mt Owen were no greater than 7% of the index value, values that were even more precise than those for dingoes [14]. This indicates that the standardised and systematic PTI methodology we used is an extremely precise method of detecting cat presence and activity, as it is for the other species we monitored as well [30].
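
    For concreteness, the sketch below shows how an index of this general kind can be computed from plot-by-night track counts; the bootstrap confidence interval is used here purely as an illustrative stand-in for the closed-form variance estimator of [39], and the counts are invented.

```python
# Sketch: a passive tracking index computed from hypothetical plot-by-night track
# counts, with a bootstrap confidence interval used purely as an illustrative
# stand-in for the closed-form variance estimator of [39].
import numpy as np

rng = np.random.default_rng(2)

# Invented counts of cat track intrusions: rows = survey nights, cols = sand plots.
counts = rng.poisson(lam=0.15, size=(4, 50))

# Index: mean intrusions per plot per night (nightly plot means, then averaged).
nightly_means = counts.mean(axis=1)
pti = nightly_means.mean()

# Bootstrap the nightly means to obtain an approximate 95% CI for the index.
boot = [rng.choice(nightly_means, size=len(nightly_means), replace=True).mean()
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"PTI = {pti:.3f}, approx. 95% CI = ({lo:.3f}, {hi:.3f})")
```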

    9.       Representativeness of our results

    Johnson et al. [5] claimed that our results “are not relevant to areas where broad-scale aerial baiting is employed, or where ground-baiting is coordinated over large areas, or conducted in conjunction with exclusion fencing”. How Johnson et al. [5] could make such a claim is unknown.

    As described explicitly in the methods section of Allen et al. [4] and in the source publications [12-14], seven of the 12 study sites we described were ground-baited over large areas and the remaining five sites were both ground- and aerially-baited. This occurred multiple times each year, year after year, and usually in conjunction with neighbouring properties. The mean size of properties that bait in north Queensland (where several of our study sites were located) is 400 km², and is substantially less elsewhere in Queensland (Queensland Department of Agriculture, Forestry and Fisheries, unpublished data). The size of the baited treatment areas sampled in our experiments ranged from 400 km² to 4,000 km² (Table 6 in [4]). Thus, our baited treatment areas were of similar size to, or up to 10 times larger than, the areas commonly subjected to baiting. Moreover, Todmorden and Lambina (combined area of 11,000 km²) are located towards the centre of a ~124,000 km² area that has been repeatedly and extensively ground-baited for the previous 30 years, and aerially baited prior to that [12]. Johnson et al. [5] did not define what they thought a ‘large baited area’ was, but we think 124,000 km² qualifies. Our results are representative of the beef-cattle rangelands of Australia where ground- and/or aerially-laid baits are routinely distributed in the ways we described over the size of the areas we sampled. Dingo exclusion fencing is typically absent from these areas, and is therefore irrelevant to our study.

    10.   Gross misrepresentation of our conclusions

    Johnson et al. [5] claimed that it was our view that “baiting suppressed the activity of all medium-sized and large vertebrate predators”. But this claim is the opposite of our overall conclusion and is also untrue.

    In fact, our overall conclusions were consistent with the assertions of Johnson et al. [5] that baiting indeed had little enduring effect on any of the four predators we assessed. Dingoes, foxes, cats and goannas were not sustainably suppressed by baiting.

    Competing interests

    The authors declare no competing interests.
