Research led by a University of Montana undergraduate student to identify less error-prone methods for conducting wildlife surveys was published Oct. 20 in Ecological Applications.
Biologists around the world use a variety of boots-on-the-ground field methods to study animal populations. When extrapolated, this information yields population counts and other scientific data used to study and manage species. But counting wildlife is rarely straightforward. Birds, for example, are small or often difficult to see, and many species look and sound alike.
“Many biologists assume that false positives — either misidentifications or double-counts of animals — don’t happen in their surveys. But research has shown that false positives happen quite a lot, and those false positives can have huge impacts on the reliability of population estimates that we calculate from those data,” said first author Kaitlyn Strickfaden, a researcher with UM’s Avian Science Center. “So we made bird call simulations in which we knew the true identity of every calling bird to test differences in false-positive rates in a few survey scenarios.”
Strickfaden and her co-authors tested two observer experience levels (expert and naive) and two survey methods. The first method used a single observer, while the second used two observers working together.
Strickfaden and her team created call simulations composed of songs from 10 different bird species. The researchers knew when each species was calling throughout every simulation. Their volunteer observers – six experts and six novices – did not. These observers listened to the call simulations, either alone or with another observer, and recorded the birds they thought they heard.
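Because the true composition of each simulation was known, an observer's record can be checked against it: a detection of a species that was never actually in the recording is a false positive of the misidentification kind. The short Python sketch below is only a minimal illustration of that scoring idea, not the study's actual code; the species names, detections, and the decision to ignore double-counts are invented for demonstration.

```python
# Minimal sketch: scoring misidentification-type false positives when the true
# species composition of each simulated survey is known. All data are hypothetical.

def false_positive_rate(surveys):
    """Fraction of recorded detections naming a species not truly present."""
    false_pos = 0
    total = 0
    for truth, recorded in surveys:
        for species in recorded:
            total += 1
            if species not in truth:
                false_pos += 1  # observer reported a species absent from the simulation
    return false_pos / total if total else 0.0

# Hypothetical example: in the first simulation a McCown's longspur is misheard
# as a horned lark; the second simulation is recorded correctly.
surveys = [
    ({"McCown's longspur", "western meadowlark"}, ["horned lark", "western meadowlark"]),
    ({"horned lark"}, ["horned lark"]),
]
print(f"False-positive rate: {false_positive_rate(surveys):.2f}")  # 1 of 3 detections -> 0.33
```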
The double-observer method yielded significantly lower false-positive rates regardless of the observers' experience level.
Observer experience was also a significant factor, reaffirming that proper training is critical to minimizing misidentifications during data collection.
The researchers found that error rates varied widely by species. Species with more distinctive songs were misidentified less often than other species in the study. There was also an uneven trade-off in misidentifications within similar-sounding pairs. For example, McCown's longspurs were often misidentified as horned larks, so horned larks were greatly overcounted in the study compared with how many actually occurred, while McCown's longspurs were significantly undercounted.
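The effect of that one-directional confusion on raw counts is simple arithmetic: every longspur song recorded as a lark both inflates the lark tally and deflates the longspur tally. The toy calculation below illustrates the bias; the abundances and the 30% confusion rate are made up for the example and are not the study's results.

```python
# Toy illustration of how asymmetric misidentification biases counts (hypothetical values).
true_counts = {"McCown's longspur": 100, "horned lark": 100}

# Suppose 30% of longspur songs are misheard as horned larks, while lark songs
# are rarely mistaken for longspurs.
confusion = 0.30
misheard = int(true_counts["McCown's longspur"] * confusion)

observed = {
    "McCown's longspur": true_counts["McCown's longspur"] - misheard,  # undercounted: 70
    "horned lark": true_counts["horned lark"] + misheard,              # overcounted: 130
}

for species, count in observed.items():
    bias = count - true_counts[species]
    print(f"{species}: true={true_counts[species]}, observed={count}, bias={bias:+d}")
```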
“We don’t make any claims about what survey method researchers should use since every researcher’s situation is different, but our data do show that the double-observer method was less prone to errors than the single-observer method,” Strickfaden said. “Collecting more accurate data gives us the ability to more accurately estimate population sizes. When we ignore false-positive errors, we may not know when populations are doing poorly and need conservation actions. Our research is a step forward in addressing this problem.”
Strickfaden, who graduated from UM in 2018 with a wildlife biology degree, has worked in the Avian Science Center since 2017. She led this research as her undergraduate senior thesis project.
“Kaitlyn’s persistence and tenacity are admirable. Publishing her undergraduate senior research in Ecological Applications is an outstanding accomplishment and demonstrates her abilities,” said Vicky Dreitz, director of the Avian Science Center and paper co-author. “Kaitlyn had the foresight to develop a project that provides information to avian ecologists, and wildlife biologists and managers, about the level of false positives, a well-known nuance in count-based survey data. We are proud and excited to be part of her accomplishment.”