
Markerless mouse tracking for social experiments.


Abstract

Automated behavior quantification in socially interacting animals requires accurate tracking. While many methods have been very successful and highly generalizable to different settings, issues of mistaken identities and lost information on key anatomical features are common, although they can be alleviated by increased human effort in training or post-processing. We propose a markerless video-based tool to simultaneously track two interacting mice of the same appearance in controlled settings, for quantifying behaviors such as different types of sniffing, touching, and locomotion, that improves tracking accuracy under these settings without increased human effort. It incorporates conventional handcrafted tracking and deep-learning-based techniques. The tool is trained on a small number of manually annotated images from a basic experimental setup and outputs body masks and coordinates of the snout and tail-base for each mouse. The method was tested on several commonly used experimental conditions, including bedding in the cage and fiberoptic or headstage implants on the mice. Results obtained without any human corrections after the automated analysis showed a near elimination of identity switches and a ∼15% improvement in tracking accuracy over purely deep-learning-based pose-estimation tracking approaches. Our approach can optionally be ensembled with such techniques for further improvement. Finally, we demonstrated an application of this approach in studies of social behavior in mice, by quantifying and comparing interactions between pairs of mice in which some lack olfaction. Together, these results suggest that our approach could be valuable for studying group behaviors in rodents, such as social interactions.

Significance Statement

There is an increasing need for tools that accurately track animals during social interactions. Current state-of-the-art deep-learning-based approaches are highly successful and generalizable to different situations, but commonly require significant human effort in iterative training and refinement to maintain identity assignment of individuals during interactions. Here, we present a new approach that reliably tracks two mice within a constrained set of experimental conditions: a top-down camera view with controlled illumination, bedding in the cage, and fiberoptic and/or headstage implants. Using this approach, we report a near elimination of identity switches and a ∼15% improvement in tracking accuracy over purely deep-learning-based keypoint tracking approaches trained on the same data and without any human correction, albeit within our chosen set of experimental conditions.

Copyright © 2024 Le et al.
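To make the hybrid-tracking idea concrete: a standard handcrafted component for keeping the identities of two identical-looking mice is to match each frame's body-mask detections to the previous frame's identities by minimizing total centroid displacement. The sketch below illustrates that general idea with SciPy's Hungarian solver; it is not the authors' code, and the function name `match_identities` and the toy coordinates are purely illustrative.

```python
# A minimal sketch (not the published tool) of mask-based identity
# maintenance: assign current-frame body-mask centroids to previous-frame
# identities by minimizing the total centroid displacement.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_identities(prev_centroids: np.ndarray,
                     curr_centroids: np.ndarray) -> np.ndarray:
    """For each previous identity i, return the index of the current-frame
    centroid assigned to it (Hungarian algorithm on pairwise distances)."""
    # Pairwise Euclidean distances, shape (n_prev, n_curr).
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1)
    _, col_ind = linear_sum_assignment(cost)
    return col_ind  # col_ind[i] is the current-frame index for identity i

# Two mice across consecutive frames; detection order happens to swap.
prev = np.array([[10.0, 12.0], [40.0, 42.0]])  # frame t-1 centroids
curr = np.array([[41.0, 40.0], [11.0, 13.0]])  # frame t centroids
print(match_identities(prev, curr))            # -> [1 0]
```

Globally optimal assignment of this kind is what keeps a brief crossing of the two animals from permanently swapping their labels, which is the failure mode the abstract refers to as identity switches.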
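Likewise, once snout and tail-base coordinates are available for each mouse, frame-by-frame labels such as nose-to-nose versus nose-to-tail sniffing can be derived from simple keypoint distances. The following sketch assumes a hypothetical 2 cm proximity threshold and calibrated coordinates; the abstract does not specify these values, so treat this as an illustration of the general approach rather than the authors' classifier.

```python
# A minimal sketch (hypothetical, not the published pipeline) of turning the
# four tracked keypoints into a social-sniffing label for a single frame.
import numpy as np

THRESHOLD_CM = 2.0  # assumed proximity cutoff, in calibrated units

def classify_sniff(snout_a, tail_a, snout_b, tail_b) -> str:
    """Label one frame from the snout and tail-base positions of two mice."""
    d = lambda p, q: float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
    if d(snout_a, snout_b) < THRESHOLD_CM:
        return "nose-to-nose"       # snouts in close proximity
    if d(snout_a, tail_b) < THRESHOLD_CM or d(snout_b, tail_a) < THRESHOLD_CM:
        return "nose-to-tail"       # one snout near the other's tail-base
    return "none"

print(classify_sniff([0, 0], [5, 0], [1.5, 0], [6.5, 0]))  # -> nose-to-nose
```

In practice such per-frame labels would be smoothed over time (e.g., requiring a minimum event duration) before counting interaction bouts, but the thresholding above captures the core of distance-based behavior quantification from keypoints.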
