Navigating light gradients is essential for the survival of many animals. However, we still have a poor understanding of the algorithms that underlie such behaviors. Here, we developed a novel closed-loop phototaxis assay for Drosophila larvae in which light intensity is always spatially uniform but updates depending on the location of the animal in the arena. Even though larvae can only rely on temporal cues during runs, we find that they are capable of locating preferred areas of low light intensity. Detailed analysis of their behavior reveals that larvae turn more frequently, and with larger heading angle changes, when they experience brightness increments over extended periods of time. We suggest that temporal integration of brightness change during runs is an important – and so far largely unexplored – element of phototaxis.

Many animals have evolved behaviors to find favorable locations in complex natural environments. Such behaviors include chemotaxis to approach or avoid chemical stimuli, thermotaxis to find cooler or warmer regions, and phototaxis to approach or avoid light (Gepner et al., 2015; Gomez-Marin and Louis, 2014; Gomez-Marin et al., 2011; Kane et al., 2013; Klein et al., 2015; Luo et al., 2010).

Drosophila larvae are negatively phototactic, preferring darker regions (Sawin et al., 1994). To navigate, larvae alternate between runs and turns. During runs, larvae move relatively straight. During turns, they slow down and perform head-casts (Lahiri et al., 2011) to sample their environment for navigational decisions (Gomez-Marin and Louis, 2012; Humberg and Sprecher, 2018; Humberg et al., 2018; Kane et al., 2013). However, it is unclear whether such local spatial sampling is necessary to perform phototaxis. Zebrafish larvae, for example, can perform phototaxis even when light intensity is uniform across space but changes over time with the animal's position (Chen and Engert, 2014). In a purely temporal phototaxis assay, spatial contrast information is absent, so navigation must depend on other sensory cues.

Previous work indicates that as brightness increases, Drosophila larvae make shorter runs and bigger turns (Humberg et al., 2018; Kane et al., 2013). This is reminiscent of chemotactic strategies, where decreasing concentrations of a favorable odorant increase the likelihood of turning (Gomez-Marin et al., 2011). While it has been shown that temporal sampling of olfactory cues is sufficient to guide chemotaxis (Schulze et al., 2015), it remains unclear whether larvae can use a purely temporal strategy for visual navigation.

Using a virtual landscape in which brightness is always spatially uniform but depends on the location of the animal in the arena, we confirm that larvae can perform phototaxis by modulating run length and heading angle. Our data indicate that larvae achieve this by integrating brightness change during runs (Movie 1).

Experimental setup

All experiments were performed using wild-type second-instar Drosophila melanogaster (Canton-S) larvae collected 3–4 days after egg-laying. This age was chosen to ensure consistent phototactic behavior because older larvae might change their light preference (Sawin-McCormack et al., 1995). Larvae were raised on agarose plates with grape juice and yeast paste, with a 12 h:12 h light:dark cycle at 22°C and 60% humidity. Before experiments, larvae were washed in droplets of deionized water. All experiments were carried out between 14:00 and 19:00 h to avoid potential circadian effects (Mazzoni et al., 2005). Each experiment lasted for 60 min. For all stimuli, animals were presented with constant gray during the first 15 min, allowing them to distribute in the arena.

Larvae were placed in the center of a custom-made circular acrylic dish (6 cm radius) filled with a thin layer of freshly made 2% agarose (Fig. 1A). As previously described (Bahl and Engert, 2020), spatially uniform whole-field illumination was presented via a projector (60 Hz, AAXA P300 Pico Projector) from below. The projected light was white, using three spectrally overlapping standard RGB LEDs, with a wavelength range from ∼420 to ∼650 nm. Brightness was set by the computer as a pixel value ranging from 0 to 255. The corresponding light intensity was measured using an Extech Instruments Light Meter LT300 and ranged from 41 to 2870 lux (Fig. S1A). We did not attempt to linearize this curve as it is unclear how the larval visual system processes contrast. Therefore, for all brightness-dependent behavioral analyses, the original pixel brightness value, as set by the program, was used.

Fig. 1.

Drosophila melanogaster larvae can perform temporal phototaxis. (A) Setup for tracking freely crawling D. melanogaster larvae. (B) Whole-field pixel brightness versus larval position for the valley and control stimuli. (C) Raw trajectories. Dashed circles delineate the bright center, the dark ring and the bright ring. (D) Percentage of time spent in regions (left to right: P=0.045, P=0.001, P<0.001; two-sided t-tests). (E) Crawling speed in regions (left to right: P=0.304, P=0.891, P=0.479; two-sided t-tests). Error bars represent means±s.e.m. Blue solid lines and dots indicate valley stimulus larvae; gray solid lines and dots indicate constant stimulus larvae. N=27 larvae for both groups. Open small circles represent individual animals. n.s., not significant; *P<0.05; **P<0.01; ***P<0.001.

Three virtual light intensity landscapes were tested: a ‘valley’ stimulus, a ‘ramp’ stimulus and a ‘constant’ stimulus. For the valley and ramp stimuli, the spatially uniform brightness (λ) was updated in closed loop according to λ=255·(r−3)²/9 (Fig. 1B; Fig. S1D) and a monotonically increasing profile (Fig. S2A), respectively, where r is the larva's radial distance to the center of the arena. Both profiles ensure that brightness levels near the wall are high, decreasing the edge preference of larvae and reducing boundary effects. For the constant stimulus, brightness values remained gray (λ=128) regardless of the larva's position.
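For illustration, a minimal sketch of how such a closed-loop brightness update could be computed from the tracked centroid position is shown below; the coordinate convention, the assumed arena center and the clipping to the 8-bit range are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def valley_brightness(x_cm, y_cm, center=(6.0, 6.0)):
    """Whole-field pixel brightness for the 'valley' stimulus.

    Coordinates are in cm; the arena center location and the clipping to the
    valid 8-bit range are assumptions for this sketch.
    """
    r = np.hypot(x_cm - center[0], y_cm - center[1])  # radial distance (cm)
    return float(np.clip(255.0 * (r - 3.0) ** 2 / 9.0, 0.0, 255.0))

# At r = 3 cm (the dark ring) the brightness is 0; at the center (r = 0)
# and near the wall (r = 6 cm) it is 255.
```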

For online tracking, the scene was illuminated using infrared LED panels (940 nm panel, 15-IL05, Cop Security). A high-speed camera (90 Hz, USB3 Grasshopper3-NIR, FLIR Systems) with an infrared filter (R72, Hoya) was used to track the larva's centroid position in real time. Eight independent arenas were operated in parallel, making the system medium to high throughput and relatively cost-effective. The position of the animal was determined by spatially filtering the background-subtracted image and then searching for the largest contour. This procedure provides a reliable estimate of the animal's centroid position but cannot determine the precise location of the head or the tail. Using the centroid as a closed-loop position signal considerably simplifies the experimental procedure and is justified because larvae are small relative to the slowly changing and always spatially uniform virtual brightness landscapes. The spatial precision of our tracking was on the order of ±0.01 cm per ∼10 ms, resulting in a nearly noise-free presentation of the stimulus profiles (Fig. S1B). In addition to online tracking, a video of the animal was stored for offline posture analysis (Movie 2).
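As an illustration of this tracking step, a minimal sketch using OpenCV is given below; the thresholding choice, parameter values and function organization are assumptions and not necessarily those of the actual setup.

```python
import cv2

def centroid_from_frame(frame_gray, background_gray, blur_px=11, min_area_px=20):
    """Sketch: background subtraction, spatial filtering, largest-contour search."""
    diff = cv2.absdiff(frame_gray, background_gray)           # background subtraction
    diff = cv2.GaussianBlur(diff, (blur_px, blur_px), 0)       # spatial filtering
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                            # no animal found
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area_px:
        return None
    m = cv2.moments(largest)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]            # centroid (x, y) in px
```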

In our system, the closed-loop latency between the detection of the animal's position and the update of the visual stimulus is 100 ms. This value was determined using the following protocol. Infrared filters were removed from the cameras, allowing direct measurement of the brightness from the projector. Arena brightness starts at a high level but is set to a dark state after a few seconds. When the camera detects this event, the computer sets the brightness back to a high level. The length of the resulting dark period is the closed-loop delay. Using this strategy, the resulting value contains the sum of all delays in the system (camera image acquisition, image buffering, data transport to the USB 3.0 hub, PCI-express to CPU transport, CPU image analysis, command to the graphics card, graphics buffering, and buffering and image display on the projector). It is challenging to reach closed-loop delays below 100 ms with such projector-based systems (Stowers et al., 2017); simpler systems with direct LED control allow for delays as short as 30 ms (Tadres and Louis, 2020).
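The protocol can be summarized in code as follows. This is only a sketch: `get_camera_brightness` and `set_projector_brightness` are hypothetical placeholders for the camera readout and projector command of a given rig.

```python
import time

def measure_closed_loop_delay(get_camera_brightness, set_projector_brightness,
                              dark_threshold=50):
    """Sketch of the latency protocol: switch the arena to dark, react to the
    detected dark frame by switching back to bright, and report the length of
    the dark period seen by the camera (the total closed-loop delay)."""
    set_projector_brightness(255)                  # start bright
    time.sleep(2.0)
    set_projector_brightness(0)                    # step to the dark state
    while get_camera_brightness() > dark_threshold:
        pass                                       # wait until the camera sees dark
    t_detect = time.perf_counter()
    set_projector_brightness(255)                  # computer reacts: back to bright
    while get_camera_brightness() <= dark_threshold:
        pass                                       # wait until bright is seen again
    return time.perf_counter() - t_detect          # dark-period length = loop delay
```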

Control experiments

Notably, animals navigating the constant stimulus were always analyzed as if they navigated the respective experimental stimulus (valley or ramp), using the same binning, naming conventions and analysis methods. For example, control animals that spend time in the ‘dark’ ring (gray open circles in Fig. 1D) actually perceive constant gray during the entire experiment. This analysis was chosen to control for the spatial arrangement of our stimulus and for boundary effects. The clearest example of why this control matters is the turn-triggered brightness change (Fig. 2G): even though control animals always perceive gray, the turn-triggered brightness dynamics reveal a complex dependence on the spatial arrangement of the arena. Only with this control analysis can the dynamics in the experimental group be properly interpreted.

Fig. 2.

Brightness and brightness history modulate navigational decisions. (A) Posture tracking for estimating larval body curvature (angle between solid and dashed blue lines). Turns (orange circles) are curvature peaks above a threshold (30 deg). (B) Example trajectory with detected turns for an inset view (top) and the entire arena (bottom). (C,D) Probability density distributions for turn angles and run length (C) and respective brightness changes (D). (E,F) Turn angle and run length as a function of light intensity (dark: <29; bright: otherwise; see brightness profile, Fig. 1B) and as a function of brightness change since the previous turn (left to right: P=0.004, P=0.010, P<0.001, P=0.006 for the valley stimulus and P=0.289, P=0.018, P=0.066, P=0.221 for the constant stimulus; paired t-tests). (G) Turn event-triggered brightness for the valley and constant stimuli (mean±s.e.m. over all turns from all larvae, n=3153 and n=2981 turns, respectively). As throughout this study, the constant gray control group was analyzed as if it were navigating the valley stimulus. The non-flatness of this curve indicates that the geometry of the dish significantly influences our analysis. Hence, results for the valley stimulus are best interpreted relative to this control. N=27 larvae for both groups. Open small circles and thin solid lines in E and F represent median turn angle and run length for individual larvae. n.s., not significant; *P<0.05; **P<0.01; ***P<0.001.

Data analysis and statistics

All data analysis was performed using custom-written Python code on the 45 min period after acclimatization. To avoid tracking problems and minimize boundary effects, data were excluded whenever larvae were within 0.1 cm of the arena edge.

The circular arena was binned into three concentric regions depending on the radius r: 0–2, 2–4 and 4–6 cm. These regions were named the ‘bright’ center, the ‘dark’ ring and the ‘bright’ ring for the valley stimulus (Fig. 1B) and the ‘dark’ center, the ‘gray’ ring and the ‘bright’ ring for the ramp stimulus (Fig. S2A). Animal speed was computed by interpolating the trajectory to 1 s bins and then taking the average distance between consecutive points (Fig. 1E).
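A sketch of these two analysis steps (region assignment and speed estimate) is shown below; function and label names are illustrative.

```python
import numpy as np

def region_label(r_cm):
    """Assign a radial distance (cm) to one of the three concentric regions."""
    if r_cm < 2.0:
        return "center"        # 'bright' center (valley) / 'dark' center (ramp)
    if r_cm < 4.0:
        return "middle ring"   # 'dark' ring (valley) / 'gray' ring (ramp)
    return "outer ring"        # 'bright' ring for both stimuli

def crawling_speed(t_s, x_cm, y_cm):
    """Resample the trajectory to 1 s bins and average the distance between
    consecutive points, giving the speed in cm/s."""
    t_grid = np.arange(t_s[0], t_s[-1], 1.0)
    x_i = np.interp(t_grid, t_s, x_cm)
    y_i = np.interp(t_grid, t_s, y_cm)
    return np.hypot(np.diff(x_i), np.diff(y_i)).mean()
```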

For the turn event-based offline analysis (Fig. 2), a pose estimation toolbox, DeepPoseKit (Graving et al., 2019), was used. To this end, 100 frames were manually annotated (head, centroid and tail) to train the neural network, which was then used to predict animal posture across all frames from all animals. Body curvature was defined as the angle between the tail-to-centroid vector and the centroid-to-head vector (Fig. 2A). The pose estimation algorithm occasionally had difficulties distinguishing between the head and the tail. However, this problem was not relevant for the curvature measurement as the angle between these two body parts does not change when they are flipped. In a few frames, the algorithm placed the head and the tail at the same location, leading to the transient detection of large body curvatures. These artifacts were removed by low-pass filtering the traces with a Butterworth filter (cut-off frequency: 3 Hz). Turn events were defined as local curvature peaks above 30 deg that were separated from the previous event by at least 2 s in time and 0.2 cm in space. The curvature threshold was chosen such that the identified peaks clearly stood out from the curvature fluctuations between events (Fig. 2A).
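A sketch of this detection step is shown below, using SciPy for the filtering and peak finding; the filter order and the exact peak-search routine are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_turns(curvature_deg, x_cm, y_cm, fs_hz=90.0,
                 cutoff_hz=3.0, threshold_deg=30.0,
                 min_dt_s=2.0, min_dist_cm=0.2):
    """Low-pass filter the body-curvature trace (Butterworth, 3 Hz cut-off),
    find curvature peaks above 30 deg, and enforce the 2 s / 0.2 cm minimum
    separation between consecutive events."""
    b, a = butter(2, cutoff_hz / (fs_hz / 2.0), btype="low")
    smooth = filtfilt(b, a, curvature_deg)
    peaks, _ = find_peaks(smooth, height=threshold_deg,
                          distance=int(min_dt_s * fs_hz))
    accepted = []
    for p in peaks:
        if accepted:
            q = accepted[-1]
            if np.hypot(x_cm[p] - x_cm[q], y_cm[p] - y_cm[q]) < min_dist_cm:
                continue                      # too close in space to previous turn
        accepted.append(p)
    return np.asarray(accepted)               # frame indices of turn events
```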

Turn angles were defined as the angle between the larva's movement direction 2 s before a turn event and 2 s after. Run length was defined as the time between consecutive turn events. Each turn event was labeled as ‘dark’ or ‘bright’, based on the brightness equations and binning described above (dark: pixel brightness less than 29; bright: otherwise), and as ‘darkening’ or ‘brightening’ based on the sign of the brightness change since the last turn event (Fig. 2E,F). As turn events are short and spatially confined, by stimulus design the whole-field brightness change during such events is nearly zero (Fig. 2D). Notably, our curvature-based turn event identification procedure does not allow for precise labeling of the beginning and the end of an event. Therefore, the brightness change during turns was defined as the brightness difference between 0.5 s before and 0.5 s after the event. This time range often includes brief periods of runs, explaining the small residual width of the reported brightness distribution (Figs 2D and 3E). The brightness change during runs was defined as the difference in brightness between two consecutive turn events (Figs 2D and 3E).
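For clarity, a sketch of how such event labels could be assigned is shown below; the data layout and names are illustrative.

```python
def label_turn_events(turn_idx, brightness, fs_hz=90.0, dark_threshold=29):
    """Label each turn as 'dark'/'bright' from the momentary brightness and as
    'darkening'/'brightening' from the sign of the brightness change since the
    previous turn; the run length is the time between consecutive turns."""
    events = []
    for i, idx in enumerate(turn_idx):
        level = "dark" if brightness[idx] < dark_threshold else "bright"
        if i == 0:
            history, run_length_s = None, None
        else:
            prev = turn_idx[i - 1]
            delta = brightness[idx] - brightness[prev]
            history = "darkening" if delta < 0 else "brightening"
            run_length_s = (idx - prev) / fs_hz
        events.append({"level": level, "history": history,
                       "run_length_s": run_length_s})
    return events
```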

Fig. 3.

Simulated larvae perform temporal phototaxis. (A) Characterization of combinations of four potential navigational rules, with a grid search for the parameters run length and turn angle, quantified by a phototaxis performance index. Rule 1: decrease turn angle when it is dark, increase it when it is bright. Rule 2: increase run length when it is dark, decrease it when it is bright. Rule 3: decrease turn angle when it was darkening during the run, increase it when it was brightening. Rule 4: increase run length when it was darkening during the run, decrease it when it was brightening. (B–I) Simulations using only rules 3 and 4, with turn angle and run length multiplier set to one. (B–D) Raw trajectories, percentage of time spent in regions, and crawling speed (as in Fig. 1C–E). Left to right: (C) P=0.181, P<0.001, P=0.015; two-sided t-tests; (D) P=0.531, P=0.651, P=0.665; two-sided t-tests. (E–I) Analysis of turns and runs (same as in Fig. 2C–G). We analyzed the constant gray control group as if it were navigating the valley stimulus. The non-flatness of the constant gray control curve in I corroborates that the geometry of the dish significantly influences our analysis. The inversion of the shape of this control curve compared with experimental data (Fig. 2G) likely originates from an overly simplistic implementation of boundary effects in our model. (G,H) Left to right: P<0.001, P=0.001, P<0.001, P<0.001 for the valley stimulus; P=0.283, P=0.165, P=0.796, P=0.656 for the constant stimulus; paired t-tests. Open circles and thin solid lines in C–I represent individual model larvae. N=50 simulation runs for both groups using different random seeds. n=5331 and n=5334 events in I. n.s., not significant; **P<0.01; ***P<0.001.

Two-sample t-tests were used for pairwise comparisons between the experimental and control data. Paired-sample t-tests were used for pairwise comparisons within groups. Statistics for the linear regression fits (Fig. S3) were based on a bootstrapping approach. To decouple data points while keeping their individual distributions intact, data vectors were randomly shuffled, and the linear regression analysis was then performed on the shuffled dataset. This procedure was repeated 1000 times, creating a null distribution of R² values. Comparing the actual R² obtained from the original dataset with this distribution allowed for the calculation of the probability (P-value) that the observed correlation arose by chance.
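A sketch of this shuffling test, assuming the fits were simple linear regressions on paired data vectors:

```python
import numpy as np
from scipy import stats

def shuffle_r2_pvalue(x, y, n_iter=1000, seed=0):
    """Shuffle one data vector to break the pairing while keeping both marginal
    distributions intact, recompute R^2 each time, and report how often the
    shuffled R^2 reaches the observed one (the P-value)."""
    rng = np.random.default_rng(seed)
    r2_obs = stats.linregress(x, y).rvalue ** 2
    r2_null = np.array([stats.linregress(rng.permutation(x), y).rvalue ** 2
                        for _ in range(n_iter)])
    return r2_obs, float((r2_null >= r2_obs).mean())
```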

Larvae were discarded if they spent more than 99% of the experimental time in a single region or if their speed was zero. All data analysis was performed automatically in the same way for the experimental and control groups.

Modeling

Simulations (Fig. 3, Figs S2E–G and S3C,D) were custom written in Python 3.7, using the high-performance Python compiler numba. Simulations were performed using Euler's method with a timestep of dt=0.01 s. Model larvae were initialized with a random position and orientation. At each time step, larvae stochastically chose one of two possible actions: they could either move forward, with a speed of 0.04 cm s−1 (a parameter taken directly from the experiment, Fig. 1E), or turn. The baseline probability of turning per time step was p=dt/T≈0.00066, computed directly from the experimentally measured average run length of T=15 s (Fig. 2E,F). When making turns, turn angles were drawn from a Gaussian distribution with a baseline standard deviation of 32 deg, matching the experimental value (Fig. 2C,E,F). When model larvae reached the edge, a new random direction vector was chosen, preventing them from leaving the arena.
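A minimal sketch of this baseline update rule (no navigational rules active) is shown below; the boundary handling and code organization are simplified relative to the actual numba implementation.

```python
import numpy as np

DT = 0.01          # s, Euler timestep
SPEED = 0.04       # cm/s, taken from the experiment
RUN_LENGTH = 15.0  # s, baseline average run length (p = DT / RUN_LENGTH)
TURN_SD = 32.0     # deg, baseline s.d. of the turn angle distribution
RADIUS = 6.0       # cm, arena radius

def baseline_step(x, y, heading_deg, rng):
    """One Euler step of a model larva: turn with probability dt/T, otherwise
    move straight; pick a new random heading at the wall."""
    if rng.random() < DT / RUN_LENGTH:
        heading_deg += rng.normal(0.0, TURN_SD)              # turn event
    else:
        x += SPEED * DT * np.cos(np.deg2rad(heading_deg))    # run
        y += SPEED * DT * np.sin(np.deg2rad(heading_deg))
    if np.hypot(x, y) >= RADIUS:
        heading_deg = rng.uniform(0.0, 360.0)                # stay inside the arena
    return x, y, heading_deg
```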

In correspondence with our experimental findings (Fig. 2E,F), the model was equipped with four additional navigational rules (Fig. 3A). Rule 1: when the environment is dark (brightness smaller than 29), turn angles decrease; when it is bright (brightness larger than 29), turn angles increase. Rule 2: when the environment is dark (brightness smaller than 29), run lengths increase; when it is bright (brightness larger than 29), run lengths decrease. Rule 3: when the environment is darkening (change since previous turn smaller than zero), turn angles decrease; when it is brightening (change since previous turn larger than zero), turn angles increase. Rule 4: when the environment is darkening (change since previous turn smaller than zero), run lengths increase; when it is brightening (change since previous turn larger than zero), run lengths decrease.

Changes in absolute turn angle were accomplished by adjusting the standard deviation of the Gaussian turn angle distribution by ±30%, the effect size observed in our experiments (Fig. 2E,F). Under all conditions, we used a mean of zero for the distribution, as turns should go equally often to the left and to the right. Run length (T) was modulated by scaling it by ±30%, thereby modulating the probability of turning (p=dt/T). When combinations of these rules were tested (Fig. 3A), their effects were concatenated.
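Building on the baseline step sketched above, rules 3 and 4 amount to a simple rescaling of the two baseline parameters after each turn; the function below is a sketch of that logic, not the exact implementation.

```python
def apply_rules_3_and_4(brightness_change, effect=0.30,
                        base_sd=32.0, base_run=15.0):
    """Rules 3 and 4: after a run in which brightness increased, turn angles
    become larger (broader Gaussian) and runs shorter (higher turn probability);
    after a darkening run, the opposite. Returns the modulated turn-angle s.d.
    and run length to use until the next turn."""
    if brightness_change > 0:                                  # brightening run
        return base_sd * (1 + effect), base_run * (1 - effect)
    return base_sd * (1 - effect), base_run * (1 + effect)     # darkening run
```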

A performance index (PI) (Fig. 3A) was used to characterize how well animals or models performed temporal phototaxis. The metric was based on the difference between the experimental and control group for the fraction of time spent in the dark ring. To compute this value, bootstrapping was used to average 1000 samples of randomly chosen differences between experimental and control conditions.
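A sketch of this metric, assuming per-animal fractions of time spent in the dark ring are available as arrays:

```python
import numpy as np

def performance_index(frac_dark_exp, frac_dark_ctrl, n_boot=1000, seed=0):
    """Average 1000 randomly sampled differences between experimental and
    control animals in the fraction of time spent in the dark ring."""
    rng = np.random.default_rng(seed)
    diffs = (rng.choice(frac_dark_exp, size=n_boot) -
             rng.choice(frac_dark_ctrl, size=n_boot))
    return float(diffs.mean())
```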

For the parameter grid search (Fig. 3A), the absolute turn angle and the run length were varied systematically. To this end, respective baseline parameter values (taken from the experiment; Fig. 2E,F) were changed by scaling them with two multipliers (run length multiplier and turn angle multiplier).

Data generated from model larvae were analyzed and displayed using the exact same scripts that were used to analyze experimental data, allowing for easy comparison between model and animal behavior.

Fly larvae can navigate a virtual brightness gradient

We first asked whether fly larvae can perform temporal phototaxis, i.e. navigate a virtual light landscape lacking spatial contrast information. We placed individual animals in an agarose-filled arena, allowed them to freely explore, and tracked their position in real time (Fig. 1A). We presented spatially uniform light from below, with brightness levels following a quadratic dependence on the larva's distance from the center (valley stimulus, Fig. 1B) or constant gray as a control (constant stimulus). For both groups, we analyzed how animals were distributed across three concentric regions: the bright center, the dark ring and the bright ring. Notably, throughout this study, control animals were always analyzed as if they navigated the experimental stimulus, even though they in fact perceived constant gray. This analysis is important to control for the spatial arrangement of our stimulus and for boundary effects.

Larvae that navigated the valley stimulus spent a significantly higher fraction of time in the dark ring than those that navigated the constant stimulus (Fig. 1C,D; Fig. S2B). This behavior was most pronounced between minutes 10 and 40 of the experiment (Fig. S1F). To verify that this behavior was not an artifact of our specific stimulus design, we also tested a gradient in which brightness increases monotonically (‘ramps’) with radial distance (Fig. S2A) and observed that larvae also navigated to dark regions under these conditions (Fig. S2B,C).

Because larvae lacked spatial brightness cues in our setup, it was unclear which behavioral algorithms they employ. One basic, yet potentially sufficient, algorithm would be to reduce movement in darker regions. However, speed was independent of brightness (Fig. 1E; Fig. S3D), suggesting that larvae employ more complex navigational strategies.

We conclude that D. melanogaster larvae are capable of performing phototaxis in the absence of any spatial contrast cues. We also find that this behavior cannot be explained by a simple brightness-dependent modulation of crawling speed. Hence, larvae must use either the momentary luminance or luminance information across time to modulate other aspects of their sensory-motor decision-making.

Larval temporal phototaxis depends on brightness change over time

In spatially differentiated light landscapes, fly larvae make navigational decisions by sampling brightness differences during head-casts. In our setup, by design, larvae experience no brightness fluctuations during head-casts. Hence, they have to use whole-field brightness or brightness history information to modulate the magnitude and/or frequency of turns. To explore this possibility, we segmented trajectories into runs and turns. We applied a deep learning-based package, DeepPoseKit (Graving et al., 2019), to extract the larvae's head, centroid and tail positions from the experimental video (Fig. 2A; Movie 2). From there, we calculated the animal's body curvature to identify head-casting events and to quantify turn angles and run lengths (Fig. 2A–C).

As expected, brightness changes during the spatially confined turns were negligible compared with those measured during runs (Fig. 2D). To quantify the effect of brightness on heading angles and run lengths, we checked how these parameters varied with the larva's position. During the valley but not the constant stimulus, turns in the dark region led to smaller heading angle changes than in the bright regions (Fig. 2E). Similarly, runs before a turn in the dark region of the valley stimulus were slightly longer compared with runs ending in the bright region. However, this also partly occurred with the constant stimulus, suggesting that the effect might not arise from a visuomotor transformation.

Next, we explored whether brightness history affects behavior. As run lengths were highly variable, ranging from ∼3 to ∼40 s (Fig. 2C), we focused our analysis on the brightness change between consecutive turns. We classified turns by whether larvae experienced a decrease or increase in whole-field brightness during the preceding run. We found that heading angle changes were smaller and that run lengths were longer when larvae had experienced a brightness decrease compared to an increase (Fig. 2F). We did not observe these effects in control animals.

To further quantify the effects of brightness and brightness change on heading angle change, we performed regression analysis directly on individual events (Fig. S3A,B). This analysis revealed a weak positive correlation where turn angles slightly scale with brightness and brightness change.

These observations led us to hypothesize that larvae might integrate information about the change in brightness during runs and that this integration period might span several seconds. To estimate the relevant timescales, we computed a turn event-triggered brightness average (Fig. 2G). We observed that, on average, turns performed in the valley stimulus are preceded by an extended period (>20 s) of brightening, suggesting that long-term brightness increases drive turns.
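A minimal sketch of such an event-triggered average (window length and array layout are illustrative):

```python
import numpy as np

def turn_triggered_brightness(brightness, turn_idx, fs_hz=90.0, window_s=30.0):
    """Cut a fixed window of the whole-field brightness trace around every
    turn event and average across events."""
    half = int(window_s * fs_hz)
    snippets = [brightness[i - half:i + half]
                for i in turn_idx
                if half <= i < len(brightness) - half]   # skip events near the edges
    return np.mean(snippets, axis=0)
```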

In summary, our analysis of turns and runs confirms that, first, brightness levels modulate heading angle change and, second, changes in brightness prior to turns modulate heading angle change as well as run-length.

A simple algorithmic model can explain larval temporal phototaxis

We next wanted to test whether the identified behavioral features are sufficient to explain larval temporal phototaxis. Based on our experimental findings (Fig. 2), we propose four rules as navigational strategies (Fig. 3A). For rules 1 and 2, the instantaneous brightness modulates the heading angle change and run length, respectively. By contrast, for rules 3 and 4, the brightness change since the last turn modulates the heading angle changes and run lengths.

To test these navigational rules, we simulated larvae as particles that could either move straight or make turns. To compare the performances of different models, we calculated a phototaxis index (difference in time spent in the dark ring between experimental and control groups; Fig. 3A). For all permutations of our rules, we explored a set of multipliers for the heading angle change and run length, with a multiplier of 1 corresponding to the experimental averages (Fig. 2E,F). This allowed us to assess the robustness of our model to parameter choice. As expected, with no active rules, the larval distribution was comparable between the valley and constant stimulus. Activating rules 1 or 2 did not improve performance, suggesting that modulation of behavior based on instantaneous brightness is insufficient to perform temporal phototaxis. After activating rules 3 or 4, phototaxis emerged for small run length and large turn angle multipliers. However, for multipliers set to 1, the resulting phototaxis index was weaker than the experimentally observed value (14%). Only when combining rules 3 and 4 did phototaxis performance match the experimental values. Combining all four rules yielded minimal improvements. Therefore, for further analysis, we focused on a combination of rules 3 and 4, with both multipliers set to 1.

Like real larvae (Fig. 1C–E), simulated larvae navigating the valley stimulus spent more time in the dark ring than larvae navigating the constant stimulus (Fig. 3B,C) without modulating speed (Fig. 3D). Furthermore, distributions of turn angle changes, run lengths and brightness changes were comparable to experimental data (compare Figs 2C,D and 3E,F). Residual differences in those distributions are likely due to additional mechanisms used by the animal, such as a refractory period for turn initiation, which we did not incorporate in our model. When we examined the effects of instantaneous brightness and brightness change on turn angle amplitude and run length (Fig. 3G,H), we observed similar patterns as in the experimental data (Fig. 2E,F). As found in experiments (Fig. 2G), turns are preceded by long stretches of increasing brightness (Fig. 3I), supporting our hypothesis that larvae integrate brightness change over several seconds. Moreover, the event-based regression analysis also yielded results in agreement with the experimental data (Fig. S3C,D). Finally, to verify that our model generalizes to other visual stimulus patterns, we simulated larvae exploring the ramp stimulus and observed phototaxis performance comparable to that of real larvae (Fig. S2E–G).

In summary, after implementing our experimentally observed navigational rules in a simple computational model, we propose that the most critical element of larval temporal phototaxis is the ability to integrate brightness change over extended time periods. Modulating turn angle amplitude and run length based on such measurement is sufficient to perform temporal phototaxis.

Using a closed-loop behavioral assay, we show that D. melanogaster larvae find the darker regions of a virtual brightness gradient that lacks any spatial contrast cues. Temporal phototaxis behavioral algorithms have already been dissected in open-loop configurations, where stimuli are decoupled from an animal's actions. Following a global brightness increase, larvae are known to modify both their heading angle magnitude and their run length (Gepner et al., 2015; Kane et al., 2013), which is in agreement with our findings. We were able to demonstrate that these navigational strategies are in fact sufficient for phototactic navigation. Given that brightness fluctuations in our assay are slow and negligibly small during head-casts, we suggest that animals integrate brightness change during runs to make decisions about the strength and timing of turns. Previous work has shown that larvae can navigate olfactory or thermal gradients using only temporal cues (Luo et al., 2010; Schulze et al., 2015). Together with our findings, this should enable future exploration of the shared computational principles and neural pathways across these sensory modalities.

Closed-loop systems are powerful tools to dissect an animal's sensorimotor transformation. They have been employed in many models including adult Drosophila (Bahl et al., 2013), larval zebrafish (Bahl and Engert, 2020; Chen and Engert, 2014; Štih et al., 2019) and Caenorhabditis elegans (Kocabas et al., 2012; Leifer et al., 2011). Recent work in Drosophila larvae used LED-based devices to study closed-loop temporal chemotaxis in virtual optogenetic environments (Tadres and Louis, 2020). Such systems are cheaper and have shorter stimulus refresh times, but cannot easily be used to present animals with spatially differentiated landscapes. By utilizing a projector, our setup overcomes this limitation. With the drawback of slightly longer delays and higher component costs, the ability to present any type of visual stimulus adds important flexibility and versatility.

Future studies could use our paradigm to study, for example, specific behavioral differences between animals navigating a true luminance gradient and animals navigating the exact same gradient virtually. Moreover, our system makes it possible to explicitly investigate navigational strategies that rely exclusively on spatial contrast cues. This has already been achieved in zebrafish larvae (Chen et al., 2021; Huang et al., 2013) by always locking a sharp contrast edge to the center of the animal's head. Testing such stimuli in Drosophila larvae will, however, require more precise real-time position, orientation and posture measurements, improvements that can be added to our setup. The results from such experiments could be used to construct a spatial phototaxis model, which could then be combined with our proposed temporal phototaxis model.

We thank L. Hernandez-Nunez for discussions and reading through an earlier version of the manuscript. We are grateful to F. Engert and A. Samuel and their lab members for discussions and general support.

Author contributions

Conceptualization: M.L.Z., K.J.H., K.V., A.B.; Methodology: M.L.Z., A.B.; Software: M.L.Z., A.B.; Validation: M.L.Z., K.J.H., A.B.; Formal analysis: M.L.Z., K.J.H., A.B.; Investigation: M.L.Z., A.B.; Resources: A.B.; Data curation: M.L.Z., A.B.; Writing - original draft: M.L.Z., K.J.H., K.V., A.B.; Writing - review & editing: M.L.Z., K.J.H., K.V., A.B.; Visualization: A.B.; Supervision: K.J.H., K.V., A.B.; Project administration: A.B.; Funding acquisition: A.B.

Funding

K.J.H. was funded by the Harvard Mind Brain Behavior Initiative. K.V. received funding from a German Science Foundation Research Fellowship (345729665). A.B. was supported by the Human Frontier Science Program Long-Term Fellowship (LT000626/2016). We thank the Zukunftskolleg Konstanz for supporting A.B.

Data availability

Source code for data analysis and modeling is available on GitHub (https://github.com/arminbahl/drosophila_phototaxis_paper).

References

Bahl, A. and Engert, F. (2020). Neural circuits for evidence accumulation and decision making in larval zebrafish. Nat. Neurosci. 23, 94-102.

Bahl, A., Ammer, G., Schilling, T. and Borst, A. (2013). Object tracking in motion-blind flies. Nat. Neurosci. 16, 730-738.

Chen, X. and Engert, F. (2014). Navigational strategies underlying phototaxis in larval zebrafish. Front. Syst. Neurosci. 8, 39.

Chen, A. B., Deb, D., Bahl, A. and Engert, F. (2021). Algorithms underlying flexible phototaxis in larval zebrafish. J. Exp. Biol. 224, jeb238386.

Gepner, R., Mihovilovic Skanata, M., Bernat, N. M., Kaplow, M. and Gershow, M. (2015). Computations underlying Drosophila photo-taxis, odor-taxis, and multi-sensory integration. eLife 4, e06229.

Gomez-Marin, A. and Louis, M. (2012). Active sensation during orientation behavior in the Drosophila larva: more sense than luck. Curr. Opin. Neurobiol. 22, 208-215.

Gomez-Marin, A. and Louis, M. (2014). Multilevel control of run orientation in Drosophila larval chemotaxis. Front. Behav. Neurosci. 8, 38.

Gomez-Marin, A., Stephens, G. J. and Louis, M. (2011). Active sampling and decision making in Drosophila chemotaxis. Nat. Commun. 2, 441.

Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R. and Couzin, I. D. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. eLife 8, e47994.

Huang, K.-H., Ahrens, M. B., Dunn, T. W. and Engert, F. (2013). Spinal projection neurons control turning behaviors in zebrafish. Curr. Biol. 23, 1566-1573.

Humberg, T.-H. and Sprecher, S. G. (2018). Two pairs of Drosophila central brain neurons mediate larval navigational strategies based on temporal light information processing. Front. Behav. Neurosci. 12, 305.

Humberg, T.-H., Bruegger, P., Afonso, B., Zlatic, M., Truman, J. W., Gershow, M., Samuel, A. and Sprecher, S. G. (2018). Dedicated photoreceptor pathways in Drosophila larvae mediate navigation by processing either spatial or temporal cues. Nat. Commun. 9, 1260.

Kane, E. A., Gershow, M., Afonso, B., Larderet, I., Klein, M., Carter, A. R., de Bivort, B. L., Sprecher, S. G. and Samuel, A. D. T. (2013). Sensorimotor structure of Drosophila larva phototaxis. Proc. Natl. Acad. Sci. USA 110, E3868-E3877.

Klein, M., Afonso, B., Vonner, A. J., Hernandez-Nunez, L., Berck, M., Tabone, C. J., Kane, E. A., Pieribone, V. A., Nitabach, M. N., Cardona, A. et al. (2015). Sensory determinants of behavioral dynamics in Drosophila thermotaxis. Proc. Natl. Acad. Sci. USA 112, E220-E229.

Kocabas, A., Shen, C.-H., Guo, Z. V. and Ramanathan, S. (2012). Controlling interneuron activity in Caenorhabditis elegans to evoke chemotactic behaviour. Nature 490, 273-277.

Lahiri, S., Shen, K., Klein, M., Tang, A., Kane, E., Gershow, M., Garrity, P. and Samuel, A. D. T. (2011). Two alternating motor programs drive navigation in Drosophila larva. PLoS ONE 6, e23180.

Leifer, A. M., Fang-Yen, C., Gershow, M., Alkema, M. J. and Samuel, A. D. T. (2011). Optogenetic manipulation of neural activity in freely moving Caenorhabditis elegans. Nat. Methods 8, 147-152.

Luo, L., Gershow, M., Rosenzweig, M., Kang, K., Fang-Yen, C., Garrity, P. A. and Samuel, A. D. T. (2010). Navigational decision making in Drosophila thermotaxis. J. Neurosci. 30, 4261-4272.

Mazzoni, E. O., Desplan, C. and Blau, J. (2005). Circadian pacemaker neurons transmit and modulate visual information to control a rapid behavioral response. Neuron 45, 293-300.

Sawin, E. P., Harris, L. R., Campos, A. R. and Sokolowski, M. B. (1994). Sensorimotor transformation from light reception to phototactic behavior in Drosophila larvae (Diptera: Drosophilidae). J. Insect Behav. 7, 553-567.

Sawin-McCormack, E. P., Sokolowski, M. B. and Campos, A. R. (1995). Characterization and genetic analysis of Drosophila melanogaster photobehavior during larval development. J. Neurogenet. 10, 119-135.

Schulze, A., Gomez-Marin, A., Rajendran, V. G., Lott, G., Musy, M., Ahammad, P., Deogade, A., Sharpe, J., Riedl, J., Jarriault, D. et al. (2015). Dynamical feature extraction at the sensory periphery guides chemotaxis. eLife 4, e06694.

Štih, V., Petrucco, L., Kist, A. M. and Portugues, R. (2019). Stytra: an open-source, integrated system for stimulation, tracking and closed-loop behavioral experiments. PLoS Comput. Biol. 15, e1006699.

Stowers, J. R., Hofbauer, M., Bastien, R., Griessner, J., Higgins, P., Farooqui, S., Fischer, R. M., Nowikovsky, K., Haubensak, W., Couzin, I. D. et al. (2017). Virtual reality for freely moving animals. Nat. Methods 14, 995-1002.

Tadres, D. and Louis, M. (2020). PiVR: an affordable and versatile closed-loop platform to study unrestrained sensorimotor behavior. PLoS Biol. 18, e3000712.

Competing interests

The authors declare no competing or financial interests.

Supplementary information