Visual landmarks guide humans and animals including insects to a goal location. Insects, with their miniature brains, have evolved a simple strategy to find their nests or profitable food sources; they approach a goal by finding a close match between the current view and a memorised retinotopic representation of the landmark constellation around the goal. Recent implementations of such a matching scheme use raw panoramic images (‘image matching’) and show that it is well suited to work on robots and even in natural environments. However, this matching scheme works only if relevant landmarks can be detected by their contrast and texture. Therefore, we tested how honeybees perform in localising a goal if the landmarks can hardly be distinguished from the background by such cues. We recorded the honeybees' flight behaviour with high-speed cameras and compared the search behaviour with computer simulations. We show that honeybees are able to use landmarks that have the same contrast and texture as the background and suggest that the bees use relative motion cues between the landmark and the background. These cues are generated on the eyes when the bee moves in a characteristic way in the vicinity of the landmarks. This extraordinary navigation performance can be explained by a matching scheme that includes snapshots based on optic flow amplitudes (‘optic flow matching’). This new matching scheme provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment.

Visual landmarks play a prominent role in guiding insects, such as bees, wasps or ants, to their food sources or nest. According to Collett and Collett, landmarks are used for navigation on different spatial scales (Collett and Collett, 2002). On the way to a food source, honeybees use salient objects as route landmarks to segment the foraging trip by associating with each landmark a local vector representing the distance to the next landmark (Collett et al., 1993; Collett and Baron, 1995; Collett et al., 1996; Collett and Collett, 2002). Furthermore, landmarks are used as a beacon to guide the honeybees' path. As they approach the goal, bees are assumed to learn the appearance of a landmark in the frontal field of view and to associate this stored view with a motor trajectory (Collett and Rees, 1997; Fry and Wehner, 2005). The final approach is then thought to be mediated by comparing the current retinal input with a stored retinotopic representation of the visual scene around the goal (‘snapshot matching’).

Evidence for snapshot matching comes from studies on a variety of hymenopteran species in which, during the training phase, a single landmark or an array of landmarks was positioned close to the goal. When the landmark or the array is enlarged or otherwise transformed in a subsequent test phase of the experiment, the insects search for the goal in locations where their two-dimensional view of the landmarks matches their view at the goal location, as memorised during the training phase [ants (Wehner and Räber, 1979; Harris et al., 2007); bees (Cartwright and Collett, 1983); wasps (Zeil, 1993a)]. Nonetheless, it is still unclear how many and what features are stored in the retinotopic snapshot, whether the relevance of landmark features depends on the specific navigational task and how these features are combined in a spatial representation that is used to locate the goal.

In addition to retinal size and position cues, honeybees also use colour and distance cues to locate a food source (Cartwright and Collett, 1979; Cartwright and Collett, 1983; Cheng et al., 1986; Lehrer and Collett, 1994; Fry and Wehner, 2005). If the food source is surrounded by an array of landmarks, information about landmarks close to the goal is more important than information about more distant ones (Cheng et al., 1987). Other cues such as shape, edge orientation and symmetry (van Hateren et al., 1990; Srinivasan et al., 1993; Giurfa et al., 1996), which are involved in pattern discrimination, might also be relevant to spatial navigation.

Experimental evidence for snapshot matching in insects has inspired several visual homing algorithms for robot navigation (Cartwright and Collett, 1987; Franz et al., 1998; Lambrinos et al., 2000; Zeil et al., 2003; Vardy and Möller, 2005; Möller and Vardy, 2006; Stürzl and Mallot, 2006; Möller, 2009). These algorithms usually make use of a panoramic imaging system that mimics the large visual field of insect eyes, but differ from each other in the amount of visual image processing (e.g. feature extraction and identification of landmarks) and in the way corresponding features are identified and locomotion commands are computed. Vardy and Möller, for instance, use differential methods for finding local matches between intensity images and compute the home vector by pooling local correspondences (Vardy and Möller, 2005). The prominent ‘snapshot model’ is based on honeybee experiments (Cartwright and Collett, 1987). It finds the closest matching contours between a stored snapshot and the actual image, after the image has been segmented into landmarks and background, and generates a homing vector to align these contours.

The segmentation of views into landmarks and background may be difficult in complex and natural environments and may, in fact, be unnecessary (Zeil et al., 2003; Stürzl and Zeil, 2007). Zeil et al. show that the similarities between panoramic images of natural environments decrease smoothly with spatial distance between an observer and the goal location (Zeil et al., 2003). An animal that is sensitive to the similarity of views relative to the memorised view of the goal location could return to this location by maximising the similarities between images [modelled by simple image similarity gradient methods (Zeil et al., 2003)]. Thus, panoramic image similarities can be used for view-based homing in natural environments. Recently, the behaviour of ants and crickets in goal-finding tasks could be explained by ‘image matching’ (Wystrach and Beugnon, 2009; Mangan and Webb, 2009).

Fig. 1.

Honeybee flight arena. (A) Approach flights in a circular flight arena (diameter of 1.95 m; height of 50 cm) towards a perspex feeder surrounded by three landmarks (diameter of 5 cm, height of 20 cm) placed at different distances (10, 20, 40 cm) from the feeder were recorded with three high-speed cameras at 125 frames s–1. The flight arena was covered with a white curtain, and indirect illumination was provided by artificial light sources positioned above and symmetrically around the arena. (B) The floor and walls of the flight arena were covered with a Gaussian blurred random dot pattern; the landmarks during training were covered with a red homogeneous texture. Note that the white curtain surrounding the flight arena was removed for this picture.


In our combined behavioural and modelling approach, we tested the content of the spatial memory of honeybees during complex navigational tasks. Honeybees were trained to locate an inconspicuous feeder surrounded by three cylinders, which we refer to as landmarks. By altering the spatial configuration and landmark texture and monitoring the approach flights to the feeder, we addressed the following questions: What role does the spatial configuration of the landmarks play? Does landmark texture play a role in navigational tasks? In particular, can landmarks be discriminated from the background if they have the same texture as the background? Under these conditions, a landmark can only be detected on the basis of optic flow cues, i.e. if the retinal images of the landmarks and the background are displaced relative to each other as a consequence of the bee's self-motion. By comparing the search behaviour of the bees with model simulations, we show that the goal-seeking behaviour of honeybees cannot be explained by the matching of raw panoramic images if the landmarks have the same texture as the background. Instead, the shape of the bee's search pattern can be explained by the matching of panoramic snapshots based on optic flow amplitudes.

General experimental procedures

Freely flying honeybees (Apis mellifera carnica Pollmann 1879) were trained to collect sugar solution from a transparent feeder that was located in an indoor flight arena at Bielefeld University, Germany. Honeybees were trained to associate a food reward with a constellation of three cylinders placed at different distances around the inconspicuous feeder. The flight trajectories of the honeybees approaching the food source were recorded with three high-speed cameras that allowed us to reconstruct the three-dimensional position and the orientation of the long axis of the bee within the flight arena.

Experimental setup

The circular flight arena had a diameter of 1.95 m (Fig. 1A,B) and was located in a room about 10 m away from the honeybee hive. The wall of the arena was 50 cm high and covered with the same red–white Gaussian blurred random dot pattern as the arena floor (Fig. 1B). The windows of the room were covered and light was provided by artificial light sources. Honeybees entered the flight arena via a small plastic tube, which led the bees from the room window through a small hole in the arena wall directly into the flight arena.

The goal (feeder) was surrounded by three cylinders with a height of 25 cm and a diameter of 5 cm. The cylinders were placed at different distances (10, 20, 40 cm) from the feeder, subtending angles of 120 deg to each other as seen from the feeder. Depending on the experiment, the cylinders were covered either with red paper (which we will refer to as homogeneous red; see Fig. 1B) or with paper bearing the same Gaussian blurred random dot pattern as the arena floor and wall. A drop of sugar solution was placed on the feeder, which was made of an upright perspex cylinder (10 cm high, 2 cm diameter) carrying a perspex disc (0.5 cm high, 4 cm diameter) on top. A dome of white cloth (about 2 m high) surrounded and covered the upper part of the flight arena to prevent bees from using external visual cues. Indirect illumination was provided by eight Dedo-Lights (DLH4, 150 W each; Munich, Germany) placed outside the cloth around the arena and by 50 W halogen lamps from above. All lights were positioned symmetrically with respect to the arena centre.

Approach flights to the feeder were recorded with three synchronised high-speed digital video cameras mounted around the arena. Two high-speed cameras (Redlake MotionPro 500, San Diego, CA, USA) were positioned above the arena (top cameras) along the flight path of the bees from the entrance of the arena to the feeder to provide the position and orientation of the body length axis at 125 frames s–1 with a spatial resolution of 1024 pixels×1024 pixels. The third camera (Lightning RDT, Oakland, NJ, USA) was located on the side of the arena and was equipped with a wide-angle lens to cover the whole field of view of the top cameras (∼2 m2) with a spatial resolution of 1280 pixels×1024 pixels. By using this side view, in combination with the top cameras, we were able to reconstruct the three-dimensional positions of the bees in the arena using stereo-triangulation. From within the arena, the top cameras were only visible through small holes in the white cloth 2 m above the floor of the flight arena. The side camera was visible through a hole in the arena wall. To prevent this camera from being used as a directional cue, three additional black paper discs (diameter of 7 cm) matching the size of the camera lens were placed on the wall at angles of 90 deg to each other. All visual cues that could have provided positional or directional information about the goal location, other than the cylinders, were carefully avoided. However, because we did not change the position of the entry hole, the constant orientation in which the bees entered the arena could be used as a directional cue (Fry and Wehner, 2002).

Training procedure

Honeybees were trained to enter the room through a plastic tube leading directly into the flight arena. Droplets of 50% vol. sugar water were presented as reward on the inconspicuous feeder within the flight arena (see above).

At the beginning of each experiment, honeybees were trained in a stepwise manner, starting with a visible feeder, to find the feeder in relation to three dark red cylinders. Once a constant number of bees were returning to the visible feeder, we replaced it with the inconspicuous perspex feeder. During the experimental period (eight weeks), honeybees were allowed to enter the arena freely for 2–3 h in the morning (‘free training phase’). During the free training phase, the entire cylinder constellation, including the perspex feeder, was moved, without rotation, to four different positions. This was done to prevent the bees from relying on visual odometry to locate the feeder (Srinivasan et al., 2000; Esch et al., 2001; Tautz et al., 2004). Bees were allowed to visit the final (filming) position of the cylinder constellation for 30 min before the start of the recording phase. Honeybees were then marked and tested individually, and only those bees that located the feeder within 3 min of entering the arena were used for recording and testing.

Recording and testing procedure

Recording commenced when a bee made her first visit after marking. The cylinder arrangement (feeder included) was kept at the same position throughout the recording phase, because the visual field of the cameras was too small to cover the other three training positions in the arena. During the recording sessions, only one bee was allowed to enter the arena at a time. The approach flight of each bee to the training or test configuration was recorded until it landed on the feeder. The approach flights were recorded at 125 frames s–1; the data were written to the cameras' internal ring buffers, which allowed us to store the last 32 s prior to landing. As the time between leaving the entrance tube of the flight arena and landing on the feeder was longer than this in some flights, the whole flight duration was measured manually using a stopwatch. Bees were released by switching the lights off and opening the cloth near the window so that the bees could leave through the window and return to their hive. After each visit, the feeder and the arena floor were cleaned with hot water to eliminate potential olfactory cues and a new droplet of 50% sugar solution was placed on the feeder. Each individual bee visited the feeder once every 5–20 min.

Experiment 1: changing the spatial cylinder configuration

Honeybees (N=21) were trained to locate the feeder in relation to three cylinders that were covered with a dark red pattern and served as landmarks (training; see Fig. 1B). We made three recordings using the initial training configuration before manipulating the spatial configuration of the cylinders. Each manipulation was considered a single test. During a test, one landmark was removed and the approach flight to the feeder containing the reward (surrounded by the two remaining cylinders) was recorded. A test of this nature was inserted every fourth trial [following Cartwright and Collett (Cartwright and Collett, 1983)]. Most of the bees were tested three times, each time with a different cylinder removed. The order in which the cylinders were removed varied between bees in a pseudorandom fashion.

Experiment 2: changing the cylinder texture

Honeybees (N=9) were trained to locate the feeder in relation to three cylinders covered with a homogeneous dark red pattern. They were tested with cylinders that were covered with the same random dot pattern that also covered the floor and the wall of the arena. We recorded five flights with the training pattern, then changed the cylinder texture to the random dot pattern and recorded five successive approach flights to the feeder for each bee.

Data analysis

The position of the bee and the orientation of her body length axis were automatically determined in each video frame with the aid of custom-built software (FlyTrace), using standard image processing algorithms (Lindemann, 2006). This was done for all three camera views. For camera calibration and 3-D stereo-triangulation we used the Camera Calibration Toolbox for MATLAB (The MathWorks, Inc., Natick, MA, USA) (Bouguet, 1999). Knowing the relative position of the cameras with respect to each other and the two-dimensional position in each camera view, we were able to reconstruct the three-dimensional position of the honeybee in the arena. The three-dimensional coordinates and the yaw body orientation were low-pass filtered using a second-order Butterworth filter with a cut-off frequency of 20 Hz.
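As an illustration of this filtering step, a minimal Python sketch is given below, assuming scipy is available and using the 125 frames s–1 sampling rate and filter parameters stated above; the function and array names are ours, and the unwrapping of the yaw angle is an implementation detail we assume rather than one described here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FRAME_RATE = 125.0  # camera frame rate (Hz)
CUTOFF = 20.0       # low-pass cut-off frequency (Hz)

# Second-order Butterworth low-pass filter; the cut-off frequency is
# specified relative to the Nyquist frequency (FRAME_RATE / 2).
b, a = butter(2, CUTOFF / (FRAME_RATE / 2), btype='low')

def smooth_trajectory(xyz_yaw):
    """Low-pass filter an (n_frames, 4) array of x, y, z and yaw (deg).

    filtfilt runs the filter forwards and backwards, so the smoothed
    trajectory has no phase lag relative to the raw one.
    """
    data = xyz_yaw.astype(float).copy()
    # Unwrap the yaw angle so that 359 deg -> 0 deg jumps do not
    # produce filtering artefacts (our assumption, see above).
    data[:, 3] = np.rad2deg(np.unwrap(np.deg2rad(data[:, 3])))
    return filtfilt(b, a, data, axis=0)
```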

To compare the spatial search distribution of bees during approach flights, we calculated the probability density of visits. A new visit was counted each time a bee flew through one of the 1024 fields (3 cm×3 cm each) in the field of view of the central top camera. For each field, the mean number of visits over all flights of one bee was calculated and then averaged over all bees.
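The visit statistic can be reproduced with a simple grid count. The Python sketch below (numpy only; the 32×32 grid follows from the 1024 fields of 3 cm×3 cm mentioned above, and all names are hypothetical) counts a new visit whenever the bee enters a field:

```python
import numpy as np

GRID = 32      # 32 x 32 = 1024 fields
FIELD = 0.03   # field size in metres (3 cm)

def visit_map(xy, origin=(0.0, 0.0)):
    """Count the visits of one flight to each 3 cm x 3 cm field.

    `xy` is an (n_frames, 2) array of positions in metres. A visit is
    counted each time the bee enters a field, i.e. consecutive samples
    within the same field count only once.
    """
    idx = np.floor((np.asarray(xy) - origin) / FIELD).astype(int)
    idx = np.clip(idx, 0, GRID - 1)
    counts = np.zeros((GRID, GRID))
    prev = None
    for i, j in idx:
        if (i, j) != prev:       # the bee entered a new field
            counts[i, j] += 1
        prev = (i, j)
    return counts

# Probability density of visits: mean over each bee's flights first,
# then mean over bees, as described above.
# density = np.mean([np.mean([visit_map(f) for f in flights], axis=0)
#                    for flights in flights_by_bee], axis=0)
```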

Computer modelling

From a three-dimensional computer model of the flight arena, images were rendered with the open source graphics engine ‘OGRE’ (www.ogre3d.org). The arena model did not include the illumination of the arena, as all lights were positioned symmetrically. Following the approach of Neumann (Neumann, 2002), six virtual cameras covering the whole viewing sphere were used to account for the very large field of view of insects' eyes. The six rendered camera images were converted to grey-value images and re-mapped to panoramic images I(u,v) of 1 deg pixel–1 angular resolution (in azimuth and elevation), where u is the horizontal and v the vertical image coordinate. This resolution of 1 deg pixel–1 is still finer than the visual resolution of the bee's compound eye, which has been behaviourally estimated to be in the range of 2–4 deg (Horridge, 2003).
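The remapping of the six rendered views onto a panoramic image can be sketched as a generic cube-map-to-equirectangular conversion. The Python code below is a minimal nearest-neighbour version under an assumed face and axis convention; it is not the authors' OGRE pipeline, and in practice the conventions would have to match the renderer.

```python
import numpy as np

def panorama_from_cube(faces, width=360, height=180):
    """Nearest-neighbour remap of six square 90 deg views onto a
    1 deg/pixel equirectangular panorama I(u, v).

    `faces[(axis, sign)]` is an (n, n) grey-value image looking along
    +/- x, y or z; the in-face pixel convention is fixed here only
    for illustration.
    """
    n = next(iter(faces.values())).shape[0]
    # Unit viewing direction of every panoramic pixel.
    az = np.deg2rad(np.arange(width) + 0.5 - 180.0)    # azimuth
    el = np.deg2rad(90.0 - (np.arange(height) + 0.5))  # elevation
    az, el = np.meshgrid(az, el)
    d = np.stack([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)], axis=-1)
    dom = np.argmax(np.abs(d), axis=-1)                # dominant axis
    sgn = np.take_along_axis(np.sign(d), dom[..., None], axis=-1)[..., 0]
    pano = np.zeros((height, width))
    for ax in range(3):
        for sg in (-1.0, 1.0):
            mask = (dom == ax) & (sgn == sg)
            a, b = [i for i in range(3) if i != ax]
            # Perspective projection onto the face: the two remaining
            # direction components, divided by the dominant one, lie
            # in [-1, 1] and map linearly to pixel indices.
            u = d[..., a][mask] / np.abs(d[..., ax][mask])
            v = d[..., b][mask] / np.abs(d[..., ax][mask])
            iu = np.clip(np.round((u + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
            iv = np.clip(np.round((v + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
            pano[mask] = faces[(ax, sg)][iv, iu]
    return pano
```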

To test the hypothesis that bees search mainly in areas where the similarity between the current visual input and an assumed snapshot at the feeder is high, panoramic image similarity maps were calculated for comparison with the spatial search distribution. Image similarity maps were computed by using the (uncentered) correlation coefficient between the panoramic image at the feeder, I_F(u,v), and all images I_{x,y}(u,v) in the arena that lie on a grid with step size of 3 cm:

$$C(x,y)=\max_{s}\;\frac{\sum_{u}\sum_{v=1}^{N_v} w(v)\,I_{x,y}(u+s,v)\,I_{F}(u,v)}{\sqrt{\sum_{u}\sum_{v=1}^{N_v} w(v)\,I_{x,y}(u+s,v)^{2}}\;\sqrt{\sum_{u}\sum_{v=1}^{N_v} w(v)\,I_{F}(u,v)^{2}}}\tag{1}$$

where N_v is the number of pixels in the vertical dimension. The ‘max’ operation in Eqn 1 computes the best match over all orientations s (in pixel steps) between the image at position (x,y) and the snapshot image, assuming that bees have no information about their orientation other than image similarity. Pitch and roll angles are assumed to be held constant and close to zero. The weighting factor w(v) in Eqn 1 compensates for the distortions of the viewing sphere due to the equi-rectangular mapping. All images were rendered at a constant height of z=11 cm above the arena floor (∼5 mm above the feeder).
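A direct transcription of Eqn 1 into Python/numpy might look as follows. The names are ours, and the explicit form of w(v) used here (a sine of the polar angle, a standard solid-angle weight for equi-rectangular images) is our assumption; the text above defines w(v) only by its purpose.

```python
import numpy as np

def solid_angle_weights(n_v):
    """Assumed w(v): solid-angle weight per elevation row of an
    equi-rectangular image (largest at the horizon, zero at the poles)."""
    return np.sin(np.pi * (np.arange(n_v) + 0.5) / n_v)

def eqn1_similarity(snapshot, image, w):
    """Best uncentred correlation over all azimuthal orientations s.

    `snapshot` (the image at the feeder) and `image` have shape
    (n_v, n_u); `w` holds one weight per elevation row.
    """
    wcol = w[:, None]
    # Both norms are invariant under azimuthal rotation of the image.
    norm = np.sqrt(np.sum(wcol * snapshot**2)) * np.sqrt(np.sum(wcol * image**2))
    best = -np.inf
    for s in range(image.shape[1]):          # all yaw offsets, 1 deg steps
        shifted = np.roll(image, s, axis=1)  # rotate the view in azimuth
        best = max(best, np.sum(wcol * shifted * snapshot) / norm)
    return best
```

A similarity map is then obtained by evaluating this function for images rendered on the 3 cm grid against the snapshot at the feeder.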

Similarity maps for optic flow amplitudes

Flow fields, f(u,v)=[δu(u,v),δv(u,v)], were computed using the well-known Lucas–Kanade algorithm (Lucas and Kanade, 1981; Barron et al., 1994). For the flow amplitudes at position (x,y), four images at (x±1 mm, y) and (x, y±1 mm) were used, i.e. the translation steps in the x and y directions were δl=2 mm. To generate flow amplitudes F(u,v) that are basically independent of the direction of motion, the flow fields f_x and f_y for the translation steps in x and y were calculated and their squared amplitudes added:

$$F(u,v)=\left\|\mathbf{f}_{x}(u,v)\right\|^{2}+\left\|\mathbf{f}_{y}(u,v)\right\|^{2}\tag{2}$$
In order to avoid erroneously high flow amplitudes for image regions with low contrast or due to occlusions, we used a simple threshold operation limiting the flow amplitudes to 0≤F(u,v)≤1. Subsequently, Gaussian blurring with a standard deviation of 4.25 pixels was applied to the flow amplitude maps.
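A minimal dense Lucas–Kanade implementation, followed by the amplitude computation of Eqn 2 with the thresholding and blurring described above, could look like this in Python. The window size and the handling of ill-conditioned pixels are our assumptions, and wrap-around of the panorama in azimuth is ignored for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def lucas_kanade(img0, img1, window=5):
    """Dense Lucas-Kanade optic flow between two grey-value images.

    Solves the local 2x2 least-squares system per pixel over a square
    window; returns the flow components (du, dv) in pixels.
    """
    Ix = np.gradient(img0, axis=1)
    Iy = np.gradient(img0, axis=0)
    It = img1 - img0
    # Window sums of the structure tensor and the mismatch terms.
    sxx = uniform_filter(Ix * Ix, window)
    sxy = uniform_filter(Ix * Iy, window)
    syy = uniform_filter(Iy * Iy, window)
    sxt = uniform_filter(Ix * It, window)
    syt = uniform_filter(Iy * It, window)
    det = sxx * syy - sxy**2
    det = np.where(np.abs(det) < 1e-9, np.inf, det)  # ill-conditioned -> zero flow
    du = (-syy * sxt + sxy * syt) / det
    dv = (sxy * sxt - sxx * syt) / det
    return du, dv

def flow_amplitude(pano, pano_dx, pano_dy):
    """Direction-independent flow amplitude map F(u,v) (Eqn 2).

    `pano_dx` and `pano_dy` are panoramas rendered after the small
    translations in the x and y directions described above.
    """
    dux, dvx = lucas_kanade(pano, pano_dx)
    duy, dvy = lucas_kanade(pano, pano_dy)
    F = dux**2 + dvx**2 + duy**2 + dvy**2    # squared amplitudes added
    F = np.clip(F, 0.0, 1.0)                 # threshold to 0 <= F <= 1
    return gaussian_filter(F, sigma=4.25)    # blur as described above
```

The optic flow similarity maps described next then follow by feeding such F(u,v) maps, instead of grey-value panoramas, into the Eqn 1 machinery sketched earlier.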

Similarity maps for flow amplitudes were computed by simply replacing I(u,v) with F(u,v) in Eqn 1. Because we use the correlation coefficient as a similarity measure, the translation amplitude δl for calculating the optic flow snapshot at the feeder, F_F(u,v), and the flow amplitudes at position (x,y), F_{x,y}(u,v), can be varied without changing the results significantly. This, as well as the fact that F(u,v) is basically independent of the direction of motion, can be seen from the first-order approximations of the image shifts (see Appendix).

Search behaviour during training conditions

We observed that, after entering the arena (Fig. 1A,B), most bees performed a characteristic flight, which consisted of looking back at the entrance and then flying off towards the arrangement of the three cylinders. In the following, we will refer to the cylinders as landmarks, without implying that the bees actually need to identify them as landmarks. After looking back at the entrance, the bees approached the landmark arrangement in a more or less direct way (see flight examples in Fig. 2). This initial part of the flight is not analysed further in this study, which instead concentrates on the localisation of the feeder while the bee flies in the vicinity of the landmarks (see Fig. 2).

Close to the goal, bees usually did not approach the feeder in a straight flight but instead performed circling flights around the landmarks (see below and Fig. 2B, Fig. 7A). Over subsequent approach flights, we did not observe a consistent time-dependent effect on flight duration or the straightness of flights (see Fig. S1 in supplementary material). To determine the areas of interest to the bees during their flights in the arena, we calculated the spatial distribution of visits (for a definition see the Materials and methods section) of honeybees to fields of 3 cm×3 cm (Fig. 3A). The distribution shows not only a peak around the feeder location but also distinct circles around the landmarks.

At first, this might seem a puzzling result, because in a previous study with a similar landmark arrangement, Cartwright and Collett found only a single pronounced search peak at the feeder location, and the honeybees did not tend to search around the landmarks (Cartwright and Collett, 1983). In contrast to our arrangement, Cartwright and Collett placed three landmarks at equal distances from the feeder (Cartwright and Collett, 1983). Assuming that honeybees memorise a snapshot at the feeder location, they would return to the feeder by increasing the similarity between the retinal image and the stored snapshot. Thus, we expect bees to search most frequently at locations where the retinal image is similar to this stored snapshot. To test whether the differences in the snapshot due to a specific landmark setup can explain the different search behaviour of the honeybees, we calculated image similarities (see Materials and methods) between the stored snapshot and images taken at different positions (step size of 3 cm) in the arena, both for our feeder–landmark arrangement and for the arrangement used by Cartwright and Collett (compare Fig. 3B and Fig. 4A). Interestingly, the image similarity maps differ depending on the specific landmark constellation. The image similarity map with three landmarks placed at equal distance from the feeder shows only one peak at the feeder location and low similarity around the three landmarks (Fig. 4A). We compared image similarity maps for hypothetical landmark–feeder constellations with the feeder placed stepwise closer to one of the landmarks (Fig. 4B,C). If the feeder position is closer to one of the landmarks, circles of high similarity emerge around the landmarks, becoming more pronounced the closer the goal location is to one of the landmarks. Thus, if the retinal size of the landmarks is not equal at the goal location, the image similarity map shows, in addition to the peak at the goal location, regions of high similarity around each landmark (Fig. 4C, Fig. 3B). Hence, if honeybees tend to search at locations where the retinal image is similar to their memorised snapshot, we would expect a single search peak with the setup used by Cartwright and Collett (Cartwright and Collett, 1983) and high search density around the landmarks with the setup used in our experiments. This prediction corresponds to what was recorded in both studies. We conclude that image similarity maps, assuming the storage of a global image at the goal location, allow us to predict important features of the search distribution of honeybees.

Fig. 2.

Top view of two sample flight trajectories during training conditions. (A,B) The whole flight trajectories of two honeybees are shown from leaving the entrance until landing on the feeder. The food source (black) and the three landmarks (grey) are indicated by circles. The position of the honeybee is indicated by grey circles at each 32 ms interval; straight lines indicate the orientation of the long axis of the bee. Honeybees approached the feeder landmark arrangement in a direct way; close to the feeder the honeybees often showed characteristic search flight manoeuvres (B). To study these search manoeuvres close to the landmarks, we analysed only the final approach to the feeder (area indicated by a black frame in A).


Fig. 3.

Spatial search distributions and image similarity maps in an arena with three landmarks or one of the landmarks removed. (A,C,E,G) To compare the spatial search distribution of bees during approach flights, we calculated the probability density of visits (number of times a bee flew through one of 1024 3 cm×3 cm fields). For each field, the mean value of visits for all the flights of one bee was calculated and then averaged over all bees. (A) The feeder (black) was surrounded by three landmarks (grey). The probability density of visits shows a clear peak around the feeder location but also shows characteristic circles of high density around the landmarks (N=21 bees, n=173 flights). (C,E,G) The spatial search distributions differed when one of the landmarks was removed (removed landmark is indicated in white). The honeybees then restricted their search to the remaining landmarks. (B,D,F,H) Panoramic image similarity maps were calculated to compare with the spatial search distributions. Image similarity maps were computed by using the correlation coefficient between the panoramic image at the feeder and all the images in the arena that lie on a grid with step size of 3 cm at the height of the feeder at an angular resolution of 1 deg pixel–1 (arena size indicated by the white circle). The area of the flight arena that could be analysed for the spatial search distributions of bees (A,C,E,G) is indicated by the white broken rectangle in the image similarity maps. The image similarity maps reflect the search behaviour of the honeybees.


Image similarities were calculated slightly above the feeder (11 cm above the arena floor). This implies that the returning honeybees should fly predominantly at this height to compare the current retinal image with the stored snapshot. The three-dimensional reconstruction of the flight trajectories allowed us to test this assumption by analysing the flight height of the honeybees. The median height for all flights to the training configuration is 11.9 cm (for reference: height of the feeder=10.5 cm, height of the assumed snapshot=11 cm; data obtained from experiment 2; N=9 bees, n=60,746 positions; positions with a horizontal distance <10 cm from the feeder location before landing were excluded from the analysis). The honeybees spent 65% of the flight time within ±7 cm of the height of the feeder.

We verified that our findings do not depend on the specific image similarity function we use (see Materials and methods). Using an image distance function [the sum of squared (pixel) differences] instead of the image similarity function, we get very similar results, i.e. positions with high similarities in the image similarity map correspond to low image distances in the image distance map.
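For comparison, the image distance function mentioned here can be sketched analogously to Eqn 1; whether the analysis minimised over orientations and used the same weighting is our assumption, made for consistency with the similarity measure.

```python
import numpy as np

def image_distance(snapshot, image, w):
    """Weighted sum of squared pixel differences, minimised over all
    azimuthal orientations (the distance counterpart of Eqn 1)."""
    wcol = w[:, None]
    return min(np.sum(wcol * (np.roll(image, s, axis=1) - snapshot)**2)
               for s in range(image.shape[1]))
```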

Changing the spatial landmark configuration

In the first experiment, honeybees (N=21) were trained with three homogeneous red landmarks and tested with one of the landmarks removed. Test trials were inserted every fourth flight. We analysed the effects of removing one landmark on the flight duration and on the spatial distribution of approach flights. Removing a landmark affects the overall search distribution (Fig. 3C,E,G). The clear peak close to the feeder is only present when the far landmark is missing (Fig. 3G). For all three landmark configurations, bees seem to restrict their search to the vicinity of the remaining landmarks. This change in search behaviour could be predicted by the image similarity maps: assuming that bees search at locations of high similarity to a snapshot memorised in an arena with three landmarks, they would indeed search around the remaining landmarks (Fig. 3D,F,H). Furthermore, the mean flight duration to the feeder increases when either the near (N=15) or the middle-range landmark (N=14) is removed (significant in the case of the middle-range landmark: Wilcoxon signed-rank test, P<0.01) but is unaffected when the far landmark is removed (N=14) (Fig. 5). Note that the landmarks differ not only in their distance to the feeder but also in their retinal size when viewed from the feeder. The finding that landmarks that are near the goal and retinally large have a strong effect on the localisation of the goal is consistent with previous studies (e.g. Cheng et al., 1987).

Fig. 4.

Image similarity maps in an arena with three black landmarks and a homogeneous white background similar to the setup used by Cartwright and Collett (Cartwright and Collett, 1983). Image similarity maps were computed by using the correlation coefficient between the panoramic image at the feeder location and all the images that lie on a grid (step size of 3 cm) at feeder height within a circular area (in the size of our flight arena with a diameter of 1.95 m, white circle) at an angular resolution of 1 deg pixel–1. (A) Three black cylinders with a diameter of 4 cm and a height of 40 cm (blue circles) are placed at equal distance (76 cm) from the feeder (F, black circle). (B,C) Image similarity maps for hypothetical landmark–feeder constellations with a feeder (F, black cross) placed stepwise closer to one of the landmarks (27.5 cm and 10.5 cm distance to the centre of the feeder). A snapshot taken closer to one of the landmarks looks similar to the snapshot in our arena with the landmarks placed at different distances from the feeder (see Fig. 1). Now, due to the increased retinal size of one of the three landmarks in the snapshot, circles of high similarity appear around the landmarks.


Changing the landmark texture

In the second experiment, honeybees (N=9) were trained with three homogeneous red landmarks and tested with three landmarks in the same spatial arrangement but covered with the random texture that also covered the floor and the wall of the arena. Five to six flights per bee under each condition were recorded. The mean flight duration (the time from entering the arena until landing on the feeder) did not increase significantly after the change of the landmark texture from homogeneous red to random (N=14; Wilcoxon signed-rank test, P>0.05; see Fig. S2 in supplementary material). In addition, the search distributions do not differ considerably between approach flights to the homogeneous red training texture (Fig. 6A) and to the random test texture (Fig. 6B). With the random texture, the search distribution is slightly broader but still shows the same characteristic features as the search distribution with the homogeneous red landmarks. The peak at the feeder location is similar, and the characteristic circles around the landmarks are also present, although the circle around the far landmark is less pronounced.

To exclude any training effect during subsequent approach flights with the random textured landmarks, we compared the search behaviour during the last flight with homogeneous red landmarks with the search distribution during the next flight with random textured landmarks. The spatial distributions look very much like those shown in Fig. 6A,B and do not differ considerably from each other (data not shown).

Control experiments

We conducted additional control experiments to exclude two alternative explanations for the unchanged search behaviour with the different landmark patterns: (1) that the honeybees used cues other than the landmarks to locate the feeder, and (2) that they were unable to discriminate between the homogeneous red and the random dot pattern. (1) When all landmarks were removed, 50% of the bees did not find the feeder at all, and 80% of the bees that did locate the feeder took longer than one minute to land on it (N=11, data not shown), and thus longer than when the landmark arrangement was at its normal location (see Fig. 5; see Figs S1, S2 in supplementary material). In a different type of test, the landmark arrangement and an additional feeder were shifted to a new location in the arena, whilst one feeder remained at the filming position. All of the bees (N=11) first flew to the landmark arrangement, and 80% of them landed on the feeder surrounded by the landmarks in the new position. These results show that honeybees rely primarily on the landmarks to locate the feeder. To check whether the search distributions are mainly influenced by the presence of the feeder, we tested, in a different study, bees with the same landmark constellation but with the feeder absent (L.D., unpublished). Without the feeder, the search peak at the feeder location is still present. We can therefore conclude that the search distributions are not strongly influenced by the feeder. (2) Honeybees were trained to discriminate between the random dot and the homogeneous red pattern of equal brightness presented as circles (diameter=18 cm) on a horizontally placed 50 cm×50 cm white platform. They were able to discriminate between the rewarded random dot pattern and the homogeneous red pattern (66% correct choices, N=13 bees, n=32 flights, first day of training; 83% correct choices, N=16 bees, n=77 flights, chi-square test against a random choice expectation, P<0.01, after two days of training), even though these test patterns, unlike the patterns used in our landmark experiments (see Fig. 1B), had no brightness difference. This finding makes it very unlikely that the search behaviour remained unchanged with the different landmark patterns simply because the bees were unable to discriminate between the patterns.

Goal-seeking behaviour cannot be explained by image matching

Goal-seeking behaviour seems to be hardly impaired even if the landmark texture is changed to the background texture. To test whether the search distribution of honeybees can be predicted by image matching, image similarity maps were calculated for both landmark conditions (Fig. 6C,D). We calculated image similarity maps at different visual resolutions. In the range of the bee's visual resolution (1–3 deg), the image similarity map is not influenced by the visual resolution of the images (compare Fig. 6D and Fig. S3 in supplementary material). The similarity map for the arena with random textured landmarks shows local maxima scattered over the whole arena (Fig. 6D). If bees navigated solely by increasing the similarity of the retinal image to a memorised snapshot, we would expect them to distribute their search throughout the whole arena, as the similarity is almost equally high everywhere in the flight arena. By contrast, we find that the honeybees restrict their search behaviour to regions close to the landmark and feeder locations. Even if we assume that honeybees switch after one flight to a snapshot containing the landmarks with the random dot texture, the corresponding image similarity map again has local maxima that are not restricted to the region defined by the landmarks (data not shown). Thus, it is very unlikely that the search behaviour is guided by image similarity when landmarks have the same texture as the background.

Fig. 5.

Flight duration when one landmark was removed. The time from the bee entering the arena until landing on the feeder was measured for the last flight before the removal of one landmark (1) and for the next flight with the landmark removed (2). The inset shows the landmark arrangement; the removed landmark is marked with a cross. (A) The near landmark (10 cm) was removed (N=15 bees). (B) The middle-range landmark (20 cm) was removed (N=14). Flight duration is significantly increased (Wilcoxon signed-rank test, P<0.01). (C) The far landmark (40 cm) was removed (N=14). See also Fig. S1 (flight duration for subsequent approach flights) and Fig. S2 (comparison between flight durations with different landmark textures) in supplementary material.


Goal-seeking behaviour can be explained by optic flow matching

So far, we have only taken into account snapshots of static images of the scene surrounding the feeder when trying to explain the bees' ability to localise their goal [in agreement with Cartwright and Collett's earlier studies (Cartwright and Collett, 1983; Cartwright and Collett, 1987)]. However, as we have seen, snapshots of static images fail to explain goal finding when the texture on the landmarks is identical to that of the background. This finding is not surprising, because both landmarks and background are covered with the same texture, and a landmark can hardly be discriminated from the background unless its retinal image moves relative to that of the background. The use of motion parallax cues has been shown in earlier studies to be relevant in the context of landing behaviour, pattern discrimination and navigation (Lehrer et al., 1988; Lehrer, 1993; Lehrer and Collett, 1994; Zeil, 1993a; Zeil, 1993b; Srinivasan et al., 2000; Lehrer and Campan, 2005) (for a review, see Srinivasan and Zhang, 2004). As such relative motion cues are likely to be available to the honeybees during their flights in the arena, we computed snapshots on the basis of optic flow amplitudes. Optic flow was generated by simulated translations of 2 mm in the flight arena (see Materials and methods). The similarity maps for optic flow amplitudes are very similar for both landmark textures (Fig. 6E,F). In particular, a peak at the feeder location and the circular areas of high similarity around the landmarks are obvious. We conclude that a bee can locate the feeder by matching optic flow snapshots, which implicitly contain information about the depth structure of a scene, even when the landmarks are covered with the same texture as the background. We verified that the optic flow matching scheme explains the search distributions not only when the landmark pattern is changed but also when one landmark is removed: the optic flow similarity maps with one landmark removed (data not shown) are similar to the image similarity maps shown in Fig. 3D,F,H.

Characteristic flight behaviour of honeybees

Do honeybees employ a flight strategy that facilitates the extraction of depth information from optic flow? The potential relevance of motion cues is emphasised by characteristic flight manoeuvres that can frequently be observed near the landmarks. These flight manoeuvres contain a strong sideward component (Fig. 7A). Flying sideways in front of the landmarks can be useful, as it generates a strong translatory optic flow field in the frontal field of view that is directed onto the landmark. Depending on the characteristics of self-motion during flight, honeybees generate different kinds of optic flow. The rotational optic flow component is independent of the distance to environmental objects such as the landmarks. By contrast, the translational optic flow component depends on distance and thus contains spatial information (Koenderink, 1986). In Fig. 7B, the body orientation (yaw) angle is shown for a honeybee during an approach flight to the feeder under training conditions. The yaw angle shows characteristic steps, which indicate that honeybee flight consists of phases of fast turns and phases of more or less straight flight. Nevertheless, the orientation of the body is not kept perfectly constant during these relatively straight phases. A comparison of body and head orientation during such flights reveals that the head direction is almost perfectly stabilised during these flight phases (Boeddeker et al., 2010). This finding suggests that honeybees experience a predominantly translational optic flow field between brief high-velocity saccadic turns, which may well be exploited to extract relative motion and, thus, distance cues from the optic flow pattern.

In Fig. 7C,D, the angle between flight direction and body orientation (alpha) is plotted at different locations in the arena. Large alpha angles indicate sideward movements of the honeybees, which occur mostly at locations close to the landmarks and the feeder. To quantify this effect, we defined two circular areas at two distances around the near landmark and calculated the alpha angle for each position of the trajectories. The median alpha angle close to the landmark is 23.1 deg (in a ring with a radius larger than 50 mm and smaller than 150 mm from the centre of the landmark; n=7,343; angles >135 deg were excluded because these indicate backward movements). In a more distant area around the landmark, we find a median alpha angle of 14.2 deg (radius larger than 150 mm but smaller than 200 mm from the centre of the landmark; n=24,502; positions close to the next landmark and angles larger than 135 deg were excluded). The alpha angle is significantly larger close to the landmark (Wilcoxon rank-sum test, P<0.001). This is true for flights in the arena with landmarks covered with the homogeneous red texture and with the random dot texture (compare Fig. 7C,D).
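The alpha angle used in this analysis can be computed directly from the reconstructed trajectories; a sketch in Python (array names are hypothetical, and flight direction is approximated from frame-to-frame displacements):

```python
import numpy as np

def alpha_angles(xy, yaw_deg):
    """Unsigned angle between flight direction and body long axis (deg).

    `xy` is an (n_frames, 2) position array, `yaw_deg` the body (yaw)
    orientation per frame; the flight direction is taken from the
    displacement between consecutive frames.
    """
    heading = np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0]))
    body = np.deg2rad(yaw_deg[:-1])
    # Wrap the difference to (-180, 180] deg before taking the magnitude.
    alpha = np.angle(np.exp(1j * (heading - body)), deg=True)
    return np.abs(alpha)

# Sideways flight gives large alpha; as above, angles >135 deg
# (backward flight) would be excluded before taking the median.
```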

We analysed the performance of honeybees in a navigational task in which they were trained to localise an inconspicuous feeder in relation to three landmarks. Our study provides three novel results: (1) Changing the spatial arrangement of landmarks has a greater effect on the bees' performance than changing the landmark texture, even if the landmarks have the same texture as the background and cannot easily be detected on the basis of contrast and texture cues. (2) The goal-seeking behaviour of honeybees in a situation where the landmarks are covered with the same texture as the background cannot be explained by the matching of raw panoramic images. Instead, honeybees can locate the feeder if guided by a matching scheme that involves optic flow amplitudes. We therefore suggest that bees use optic flow snapshots in addition to other cues for the localisation of a food source. (3) The extraction of optic flow information about the spatial layout of the environment is facilitated by the honeybees' flight strategy, which behaviourally separates rotational from translational optic flow components to a large extent.

Fig. 6.

Spatial search distributions and image similarity maps in an arena with different landmark patterns. (A,B) Spatial search distributions show the probability density of visits during approach flights to the feeder (for details see Fig. 3). Honeybees were trained with three homogeneous red landmarks and tested with landmarks that had the same random dot texture as the background of the arena. (A) The probability density of visits for approach flights to the training configuration (N=9 bees, n=48 flights); the feeder (black) was surrounded by three landmarks with a red homogeneous texture (grey). The spatial distribution shows a peak around the feeder location and characteristic circles of high density around the landmarks (compare with Fig. 3A). (B) The probability density of visits for approach flights when the landmark texture was changed to the random dot texture (indicated by white circles; N=9 bees, n=42 flights). The distribution resembles the search behaviour in an arena with homogeneous red landmarks. (C,D) Image similarity maps for the homogeneous red texture (C) and the random dot texture (D). For the calculation of the image similarity maps see Materials and methods. Arena size is indicated by the white circle. (C) The area of the flight arena that was analysed in the spatial search distributions is indicated by the white broken rectangle in the image similarity maps. The similarity map of images taken in an arena with homogeneous red textured landmarks reflects the spatial search distribution of the honeybees. (D) The similarity map for the arena with random textured landmarks, assuming a snapshot containing the homogeneous red textured landmarks, does not resemble the search behaviour of the honeybees. (E,F) Similarity maps for optic flow amplitudes obtained by simulated translations of 2 mm in the flight arena with homogeneous textured landmarks (E) and random textured landmarks (F). The similarity maps for flow amplitudes are similar for both landmark textures and resemble the search distribution of the honeybees.


Homing based on image matching

Image similarities decrease smoothly with distance in the neighbourhood of a goal position (Zeil et al., 2003; Stürzl and Zeil, 2007). Animals that can compute image similarities between a previously stored image of the environment and the current image could, therefore, successfully navigate to their goal location. Global matching of images does not require a segmentation of the image into landmarks and background, as in other models of visual homing such as the snapshot model of Cartwright and Collett (Cartwright and Collett, 1983). Thus, global image matching represents a simple model for navigation, which motivated us to study its possible use in the vicinity of the goal location in honeybees. We did not analyse the strategies that enabled the bees to aim for the feeder–landmark constellation just after entering the flight arena, as this was the main focus of previous studies (e.g. Collett and Rees, 1997; Fry and Wehner, 2005). The early segments of approach flights seem to become more direct with prolonged learning (Fry and Wehner, 2005). In our study, the final parts of the search flights turned out not to lead directly to the goal. We find that the final phase of approach flights is characterised by flight manoeuvres directed at and around the landmarks. This might be an unexpected result, but computer simulations reveal that the similarity between an image memorised at the feeder position and images in the arena is high not only at the feeder but also in the vicinity of each of the landmarks. Comparisons with a different landmark constellation used in an earlier study (Cartwright and Collett, 1983) showed that this is due to the specific landmark arrangement we used. If the retinal size of the landmarks is not equal at the goal, the image similarity map shows regions of high similarity around each landmark in addition to the peak at the goal location. Hence, a bee guided by image matching would search at locations of high similarity, which could result in the search behaviour the honeybees showed with our landmark setup. We find that the honeybees do not search as symmetrically around the landmarks as expected from the similarity maps. It is possible that the bees had directional information from their magnetic compass or from the constant direction in which they entered the flight arena (Collett and Baron, 1995; Fry and Wehner, 2002). Directional information was not included in the simulations.

What conclusions can be drawn from the comparison between image similarity maps and the search behaviour of honeybees? Our results indicate that image similarity maps allow a prediction of the preferred search locations of honeybees. Nonetheless, we do not expect them to translate straightforwardly into the search behaviour: the behaviour of a bee guided by image similarity would depend not only on the similarity values themselves but also on how image similarity is transformed and used to control behaviour. For instance, we find search density maxima (for example, between the two near landmarks and between the feeder and the far landmark) that are not visible in the image similarity maps. These maxima could be caused by honeybees flying back and forth between the landmarks; the honeybees would then pass through regions of low similarity to reach areas of high image similarity. Computer simulations of an agent guided by image similarity would reveal whether this homing strategy results in the search behaviour the bees showed.

Fig. 7.

Characteristic flight behaviour of honeybees. (A) Top view of an example flight trajectory of a honeybee approaching the feeder for the seventh time after recording started. The food source and the landmarks are indicated by black and grey circles, respectively. The position of the honeybee is indicated by grey circles at each 32 ms interval; straight lines indicate the orientation of the long axis of the bee. The inset shows a sideways-directed flight manoeuvre of a honeybee close to the landmark (magnified two times). (B) The body orientation of the long axis of the honeybee (yaw angle) is plotted vs time. The yaw angle shows characteristic steps in time, which indicate phases of fast turns and phases of straight flight. (C,D) Spatial distribution of the angle between flight direction and body orientation (alpha). The mean alpha angle was calculated for fields of 3 cm×3 cm in the flight arena. Large angles (>135 deg), which indicate backward movements, as well as fields that were visited by the honeybees for less than 160 ms, were excluded. Sideways-directed movements of the honeybees are indicated by large alpha angles, which occur most frequently close to the feeder location and around the landmarks, independent of landmark texture (C, homogeneous red texture; D, random dot texture). Note that large angles also occur close to the arena wall (upper right corner of the figure).

Homing based on optic flow matching

If the landmarks have the same texture as the background, a model based on image similarities fails to predict the search distribution of the honeybees. We therefore extended the content of the panoramic snapshots to include optic flow amplitudes. With this extension, areas of high similarity correspond well to the preferred search locations of the honeybees. Hence, honeybees may, in addition to other cues, use an optic flow snapshot, which depends on the depth structure of the environment, for goal localisation.
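
How such an extended snapshot could be formed and compared can be sketched as follows, using the first-order flow amplitudes derived in the Appendix (a sketch only: the actual model estimated flow from image sequences, and the normalised correlation makes the comparison independent of the step length):

```python
import numpy as np

def flow_amplitude_snapshot(r, elev):
    """First-order flow amplitude F(u,v) for unit sideways test steps,
    given the distance map r(u,v) and the elevation elev(u,v) of each
    pixel in radians (cf. Eqn 3 of the Appendix)."""
    return (1.0 / r) ** 2 * (1.0 / np.cos(elev) ** 2 + np.sin(elev) ** 2)

def flow_match(f_goal, f_current):
    """Correlation coefficient between two flow-amplitude snapshots."""
    a = f_goal.ravel() - f_goal.mean()
    b = f_current.ravel() - f_current.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```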

Is motion parallax the only information available to the bees in our experimental setup when the landmarks have the same texture as the background? Some residual luminance contrast cues might have existed owing to the slightly asymmetrical illumination of the landmarks. These contrast cues are likely to be extremely weak and thus difficult for the honeybees to detect. Furthermore, we cannot exclude the possibility that the honeybees saw, in their upper field of view, the top end of the randomly textured landmarks against the white background above the arena wall. If the honeybees memorised an image stripe of 30–60 deg elevation above the horizon (a 'skyline'), they could later locate the goal by matching this image stripe (simulation data not shown). However, the 'skyline' of the landmarks did not provide a reliable cue for determining the feeder location because it changed during the training phase, during which we regularly shifted the landmark–feeder arrangement. Moreover, the bees could not know which elevation range to focus on because, during the training flights in the arena with three homogeneous red landmarks, a wide range of elevations provided reasonably high contrast cues. Thus, the bees would have had to change their matching scheme abruptly from global image matching to matching only a part of the visual field when searching for the feeder in the arena with randomly textured landmarks.
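
A 'skyline' match of this kind would amount to restricting the comparison to an elevation band before applying the same correlation measure as above (a sketch; elev_per_row, giving the elevation of each image row in degrees, is a hypothetical input):

```python
import numpy as np

def elevation_stripe(panorama, elev_per_row, lo=30.0, hi=60.0):
    """Keep only the rows of a panoramic image whose elevation lies
    between lo and hi degrees above the horizon."""
    rows = (elev_per_row >= lo) & (elev_per_row <= hi)
    return panorama[rows, :]
```

The resulting stripes could then be compared with the correlation measure sketched earlier.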

Functional significance of motion cues

Our model simulations have shown that motion parallax cues are sufficient to find the goal location by matching panoramic optic flow snapshots. The use of motion cues has been demonstrated in various behavioural contexts for a wide spectrum of flying insects (including honeybees) and birds. These animals use optic flow cues in tasks such as obstacle avoidance, the control of landing behaviour and object–background discrimination (e.g. Lehrer et al., 1988; Davies and Green, 1990; Lee et al., 1993; Lehrer, 1993; Kern et al., 1997; Srinivasan et al., 2000; Eckmeier and Bischof, 2008) (for reviews, see Egelhaaf and Kern, 2002; Egelhaaf, 2006). Several studies show that honeybees, in particular, use motion parallax as a cue in pattern discrimination and visual navigation, e.g. in distance estimation (Lehrer, 1994; Zhang and Srinivasan, 1994; Srinivasan and Zhang, 2004; Lehrer and Campan, 2005). During local navigation, there is evidence that bees and wasps use parallax cues to identify close landmarks and to estimate the distance of landmarks relative to the goal (Zeil, 1993b; Zeil et al., 1996; Zeil, 1997).

Optic flow matching based on translatory self-motion of the bee

We showed that the use of optic flow in local navigation is facilitated by the honeybees' flight style in the vicinity of landmarks, which helps to extract depth cues by generating relative motion between the landmarks and their background. The flights are characterised by fast, saccadic changes of flight direction. Between saccades, the bees' head orientation is stabilised against yaw and roll rotations (Boeddeker and Hemmi, 2009; Boeddeker et al., 2010). Therefore, between saccades the bees experience mostly translational optic flow, which contains information about the three-dimensional structure of the environment. Such a saccadic flight and gaze strategy has been analysed in particular detail in blowflies (Schilstra and van Hateren, 1999; van Hateren and Schilstra, 1999), where it was also shown to facilitate the extraction of depth information by the nervous system (Kern et al., 2005; van Hateren et al., 2005; Karmeier et al., 2006; Kern et al., 2006). The sideways-directed flight manoeuvres close to the landmark and feeder location, shown here for honeybees, are similar to the scanning movements found in locusts and mantids (Collett and Paterson, 1991; Kral and Poteser, 1997) and to the flight strategy that hoverflies use in front of nearby obstacles, such as the walls of a flight arena (Geurten et al., 2010).
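
The segmentation of such flights into saccades and intersaccadic intervals can be sketched with a simple yaw-rate threshold (the threshold value is illustrative, not the criterion used in the study):

```python
import numpy as np

def intersaccadic_intervals(yaw_deg, dt, rate_thresh=300.0):
    """Return (start, stop) sample-index pairs (stop exclusive) of
    low-yaw-rate segments, i.e. candidate intersaccadic (mostly
    translational) intervals. yaw_deg: body yaw per frame (deg);
    dt: frame interval (s); rate_thresh: saccade threshold (deg/s)."""
    yaw = np.unwrap(np.radians(np.asarray(yaw_deg, float)))
    rate = np.degrees(np.abs(np.gradient(yaw, dt)))
    low = np.concatenate(([False], rate < rate_thresh, [False]))
    edges = np.flatnonzero(np.diff(low.astype(int)))
    return list(zip(edges[0::2], edges[1::2]))
```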

During translatory movements, the bees can use relative motion cues to estimate their distance to the landmarks, which seems to be relevant in the early learning phase, when the search behaviour is determined by the distance of landmarks relative to the goal position (Lehrer and Collett, 1994). We suggest that the translatory optic flow field during intersaccadic intervals is memorised by the honeybees as an optic flow snapshot and used for guidance, at least after the learning phase. The use of an optic flow snapshot is in accordance with two earlier experimental findings on honeybees and wasps. First, bees and wasps weight landmarks according to their distance from the goal (Cheng et al., 1987; Zeil, 1993b). Second, after the learning phase, honeybees seem to rely on the retinal size of landmarks instead of distance cues. An optic flow snapshot implicitly contains the depth structure of the scene as well as the retinal size of the landmarks. Simulations with a single landmark–feeder setup showed that, assuming the storage of an optic flow snapshot, the best match occurs where the retinal size of the landmark fits (W.S. and L.D., unpublished data).

Optic flow matching provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment and is thus less susceptible to illumination changes than the matching of raw images. Currently, we do not know how motion parallax information is embedded in the honeybees' spatial representation. Assuming the storage of an optic flow snapshot, honeybees could locate the feeder by increasing the similarity between the current and the memorised optic flow snapshot. This could be modelled by a simple gradient ascent, analogous to the gradient descent in image distances proposed by Zeil et al. (Zeil et al., 2003). However, the storage of a global optic flow snapshot might not be necessary: assuming that landmark edges are detected by motion contrast instead of luminance contrast, and that just the edge locations are stored, an agent guided by the snapshot model (Cartwright and Collett, 1983) could navigate successfully in our arena with landmarks that bear the same texture as the background. Our forthcoming analysis of individual flights might identify further constraints for possible homing algorithms. Although our results suggest the significance of optic flow information in local homing, we do not claim that honeybees rely exclusively on it. Rather, a combination of optic flow and texture information in an extended snapshot scheme may be a particularly powerful way to explain most experimental results. Differences in locomotion may also play a role in the selection and weighting of different visual cues: walking insects (e.g. ants) can more easily stay motionless relative to their surroundings (an advantage for static image matching), whereas flying insects (e.g. bees) can fly sideways (an advantage when using motion parallax information).
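
Such a gradient ascent could be sketched as follows (similarity_at is a hypothetical function returning snapshot similarity, of raw images or of optic flow amplitudes, at a given arena position; step sizes are arbitrary):

```python
import numpy as np

def gradient_ascent_homing(start, similarity_at, step=0.01, n_steps=400):
    """Minimal gradient-ascent agent: estimate the local similarity
    gradient numerically and step uphill (cf. the gradient descent in
    image distances of Zeil et al., 2003)."""
    pos = np.array(start, float)
    for _ in range(n_steps):
        grad = np.array([
            similarity_at(pos + [step, 0.0]) - similarity_at(pos - [step, 0.0]),
            similarity_at(pos + [0.0, step]) - similarity_at(pos - [0.0, step]),
        ]) / (2.0 * step)
        norm = np.linalg.norm(grad)
        if norm < 1e-9:              # flat region or local maximum reached
            break
        pos = pos + step * grad / norm
    return pos
```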

Limits of our modelling approach

In our modelling approach, we made three assumptions about the snapshot that the bee memorises, concerning (1) the spatial extent of the snapshot, (2) the number of snapshots and (3) the location where the snapshot was taken.

(1) It is as yet unclear whether honeybees memorise a snapshot of the full field of view. There is some evidence that insects store a horizontally extended snapshot. Ants store a snapshot while fixating a landmark with their frontal field of view, but this snapshot extends at least 120 deg into the periphery (Durier et al., 2003). Although the snapshot is centred on one landmark, it was concluded that ants identify landmarks with the aid of the background pattern, indicating that the panorama is somehow included in their representation (Graham et al., 2004). An extended snapshot is suggested to improve the insect's precision in localising a goal and to enhance the reliability of snapshot recall (Cartwright and Collett, 1983; Wehner et al., 1996; Durier et al., 2003; Graham et al., 2004). In our study, the honeybees often landed on the feeder with one of the landmarks centred in their frontal field of view (see Fig. S4 in supplementary material), but we do not know to what extent their snapshot might extend in azimuth and elevation. The elevation range of a snapshot may well be rather small, as homing algorithms that use only a one-dimensional stripe along the horizon as a snapshot template can perform well (Cartwright and Collett, 1983; Franz et al., 1998).

(2) Although agents can navigate successfully to a goal location with the aid of a single snapshot taken in the neighbourhood of the goal (e.g. Vardy and Möller, 2005), there is evidence that ants store several snapshots during their approach to the goal (Judd and Collett, 1998; Harris et al., 2007). Our image similarity maps, in which we assume a single snapshot at the feeder location, do not show a continuously increasing image similarity from the entrance of the flight arena to the landmark arrangement. This indicates that other mechanisms, such as beacon aiming or the storage of several snapshots on the way to the goal, might be necessary (see Collett and Rees, 1997; Fry and Wehner, 2005). Nevertheless, the final approach could be guided by a snapshot stored at the goal location. As the image similarity gradient has local maxima away from the goal, a strategy to escape from these maxima would certainly be necessary. Such a strategy has recently been identified by Wystrach and Beugnon (Wystrach and Beugnon, 2009): ants trained to find an exit in a rectangular box seem to follow the image similarity gradient but turn by 180 deg and move in the opposite direction as soon as they detect a significant mismatch with the memorised snapshot. In a forthcoming study, we will analyse individual flight trajectories and reconstruct the corresponding visual input for individual honeybees. This might reveal whether bees use specific behavioural strategies to cope with local maxima in the similarity map.
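
In a simulation, the escape rule reported for ants could be caricatured as follows (a loose sketch of the 'turn back on significant mismatch' idea, not the algorithm of Wystrach and Beugnon; similarity_at and the tolerance are hypothetical):

```python
import numpy as np

def mismatch_reversal_agent(start, similarity_at, n_steps=500,
                            step=0.02, tol=0.05):
    """Move straight while snapshot similarity keeps rising; turn by
    180 deg whenever it falls noticeably below the best value so far."""
    pos = np.array(start, float)
    heading = np.array([1.0, 0.0])
    best = similarity_at(pos)
    for _ in range(n_steps):
        pos = pos + step * heading
        s = similarity_at(pos)
        if s < best - tol:            # significant mismatch detected
            heading = -heading        # turn back and retrace
        best = max(best, s)
    return pos
```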

(3) As in most modelling studies, we assumed that a snapshot is taken directly at the goal location, but it is not yet known where and when honeybees memorise the visual surroundings of the goal. It has been suggested that wasps and honeybees store snapshots while fixating the goal at the end of flight arcs, when they turn back and look at the feeder (TBL) during their learning flights (Collett and Lehrer, 1993). During these learning flights, bees and wasps generate a motion parallax field, which could provide them with spatial information about the goal location (Lehrer, 1991; Lehrer, 1993; Zeil, 1993a; Zeil, 1993b; Zeil et al., 1996; Zeil, 1997). For an optic flow snapshot, this could take place during translatory phases of flight while facing the feeder. However, if honeybees store snapshots during TBL behaviour, they should follow a somewhat similar trajectory on their next approach flight. The similarity between learning and return flights, in terms of viewing directions and flight paths, seems to differ between species and is less striking in honeybees (Collett and Lehrer, 1993; Zeil, 1993b; Collett, 1995; Lehrer and Bianco, 2000; de Ibarra et al., 2009). Future studies need to tackle the questions of when and where these insects memorise snapshots of the visual surroundings and in what way these are used to structure the return flights.

APPENDIX

First-order approximations of the image shifts

It can be shown that the image shifts for movements in the x- and y-directions, $(\delta u_{\delta x}, \delta v_{\delta x})$ and $(\delta u_{\delta y}, \delta v_{\delta y})$, as well as the resulting flow amplitudes $F(u,v)$ (Eqn 2), can be described in first-order approximation by:

$$
\begin{aligned}
(\delta u_{\delta x},\,\delta v_{\delta x}) &\approx \frac{\delta l}{r(u,v)}\left(\frac{\sin\varphi(u,v)}{\cos\epsilon(u,v)},\;\sin\epsilon(u,v)\cos\varphi(u,v)\right),\\
(\delta u_{\delta y},\,\delta v_{\delta y}) &\approx \frac{\delta l}{r(u,v)}\left(-\frac{\cos\varphi(u,v)}{\cos\epsilon(u,v)},\;\sin\epsilon(u,v)\sin\varphi(u,v)\right),\\
F(u,v) &\approx \left(\frac{\delta l}{r(u,v)}\right)^{2}\left(\frac{1}{\cos^{2}\epsilon(u,v)}+\sin^{2}\epsilon(u,v)\right),
\end{aligned}
\tag{3}
$$

where $\epsilon(u,v)$ is the elevation angle of the pixel with indices $(u,v)$, $\varphi(u,v)$ its azimuth angle and $r(u,v)$ the distance to the point of the scene seen in viewing direction $[\varphi(u,v), \epsilon(u,v)]$. These approximations are valid for $\delta l \ll r(u,v)$ and $|\epsilon(u,v)| \ll 90$ deg. Eqn 3 shows that, because we use the correlation coefficient (Eqn 1) as a similarity measure, the similarity does not depend on the translation amplitude $\delta l$ in first-order approximation. From the last line of Eqn 3 it is also obvious that we could have replaced $F(u,v)$ by $F(u,v)^{1/2}$ to obtain flow amplitudes that are (approximately) proportional to $1/r(u,v)$.
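
The first-order expressions can be checked numerically against the exact re-projection of a scene point (a self-contained sketch; the test values are arbitrary):

```python
import numpy as np

def exact_pixel_shift(phi, eps, r, dl, axis=0):
    """Exact (azimuth, elevation) shift of a point at distance r seen
    at angles (phi, eps) when the viewpoint steps dl along x (axis=0)
    or y (axis=1)."""
    d = np.array([np.cos(eps) * np.cos(phi),
                  np.cos(eps) * np.sin(phi),
                  np.sin(eps)])
    p = r * d
    p[axis] -= dl                      # point relative to shifted viewpoint
    phi2 = np.arctan2(p[1], p[0])
    eps2 = np.arcsin(p[2] / np.linalg.norm(p))
    return phi2 - phi, eps2 - eps

phi, eps, r, dl = 0.7, 0.3, 1.0, 1e-4
du, dv = exact_pixel_shift(phi, eps, r, dl, axis=0)
# first-order predictions from Eqn 3
du_pred = dl / (r * np.cos(eps)) * np.sin(phi)
dv_pred = dl / r * np.sin(eps) * np.cos(phi)
print(du, du_pred, dv, dv_pred)        # agree to first order in dl/r
```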

We thank Grit Schwerdtfeger and our undergraduate students for their assistance with the experiments and the video analysis.

The study was supported by the Deutsche Forschungsgemeinschaft (DFG). L.D. was funded by the Studienstiftung des deutschen Volkes.

REFERENCES

Barron, J. L., Fleet, D. J. and Beauchemin, S. S. (1994). Performance of optical-flow techniques. Int. J. Comput. Vision 12, 43-77.
Boeddeker, N. and Hemmi, J. M. (2009). Visual gaze control during peering flight manoeuvres in honeybees. Proc. Biol. Sci. 277, 1209-1217.
Boeddeker, N., Dittmar, L., Stürzl, W. and Egelhaaf, M. (2010). The fine structure of honeybee head and body movements in a homing task. Proc. Biol. Sci. 277, 1899-1906.
Bouget, J.-Y. (1999). Visual Methods for Three-Dimensional Modelling. California: California Institute of Technology.
Cartwright, B. A. and Collett, T. S. (1979). How honeybees know their distance from a near-by visual landmark. J. Exp. Biol. 82, 367-372.
Cartwright, B. A. and Collett, T. S. (1983). Landmark learning in bees: experiments and models. J. Comp. Physiol. 151, 521-543.
Cartwright, B. A. and Collett, T. S. (1987). Landmark maps for honeybees. Biol. Cybern. 57, 85-93.
Cheng, K., Collett, T. S. and Wehner, R. (1986). Honeybees learn the colors of landmarks. J. Comp. Physiol. A 159, 69-73.
Cheng, K., Collett, T. S., Pickhard, A. and Wehner, R. (1987). The use of visual landmarks by honeybees: bees weight landmarks according to their distance from the goal. J. Comp. Physiol. A 161, 469-475.
Collett, T. S. (1995). Making learning easy: the acquisition of visual information during the orientation flights of social wasps. J. Comp. Physiol. A 177, 737-747.
Collett, T. S. and Baron, J. (1995). Learnt sensorimotor mappings in honeybees: interpolation and its possible relevance to navigation. J. Comp. Physiol. A 177, 287-298.
Collett, T. S. and Collett, M. (2002). Memory use in insect visual navigation. Nat. Rev. Neurosci. 3, 542-552.
Collett, T. S. and Lehrer, M. (1993). Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris. Proc. R. Soc. Lond. B Biol. Sci. 252, 129-134.
Collett, T. S. and Paterson, C. J. (1991). Relative motion parallax and target localization in the locust, Schistocerca gregaria. J. Comp. Physiol. A 169, 615-621.
Collett, T. S. and Rees, J. A. (1997). View-based navigation in Hymenoptera: multiple strategies of landmark guidance in the approach to a feeder. J. Comp. Physiol. A 181, 47-58.
Collett, T. S., Fry, S. N. and Wehner, R. (1993). Sequence learning by honeybees. J. Comp. Physiol. A 172, 693-706.
Collett, T. S., Baron, J. and Sellen, K. (1996). On the encoding of movement vectors by honeybees. Are distance and direction represented independently? J. Comp. Physiol. A 179, 395-406.
Davies, M. N. and Green, P. R. (1990). Optic flow-field variables trigger landing in hawk but not in pigeons. Naturwissenschaften 77, 142-144.
de Ibarra, N. H., Philippides, A., Riabinina, O. and Collett, T. S. (2009). Preferred viewing directions of bumblebees (Bombus terrestris L.) when learning and approaching their nest site. J. Exp. Biol. 212, 3193-3204.
Durier, V., Graham, P. and Collett, T. S. (2003). Snapshot memories and landmark guidance in wood ants. Curr. Biol. 13, 1614-1618.
Eckmeier, D. and Bischof, H. J. (2008). The optokinetic response in wild type and white zebra finches. J. Comp. Physiol. A 194, 871-878.
Egelhaaf, M. (2006). The neural computation of visual motion information. In Invertebrate Vision (ed. E. Warrant and D.-E. Nilsson), pp. 399-461. Cambridge: Cambridge University Press.
Egelhaaf, M. and Kern, R. (2002). Vision in flying insects. Curr. Opin. Neurobiol. 12, 699-706.
Esch, H. E., Zhang, S. W., Srinivasan, M. V. and Tautz, J. (2001). Honeybee dances communicate distances measured by optic flow. Nature 411, 581-583.
Franz, M. O., Schölkopf, B., Mallot, H. A. and Bülthoff, H. (1998). Where did I take that snapshot? Scene-based homing by image matching. Biol. Cybern. 79, 191-202.
Fry, S. N. and Wehner, R. (2002). Honey bees store landmarks in an egocentric frame of reference. J. Comp. Physiol. A 187, 1009-1016.
Fry, S. N. and Wehner, R. (2005). Look and turn: landmark-based goal navigation in honey bees. J. Exp. Biol. 208, 3945-3955.
Geurten, B. R. H., Kern, R., Braun, E. and Egelhaaf, M. (2010). A syntax of hoverfly flight prototypes. J. Exp. Biol. 213, 2461-2475.
Giurfa, M., Eichmann, B. and Menzel, R. (1996). Symmetry perception in an insect. Nature 382, 458-461.
Graham, P., Durier, V. and Collett, T. S. (2004). The binding and recall of snapshot memories in wood ants (Formica rufa L.). J. Exp. Biol. 207, 393-398.
Harris, R. A., Graham, P. and Collett, T. S. (2007). Visual cues for the retrieval of landmark memories by navigating wood ants. Curr. Biol. 17, 93-102.
Horridge, G. A. (2003). Visual resolution of gratings by the compound eye of the bee Apis mellifera. J. Exp. Biol. 206, 2105-2110.
Judd, S. P. D. and Collett, T. S. (1998). Multiple stored views and landmark guidance in ants. Nature 392, 710-714.
Karmeier, K., van Hateren, J. H., Kern, R. and Egelhaaf, M. (2006). Encoding of naturalistic optic flow by a population of blowfly motion-sensitive neurons. J. Neurophysiol. 96, 1602-1614.
Kern, R., Egelhaaf, M. and Srinivasan, M. V. (1997). Edge detection by landing honeybees: behavioural analysis and model simulations of the underlying mechanism. Vis. Res. 37, 2103-2117.
Kern, R., van Hateren, J. H., Michaelis, C., Lindemann, J. P. and Egelhaaf, M. (2005). Function of a fly motion-sensitive neuron matches eye movements during free flight. PLoS Biol. 3, 1130-1138.
Kern, R., van Hateren, J. H. and Egelhaaf, M. (2006). Representation of behaviourally relevant information by blowfly motion-sensitive visual interneurons requires precise compensatory head movements. J. Exp. Biol. 209, 1251-1260.
Koenderink, J. J. (1986). Optic flow. Vis. Res. 26, 161-179.
Kral, K. and Poteser, M. (1997). Motion parallax as a source of distance information in locusts and mantids. J. Insect Behav. 10, 145-163.
Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R. and Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Rob. Auton. Syst. 30, 39-64.
Lee, D. N., Davies, M. N. O., Green, P. R. and Vanderweel, F. R. R. (1993). Visual control of velocity of approach by pigeons when landing. J. Exp. Biol. 180, 85-104.
Lehrer, M. (1991). Bees which turn back and look. Naturwissenschaften 78, 274-276.
Lehrer, M. (1993). Why do bees turn back and look? J. Comp. Physiol. A 172, 549-563.
Lehrer, M. (1994). Spatial vision in the honeybee: the use of different cues in different tasks. Vis. Res. 34, 2363-2385.
Lehrer, M. and Bianco, G. (2000). The turn-back-and-look behaviour: bee versus robot. Biol. Cybern. 83, 211-229.
Lehrer, M. and Campan, R. (2005). Generalization of convex shapes by bees: what are shapes made of? J. Exp. Biol. 208, 3233-3247.
Lehrer, M. and Collett, T. S. (1994). Approaching and departing bees learn different cues to the distance of a landmark. J. Comp. Physiol. A 175, 171-177.
Lehrer, M., Srinivasan, M. V., Zhang, S. W. and Horridge, G. A. (1988). Motion cues provide the bee's visual world with a third dimension. Nature 332, 356-357.
Lindemann, J. P. (2006). Visual navigation of a virtual blowfly. PhD Thesis, Bielefeld University.
Lucas, B. D. and Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In IJCAI81, pp. 674-679.
Mangan, M. and Webb, B. (2009). Modelling place memory in crickets. Biol. Cybern. 101, 307-323.
Möller, R. (2009). Local visual homing by warping of two-dimensional images. Robot. Auton. Syst. 57, 87-101.
Möller, R. and Vardy, A. (2006). Local visual homing by matched-filter descent in image distances. Biol. Cybern. 95, 413-430.
Neumann, T. R. (2002). Modeling insect compound eyes: space-variant spherical vision. In Biologically Motivated Computer Vision, pp. 360-367. Berlin, Heidelberg: Springer.
Schilstra, C. and van Hateren, J. H. (1999). Blowfly flight and optic flow I. Thorax kinematics and flight dynamics. J. Exp. Biol. 202, 1481-1490.
Srinivasan, M. V. and Zhang, S. W. (2004). Visual motor computations in insects. Annu. Rev. Neurosci. 27, 679-696.
Srinivasan, M. V., Zhang, S. W. and Rolfe, B. (1993). Is pattern vision in insects mediated by cortical processing? Nature 362, 539-540.
Srinivasan, M. V., Zhang, S., Altwein, M. and Tautz, J. (2000). Honeybee navigation: nature and calibration of the 'odometer'. Science 287, 851-853.
Stürzl, W. and Mallot, H. A. (2006). Efficient visual homing based on Fourier transformed panoramic images. Robot. Auton. Syst. 54, 300-313.
Stürzl, W. and Zeil, J. (2007). Depth, contrast and view-based homing in outdoor scenes. Biol. Cybern. 96, 519-531.
Tautz, J., Zhang, S., Spaethe, J., Brockmann, A., Si, A. and Srinivasan, M. (2004). Honeybee odometry: performance in varying natural terrain. PLoS Biol. 2, E211.
van Hateren, J. H. and Schilstra, C. (1999). Blowfly flight and optic flow II. Head movements during flight. J. Exp. Biol. 202, 1491-1500.
van Hateren, J. H., Srinivasan, M. V. and Wait, P. B. (1990). Pattern recognition in bees: orientation discrimination. J. Comp. Physiol. A 167, 649-654.
van Hateren, J. H., Kern, R., Schwerdtfeger, G. and Egelhaaf, M. (2005). Function and coding in the blowfly H1 neuron during naturalistic optic flow. J. Neurosci. 25, 4343-4352.
Vardy, A. and Möller, R. (2005). Biologically plausible visual homing methods based on optical flow techniques. Connect. Sci. 17, 47-89.
Wehner, R. and Räber, F. (1979). Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera, Formicidae). Experientia 35, 1569-1571.
Wehner, R., Michel, B. and Antonsen, P. (1996). Visual navigation in insects: coupling of egocentric and geocentric information. J. Exp. Biol. 199, 129-140.
Wystrach, A. and Beugnon, G. (2009). Ants learn geometry and features. Curr. Biol. 19, 61-66.
Zeil, J. (1993a). Orientation flights of solitary wasps (Cerceris, Sphecidae, Hymenoptera): I. Description of flight. J. Comp. Physiol. A 172, 189-205.
Zeil, J. (1993b). Orientation flights of solitary wasps (Cerceris, Sphecidae, Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax. J. Comp. Physiol. A 172, 207-222.
Zeil, J. (1997). The control of optic flow during learning flights. J. Comp. Physiol. A 180, 25-37.
Zeil, J., Kelber, A. and Voss, R. (1996). Structure and function of learning flights in bees and wasps. J. Exp. Biol. 199, 245-252.
Zeil, J., Hofmann, M. I. and Chahl, J. S. (2003). Catchment areas of panoramic snapshots in outdoor scenes. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 20, 450-469.
Zhang, S. W. and Srinivasan, M. V. (1994). Prior experience enhances pattern discrimination in insect vision. Nature 368, 330-332.

Supplementary information