Vision is one of the most important modalities for the remote perception of biologically important stimuli. Insects like honeybees and bumblebees use their colour and spatial vision to solve tasks such as navigation, or to recognise rewarding flowers during foraging. Bee vision is one of the most intensively studied animal visual systems, and several models have been developed to describe its function. These models have largely assumed that bee vision is determined by mechanistic, hard-wired circuits, with little or no consideration of behavioural plasticity or cognitive factors. However, recent work on both bee colour vision and spatial vision suggests that cognitive factors play a very significant role in determining what a bee sees. Individual bumblebees trade off speed against accuracy, and decide which criterion to prioritise depending upon contextual information. With continued visual experience, honeybees can learn to use non-elemental processing, including configural mechanisms and rule learning, and can access top-down information to enhance learning of sophisticated, novel visual tasks. Honeybees can learn delayed-matching-to-sample tasks and the rules governing this decision making, and can even transfer learned rules between different sensory modalities. Finally, bees can learn complex categorisation tasks and display numerical processing abilities for numbers up to and including four. Taken together, this evidence suggests that bees do have a capacity for sophisticated visual behaviours that fit a definition of cognition, and thus simple elemental models of bee vision need to take account of how a variety of factors may influence the type of results gained from animal behaviour experiments.

An interesting recent perspective on insect vision raised the hypothesis that the apparent ‘mysterious cognitive abilities’ of honeybees derive solely from anthropomorphic misinterpretation of results (Horridge, 2009b). In particular, the thesis of that Commentary is that insect vision is mediated only by low-level feature detectors that combine to create elemental cues (Horridge, 2000; Horridge, 2009a; Horridge, 2009b). The coincidence of these cues is remembered as a retinotopic label for a particular image, and bees generalise between stimuli containing similar cues. This perspective on how an insect sees and interacts with its environment has recently been reviewed in detail, and specifically excludes any possibility that insects have cognitive inputs that influence perceptual outputs in the visual modality (Horridge, 2009a; Horridge, 2009b). However, there is a growing body of new evidence, reviewed below, that cognitive factors may influence how individual insects perceive their visual environment, and this evidence was overlooked in some previous and influential models of insect vision (Backhaus, 1991; Backhaus et al., 1987; Horridge, 2000; Horridge, 2009a; Horridge, 2009b). In the current review, visual cognition is defined as an ability to learn, retain, classify and process visual information (Dukas, 2004; Shettleworth, 2001) in a sophisticated way that is not predicted by simple mechanistic or elemental responses to stimuli. For example, simple elemental processing based on visual cues (Horridge, 2009a; Horridge, 2009b) does not predict that a bee can use previously acquired information to solve novel tasks, nor that a bee can categorise similar novel stimuli based on image interpolation, nor that bees can use information acquired in the visual domain to solve novel tasks presented to a different sensory modality. In this review, evidence for potential cognitive behaviours by bees is described, and a brief outline of training methodologies and subsequent interpretations of results is presented. Finally, pathways for future investigations are outlined, to more fully reveal how miniature brains enable individual insects to function successfully within their environment.

Individual learning in bees and evidence of cognitive input into colour decision making

To understand why cognition is potentially important to consider as a factor in understanding bee vision, it is first essential to understand how and why bees interact with their environment in a way that makes it possible to collect behavioural data to test hypotheses about visual processing. This review will be confined to visual learning in two principal species: the honeybee (Apis mellifera Linnaeus), and the bumblebee [Bombus terrestris (Linnaeus)]. These ‘bees’ are: (i) currently the best studied hymenopteran bee models, (ii) very important pollinators from an ecological and economic standpoint, (iii) social and live in colonies with many individuals, and (iv) central place feeders where forager bees leave their hive to collect nutrition in the form of nectar and pollen from flowers, and on their return contribute this nutrition to the entire colony.

The foraging pattern of bee populations means that an individual bee may make many sorties a day to collect rewards. By comparison, the motivation of most other animals declines sharply once they are satiated, rendering them unavailable for extended experiments. An individual free-flying honeybee can be recruited to a test site where it is marked for easy identification and rewarded with a sucrose solution that reliably substitutes for nectar but does not provide an olfactory cue (Frisch, 1967). After the bee imbibes a full crop [the stomach of the bee (Winston, 1987)] of sucrose, it returns to the hive and contributes this nutrition to a central food store. The bee then returns to a profitable feeding site. Once the bee learns to reliably return to the test site every 2-3 min, it can then be trained and subsequently tested with a variety of target, distractor and/or novel transfer stimuli. Because of this recurrent cycle of foraging and depositing, individual bees have an extensive opportunity to learn within an experiment, and large amounts of data can be obtained from each forager bee studied, yielding robust insights into animal behaviour. This method of recruiting bees is applicable to visual learning tasks in a number of contexts and allows a great deal of experimenter control over the specific paradigm. Among other examples, visual stimuli can be presented in a Y-maze apparatus (Fig. 1) to control for visual angle (Avarguès-Weber et al., 2010a; Avarguès-Weber et al., 2010b; Horridge, 2009b; Zhang and Srinivasan, 2004) or on a vertically presented rotating screen to enable extended learning opportunities where a bee can make many independent decisions (Avarguès-Weber et al., 2010b; Dyer et al., 2008; Dyer and Vuong, 2008). Alternatively, natural foraging situations can be simulated with flight arenas (Dyer and Chittka, 2004c; Spaethe et al., 2001).

While the researcher may want and expect test bees to perform consistently at the limit of their sensory capacity, so that reliable data can be collected to reveal underlying mechanisms, the factor of real biological value to a free-flying bee is the amount of nutrition it collects per unit time (Burns, 2005; Burns and Dyer, 2008). Thus, it is important to consider the potential cognitive influences on decision making that may affect how a bee chooses stimuli whilst trying to collect nectar. For example, one type of visual problem that bees must solve is discriminating between very similar colours (Dyer and Chittka, 2004a; Lehrer, 1999). Colour is an interesting stimulus with which to investigate bee vision because the perceptual difficulty of discriminating target from distractor stimuli can be reliably specified in numeric terms using colorimetry (Chittka, 1992; Chittka and Wells, 2004; Dyer and Chittka, 2004a; Dyer and Chittka, 2004b; Lehrer, 1999; Vorobyev and Brandt, 1997), and it is known that both bumblebees (Dyer and Chittka, 2004c) and honeybees (Giurfa, 2004) learn a particular target colour very differently depending upon whether absolute or differential conditioning is used (Fig. 2). In absolute conditioning only a target colour is present, and while this allows bees to learn to discriminate the target from dissimilar colours, similar colours are generalised; in differential conditioning both target and distractor stimuli are present, and in this case the bee brain learns to make very fine colour discriminations depending upon the level of visual experience (Dyer and Chittka, 2004c; Dyer and Neumeyer, 2005; Giurfa, 2004). These experiments show that bees exhibit behavioural flexibility, and a possible neural mechanism underpinning this flexibility is that different chromatic pathways in the bee brain can be tuned depending upon the level of experience an individual bee has with the stimuli (Dyer et al., 2011).
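
Because colour distances in hexagon units are central to the experiments discussed here, a brief illustration of how such distances are typically computed may be useful. The minimal Python sketch below assumes the standard colour hexagon formulation (Chittka, 1992), in which relative photoreceptor excitations (UV, blue, green; scaled 0-1) are projected into a two-dimensional space and perceptual distance is taken as the Euclidean distance between colour loci; the excitation values used are hypothetical and purely illustrative.

```python
import math

def hexagon_locus(e_uv, e_blue, e_green):
    """Map relative photoreceptor excitations (0-1) to colour hexagon
    coordinates, following the standard formulation (Chittka, 1992)."""
    x = math.sin(math.radians(60)) * (e_green - e_uv)
    y = e_blue - 0.5 * (e_uv + e_green)
    return x, y

def hexagon_distance(stim_a, stim_b):
    """Euclidean distance between two colour loci, in hexagon units."""
    xa, ya = hexagon_locus(*stim_a)
    xb, yb = hexagon_locus(*stim_b)
    return math.hypot(xa - xb, ya - yb)

# Hypothetical target and distractor excitations (E_UV, E_blue, E_green)
target = (0.30, 0.62, 0.55)
distractor = (0.30, 0.60, 0.53)

print(round(hexagon_distance(target, distractor), 3))  # ~0.02 hexagon units
# A separation of ~0.02 units would be a perceptually difficult task for a
# bee, whereas ~0.15 units would be an easy discrimination (compare Fig. 2).
```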

Fig. 1.

A Y-maze apparatus that can be used to present stimuli at a fixed visual angle. The bee should make a decision from within the decision chamber; for perceptually difficult tasks it may be necessary to constrain the bee in the decision chamber to avoid speed-accuracy trade-offs introducing false negative results, because individual bees may elect to go faster and randomly sample stimuli rather than slow down and be accurate.

For a difficult colour discrimination task of 0.012 hexagon colour units (Dyer and Chittka, 2004a), Chittka and colleagues trained and tested individual bumblebees with differential conditioning to choose between a target colour rewarded with 2 mol l-1 sucrose solution and a distractor colour that provided only plain water (Chittka et al., 2003). Interestingly, for this visual problem there was a significant linear correlation between the amount of time that individual bees allocated to the task and their subsequent accuracy. Traditionally, most behavioural tests evaluating bee vision have defined the level of accuracy as the mean of group performance, which overlooks such individual differences. However, when the experimenters then changed the experiment ‘rules’ for the bees by penalising incorrect choices with a bitter-tasting quinine hemisulphate solution, the bees significantly increased both the amount of time allocated to making a decision and their accuracy (Chittka et al., 2003). Importantly, this effect could be reversed (Chittka et al., 2003), suggesting something fundamental about studying animal behaviour with bees: individuals can make sophisticated decisions about ‘rule changes’ and the level of performance required in a particular context. It is thus not possible to expect individuals to always perform at the limit of what their sensory apparatus might allow; the sophisticated decision making of the bee often underpins the accuracy that a bee displays in a particular experiment (Avarguès-Weber et al., 2010a; Chittka et al., 2003). Modelling of speed-accuracy behavioural data shows that in some cases, when considering the biologically relevant metric of the amount of nutrition collected per unit time, fast bumblebees outperformed accurate bumblebees (Burns, 2005). Behavioural experiments on honeybees have since shown that it is beneficial for social bees to have a mix of individuals within a colony with different ‘accuracy’ strategies to help manage the different distributions of rewarding and non-rewarding flowers that might occur in natural conditions throughout the year (Burns and Dyer, 2008). Other work on bumblebees reveals that individuals have the capacity to modulate response time depending upon the perceived difficulty (Dyer and Chittka, 2004b; Dyer et al., 2007) or even danger (Ings and Chittka, 2008) involved in a colour discrimination task. This reversible and sophisticated decision making by bumblebees for perceptually similar coloured stimuli is strong evidence of cognitive behaviour, as it shows that bees can learn stimuli, retain the information and subsequently make decisions to classify what type of stimuli should be visited in which context. This rich behaviour can have very significant effects on the design and interpretation of animal behaviour experiments (Chittka et al., 2003; Chittka et al., 2009).
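
The biological significance of the speed-accuracy trade-off can be illustrated with a back-of-the-envelope reward-rate calculation. The sketch below is not a reimplementation of the cited model (Burns, 2005); all parameter values are hypothetical and serve only to show how a faster, less accurate forager can collect more nutrition per unit time than a slower, more accurate one when errors are cheap, and why adding a penalty for errors shifts the optimum towards slower, more accurate decisions.

```python
def reward_rate(p_correct, decision_time_s, handling_time_s=3.0, reward_ul=2.0):
    """Expected nectar collected per second of foraging for one strategy.

    p_correct        probability of choosing a rewarding flower
    decision_time_s  mean time spent inspecting stimuli before each choice
    handling_time_s  time spent drinking when the choice is rewarding
    reward_ul        nectar volume per rewarding flower (microlitres)

    All values are hypothetical and purely illustrative.
    """
    expected_reward = p_correct * reward_ul
    expected_time = decision_time_s + p_correct * handling_time_s
    return expected_reward / expected_time

fast_inaccurate = reward_rate(p_correct=0.70, decision_time_s=1.0)
slow_accurate = reward_rate(p_correct=0.95, decision_time_s=4.0)

print(f"fast bee:     {fast_inaccurate:.2f} ul s^-1")   # ~0.45
print(f"accurate bee: {slow_accurate:.2f} ul s^-1")     # ~0.28
# Under these settings the fast strategy collects more nectar per second;
# penalising errors (e.g. with quinine, or with predation risk) adds a time
# or survival cost to mistakes and favours the slower, more accurate bee.
```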

Honeybees also make sophisticated decisions about the level of accuracy required to efficiently collect nectar from stimuli; in particular, they will only learn perceptually difficult colour tasks if incorrect choices are penalised with an appetitive-aversive differential conditioning protocol, whereas for perceptually easy tasks bees learn to make discriminations independent of an aversive conditioning agent like quinine hemisulphate (Avarguès-Weber et al., 2010a). A plausible explanation for this finding is that with differential conditioning bees develop attention-like mechanisms to enable discrimination between similar target and distractor stimuli (Avarguès-Weber et al., 2010a; Giurfa, 2004). However, clear experimental evidence that attention to stimuli following differential conditioning can be disrupted for fine discrimination tasks is currently not available. Even though the specific mechanism(s) that mediate how bees make decisions for perceptually difficult tasks remain unknown, the results from both bumblebee and honeybee behavioural experiments with colour stimuli clearly show that cognitive factors significantly influence the type of result an experiment will produce with free-flying bees (Skorupski and Chittka, 2011).

Fig. 2.

Bumblebees trained individually in a horizontal flight arena learn colour very differently depending upon the conditioning procedure, and thus show evidence of behavioural plasticity. Similar results have been reported for honeybees trained in a Y-maze apparatus (see text for details). (A) Choice frequencies of bumblebees discriminating between coloured stimuli in non-rewarded trials (N=5 bees ± s.e.m.). Column 1 (absSmall) shows that, with absolute conditioning, bees do not discriminate between similar colours separated by 0.045 hexagon units, whilst column 2 (absLarge) shows that, with absolute conditioning, these same bees can discriminate between colours separated by a large colour distance of 0.152 hexagon units. Column 3 (diffSmall) shows that these same bees, when provided with differential conditioning (see B below), can discriminate between similar colours. Column 4 (Control) shows that an independent control group of bees that was only provided with differential conditioning performs at a similar level of discrimination to the main test group. The horizontal line represents random foraging. (B) Acquisition curve for differential conditioning to perceptually similar colour stimuli shows that bees that initially failed to discriminate a small colour difference with absolute conditioning can learn the colour difference when both target and distractor stimuli are present during training (N=5 bees ± s.e.m.). (C) Differential conditioning forms a long-term memory, as two bees tested were able to repeat the high level of discrimination for a number of days. This suggests that differential conditioning results in a permanent change in the bee brain. Data from Dyer and Chittka (Dyer and Chittka, 2004c).

Evidence of cognitive input into decision making for pattern vision

The evidence in the previous section suggests that individual experience is important for how bees learn target stimuli, and that experienced bees make sophisticated decisions about how to use colour information. This effect of visual experience has also been demonstrated for honeybee spatial vision, where there are significant differences in the accuracy with which individual bees discriminate patterns depending upon whether absolute or differential conditioning is used (Giurfa et al., 1999). Importantly, it has been very well established that by varying the length of training to patterns with differential conditioning, very different levels of discrimination ability are learned by individuals (Dyer et al., 2008; Stach and Giurfa, 2005). It has also been shown that bees can abstract learned features of a pattern to solve novel visual tasks (van Hateren et al., 1990) and even construct representations that combine different parts of the visual field to solve novel tasks (Stach et al., 2004). This, in conjunction with the findings presented in the previous section, shows that bees demonstrate a remarkable degree of behavioural plasticity depending upon the level of experience that an individual forager has with particular stimuli.

This led to the question: ‘What type of images can bees discriminate if they have received extensive differential conditioning?’ One way to address this question is to present bees with stimuli assumed to be so complex that a large mammalian brain should be required to process the information. Face stimuli are ideal in this regard as they are a reasonably homogeneous class of stimuli (all faces have the elemental features of eyes, nose and mouth in similar positions), and mammalian vision is very reliable at recognising upright familiar human faces (Collishaw and Hole, 2000; Kanwisher, 2000; Kendrick et al., 2001; Maurer et al., 2002; Pascalis et al., 2002; Yin, 1969). By using face stimuli from a standard face recognition test (Warrington, 1996), it was possible to provide honeybees with appetitive-aversive differential conditioning and observe that they were able both to learn to discriminate between a target and distractor face, and then also to recognise the target from novel faces (Dyer et al., 2005). This finding that bees can recognise faces was then confirmed in a second study, which also revealed that the honeybee visual system can interpolate between learned face viewpoints (Fig. 3) to subsequently solve novel viewpoint representations of the face stimuli (Dyer and Vuong, 2008). Importantly, face viewpoint invariance for rotation, which is the ability to recognise a rotated, and thus novel, view of a familiar face, was not possible for bees from control groups that only experienced a single image view (Dyer and Vuong, 2008). This finding is consistent with a model of visual processing where learned stimulus representations are stored at one level, and higher level processing enables image interpolation (Logothetis et al., 1994; Logothetis et al., 1995; Poggio and Edelman, 1990). For example, in the primate brain there is evidence that the capacity to process rotated complex stimuli like faces occurs at a higher stage than the initial image representations, in the inferior temporal cortex, which may accumulate responses from a population of view-tuned neurons; by summing responses across different view-tuned neurons, the visual system can average neural responses to facilitate recognition of novel views (Perrett et al., 1998). This level of visual processing in bees, which is more sophisticated than simple elemental-type processing, is likely to be of high value for recognition of natural stimuli like flowers in complex environments (Dyer and Vuong, 2008). However, these findings of bees recognising face stimuli led to the specific hypothesis that bees use only simple cues to recognise complex patterns like faces (Horridge, 2009a).
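
The interpolation account can be made concrete with a toy pooling model in the spirit of view-based recognition schemes (Poggio and Edelman, 1990); the sketch below is an illustration of the principle only, not a model of the bee brain, and the Gaussian tuning width is an arbitrary assumption. Each trained view is represented by a unit broadly tuned to that viewpoint, and recognition strength for a test view is the summed response across trained units: a 30 deg test view lying between trained 0 deg and 60 deg views elicits a strong pooled response, whereas a single trained view, or extrapolation beyond the trained range, does not.

```python
import math

TUNING_WIDTH_DEG = 25.0  # arbitrary assumption for the viewpoint tuning width

def unit_response(test_view, preferred_view, sigma=TUNING_WIDTH_DEG):
    """Gaussian response of a view-tuned unit to a test viewpoint (degrees)."""
    return math.exp(-((test_view - preferred_view) ** 2) / (2.0 * sigma ** 2))

def recognition_strength(test_view, trained_views):
    """Pooled (summed) response across units tuned to the trained views."""
    return sum(unit_response(test_view, view) for view in trained_views)

# Mirror the four training groups of Fig. 3 (numbers are illustrative only)
print(recognition_strength(30, [0]))       # trained 0 deg only      -> ~0.49
print(recognition_strength(30, [60]))      # trained 60 deg only     -> ~0.49
print(recognition_strength(30, [0, 60]))   # interpolation           -> ~0.97
print(recognition_strength(60, [0, 30]))   # extrapolation           -> ~0.54
```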

Fig. 3.

Honeybees trained with differential conditioning to similar faces show a capacity to correctly categorise a novel face viewpoint only if they can interpolate between two known viewpoints. (A) Face stimuli used for training and testing bees show stimulus 1 (S1) and stimulus 2 (S2) at 0 deg (blue), 30 deg (green) and 60 deg (yellow) rotation. (B) Non-rewarded test results (means ± s.d.) for independent groups of bees (N=18 per group). Bees in Group 1 were trained to a 0 deg view and could not subsequently recognise a 30 deg view; bees in Group 2 were trained to a 60 deg view and could not subsequently recognise a 30 deg view. However, when bees in Group 3 were trained to both 0 deg and 60 deg views, these bees could subsequently recognise a novel 30 deg view (by interpolating 0 deg and 60 deg images). Bees in Group 4 could not recognise a novel presentation of 60 deg by extrapolating from learned 0 deg and 30 deg views. Thus, bees can learn through experience to correctly categorise complex spatial stimuli via a mechanism of image interpolation. For non-significant results (n.s.) P>0.35. Data from Dyer and Vuong (Dyer and Vuong, 2008).

Thus, to test the hypothesis that the capacity of bees to learn face stimuli was due to simple or elemental cues, a series of experiments was conducted with both parameterised stimuli and photographs of real faces (Avarguès-Weber et al., 2010b). Bees trained with differential conditioning to sets of parameterised stimuli configured into either face-like or non-face-like arrangements (Fig. 4A) were able to build a representation that enabled the correct classification of novel stimuli in transfer tests (Avarguès-Weber et al., 2010b). Importantly, bees were able to correctly recognise the ‘face’ configuration even when a roughly drawn face-like stimulus was presented against a stimulus that contained the elemental feature information present in the training sets (Fig. 4B). This experiment thus excluded the possibility that low-level cues, like the centre of gravity of the stimulus or the spatial energy distribution, facilitated recognition (Avarguès-Weber et al., 2010b). Bees could only solve these tasks by using information about the relationship between elemental features to mediate the difference between target and distractor stimuli. Other experiments, using similar methodology, excluded cues including symmetry, the location of local spatial elements and brightness. Finally, experiments with photographs of real faces confirmed that, after differential conditioning, bees were able to bind different face components and thus build a configural representation of how elemental features fit together to provide reliable recognition (Avarguès-Weber et al., 2010b). This work clearly shows that the hypothesis that bee brains can only use elemental features (Horridge, 2009b) is not correct. Following extended differential conditioning, the miniature brain of a honeybee can learn to use complex spatial information and employ configural processing to recognise face stimuli (Avarguès-Weber et al., 2010b). This finding is also consistent with previous reports that, for differently coloured elements, honeybees can learn to use configurations rather than the outcomes of individual elements (Giurfa et al., 2003).

This evidence of configural processing in bees has also been extended to show that, with differential conditioning, individual bees can learn a concept in which a given object must always be in the correct arrangement, either above or below a known referent element (Avarguès-Weber et al., 2011b). Importantly, during these experiments the shape (and/or colour) of the object that was viewed in relation to the referent was changed to exclude low-level cues, and bees were subsequently able to pass a transfer test that presented a novel object in the correct location relative to the known referent (Fig. 5). This new experimental work on bee vision indicates that bees are capable of learning to use rules about elemental relationships in spatial vision to solve complex conceptual problems that cannot be solved using simple elemental-type processing, a cognitive task that was previously assumed to require large primate brains (Chittka and Jensen, 2011). Does this mean that bees do not use simple mechanisms? A recent study made a comparative evaluation of how honeybees solved a visual task at a novel visual angle depending upon the conditioning of individual bees (Dyer and Griffiths, 2011). When one group of bees was provided with differential conditioning to the same stimulus pair at a constant small visual angle (50 deg), the bees could learn this task but failed to correctly recognise the target stimulus at a relatively large visual angle of 100 deg. A second group of bees that was provided with differential conditioning to configured stimulus sets that specifically excluded retinotopic matching learned the visual problem in a different way; these bees learned a configural rule and were able to reliably recognise the target stimulus at both the visual angle used during training (50 deg) and a novel visual angle (100 deg). In this case bees had learned a relationship rule that enabled extrapolation to a visual task well beyond the range of variation encountered in the training phase of the experiment. Comparing these two different results from the same study with stimuli of similar configuration suggests that in some cases bees do use a relatively simple retinotopic or elemental-type recognition system, while in other cases their vision is much more complex and consistent with rule-based cognitive visual processing (Dyer and Griffiths, 2011). Importantly, in both experimental conditions, individual honeybees were able to learn configured spatial stimuli at a small visual angle, which was predicted not to be possible according to simple elemental models (Horridge, 2009a; Horridge, 2009b). The reason for this new development in our understanding of insect spatial vision is the use of the appetitive-aversive differential conditioning protocol (Avarguès-Weber et al., 2010a) combined with a modified Y-maze that constrained an individual bee to make a decision at a distance (Dyer and Griffiths, 2011), which underlines the importance of careful experimental control for understanding mechanisms of spatial vision.
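
Because the visual angle subtended by a stimulus depends jointly on its physical width and the bee's distance at the moment of choice, constraining where the decision is made is what fixes the angle. The geometry is simply theta = 2 arctan(w/2d); the stimulus width and distances in the sketch below are hypothetical and are chosen only to show how the same stimulus can subtend roughly 50 deg or 100 deg at different decision distances.

```python
import math

def visual_angle_deg(stimulus_width_cm, viewing_distance_cm):
    """Angle subtended by a flat stimulus of given width at a given distance."""
    return math.degrees(2.0 * math.atan(stimulus_width_cm / (2.0 * viewing_distance_cm)))

def distance_for_angle_cm(stimulus_width_cm, angle_deg):
    """Viewing distance at which the stimulus subtends the requested angle."""
    return stimulus_width_cm / (2.0 * math.tan(math.radians(angle_deg) / 2.0))

width = 6.0  # hypothetical stimulus width (cm)
print(round(visual_angle_deg(width, 6.4), 1))        # ~50 deg when viewed from 6.4 cm
print(round(visual_angle_deg(width, 2.5), 1))        # ~100 deg when viewed from 2.5 cm
print(round(distance_for_angle_cm(width, 50.0), 2))  # decision distance giving 50 deg
```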

Fig. 4.

With differential conditioning, honeybees can learn to use the configuration of multiple elements to solve novel tasks. (A) Six face-like (F1-F6) and six non-face-like (NF1-NF6) stimuli used to train bees to test configural processing. Both stimulus classes were made of the same elements but arranged differently. (B) Performance (means ± s.e.m.; N=8 for each bar) in non-rewarded transfer tests designed to control for potential low-level elemental cues such as the position of the dots, the centre of gravity (COG), the main visual angle or the spatial frequency distribution. In these experiments, bees were trained to choose face-like stimuli. In both transfer tests, bees showed a preference for the novel stimulus that was closer to the face-like category, irrespective of the low-level cue considered [black bar: percentage of choices for the face-like stimulus F6 (both the top position of the eyes and bilateral symmetry were excluded as predictive cues of reward); white bar: percentage of choices for the rough-drawing face-like stimulus (in this test the COG, visual angle, elemental feature cues, line length, symmetry and spatial frequency distribution all predict that bees should choose the non-face-like stimulus); horizontal line, random foraging]. Only configuration predicts choices for the face-like stimulus, which is the stimulus that the bees significantly preferred [modified from Avarguès-Weber et al. (2010b); see original manuscript for additional control experiments].

Evidence of top-down processing influencing visual discrimination following visual experience

When humans view complex visual information, we can often use prior knowledge to help solve novel difficult tasks, a process that involves top-down processing (Miyashita and Hayashi, 2000). There is also evidence that prior experience with visual information can be used by the bee brain to solve novel tasks. If bees are trained in a Y-maze apparatus to discriminate between a disc and a ring that are camouflaged (patterned identically to the background) but presented 6 cm in front of it, so that motion parallax cues are potentially available, they fail to learn the task. However, if a separate group of bees first learns a relatively easy preconditioning task of discriminating a salient black disc and ring presented on a white background, this group of bees can then use the initial visual experience to quickly learn to break the camouflage in the discrimination task described above, and can go on to learn novel camouflage-breaking tasks (Zhang and Srinivasan, 1994; Zhang and Srinivasan, 2004). This experimental work shows that bees can classify and use visual information in a sophisticated way that is not expected from a simple elemental response to stimuli, because the stimuli in the novel tasks are in some cases completely different from those in the easy salient task, and thus it meets the definition of cognition. This work suggests that the bee brain is likely to have multiple levels of visual processing for complex spatial stimuli. Simple elemental processing may happen at an early visual stage and allow for learning of easy or salient stimuli; for example, it has been suggested that bees possess filters that respond in specific ways to parameters like tangential or radial cues of varying sizes (Horridge, 2000; Horridge, 2005; Horridge, 2009a; Horridge, 2009b), and there is at least some physiological basis for this idea as orientation-selective neurons have been found in the lobula region of the bee brain (Yang and Maddess, 1997). However, evidence that prior visual experience can also be used in a top-down-type cascade of information to solve more difficult tasks (Zhang and Srinivasan, 1994; Zhang and Srinivasan, 2004) suggests that higher level input from potential visual integration areas of the bee brain, like the mushroom body, can influence visual perception (Avarguès-Weber et al., 2011a). Bees are also known to have innate preferences for certain shapes like star-shaped flower patterns (Lehrer et al., 1995), and it will be interesting to investigate whether this information can be used to enhance the learning rate for complex natural stimuli like flowers.

In other contexts bumblebees also show evidence of top-down-type processing for making sophisticated decisions about visual stimuli (Chittka et al., 2003; Lynn et al., 2005). For example, if bees are presented with perceptually similar colour stimuli where one type is rewarding and the other is not, a peak shift phenomenon occurs in which bees manage the risk of errors by choosing a novel ‘dissimilar’ colour to which they have received no conditioning (Lynn et al., 2005). Bees thus take account of prior experience with certain stimuli to make sophisticated decisions about which visual stimuli to choose in subsequent contexts. This type of behaviour requires bees to learn, retain, classify and subsequently process visual information in a sophisticated way that cannot be explained by elemental-type processing.

Fig. 5.

Honeybees can learn in a Y-maze the spatial concept of choosing a referent element either above or below a novel object when provided with extended differential conditioning. This capacity shows a complex understanding of relationships between different elements. (A) An example of the conditioning and testing procedure. Half of the bees were rewarded on the ‘target above referent’ relation whereas the other half was rewarded on the ‘target below referent’ relation. The referent pattern was either a disc (as shown) or a cross depending on the group of bees trained. The transfer test was not rewarded. (B) Phase 1 (pre-training): acquisition curve during pre-training (percentage of correct choices as a function of blocks of five trials) and performance (cumulative choices during 45 s test) in the non-rewarded discrimination test (white bar). Data shown are means and s.e.m. (N=20). Bees learned to choose the referent pattern and to discriminate it from other patterns used as targets in the subsequent training phase (**P<0.01; ***P<0.001). (C) Phase 2 (main training experiment): acquisition curve during training (percentage of correct choices as a function of blocks of five trials) and performance (cumulative choices during 45 s test) in the non-rewarded tests (white bars). Data shown are means and s.e.m. (N=20 for acquisition curve and transfer test, and N=8 for controls 1 and 2). The inset shows acquisition performance during the first five trials that integrate the first training block. Bees learned the concept of above/below and transferred it to novel stimuli. Controls 1 and 2 show that the low-level cue of spatial location of the referent on the background was not used as a discrimination cue to solve the task (*P<0.05; **P<0.01; ***P<0.001). The horizontal line in B and C represents random foraging. Data from Avarguès-Weber et al. (Avarguès-Weber et al., 2011b).

Evidence of symbolic rule learning

Some of the most impressive evidence that bees have cognitive input into visual processing comes from a variety of clever behavioural studies that used delayed-matching-to-sample-type tasks (Giurfa et al., 2001; Zhang and Srinivasan, 2004; Zhang et al., 2004). In this type of experiment a sample is presented in a modified Y-maze before a bee enters the decision chamber; once the bee enters the chamber it must choose between the stimulus just viewed and a distractor. Bees can solve this type of visual task even if a variety of different sample stimuli are used, and they can also be flexible and learn to avoid the sample and choose the different stimulus in a delayed-non-matching-to-sample version of the task (Giurfa et al., 2001; Zhang and Srinivasan, 2004). There is also evidence of high-level, top-down-type processing, as bees that learn this type of task using an alternative modality like olfaction can use the learned information, or rule, to solve a novel task when visual stimuli are presented (Giurfa et al., 2001; Srinivasan et al., 1998; Zhang and Srinivasan, 2004). This type of processing cannot be explained by elemental models because a bee has to first learn a rule using one sensory modality such as olfaction and then, in a transfer test, immediately apply that rule to a different modality such as colour or spatial vision. As the low-level sensory inputs for olfactory and visual processing only converge at the mushroom body, which is considered to be an integration processing unit in bees (Avarguès-Weber et al., 2011a), the ability of highly trained individuals to use multimodal rule learning to influence visual processing suggests a top-down cascade of information that can influence visual decision making. This again is strong evidence of a sophisticated brain mediating visual processing in bees, and is not consistent with the idea that visual processing is determined only by low-level elemental processing.
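
A minimal way to see why such rule learning differs from elemental learning is to express the delayed-matching-to-sample decision as a function of the relation between sample and options rather than of any stimulus identity. The sketch below is a conceptual illustration only and makes no claim about how the bee brain implements the rule; because the decision tests only ‘same versus different’, it transfers unchanged to novel stimuli and even to a different sensory modality, whereas a lookup of specific learned stimuli would not.

```python
def choose(sample, options, rule="match"):
    """Apply a delayed-(non-)matching-to-sample rule.

    The choice depends only on the same/different relation between the sample
    and each option, never on what the stimuli actually are, so the same
    learned rule works for colours, patterns or odours alike.
    """
    if rule == "match":
        return next(option for option in options if option == sample)
    if rule == "non-match":
        return next(option for option in options if option != sample)
    raise ValueError("unknown rule")

# A rule acquired with colours ...
print(choose("blue", ["yellow", "blue"]))
# ... transfers unchanged to novel patterns or to a different modality
print(choose("vertical grating", ["vertical grating", "horizontal grating"]))
print(choose("lemon odour", ["mango odour", "lemon odour"]))
print(choose("lemon odour", ["mango odour", "lemon odour"], rule="non-match"))
```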

Categorisation of complex visual information by bees

One important capability of the primate visual system is the capacity to categorise stimuli based upon various features and to correctly assign novel stimuli to these categories (Peelen et al., 2009; Sigala and Logothetis, 2002). For example, the human brain can categorise visual stimuli into distinct groups like animal/non-animal or vehicle/residence (Peelen et al., 2009; Sigala and Logothetis, 2002) and then assign novel stimuli like car/igloo to the correct categories of vehicle/residence. To perform this task, a visual system must use features that are common to a particular stimulus category, even though stimuli within the category can be discriminated from each other (Avarguès-Weber et al., 2010b; Benard and Giurfa, 2008; Benard et al., 2007; Sigala and Logothetis, 2002; Zhang et al., 2004). Studies on pattern vision showed that individual bees could learn to use orientation or symmetry as a common property to make correct decisions about novel stimuli (Giurfa et al., 1996; Horridge and Zhang, 1995; Stach et al., 2004). It has been argued that these studies may not necessarily exclude the possibility that bees make decisions using low-level cues mediated by a feature detection model, because neurons that learn to respond to one type of feature (Yang and Maddess, 1997) may simply generalise to other stimuli with similar features (Horridge, 2009b). However, work on bee categorisation has also used complex groups of natural stimuli, including star-shaped flowers, circular flowers, plants and landscapes, which were shown to exclude low-level cues, and bees could discriminate between stimuli from within a category. Bees were able to correctly assign stimuli to a category, including novel stimuli in transfer phases of the experiment (Zhang et al., 2004). More recent work also confirms that complex object categorisation by bees is possible through the visual system learning to allocate stimuli to groups based upon the relationship between the elements rather than the low-level elemental information (Avarguès-Weber et al., 2011a; Avarguès-Weber et al., 2010b). As shown in Fig. 4B, bees can correctly use configural information to categorise stimuli even when the distractor stimulus contains low-level elemental cues that are more consistent with the training stimulus sets. Evidence that bees can correctly categorise novel stimuli into groups of learned stimuli also comes from the study discussed above showing that bees can interpolate learned representations of a face, but fail the task if visual conditioning is only to a single viewpoint (Dyer and Vuong, 2008). This work clearly shows that visual categorisation of novel stimuli by bees is a sophisticated process that is dependent on prior experience with different stimuli.

Numeric representations for complex visual tasks

One of the main tasks of foraging bees is to fly through complex environments to collect nutrition. The navigational requirements of passing multiple landmarks, and of keeping track of how frequently a flower is rewarding at a foraging patch, have potentially placed evolutionary pressure on the bee brain to develop some form of capacity for counting (Chittka and Geiger, 1995; Gross et al., 2009). This is interesting to consider in relation to visual processing and evidence for or against cognitive inputs, as counting is a task that likely requires some form of high-level conceptual representation in a brain (Dehaene, 1997).

One early study indicated the possibility that honeybees can use a principle of counting to aid in the estimation of the distance that should be travelled to find food rewards (Chittka and Geiger, 1995), and it is known that bees can use multiple strategies for landmark navigation depending upon context (Collett and Zeil, 1997; Vladusich et al., 2005). A subsequent study that investigated the capacity of bees to count landmarks presented in a flight tunnel showed that bees could sequentially count up to four, even though the landmarks themselves were varied to avoid low-level cues like distance, or were even novel landmarks in transfer tests (Dacke and Srinivasan, 2008). A separate study used a delayed-matching-to-sample test in which individual bees had to choose stimuli that matched the number of elements in the sample stimulus (Gross et al., 2009). The experiment carefully controlled for low-level cues like colour, edge length and combined area (these were equalised in separate controls), and showed that bees could represent the concept of one versus two elements, two versus three elements and then, in novel transfer tests, three versus four elements (Gross et al., 2009). These studies suggest that bees may count using an exact numeric mechanism (Chittka and Geiger, 1995; Dacke and Srinivasan, 2008; Gross et al., 2009), and exploring the mechanisms underlying this capacity promises to provide important new insights into the richness of cognitive processing in bees. Importantly, in the context of this review, the ability of bees to count cannot be explained by simple elemental models of visual processing; rather, the behavioural results suggest a sophisticated higher level of visual processing that is consistent with a cognitive input to how experienced bees perceive visual stimuli. Interestingly, the capacity of bees to count appears to be limited to numbers less than five, which has some interesting parallels with the counting capacity of young human children prior to formal education (Le Corre and Carey, 2007).
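
A key methodological point in these counting studies is that low-level cues co-varying with element number must be removed. One common control is to hold the summed area (or, in a separate stimulus set, the summed edge length) of the elements constant across different numerosities; the arithmetic for circular elements is shown in the sketch below (the dimensions are hypothetical). Note that equalising area does not equalise edge length, which is why the cited experiments used separate control sets.

```python
import math

def element_radius_for_equal_area(n_elements, total_area_cm2):
    """Radius of each of n identical discs so that their summed area is constant."""
    return math.sqrt(total_area_cm2 / (n_elements * math.pi))

TOTAL_AREA_CM2 = 9.0  # hypothetical summed area, held constant across numerosities
for n in (1, 2, 3, 4):
    radius = element_radius_for_equal_area(n, TOTAL_AREA_CM2)
    summed_edge = n * 2.0 * math.pi * radius
    print(f"{n} elements: radius {radius:.2f} cm, summed edge length {summed_edge:.2f} cm")
# With area equalised, summed edge length still grows with n (as sqrt(n)),
# so edge length has to be equalised in a separate control stimulus set.
```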

Concluding remarks

Models of visual processing need to consider the potential individual differences of bees for two main reasons. Firstly, there is now overwhelming evidence that bees trade off accuracy for speed (Burns and Dyer, 2008; Chittka et al., 2003; Chittka et al., 2009; Dyer and Chittka, 2004b; Ings and Chittka, 2008), and thus free-flying bees are only likely to perform a perceptually difficult task close to their limit of capability if a combination of appetitive and aversive differential conditioning is used in an experiment (Avarguès-Weber et al., 2010a; Chittka et al., 2003). Secondly, the level of visual experience that an individual bee has with a particular set of stimuli has a very significant effect on how the bee can solve, or even learn to subsequently solve, novel visual problems (Dyer et al., 2008; Stach and Giurfa, 2005; Zhang and Srinivasan, 1994). There is thus very strong evidence from a number of independent groups that the visual system of bees has the capacity to use higher level information to guide decisions in a sophisticated way that is not expected from a simple elemental response to stimuli. This strongly suggests that bees have an ability to develop a cognitive capacity that can then be used to guide visual decision making (Avarguès-Weber et al., 2011a; Zhang and Srinivasan, 2004). Thus, there appears to be no mystery about cognitive ability in bees. Data from several experiments show that highly trained and experienced bees exhibit behaviour that is consistent with the definition of cognition (Avarguès-Weber et al., 2011b; Chittka et al., 2003; Dacke and Srinivasan, 2008; Dyer and Griffiths, 2011; Dyer and Vuong, 2008; Gross et al., 2009; Ings and Chittka, 2008; Zhang et al., 2004; Zhang and Srinivasan, 1994).

It is also possible, indeed likely, that in some cases bees exhibit behaviours that are more consistent with simple elemental-type processing (Horridge, 2009a; Horridge, 2009b). These two possibilities about insect vision need not be mutually exclusive, and there is evidence, at least for colour visual processing, that the bee brain may contain multiple separate pathways that facilitate either simple hard-wired behaviours or plasticity for learning how to use more complex stimuli if required in a particular context (Dyer et al., 2011). Behavioural experiments suggest that pattern vision in bees may be mediated by separate mechanisms that include retinotopic matching and/or configural rule learning, depending upon the context and the level of individual experience of a bee (Dyer et al., 2005; Efler and Ronacher, 2000; Giger and Srinivasan, 1995). Recent work that evaluated bee spatial vision using a combination of appetitive and aversive differential conditioning over an extended training period clearly shows that bees can learn a task stated to be impossible according to simple elemental models of bee vision (Dyer and Griffiths, 2011). This new behavioural work thus shows that models of insect vision based upon simple mechanistic explanations need to be revised to consider how individual differences, experience effects and indeed cognitive influences can significantly affect the results and interpretation of behavioural experiments.

Future work may consider how the evidence of multiple mechanisms for pattern vision (Dyer and Griffiths, 2011; Dyer et al., 2005; Efler and Ronacher, 2000; Giger and Srinivasan, 1995) and the potential use of top-down processing are managed by the different levels of the bee brain. It will also be of high value to understand the limits of the different visual mechanisms and how bees can best make use of them depending upon the context and experience of an individual. This Commentary presents strong evidence that bees exhibit behaviours consistent with our current definition of cognition, but we still know very little about the mechanisms underlying cognitive-type behaviour in bees. This promises to be a fruitful area of research in the future.

Glossary
  • Absolute-differential conditioning

    Absolute conditioning is a form of conditioning in which a bee learns to collect appetitive rewards for visiting target stimuli in isolation (no perceptually similar stimuli are present during learning). Differential conditioning is a form of conditioning in which a bee learns to collect an appetitive reward for visiting target stimuli in the presence of perceptually similar distractor stimuli that present no reward. Appetitive-aversive differential conditioning is where target stimuli present rewards whilst distractor stimuli present aversive substances like bitter-tasting quinine solution.

  • Cognition

    Cognition is defined here as an ability to learn, retain, classify and process visual information in a sophisticated way that is not predicted by simple mechanistic or elemental responses to stimuli.

  • Colorimetry

    Colorimetry is the science of quantifying and physically describing the perceptual colour capabilities of a bee or other animal. It combines physiological measurements of bee colour photoreceptor sensitivities, measurements of physical spectra and illumination, and behavioural responses of bees to stimuli. For example, the reflectance properties of ‘target’ and ‘distractor’ colour stimuli can be measured with a spectrophotometer and then plotted in a colour space such that the numeric distance between stimuli allows predictions of the probability with which a bee can discriminate between these respective colours.

  • Configural processing

    Configural processing is where an animal learns complex visual stimuli by taking into account not only the individual components but also the relationships between these components.

  • Delayed-matching-to-sample

    For delayed-matching-to-sample experiments, bees are first presented with a sample stimulus and then second with a choice of stimuli, one of which is the sample stimulus and is reinforced with an appetitive reward. During training, the sample is regularly changed so that to solve the task bees must learn the rule to choose the ‘shown’ sample in the second stage. Bees can do this task with novel stimuli, even from different sensory modalities, or can even learn to not choose the sample.

  • Elemental-type processing

    Elemental-type processing is where an animal learns separately about each of the component stimuli that may make up complex visual stimuli.

  • Hexagon colour units

    Hexagon colour space is one current model of bee colour perception that enables the position of colour stimuli to be quantified, visually represented and for colorimetry to be done. Distance from the centre of the colour space to the edge equals unity. This colour space has been calibrated with behavioural experiments where bees choose between colours of different similarities with differing probabilities.

  • Mushroom body

    The mushroom body is a part of the bee brain implicated in integration of information from lower level regions of sensory processing.

  • Retinotopic matching

    Retinotopic matching is a rigid model of visual processing where a stimulus image is represented as a coincidence of elemental cues behind the retina to correspond to the exact or photographic layout of learned stimuli.

  • Rule learning

    In rule learning, the animal learns to use information about the relationship between elemental components, not the elements themselves. Rules can be easily transferred to novel stimuli that preserve the learned relationship.

  • Spatial energy distribution

    Spatial energy distribution is the relative distribution of low-, medium- and high-frequency spatial patterns (e.g. gratings) within a stimulus.

  • Transfer test

    A transfer test is conducted following a learning phase (e.g. see absolute-differential conditioning described above) such that a trained bee is presented with novel transfer stimuli to dissect the mechanisms mediating visual perception.

Funding

A.G.D. is grateful to the Alexander von Humboldt Foundation and the ARC DP0878968/0987989 for funding support.

Acknowledgements

I am grateful for comments on the manuscript by Lalina Muir, David Reser and anonymous reviewers, and for honeybee training resources provided by the Jock Marshall Reserve at Monash University, and Michael Grant.

References

Avarguès-Weber, A., de Brito Sanchez, M. G., Giurfa, M. and Dyer, A. G. (2010a). Aversive reinforcement improves visual discrimination learning in free flying honeybees. PLoS ONE, e15370.

Avarguès-Weber, A., Portelli, G., Benard, J., Dyer, A. and Giurfa, M. (2010b). Configural processing enables discrimination and categorization of face-like stimuli in honeybees. J. Exp. Biol. 213, 593-601.

Avarguès-Weber, A., Deisig, N. and Giurfa, M. (2011a). Visual cognition in social insects. Annu. Rev. Entomol. 56, 423-443.

Avarguès-Weber, A., Dyer, A. G. and Giurfa, M. (2011b). Conceptualization of above and below relationships by an insect. Proc. R. Soc. B 278, 898-905.

Backhaus, W. (1991). Color opponent coding in the visual system of the honeybee. Vision Res. 31, 1381-1397.

Backhaus, W., Menzel, R. and Kreissl, S. (1987). Multidimensional scaling of color similarity in bees. Biol. Cybern. 56, 293-304.

Benard, J. and Giurfa, M. (2008). The cognitive implications of asymmetric color generalization in honeybees. Anim. Cogn. 11, 283-293.

Benard, J., Stach, S. and Giurfa, M. (2007). Categorization of visual stimuli in the honeybee Apis mellifera. Anim. Cogn. 9, 257-270.

Burns, J. G. (2005). Impulsive bees forage better: the advantage of quick, sometimes inaccurate foraging decisions. Anim. Behav. 70, e1-e5.

Burns, J. G. and Dyer, A. G. (2008). Diversity of speed accuracy strategies benefits social insects. Curr. Biol. 18, R953-R954.

Chittka, L. (1992). The color hexagon: a chromaticity diagram based on photoreceptor excitations as a generalized representation of colour opponency. J. Comp. Physiol. A 170, 533-543.

Chittka, L. and Geiger, K. (1995). Can honeybees count landmarks? Anim. Behav. 49, 159-164.

Chittka, L. and Jensen, K. (2011). Animal cognition: concepts from apes to bees. Curr. Biol. 21, R116-R119.

Chittka, L. and Wells, H. (2004). Color vision in bees: mechanisms, ecology and evolution. In How Simple Nervous Systems Create Complex Perceptual Worlds (ed. F. Prete), pp. 165-191. Boston: MIT Press.

Chittka, L., Dyer, A. G., Bock, F. and Dornhaus, A. (2003). Bees trade off foraging speed for accuracy. Nature 424, 388.

Chittka, L., Skorupski, P. and Raine, N. E. (2009). Speed-accuracy tradeoffs in animal decision making. Trends Ecol. Evol. 24, 400-407.

Collett, T. S. and Zeil, J. (1997). The selection and use of landmarks by insects. In Orientation and Communication in Arthropods (ed. M. Lehrer), pp. 41-65. Basel: Birkhäuser Verlag.

Collishaw, S. M. and Hole, G. J. (2000). Featural and configural processes in the recognition of faces of different familiarity. Perception 29, 893-909.

Dacke, M. and Srinivasan, M. V. (2008). Evidence for counting in insects. Anim. Cogn. 11, 683-689.

Dehaene, S. (1997). The Number Sense: How The Mind Creates Mathematics. New York: Oxford University Press.

Dukas, R. (2004). Evolutionary biology of animal cognition. Annu. Rev. Ecol. Evol. Syst. 35, 347-374.

Dyer, A. G. and Chittka, L. (2004a). Biological significance of distinguishing between similar colours in spectrally variable illumination: bumblebees (Bombus terrestris) as a case study. J. Comp. Physiol. A 190, 105-114.

Dyer, A. G. and Chittka, L. (2004b). Bumblebees (Bombus terrestris) sacrifice foraging speed to solve difficult colour discrimination tasks. J. Comp. Physiol. A 190, 759-763.

Dyer, A. G. and Chittka, L. (2004c). Fine colour discrimination requires differential conditioning in bumblebees. Naturwissenschaften 91, 224-227.

Dyer, A. G. and Griffiths, D. W. (2011). Seeing near and seeing far: behavioural evidence for dual mechanisms of pattern vision in the honeybee (Apis mellifera). J. Exp. Biol. 215, 397-404.

Dyer, A. G. and Neumeyer, C. (2005). Simultaneous and successive colour discrimination in the honeybee (Apis mellifera). J. Comp. Physiol. A 191, 547-557.

Dyer, A. G. and Vuong, Q. C. (2008). Insect brains use image interpolation mechanisms to recognise rotated objects. PLoS ONE 3, e4086.

Dyer, A. G., Neumeyer, C. and Chittka, L. (2005). Honeybee (Apis mellifera) vision can discriminate between and recognise images of human faces. J. Exp. Biol. 208, 4709-4714.

Dyer, A. G., Whitney, H. M., Arnold, S. E. J., Glover, B. J. and Chittka, L. (2007). Mutations perturbing petal cell shape and anthocyanin synthesis influence bumblebee perception of Antirrhinum majus flower colour. Arthropod Plant Interact. 1, 45-55.

Dyer, A. G., Rosa, M. G. P. and Reser, D. H. (2008). Honeybees can recognise images of complex natural scenes for use as potential landmarks. J. Exp. Biol. 211, 1180-1186.

Dyer, A. G., Paulk, A. C. and Reser, D. H. (2011). Colour processing in complex environments: insights from the visual system of bees
.
Proc. Royal Soc. B
278
,
952
-
959
.
Efler
D.
,
Ronacher
B.
(
2000
).
Evidence against retinoptic-template matching in honeybees pattern recognition
.
Vision Res.
40
,
3391
-
3403
.
Frisch
K. v.
(
1967
).
The Dance Language and Orientation of Bees
.
Cambridge, USA
:
Harvard University Press
.
Giger
A. D.
,
Srinivasan
M. V.
(
1995
).
Pattern recognition in honeybees: eidetic imagery and orientation discrimination
.
J. Comp. Physiol. A
176
,
791
-
795
.
Giurfa
M.
(
2004
).
Conditioning procedure and color discrimination in the honeybee Apis mellifera
.
Naturwissenschaften
91
,
228
-
231
.
Giurfa
M.
,
Eichmann
B.
,
Menzel
R.
(
1996
).
Symmetry perception in an insect
.
Nature
382
,
458
-
461
.
Giurfa
M.
,
Hammer
M.
,
Stach
S.
,
Stollhoff
N.
,
Müller-Deisig
N.
,
Mizyrycki
C.
(
1999
).
Pattern learning by honeybees: conditioning procedure and recognition strategy
.
Anim. Behav.
57
,
315
-
324
.
Giurfa
M.
,
Zhang
S.
,
Jenett
A.
,
Menzel
R.
,
Srinivasan
M. V.
(
2001
).
The concepts of ‘sameness’ and ‘difference’ in an insect
.
Nature
410
,
930
-
933
.
Giurfa
M.
,
Schubert
M.
,
Reisenman
C.
,
Gerber
B.
,
Lachnit
H.
(
2003
).
The effect of cumulative experience on the use of elemental and configural visual discrimination strategies in honeybees
.
Behav. Brain Res.
145
,
161
-
169
.
Gross
H. J.
,
Pahl
M.
,
Si
A.
,
Zhu
H.
,
Tautz
J.
,
Zhang
S. W.
(
2009
).
Number-based visual generalisation in the honeybee
.
PLoS ONE
4
,
e4263
.
Horridge
A.
(
2000
).
Seven experiments on pattern vision of the honeybee, with a model
.
Vision Res.
40
,
2589
-
2603
.
Horridge
A.
(
2005
).
What the honeybee sees: a review of the recognition system of Apis mellifera
.
Physiol. Entomol.
30
,
2
-
13
.
Horridge
A.
(
2009a
).
Generalization in visual recognition by the honeybee (Apis mellifera): a review and explanation
.
J. Insect. Physiol.
55
,
499
-
511
.
Horridge
A.
(
2009b
).
What does an insect see?
J. Exp. Biol.
212
,
2721
-
2729
.
Horridge
G. A.
,
Zhang
S. W.
(
1995
).
Pattern vision in honeybees (Apis mellifera): flower-like patterns with no predominant orientation
.
J. Insect. Physiol.
41
,
681
-
688
.
Ings
T. C.
,
Chittka
L.
(
2008
).
Speed accuracy tradeoffs and false alarms in bee responses to cryptic predators
.
Curr. Biol.
18
,
1520
-
1524
.
Kanwisher
N.
(
2000
).
Domain specificity in face recognition
.
Nat. Neurosci.
3
,
759
-
763
.
Kendrick
K. M.
,
Costa
A. P.
,
Leigh
A. E.
,
Hinton
M. R.
,
Peirce
J. W.
(
2001
).
Sheep don't forget a face
.
Nature
414
,
165
-
166
.
Le Corre
M.
,
Carey
S.
(
2007
).
One, two, three, four, nothing more: an investigation of the conceptual sources of the verbal counting principles
.
Cognition
105
,
395
-
438
.
Lehrer
M.
(
1999
).
Dorsoventral asymmetry of colour discrimination in bees
.
J. Comp. Physiol. A
184
,
195
-
206
.
Lehrer
M.
,
Horridge
G. A.
,
Zhang
S. W.
,
Gadagkar
R.
(
1995
).
Shape vision in bees: innate preference for flower-like patterns
.
Phil. Trans. R. Soc. B
347
,
123
-
137
.
Logothetis
N. K.
,
Pauls
J.
,
Bülthoff
H. H.
,
Poggio
T.
(
1994
).
View-dependent object recognition by monkeys
.
Curr. Biol.
4
,
401
-
414
.
Logothetis
N. K.
,
Pauls
J.
,
Poggio
T.
(
1995
).
Shape representation in the inferior temporal cortex of monkeys
.
Curr. Biol.
5
,
552
-
563
.
Lynn
S. K.
,
Cnaani
J.
,
Papaj
D. R.
(
2005
).
Peak shift discrimination learning as a mechanism of signal evolution
.
Evolution
59
,
1300
-
1305
.
Maurer
D.
,
Le Grand
R.
,
Mondloch
C. J.
(
2002
).
The many faces of configural processing
.
Trends Cog. Sci.
6
,
255
-
260
.
Miyashita
Y.
,
Hayashi
T.
(
2000
).
Neural representation of visual objects: encoding and top-down activation
.
Curr. Opin. Neurobiol.
10
,
187
-
194
.
Pascalis
O.
,
de Haan
M.
,
Nelson
C. A.
(
2002
).
Is face processing species-specific during the first year of life?
Science
296
,
1321
-
1323
.
Peelen
M. V.
,
Fei-Fei
L.
,
Kastner
S.
(
2009
).
Neural mechanisms of rapid natural scene categorization in human visual cortex
.
Nature
460
,
94
-
97
.
Perrett
D. I.
,
Oram
M. W.
,
Ashbridge
E.
(
1998
).
Evidence accumulation in cell populations responsive to faces: an account of generalization of recognition without mental transformations
.
Cognition
67
,
111
-
145
.
Poggio
T.
,
Edelman
S.
(
1990
).
A network that learns to recognize three-dimensional objects
.
Nature
343
,
263
-
266
.
Shettleworth
S. J.
(
2001
).
Animal cognition and animal behaviour
.
Anim. Behav.
61
,
277
-
286
.
Sigala
N.
,
Logothetis
N. K.
(
2002
).
Visual categorization shapes feature selectivity in the primate temporal cortex
.
Nature
415
,
318
-
320
.
Skorupski
P.
,
Chittka
L.
(
2011
).
Is colour cognitive?
Optics Laser Tech.
43
,
251
-
260
.
Spaethe
J.
,
Tautz
J.
,
Chittka
L.
(
2001
).
Visual constraints in foraging bumblebees: flower size and color affect search time and flight behavior
.
Proc. Natl. Acad. Sci. USA
98
,
3898
-
3903
.
Srinivasan
M. V.
,
Zhang
S. W.
,
Zhu
H.
(
1998
).
Honeybees link sights to smells
.
Nature
396
,
637
-
638
.
Stach
S.
,
Giurfa
M.
(
2005
).
The influence of training length on generalization of visual feature assemblies in honeybees
.
Behav. Brain Res.
161
,
8
-
17
.
Stach
S.
,
Bernard
J.
,
Giurfa
M.
(
2004
).
Local-feature assembling in the visual pattern recognition and generalization in honeybees
.
Nature
429
,
758
-
761
.
van Hateren
J. H.
,
Srinivasan
M. V.
,
Wait
P. B.
(
1990
).
Pattern recognition in bees: orientation discrimination
.
J. Comp. Physiol. A
167
,
649
-
654
.
Vladusich
T.
,
Hemmi
J. M.
,
Srinivasan
M. V.
,
Zeil
J.
(
2005
).
Interactions of visual odometry and landmark guidance during food search in honeybees
.
J. Exp. Biol.
208
,
4123
-
4135
.
Vorobyev
M.
,
Brandt
R.
(
1997
).
How do insect pollinators discriminate colors?
Isr. J. Plant Sci.
45
,
103
-
113
.
Warrington
E. K.
(
1996
).
Short Recognition Memory Test for Faces.
Windsor
:
Psychology Press
.
Winston
M.
(
1987
).
The Biology of the Honeybee.
Cambridge
:
Harvard University Press
.
Yang
E.-C.
,
Maddess
T.
(
1997
).
Orientation-sensitive neurons in the brain of the honey bee (Apis mellifera)
.
J. Insect Physiol.
43
,
329
-
336
.
Yin
R. K.
(
1969
).
Looking at upside down faces
.
J. Exp. Psych.
81
,
141
-
145
.
Zhang
S. W.
,
Srinivasan
M. V.
(
1994
).
Prior experience enhances pattern discrimination in insect vision
.
Nature
368
,
330
-
332
.
Zhang
S. W.
,
Srinivasan
M. V.
(
2004
).
Exploration of cognitive capacity in honeybees: higher functions emerge from a small brain
. In
Complex Worlds From Simpler Nervous Systems
(ed.
Prete
F. R.
), pp.
41
-
74
.
Cambridge
:
MIT Press
.
Zhang
S. W.
,
Srinivasan
M. V.
,
Zhu
H.
,
Wong
J.
(
2004
).
Grouping of visual objects by honeybees
.
J. Exp. Biol.
207
,
3289
-
3298
.