Edge detection is important for object detection and recognition. However, we do not know whether edge statistics accurately predict the detection of prey by potential predators. This is crucial given the growing availability of image analysis software and its application across non-human visual systems. Here, we investigated whether Boundary Strength Analysis (BSA), Local Edge Intensity Analysis (LEIA) and the Gabor edge disruption ratio (GabRat) could predict the speed and success with which triggerfish (Rhinecanthus aculeatus) detected patterned circular stimuli against a noisy visual background, in both chromatic and achromatic presentations. We found various statistically significant correlations between edge statistics and detection speed depending on treatment and viewing distance; however, individual pattern statistics only explained up to 2% of the variation in detection time, and up to 6% when considering edge statistics simultaneously. We also found changes in fish response over time. While highlighting the importance of spatial acuity and relevant viewing distances in the study of visual signals, our results demonstrate the importance of considering explained variation when interpreting colour pattern statistics in behavioural experiments. We emphasize the need for statistical approaches suitable for investigating task-specific predictive relationships and ecological effects in animal behaviour. This is particularly important given the ever-increasing dimensionality and size of datasets in the field of visual ecology.

Edge detection is crucial to the perception of spatial detail and informs cognitive processes such as object detection and discrimination (Bhagavatula et al., 2009; Cronin et al., 2014; Endler, 2006; Ruxton et al., 2018; Stevens and Cuthill, 2006). Therefore, edges should have an important function in defensive animal colouration. For example, edges can allow animals to hide against visual backgrounds by breaking up an animal's outline via disruptive camouflage (Cuthill et al., 2005; Endler, 2006; Troscianko et al., 2017). Alternatively, highly contrasting edges can help emphasize outlines of animals or body parts, helping to generate potent visual signals, such as those used for aposematic or deimatic signalling (Ruxton et al., 2018). Animals and objects with edge intensity distributions, frequencies, regularity and orientations matching those of the background tend to be difficult to detect or discriminate, whereas salient visual signals contrast against their visual background and are therefore easier to detect (Endler, 1978). In addition to informing object detectability per se, variation in edge contrast can have a profound impact on saliency and, thus, search optimization (Green et al., 2018; Krummenacher et al., 2010).

To approximate the perception of edge contrast at early stages of visual processing, colour pattern analyses relevant to animal vision can be performed using calibrated digital photography (Stevens et al., 2007), specifically using the Multispectral Image Calibration & Analysis (MICA) toolbox (Troscianko and Stevens, 2015) and its integrated frameworks such as Quantitative Colour Pattern Analysis (QCPA) (van den Berg et al., 2020b). There has been much work on quantifying various aspects of colour patterns including edge contrast, but few attempts to relate colour pattern statistics to ecologically relevant, task-specific behaviour using animal behaviour experiments. This problem is common in the study of defensive animal colouration, where the speed and reliability with which a predator can detect and locate prey are crucial in determining the survival rates of patterned prey, and hence the evolution of cryptic (Galloway et al., 2020) or conspicuous (Speed and Ruxton, 2005) defensive colouration.

The perception of spatial detail and thus edge contrast depends on the acuity of an animal observer and the distance at which a visual signal is observed (Caves et al., 2016; Endler, 1978), which can dramatically alter the function of animal colouration. Despite the known species and task specificity of neuronal processing and cognition, few colour pattern statistics have been investigated for their ability to reflect ecological significance in a specific context for a particular animal observer. Investigations of whether and how such modelled data correlate with animal behaviour are crucial, particularly given the steady increase in available image analysis methods and subsequent increase of data dimensionality.

Edge-detecting colour pattern analyses in the QCPA include the Gabor edge disruption ratio (GabRat) (Troscianko et al., 2017), Boundary Strength Analysis (BSA) (Endler et al., 2018) and Local Edge Intensity Analysis (LEIA) (van den Berg et al., 2020b). Troscianko et al. (2017) demonstrated that GabRat explained the detection speed of artificial greyscale moth stimuli in an achromatic search task for humans better than 12 other edge-detecting pattern metrics. However, GabRat has not yet been used in combination with spatial acuity and cone catch modelling assuming non-human observers. Sibeaux et al. (2019) used BSA to quantify female mate choice in Trinidadian guppies (Poecilia reticulata). However, there have been no studies investigating BSA in a predation context, specifically in relation to detection speed and success, nor in combination with spatial acuity and cone catch modelling. Lastly, while LEIA has been used in a study quantifying camouflage in precocial chicks (Rohr et al., 2021), no study has quantified correlations between LEIA statistics and animal behaviour.

To address these gaps, we investigated how QCPA edge detection analyses correlate with the response of a fish observer in a controlled experimental predation task. We investigated whether GabRat, BSA or LEIA could predict the speed and success with which triggerfish, Rhinecanthus aculeatus, detected a circular stimulus of variable internal patterning against a noisy visual background. To do so, we applied a range of analyses using both the investigation of individual statistics and dimensionality reduction analyses. We conducted two experiments with achromatic (treatment 1) and chromatic (treatment 2) stimuli to investigate differences in search performance between stimuli when only luminance noise was present, in contrast to stimuli with both chromatic and luminance noise (Gegenfurtner and Kiper, 1992).

Edge-detecting pattern analyses in QCPA

BSA measures the colour and luminance contrast along edges, considering the relative abundance of boundaries inside colour patterns (Endler et al., 2018). The relative abundance of boundaries between colour pattern elements is determined using a transition matrix, which counts synonymous and non-synonymous pixel transitions along horizontal and vertical transects across an image (Endler, 2012). LEIA quantifies colour and luminance edge contrast across a scene or object at the scale of individual edge detectors, as it does not rely on a segmented image. Instead, LEIA quantifies edge contrast by averaging the vertical, horizontal and diagonal edge contrast at the scale of the minimum resolvable acuity of the modelled visual system. This provides a non-parametric approach to the measurement of edge distributions in an image, as image statistics are derived from histograms of edge distributions (van den Berg et al., 2020b). GabRat was developed to reflect the functional principles of disruptive camouflage, quantifying the relative proportion and intensity of edges running orthogonally to the outline of an object. This is achieved by the use of Gabor filters with dimensions specified by the λmax of the contrast sensitivity function (CSF) (Troscianko et al., 2017). Here, we investigated a total of 17 pattern edge statistics (BSA: 6; GabRat: 1; LEIA: 10; for a more detailed description of the statistics used in this study, see Table S1).
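
To illustrate the transition-matrix idea, a minimal R sketch is given below; it is not QCPA code, and the `transition_matrix` helper and the toy image of cluster IDs are hypothetical.

```r
# Minimal sketch of a BSA-style transition matrix (not QCPA code): count pairs
# of neighbouring cluster IDs along horizontal and vertical transects.
transition_matrix <- function(ids) {
  k <- max(ids)
  tm <- matrix(0L, k, k)
  for (r in seq_len(nrow(ids))) {        # horizontal neighbours
    for (c in seq_len(ncol(ids) - 1)) {
      a <- ids[r, c]; b <- ids[r, c + 1]
      tm[a, b] <- tm[a, b] + 1L
    }
  }
  for (r in seq_len(nrow(ids) - 1)) {    # vertical neighbours
    for (c in seq_len(ncol(ids))) {
      a <- ids[r, c]; b <- ids[r + 1, c]
      tm[a, b] <- tm[a, b] + 1L
    }
  }
  tm
}

# Toy 10x10 'image' of three cluster IDs; off-diagonal entries of tm are
# non-synonymous transitions, i.e. boundaries between colour pattern elements.
ids <- matrix(sample(1:3, 100, replace = TRUE), 10, 10)
tm  <- transition_matrix(ids)
boundary_abundance <- tm[row(tm) != col(tm)] / sum(tm)
```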

Chromatic and achromatic contrast in BSA and LEIA is determined using the log-scale receptor noise limited (RNL) model (Vorobyev and Osorio, 1998), which expresses differences in cone outputs relative to receptor noise as the distance ΔS. By plotting the individual colour pattern elements in RNL colour space (Kelber et al., 2003; Vorobyev et al., 2001), distances (ΔS) can be calculated and used as approximations of receiver-specific colour and luminance contrast perception. We applied the model using the log of cone catches, as this permits the use of Weber fractions (Vorobyev et al., 2001). GabRat measures contrast intensity as the response of a Gabor filter running along the shape outline of a greyscale intensity cone catch image (Troscianko et al., 2017; van den Berg et al., 2020b). In this study, we restricted the use of GabRat to the photoreceptor channel responsible for luminance contrast perception, as achromatic acuity is thought to be superior and dominant over chromatic acuity in triggerfish (Cheney et al., 2022).
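
As an illustration of the underlying calculation, the following R sketch implements the log-scale RNL distance for a trichromat following Vorobyev and Osorio (1998); the function name `rnl_ds` and the example cone catches are hypothetical, while the Weber fractions are those used in this study.

```r
# Sketch of the log-scale RNL chromatic distance for a trichromat, following
# Vorobyev and Osorio (1998). q1 and q2 are cone-catch triplets (sw, mw, lw)
# for two colours; e holds the corresponding Weber fractions.
rnl_ds <- function(q1, q2, e) {
  df <- log(q1) - log(q2)  # receptor contrasts on a log scale
  num <- e[1]^2 * (df[2] - df[3])^2 +
         e[2]^2 * (df[1] - df[3])^2 +
         e[3]^2 * (df[1] - df[2])^2
  den <- (e[1] * e[2])^2 + (e[1] * e[3])^2 + (e[2] * e[3])^2
  sqrt(num / den)  # distance in JND units (deltaS)
}

# Example with the chromatic Weber fractions used in this study
rnl_ds(q1 = c(0.20, 0.35, 0.45), q2 = c(0.22, 0.33, 0.45),
       e = c(0.07, 0.05, 0.05))
```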

Study species

We used six adult triggerfish, Rhinecanthus aculeatus (Linnaeus 1758), a common shallow reef inhabitant found throughout the Indo-Pacific, which feeds on algae, detritus and invertebrates (Randall et al., 1997). The species is easy to train and its visual system has been well studied (Cheney et al., 2022). It has trichromatic vision based on a single cone (short-wavelength photoreceptor, λmax=413 nm) and a double cone (medium-wavelength member, λmax=480 nm; long-wavelength member, λmax=528 nm) (Cheney et al., 2013). The double cone members are used independently in colour vision (Pignatelli et al., 2010), but are thought to be used in tandem for luminance vision (Siebeck et al., 2014), as in other animals such as birds and lizards (Lythgoe, 1979). For this study, we assumed both members to be responsible for luminance contrast perception (van den Berg et al., 2020a).

Fish were obtained from an aquarium supplier (Cairns Marine Pty Ltd, Cairns, QLD, Australia), shipped to The University of Queensland, Brisbane, and housed in individual 120 l tanks (40 cm×80 cm×40 cm, W×L×H). Aquaria were divided into two halves by a removable black PVC partition. All animals had been housed at The University of Queensland for 2–4 years and used in previous behavioural experiments, which facilitated training as the animals had already learned to peck at visual stimuli for food. Experiments were conducted consecutively between September 2020 and February 2021. All experimental procedures for this study were approved by the University of Queensland Animal Ethics Committee (SBS/077/17).

Background design

Using a custom-written Matlab (version r2019b, MathWorks) script (originally written by J.A.E. and modified by C.v.d.B.), a 14 cm×14 cm noisy background was created on which target stimuli were displayed. The background was designed to mimic the spatial frequency distribution of natural marine habitats on a coral reef, determined using images from Lizard Island (Great Barrier Reef) taken in February 2019 with a Nikon D810 in a Nauticam housing at depths of less than 3 m, illuminated by natural sunlight. These images were segmented using QCPA's RNL clustering algorithm, with QCPA's Gaussian acuity modelling assuming a triggerfish with an acuity of 3 cycles per degree (cpd) (Champ et al., 2014) observing the scenes at a distance of 10 cm, a luminance just noticeable difference (JND) threshold of 4ΔS (van den Berg et al., 2020a) and a colour discrimination threshold of 2ΔS (Green et al., 2022). The images were subjected to five cycles of RNL ranked filtering with a radius of 5 pixels and a falloff of 3. The resulting clustering was then used to determine the size distribution of randomly distributed clusters of 15,000 randomly shaped polygonal colour pattern elements belonging to 32 separate classes of equidistant 8-bit RGB values, ranging from 0-0-0 to 255-255-255 RGB for the achromatic background and from 0-0-0 to 0-255-0 RGB for the chromatic background (Fig. 1).
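
For illustration, the 32 equidistant 8-bit classes can be constructed as in the following R sketch (a minimal reconstruction, not the original Matlab script):

```r
# Illustrative reconstruction of the 32 equidistant 8-bit colour classes used
# for the achromatic (grey) and chromatic (green) backgrounds.
levels_8bit <- round(seq(0, 255, length.out = 32))
achromatic_palette <- cbind(R = levels_8bit, G = levels_8bit, B = levels_8bit)
chromatic_palette  <- cbind(R = 0L,          G = levels_8bit, B = 0L)
```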

Fig. 1.

Achromatic and chromatic stimulus presentation. (A,B) Examples of stimulus 1 randomly orientated on the noisy background for the achromatic (A) and chromatic (B) presentation. (C,D) Receptor noise limited (RNL) intensity images showing trichromatic (triggerfish) chromatic contrast ΔS values (scale shown top right) for stimulus 1 in the achromatic (C) and chromatic (D) presentation. Blue arrows indicate stimulus position for illustrative purposes.

Stimulus design

We chose circles for the overall shape and symmetric circular internal patterning, as these guaranteed equal numerical responses from the pattern analyses across trials, independent of rotation and viewing angle. Using repetitions of identical internal shapes further allowed boundary contrast to be numerically calibrated so that it was theoretically even across all stimuli when measured as the abundance-weighted mean luminance contrast between pattern elements (BSA.BML, Table S1). This calibration allowed pattern contrast to vary within a mutual design constraint across all stimuli while allowing internal pattern variation. A set of four target stimuli and a training stimulus (diameter 1 cm) were developed using additional custom-written Matlab scripts (Table 1). Each stimulus comprised a black background and a variable combination of internal patterning composed of four regularly arranged smaller spots (diameter 0.25 cm) (Table 1). Assuming a spatial acuity of 3 cpd (Champ et al., 2014), the internal spots would be discernible by the fish at distances below 4.8 cm, whereas the entire stimulus would be visible from as far as 19 cm; the divider separating the fish from the stimulus was 30 cm away.

Table 1.

Parameters for stimuli used in the experiment

For the achromatic treatment, the internal dots were grey, with equal RGB values for each pattern element (e.g. 80-80-80 RGB). For the chromatic (green) treatment, the same patterning was repeated, but the R and B pixel values remained fixed at 0, while the G value was identical to that of the achromatic treatment. As a result, the achromatic treatment had close to no measurable chromatic contrast (<1ΔS) between and within stimuli and backgrounds as modelled for the visual system of a triggerfish, whereas the chromatic treatment did (Fig. 1C,D).

Stimulus placement and session design

Using another custom-written Matlab script, each stimulus was placed on the respective background at nine positions (top-left, top-middle, top-right, middle-left, middle-middle, middle-right, bottom-left, bottom-middle, bottom-right) in a random orientation (Fig. 1). We then compiled sessions of six stimulus presentations, using five stimulus and position combinations drawn pseudo-randomly from all possible combinations. To balance each session, the sixth stimulus and position combination was determined by pseudo-randomly choosing from the least represented positions and stimuli of that session. Specifically, about 50% of trials (n=480) in the achromatic treatment were determined using a random number generator. To make sure each position was presented equally often by the end of all trials, this initial set of trials was augmented by a set of trials calculated using random sampling without replacement. For the chromatic treatment, all trials were pre-determined with random sampling without replacement. This session design achieved a minimum of 30 repetitions of each stimulus for each animal (mean±s.d. 38.75±4.04), consisting of at least 3 repetitions of each position for each stimulus (4.16±1.30), across all achromatic sessions (n=27) and chromatic sessions (n=29). To ensure each stimulus was tested equally often across animals and positions by the end of trials, stimulus and position frequencies were compared using a Chi-square test, which confirmed there were no significant differences. Stimuli were displayed on an iPad Air 2 fitted with a matte screen protector and placed in a waterproof case (Lifeproof Nuud iPad case) with brightness set to maximum. The iPad was then placed parallel to the aquarium floor, 10 cm from the bottom and against the back wall.
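
A minimal R sketch of this balancing logic is shown below; it is a simplified reconstruction, not the original Matlab script, and the helper names (`make_session`, `pick1`) are hypothetical.

```r
# Simplified sketch of the session balancing described above: five pseudo-random
# stimulus-position draws plus a sixth drawn from the least represented
# stimuli and positions of that session.
stimuli   <- 1:4
positions <- c("TL", "TM", "TR", "ML", "MM", "MR", "BL", "BM", "BR")
combos    <- expand.grid(stimulus = stimuli, position = positions,
                         stringsAsFactors = FALSE)
pick1 <- function(x) x[sample.int(length(x), 1)]  # safe even for length-1 x

make_session <- function() {
  session <- combos[sample(nrow(combos), 5), ]  # five pseudo-random draws
  # sixth combination drawn from the least represented stimuli and positions
  s_counts <- table(factor(session$stimulus, levels = stimuli))
  p_counts <- table(factor(session$position, levels = positions))
  s6 <- pick1(as.integer(names(s_counts)[s_counts == min(s_counts)]))
  p6 <- pick1(names(p_counts)[p_counts == min(p_counts)])
  rbind(session, data.frame(stimulus = s6, position = p6))
}

set.seed(1)
make_session()  # one session of six stimulus presentations
```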

Stimulus quantification

Image analysis was performed with the MICA toolbox (version 2.2) running on ImageJ (version 1.53a) using a custom-designed automated QCPA batch script. To quantify the edge contrast captured by each of the three pattern analyses (BSA, GabRat, LEIA), a calibrated Olympus PEN E-PL5 camera with a 60 mm macro lens was used to photograph each of the nine replicated positions for each stimulus and treatment in air, illuminated by the same lighting used in the trials. The pictures were taken in the dark (as LED screens emit rather than reflect light), and the brightest patch class (white or bright green) was used to calibrate each image using the ‘estimate black point’ function when creating the normalized and standardized multispectral image files (.mspec). The ‘white’ patches (255-255-255 RGB) were determined to have a reflectance of 72.5% (technically a radiance measurement, but the input to QCPA is in reflectance) by comparing the radiance of the patches with that of a Spectralon (Ocean Optics) white standard illuminated by a PX-2 light source (Ocean Optics). A chromatic cut-off (average cone catch per pixel below which no chromaticity is possible) was set at 3%, preventing artificial chromaticity due to minor absolute differences between cones. Some chromatic contrast (<2ΔS) remained in the achromatic treatment when viewed by triggerfish (Fig. 1C,D), as RGB is deliberately designed for human colour vision. However, residual chromaticity was well below thresholds determined to be relevant in triggerfish (van den Berg et al., 2020a).
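
As an illustration of the calibration logic, the following R sketch shows how a screen 'reflectance' follows from the ratio of measured radiances; the radiance values are hypothetical and chosen only to reproduce the 72.5% figure.

```r
# Illustration of the calibration logic: the 'reflectance' assigned to the
# white screen patch is its radiance relative to a diffuse white standard
# under the same reference light (values hypothetical).
white_patch_radiance    <- 5.8e3  # spectrometer counts (hypothetical)
white_standard_radiance <- 8.0e3  # same units, Spectralon standard (hypothetical)
white_patch_radiance / white_standard_radiance * 100  # 72.5 (%)
```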

Each image (n=36) was manually segmented into stimulus and corresponding background using manual image segmentation in ImageJ (Schneider et al., 2012), and colour patterns were analysed using an automated QCPA script. Each image was analysed at a modelled viewing distance of 2, 5, 10 and 30 cm using the Gaussian acuity modelling function in QCPA, resizing the images to a ratio of 5 pixels per minimally resolvable angle (MRA), thus removing spatial detail that cannot be resolved at a given distance (van den Berg et al., 2020b). These viewing distances are within the range of distances encountered by the fish from pecking a stimulus to observing the stimulus from the divider (30 cm). GabRat analysis was performed assuming a 1 cpd peak contrast acuity based on the CSF curve shape of the black-faced blenny (Tripterygion delaisi) (Santon et al., 2019), the only marine fish for which a CSF is currently available. Despite the distant relationship, the general shape of the CSF is representative of most vertebrates, peaking at around a third of the maximum acuity (da Silva Souza et al., 2011). For LEIA, the images were further subjected to five cycles of RNL ranked filtering with a radius of 5 pixels and a falloff of 3 to remove artificial colour gradients introduced during the acuity modelling. LEIA values were obtained from the untransformed edge histograms. For BSA, the images were further subjected to RNL clustering (Fig. S1), using a 2ΔS chromatic and 4ΔS achromatic threshold, based on empirical findings from past studies (Green et al., 2022; van den Berg et al., 2020a). RNL contrast was determined using Weber fractions of 0.07:0.05:0.05:0.05 for the short wavelength:medium wavelength:long wavelength:double cone (sw:mw:lw:dbl) spectral sensitivities and a white LED illuminant spectrum, as per van den Berg et al. (2020a). Weber fractions were calculated assuming a receptor noise of 0.05 and a relative cone abundance of 1:2:2:2 (sw:mw:lw:dbl).
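
The reported Weber fractions are reproduced by one common convention, in which receptor noise is referenced to the most abundant cone class and scaled by the square root of relative cone abundance; a short R sketch assuming this convention:

```r
# Sketch of a Weber fraction calculation that reproduces the reported values:
# receptor noise of 0.05 referenced to the most abundant cone class, scaled by
# the square root of relative cone abundance (sw:mw:lw:dbl = 1:2:2:2).
noise     <- 0.05
abundance <- c(sw = 1, mw = 2, lw = 2, dbl = 2)
weber     <- noise * sqrt(max(abundance) / abundance)
round(weber, 2)  # sw 0.07, mw 0.05, lw 0.05, dbl 0.05
```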

Animal training

Using operant conditioning, fish were trained to peck at a piece of squid placed on a black spot (diameter 1 cm) randomly positioned (using its natural adhesive properties) on a uniform grey background displayed on an iPad. Once fish had pecked at the food on the target, they were given a second piece of squid from above with tweezers. The size of the food reward on the target spot was subsequently reduced until the fish were pecking at the target spot without any food on it. Next, the target was changed to a patterned spot (Table 1, stimulus 1) on a plain background and, finally, on a noisy background (Fig. 1). Fish moved to the next training stage when successful in >80% of trials over six consecutive sessions of six trials per day. A trial was considered unsuccessful if the fish took longer than 30 s (measured using a stopwatch) after swimming through the door of the divider to make a choice, or if it pecked at the background more than twice. As fish sometimes became distracted, we allowed two wrong pecks before concluding that a fish had not detected the target. Testing was suspended for the day if a fish showed multiple timeouts (failure to peck at the stimulus within 30 s). Once fish had completed treatment 1 (achromatic stimuli), they were re-trained for treatment 2 (chromatic stimuli) and had to meet the training criteria prior to commencing trials.

Animal testing

For both treatments, six individuals were tested. However, two individuals did not complete the chromatic trials and were excluded from the chromatic data analysis. Every fish was tested for one session of six trials per day, with each session compiled as described above. Stimulus 1 was not drawn in the first three sessions of the achromatic treatment (Fig. 2). As in training, a trial was considered unsuccessful if the fish took longer than 30 s to make a choice or if it pecked at the background more than twice. Time to detection was recorded as the time between the moment the fish moved past the divider and the successful peck at the target spot.

Fig. 2.

Marginal effects plot of the detection time for each session and stimulus summarized across all animals. Achromatic (left): n=6, chromatic (right): n=4; 95% confidence intervals indicated by shading.

Statistical analysis

All statistics were performed in R (v4.0, http://www.R-project.org/). Time to detection was left-skewed and was transformed using an ordered quantile normalizing transformation in the bestNormalize package (Peterson and Cavanaugh, 2020) prior to subsequent analyses to ensure normality. Pattern statistics were normalized using the preProcess function in the caret package (v6.0-88, https://CRAN.R-project.org/package=caret) with the ‘range’ option to restrict values to a range from 0 to 1. To ensure the image statistics at each viewing distance (2, 5, 10, 30 cm) captured the pattern differences between the stimuli, we used regularized discriminant analysis (RDA) (Friedman, 1989) to predict the stimulus category, implemented with the caret and klaR (Weihs et al., 2011) R packages. This confirmed that the selected pattern statistics effectively delineated the stimuli: the RDA-trained model predicted stimulus type with 100% accuracy at all viewing distances.
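
A condensed R sketch of these preprocessing and classification steps is given below; the data frame `dat`, its column names and the vector `stat_cols` (naming the 17 edge statistics) are hypothetical placeholders.

```r
# Condensed sketch of the preprocessing and RDA steps (variable names hypothetical)
library(bestNormalize)
library(caret)

dat$time_t <- orderNorm(dat$time)$x.t  # ordered quantile normalization

rng <- preProcess(dat[stat_cols], method = "range")  # rescale to [0, 1]
dat[stat_cols] <- predict(rng, dat[stat_cols])

# RDA predicting stimulus identity from the statistics (80/20 split)
idx <- createDataPartition(dat$stimulus, p = 0.8, list = FALSE)
fit <- train(stimulus ~ ., data = dat[idx, c("stimulus", stat_cols)],
             method = "rda", trControl = trainControl(method = "cv"))
mean(predict(fit, dat[-idx, ]) == dat$stimulus[-idx])  # test accuracy
```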

Differences in time to detection between stimuli of each treatment were tested with a linear mixed effects model accounting for session number as a fixed effect (after confirming the absence of an interaction with stimulus) and fish ID as a random effect, using the lmer function in the lme4 package (Bates et al., 2015). Failure rates between stimuli were compared with a Fisher exact test in the rstatix package (v0.7.0, https://CRAN.R-project.org/package=rstatix).
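
In outline (with hypothetical variable names), the model and test could look as follows:

```r
# Outline of the detection-time model and failure-rate comparison
library(lme4)
library(rstatix)

# Transformed detection time, session as fixed effect, fish ID as random effect
m <- lmer(time_t ~ stimulus + session + (1 | fish_id), data = dat)
summary(m)

fisher_test(table(dat$stimulus, dat$failed))  # failure rates between stimuli
```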

We then investigated whether colour pattern metrics correlated with detection speed. For BSA, only the original BSA statistics provided by the QCPA were used to capture luminance and chromatic contrast (Table S1). For GabRat, only the achromatic (dbl cone) contrast was used, as the acuity for luminance contrast detection was assumed to dominate the acuity provided by the chromatic channels (Lind and Kelber, 2011). We considered both the statistics of the stimuli themselves and their contrast against the visual background, the latter calculated as the absolute difference between a given stimulus statistic and that of the corresponding background. As GabRat measures the appearance of a stimulus edge against its background, GabRat values were identical in both instances.

To identify each pattern statistic's ability to predict detection speed, the transformed time to detection was modelled for each pattern statistic by fitting a linear mixed effects model using the lmer function in the lme4 package. Fish ID was added as a random effect to account for individual differences, with viewing distance as a nested random effect. The amount of deviance in the time to detection explained by each pattern statistic was quantified using the pamer.fnc function in the LMERConvenienceFunctions R package (v3.0, https://CRAN.R-project.org/package=LMERConvenienceFunctions). As per Troscianko et al. (2017), we did not apply alpha corrections, to avoid inflating type II errors. The position of the stimulus and the trial number were included as fixed effects with an interaction term to reduce the amount of unexplained variation in each model and to account for learning effects, as per Troscianko et al. (2017).
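
A sketch of one such single-statistic model, assuming hypothetical column names (`stat` standing in for one normalized edge statistic):

```r
# Sketch of one single-statistic model with fish ID and viewing distance as
# nested random effects and position x trial as fixed covariates.
library(lme4)
library(LMERConvenienceFunctions)

m_stat <- lmer(time_t ~ stat + position * trial + (1 | fish_id / distance),
               data = dat)
pamer.fnc(m_stat)  # per-term F-tests with % deviance explained
```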

A principal component regression (PCR) analysis was applied to all pattern statistics at each viewing distance, using the pls package (https://CRAN.R-project.org/package=pls), to find the set of principal components explaining the maximum amount of variation in detection speed. To identify the best combination of fully weighted predictors, a stepwise regression analysis with sequential replacement was conducted using the leaps package (https://CRAN.R-project.org/package=leaps) to identify the model with the lowest prediction error. PCR and RDA were performed by randomly selecting 80% of the data as training data and 20% as test data.
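
A sketch of both analyses, again with hypothetical variable names and the 80/20 split described above:

```r
# Sketch of the PCR and stepwise selection analyses (variable names hypothetical)
library(pls)
library(leaps)

train_idx <- sample(nrow(dat), floor(0.8 * nrow(dat)))
train <- dat[train_idx, ]
test  <- dat[-train_idx, ]

# PCR with cross-validation over the number of components
pcr_fit <- pcr(time_t ~ ., data = train[c("time_t", stat_cols)],
               scale = TRUE, validation = "CV")
best_k  <- which.min(RMSEP(pcr_fit)$val["adjCV", 1, -1])  # drop intercept
pred    <- predict(pcr_fit, test, ncomp = best_k)

# Stepwise regression with sequential replacement over all statistics
sw_fit <- regsubsets(time_t ~ ., data = train[c("time_t", stat_cols)],
                     nvmax = length(stat_cols), method = "seqrep")
summary(sw_fit)$which[which.max(summary(sw_fit)$adjr2), ]  # best subset
```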

Differences in detection speed and success between stimuli

Treatment 1: achromatic stimuli

We conducted a total of 866 achromatic trials, and fish successfully detected the target stimulus in 809 of them; 57 of 866 targets therefore went undetected. There were no individual differences in failure rate between fish across all stimuli (Fisher exact test, P=0.955). Failure rates between stimuli were generally low but varied significantly (stimulus 1: 3.58% or 8 out of 223, stimulus 2: 9.76% or 20 out of 205, stimulus 3: 3.18% or 7 out of 220, stimulus 4: 10.10% or 22 out of 218; Fisher exact test, P=0.004).

Detection speeds varied significantly between stimuli (F796.21=4.69, P=0.003) with stimulus 1 being detected significantly faster than stimulus 2, 3 and 4. Detection times improved over time for stimulus 2 (F179.53=9.55, P=0.002), stimulus 3 (F206.41=7.39, P=0.007) and stimulus 4 (F194=10.87, P<0.001), but not for stimulus 1 (F208.49=1.75, P=0.10) (Fig. 2).

Treatment 2: chromatic stimuli

We conducted a total of 692 chromatic trials, and fish successfully detected the target stimulus in 677 of them. Failure rates in the chromatic treatment were lower than those in the achromatic treatment (2.17% versus 6.58%, measured as the proportion of failed trials out of all trials). There were no individual differences in failure rate between fish across all stimuli (Fisher exact test, P=0.212). While failure rates between stimuli did not vary significantly (Fisher exact test, P=0.176), the pattern of failure rates between stimuli resembled the achromatic treatment, with stimuli 1 and 3 having the lowest failure rates (stimulus 1: 0.58% or 1 out of 172, stimulus 2: 4.10% or 7 out of 171, stimulus 3: 1.69% or 3 out of 177, stimulus 4: 2.33% or 4 out of 172).

Detection times for the chromatic context did not vary between stimuli (F666.06=2.01, P=0.11). However, while detection times generally improved over time (F669=4.56, P=0.03), those for individual stimuli did not: stimulus 1 (F166.03=0.84, P=0.36), stimulus 2 (F159.75=0.86, P=0.36), stimulus 3 (F169.16=1.02, P=0.32), stimulus 4 (F163.51=2.47, P=0.12).

Investigating individual pattern statistics to predict detection speed

The amount of explained variation in detection speed varied substantially between analyses, but overall was very low. Significant single-statistic analyses considering viewing distance as a random factor were rarely able to explain more than 1% of variation (maximum 1.22%, minimum 0.06%, mean±s.d. 0.35±0.29%; Tables 2 and 3). Considering the viewing distances separately yielded larger proportions of explained variation (maximum 1.65%, minimum 0.24%, mean±s.d. 0.82±0.37%; Figs 3 and 4; see Table S2 for details).

Fig. 3.

Explained variation in detection speed by stimulus pattern. Variation in detection speed explained by the different analyses using (A) only the stimulus pattern statistics and (B) the difference between the stimulus and its respective background. SND, all significant single statistics with distance as random effect; SWD, all significant single statistics for each viewing distance; STPW, stepwise regression analysis for each viewing distance; PCR, principal component regression analysis for each viewing distance.

Fig. 4.

Explained variation in detection speed by each statistic at each distance. (A) Achromatic treatment, (B) chromatic treatment and (C) combined data of both treatments, considering only stimulus statistics. (D) Achromatic treatment, (E) chromatic treatment and (F) combined data of both treatments, considering stimulus against background statistics. For image statistic abbreviations, see Table S1.

Table 2.

Summary table of the proportion of deviance explained by each model tested considering stimulus appearance by itself

Table 3.

Summary table of the proportion of deviance explained by each model tested considering the pattern contrast between each stimulus and its background

Considering all pattern statistics simultaneously explained more variation: the stepwise regression analysis explained 2–4% (maximum 4.18%, minimum 0.81%, mean±s.d. 2.52±0.85%; Fig. 3), whereas the PCR analysis explained up to 6% (maximum 6.33%, minimum 0.11%, mean±s.d. 2.93±1.69%; Fig. 3), with statistics from all three pattern analyses represented in all models.

However, different sets of pattern statistics were relevant at different viewing distances (Fig. 4; see Tables S2–S4 for details). Considering the average explained variation in fish behaviour, there was seemingly little difference between using the stimulus statistics themselves and using the respective stimulus–background contrasts. However, there were substantial differences between considering the stimulus by itself or in the context of its background when considering each statistic individually (Fig. 4), with the maximum explained variability in detection speed coinciding with the estimated limit of internal stimulus pattern perception (∼5 cm) for most comparisons (but see Fig. 4D).

We demonstrate that different types of edge-detecting colour pattern analyses in the QCPA framework (BSA, LEIA, GabRat) correlated significantly with stimulus detectability, at least at some viewing distances. However, individual edge contrast statistics only explained a very low percentage of variation in detection speeds, and we show that considering multiple statistics in combination was more efficient in explaining behavioural variation than considering individual statistics. Stimulus detectability in our study therefore appears to be subject to a complex array of factors, with edge contrast being one of many. We further show that differences in internal patterning of visual stimuli lead to significant changes in search optimization, detection speed and detection success.

We show that some aspects of visual appearance are (relatively speaking) more important in determining detection speed at certain distances, while not being relevant at others. This is intuitive, as brightness, colour and pattern geometry change differently as a function of viewing distance, spectral sensitivity, photoreceptor abundance and discrimination threshold. Our results align with previous findings that estimating a few statistics of early-stage visual processing only reflects a fraction of the visual and cognitive processing underlying the ultimate behaviour (Troscianko et al., 2017). As viewing distances increase, finer internal patterns start to blur, subsequently changing the appearance of both the stimulus and its internal patterning, as well as the visual background. As a result, various pattern properties captured by the analyses used in this study change, leading to a variable landscape of distance-dependent correlations between pattern statistics and animal behaviour (Fig. 4; Table S2). However, it remains unclear whether and how detection speed is explained by the edge statistics used in the specific context of this study, given the overall low degree of explained variation. Furthermore, feature detectors in visual systems operate at different spatial scales (Elder and Sachs, 2004; Hubel and Wiesel, 1962), and considering multiple viewing distances in QCPA is akin to testing multiple spatial scales. Such variable feature detection emphasizes the importance of considering multiple viewing distances when using QCPA or any colour pattern statistics. Doing so could contribute to our understanding of differences in acuity estimates derived from behavioural experiments as opposed to histology, or between behavioural experiments using different sets of stimuli, paradigms and viewing distances.

We further show that the average correlation of pattern statistics with detection speed is similar when considering the stimuli by themselves or with their respective visual backgrounds. However, there are large relative differences in behaviour prediction success between pattern statistics and viewing distances depending upon whether one considers the background or not (Fig. 4). Indeed, tactical signal design is intrinsically linked to the perception of a signal in the context of its visual background (Guilford and Dawkins, 1991). Therefore, it is inappropriate to ignore the backgrounds in studies where visual signals are observed against a multitude of different visual backgrounds (Lind, 2016; Niskanen and Mappes, 2005; Pike, 2018). We expect stronger differences between prediction success with and without backgrounds when conducting similar experiments with highly variable visual backgrounds.

Our data show a pronounced effect of increasing detection speed over time in the achromatic treatment, but not in the chromatic treatment. However, the effect was not equally strong for all stimuli. While detection speed for stimuli 2, 3 and 4 improved significantly over time, stimulus 1 was detected at maximal speed from the beginning (Fig. 2), indicating adaptation in how efficiently the animals were able to detect certain stimuli. This could be explained by a gradual change in search pattern (Credidio et al., 2012) coinciding with shifts in selective attention to specific features (Langley et al., 1996). Such shifts are a crucial effect of colour pattern diversity on predator cognition and a key mechanism behind apostatic selection, as well as the evolution and persistence of colour pattern variability and polymorphism in nature (Bond and Kamil, 2006).

Differences in detection speed could also be due to the presence of a distinct visual feature unique to stimulus 1, making it significantly easier to detect. While stimulus 1 was not presented in the first three sessions, the response to it is profoundly different from that to the other stimuli despite a similar difference in internal contrasts and, importantly, this difference persists throughout the duration of the achromatic trials (Fig. 2). This makes the difference in animal response unlikely to be the consequence of a novelty effect due to the delay in presentation. While the range of contrast intensity in stimulus 1 is not unique, the potential presence of a diagonal symmetry axis is (Fig. 1). The salience of this axis could be explained by mechanisms of perceptual grouping (Brooks, 2014) and/or the presence of feature detectors with differential selectivity to stimulus orientation (Hubel and Wiesel, 1962). Perceptual grouping is crucial to the strategic design of defensive colouration, such as the function of disruptive colouration (Espinosa and Cuthill, 2014) as well as background matching (Dimitrova and Merilaita, 2012). Unsurprisingly then, perceptual grouping can also aid in the detection of patterned prey, emphasizing the potential importance of symmetry in salient visual signals, especially when seen against an irregular background (Forsman and Merilaita, 1999, 2003). However, we are not aware of any existing colour pattern analyses capable of quantifying ‘illusory’ features created by perceptual grouping in both human and non-human observers. This therefore remains an intriguing area for future research.

The absence of improved detection over time does not explain the reduced error rate for stimulus 3, which is as low as that for stimulus 1. Interestingly, stimulus 3 features a single high-contrast marking, distinguishing it from stimuli 2 and 4 (Table 1). Thus, while not making the stimulus easier to detect (i.e. by aiding a switch from sequential to parallel search; Sagi and Julesz, 1984), the bold marking potentially helps in identifying the stimulus upon detection. This may highlight cognitive differences between object detection and recognition, and thus between tactical and strategic signal design (Guilford and Dawkins, 1991; Hebets and Papaj, 2005). This is relevant in studies investigating the ecological significance of animal colouration, as well as studies investigating psychophysical thresholds (e.g. Santiago et al., 2020; van den Berg et al., 2020a). Given the salient markings of stimulus 1, the coinciding reduction in error rates across both treatments makes sense, although we cannot delineate whether this is caused by facilitated detection (presence of the stimulus), discrimination (identity of the stimulus) or both.

Developing approaches for the analysis of high-dimensional visual modelling data is a key requirement for the investigation of colour pattern space (van den Berg et al., 2020b). By using a variety of dimensionality reduction approaches, we highlight a subset of tools which can be used to navigate high-dimensional datasets such as those provided by the QCPA. The processing of visual signals, from the moment information is registered by photoreceptors to the moment a behavioural response is observed, is variable in specificity and complexity. It is therefore reasonable to assume that anything from a single pattern statistic to complex multiparameter interactions correlates with ecologically relevant animal behaviour. We therefore suggest starting with an unbiased presumption of the potential validity of a large array of pattern statistics. This provides an objective approach towards identifying key morphological features contributing to the behavioural outcome under investigation, rather than focusing on a select few image statistics from the start. While our study focused on a detection task in the context of a specific set of edge-detecting pattern statistics, it is reasonable to assume that different types of visual information and cognitive processes are relevant for observed or assumed behavioural and ecological outcomes. These properties are reflected by different, task-specific constellations of pattern statistics, of which this study considers only a few.

Our study investigated just 17 of the >200 QCPA image statistics, all of which exclusively measure edge contrast. This array of statistics therefore captures only a very limited set of visual features. Different pattern statistics capture different aspects of visual signals, which are affected differently by natural selection. Consequently, high levels of correlation between specific colour pattern statistics and animal behaviour can be found in one context and not another. For example, while significantly correlated, GabRat by itself barely explained 1% of variation in animal behaviour in this specific experiment (Table 1), whereas Troscianko et al. (2017) found that up to 11% of variation in human search behaviour was explained by stimulus appearance when quantified using GabRat. This discrepancy could be explained by the low degree of internal patterning variability close to the edge of each stimulus in our study, a pattern property (edge disruption) that GabRat has been specifically designed to quantify. Furthermore, the acuity value used to fit the Gabor filter size in our analysis, and the modelling parameters in general, are conservative estimates, as opposed to the well-documented human parameters used by Troscianko et al. (2017).

Despite correlating significantly with detection time, edge contrast metrics in this perceptual context do not appear to capture a single dominating perceptual property of the stimuli driving variation in animal behaviour. In fact, given the low degree of explained variation independent of the statistical approach (Fig. 3), it is evident that the pattern statistics used in our analysis do not by themselves explain behavioural outcomes in the specific context of our study. This emphasizes the need to consider the variation explained by the image statistics themselves when assessing the strength of reported statistics and subsequently presumed correlations between colour pattern statistics and their biological significance. This is rarely done in behavioural studies investigating colour pattern statistics. Indeed, it is common to find large amounts of variation in animal behaviour not directly attributed to a specific colour pattern descriptor despite a significant deviation from H0 (e.g. Sibeaux et al., 2019, 2021). For example, while the fixed effect Lum.CoV (the covariance of LEIA luminance edge contrast) explains 1.40% of variation by itself (Table S2), the mixed effects linear model explains 79.03% of variation. This is on par with, if not superior to, values of linear mixed effects models reported in comparable studies such as that of Sibeaux et al. (2019), who report explained variation of their models at around 50%. Therefore, the distinction between the cumulatively explained variation of fixed and random effects and the explained variation of a fixed effect within a mixed effects model is crucial.

In conclusion, our study highlights the importance of broad and differentiated approaches when inferring ecological relevance from colour pattern statistics. We demonstrate the use of QCPA and its edge-detecting statistics for the quantification of detection speed and success in ecologically relevant viewing contexts. We further provide evidence for a cautious approach towards identifying the pattern statistics responsible for a behavioural response. Specifically, we recommend the unbiased consideration of large sets of colour pattern statistics unless a specific hypothesis warrants pre-emptively narrowing down candidate image statistics. We acknowledge the many remaining unknowns involved in visual modelling, and we affirm the continued need for context-specific behavioural testing of theories and hypotheses brought about by theoretical modelling. This, consequently, requires continued testing of, and comparisons between, colour pattern analyses as they continue to radiate alongside the growing diversity of perceptual and ecological contexts in which they are applied.

Acknowledgements

We would like to thank Nicholas Condon for his assistance with the creation of automated QCPA scripts, and various volunteers for assistance with animal husbandry and data entry. We acknowledge The University of Queensland's Research Computing Centre (RCC) (Weiner and Awoonga) for its support in the computing of image statistics. We would also like to thank three anonymous reviewers and Trevor Price for constructive and detailed feedback on the manuscript.

Author contributions

Conceptualization: C.P.v.d.B.; Methodology: C.P.v.d.B.; Software: C.P.v.d.B., J.A.E.; Validation: C.P.v.d.B., J.A.E., K.L.C.; Formal analysis: C.P.v.d.B.; Investigation: C.P.v.d.B., D.E.J.P.; Resources: C.P.v.d.B., K.L.C.; Data curation: C.P.v.d.B.; Writing - original draft: C.P.v.d.B.; Writing - review & editing: C.P.v.d.B., J.A.E., K.L.C.; Visualization: C.P.v.d.B.; Supervision: C.P.v.d.B., K.L.C.; Project administration: C.P.v.d.B., K.L.C.; Funding acquisition: K.L.C.

Funding

This work was funded by an Australian Research Council Grant FT190100313 awarded to K.L.C. Open access funding provided by The University of Queensland. Deposited in PMC for immediate release.

Data availability

The data are available from UQ e-space: https://doi.org/10.48610/05034f4. All custom-written Matlab code is available from GitHub: https://github.com/cedricvandenberg?tab=repositories.

References

Bates, D., Mächler, M., Bolker, B. and Walker, S. (2015). Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1-48.
Bhagavatula, P., Claudianos, C., Ibbotson, M. and Srinivasan, M. (2009). Edge detection in landing budgerigars (Melopsittacus undulatus). PLoS ONE 4, e7301.
Bond, A. B. and Kamil, A. C. (2006). Spatial heterogeneity, predator cognition, and the evolution of color polymorphism in virtual prey. Proc. Natl. Acad. Sci. USA 103, 3214-3219.
Brooks, J. L. (2014). Traditional and new principles of perceptual grouping. In Oxford Handbook of Perceptual Organization (ed. J. Wagemans), pp. 1-31. Oxford: Oxford University Press.
Caves, E. M., Frank, T. M. and Johnsen, S. (2016). Spectral sensitivity, spatial resolution, and temporal resolution and their implications for conspecific signalling in cleaner shrimp. J. Exp. Biol. 219, 597-608.
Champ, C. M., Wallis, G., Vorobyev, M., Siebeck, U. and Marshall, J. (2014). Visual acuity in a species of coral reef fish: Rhinecanthus aculeatus. Brain Behav. Evol. 83, 31-42.
Cheney, K. L., Newport, C., McClure, E. C. and Marshall, N. J. (2013). Colour vision and response bias in a coral reef fish. J. Exp. Biol. 216, 2967-2973.
Cheney, K. L., Hudson, J., de Busserolles, F., Luehrmann, M., Shaughnessy, A., van den Berg, C., Green, N. F., Marshall, N. J. and Cortesi, F. (2022). Seeing Picasso: an investigation into the visual system of the triggerfish Rhinecanthus aculeatus. J. Exp. Biol. 225, jeb243907.
Credidio, H. F., Teixeira, E. N., Reis, S. D. S., Moreira, A. A. and Andrade, J. S., Jr (2012). Statistical patterns of visual search for hidden objects. Sci. Rep. 2, 920.
Cronin, T. W., Johnsen, S., Marshall, N. J. and Warrant, E. J. (2014). Visual Ecology. Princeton University Press.
Cuthill, I. C., Stevens, M., Sheppard, J., Maddocks, T., Párraga, C. A. and Troscianko, T. S. (2005). Disruptive coloration and background pattern matching. Nature 434, 72-74.
da Silva Souza, G., Gomes, B. D. and Silveira, L. C. L. (2011). Comparative neurophysiology of spatial luminance contrast sensitivity. Psychol. Neurosci. 4, 29-48.
Dimitrova, M. and Merilaita, S. (2012). Prey pattern regularity and background complexity affect detectability of background-matching prey. Behav. Ecol. 23, 384-390.
Elder, J. H. and Sachs, A. J. (2004). Psychophysical receptive fields of edge detection mechanisms. Vision Res. 44, 795-813.
Endler, J. A. (1978). A predator's view of animal color patterns. Evol. Biol. 11, 319-364.
Endler, J. A. (2006). Disruptive and cryptic coloration. Proc. R. Soc. B 273, 2425-2426.
Endler, J. A. (2012). A framework for analysing colour pattern geometry: adjacent colours. Biol. J. Linn. Soc. 107, 233-253.
Endler, J. A., Cole, G. L. and Kranz, A. M. (2018). Boundary strength analysis: combining colour pattern geometry and coloured patch visual properties for use in predicting behaviour and fitness. Methods Ecol. Evol. 9, 2334-2348.
Espinosa, I. and Cuthill, I. C. (2014). Disruptive colouration and perceptual grouping. PLoS ONE 9, e87153.
Forsman, A. and Merilaita, S. (1999). Fearful symmetry: pattern size and asymmetry affects aposematic signal efficacy. Evol. Ecol. 13, 131-140.
Forsman, A. and Merilaita, S. (2003). Fearful symmetry? Intra-individual comparisons of asymmetry in cryptic vs. signalling colour patterns in butterflies. Evol. Ecol. 17, 491-507.
Friedman, J. H. (1989). Regularized discriminant analysis. J. Am. Stat. Assoc. 84, 165-175.
Galloway, J. A. M., Green, S. D., Stevens, M. and Kelley, L. A. (2020). Finding a signal hidden among noise: how can predators overcome camouflage strategies? Philos. Trans. R. Soc. B 375, 20190478.
Gegenfurtner, K. R. and Kiper, D. C. (1992). Contrast detection in luminance and chromatic noise. J. Opt. Soc. Am. A 9, 1880-1888.
Green, N. F., Urquhart, H. H., van den Berg, C. P., Marshall, N. J. and Cheney, K. L. (2018). Pattern edges improve predator learning of aposematic signals. Behav. Ecol. 29, 1481-1486.
Green, N. F., Guevara, E., Osorio, D. C., Endler, J. A., Marshall, N. J., Vorobyev, M. and Cheney, K. L. (2022). Colour discrimination thresholds vary throughout colour space in a reef fish (Rhinecanthus aculeatus). J. Exp. Biol. 225, jeb243533.
Guilford, T. and Dawkins, M. S. (1991). Receiver psychology and the evolution of animal signals. Anim. Behav. 42, 1-14.
Hebets, E. A. and Papaj, D. R. (2005). Complex signal function: developing a framework of testable hypotheses. Behav. Ecol. Sociobiol. 57, 197-214.
Hubel, D. H. and Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160, 106-154.
Kelber, A., Vorobyev, M. and Osorio, D. C. (2003). Animal colour vision – behavioural tests and physiological concepts. Biol. Rev. Camb. Philos. Soc. 78, 81-118.
Krummenacher, J., Grubert, A. and Müller, H. J. (2010). Inter-trial and redundant-signals effects in visual search and discrimination tasks: separable pre-attentive and post-selective effects. Vision Res. 50, 1382-1395.
Langley, C. M., Riley, D. A., Bond, A. B. and Goel, N. (1996). Visual search for natural grains in pigeons (Columba livia): search images and selective attention. J. Exp. Psychol. Anim. Behav. Process. 22, 139-151.
Lind, O. (2016). Colour vision and background adaptation in a passerine bird, the zebra finch (Taeniopygia guttata). R. Soc. Open Sci. 3, 160383.
Lind, O. and Kelber, A. (2011). The spatial tuning of achromatic and chromatic vision in budgerigars. J. Vis. 11, 2.
Lythgoe, J. N. (1979). The Ecology of Vision. Clarendon Press.
Niskanen, M. and Mappes, J. (2005). Significance of the dorsal zigzag pattern of Vipera latastei gaditana against avian predators. J. Anim. Ecol. 74, 1091-1101.
Peterson, R. A. and Cavanaugh, J. E. (2020). Ordered quantile normalization: a semiparametric transformation built for the cross-validation era. J. Appl. Stat. 47, 2312-2327.
Pignatelli, V., Champ, C. M., Marshall, J. and Vorobyev, M. (2010). Double cones are used for colour discrimination in the reef fish, Rhinecanthus aculeatus. Biol. Lett. 6, 537-539.
Pike, T. W. (2018). Quantifying camouflage and conspicuousness using visual salience. Methods Ecol. Evol. 9, 1883-1895.
Randall, J. E., Allen, G. R. and Steene, R. C. (1997). Fishes of the Great Barrier Reef and Coral Sea. Crawford House Publishing.
Rohr, V. A., Volkmer, T., Metzler, D. and Küpper, C. (2021). Neoptile feathers contribute to outline concealment of precocial chicks. Sci. Rep. 11, 5483.
Ruxton, G. D., Allen, W. L., Sherratt, T. N. and Speed, M. P. (2018). Avoiding Attack. New York: Oxford University Press.
Sagi, D. and Julesz, B. (1984). Detection versus discrimination of visual orientation. Perception 13, 619-628.
Santiago, C., Green, N. F., Hamilton, N., Endler, J. A., Osorio, D. C., Marshall, N. J. and Cheney, K. L. (2020). Does conspicuousness scale linearly with colour distance? A test using reef fish. Proc. R. Soc. B 287, 20201456.
Santon, M., Münch, T. A. and Michiels, N. K. (2019). The contrast sensitivity function of a small cryptobenthic marine fish. J. Vis. 19, 1-10.
Schneider, C. A., Rasband, W. S. and Eliceiri, K. W. (2012). NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671-675.
Sibeaux, A., Cole, G. L. and Endler, J. A. (2019). The relative importance of local and global visual contrast in mate choice. Anim. Behav. 154, 143-159.
Sibeaux, A., Camduras, T. and Endler, J. A. (2021). The role of boundary length and adjacent patch contrast in guppy mate choice. Behav. Ecol. 32, 30-40.
Siebeck, U. E., Wallis, G. M., Litherland, L., Ganeshina, O. and Vorobyev, M. (2014). Spectral and spatial selectivity of luminance vision in reef fish. Front. Neural Circuits 8, 118.
Speed, M. P. and Ruxton, G. D. (2005). Aposematism: what should our starting point be? Proc. R. Soc. B 272, 431-438.
Stevens, M. and Cuthill, I. C. (2006). Disruptive coloration, crypsis and edge detection in early visual processing. Proc. R. Soc. B 273, 2141-2147.
Stevens, M., Parraga, C. A., Cuthill, I. C., Partridge, J. C. and Troscianko, T. S. (2007). Using digital photography to study animal coloration. Biol. J. Linn. Soc. 90, 211-237.
Troscianko, J. and Stevens, M. (2015). Image calibration and analysis toolbox – a free software suite for objectively measuring reflectance, colour and pattern. Methods Ecol. Evol. 6, 1320-1331.
Troscianko, J., Skelhorn, J. and Stevens, M. (2017). Quantifying camouflage: how to predict detectability from appearance. BMC Evol. Biol. 17, 7.
van den Berg, C. P., Hollenkamp, M., Mitchell, L. J., Watson, E. J., Green, N. F., Marshall, N. J. and Cheney, K. L. (2020a). More than noise: context-dependent luminance contrast discrimination in a coral reef fish (Rhinecanthus aculeatus). J. Exp. Biol. 223, jeb232090.
van den Berg, C. P., Troscianko, J., Endler, J. A., Marshall, N. J. and Cheney, K. L. (2020b). Quantitative Colour Pattern Analysis (QCPA): a comprehensive framework for the analysis of colour patterns in nature. Methods Ecol. Evol. 11, 316-332.
Vorobyev, M. and Osorio, D. (1998). Receptor noise as a determinant of colour thresholds. Proc. R. Soc. B 265, 351-358.
Vorobyev, M., Marshall, N. J., Osorio, D., de Ibarra, N. H. and Menzel, R. (2001). Colourful objects through animal eyes. Color Res. Appl. 26, S214-S217.
Weihs, C., Ligges, U., Luebke, K. and Raabe, N. (2011). klaR: analyzing German business cycles. In Data Analysis and Decision Support (ed. D. Baier, R. Decker and L. Schmidt-Thieme), pp. 335-343. Springer-Verlag.

Competing interests

The authors declare no competing or financial interests.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

Supplementary information