Here we consider how new experimental approaches in biomechanics can be used to attain a systems-level understanding of the dynamics of animal flight control. Our aim in this paper is not to provide detailed results and analysis, but rather to tackle several conceptual and methodological issues that have stood in the way of experimentalists in achieving this goal, and to offer tools for overcoming these. We begin by discussing the interplay between analytical and empirical methods, emphasizing that the structure of the models we use to analyse flight control dictates the empirical measurements we must make in order to parameterize them. We then provide a conceptual overview of tethered-flight paradigms, comparing classical `open-loop' and `closed-loop' setups, and describe a flight simulator that we have recently developed for making flight dynamics measurements on tethered insects. Next, we provide a conceptual overview of free-flight paradigms, focusing on the need to use system identification techniques in order to analyse the data they provide, and describe two new techniques that we have developed for making flight dynamics measurements on freely flying birds. First, we describe a technique for obtaining inertial measurements of the orientation, angular velocity and acceleration of a steppe eagle Aquila nipalensis in wide-ranging free flight, together with synchronized measurements of wing and tail kinematics using onboard instrumentation and video cameras. Second, we describe a photogrammetric method to measure the 3D wing kinematics of the eagle during take-off and landing. In each case, we provide demonstration data to illustrate the kinds of information available from each method. We conclude by discussing the prospects for systems-level analyses of flight control using these techniques and others like them.

While we may now claim a reasonable understanding of many mechanical and physiological aspects of animal flight control, we do not yet understand their dynamical interaction. Hence, while we are able to describe the neuroanatomy and physiology of certain feedback and feedforward pathways (e.g. Burrows, 1996), we do not as yet have the tools to understand how these are tuned to produce a robust control system, nor to examine the selection pressures and constraints that have operated in their evolution. A central aim of future research must therefore be to unite our understanding of the mechanical and physiological aspects of animal flight control, in order to offer insight into their evolution as interacting dynamical subsystems of the control system as a whole. This will necessarily entail embedding the empirical results of physiological studies within the theoretical framework of classical mechanics and, in consequence, the structure of the models we use to analyse animal flight dynamics will dictate the empirical measurements we must make in order to parameterize them.

Our aim here is to consider how tethered- and free-flight experiments can be used to provide the sorts of empirical data needed to parameterize models of animal flight dynamics, without being too prescriptive about model structure. In so doing, we outline a programme of empirical research aimed at unpicking the complex interaction between physics and physiology that underpins animal flight control. After reviewing the state of the art in tethered- (section 2) and free-flight (section 3) experimental paradigms, we describe several new techniques we have developed in order to overcome the main limitations we identify in existing methodologies. We do not set out here to provide detailed results or analysis from any of these methods, but instead offer demonstration data to illustrate the kinds of measurements that can be made using these new experimental methods. We conclude by discussing the prospects for systems-level analyses of animal flight control in the light of the conceptual and methodological themes that we review (section 4).

2.1 Conceptual overview

The great majority of experimental research on animal flight has been done while the animal is tethered, and therefore – strictly speaking – not flying. This somewhat paradoxical situation has arisen because it is often necessary to tether an animal in order to make any measurements at all, but the opportunity that tethering presents for measuring forces and moments directly allows us to make a virtue out of a necessity. Moreover, given that the forces and moments can be measured, it is then a relatively simple matter to predict using equations of motion how the animal would have flown at the instant the forces were measured, had it momentarily been released from its tether (Taylor and Thomas, 2003; Taylor and Żbikowski, 2005). Hence, while a tethered animal is not strictly flying, tethered measurements can nevertheless be used to infer how a tethered animal would have behaved in free flight, provided that we can deal with certain difficulties implicit in the approach.

The central problem with tethering is that it breaks the dynamics of the mechanical system and thereby breaks the feedback loops that naturally operate in free flight. A tethered system cannot therefore exhibit closed-loop behaviour, unless special measures are taken to simulate free-flight conditions by modulating the stimuli that the subject receives appropriately, and for this reason such systems have often been referred to as `open-loop'. Nevertheless, the physiological component of a tethered animal's flight control system remains intact: its sensors continue to receive input, its controllers continue to process the resulting signals, and its effectors continue to produce a response. This implies that we should still be able to extract meaningful information about the closed-loop function of the physiological system in free flight from measurements we make in tethered flight. As we will now argue, however, the nature of the distinction between so-called open-loop and closed-loop tethered paradigms has not in the past been made sufficiently precise, and this has led to some confusion in the interpretation of tethered-flight results.

2.1.1 `Open-loop' tethering

Measurements made under tethered conditions are usually referred to by biologists as open-loop, unless feedback is used to modulate artificially the stimuli that the insect receives (Srinivasan, 1977). It is important to recognize, however, that measurements made under open-loop experimental conditions are not the same as measurements made on an open-loop control system. Open-loop control refers to actions made in response to a command signal without reference to the system's output during the action. For example, aerial pursuit behaviour in hoverflies appears to be initiated open-loop, with no modulation in response to changes in the target's trajectory through the initial stages of a pursuit (i.e. without using sensory feedback) (Collett and Land, 1978). This differs fundamentally from the situation experienced by an animal in tethered flight, because it is reasonable to expect that a tethered animal will continue to treat as feedback whatever sensory input it receives in those modalities normally used for closed-loop control. Hence, while the experimental conditions used in tethered flight may in some sense be open-loop, the controller itself is certainly not behaving as an open-loop controller. Taylor and Żbikowski (Taylor and Żbikowski, 2005) have therefore suggested using the term `broken-loop' to refer to measurements made in tethered flight, in order to distinguish them from measurements of open-loop control systems. Like the terms open-loop and closed-loop, the term broken-loop is borrowed from the control engineering literature, where it is used to describe a closed-loop control system that has been interfered with in some way (e.g. Tischler and Remple, 2006).

To illustrate the importance of the distinction between open-loop and broken-loop systems, consider how a classical linear controller would respond if its normal feedback loops were broken. A typical proportional-integral-differential (PID) controller uses three kinds of feedback to achieve a commanded state: (1) the measured difference, or error, between the actual and commanded states of the system (proportional control), (2) the rate of change of the error (differential control), and (3) the sum of the error through time (integral control); see e.g. Dorf (Dorf, 2005). All three kinds of feedback are usually needed to achieve a swift but stable response without steady-state error, so it is reasonable to assume that the same kinds of sensory feedback will be used by flying animals, even if the control laws themselves might not be linear. Now, a hypothetical flying animal attempting to reach a commanded state different from its current one using a PID controller would perceive a constant error if tethered, and would therefore be expected to increase the magnitude of its response as a result of integral control until the controller reached saturation. We would therefore expect an animal in tethered flight to display a ramped, and possibly saturating, control response. This differs fundamentally from an open-loop control response, which does not change through time unless the command is changed.
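
To make the broken-loop argument concrete, the following minimal sketch (with arbitrary illustrative gains, time step and saturation limit, not values measured for any animal) simulates a generic PID control law under two error histories: one decaying as a free-flying animal approaches its commanded state, and one held constant as a tether would hold it.

```python
import numpy as np

def pid_response(error, dt=0.01, kp=1.0, ki=0.5, kd=0.1, u_max=2.0):
    """Integrate a PID control law over a prescribed error history and
    return the (saturated) control output at each time step."""
    integral = 0.0
    prev_e = error[0]
    u = np.empty_like(error)
    for i, e in enumerate(error):
        integral += e * dt                     # integral term: running sum of the error
        derivative = (e - prev_e) / dt         # differential term: rate of change of error
        raw = kp * e + ki * integral + kd * derivative
        u[i] = np.clip(raw, -u_max, u_max)     # controller/actuator saturation
        prev_e = e
    return u

t = np.arange(0.0, 20.0, 0.01)
free_flight_error = np.exp(-t)         # error decays as the commanded state is approached
tethered_error = np.ones_like(t)       # the tether holds the error constant (broken loop)

u_free = pid_response(free_flight_error)     # settles quickly
u_tethered = pid_response(tethered_error)    # ramps under integral action until it saturates
```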

The ramped and possibly saturating broken-loop control response that we predict will be observed in tethered flight is, however, consistent with the familiar observation (Dudley, 1992) that tethered-flight responses tend to be exaggerated over those exhibited in response to similar disturbances in free flight (Taylor, 2007). Previously, this has been treated as an experimental artifact – which in some sense it is (Dudley, 1992) – but the observation almost certainly also provides useful information on the nature of the animal's controller, which a systems approach should be able to identify. Far from being mere artifacts, the exaggerated responses observed in tethered flight are in fact useful data when interpreted correctly. This comes as some relief in the light of the earlier observation that most previous experimental work on animal flight physiology has been done under broken-loop conditions.

2.1.2 `Closed-loop' tethering

Measurements made under tethered conditions in which feedback is used to modulate artificially the stimuli that the insect receives are usually referred to as closed-loop (Srinivasan, 1977). Making measurements under closed-loop conditions does not, however, ensure that the closed-loop dynamics will be correctly identified unless great care is taken to match the dynamics of the artificial feedback loop to the free-flight dynamics it simulates. Closed-loop control refers to actions made in response to a command signal with reference to how the system's state differs from the commanded state through the course of the action. For example, insects classically exhibit an optomotor response in which they will turn to follow a moving wide-field pattern so as to stabilize their visual field. This natural tendency to stabilize the visual field by turning in free flight can be simulated artificially in tethered flight by modulating the visual stimuli that the insect receives using the measured torque as feedback (Srinivasan, 1977). The resulting closed-loop experimental conditions are certainly closer to free flight than the broken-loop experimental conditions discussed in the previous section. Nevertheless, the situation is still abnormal, not least because only one of the six degrees of freedom of free flight (typically either roll or yaw) and only one of the sensory modalities (typically vision) is usually employed in the experimental feedback loop.

Incorporating only one degree of freedom in the experimental feedback loop is especially problematic for lateral motions, which display strong coupling between their different degrees of freedom in free flight. Such coupling arises partly as a result of inertial coupling, and partly because asymmetries in lift tend to be associated with asymmetries in thrust or drag. This makes it extremely difficult to generate a roll moment without also generating a yaw moment, and vice versa, so that a banked turn in an aircraft, for example, is typically achieved by coordinating the rudder (which nominally controls yaw) with the ailerons (which nominally control roll) so as to counteract adverse yaw. Furthermore, Newton's Second Law tells us that the rate of angular acceleration of an object in response to an applied torque depends on its moment of inertia, but no attempt is usually made to match the gain of the artificial feedback loop to the animal's mass distribution properties [but see Wehrhahn and Reichardt (Wehrhahn and Reichardt, 1975) for a notable exception]. It is therefore unlikely that the experimental feedback provided in most closed-loop experiments matches kinematically, let alone dynamically, the feedback that would have been obtained in free flight. While closed-loop paradigms undoubtedly come closer to simulating natural free-flight conditions than broken-loop measurements, this means that they also come with a degree of uncertainty about what response is actually being measured.

Fig. 1.

Schematic diagram of the virtual-reality flight simulator. External supports, etc., are not shown. The insect is tethered at the centre of the acrylic sphere, which is suspended in a gasket. Two customized data projectors project image sequences onto the entire surface of the sphere, which is painted as a back-projection surface. The light path is folded using mirrors to minimize the size of the apparatus and maximize resolution. The insect is mounted on a six-component force–moment balance on the end of a movable sting. In the configuration shown here, the sting is moved in an oscillatory coning motion by a brushless motor, with two further adjustable axes providing static adjustment of sting orientation to adjust the phasing of roll, pitch and yaw. The insect sits at the open mouth of a transparent wind tunnel mounted inside the sphere. The apparatus enables independent stimulation of each of the sensory modalities involved in insect flight control.

The closed-loop paradigm of insect physiology closely parallels so-called `hardware-in-the-loop' simulation techniques of aeronautical engineering, whereby physical components that are difficult to model numerically are included as real hardware with appropriate devices used to match the boundary conditions with the rest of the environment that is simulated in software. Current research efforts into wind tunnel-based hardware-in-the-loop simulation for aeronautical applications offer scope for matching airflow boundary conditions along the flight trajectory (Bacic and Daniel, 2005) and might be adapted for animal flight research. The difficulty with these kinds of simulations, however, is that the finite bandwidth of the actuator hardware necessarily impacts on the accuracy of the method. This is especially problematic on the short timescales associated with animal flight. Hence, while closed-loop experiments can indeed measure closed-loop dynamics, it is technically difficult to ensure that these are the same as the closed-loop dynamics of natural flight. Closed-loop experimental conditions therefore offer no panacea for the more general problems of tethering, and while they undoubtedly come closer to simulating free-flight conditions than broken-loop tethering, we suggest that the certainty we have over how the feedback loop is broken makes broken-loop measurements easier to interpret in the context of animal flight dynamics.

2.2 New techniques for tethered flight

Many of the issues discussed in section 2.1, once recognized, can be dealt with experimentally. This section describes a virtual-reality flight simulator designed to meet these requirements for work with insects (Fig. 1). Our intention here is to provide a conceptual overview of the simulator in order to illustrate how its design overcomes these issues. The simulator has several key features, which distinguish it from the current state of the art developed elsewhere. Firstly, it provides stimuli in all of the sensory modalities known to be used in flight control. Secondly, it has a bandwidth broad enough to match dynamically the free-flight motions of the insects it is designed to work with. Thirdly, it is possible to vary stimuli in the different sensory modalities independently to determine how they are combined by the insect's controller to achieve effective flight control.

The virtual-reality simulator provides an immersive visual environment for an insect tethered at the centre of a 1 m diameter acrylic sphere, whose surface is painted to act as a rear projection surface (Fig. 1). The projection system is designed to allow simulation of any combination of translation and rotation in a bright and realistic visual environment. Two customized data projectors display synchronized sequences of binary images at 1024 pixels × 768 pixels and up to 8000 frames s⁻¹ using Digital Light Processing™ technology (Texas Instruments Inc., Dallas, TX, USA). This is well in excess of the bandwidth of any insect's visual system (Miall, 1978) and sufficient therefore to allow projection of grayscale images by time-averaging of binary images. The arc lamps used by the projectors are extremely bright (1500 lumens) and are run without flicker by a DC electronic ballast. Any suitably rendered image sequence can be projected, so it is possible to display either video recordings of natural environments or animated sequences generated on a computer. At present, computer-generated sequences are rendered in 3D animation software using an explicit computer-aided design (CAD)-based representation of the flight simulator (Fig. 2). Sequences are preloaded onto RAM prior to projection, but it is also possible to project images generated online, so that in principle the system may be run in a closed-loop configuration. Aerodynamic stimuli are provided by a thin-walled transparent suction tunnel inside the projection sphere, with an electronically adjustable flow speed.
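
As a conceptual illustration of grayscale projection by time-averaging of binary frames, the sketch below converts a grayscale image into a stack of binary subframes whose temporal mean approximates the original intensities. This is only a toy model of the principle; the projectors' actual bit-plane scheduling is hardware-specific and is not described here.

```python
import numpy as np

def dither_to_binary_stack(gray_frame, n_subframes=32):
    """Convert a grayscale frame (values in [0, 1]) into a stack of binary
    subframes whose time-average approximates the original intensities.
    Conceptual sketch only: each pixel is 'on' in the first
    round(intensity * n_subframes) subframes."""
    counts = np.rint(gray_frame * n_subframes).astype(int)
    subframe_index = np.arange(n_subframes)[:, None, None]
    return (subframe_index < counts[None, :, :]).astype(np.uint8)

n = 32                                       # binary subframes averaged per grayscale frame
frame = np.random.rand(768, 1024)            # a 1024 x 768 grayscale test image
stack = dither_to_binary_stack(frame, n)
reconstruction = stack.mean(axis=0)          # what a slow photoreceptor would integrate
assert np.abs(reconstruction - frame).max() <= 0.5 / n + 1e-12
```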

Fig. 2.

Photograph (left) of a 3D panoramic scene projected onto the surface of the virtual-reality flight simulator. The scene shows the Oxford skyline with grayscale `clouds' above for contrast and a checkerboard below. In this simulated environment, the checkerboard is scrolled to simulate ventral translational optic flow, and the whole scene is rotated to simulate rotational optic flow. Note the mirror directly beneath the sphere; projection continues behind the visible supports. The projected scene is generated in 3D animation software using an explicit CAD-based representation of the flight simulator. The CAD-generated image (right) shows the same exterior view; the projected images are simply those that would be seen from the viewpoints of the projectors.

The insect itself is rigidly tethered to a six-component force–moment balance sufficiently sensitive to resolve the periodic forces generated by a blowfly Calliphora erythrocephala through its wingbeat (Nano-17, ATI Industrial Automation, Apex, NC, USA). The balance is attached to a movable sting such that the insect can be rotated about its centre of mass by attaching the sting to one of several different rotary axes driven by a brushless motor with integrated position encoder. Each of these rotary axes is servoed by a computerized motion controller, which can be programmed to execute any given pattern of rotation in that axis, thereby providing static adjustment of the insect's orientation as well as dynamic stimulation of inertial sensors such as halteres. The motion of the rotary axes is phase-locked to the data projectors using a transistor–transistor logic (TTL) synchronization signal. Although it is only possible to generate a limited set of inertial rotations in the simulator, these are appropriate to the structures of most tractable models of unsteady flight dynamics. For example, if the equations of motion used to analyse the data are linearized, then any motion that the model can describe can be represented as a linear sum of the characteristic natural modes of the system. Furthermore, it can be shown that a set of four characteristic motions (flight at steady angle of attack and sideslip, pitch oscillations, yaw oscillations, and coning rotations at non-zero angle of attack and sideslip) is sufficient to parameterize a non-linear model of unsteady rigid-body flight dynamics completely (Tobak and Schiff, 1981). Pitch oscillations have already been studied by Taylor et al. (Taylor et al., 2006) for desert locusts Schistocerca gregaria in a more basic experimental apparatus, but the new virtual-reality flight simulator is designed to generate all four types of characteristic motion. This is an excellent example of how the structure of a model used to analyse experimental data should dictate the experiments that need to be performed.
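
The modal argument can be made explicit with a small numerical example. The system matrix below is a hypothetical linearized longitudinal model (its entries are placeholders, not stability derivatives measured for any animal); its eigen-decomposition gives the characteristic natural modes, and any free response of the linearized model is a weighted sum of those modes.

```python
import numpy as np

# Hypothetical linearized longitudinal system, state x = [u, w, q, theta].
# The entries of A are placeholders chosen only to give a well-posed example.
A = np.array([
    [-0.1,  0.2,  0.0, -9.81],
    [-0.3, -1.0,  5.0,  0.0],
    [ 0.1, -0.5, -2.0,  0.0],
    [ 0.0,  0.0,  1.0,  0.0],
])

eigvals, eigvecs = np.linalg.eig(A)    # natural modes of the linearized model

# Any free response x(t) = sum_k c_k * v_k * exp(lambda_k * t), with the modal
# coefficients c_k fixed by the initial state x(0).
x0 = np.array([1.0, 0.0, 0.0, 0.0])
c = np.linalg.solve(eigvecs, x0)

def x_of_t(t):
    """Reconstruct the state at time t as a linear sum of the natural modes."""
    return np.real(eigvecs @ (c * np.exp(eigvals * t)))

print(eigvals)       # complex-conjugate pairs correspond to oscillatory modes
print(x_of_t(0.5))   # matches direct integration of dx/dt = A x from x0
```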

In summary, the virtual-reality flight simulator we have developed enables realistic simulation of all of the sensory stimuli known to be important in the moment-to-moment control of insect flight, in order to allow the forces and moments to be measured and modelled as functions or functionals of the sensory input states. The simulator has some features in common with existing facilities elsewhere, but differs in a number of important respects. For example, other state-of-the-art facilities for visual stimulation either use arrays of light-emitting diodes (Sherman and Dickinson, 2003; Lindemann et al., 2003), which offer excellent temporal resolution but are constrained in luminance and ease of programming, or commercial LCD projectors (Gray et al., 2002), which offer excellent luminance and ease of programming but are constrained in temporal resolution. Luminance is of particular importance, because the response of the visual system is much slower under dim conditions. In contrast, the virtual-reality simulator that we have developed combines excellent temporal and spatial resolution with extremely high luminance, convenience of operation and an almost unrestricted field of view for the insect. Most importantly, it is unique in combining simulation of visual, aerodynamic and inertial stimuli in a single apparatus, allowing measurement of all six components of the forces and moments that the insect produces.

3.1 Conceptual overview

However sophisticated tethered-flight paradigms may become, it goes without saying that the natural state of flight is free flight. It does not follow, however, that free flight is necessarily natural flight – in most experimental situations, the subject will be trailing leadwires, carrying a load, flying in a wind tunnel, or simply flying in a confined space. Nevertheless, it is only in free flight that we have any chance of identifying the true closed-loop dynamics, and for this reason free-flight paradigms are likely to play an increasingly important part in our developing understanding of animal flight control. The key difficulty from a flight dynamics perspective is that the forces and moments cannot be directly measured – only the animal's consequent motion. This is problematic because although Newton's Second Law tells us that knowledge of mass and acceleration is equivalent to knowledge of force for a moving particle, things are more complicated for a solid body. For example, a measured roll acceleration might reflect the direct application of a roll torque, but it might also reflect a non-zero product of the angular velocity components about the pitch and yaw axes if their moments of inertia are unequal. The issues of coupling alluded to in section 2.1.2 therefore mean that it will not in general be possible to treat different degrees of freedom separately.
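
The source of this coupling is visible directly in Euler's equations of rotational motion, written here in body-fixed principal axes as a standard illustration: even with zero roll torque, a roll acceleration arises whenever the pitch and yaw rates are both non-zero and the corresponding moments of inertia differ.

```latex
% Euler's equations of rotational motion in body-fixed principal axes, with
% (p, q, r) the roll, pitch and yaw rates, (L, M, N) the applied moments and
% (I_x, I_y, I_z) the principal moments of inertia:
\begin{align}
  I_x\,\dot{p} - (I_y - I_z)\,q\,r &= L,\\
  I_y\,\dot{q} - (I_z - I_x)\,r\,p &= M,\\
  I_z\,\dot{r} - (I_x - I_y)\,p\,q &= N.
\end{align}
```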

Given that it is generally incorrect to infer the forces and moments in one axis from the accelerations and angular accelerations in only that axis, there is no substitute for using a physically complete set of equations of motion to analyse acceleration data obtained in free flight. Hence, in contrast to a designed tethered-flight experiment – in which the animal's response to a prescribed stimulus can be measured and the parameters of the model fitted separately – free-flight paradigms will generally require all of the parameters of the model to be fitted simultaneously. This brings us into the domain of system identification (Ljung, 1998). System identification has been used successfully for over half a century to determine experimentally the dynamics of what in control engineering is termed the `plant' of a control system – typically the physical system being controlled, and in our case the animal's flight dynamics. System identification is becoming increasingly common as a means of modelling aircraft flight dynamics and control, as witnessed by the recent publication of several texts on the subject (Klein and Morelli, 2006; Jategaonkar, 2006; Tischler and Remple, 2006).

There are three general approaches to system identification: (a) `white-box', where we know the physical model of the plant through sound application of fundamental laws of physics and seek to estimate the physical parameters of that model from measured data; (b) `grey-box', where we postulate a model structure at the level of assuming, say, a first order system with dead time, and seek to estimate its physical parameters; (c) `black-box', where we seek to estimate both the model structure and its parameters from data (Jategaonkar, 2006). In each case, the model structure and/or parameters are identified using maximum likelihood methods, minimization of prediction error, or other similar optimization procedures. Since a white-box approach is based on a physical model of the system, there is little risk of over-fitting the model with unnecessary parameters; with a grey-box or black-box approach, this can be avoided by model reduction techniques and statistical control of the overall type I error. It is common in the identification of aircraft flight dynamics to use a white-box approach, as the basic underlying dynamic model of a fixed-wing aircraft can be readily derived. By the same token, it should be possible to use a white-box approach in the identification of the flight dynamics of gliding animals. A white-box approach may not be feasible for flapping flight, however, which is much more difficult to model theoretically, and it is likely that a grey-box or black-box approach will be required in such cases.
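
The grey-box case can be illustrated in miniature as follows: a postulated first-order model structure is fitted to synthetic noisy data by minimizing the prediction error with a non-linear least squares routine. The model, parameter values and noise level are invented purely for illustration (the dead time mentioned above is omitted for brevity), and real flight-dynamics identification involves far more states and parameters, but the underlying logic is the same.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_first_order(params, u, dt):
    """Forward-Euler simulation of the postulated grey-box model
    tau * dy/dt + y = K * u(t), with params = (K, tau)."""
    K, tau = params
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + dt * (K * u[k - 1] - y[k - 1]) / tau
    return y

def prediction_error(params, u, y_meas, dt):
    """Residual vector whose sum of squares the optimizer minimizes."""
    return simulate_first_order(params, u, dt) - y_meas

# Synthetic 'measured' data, invented for illustration only.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
u = (t > 1.0).astype(float)                            # step control input
y_meas = simulate_first_order((2.0, 0.5), u, dt) \
         + 0.02 * np.random.randn(t.size)              # add measurement noise

fit = least_squares(prediction_error, x0=(1.0, 1.0),
                    args=(u, y_meas, dt), bounds=([0.0, 0.05], [10.0, 10.0]))
print(fit.x)   # estimated (K, tau), close to the true values (2.0, 0.5)
```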

Identifying the dynamics of an animal's flight control system requires knowledge not only of the animal's motion but also of the control input which produces that motion. Furthermore, the control inputs we measure must be sufficient to excite all of the animal's modes of motion and the components of its motion that we measure sufficient for us to observe all of those modes. Once fitted, it is usual to validate the model by comparing its predictions of flight behaviour against validation data not used in the fitting of the model. Naturally, the accuracy of any such analysis rests on the accuracy of the kinematic data that are used to infer the dynamics. These may be collected either using instrumentation forming part of the measured system, such as onboard inertial measurement units and cameras, or using instrumentation external to the measured system, such as ground-based cameras.

3.1.1 Inertial measurement systems

The approach of getting the animal to carry sensors to measure its kinematics is at present restricted to larger animals, for which the required load forms a small enough proportion of body mass not to interfere unduly with the flight dynamics. As a rough rule of thumb, we might aim for a system constituting <10% of body mass. Birds were first made to carry inertial sensors as early as 1982, in a wind tunnel study mounting accelerometers on pigeons (Bilo et al., 1982). However, it has only become possible to mount accelerometers on birds flying freely without the constraint of trailing wires with the recent miniaturization of data loggers (Weimerskirch et al., 2005). Accelerometer data have been used to answer a variety of behavioural and biomechanical questions (Bilo et al., 1982; Hedrick et al., 2004; Weimerskirch et al., 2005), but for flight dynamics purposes this needs to be combined with information on rotation from angular sensors such as magnetometers and rate gyros. As the smallest commercially available inertial measurement units and data loggers providing these facilities have a combined mass of the order of 0.05 kg, this presently limits us to animals with a body mass of the order of 0.5 kg for studies of wide-ranging free flight. With the use of trailing wires, smaller species of bird may also be considered, although this will obviously constrain their flight dynamics. While it is impractical for insects to carry inertial measurement systems at present, tiny induction coils transducing position and orientation have been carried by blowflies flying freely in a small flight arena with an applied magnetic field (Schilstra and Van Hateren, 1999). Unfortunately, this technique is not well suited to flight dynamics measurements because the insect is constrained by trailing wires, and the instantaneous position and orientation data collected in this manner still need to be differenced in order to extract velocity and acceleration.

Fig. 3.

Frame from a video sequence at 50 frames s⁻¹ of a steppe eagle's tail taken by an onboard wireless video camera. The graphs plot the measured tail bank angle and angle of attack as functions of time with no filtering applied. Tail bank angle (ϕ) is extracted from the angle of the trailing edge, as shown by the construction lines on the image. Tail pitch angle (θ) is extracted by measuring the deviation of the trailing edge from its average position perpendicular to the line AB at the point A, and making use of the known distance of the camera to the base of the tail. Tail spread angle is not shown, but can also be determined from these data, giving three measurable kinematic degrees of freedom for the tail.

Fig. 4.

Inertial measurements from a steppe eagle in soaring flight. The graph plots total measured acceleration against time: all three components of acceleration, angular velocity and orientation are logged by the inertial measurement unit, but are not shown. The thumbnails show synchronized frames from a hand-held camcorder (upper row) to provide context, and from a rearward-facing onboard camera (lower row) to confirm that the instrumentation remains steady throughout. Dashed lines denote the correspondence of the graph with the numbered frames. Note how the circled tan-coloured rump contour feathers remain steady (position of circle identical between images), indicating that the instrumentation is static with respect to the body. The visible transients therefore denote real accelerations of the bird, and are presumably excited by gusts, etc., as the bird is not actively manoeuvring in this sequence. The downy white feathers that are visible on either side of the circled contour feather are blowing freely in the wind, so provide no information on the position of the instrumentation with respect to the body.

Fig. 5.

External photogrammetric measurement of the wing kinematics of a free-flying steppe eagle Aquila nipalensis coming in to perch on its handler's arm. Left panel shows one of a stereo pair of images taken at 500 frames s⁻¹. Right panel shows a calibrated reconstruction of the lower surface of the left wing based on stereo-matching of natural features of the plumage. Black points on the wing denote measurements, connected by straight lines to assist in visualizing the wing topography; the colour map denotes the local geometric angle of attack of the interpolated wing surface with respect to the horizontal. The isolated black points denote reference measurements on the head and tail, indicating the longitudinal axis of the bird. Note that whereas the angle of attack and camber of the proximal section of the wing is relatively consistent in a spanwise direction, the distal portion of the wing is set at a much greater angle of attack. This reflects the angle of attack of the interpolated surface and does not take account of the local twist of the primary feathers, which will be measured in future work. An animation of this perching sequence is available (Movie 1 in the supplementary material).

3.1.2 External measurement systems

The alternative to onboard instrumentation is an external measurement system, which poses special problems of its own. Usually an external measurement system will be fixed in an inertial frame, but there is no particular reason why it should not be fixed to a moving frame of reference, provided that the motion of that frame of reference is known. The commonest examples of fixed measurement systems are high-speed cameras, which may be used to monitor target position and orientation (e.g. Fry et al., 2003; Hedrick and Biewener, 2007; Hedrick et al., 2007). Naturally, the fixed position of the measurement system constrains the flight volume that can be covered, and for this reason all previous work has been done indoors. Examples of moving measurement systems include pan-tilt mounted cameras used to track insects flying around a large room (Fry et al., 2000; Müller and Robert, 2001). A far greater flight volume can be covered if the external measurement system follows the free-flying animal (Fry et al., 2000), but the need to know the motion of the measurement system introduces an additional source of measurement error that may not be tolerable in flight dynamics studies. In any case, the most fundamental limitation of using any kind of external measurement system for flight dynamics measurements is that velocity, angular velocity and acceleration cannot be measured directly. Instead, the kinematics must be estimated from the measured target position and orientation using, for example, a numerical differencing procedure or Kalman filtering. This amplifies the measurement error and concomitantly reduces the useful bandwidth of the system, so that a very high degree of spatial and temporal resolution is required in order to make measurements suitable for flight dynamics modelling.
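
A rough numerical sketch (with invented noise and motion parameters) illustrates why differencing is so punishing: at 500 frames s⁻¹, even millimetre-level tracking noise becomes very large once differentiated twice to estimate acceleration.

```python
import numpy as np

dt = 1.0 / 500.0                                 # frame interval of a 500 frames/s camera
t = np.arange(0.0, 1.0, dt)
true_pos = 0.1 * np.sin(2 * np.pi * 2 * t)       # hypothetical 2 Hz, 0.1 m oscillation
sigma = 0.001                                    # 1 mm tracking noise (illustrative)
meas_pos = true_pos + sigma * np.random.randn(t.size)

# Velocity and acceleration by central differencing of position alone.
vel = np.gradient(meas_pos, dt)
acc = np.gradient(vel, dt)

# Noise amplification: of order sigma/dt in velocity (~0.5 m/s here) and
# sigma/dt**2 in acceleration (hundreds of m/s^2), swamping the true signal.
print(np.std(vel - np.gradient(true_pos, dt)))
print(np.std(acc - np.gradient(np.gradient(true_pos, dt), dt)))
```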

3.2 New techniques for free-flight analysis

In order to overcome some of the issues discussed in section 3.1, we have developed complementary external and onboard measurement systems for analysing the flight dynamics of free-flying birds of prey. The methods are described in detail by Carruthers et al. (Carruthers et al., 2007) and Taylor et al. (Taylor et al., 2007), respectively: here we provide a brief summary of the techniques used, together with preliminary data to demonstrate the kinds of measurements that can be made. These data are offered by way of illustration only, and it is not intended that any detailed conclusions be drawn from them: the system identification approach that we have outlined above requires large datasets and a great deal of mathematically involved analysis, which falls outside the scope of this review.

3.2.1 Onboard measurement techniques

Miniature inertial measurement units (IMUs) providing 3D information on orientation, angular velocity and acceleration have only recently become commercially available. We used an MTx/MTi unit (XSens Technologies B.V., Enschede, The Netherlands) together with a custom-built logger (M. Bacic, Department of Engineering Science, Oxford University) to record at 100 Hz the instantaneous 3D orientation, angular velocity and acceleration of a trained male steppe eagle Aquila nipalensis weighing 2.5 kg. A pair of miniature PAL wireless video cameras were fixed rigidly to the IMU and used simultaneously to record the eagle's head and tail movements, using a ground-based video receiver recording to MiniDV. The video data were later deinterlaced to provide sequences at 50 frames s⁻¹. The instrumentation was carried on the eagle's back on a removable harness made of webbing material and Velcro straps: the total load carried in the experiments we describe here was <0.25 kg, or 10% of body mass, but we have since managed to reduce the combined weight of the instrumentation to <0.1 kg.

Fig. 3 shows the kind of information that is available on tail kinematics during flight, while Fig. 4 shows a typical set of inertial data recorded during coastal soaring. The inertial data are shown alongside synchronized video footage of the eagle from a handheld camcorder and from a rearward-facing onboard camera. The view of the body recorded by the onboard camera is stationary throughout, confirming that the instrumentation remained steady with respect to the bird during the manoeuvre. Unfortunately, the IMU heaves with the scapular region on which it is seated during flapping, so that at present we are only able to apply the technique successfully to gliding flight, during which the IMU remains steady on the bird. Together, these data demonstrate that it is possible to use an inertial measurement system to record the body kinematics of a large bird in wide-ranging free flight, while simultaneously recording parameters of its wing or tail kinematics using the onboard video. Given a sampling frequency of 50 Hz for the input measurements (from the onboard cameras) and 100 Hz for the output measurements (from the IMU), the bandwidth over which we can identify the response of the bird ranges up to a theoretical maximum of 25 Hz. This is much broader than the bandwidth we have observed the eagle to make control inputs over, and the technique therefore permits identification of its frequency response over the full range of input frequencies that it employs.
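
As a sketch of how such a frequency response might be estimated, the example below forms the standard non-parametric estimate H(f) = P_yu(f)/P_uu(f) from input and output records assumed to have been resampled onto a common 50 Hz time base; the second-order 'dynamics' and noise are placeholders rather than eagle data, and the estimate is only meaningful up to the 25 Hz Nyquist limit set by the slower channel.

```python
import numpy as np
from scipy import signal

fs = 50.0                                  # common time base after resampling (Hz)
rng = np.random.default_rng(0)

# Stand-ins for a measured control input (e.g. a tail angle) and the measured
# response (e.g. a body rate); the 2nd-order filter is a placeholder for the
# unknown dynamics being identified, not a model of the eagle.
u = rng.standard_normal(6000)              # 120 s of input at 50 Hz
b, a = signal.butter(2, 5.0, fs=fs)        # placeholder dynamics, ~5 Hz bandwidth
y = signal.lfilter(b, a, u) + 0.05 * rng.standard_normal(u.size)

# H1 frequency-response estimate: H(f) = P_yu(f) / P_uu(f).
f, P_uu = signal.welch(u, fs=fs, nperseg=512)
_, P_yu = signal.csd(u, y, fs=fs, nperseg=512)
H = P_yu / P_uu

print(f[-1])              # highest identifiable frequency: fs / 2 = 25 Hz
print(np.abs(H[:5]))      # low-frequency gain estimates
```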

3.2.2 External measurement techniques

Independent validation of the response properties identified using onboard instrumentation is possible by making use of a ground-based external measurement system. This has only recently become feasible with the development of ruggedized high-speed digital video cameras, which allow stereo-photogrammetric measurements to be made under field conditions with sufficient spatiotemporal resolution to extract usable flight dynamics parameters. The disadvantage of this approach is that the bird must be close to the cameras during the measurement, which limits the duration of the flight record that can be obtained. Since the lowest frequency that can be identified is inversely related to the length of the flight record, this means that high-speed video data can only be used to identify the response of a bird at higher frequencies. As such, the method is complementary to the onboard instrumentation techniques that we have developed, which can be used to obtain flight records lasting many tens of minutes and therefore offer better resolution at lower frequencies.
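
The trade-off between the two approaches is captured by the usual rule-of-thumb limits on the identifiable frequency band, where T is the length of the flight record and f_s is the sampling rate of the slowest synchronized channel:

```latex
f_{\min} \approx \frac{1}{T}, \qquad f_{\max} = \frac{f_s}{2}
```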

The photogrammetric method uses a pair of synchronized Motionscope M3 cameras (Redlake Imaging Inc., Tucson, AZ, USA) giving 1280 pixel × 1024 pixel resolution at 500 frames s⁻¹. Using manual tracking of approximately 70 recognizable natural features of the plumage of the wings, head and tail, and using self-calibrating bundle adjustment calibration techniques, we have been able to reconstruct the 3D position of each of these points on the bird as it comes in to perch on its handler's arm. Self-calibrating bundle adjustment is the state of the art in photogrammetric reconstruction techniques (e.g. Atkinson, 1996). Our implementation uses non-linear least squares optimization to solve for jointly optimal estimates of the camera parameters and target coordinates. Fig. 5 plots a reconstruction of the lower surface of the wing and of reference points on the head and tail. The camber and spanwise twist of the wing are clearly visible in the surface colour, which represents the local geometric angle of attack. The photogrammetric data therefore provide a rich source of information for identifying multiple-input multiple-output models of the flight dynamics, complementary to the simpler kinematic data derived from the onboard instrumentation. An animation of a short section of the perching sequence shown in Fig. 5 is provided in the supplementary material, in order to demonstrate that it is possible to extract detailed wing and body kinematic measurements from free-flying birds using natural features of the plumage alone.
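
As an illustration of the joint estimation at the heart of self-calibrating bundle adjustment, the sketch below stacks camera parameters and 3D point coordinates into a single vector and refines them together by minimizing the reprojection error. The pinhole model, two-camera geometry and noise levels are invented for illustration; this is not the implementation used for the eagle data.

```python
import numpy as np
from scipy.optimize import least_squares

def rotate(points, rvec):
    """Rotate an Nx3 array of points by a Rodrigues rotation vector (axis * angle)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return points
    k = rvec / theta
    return (points * np.cos(theta)
            + np.cross(k, points) * np.sin(theta)
            + np.outer(points @ k, k) * (1.0 - np.cos(theta)))

def project(points, cam):
    """Pinhole projection; cam = [rotation vector (3), translation (3), focal length]."""
    p = rotate(points, cam[:3]) + cam[3:6]
    return cam[6] * p[:, :2] / p[:, 2:3]

def reprojection_residuals(params, n_cams, observations):
    """Stack the reprojection errors of every point in every camera."""
    cams = params[:n_cams * 7].reshape(n_cams, 7)
    pts = params[n_cams * 7:].reshape(-1, 3)
    return np.concatenate([(project(pts, cam) - obs).ravel()
                           for cam, obs in zip(cams, observations)])

# Synthetic stereo rig and a small cloud of 'plumage' points (all values invented).
rng = np.random.default_rng(1)
true_pts = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 3.0], size=(20, 3))
true_cams = np.array([[0.0, 0.0, 0.0,  0.0, 0.0, 0.0,  1000.0],
                      [0.0, 0.2, 0.0, -0.3, 0.0, 0.05, 1000.0]])
observations = [project(true_pts, cam) + 0.5 * rng.standard_normal((20, 2))
                for cam in true_cams]

# Perturb cameras and points, then adjust them jointly ('self-calibrating').
x0 = np.concatenate([true_cams.ravel() + 0.01 * rng.standard_normal(14),
                     true_pts.ravel() + 0.05 * rng.standard_normal(60)])
fit = least_squares(reprojection_residuals, x0, args=(2, observations))
print(fit.cost)   # summed squared reprojection error after adjustment
```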

The experimental techniques that we have described make it possible to determine the functional link between the state of a flying animal (i.e. its orientation, velocity, angular velocity, etc.) and the forces and moments it produces. This in itself is sufficient to derive a model of the flight dynamics (Taylor and Thomas, 2003; Taylor and Żbikowski, 2005; Taylor et al., 2006), but in either case it will leave us with the measured input–output relationships as a black box. This can still yield useful insight into the mechanics of flight, for example by providing quantitative information on the trade-off between stability and manoeuvrability, or on the performance of the animal as characterized by its frequency response. However, the details of the underlying physiology remain opaque, and a key challenge is to begin to fill in the details of the black box to produce a grey-box model in which information on the underlying structure of the neuromuscular system is included in the model. This is partly achievable using the methods we have already described. For example, by allowing us to vary stimuli in the different sensory modalities independently or in concert, the virtual-reality flight simulator allows us to investigate how different sensory pathways combine to modulate flight control (see also Sherman and Dickinson, 2004).

Nevertheless, detailed neurophysiological information will always be important in ensuring that the structure of the model we fit is grounded firmly in physiological reality. For example, whether we need to use integro-differential equations so as to model the effects of integral control is an empirical question that can be answered if we know enough about the animal's neurophysiology to tell whether integral control is a likely possibility. It is well known that neurons have integrative properties, but is there any neurophysiological evidence to suggest that the exaggerated responses seen in tethered flight really reflect controller saturation as suggested in section 2.1.1? Neurophysiological information might also help us to determine whether linear (Taylor and Thomas, 2003; Sun and Xiong, 2005; Taylor et al., 2006) modelling is likely to be appropriate, or whether non-linear modelling (Taylor and Żbikowski, 2005) is required. We know that individual neurons display highly non-linear response properties, but is this also true of the controller they combine to produce? In fact, there are good reasons to expect that it might not be. For instance, concordant visual and haltere inputs seem to combine linearly in flies (Sherman and Dickinson, 2004), and the antennal positioning reaction of locusts seems to render the response of the antennae to airflow more nearly linear in airspeed (Gewecke and Heinzel, 1980). This suggests that a linearized model might capture the overall system response well, even if it could not be expected to capture the response properties of the system's piecewise components.

Integrating mechanical and physiological models of animal flight dynamics and control presents significant challenges. Nevertheless, with the aid of neurophysiological understanding to inform the structure of our models, and a consequent understanding of the parameters that must be fitted experimentally,it should be possible to make progress in developing systems-level models of animal flight control. With such models in hand, we will at last begin to understand the complex evolutionary interaction between physics and physiology, from which derives the richness of animal flight dynamics.

We thank Tony Price for building the flight simulator, and `Cossack' and Louise Crandal – eagle and handler – for their involvement in the bird flight experiments. We thank Rafał Żbikowski, Holger Krapp and Simon Laughlin for many helpful discussions, and Gregg Abate, Johnny Evers and Michael Ol for their support. G.K.T. is a Royal Society University Research Fellow and RCUK Academic Fellow. R.J.B. is a Postdoctoral Research Fellow at St Anne's College, Oxford. A.C.C. is a Royal Commission for the Exhibition of 1851 Brunel Research Fellow. The virtual-reality flight simulator was designed and built by G.K.T. and R.J.B., sponsored by the BBSRC under grant number BB/C518573/1 to G.K.T. A.C.C. undertook the eagle wing kinematics analysis, sponsored by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under grant number FA8655-05-1-3077 to G.K.T., M.B. and A.L.R.T. J.G. undertook the analysis of data from the onboard measurement system under a Doctoral Training Grant from the BBSRC and EPSRC to G.K.T. S.M.W. wrote the software used to analyse the eagle wing kinematics, sponsored by the EPSRC under grant number GR/S23049/01 to A.L.R.T. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.

Atkinson, K. (1996). Close Range Photogrammetry and Machine Vision. New York: Reinhold.
Bacic, M. and Daniel, R. (2005). Towards a low-cost hardware-in-the-loop simulator for free-flight simulation of UAVs. AIAA Paper AIAA-2005-6102, www.aiaa.org.
Bilo, D., Lauck, A., Wedekind, F., Rothe, H.-J. and Nachtigall, W. (1982). Linear accelerations of a pigeon flying in a wind tunnel. Naturwissenschaften 69, 345-346.
Burrows, M. (1996). The Neurobiology of an Insect Brain. Oxford: Oxford University Press.
Carruthers, A., Taylor, G. K., Walker, S. and Thomas, A. (2007). Use and function of a leading edge flap on the wings of eagles. AIAA Paper AIAA-2007-43, www.aiaa.org.
Collett, T. S. and Land, M. F. (1978). How hoverflies compute interception courses. J. Comp. Physiol. 125, 191-204.
Dorf, R. C. (2005). Modern Control Systems. Upper Saddle River, NJ: Pearson Education International.
Dudley, R. (1992). Aerodynamics of flight. In Biomechanics – Structures and Systems: A Practical Approach (ed. A. A. Biewener), pp. 97-120. Oxford: Oxford University Press.
Fry, S. N., Bichsel, M., Müller, P. and Robert, D. (2000). Tracking of flying insects using pan-tilt cameras. J. Neurosci. Methods 101, 59-67.
Fry, S. N., Sayaman, R. and Dickinson, M. H. (2003). The aerodynamics of free-flight maneuvers in Drosophila. Science 300, 455-505.
Gewecke, M. and Heinzel, H.-G. (1980). Aerodynamic and mechanical properties of the antennae as air-current sense organs in Locusta migratoria. I. Static characteristics. J. Comp. Physiol. A 139, 357-366.
Gray, J. R., Pawlowski, V. and Willis, M. A. (2002). A method for recording behaviour and multineuronal CNS activity from tethered insects flying in virtual space. J. Neurosci. Methods 120, 211-223.
Hedrick, T. L. and Biewener, A. A. (2007). Low speed maneuvering flight of the rose-breasted cockatoo (Eolophus roseicapillus). I. Kinematic and neuromuscular control of turning. J. Exp. Biol. 210, 1897-1911.
Hedrick, T. L., Usherwood, J. R. and Biewener, A. A. (2004). Wing inertia and whole-body acceleration: an analysis of instantaneous force production in cockatiels (Nymphicus hollandicus) flying across a range of speeds. J. Exp. Biol. 207, 1689-1702.
Hedrick, T. L., Usherwood, J. R. and Biewener, A. A. (2007). Low speed maneuvering flight of the rose-breasted cockatoo (Eolophus roseicapillus). II. Inertial and aerodynamic reorientation. J. Exp. Biol. 210, 1912-1925.
Jategaonkar, R. V. (2006). Flight Vehicle System Identification. Reston, VA: AIAA.
Klein, V. and Morelli, E. A. (2006). Aircraft System Identification: Theory and Practice. Reston, VA: AIAA.
Lindemann, J. P., Kern, R., Michaelis, C., Meyer, P., van Hateren, J. H. and Egelhaaf, M. (2003). FliMax, a novel stimulus device for panoramic and highspeed presentation of behaviourally generated optic flow. Vision Res. 43, 779-791.
Ljung, L. (1998). System Identification: Theory for the User (2nd edn). Upper Saddle River, NJ: Prentice Hall.
Miall, R. C. (1978). The flicker fusion frequency of six laboratory insects, and the response of the compound eye to mains fluorescent `ripple'. Physiol. Entomol. 3, 99-106.
Müller, P. and Robert, D. (2001). A shot in the dark: the silent quest of a free-flying phonotactic fly. J. Exp. Biol. 204, 1039-1052.
Schilstra, C. and Van Hateren, J. H. (1999). Blowfly flight and optic flow. I. Thorax kinematics and flight dynamics. J. Exp. Biol. 202, 1508-1508.
Sherman, A. and Dickinson, M. H. (2003). A comparison of visual and haltere-mediated equilibrium reflexes in the fruit fly Drosophila melanogaster. J. Exp. Biol. 206, 271-307.
Sherman, A. and Dickinson, M. H. (2004). Summation of visual and mechanosensory feedback in Drosophila flight control. J. Exp. Biol. 207, 133-142.
Srinivasan, M. V. (1977). A visually-evoked roll response in the housefly. Open-loop and closed-loop studies. J. Comp. Physiol. 119, 1-15.
Sun, M. and Xiong, Y. (2005). Dynamic flight stability of a hovering bumblebee. J. Exp. Biol. 208, 447-459.
Taylor, G. K. (2007). Modelling the effects of unsteady flow phenomena on flapping flight dynamics – stability and control. In Flow Phenomena in Nature: A Challenge to Engineering Design. Vol. 1 (ed. R. Liebe), pp. 155-166. Southampton: WIT Press.
Taylor, G. K. and Thomas, A. L. R. (2003). Dynamic flight stability in the desert locust Schistocerca gregaria. J. Exp. Biol. 206, 2803-2829.
Taylor, G. K. and Żbikowski, R. W. (2005). Nonlinear time-periodic models of the longitudinal flight dynamics of desert locusts. J. R. Soc. Interface 2, 197-221.
Taylor, G. K., Bomphrey, R. J. and 't Hoen, J. (2006). Insect flight dynamics and control. AIAA Paper AIAA-2006-32, www.aiaa.org.
Taylor, G. K., Bacic, M., Ozawa, Y., Gillies, J. and Carruthers, A. (2007). Flight control mechanisms in birds of prey. AIAA Paper AIAA-2007-39, www.aiaa.org.
Tischler, M. B. and Remple, R. K. (2006). Aircraft and Rotorcraft System Identification. Reston, VA: AIAA.
Tobak, M. and Schiff, L. B. (1981). Aerodynamic mathematical modeling – basic concepts. In Dynamic Stability Parameters (AGARD Lecture Series, Number 114), pp. 1.1-1.32. Neuilly sur Seine, France: NATO Advisory Group for Aeronautical Research and Development.
Wehrhahn, C. and Reichardt, W. (1975). Visually induced height orientation of the fly Musca domestica. Biol. Cybern. 20, 41-51.
Weimerskirch, H., Le Corre, M., Ropert-Coudert, Y., Kato, A. and Marsac, F. (2005). The three-dimensional flight of red-footed boobies: adaptations to foraging in a tropical environment? Proc. R. Soc. Lond. B Biol. Sci. 272, 53-61.