950 results for split-step Fourier method


Relevance: 30.00%

Abstract:

Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss the optimal designs under three optimality criteria: D-, A- and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT experimental data, obtained at stress levels higher than the normal condition, involve extrapolation, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, a method for constructing robust designs is also explored.
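The optimality criteria mentioned above compare candidate plans through functionals of the Fisher information matrix. As a minimal numerical sketch (the matrices below are illustrative stand-ins, not values from the thesis), D-optimality maximizes the determinant of the information matrix, while A-optimality minimizes the trace of its inverse:

```python
import numpy as np

# Hypothetical 2x2 Fisher information matrices for two candidate
# step-stress plans (values are illustrative, not from the thesis).
M1 = np.array([[4.0, 1.0], [1.0, 2.0]])
M2 = np.array([[3.0, 0.2], [0.2, 3.0]])

def d_criterion(M):
    # D-optimality: maximize the determinant of the information matrix
    return np.linalg.det(M)

def a_criterion(M):
    # A-optimality: minimize the trace of the inverse information matrix
    return np.trace(np.linalg.inv(M))

for name, M in [("plan 1", M1), ("plan 2", M2)]:
    print(name, d_criterion(M), a_criterion(M))
```

Here plan 2 would be preferred under both criteria; a Q-optimal design would instead minimize the variance of a predicted quantile at the use condition.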

Relevance: 30.00%

Abstract:

This thesis concerns the improvement of high-contrast imaging techniques enabling the direct detection of companions at small separations from their host star. More specifically, it is part of the development of the Gemini Planet Imager (GPI), a second-generation instrument for the Gemini telescopes. This camera will use an integral field spectrograph (IFS) to characterize detected companions and to reduce the speckle noise that limits their detection, and will correct atmospheric turbulence to an unprecedented level by using two deformable mirrors in its adaptive optics (AO) system: the woofer and the tweeter. The woofer will correct low-spatial-frequency, large-amplitude aberrations, while the tweeter will compensate for higher-frequency aberrations of smaller amplitude. First, the performance achievable with the IFSs currently operating on 8-10 m telescopes is investigated by observing the companion of the star GQ Lup with the IFS NIFS and the AO system ALTAIR installed on the Gemini North telescope. The angular differential imaging (ADI) technique is used to attenuate the speckle noise by a factor of 2 to 6. The spectra obtained in the JHK bands were used to constrain the companion's mass to 8-60 MJup, where MJup is the mass of Jupiter, by comparison with the predictions of atmospheric and evolutionary models. It is therefore more likely a brown dwarf than a planet. Since currently operating IFSs are general-purpose cameras used across many areas of astrophysics, their design was not optimized for high-contrast imaging. The second stage of this thesis therefore consisted of designing and laboratory-testing an IFS prototype optimized for this task.
Four speckle-suppression algorithms were tested on the data obtained: simple difference, double difference, spectral deconvolution, and a new algorithm developed within this thesis, named the twin-spectra algorithm. We find that the twin-spectra algorithm performs best for both types of companions tested: methanated and non-methanated. The signal-to-noise ratio of the detection was improved by a factor of up to 14 for a methanated companion and by a factor of 2 for a non-methanated companion. Finally, we address some problems related to splitting the command between the two deformable mirrors in the GPI AO system. We first present a method using analytical calculations and Monte Carlo simulations to determine the key parameters of the woofer, such as its diameter, its number of actuators and their stroke, which subsequently influenced the overall design of the instrument. Then, since the system under study uses a Fourier reconstructor, we propose splitting the command between the two mirrors in Fourier space and limiting the modes transferred to the woofer to those it can accurately reproduce. In the context of GPI, this replaces the two 1600×69 matrices required for a "classical" command split with a single 45×69 matrix, allowing the use of an off-the-shelf processor rather than a more complex computing architecture.

Relevance: 30.00%

Abstract:

Pre-publication drafts are reproduced with permission and copyright © 2013 of the Journal of Orthopaedic Trauma [Mutch J, Rouleau DM, Laflamme GY, Hagemeister N. Accurate Measurement of Greater Tuberosity Displacement without Computed Tomography: Validation of a method on Plain Radiography to guide Surgical Treatment. J Orthop Trauma. 2013 Nov 21: Epub ahead of print.] and copyright © 2014 of the British Editorial Society of Bone and Joint Surgery [Mutch JAJ, Laflamme GY, Hagemeister N, Cikes A, Rouleau DM. A new morphologic classification for greater tuberosity fractures of the proximal humerus: validation and clinical Implications. Bone Joint J 2014;96-B:In press.]

Relevance: 30.00%

Abstract:

Electric permittivity and magnetic permeability control electromagnetic wave propagation through materials. In naturally occurring materials, these are positive. Artificial materials exhibiting negative material properties have been reported: they are referred to as metamaterials. This paper concentrates on a ring-type split-ring resonator (SRR) exhibiting negative magnetic permeability. The design and synthesis of the SRR using the genetic-algorithm approach is explained in detail. A user-friendly graphical user interface (GUI) for an SRR optimizer and estimator using MATLAB™ is also presented.
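A genetic-algorithm synthesis loop of the kind described can be sketched as follows. The cost function below is a hypothetical stand-in for the electromagnetic model of the SRR (a real optimizer would evaluate the resonator's simulated response against the target frequency); the selection, crossover and mutation structure is the part the paper's approach relies on:

```python
import random

random.seed(0)

# Toy stand-in for the SRR cost: squared error between a hypothetical
# resonance formula and a 5 (arbitrary-unit) target. A real optimizer
# would call an electromagnetic model of the split-ring resonator here.
def cost(x):
    r, g = x  # ring radius and gap width (illustrative parameters)
    f_model = 10.0 / (r * (1.0 + g))   # hypothetical resonance formula
    return (f_model - 5.0) ** 2

def ga(cost, bounds, pop=40, gens=60):
    # Random initial population within the parameter bounds
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)
        elite = P[: pop // 2]          # keep the fitter half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()
            # Blend crossover followed by Gaussian mutation, clipped to bounds
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(hi, max(lo, c + random.gauss(0, 0.05)))
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return min(P, key=cost)

best = ga(cost, bounds=[(0.5, 4.0), (0.0, 1.0)])
print(best, cost(best))
```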

Relevance: 30.00%

Abstract:

Research on geopolymers has gained momentum over the past 20 years. Studies confirm that geopolymer concrete has good compressive strength, tensile strength, flexural strength, modulus of elasticity and durability, properties comparable with those of OPC concrete. There are many occasions where concrete is exposed to elevated temperatures, such as fire, exposure from thermal processors and furnaces, nuclear exposure, etc. In such cases, understanding the behaviour of concrete and structural members exposed to elevated temperatures is vital. Even though many research reports are available on the behaviour of OPC concrete at elevated temperatures, there is limited information about the behaviour of geopolymer concrete after exposure to elevated temperatures. A preliminary study was carried out for the selection of a mix proportion. The important variables considered in the present study include the alkali/fly ash ratio, percentage of total aggregate content, fine aggregate to total aggregate ratio, molarity of sodium hydroxide, sodium silicate to sodium hydroxide ratio, curing temperature and curing period. The influence of these variables on the engineering properties of geopolymer concrete was investigated. A study of the interface shear strength of reinforced and unreinforced geopolymer concrete, as well as OPC concrete, was also carried out. The engineering properties of fly ash based geopolymer concrete after exposure to elevated temperatures (ambient to 800 °C) were studied and the results were compared with those of conventional concrete. Scanning Electron Microscope analysis, Fourier Transform Infrared analysis, X-ray powder Diffractometer analysis and Thermogravimetric analysis of geopolymer mortar or paste at ambient temperature and after exposure to elevated temperature were also carried out in the present research work.
An experimental study was conducted on geopolymer concrete beams after exposure to elevated temperatures (ambient to 800 °C). The load-deflection characteristics, ductility and moment-curvature behaviour of the geopolymer concrete beams after exposure to elevated temperatures were investigated. Based on the present study, the major conclusions can be summarized as follows. There is a definite proportion of the various ingredients that achieves maximum strength properties. Geopolymer concrete with a total aggregate content of 70% by volume, a fine aggregate to total aggregate ratio of 0.35, NaOH molarity of 10, a Na2SiO3/NaOH ratio of 2.5 and an alkali to fly ash ratio of 0.55 gave the maximum compressive strength in the present study. Early strength development in geopolymer concrete can be achieved by proper selection of the curing temperature and curing period. With 24 hours of curing at 100 °C, 96.4% of the 28-day cube compressive strength could be achieved in 7 days in the present study. The interface shear strength of geopolymer concrete is lower than that of OPC concrete: a reduction of 33% and 29% was observed for unreinforced and reinforced geopolymer specimens respectively. The interface shear strength of geopolymer concrete can be approximately estimated as 50% of the value obtained from the available equations for the interface shear strength of ordinary Portland cement concrete (the methods of Mattock and ACI). Fly ash based geopolymer concrete undergoes a high rate of strength loss (compressive strength, tensile strength and modulus of elasticity) during its early heating period (up to 200 °C) compared to OPC concrete. At temperature exposures beyond 600 °C, the unreacted crystalline materials in geopolymer concrete transform into an amorphous state and undergo polymerization.
As a result, there is no further strength loss (compressive strength, tensile strength and modulus of elasticity) in geopolymer concrete, whereas OPC concrete continues to lose its strength properties at a faster rate beyond a temperature exposure of 600 °C. At present no equation is available to predict the strength properties of geopolymer concrete after exposure to elevated temperatures. Based on the study carried out, new equations have been proposed to predict the residual strengths (cube compressive strength, split tensile strength and modulus of elasticity) of geopolymer concrete after exposure to elevated temperatures (up to 800 °C). These equations could be used for material modelling until more refined equations are available. Compared to OPC concrete, geopolymer concrete shows better resistance to surface cracking when exposed to elevated temperatures. In the present study, while OPC concrete started developing cracks at 400 °C, geopolymer concrete did not show any visible cracks up to 600 °C and developed only minor cracks at an exposure temperature of 800 °C. Geopolymer concrete beams develop cracks at early load stages if they have been exposed to elevated temperatures. Even though the material strength of geopolymer concrete does not decrease beyond 600 °C, the flexural strength of the corresponding beams reduces rapidly after 600 °C temperature exposure, primarily due to the rapid loss of strength of the steel. With increasing temperature, the curvature at the yield point of a geopolymer concrete beam increases and the ductility therefore reduces. In the present study, compared to the ductility at ambient temperature, the ductility of geopolymer concrete beams reduced by 63.8% after 800 °C temperature exposure. Appropriate equations have been proposed to predict the service load crack width of geopolymer concrete beams exposed to elevated temperatures.
These equations could be used to limit the service load on geopolymer concrete beams exposed to elevated temperatures (up to 800 °C) for a predefined crack width (between 0.1 mm and 0.3 mm), or vice versa. The moment-curvature relationship of geopolymer concrete beams at ambient temperature is similar to that of RCC beams and can be predicted using the strain compatibility approach. Once exposed to an elevated temperature, however, the strain compatibility approach underestimates the curvature of geopolymer concrete beams between the first cracking and yielding points.
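The proposed 50% rule for interface shear can be illustrated with a short sketch. The ACI shear-friction form Vn = Avf·fy·μ is used here for the OPC estimate; the numerical values are illustrative, not values from the study:

```python
# Sketch of the reported 50% rule for geopolymer interface shear
# strength, using the ACI shear-friction form Vn = Avf * fy * mu for
# the OPC estimate. Input values below are illustrative.

def aci_shear_friction(Avf_mm2, fy_mpa, mu):
    """OPC interface shear capacity in newtons (ACI shear-friction form)."""
    return Avf_mm2 * fy_mpa * mu

def geopolymer_interface_shear(Avf_mm2, fy_mpa, mu):
    # The study suggests roughly 50% of the OPC-based estimate
    return 0.5 * aci_shear_friction(Avf_mm2, fy_mpa, mu)

# Example: 200 mm^2 of crossing steel, fy = 415 MPa, friction factor mu = 1.0
print(geopolymer_interface_shear(200.0, 415.0, 1.0))  # capacity in newtons
```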

Relevance: 30.00%

Abstract:

The aim of this paper is to extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has been used until now for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the Laplace equation in two dimensions using approximate approximations. The procedure is based on potential theoretical considerations in connection with a boundary integral equations method and consists of three approximation steps as follows. In a first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In a second step the decay behavior of the generating functions is used to gain a suitable approximation for the potential kernel, and in a third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.
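The generating idea of approximate approximations can be demonstrated in one dimension with a Gaussian generating function, (M_h f)(x) = (πD)^(-1/2) Σ_m f(mh) exp(-(x − mh)²/(Dh²)): the scheme does not converge as h → 0, but its error is O(h²) plus a saturation term that becomes negligible for moderate D. A minimal sketch, with illustrative values of h and D:

```python
import math

# Maz'ya's approximate approximation in 1-D with a Gaussian generating
# function: (M_h f)(x) = (pi*D)^(-1/2) * sum_m f(m h) exp(-(x - m h)^2 / (D h^2)).
# The error is O(h^2) plus a saturation term that shrinks rapidly as D grows.

def quasi_interpolant(f, x, h=0.05, D=2.0, half_width=200):
    m0 = round(x / h)                       # node nearest to x
    s = 0.0
    for m in range(m0 - half_width, m0 + half_width + 1):
        xm = m * h
        s += f(xm) * math.exp(-((x - xm) ** 2) / (D * h * h))
    return s / math.sqrt(math.pi * D)

approx = quasi_interpolant(math.sin, 0.3)
print(abs(approx - math.sin(0.3)))          # small: O(h^2) + saturation error
```

In the paper's setting the same construction is applied to the unknown source density of the potential representation rather than to a known function.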

Relevance: 30.00%

Abstract:

The method of approximate approximations, introduced by Maz'ya [1], can also be used for the numerical solution of boundary integral equations. In this case, the matrix of the resulting algebraic system for computing an approximate source density depends only on the positions of a finite number of boundary points and on the direction of the normal vector at these points (Boundary Point Method). We investigate this approach for the Stokes problem in the whole space and for the Stokes boundary value problem in a bounded convex domain G ⊂ R², where the second part consists of three steps. In a first step, the unknown potential density is replaced by a linear combination of exponentially decreasing basis functions concentrated near the boundary points. In a second step, integration over the boundary ∂G is replaced by integration over the tangents at the boundary points, so that even analytical expressions for the potential approximations can be obtained. In a third step, finally, the linear algebraic system is solved to determine an approximate density function and the resulting solution of the Stokes boundary value problem. Although not convergent, the method leads to an efficient approximation of the form O(h²) + ε, where ε can be chosen arbitrarily small.

Relevance: 30.00%

Abstract:

This work describes the use of Particle Image Velocimetry (PIV) for the analysis of self-excited flow phenomena and the evaluation procedure this requires. To investigate such mechanisms, which appear in turbo-compressors as rotating instabilities, data sets obtained from experimental investigations on an annular compressor stator cascade are used. Rotating instabilities are time-dependent flow phenomena that can occur in compressor cascades at high aerodynamic loads. Because the phase information is missing, this unsteady flow cannot be captured with conventional PIV systems. The Kármán vortex street and rotating instabilities both represent self-excited flow processes, and this similarity is exploited to demonstrate the functionality of the method on the Kármán vortex street. Visualizing the vortex transport with PIV requires a special procedure, since no external signal is available to define the phase angle of this self-excited flow. The methodology is based on coupling the PIV technique with hot-wire anemometry: a simultaneous, highly time-resolved hot-wire measurement makes it possible to assign a phase angle to the instants of the PIV images. To this end, the hot-wire signal is analyzed with an FFT procedure in order to group the PIV images according to their phase angles; the recorded images are marked on the time axis of the hot-wire measurements. A systematic analysis of the hot-wire signal in the vicinity of the PIV measurement provides data for determining the fundamental frequency and allows a phase angle to be assigned to each marked PIV position. The velocity components resulting from the PIV images of one class are then averaged.
From the resulting images of each class, the two-dimensional, time-dependent velocity field is obtained, in which the vortex motion of the Kármán vortex street becomes visible. In subsequent investigations, time signals from measurements in an annular compressor cascade are analyzed. It turns out that additional filter functions are required. The results finally make clear that the transfer of the method developed on the Kármán vortex street succeeds only partially and that further research is required.
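The phase-assignment step can be sketched with a synthetic reference signal standing in for the hot-wire trace: an FFT locates the fundamental frequency, each snapshot time is mapped to a phase angle, and the snapshots are grouped into phase classes (all signal parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic reference signal standing in for the time-resolved hot-wire
# trace: a noisy oscillation at an "unknown" fundamental frequency.
fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
f0 = 37.0                            # fundamental to be recovered, Hz
signal = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

# FFT-based estimate of the fundamental frequency
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

# Map each PIV snapshot time to a phase angle and one of 8 phase classes;
# snapshots within a class would then be averaged to a phase-locked field.
snapshot_times = rng.uniform(0, 2.0, size=200)
phase = (snapshot_times * f_est) % 1.0 * 2 * np.pi
bins = (phase / (2 * np.pi) * 8).astype(int)
print(f_est, np.bincount(bins, minlength=8))
```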

Relevance: 30.00%

Abstract:

We present an immersed interface method for the incompressible Navier-Stokes equations capable of handling rigid immersed boundaries. The immersed boundary is represented by a set of Lagrangian control points. In order to guarantee that the no-slip condition on the boundary is satisfied, singular forces are applied on the fluid at the immersed boundary. The forces are related to the jumps in pressure and the jumps in the derivatives of both pressure and velocity, and are interpolated using cubic splines. The strength of the singular forces is determined by solving a small system of equations at each time step. The Navier-Stokes equations are discretized on a staggered Cartesian grid by a second-order accurate projection method for pressure and velocity.

Relevance: 30.00%

Abstract:

Electroosmotic flow is a convenient mechanism for transporting polar fluid in a microfluidic device. The flow is generated through the application of an external electric field that acts on the free charges that exist in a thin Debye layer at the channel walls. The charge on the wall is due to the chemistry of the solid-fluid interface, and it can vary along the channel, e.g. due to modification of the wall. This investigation focuses on the simulation of the electroosmotic flow (EOF) profile in a cylindrical microchannel with a step change in zeta potential. The modified Navier-Stokes equation governing the velocity field and a non-linear two-dimensional Poisson-Boltzmann equation governing the electrical double-layer (EDL) field distribution are solved numerically using a finite control-volume method. Continuity of flow rate and electric current is enforced, resulting in a non-uniform electric field and pressure gradient distribution along the channel. At the junction of the step change in zeta potential, a parabolic velocity distribution, more typical of a pressure-driven flow profile, is obtained.
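The flow-rate continuity argument can be sketched in the thin-EDL limit, where each section carries a Helmholtz-Smoluchowski slip velocity and the mismatch between sections must be balanced by an induced pressure gradient (parameter values below are illustrative, not from the paper):

```python
# Thin-EDL sketch of flow-rate continuity across a step change in zeta
# potential: each section has a Helmholtz-Smoluchowski slip velocity
# u_i = -eps * zeta_i * E / mu, and conservation of flow rate induces a
# pressure gradient dp/dx_i = 8 * mu * (u_i - u_mean) / R^2 in a
# cylindrical channel of radius R (equal-length sections assumed).

eps = 7.08e-10   # permittivity of water, F/m
mu = 1.0e-3      # dynamic viscosity, Pa s
E = 1.0e4        # axial electric field, V/m
R = 50e-6        # channel radius, m
zeta1, zeta2 = -0.050, -0.025   # zeta potentials of the two sections, V

u1 = -eps * zeta1 * E / mu       # slip velocity, section 1
u2 = -eps * zeta2 * E / mu       # slip velocity, section 2
u_mean = 0.5 * (u1 + u2)         # common mean velocity (flow-rate continuity)

dpdx1 = 8 * mu * (u1 - u_mean) / R**2   # induced adverse gradient (fast side)
dpdx2 = 8 * mu * (u2 - u_mean) / R**2   # induced favourable gradient (slow side)
print(u1, u2, dpdx1, dpdx2)
```

The induced Poiseuille contribution is exactly what bends the plug-like EOF profile toward the parabolic shape observed at the junction.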

Relevance: 30.00%

Abstract:

This thesis presents the Kou model, a jump-diffusion with double-exponential jumps, for the valuation of European call options on oil prices as the underlying asset. The numerical calculations for the formulation of analytical expressions are shown; these are solved by implementing efficient numerical algorithms that lead to the theoretical prices of the options evaluated. The advantages of methods such as the Fourier transform are then discussed, given the relative simplicity of their programming compared with the development of other numerical techniques. This method is used together with a non-parametric regularized calibration exercise which, by minimizing the squared errors subject to a penalty based on the concept of relative entropy, yields prices for call options on oil, with the model showing a better ability to assign fair prices relative to those traded in the market.
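As a cross-check on a Fourier-based pricer of the kind described, a European call under Kou's double-exponential jump-diffusion can be priced by plain Monte Carlo; the parameters below are illustrative, not calibrated values from the thesis:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo sketch of a European call under Kou's double-exponential
# jump-diffusion. Illustrative parameters (eta1 > 1 is required so that
# the expected upward jump size is finite under the pricing measure).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
lam, p, eta1, eta2 = 1.0, 0.4, 10.0, 5.0   # jump intensity, up-prob, tails

# Compensator so the discounted price is a martingale: E[e^Y] - 1
zeta = p * eta1 / (eta1 - 1) + (1 - p) * eta2 / (eta2 + 1) - 1

n = 50_000
Z = rng.standard_normal(n)
N = rng.poisson(lam * T, n)                 # number of jumps per path
J = np.zeros(n)
for i in np.nonzero(N)[0]:
    up = rng.random(N[i]) < p               # upward jump with probability p
    mag = rng.exponential(1.0, N[i])        # unit exponentials, rescaled below
    J[i] = np.sum(np.where(up, mag / eta1, -mag / eta2))

ST = S0 * np.exp((r - 0.5 * sigma**2 - lam * zeta) * T
                 + sigma * math.sqrt(T) * Z + J)
price = math.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
print(price)
```

A Fourier pricer built from the Kou characteristic function should agree with this estimate to within Monte Carlo error.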

Relevance: 30.00%

Abstract:

In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered as the input, with minimal loss of mutual information with respect to another variable, considered as the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
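The quantity driving both the split and merge phases is the mutual information of the channel between regions and intensity histogram bins. A minimal sketch of its computation from a joint histogram (the counts below are illustrative):

```python
import numpy as np

# Joint counts of an (illustrative) channel: rows are image regions,
# columns are intensity histogram bins.
counts = np.array([[30, 5, 0],
                   [4, 40, 6],
                   [0, 5, 10]], dtype=float)

def mutual_information(counts):
    p = counts / counts.sum()           # joint distribution p(x, y)
    px = p.sum(axis=1, keepdims=True)   # marginal over regions
    py = p.sum(axis=0, keepdims=True)   # marginal over intensity bins
    nz = p > 0                          # skip zero cells (0 log 0 = 0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

print(mutual_information(counts))       # I(X;Y) in bits
```

A merge step would compare this value before and after pooling two rows, accepting the merge with the smallest mutual information loss.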

Relevance: 30.00%

Abstract:

Realistic rendering of animations is known to be an expensive processing task when physically-based global illumination methods are used to improve illumination details. This paper presents an acceleration technique for computing animations in radiosity environments. The technique is based on an interpolation approach that exploits temporal coherence in the radiosity. A fast global Monte Carlo pre-processing step is added to the computation of the animated sequence to select important frames; these are fully computed and used as a base for interpolating the whole sequence. The approach is completely view-independent: once the illumination is computed, it can be visualized by any animated camera. Results show significant speed-ups, indicating that the technique could be an interesting alternative to deterministic methods for computing non-interactive radiosity animations for moderately complex scenarios.

Relevance: 30.00%

Abstract:

The human ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects which we acquire through experience. Modelling the behaviour of our brain remains out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done in the future. This fact allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of pairs of homologue points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, epipolar geometry does not solve the problem entirely, as many considerations have to be taken into account; for example, points may have no correspondence due to a surface occlusion or simply due to a projection outside the camera's field of view.
The interest of the thesis is focused on structured light, one of the techniques most frequently used to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern, its projection onto the scene, and an image sensor. The deformations between the pattern projected onto the scene and the one captured by the camera permit three-dimensional information about the illuminated scene to be obtained. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use expensive algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. This technique is based on the codification of the light projected onto the scene so that it can be used to obtain a unique match. As each token of light is imaged by the camera, its label has to be read (the pattern decoded) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, and a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has led to a new coded structured light pattern which solves the correspondence problem uniquely and robustly: uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching; robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader will find examples of the 3D measurement of static objects, as well as the more complicated measurement of moving objects.
The technique can be used in both cases, since the pattern is coded in a single projection shot, and can therefore be applied in several areas of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
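The reconstruction step common to stereo and structured-light systems, recovering a 3-D point from the calibrated projections of a pair of corresponding points, can be sketched with linear (DLT) triangulation on a synthetic two-camera rig:

```python
import numpy as np

# Linear (DLT) triangulation: given calibrated projection matrices P1, P2
# and a pair of homologue image points, the 3-D point is the null vector
# of the homogeneous system built from x ~ P X. The rig below is a simple
# synthetic example (identity intrinsics, 1-unit baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated along x

X_true = np.array([0.2, -0.1, 4.0, 1.0])                   # homogeneous 3-D point

def project(P, X):
    x = P @ X
    return x[:2] / x[2]           # pinhole projection to image coordinates

def triangulate(P1, P2, x1, x2):
    # Each image point contributes two linear constraints on X
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares null vector
    X = Vt[-1]
    return X / X[3]

x1, x2 = project(P1, X_true), project(P2, X_true)
X_hat = triangulate(P1, P2, x1, x2)
print(X_hat[:3])   # recovers (0.2, -0.1, 4.0)
```

With a coded pattern, x1 would come from the projector model and x2 from the camera, but the triangulation is identical.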

Relevance: 30.00%

Abstract:

1. Jerdon's courser Rhinoptilus bitorquatus is a nocturnally active cursorial bird that is only known to occur in a small area of scrub jungle in Andhra Pradesh, India, and is listed as critically endangered by the IUCN. Information on its habitat requirements is needed urgently to underpin conservation measures. We quantified the habitat features that correlated with the use of different areas of scrub jungle by Jerdon's coursers, and developed a model to map potentially suitable habitat over large areas from satellite imagery and facilitate the design of surveys of Jerdon's courser distribution. 2. We used 11 arrays of 5-m long tracking strips consisting of smoothed fine soil to detect the footprints of Jerdon's coursers, and measured tracking rates (tracking events per strip night). We counted the number of bushes and trees, and described other attributes of vegetation and substrate in a 10-m square plot centred on each strip. We obtained reflectance data from Landsat 7 satellite imagery for the pixel within which each strip lay. 3. We used logistic regression models to describe the relationship between tracking rate by Jerdon's coursers and characteristics of the habitat around the strips, using ground-based survey data and satellite imagery. 4. Jerdon's coursers were most likely to occur where the density of large (>2 m tall) bushes was in the range 300-700 ha(-1) and where the density of smaller bushes was less than 1000 ha(-1). This habitat was detectable using satellite imagery. 5. Synthesis and applications. The occurrence of Jerdon's courser is strongly correlated with the density of bushes and trees, and is in turn affected by grazing with domestic livestock, woodcutting and mechanical clearance of bushes to create pasture, orchards and farmland. It is likely that there is an optimal level of grazing and woodcutting that would maintain or create suitable conditions for the species. 
Knowledge of the species' distribution is incomplete and there is considerable pressure from human use of apparently suitable habitats. Hence, distribution mapping is a high conservation priority. A two-step procedure is proposed, involving the use of ground surveys of bush density to calibrate satellite image-based mapping of potential habitat. These maps could then be used to select priority areas for Jerdon's courser surveys. The use of tracking strips to study habitat selection and distribution has potential in studies of other scarce and secretive species.
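The shape of such a logistic model, with a quadratic term in bush density producing an intermediate optimum, can be sketched as follows; the coefficients are hypothetical, chosen only so that the optimum falls near the middle of the reported 300-700 ha⁻¹ range:

```python
import math

# Sketch of a quadratic logistic habitat model of the kind fitted in the
# study: logit p = b0 + b1*d + b2*d^2, where d is the density of large
# bushes (ha^-1). Coefficients are hypothetical, for illustration only.
b0, b1, b2 = -6.0, 0.02, -2.0e-5

def p_occurrence(density):
    logit = b0 + b1 * density + b2 * density**2
    return 1.0 / (1.0 + math.exp(-logit))

d_opt = -b1 / (2 * b2)      # density maximizing the logit (vertex of parabola)
print(d_opt, p_occurrence(d_opt), p_occurrence(0), p_occurrence(1500))
```

The negative quadratic coefficient is what encodes the field observation that both very sparse and very dense scrub are avoided.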