900 results for LARGE-AREA TELESCOPE


Relevance:

30.00%

Publisher:

Abstract:

Retrieval, treatment, and disposal of high-level radioactive waste (HLW) is expected to cost between 100 and 300 billion dollars. The risks to workers, public health, and the environment are also a major area of concern for HLW. Visualization of the interface between settled solids and the optically opaque liquid is needed for retrieval of the waste from underground storage tanks. The profiling sonar selected for this research generates a 2-D image of the interface. Multiple experiments were performed to demonstrate the effectiveness of sonar for real-time monitoring of the interface inside HLW tanks. The first set of experiments demonstrated that object shapes could be identified, and the interface thereby mapped, even with 30% of the solids entrained in the liquid. A simulation of the sonar system validated these results. The second set of experiments confirmed the sonar's ability to detect solids with a density similar to that of the surrounding liquid. The third set of experiments determined the effects of nearby objects on image resolution. The final set of experiments demonstrated the functional and chemical capabilities of the sonar in a caustic solution.

Relevance:

30.00%

Publisher:

Abstract:

Freshwater use is a major concern in the mass production of algae for biofuels. This project examined the use of canal water obtained from the Everglades Agricultural Area as a base medium for the mass production of algae. This water is not suitable for human consumption, and it is currently used for crop irrigation. A variety of canals were found to be suitable for water collection. Comparison of two methods for algal production showed no significant difference in biomass accumulation. It was discovered that synthetic reticulated foam can be used for algal biomass collection and harvest, and there is potential for its application in large-scale operations. Finally, it was determined that high alkaline conditions may help limit contaminants and competing organisms in growing algae cultures.

Relevance:

30.00%

Publisher:

Abstract:

Haemosporidians are vector-transmitted intracellular parasites that occur in many bird species worldwide and may have important implications for wild bird populations. Surveys of haemosporidians have traditionally focused on Europe and North America, and only recently have they been carried out in the Neotropics, where the prevalence and impacts of the disease have been less studied and are not well understood. In this study we carried out a survey in the endemic bird area of the Sierra Nevada de Santa Marta (SNSM), an isolated coastal massif in northern Colombia that contains a large number of biomes and that is experiencing high rates of habitat loss. We sampled birds from 25 species at 2 different altitudes (1640 and 2100 m asl) and determined avian haemosporidian infection by polymerase chain reaction and sequencing a portion of the cytochrome b (cyt b) gene of the parasite. From the sampled birds, 32.1% were infected by at least 1 of 12 unique cyt b lineages of haemosporidian genera: Plasmodium, Leucocytozoon, Haemoproteus, and subgenus Parahaemoproteus. We found a higher prevalence of avian haemosporidians at low altitudes (1640 m asl). All endemic bird species we sampled had at least one individual infected with avian haemosporidians. We also found evidence of higher overall prevalence among endemic rather than nonendemic birds, suggesting higher susceptibility in endemic birds. Overall, our findings suggest a high haemosporidian species richness in the bird community of the SNSM. Considering the rate of habitat loss that this area is experiencing, it is important to understand how avian haemosporidians affect bird populations; furthermore, more exhaustive sampling is required to fully comprehend the extent of avian haemosporidian infection in the area.

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the large increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard setting of multivariate continuous data that has been the major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
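As an illustration of the latent-class (PARAFAC-type) representation referred to above, the following minimal sketch builds the joint probability mass function of a few categorical variables as a finite mixture of product-multinomial kernels. The dimensions, Dirichlet-generated parameters, and variable names are purely illustrative assumptions and are not taken from Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# PARAFAC / latent-class representation of the joint pmf of p categorical
# variables with d levels each: P(y1,...,yp) = sum_h nu_h * prod_j lam[j, h, y_j]
H, p, d = 3, 4, 2          # latent classes, variables, levels per variable (illustrative)

nu = rng.dirichlet(np.ones(H))                # class weights, sum to 1
lam = rng.dirichlet(np.ones(d), size=(p, H))  # lam[j, h] is a pmf over the levels of y_j

def joint_pmf(y, nu, lam):
    """Probability of one multivariate categorical observation y (length p)."""
    # For each class h, multiply the per-variable probabilities, then mix over classes.
    per_class = np.prod([lam[j, :, y[j]] for j in range(len(y))], axis=0)
    return float(nu @ per_class)

# The full probability tensor has nonnegative rank at most H by construction.
cells = np.array(np.meshgrid(*[range(d)] * p)).reshape(p, -1).T
table = np.array([joint_pmf(y, nu, lam) for y in cells])
print(table.sum())  # ~1.0: the mixture defines a valid joint distribution
```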

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
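For readers unfamiliar with Gaussian posterior approximations in this setting, the sketch below shows a generic Laplace-style approximation (posterior mode plus inverse Hessian) for a small Poisson log-linear model with a Gaussian prior. It only illustrates the general idea: the approximation derived in Chapter 4 is the KL-optimal Gaussian approximation under Diaconis-Ylvisaker priors, not this one, and the table, design matrix, and prior variance here are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Counts of a 2x2 contingency table, flattened, with a main-effects (independence)
# log-linear model: log mu = X @ theta.  Data and prior scale are illustrative.
y = np.array([18.0, 7.0, 5.0, 20.0])
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)      # intercept + one effect per margin
tau2 = 10.0                                  # Gaussian prior variance on theta

def neg_log_post(theta):
    """Negative log posterior: Poisson log-likelihood plus N(0, tau2 I) prior."""
    eta = X @ theta
    return np.sum(np.exp(eta) - y * eta) + 0.5 * theta @ theta / tau2

opt = minimize(neg_log_post, np.zeros(3), method="BFGS")
theta_hat = opt.x
# Hessian of the negative log posterior at the mode: X' diag(mu) X + I/tau2.
H = X.T @ np.diag(np.exp(X @ theta_hat)) @ X + np.eye(3) / tau2
cov = np.linalg.inv(H)                       # covariance of the Gaussian approximation
print(theta_hat)
print(np.sqrt(np.diag(cov)))                 # approximate posterior standard deviations
```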

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
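To make the idea of an approximating transition kernel concrete, here is a minimal sketch of a random-walk Metropolis step whose log-likelihood is estimated from a random subset of the data at each iteration, one of the kinds of approximation mentioned above. It is a generic illustration rather than the framework of Chapter 6; the Gaussian model, subsample size, and step size are assumptions.

```python
import numpy as np

def subsampled_mh(data, n_iter=5000, subsample=500, step=0.05, seed=0):
    """Random-walk Metropolis for the mean of N(theta, 1) data, with the
    log-likelihood approximated on a random subset at every iteration.
    The subsampling perturbs the exact Metropolis-Hastings kernel."""
    rng = np.random.default_rng(seed)
    n = len(data)
    theta = data.mean()
    chain = np.empty(n_iter)

    def approx_loglik(t):
        idx = rng.choice(n, size=subsample, replace=False)
        # Rescale the subset log-likelihood to the full sample size.
        return -(n / subsample) * 0.5 * np.sum((data[idx] - t) ** 2)

    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # Flat prior on theta; accept with the usual MH ratio on the estimates.
        if np.log(rng.uniform()) < approx_loglik(prop) - approx_loglik(theta):
            theta = prop
        chain[i] = theta
    return chain

data = np.random.default_rng(1).normal(2.0, 1.0, size=100000)
chain = subsampled_mh(data)
print(chain[1000:].mean())   # approximately the posterior mean, near 2.0
```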

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
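For concreteness, the following is a minimal sketch of the truncated-normal data augmentation Gibbs sampler for Bayesian probit regression (the Albert-Chib construction referred to above), applied to toy rare-event data of the kind the chapter studies. The prior variance, sample size, and coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=2000, prior_var=100.0, seed=0):
    """Truncated-normal data augmentation Gibbs sampler for probit regression."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    B_inv = np.eye(p) / prior_var              # prior precision, beta ~ N(0, prior_var * I)
    V = np.linalg.inv(B_inv + X.T @ X)         # posterior covariance of beta given z
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # 1) z_i | beta, y_i: normal with mean x_i' beta and sd 1, truncated to
        #    (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)    # standardized lower bounds
        hi = np.where(y == 1, np.inf, -mu)     # standardized upper bounds
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # 2) beta | z: multivariate normal with covariance V and mean V X'z.
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws

# Toy rare-event data: large n, few successes, where such samplers mix poorly.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(5000), rng.standard_normal(5000)])
y = (X @ np.array([-3.0, 0.5]) + rng.standard_normal(5000) > 0).astype(int)
samples = probit_da_gibbs(X, y, n_iter=500)
print(samples.mean(axis=0))
```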

Relevance:

30.00%

Publisher:

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models are expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into common optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
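As a concrete example of the dynamic programming step underlying such dynamic discrete choice route models, the sketch below computes log-sum value functions and the implied link choice probabilities for a recursive-logit-style model on a tiny directed network. The network, utilities, and destination are illustrative assumptions and do not correspond to any model estimated in the thesis.

```python
import numpy as np

# Toy directed network: arcs (k -> a) with deterministic utilities v[k][a]
# (e.g., negative travel times). Node "D" is the destination.
v = {
    "A": {"B": -1.0, "C": -1.5},
    "B": {"C": -0.5, "D": -2.0},
    "C": {"D": -1.0},
    "D": {},
}

def value_functions(v, dest, n_iter=200):
    """Log-sum value iteration: V(k) = log(sum_a exp(v(k,a) + V(a))), with V(dest) = 0."""
    V = {k: (0.0 if k == dest else -np.inf) for k in v}
    for _ in range(n_iter):
        for k in v:
            if k == dest or not v[k]:
                continue
            V[k] = np.logaddexp.reduce([v[k][a] + V[a] for a in v[k]])
    return V

def choice_probs(v, V, k):
    """Link choice probabilities at node k implied by the logit recursion."""
    return {a: np.exp(v[k][a] + V[a] - V[k]) for a in v[k]}

V = value_functions(v, dest="D")
print(V)                        # expected downstream utility at each node
print(choice_probs(v, V, "A"))  # probabilities of choosing B vs. C at node A
```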

Relevance:

30.00%

Publisher:

Abstract:

Although the use of deep-sea imagery has increased considerably during the last decades, reports on nekton falls to the deep seafloor are very scarce. Whereas there are a few reports describing the finding of whale carcasses in the deep north-eastern and south-eastern Pacific, descriptions of invertebrate or vertebrate food-falls at centimetre to metre scale are extremely rare. After 4 years of extensive work at a deep-sea long-term station in northern polar regions (AWI-"Hausgarten"), including large-scale visual observations with various camera systems covering some 10 000 m2 of seafloor at water depths between 1250 and 5600 m, this paper describes the first observation of a fish carcass at about 1280 m water depth, west of Svalbard. The fish skeleton had a total length of 36 cm and an approximate biomass of 0.5 kg wet weight. On the basis of in situ experiments, we estimated a very short residence time of about 7 h at the bottom for this particular carcass. The fast response of the motile deep-sea scavenger community to such events and the rapid utilisation of this kind of organic carbon supply might partly explain the extreme rarity of such an observation.

Relevance:

30.00%

Publisher:

Abstract:

Oceanic flood basalts are poorly understood, short-term expressions of highly increased heat flux and mass flow within the convecting mantle. The uniqueness of the Caribbean Large Igneous Province (CLIP, 92-74 Ma) with respect to other Cretaceous oceanic plateaus is its extensive sub-aerial exposures, providing an excellent basis to investigate the temporal and compositional relationships within a starting plume head. We present major element, trace element and initial Sr-Nd-Pb isotope compositions of 40 extrusive rocks from the Caribbean Plateau, including onland sections in Costa Rica, Colombia and Curaçao as well as DSDP Sites in the Central Caribbean. Even though the lavas were erupted over an area of ~3 × 10^6 km^2, the majority have strikingly uniform incompatible element patterns (La/Yb = 0.96 ± 0.16, n = 64 out of 79 samples, 2σ) and initial Nd-Pb isotopic compositions (e.g. initial 143Nd/144Nd = 0.51291 ± 3, εNd(i) = 7.3 ± 0.6, initial 206Pb/204Pb = 18.86 ± 0.12, n = 54 out of 66, 2σ). Lavas with endmember compositions have only been sampled at the DSDP Sites, Gorgona Island (Colombia) and the 65-60 Ma accreted Quepos and Osa igneous complexes (Costa Rica) of the subsequent hotspot track. Despite the relatively uniform composition of most lavas, linear correlations exist between isotope ratios and between isotope and highly incompatible trace element ratios. The Sr-Nd-Pb isotope and trace element signatures of the chemically enriched lavas are compatible with derivation from recycled oceanic crust, while the depleted lavas are derived from a highly residual source. This source could represent either oceanic lithospheric mantle left after ocean crust formation or gabbros with interlayered ultramafic cumulates of the lower oceanic crust. 3He/4He ratios in olivines of enriched picrites at Quepos are ~12 times higher than the atmospheric ratio, suggesting that the enriched component may have once resided in the lower mantle. Evaluation of the Sm-Nd and U-Pb isotope systematics on isochron diagrams suggests that the enriched and depleted components could have separated from the depleted MORB source mantle ≤500 Ma before CLIP formation, which is interpreted to reflect the recycling time of the CLIP source. Mantle plume heads may provide a mechanism for transporting large volumes of possibly young recycled oceanic lithosphere residing in the lower mantle back into the shallow MORB source mantle.

Relevance:

30.00%

Publisher:

Abstract:

Structural-petrologic and isotopic-geochronologic data on magmatic, metamorphic, and metasomatic rocks from the Chernorud zone were used to reproduce the multistage history of their exhumation to upper crustal levels. The process is subdivided into four discrete stages, which corresponded to metamorphism to the granulite facies (500-490 Ma), metamorphism to the amphibolite facies (470-460 Ma), metamorphism to at least the epidote-amphibolite facies (440-430 Ma), and postmetamorphic events (410-400 Ma). The earliest two stages likely corresponded to the tectonic stacking of the backarc basin in response to the collision of the Siberian continent with the Eravninskaya island arc or the Barguzin microcontinent, a process that ended with the extensive generation of synmetamorphic granites. During the third and fourth stages, the granulites of the Chernorud nappe were successively exposed during intense tectonic motions along large deformation zones (Primorskii fault, collision lineament, and Orso Complex). The comparison of the histories of active thermal events for Early Caledonian folded structures in the Central Asian Foldbelt indicates that active thermal events of equal duration are reconstructed for the following five widely spaced accretion-collision structures: the Chernorud granulite zone in the Ol'khon territory, the Slyudyanka crystalline complex in the southwestern Baikal area, the western Sangilen territory in southeastern Tuva, the Derbinskii terrane in the Eastern Sayan, and the Bayankhongor ophiolite zone in central Mongolia. The dates obtained by various isotopic techniques are generally consistent with the four discrete stages identified in the Chernorud nappe, whereas the dates corresponding to the island-arc evolutionary stage were obtained only for the western Sangilen and Bayankhongor ophiolite zone.

Relevance:

30.00%

Publisher:

Abstract:

The James Webb Space Telescope (JWST) will likely revolutionize transiting exoplanet atmospheric science, due to a combination of its capability for continuous, long duration observations and its larger collecting area, spectral coverage, and spectral resolution compared to existing space-based facilities. However, it is unclear precisely how well JWST will perform and which of its myriad instruments and observing modes will be best suited for transiting exoplanet studies. In this article, we describe a prefatory JWST Early Release Science (ERS) Cycle 1 program that focuses on testing specific observing modes to quickly give the community the data and experience it needs to plan more efficient and successful transiting exoplanet characterization programs in later cycles. We propose a multi-pronged approach wherein one aspect of the program focuses on observing transits of a single target with all of the recommended observing modes to identify and understand potential systematics, compare transmission spectra at overlapping and neighboring wavelength regions, confirm throughputs, and determine overall performances. In our search for transiting exoplanets that are well suited to achieving these goals, we identify 12 objects (dubbed “community targets”) that meet our defined criteria. Currently, the most favorable target is WASP-62b because of its large predicted signal size, relatively bright host star, and location in JWST's continuous viewing zone. Since most of the community targets do not have well-characterized atmospheres, we recommend initiating preparatory observing programs to determine the presence of obscuring clouds/hazes within their atmospheres. Measurable spectroscopic features are needed to establish the optimal resolution and wavelength regions for exoplanet characterization. Other initiatives from our proposed ERS program include testing the instrument brightness limits and performing phase-curve observations. The latter are a unique challenge compared to transit observations because of their significantly longer durations. Using only a single mode, we propose to observe a full-orbit phase curve of one of the previously characterized, short-orbital-period planets to evaluate the facility-level aspects of long, uninterrupted time-series observations.

Relevance:

30.00%

Publisher:

Abstract:

Aims. The large- and small-scale (pc) structure of the Galactic interstellar medium can be investigated by utilising spectra of early-type stellar probes of known distances in the same region of the sky. This paper determines the variation in line strength of Ca ii at 3933.661 Å as a function of probe separation for a large sample of stars, including a number of sightlines in the Magellanic Clouds.

Methods. FLAMES-GIRAFFE Ca ii data taken with the Very Large Telescope towards early-type stars in 3 Galactic and 4 Magellanic open clusters are used to obtain the velocity, equivalent width, column density, and line width of interstellar Galactic calcium for a total of 657 stars, of which 443 are Magellanic Cloud sightlines. Between 43 and 111 stars were observed in each cluster. Additionally, FEROS and UVES Ca ii K and Na i D spectra of 21 Galactic and 154 Magellanic early-type stars are presented and combined with data from the literature to study the calcium column density-parallax relationship.

Results. For the four Magellanic clusters studied with FLAMES, the strength of the Galactic interstellar Ca ii K equivalent width on transverse scales of ∼0.05-9 pc is found to vary by factors of ∼1.8-3.0, corresponding to column density variations of ∼0.3-0.5 dex in the optically thin approximation. Using FLAMES, FEROS, and UVES archive spectra, the minimum and maximum reduced equivalent widths for Milky Way gas are found to lie in the ranges ∼35-125 mÅ and ∼30-160 mÅ for Ca ii K and Na i D, respectively. These ranges are consistent with a previously published simple model of the interstellar medium consisting of spherical cloudlets with a filling factor of ∼0.3, although other geometries are not ruled out. Finally, the derived functional form relating parallax (π) and Ca ii column density (N_CaII) is π(mas) = 1 / (2.39 × 10^-13 × N_CaII (cm^-2) + 0.11). Our derived parallax is ∼25 per cent lower than predicted by Megier et al. (2009, A&A, 507, 833) at a distance of ∼100 pc and ∼15 per cent lower at a distance of ∼200 pc, reflecting inhomogeneity in the Ca ii distribution in the different sightlines studied.
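For convenience, the derived parallax-column density relation quoted above can be applied directly; the short sketch below evaluates it for an assumed (illustrative) Ca ii column density and converts the result to a distance.

```python
def parallax_from_caii(n_caii_cm2):
    """Parallax (mas) from Ca II column density via the relation derived in the text:
    pi(mas) = 1 / (2.39e-13 * N_CaII(cm^-2) + 0.11)."""
    return 1.0 / (2.39e-13 * n_caii_cm2 + 0.11)

def distance_pc(parallax_mas):
    """Distance in parsecs from parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

# Example: an assumed, purely illustrative sightline column density of 1e12 cm^-2.
pi_mas = parallax_from_caii(1e12)
print(pi_mas, distance_pc(pi_mas))   # ~2.87 mas, ~350 pc
```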

Relevance:

30.00%

Publisher:

Abstract:

The efficiency of lecturing, or large group teaching, has been called into question for many years. An abundance of literature details the components of effective teaching that are not provided in the traditional lecture setting, and many alternative methods of teaching have been recommended. However, with continued constraints on resources, large group teaching is here to stay, and students expect and are familiar with this method.

Technology Enhanced Learning (TEL) may be the way forward, preventing educators from "throwing out the baby with the bath water". TEL could help educators, especially in the life sciences, which are often taught by lectures, to engage and involve students in their learning, provide feedback, and incorporate the "quality" of small group teaching, case studies, and Enquiry Based Learning into the large group setting, thus promoting effective and deep learning.

Relevance:

30.00%

Publisher:

Abstract:

[EN] The impact of nest predators on sea turtle hatching success is highly variable, depending on predator abundance and also on interactions among different predators. Food web connectivity usually makes it difficult to understand predator-prey interactions and to develop efficient conservation strategies. The Cape Verde archipelago contains an important nesting area for loggerheads, where ghost crabs are the only described nest predator. We have studied the impact of ghost crabs on loggerhead nests in this threatened population, as well as the efficiency of several management practices in reducing this impact.

Relevance:

30.00%

Publisher:

Abstract:

v. 46, n. 2, p. 140-148, apr./jun. 2016.

Relevance:

30.00%

Publisher:

Abstract:

It is increasingly recognized that ecological restoration demands conservation action beyond the borders of existing protected areas. This requires the coordination of land uses and management over a larger area, usually with a range of partners, which presents novel institutional challenges for conservation planners. Interviews were undertaken with managers of a purposive sample of large-scale conservation areas in the UK. Interviews were open-ended and analyzed using standard qualitative methods. Results show a wide variety of organizations are involved in large-scale conservation projects, and that partnerships take time to create and demand resilience in the face of different organizational practices, staff turnover, and short-term funding. Successful partnerships with local communities depend on the establishment of trust and the availability of external funds to support conservation land uses. We conclude that there is no single institutional model for large-scale conservation: success depends on finding institutional strategies that secure long-term conservation outcomes, and ensure that conservation gains are not reversed when funding runs out, private owners change priorities, or land changes hands.

Relevance:

30.00%

Publisher:

Abstract:

The current rapid decline in biodiversity is worrying, and human activities are its direct cause. Numerous protected areas have been established to counter this loss of biodiversity. To maximize their effectiveness, the functional connectivity between them needs to be improved. Climate change is currently disturbing environmental conditions globally. It is a threat to biodiversity that, until recently, was seldom taken into account when protected areas were established. Species movement, and therefore the functional connectivity of the landscape, is affected by climate change, and studies have shown that improving functional connectivity between protected areas would help species cope with the impacts of climate change. My thesis presents a method for designing protected-area networks while accounting for climate change and functional connectivity. My study area is the Gaspésie region of Québec (Canada). The endangered Atlantic-Gaspésie caribou population (Rangifer tarandus caribou) was used as the focal species to define functional connectivity. This small population is in continuous decline because of predation and habitat modification, and climate change could become an additional threat.

I first built a spatially explicit individual-based model to explain and simulate caribou movement. I used the sparse VHF data from the caribou population and a pattern-oriented modeling strategy to parameterize the model and select the best movement hypothesis. My best model reproduced most of the movement patterns defined from the observed data. This model provides a better understanding of the drivers of Atlantic-Gaspésie caribou movement, as well as a spatial estimate of its use of the landscape in the region. I concluded that sparse data are sufficient to fit an individual-based model when combined with pattern-oriented modeling.

I then estimated the impact of climate change and of different conservation actions on caribou movement potential. I used the individual-based model to simulate caribou movement in hypothetical landscapes representing different climate change and conservation action scenarios. The conservation actions represented the establishment of new protected areas in Gaspésie, as defined by the scenario proposed by the Québec government, as well as the restoration of secondary roads within protected areas. The impacts of climate change on vegetation, as defined in my scenarios, reduced caribou movement potential. Road restoration was able to mitigate these negative effects, whereas the establishment of new protected areas was not.

Finally, I presented a method for designing effective protected-area networks and proposed new protected areas to establish in Gaspésie in order to protect biodiversity over the long term. I created numerous protected-area network scenarios by expanding the current network to protect 12% of the territory. For each network, I calculated the ecological representativeness and two measures of long-term functional connectivity. The functional connectivity measures represented the overall accessibility of the protected areas for the Atlantic-Gaspésie caribou and its movement potential within them. I used movement potential estimates for the current period and for the future under different climate change scenarios to represent long-term functional connectivity. The protected-area network I proposed was the scenario that maximized the trade-off among the three network characteristics calculated.

In this thesis, I explained and predicted the movement of the Atlantic-Gaspésie caribou under different environmental conditions, including landscapes affected by climate change. These results helped me define a protected-area network to establish in Gaspésie to protect the caribou over time. I believe this thesis provides new knowledge on the movement behaviour of the Atlantic-Gaspésie caribou, as well as on the conservation actions that can be taken in Gaspésie to improve the protection of the caribou and of other species. I believe the method presented can be applied to other ecosystems with similar characteristics and needs.