963 results for STATISTICAL MODELS
Abstract:
Habitat-based statistical models relating patterns of presence and absence of species to habitat variables can be useful for resolving conservation-related problems and highlighting the causes of population declines. In this paper, we apply such a modelling approach to an endemic amphibian, the Sardinian mountain newt Euproctus platycephalus, considered by the IUCN a critically endangered species. Sardinian newts inhabit freshwater habitat in streams, small lakes and pools on the island of Sardinia (Italy). Reported declines of newt populations are not yet supported by quantitative data; however, they are perceived or suspected across the species' historical range. This study represents a first attempt to relate habitat characteristics statistically to Sardinian newt occurrence and persistence. Logistic regression analysis revealed that newts are more likely to be found in sites with colder water temperature, less riparian vegetation and, marginally, an absence of fish. The implications of the results for the conservation of the species are discussed, and suggestions are made for the short-term management of newt-inhabited sites.
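For concreteness, the sketch below fits a habitat-based presence/absence model of the kind described above as a logistic regression in statsmodels. The predictor names (water_temp, riparian_cover, fish_present) and all values are simulated stand-ins, not data from the study.

```python
# Hypothetical habitat covariates; none of these values come from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
water_temp = rng.uniform(8, 22, n)      # water temperature (deg C)
riparian_cover = rng.uniform(0, 1, n)   # proportion of bank vegetated
fish_present = rng.integers(0, 2, n)    # fish presence (0/1)

# Simulate occupancy declining with temperature, cover and fish presence
logit = 4.0 - 0.25 * water_temp - 1.5 * riparian_cover - 0.8 * fish_present
present = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([water_temp, riparian_cover, fish_present]))
fit = sm.Logit(present, X).fit(disp=False)
print(fit.params)  # negative slopes mirror the pattern reported above
```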
Abstract:
1. The management of threatened species is an important practical way in which conservationists can intervene in the extinction process and reduce the loss of biodiversity. Understanding the causes of population declines (past, present and future) is pivotal to designing effective practical management. This is the declining-population paradigm identified by Caughley.
2. There are three broad classes of ecological tool used by conservationists to guide management decisions for threatened species: statistical models of habitat use, demographic models and behaviour-based models. Each of these is described here, illustrated with a case study and evaluated critically in terms of its practical application.
3. These tools are fundamentally different. Statistical models of habitat use and demographic models both use descriptions of patterns in abundance and demography, in relation to a range of factors, to inform management decisions. In contrast, behaviour-based models describe the evolutionary processes underlying these patterns, and derive such patterns from the strategies employed by individuals when competing for resources under a specific set of environmental conditions.
4. Statistical models of habitat use and demographic models have been used successfully to make management recommendations for declining populations. To do this, assumptions are made about population growth or vital rates that will apply when environmental conditions are restored, based on either past data collected under favourable environmental conditions or estimates of these parameters when the agent of decline is removed. As a result, they can only be used to make reliable quantitative predictions about future environments when a comparable environment has been experienced by the population of interest in the past.
5. Many future changes in the environment driven by management will not have been experienced by a population in the past. Under these circumstances, vital rates and their relationship with population density will change in the future in a way that is not predictable from past patterns. Reliable quantitative predictions about population-level responses then need to be based on an explicit consideration of the evolutionary processes operating at the individual level.
6. Synthesis and applications. It is argued that evolutionary theory underpins Caughley's declining-population paradigm, and that it needs to become much more widely used within mainstream conservation biology. This will help conservationists examine critically the reliability of the tools they have traditionally used to aid management decision-making. It will also give them access to alternative tools, particularly when predictions are required for changes in the environment that have not been experienced by a population in the past.
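As a minimal illustration of the second class of tool named above, a demographic (matrix projection) model, the sketch below projects a stage-structured population; the stage structure and vital rates are invented for illustration, not taken from any case study in the paper.

```python
# A minimal Leslie/Lefkovitch-style matrix projection; all rates are invented.
import numpy as np

# Rows/columns: juvenile, subadult, adult
L = np.array([
    [0.0, 0.2, 1.1],   # fecundities
    [0.4, 0.0, 0.0],   # juvenile -> subadult survival
    [0.0, 0.6, 0.8],   # subadult -> adult survival, adult survival
])

lam = max(np.linalg.eigvals(L).real)  # dominant eigenvalue = asymptotic growth rate
print(f"lambda = {lam:.3f} ({'growing' if lam > 1 else 'declining'} population)")
```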
Abstract:
Natural exposure to prion disease is likely to occur through successive challenges, yet most experiments focus on single large doses of infectious material. We analyze the results from an experiment in which rodents were exposed to multiple doses of feed contaminated with the scrapie agent. We formally define hypotheses for how the doses combine in terms of statistical models. The competing hypotheses are that only the total dose of infectivity is important (the cumulative model), that doses act independently, or, as a general alternative, that successive doses interact (raising or lowering the risk of infection). We provide sample size calculations to distinguish these hypotheses. In the experiment, a fixed total dose has a significantly reduced probability of causing infection if the material is presented as multiple challenges, and as the time between challenges lengthens. Incubation periods are shorter and less variable if all material is consumed on one occasion. We show that the probability of infection is inconsistent with the hypothesis that each dose acts as a cumulative or independent challenge. The incubation periods are inconsistent with the independence hypothesis. Thus, although a trend exists for the risk of infection with prion disease to increase with repeated doses, it does so to a lesser degree than is expected if challenges combine independently or in a cumulative manner.
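The two baseline hypotheses can be made concrete in a few lines. In the sketch below, the Hill-type dose-response curve and all numbers are illustrative assumptions, not the paper's fitted model; the point is only how the cumulative and independence hypotheses combine a split dose differently.

```python
# Two baseline hypotheses for combining repeated doses; the dose-response
# form (Hill curve) and parameters are illustrative, not from the study.
import numpy as np

def p_single(dose, d50=1.0, h=2.0):
    """Probability that one dose infects (illustrative Hill-type curve)."""
    return dose**h / (d50**h + dose**h)

doses = np.array([0.25, 0.25, 0.25, 0.25])  # same total as a single dose of 1.0

p_cumulative = p_single(doses.sum())                  # only total dose matters
p_independent = 1.0 - np.prod(1.0 - p_single(doses))  # each dose acts alone

print(f"cumulative:  {p_cumulative:.3f}")   # 0.500
print(f"independent: {p_independent:.3f}")  # ~0.215
# The experiment found split doses even less infectious than either baseline.
```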
Abstract:
Aims: We conducted a systematic review of studies examining relationships between measures of beverage alcohol tax or price levels and alcohol sales or self-reported drinking. A total of 112 studies of alcohol tax or price effects were found, containing 1003 estimates of the tax/price–consumption relationship. Design: Studies included analyses of alternative outcome measures, varying subgroups of the population, several statistical models and different units of analysis. Multiple estimates were coded from each study, along with numerous study characteristics. Using reported estimates, standard errors, t-ratios, sample sizes and other statistics, we calculated the partial correlation for the relationship between alcohol price or tax and sales or drinking measures for each major model or subgroup reported within each study. Random-effects models were used to combine studies for inverse-variance weighted overall estimates of the magnitude and significance of the relationship between alcohol tax/price and drinking. Findings: Simple means of reported elasticities are -0.46 for beer, -0.69 for wine and -0.80 for spirits. Meta-analytical results document the highly significant relationships (P < 0.001) between alcohol tax or price measures and indices of sales or consumption of alcohol (aggregate-level r = -0.17 for beer, -0.30 for wine, -0.29 for spirits and -0.44 for total alcohol). Price/tax also affects heavy drinking significantly (mean reported elasticity = -0.28, individual-level r = -0.01, P < 0.01), but the magnitude of effect is smaller than effects on overall drinking. Conclusions: A large literature establishes that beverage alcohol prices and taxes are related inversely to drinking. Effects are large compared to other prevention policies and programs. Public policies that raise prices of alcohol are an effective means to reduce drinking.
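The pooling step described above can be sketched directly. The DerSimonian-Laird estimator below is one standard random-effects, inverse-variance weighted approach, assumed here for illustration; the effect sizes and standard errors are invented, not the review's data.

```python
# Inverse-variance weighted pooling with a DerSimonian-Laird random-effects
# adjustment; the inputs are made-up partial correlations, not study data.
import numpy as np

r = np.array([-0.20, -0.15, -0.30, -0.10, -0.25])   # partial correlations
se = np.array([0.05, 0.08, 0.06, 0.10, 0.07])       # their standard errors

w = 1.0 / se**2                                     # fixed-effect weights
fixed = np.sum(w * r) / np.sum(w)

# Between-study heterogeneity (DerSimonian-Laird tau^2)
q = np.sum(w * (r - fixed) ** 2)
df = len(r) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

w_re = 1.0 / (se**2 + tau2)                         # random-effects weights
pooled = np.sum(w_re * r) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled r = {pooled:.3f} (SE {pooled_se:.3f})")
```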
Abstract:
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility is compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision-making.
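As a sketch of one model from the class compared above, the snippet below runs a GARCH(1,1) one-step-ahead variance recursion and scores it with mean squared error. The parameter values and the placeholder return series are assumptions, not estimates from the UK series studied.

```python
# GARCH(1,1) one-step-ahead variance forecasts with fixed illustrative
# parameters; the return series is a synthetic placeholder.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_normal(500) * 0.01           # placeholder daily returns

omega, alpha, beta = 1e-6, 0.08, 0.90               # assumed GARCH(1,1) params
h = np.empty(len(returns) + 1)
h[0] = returns.var()
for t in range(len(returns)):
    # sigma^2_{t+1} = omega + alpha * r_t^2 + beta * sigma^2_t
    h[t + 1] = omega + alpha * returns[t] ** 2 + beta * h[t]

forecast = h[1:]                                    # one-step-ahead variances
realised = returns**2                               # crude volatility proxy
mse = np.mean((forecast[:-1] - realised[1:]) ** 2)  # align forecast with outcome
print(f"MSE of variance forecasts: {mse:.3e}")
```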
Abstract:
Aim: We provide a new quantitative analysis of lizard reproductive ecology. Comparative studies of lizard reproduction to date have usually considered life-history components separately. Instead, we examine the rate of production (productivity hereafter) calculated as the total mass of offspring produced in a year. We test whether productivity is influenced by proxies of adult mortality rates such as insularity and fossorial habits, by measures of temperature such as environmental and body temperatures, mode of reproduction and activity times, and by environmental productivity and diet. We further examine whether low productivity is linked to high extinction risk. Location: World-wide. Methods: We assembled a database containing 551 lizard species, their phylogenetic relationships and multiple life-history and ecological variables from the literature. We use phylogenetically informed statistical models to estimate the factors related to lizard productivity. Results: Some, but not all, predictions of metabolic and life-history theories are supported. When analysed separately, clutch size, relative clutch mass and brood frequency are poorly correlated with body mass, but their product – productivity – is well correlated with mass. The allometry of productivity scales similarly to metabolic rate, suggesting that a constant fraction of assimilated energy is allocated to production irrespective of body size. Island species were less productive than continental species. Mass-specific productivity was positively correlated with environmental temperature, but not with body temperature. Viviparous lizards were less productive than egg-laying species. Diet and primary productivity were not associated with productivity in any model. Other effects, including lower productivity of fossorial, nocturnal and active-foraging species, were confounded with phylogeny. Productivity was not lower in species at risk of extinction. Main conclusions: Our analyses show the value of focusing on the rate of annual biomass production (productivity), and generally support associations of productivity with environmental temperature, with factors that affect mortality and with the number of broods a lizard can produce in a year, but not with measures of body temperature, environmental productivity or diet.
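The allometric step can be illustrated with an ordinary log-log regression, as below; the data are simulated with a metabolic-theory slope near 3/4, and the phylogenetically informed correction the study actually used is omitted from this sketch.

```python
# Allometry of productivity as a log-log OLS fit on simulated data;
# the true analysis used phylogenetically informed models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
log_mass = rng.uniform(0, 3, 200)                    # log10 body mass (g)
log_prod = 0.75 * log_mass - 0.5 + rng.normal(0, 0.2, 200)

fit = sm.OLS(log_prod, sm.add_constant(log_mass)).fit()
print(f"allometric exponent ~= {fit.params[1]:.2f}")  # near the metabolic 3/4
```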
Abstract:
The controls on aboveground community composition and diversity have been extensively studied, but our understanding of the drivers of belowground microbial communities is relatively lacking, despite their importance for ecosystem functioning. In this study, we fitted statistical models to explain landscape-scale variation in soil microbial community composition using data from 180 sites covering a broad range of grassland types, soil and climatic conditions in England. We found that variation in soil microbial communities was explained by abiotic factors like climate, pH and soil properties. Biotic factors, namely community-weighted means (CWM) of plant functional traits, also explained variation in soil microbial communities. In particular, more bacterial-dominated microbial communities were associated with exploitative plant traits versus fungal-dominated communities with resource-conservative traits, showing that plant functional traits and soil microbial communities are closely related at the landscape scale.
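The community-weighted mean (CWM) predictor used above is a simple abundance-weighted average of a trait across the species at a site. A minimal sketch follows; the species abundances and the trait (specific leaf area) values are invented.

```python
# Community-weighted mean of a plant trait at one site; values are invented.
import numpy as np

abundance = np.array([30.0, 15.0, 5.0])   # cover of three species at one site
sla = np.array([25.0, 12.0, 8.0])         # specific leaf area (mm^2/mg);
                                          # higher = more exploitative strategy

cwm_sla = np.sum(abundance * sla) / np.sum(abundance)
print(f"CWM SLA = {cwm_sla:.1f}")  # one value per site, then used as a predictor
```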
Abstract:
Placing a child in out-of-home care is one of the most important decisions made by professionals in the child care system, with substantial social, psychological, educational, medical and economic consequences. This paper considers the challenges and difficulties of building statistical models of this decision by reviewing the available international evidence. Despite the large number of empirical investigations over a 50 year period, a consensus on the variables associated with this decision is hard to identify. In addition, the individual models have low explanatory and predictive power and should not be relied on to make placement decisions. A number of reasons for this poor performance are offered, and some ways forward are suggested. This paper also aims to facilitate the emergence of a coherent and integrated international literature from the disconnected and fragmented empirical studies. Rather than one placement problem, there are many slightly different problems, and therefore it is expected that a number of related sub-literatures will emerge, each concentrating on a particular definition of the placement problem.
Abstract:
Wine production is largely governed by atmospheric conditions, such as air temperature and precipitation, together with soil management and viticultural/enological practices. Therefore, anthropogenic climate change is likely to have important impacts on the winemaking sector worldwide. An important winemaking region is the Portuguese Douro Valley, which is known for its world-famous Port Wine. The identification of robust relationships between atmospheric factors and wine parameters is of great relevance for the region. A multivariate linear regression analysis of a long wine production series (1932–2010) reveals that high rainfall and cool temperatures during budburst, shoot and inflorescence development (February–March) and warm temperatures during flowering and berry development (May) are generally favourable to high production. The probabilities of occurrence of three production categories (low, normal and high) are also modelled using multinomial logistic regression. Results show that both statistical models are valuable tools for predicting the production in a given year with a lead time of 3–4 months prior to harvest. These statistical models are applied to an ensemble of 16 regional climate model experiments following the SRES A1B scenario to estimate possible future changes. Wine production is projected to increase by about 10 % by the end of the 21st century, while the occurrence of high production years is expected to increase from 25 % to over 60 %. Nevertheless, further model development will be needed to include other aspects that may shape production in the future. In particular, the rising heat stress and/or changes in ripening conditions could limit the projected production increase in future decades.
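The second model above, a multinomial logistic regression over production categories, can be sketched as follows. The predictor set and all simulated values are stand-ins for the Douro series, chosen only to mirror the months named in the abstract.

```python
# Multinomial logistic regression of production category on monthly weather
# predictors; the data are simulated stand-ins, not the Douro series.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 79                               # roughly one row per vintage, 1932-2010
X = np.column_stack([
    rng.normal(80, 30, n),           # Feb-Mar rainfall (mm), hypothetical
    rng.normal(10, 2, n),            # Feb-Mar mean temperature (deg C)
    rng.normal(17, 2, n),            # May mean temperature (deg C)
])
y = rng.integers(0, 3, n)            # 0 = low, 1 = normal, 2 = high production

# With the default lbfgs solver, multiclass fits use a multinomial formulation
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:1]))      # probabilities of low/normal/high
```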
Abstract:
This paper explores a number of statistical models for predicting the daily stock return volatility of an aggregate of all stocks traded on the NYSE. An application of linear and non-linear Granger causality tests highlights evidence of bidirectional causality, although the relationship is stronger from volatility to volume than the other way around. The out-of-sample forecasting performance of various linear, GARCH, EGARCH, GJR and neural network models of volatility is evaluated and compared. The models are also augmented by the addition of a measure of lagged volume to form more general ex-ante forecasting models. The results indicate that augmenting models of volatility with measures of lagged volume leads only to very modest improvements, if any, in forecasting performance.
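A linear Granger-causality test of the kind mentioned above can be run with statsmodels, as sketched below on simulated series standing in for volume and volatility; the lag structure and data are assumptions for illustration.

```python
# Linear Granger-causality check on simulated volume/volatility series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 500
volatility = np.abs(rng.standard_normal(n))
volume = np.empty(n)
volume[0] = 0.0
for t in range(1, n):
    volume[t] = 0.5 * volatility[t - 1] + rng.standard_normal()  # vol -> volume

# Tests whether the SECOND column Granger-causes the first
data = np.column_stack([volume, volatility])
grangercausalitytests(data, maxlag=2)  # prints F-tests for each lag
```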
Abstract:
The state-resolved reactivity of CH4 in its totally symmetric C-H stretch vibration (ν1) has been measured on a Ni(100) surface. Methane molecules were accelerated to kinetic energies of 49 and 63.5 kJ/mol in a molecular beam and vibrationally excited to ν1 by stimulated Raman pumping before surface impact at normal incidence. The reactivity of the symmetric-stretch excited CH4 is about an order of magnitude higher than that of methane excited to the antisymmetric stretch (ν3) reported by Juurlink et al. [Phys. Rev. Lett. 83, 868 (1999)] and is similar to that we have previously observed for the excitation of the first overtone (2ν3). The difference between the state-resolved reactivity for ν1 and ν3 is consistent with predictions of a vibrationally adiabatic model of the methane reaction dynamics and indicates that statistical models cannot correctly describe the chemisorption of CH4 on nickel.
Abstract:
Simulation models are widely employed to make probability forecasts of future conditions on seasonal to annual lead times. Added value in such forecasts is reflected in the information they add, either to purely empirical statistical models or to simpler simulation models. An evaluation of seasonal probability forecasts from the Development of a European Multimodel Ensemble system for seasonal to inTERannual prediction (DEMETER) and ENSEMBLES multi-model ensemble experiments is presented. Two particular regions are considered: Niño3.4 in the Pacific and the Main Development Region in the Atlantic; these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time-scales. For models in ENSEMBLES that have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast-outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in-sample evaluation are discussed and impacts (both positive and negative) of variations in cross-validation protocol are demonstrated. Other difficulties due to the small forecast-outcome archive are identified. The claim that the multi-model ensemble provides a ‘better’ probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also demonstrated in a persistence probability forecast. The ENSEMBLES probability forecasts add significantly more information to empirical probability forecasts on seasonal time-scales than on decadal scales. Current operational forecasts might be enhanced by melding information from both simulation models and empirical models. Simulation models based on physical principles are sometimes expected, in principle, to outperform empirical models; direct comparison of their forecast skill provides information on progress toward that goal.
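Skill against the climatological distribution, the benchmark used above, can be illustrated with a Brier skill score on a deliberately small archive; the event series and probabilities below are synthetic, and the actual evaluation may well have used other proper scores as well.

```python
# Brier skill score of a probability forecast against climatology on a small
# synthetic forecast-outcome archive.
import numpy as np

rng = np.random.default_rng(5)
n = 40                                    # small archive, as discussed above
outcome = rng.binomial(1, 0.3, n)         # e.g. "warm event occurred" (0/1)
p_clim = np.full(n, 0.3)                  # climatological probability
p_model = np.clip(0.3 + 0.4 * (outcome - 0.3) + rng.normal(0, 0.2, n), 0, 1)

def brier(p, o):
    return np.mean((p - o) ** 2)

bss = 1.0 - brier(p_model, outcome) / brier(p_clim, outcome)
print(f"Brier skill score vs climatology: {bss:.2f}")  # > 0 means added skill
```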
Abstract:
Many theories for the Madden-Julian oscillation (MJO) focus on diabatic processes, particularly the evolution of vertical heating and moistening. Poor MJO performance in weather and climate models is often blamed on biases in these processes and their interactions with the large-scale circulation. We introduce one of three components of a model-evaluation project, which aims to connect MJO fidelity in models to their representations of several physical processes, focusing on diabatic heating and moistening. This component consists of 20-day hindcasts, initialised daily during two MJO events in winter 2009-10. The 13 models exhibit a range of skill: several have accurate forecasts to 20 days' lead, while others perform similarly to statistical models (8-11 days). Models that maintain the observed MJO amplitude accurately predict propagation, but not vice versa. We find no link between hindcast fidelity and the precipitation-moisture relationship, in contrast to other recent studies. There is also no relationship between models' performance and the evolution of their diabatic-heating profiles with rain rate. A more robust association emerges between models' fidelity and net moistening: the highest-skill models show a clear transition from low-level moistening for light rainfall to mid-level moistening at moderate rainfall and upper-level moistening for heavy rainfall. The mid-level moistening, arising from both dynamics and physics, may be most important. Accurately representing many processes may be necessary, but not sufficient for capturing the MJO, which suggests that models fail to predict the MJO for a broad range of reasons and limits the possibility of finding a panacea.
Abstract:
Ecological forecasting is difficult but essential, because reactive management results in corrective actions that are often too late to avert significant environmental damage. Here, we appraise different forecasting methods with a particular focus on the modelling of species populations. We show how simple extrapolation of current trends in state is often inadequate because environmental drivers change in intensity over time and new drivers emerge. However, statistical models, incorporating relationships with drivers, simply offset the prediction problem, requiring us to forecast how the drivers will themselves change over time. Some authors approach this problem by focusing in detail on a single driver, whilst others use ‘storyline’ scenarios, which consider projected changes in a wide range of different drivers. We explain why both approaches are problematic and identify a compromise to model key drivers and interactions along with possible response options to help inform environmental management. We also highlight the crucial role of validation of forecasts using independent data. Although these issues are relevant for all types of ecological forecasting, we provide examples based on forecasts for populations of UK butterflies. We show how a high goodness-of-fit for models used to calibrate data is not sufficient for good forecasting. Long-term biological recording schemes rather than experiments will often provide data for ecological forecasting and validation because these schemes allow capture of landscape-scale land-use effects and their interactions with other drivers.
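The contrast drawn above, between extrapolating a trend in state and modelling a driver that must itself be forecast, can be shown in a few lines; the butterfly-like abundances and the driver are simulated for illustration only.

```python
# Naive trend extrapolation versus a driver-based model on simulated data.
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(2000, 2020)
driver = np.linspace(0, 1, years.size) ** 2        # driver intensifies late on
abundance = 100 - 40 * driver + rng.normal(0, 3, years.size)

# Naive: linear trend in state, extrapolated one step ahead
b1, b0 = np.polyfit(years, abundance, 1)
naive_2020 = b0 + b1 * 2020

# Driver-based: regress on the driver, then forecast under its projected value
c1, c0 = np.polyfit(driver, abundance, 1)
driver_2020 = 1.1                                  # the driver must now be forecast
driven_2020 = c0 + c1 * driver_2020
print(f"trend extrapolation: {naive_2020:.1f}, driver model: {driven_2020:.1f}")
```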
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill score) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
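The core of such an empirical system, a multiple linear regression on a trend predictor plus a climate-mode index, is sketched below with a leave-one-out hindcast and a deterministic (correlation) check. All series are simulated stand-ins, and the leave-one-out protocol is an assumption for illustration, not necessarily the system's cross-validation scheme.

```python
# Multiple linear regression hindcasts with a CO2-like trend predictor and
# an ENSO-like index; all series are simulated stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
years = np.arange(1961, 2014)
co2eq = 330 + 1.8 * (years - 1961)              # ppm-like upward trend
nino34 = rng.normal(0, 1, years.size)           # ENSO-like index
temp = (0.02 * (co2eq - co2eq.mean()) + 0.3 * nino34
        + rng.normal(0, 0.3, years.size))       # seasonal-mean temperature

X = sm.add_constant(np.column_stack([co2eq, nino34]))

# Leave-one-out hindcasts, then a deterministic skill check (correlation)
hindcast = np.array([
    sm.OLS(np.delete(temp, i), np.delete(X, i, axis=0)).fit().predict(X[i:i + 1])[0]
    for i in range(years.size)
])
print(f"hindcast correlation: {np.corrcoef(hindcast, temp)[0, 1]:.2f}")
```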