882 results for "Predicted Distribution Data"


Relevance: 30.00%

Abstract:

The Sardinian brook salamander, Euproctus platycephalus, is a cryptically coloured urodele found in streams, springs and pools in the main mountain systems of Sardinia, and is classified as critically endangered by the IUCN. General reviews of the mountain ranges where salamanders occur are numerous, but very few field-based distribution studies exist for this endemic species. Through a field and questionnaire survey conducted between 1999 and 2001, we report a first attempt to expand the data on the present distribution of E. platycephalus. A total of 14 localities where Sardinian salamanders are represented by apparently stable, and in some cases abundant, populations have been identified, as well as 30 sites where the species' presence has been recorded after 1991. Eleven historical sites were identified that are no longer inhabited by the species. The implications of this distributional study for the conservation of the species and for the compilation of an updated atlas are discussed.

Relevance: 30.00%

Abstract:

Microsatellite lengths change over evolutionary time through a process of replication slippage. A recently proposed model of this process holds that the expansionary tendencies of slippage mutation are balanced by point mutations breaking longer microsatellites into smaller units, and that this process gives rise to the observed frequency distributions of uninterrupted microsatellite lengths. We refer to this as the slippage/point-mutation theory. Here we derive the theory's predictions for interrupted microsatellites comprising regions of perfect repeats (labeled segments) separated by dinucleotide interruptions containing point mutations. These predictions are tested against the frequency distributions of segments of AC microsatellites in the human genome, and several predictions are shown not to be supported by the data, as follows. The estimated slippage rates are relatively low for the first four repeats and then rise, initially linearly, with length, in accordance with previous work. However, contrary to expectation and the experimental evidence, the inferred slippage rates decline in segments above 10 repeats. Point mutation rates are also found to be higher within microsatellites than elsewhere. The theory provides an excellent fit to the frequency distribution of peripheral segment lengths but fails to explain why internal segments are shorter. Furthermore, there are fewer microsatellites with many segments than predicted. The frequencies of interrupted microsatellites decline geometrically with microsatellite size measured in number of segments, so that for each additional segment the number of microsatellites is 33.6% lower. Overall, we conclude that the detailed structure of interrupted microsatellites cannot be reconciled with the existing slippage/point-mutation theory of microsatellite evolution, and we suggest that microsatellites are stabilized by processes acting on interior rather than peripheral segments.
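The geometric decline reported above (33.6% fewer microsatellites per additional segment) can be sketched directly; the starting count below is a hypothetical illustration, not a figure from the study:

```python
# Geometric decline of interrupted-microsatellite counts with segment number,
# using the 33.6% per-segment reduction reported in the abstract.
# The starting count (1000 two-segment microsatellites) is hypothetical.

def expected_counts(n_two_segment, max_segments, drop=0.336):
    """Expected number of microsatellites with k = 2..max_segments segments."""
    ratio = 1.0 - drop  # 0.664 of the count survives each extra segment
    return {k: n_two_segment * ratio ** (k - 2) for k in range(2, max_segments + 1)}

counts = expected_counts(1000, 5)
for k, n in counts.items():
    print(k, round(n, 1))
```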

Relevance: 30.00%

Abstract:

The paper describes a method whereby the distribution of fatigue damage along riser tensioner ropes is calculated, taking account of heave motion, set tension, system geometry, tidal range and rope specification. From these data, the distribution of damage along the rope over a given time period is calculated using a Miner’s summation. This information can then be used to help the operator decide on the length of rope to ‘slip and cut’, whereby a length from the end of the rope is removed and the rope is moved through the system from a storage drum, such that sections of rope that have already suffered significant fatigue damage are not moved to positions where there is another peak in the damage distribution. There are two main advantages to be gained by using the fatigue damage model. The first is that it shows the amount of fatigue damage accumulating at different points along the rope, enabling the most highly damaged section to be removed well before failure. The second is that it makes for greater efficiency, as damage can be spread more evenly along the rope over time, avoiding the need to scrap long sections of undamaged rope.
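The damage-accumulation step can be sketched with Miner's rule; the S-N curve constants and the load spectra below are hypothetical illustrations, not values from the paper:

```python
# Miner's summation: fatigue damage D = sum over load ranges of n_i / N_i,
# where n_i is the number of cycles experienced at stress range S_i and
# N_i the cycles to failure at that range from an S-N curve.
# The S-N constants (A, m) and the cycle counts are hypothetical.

def cycles_to_failure(stress_range, A=1e12, m=3.0):
    """Basquin-type S-N curve: N = A * S^-m."""
    return A * stress_range ** (-m)

def miners_damage(spectrum):
    """spectrum: list of (stress_range, n_cycles) pairs at one rope position."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Two positions along the rope with different local load spectra.
damage_near_sheave = miners_damage([(200.0, 5_000), (100.0, 50_000)])
damage_mid_rope = miners_damage([(100.0, 20_000)])
print(damage_near_sheave, damage_mid_rope)
```

Repeating this at many positions along the rope gives the damage distribution used to choose the 'slip and cut' length.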

Relevance: 30.00%

Abstract:

This paper presents in detail a theoretical adaptive model of thermal comfort based on the “Black Box” theory, taking into account cultural, climatic, social, psychological and behavioural adaptations, which have an impact on the senses used to detect thermal comfort. The model is called the Adaptive Predicted Mean Vote (aPMV) model. By applying the cybernetics concept, the aPMV model explains the phenomenon, revealed by many researchers in field studies, that the Predicted Mean Vote (PMV) is greater than the Actual Mean Vote (AMV) in free-running buildings. An adaptive coefficient (λ), representing the adaptive factors that affect the sense of thermal comfort, is proposed. The empirical coefficients for warm and cool conditions in the Chongqing area of China have been derived by applying the least-squares method to monitored on-site environmental data and thermal comfort survey results.
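The abstract does not reproduce the aPMV relation itself; the published model is commonly stated as aPMV = PMV / (1 + λ·PMV). A minimal sketch under that assumption, with illustrative λ values rather than the Chongqing coefficients:

```python
def a_pmv(pmv, lam):
    """Adaptive Predicted Mean Vote: aPMV = PMV / (1 + lambda * PMV).
    lam > 0 in warm conditions (adaptation lowers the predicted vote),
    lam < 0 in cool conditions. Values of lam here are illustrative only."""
    return pmv / (1.0 + lam * pmv)

print(a_pmv(1.5, 0.3))    # warm conditions: aPMV falls below PMV
print(a_pmv(-1.0, -0.2))  # cool conditions
```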

Relevance: 30.00%

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, together with some generalisations of Zelterman’s estimator. All computations are done with CAMCR, a special software package developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.
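The Chao and Zelterman estimators mentioned above have simple closed forms; a sketch assuming the standard formulations, where f1 and f2 are the numbers of units observed exactly once and exactly twice (the capture data are hypothetical):

```python
import math

def chao_lower_bound(n_observed, f1, f2):
    """Chao's lower-bound estimator: N = n + f1^2 / (2 * f2)."""
    return n_observed + f1 ** 2 / (2.0 * f2)

def zelterman(n_observed, f1, f2):
    """Zelterman's estimator: N = n / (1 - exp(-2 * f2 / f1)),
    a Horvitz-Thompson form with Poisson rate estimated as 2*f2/f1."""
    return n_observed / (1.0 - math.exp(-2.0 * f2 / f1))

# Hypothetical data: 200 units observed, 50 seen once, 25 seen twice.
print(chao_lower_bound(200, 50, 25))  # 250.0
print(zelterman(200, 50, 25))
```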

Relevance: 30.00%

Abstract:

The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the path-integrated attenuation caused by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the “Smyth and Illingworth constraint,” which uses the constraint that the value of the differential reflectivity ZDR on the far side of the storm should be low to retrieve the γDP coefficient. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimal, maximal, and mean values equal, respectively, to 0.01, 0.11, and 0.025 dB °−1. Coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells.
Unusually large values of γDP, above 0.06 dB °−1, often referred to as “hot spots,” are reported for a nonnegligible 15% of the rays presenting a significant total differential phase shift (ΔΦDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, then KDP may not be uniquely related to rainfall rate but can also result from the presence of wet ice. This hypothesis is supported by analysis of the vertical profiles of horizontal reflectivity and by the values of conventional probability-of-hail indexes.
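Under the linear relation described above, the correction amounts to adding γ·ΦDP back to the measured quantities; a minimal sketch, with two-way/one-way bookkeeping folded into the coefficients and purely illustrative sample values:

```python
def correct_zh(zh_measured_dbz, phi_dp_deg, gamma_h):
    """ZH corrected for path-integrated attenuation: ZH + gamma_H * PhiDP.
    Assumes the linear A_H = gamma_H * K_DP relation, so the path-integrated
    attenuation is gamma_H times the total differential phase shift."""
    return zh_measured_dbz + gamma_h * phi_dp_deg

def correct_zdr(zdr_measured_db, phi_dp_deg, gamma_dp):
    """ZDR corrected for differential attenuation: ZDR + gamma_DP * PhiDP."""
    return zdr_measured_db + gamma_dp * phi_dp_deg

# Example: 40 deg of total differential phase with the mean gamma_DP (0.025 dB/deg)
# from the study; gamma_H here is a hypothetical value.
print(correct_zdr(2.0, 40.0, 0.025))  # 3.0
print(correct_zh(45.0, 40.0, 0.08))
```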

Relevance: 30.00%

Abstract:

Geographic distributions of pathogens are the outcome of dynamic processes involving host availability, susceptibility and abundance, suitability of climate conditions, and historical contingency, including evolutionary change. Distributions have changed, and are changing, fast in response to many factors, including climatic change. The response time of arable agriculture is intrinsically fast, but perennial crops and especially forests are unlikely to adapt easily. Predictions of many of the variables needed to predict changes in pathogen range are still rather uncertain, and their effects will be profoundly modified by changes elsewhere in the agricultural system, including both economic changes affecting growing systems and hosts and evolutionary changes in pathogens and hosts. Tools to predict changes based on environmental correlations depend on good primary data, which are often absent, and need to be checked against the historical record, which remains very poor for almost all pathogens. We argue that at present the uncertainty in predictions of change is so great that the important adaptive response is to monitor changes and to retain the capacity to innovate, both through access to economic capital with reasonably long-term rates of return and by retaining wide scientific expertise, including currently less fashionable specialisms.

Relevance: 30.00%

Abstract:

Purpose – The purpose of this paper is to consider prospects for UK REITs, which were introduced on 1 January 2007. It specifically focuses on the potential influence of depreciation and expenditure on income and distributions. Design/methodology/approach – First, the ways in which depreciation can affect vehicle earnings and value are discussed. This is then set in the context of the specific rules and features of REITs. An analysis using property income and expenditure data from the Investment Property Databank (IPD) then assesses what gross and net income for a UK REIT might have been like over the period 1984-2003. Findings – A UK REIT must distribute at least 90 per cent of the net income from its property rental business. Expenditure therefore plays a significant part in determining what funds remain for distribution. Over 1984-2003, expenditure absorbed 20 per cent of gross income and was a source of earnings volatility, which would have been exacerbated by gearing. Practical implications – Expenditure must take place to help UK REITs maintain and renew their real estate portfolios. In view of this, investors should moderate expectations of a high and stable income return, although it may still be high and stable relative to alternative investments. Originality/value – Previous literature on depreciation has not quantified the amounts spent on portfolios to keep depreciation at the reported rates. Nor, to our knowledge, have its ideas been placed in the indirect investor context.
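The interaction between expenditure and the 90 per cent distribution requirement can be shown with simple arithmetic; the gross income figure is hypothetical, while the 20 per cent expenditure share is the 1984-2003 average reported above:

```python
def minimum_distribution(gross_income, expenditure_share=0.20, payout_ratio=0.90):
    """Net property rental income after expenditure, and the minimum UK REIT
    distribution (at least 90% of net income). The 20% expenditure share is
    the 1984-2003 average from the abstract; the gross figure is illustrative."""
    net_income = gross_income * (1.0 - expenditure_share)
    return net_income, net_income * payout_ratio

net, dist = minimum_distribution(100.0)
print(net, dist)  # 80.0 72.0
```

A swing in the expenditure share feeds straight through to the distributable amount, which is the volatility mechanism the paper describes.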

Relevance: 30.00%

Abstract:

Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved are determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single-parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy, and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
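The effect of observation distribution on the condition number can be illustrated with a toy case: for a diagonal Hessian S = B⁻¹ + HᵀR⁻¹H with uncorrelated errors and direct observations (a simplification of mine, not the paper's setup), grid point i observed k_i times contributes a diagonal entry 1/σ_b² + k_i/σ_o², and the condition number is the max/min ratio of those entries:

```python
def hessian_condition_number(obs_counts, sigma_b, sigma_o):
    """Condition number of a diagonal Hessian S = B^-1 + H^T R^-1 H, where
    grid point i is observed obs_counts[i] times with uncorrelated errors."""
    diag = [1.0 / sigma_b ** 2 + k / sigma_o ** 2 for k in obs_counts]
    return max(diag) / min(diag)

# Evenly spread observations vs. all observations clustered at one point.
print(hessian_condition_number([2, 2, 2], sigma_b=1.0, sigma_o=0.5))  # 1.0
print(hessian_condition_number([6, 0, 0], sigma_b=1.0, sigma_o=0.5))  # 25.0
```

Clustering the same number of observations at one point raises the condition number sharply, which is the qualitative behaviour the theoretical bounds predict.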

Relevance: 30.00%

Abstract:

The physical and empirical relationships used by microphysics schemes to control the rate at which vapor is transferred to ice crystals growing in supercooled clouds are compared with laboratory data to evaluate the realism of various model formulations. Ice crystal growth rates predicted from capacitance theory are compared with measurements from three independent laboratory studies. When the growth is diffusion-limited, the predicted growth rates are consistent with the measured values to within about 20% in 14 of the experiments analyzed, over the temperature range −2.5° to −22°C. Only two experiments showed significant disagreement with theory (growth rate overestimated by about 30%–40% at −3.7° and −10.6°C). Growth predictions using various ventilation factor parameterizations were also calculated and compared with supercooled wind tunnel data. It was found that neither of the standard parameterizations used for ventilation adequately described both needle and dendrite growth; however, by choosing habit-specific ventilation factors from previous numerical work it was possible to match the experimental data in both regimes. The relationships between crystal mass, capacitance, and fall velocity were investigated based on the laboratory data. It was found that for a given crystal size the capacitance was significantly overestimated by two of the microphysics schemes considered here, yet for a given crystal mass the growth rate was underestimated by those same schemes because of unrealistic mass/size assumptions. The fall speed for a given capacitance (controlling the residence time of a crystal in the supercooled layer relative to its effectiveness as a vapor sink, and the relative importance of ventilation effects) was found to be overpredicted by all the schemes in which fallout is permitted, implying that the modeled crystals reside for too short a time within the cloud layer and that the parameterized ventilation effect is too strong.
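In capacitance theory the diffusion-limited growth rate is commonly written, in its simplest vapour-density form and neglecting latent-heat feedback, as dm/dt = 4πC·f_v·D_v·(ρ∞ − ρ_surf). A sketch under that simplification, with hypothetical input values:

```python
import math

def growth_rate(capacitance_m, diff_coeff, rho_vapour_far, rho_vapour_surf, fv=1.0):
    """Diffusion-limited mass growth rate from capacitance theory:
    dm/dt = 4*pi*C*fv*Dv*(rho_inf - rho_surf) [kg/s].
    Latent-heat feedback is ignored for brevity; fv is a habit-dependent
    ventilation factor (fv = 1.0 means no ventilation enhancement)."""
    return (4.0 * math.pi * capacitance_m * fv
            * diff_coeff * (rho_vapour_far - rho_vapour_surf))

# Hypothetical values: C ~ 100 microns (sphere-like crystal, C ~ radius),
# Dv ~ 2.2e-5 m^2/s, vapour-density excess 0.5 g/m^3.
rate = growth_rate(100e-6, 2.2e-5, 2.0e-3, 1.5e-3)
print(rate)
```

Because dm/dt is linear in both C and f_v, an overestimated capacitance or an overly strong ventilation factor translates directly into an overestimated vapour sink, which is the comparison the study performs.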

Relevance: 30.00%

Abstract:

Using the virtual porous carbon model proposed by Harris et al., we study the effect of carbon surface oxidation on the pore size distribution (PSD) curve determined from simulated Ar, N2 and CO2 isotherms. It is assumed that surface oxidation does not destroy the carbon skeleton and that all pores remain accessible to the studied molecules (i.e., only the effect of the change in surface chemical composition is studied). The results show two important things: oxidation of the carbon surface changes the absolute porosity (calculated with the geometric method of Bhattacharya and Gubbins (BG)) only very slightly; however, PSD curves calculated from simulated isotherms are affected to a greater or lesser extent by the presence of surface oxides. The most reliable results are obtained from Ar adsorption data. Not only is adsorption of this adsorbate practically independent of the presence of surface oxides but, more importantly, for this molecule one can apply the slit-like pore model as a first approach to recover the average pore diameter of a real carbon structure. For nitrogen, an effect of the carbon surface chemical composition is observed due to the quadrupole moment of this molecule, and this effect shifts the PSD curves compared to Ar. The largest differences are seen for CO2, and it is clearly demonstrated that PSD curves obtained from adsorption isotherms of this molecule contain artificial peaks, and that the average pore diameter is strongly influenced by the presence of electrostatic adsorbate-adsorbate as well as adsorbate-adsorbent interactions.

Relevance: 30.00%

Abstract:

The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite-observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using the spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variation found across the subsamples of the observed data, it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
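The spatial-dependency test at the core of the subsampling method is Moran's I; a pure-Python sketch on a hypothetical one-dimensional set of shoreline residuals with simple adjacency weights (the data are illustrative, not from the study):

```python
def morans_i(values, weights):
    """Moran's I = (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2,
    where weights[i][j] is the spatial weight between points i and j (w_ii = 0)
    and W is the sum of all weights. I > 0 indicates spatial clustering."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_total = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / w_total) * (num / den)

# Six points along a shoreline, adjacent points weighted 1; the residuals fall
# into a positive cluster and a negative cluster, so I should be positive.
resid = [1.0, 1.0, 1.0, -1.0, -1.0, -1.0]
w = [[1.0 if abs(i - j) == 1 else 0.0 for j in range(6)] for i in range(6)]
print(morans_i(resid, w))  # 0.6
```

Subsamples for which I is not significantly different from its null expectation would then be retained for calibration, as the abstract describes.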

Relevance: 30.00%

Abstract:

In this study, we systematically compare a wide range of observational and numerical precipitation datasets for Central Asia. The data considered include two re-analyses, three datasets based on direct observations, and the output of a regional climate model simulation driven by a global re-analysis. These are validated and intercompared with respect to their ability to represent the Central Asian precipitation climate. For each of the datasets, we consider the mean spatial distribution and the seasonal cycle of precipitation, the amplitude of interannual variability, the representation of individual yearly anomalies, the precipitation sensitivity (i.e. the response to wet and dry conditions), and the temporal homogeneity of precipitation. Additionally, we carried out part of these analyses for datasets available in real time. The mutual agreement between the observations is used as an indication of how far these data can be used for validating precipitation data from other sources. In particular, we show that the observations usually agree qualitatively on anomalies in individual years, while it is not always possible to use them for the quantitative validation of the amplitude of interannual variability. The regional climate model is capable of improving the spatial distribution of precipitation. At the same time, it strongly underestimates summer precipitation and its variability, while interannual variations are well represented during the other seasons, in particular in the Central Asian mountains during winter and spring.

Relevance: 30.00%

Abstract:

Ice cloud representation in general circulation models remains a challenging task, due to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while they underestimate ice cloud occurrence at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and the range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution due to different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version, with a diagnostic representation of precipitating snow and mixed-phase ice cloud, fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.

Relevance: 30.00%

Abstract:

Ensemble forecasting of nonlinear systems involves the use of a model to run forward a discrete ensemble (or set) of initial states. Data assimilation techniques tend to focus on estimating the true state of the system, even though model error limits the value of such efforts. This paper argues for choosing the initial ensemble in order to optimise forecasting performance rather than estimate the true state of the system. Density forecasting and choosing the initial ensemble are treated as one problem. Forecasting performance can be quantified by some scoring rule. In the case of the logarithmic scoring rule, theoretical arguments and empirical results are presented. It turns out that, if the underlying noise dominates model error, we can diagnose the noise spread.
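The logarithmic scoring rule referred to above evaluates the forecast density at the verifying observation; a sketch in which the discrete ensemble is turned into a density by Gaussian kernel dressing (the kernel width is a hypothetical choice, not a method from the paper):

```python
import math

def log_score(ensemble, observation, sigma=1.0):
    """Negative log of the kernel-dressed ensemble density at the observation:
    p(y) = mean over members of Normal(y; member, sigma^2). Lower is better."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    density = sum(
        norm * math.exp(-0.5 * ((observation - m) / sigma) ** 2) for m in ensemble
    ) / len(ensemble)
    return -math.log(density)

# A well-centred ensemble scores better (lower) than a displaced one.
print(log_score([0.0], 0.0))             # 0.5*log(2*pi), roughly 0.919
print(log_score([-0.5, 0.0, 0.5], 0.0))
print(log_score([3.0, 3.5, 4.0], 0.0))
```

Choosing the initial ensemble to minimise the expected value of such a score, rather than to estimate the true state, is the reframing the abstract argues for.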