923 results for model selection in binary regression


Relevance: 100.00%

Abstract:

Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. The delta method means that climate model bias in the modelled present does not influence the temperature projection time series and impacts. However, the delta method assumes that climate change will result only in a change in the mean temperature, but there is evidence that there will also be changes in the variability of temperature with climate change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability with climate change in impacts assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributed to increases in the mean and variability of temperature with climate change rather than to the change in mean temperature alone. This has implications for interpreting existing impacts estimates that have used the delta method. We present a novel method for the creation of temperature projection time series that includes changes in the mean and variability of temperature with climate change and is not influenced by climate model bias in the modelled present. The method should be useful for future impacts assessments. Few studies consider the implications that the limitations of the climate model may have on the heat-related mortality impacts. Here, we demonstrate the importance of considering this by conducting an evaluation of the daily and extreme temperatures from HadCM3, which demonstrates that the estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias. Likewise, estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different emissions scenarios of greenhouse gases in the future varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality in each city by approximately half, relative to the case of no acclimatisation.
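
The contrast between the two approaches can be illustrated with a minimal sketch. The first function applies the classic delta method (mean shift only); the second also rescales the observed anomalies by the modelled change in standard deviation. This is an illustration of the general idea only; the function names and the anomaly-rescaling form are assumptions rather than the paper's published method.

```python
import numpy as np

def delta_method(obs, mod_present, mod_future):
    """Classic delta method: shift observed temperatures by the modelled change in the mean."""
    return obs + (mod_future.mean() - mod_present.mean())

def mean_and_variability_change(obs, mod_present, mod_future):
    """Shift the mean and rescale anomalies by the modelled change in variability (illustrative)."""
    d_mean = mod_future.mean() - mod_present.mean()
    scale = mod_future.std() / mod_present.std()
    return obs.mean() + d_mean + (obs - obs.mean()) * scale
```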

Relevance: 100.00%

Abstract:

Satellite observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. In addition to comparison of the spatial extent, this now allows for direct comparison between modelled and observed water surface elevations. Using a 12.5m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each assessed for their value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline, a binary grid of wet/dry cells is created at the same resolution as the model; using this, the spatial extents of the modelled and observed flood can be compared using a measure of fit between the two binary patterns of flooding. Water heights are extracted using points at intervals of approximately 100m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
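
Both calibration measures are easy to state concretely. Below is a minimal sketch, assuming the spatial measure of fit is the common F = A/(A+B+C) statistic over wet/dry grids (A wet in both, B wet in the model only, C wet in the observation only) and that the elevation comparison is a paired t-test; the exact statistics used in the study may differ.

```python
import numpy as np
from scipy import stats

def extent_fit(model_wet, obs_wet):
    """Measure of fit between binary wet/dry grids: F = A / (A + B + C)."""
    model_wet, obs_wet = model_wet.astype(bool), obs_wet.astype(bool)
    A = np.sum(model_wet & obs_wet)     # wet in both model and observation
    B = np.sum(model_wet & ~obs_wet)    # wet in model only
    C = np.sum(~model_wet & obs_wet)    # wet in observation only
    return A / float(A + B + C)

def elevation_test(modelled_wse, observed_wse):
    """Paired Student's t-test between modelled and observed water surface elevations."""
    return stats.ttest_rel(modelled_wse, observed_wse)
```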

Relevance: 100.00%

Abstract:

Fixed transaction costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data and the construction of robust methods for mitigating bias have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero; when this is not the case, a bias is imparted to parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this from experiments; and suggest techniques for mitigating this bias using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression, and we investigate their consequences using data on milk-market participation in the Ethiopian highlands.
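
The "key step" referred to is data augmentation: latent outcomes for censored observations are imputed from a truncated normal before the regression coefficients are drawn. The sketch below, assuming a known censoring point c, a fixed error variance and a flat prior on the coefficients, is only a minimal illustration of that step, not the paper's specific bias-mitigating modification.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_censored_regression(y, X, c, n_iter=2000, sigma2=1.0):
    """Minimal Gibbs sketch for a censored regression with censoring point c."""
    n, k = X.shape
    censored = (y <= c)                     # observations stacked at the censoring point
    beta = np.zeros(k)
    XtX_inv = np.linalg.inv(X.T @ X)
    ystar = y.astype(float).copy()
    rng = np.random.default_rng(0)
    draws = []
    for _ in range(n_iter):
        mu = X @ ystar[:0].dtype.type(0) + X @ beta  # linear predictor
        # data augmentation: impute latent outcomes below the censoring point
        b = (c - mu[censored]) / np.sqrt(sigma2)
        ystar[censored] = truncnorm.rvs(-np.inf, b, loc=mu[censored],
                                        scale=np.sqrt(sigma2), random_state=rng)
        # conjugate draw for beta given the completed data (flat prior, fixed variance)
        beta_hat = XtX_inv @ X.T @ ystar
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        draws.append(beta)
    return np.array(draws)
```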

Relevance: 100.00%

Abstract:

Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest-accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
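
The consensus idea behind ModFOLD, feeding several true-MQAP scores for each model into a neural network trained to predict observed model quality, can be sketched as follows. The feature layout, network size and training target (an observed model-to-native similarity score) are assumptions of this illustration, not the published architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_consensus(scores, observed_similarity):
    """scores: (n_models, n_mqaps) array of individual MQAP scores per model.

    observed_similarity: per-model similarity to the native structure used as the
    regression target during training (assumed here to be something like a TM-score).
    """
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(scores, observed_similarity)
    return net

def rank_models(net, scores):
    """Return model indices ranked best-first by the combined (consensus) score."""
    return np.argsort(-net.predict(scores))
```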

Relevance: 100.00%

Abstract:

Inferring the spatial expansion dynamics of invading species from molecular data is notoriously difficult due to the complexity of the processes involved. For these demographic scenarios, genetic data obtained from highly variable markers may be profitably combined with specific sampling schemes and information from other sources using a Bayesian approach. The geographic range of the introduced toad Bufo marinus is still expanding in eastern and northern Australia, in each case from isolates established around 1960. A large amount of demographic and historical information is available on both expansion areas. In each area, samples were collected along a transect representing populations of different ages and genotyped at 10 microsatellite loci. Five demographic models of expansion, differing in the dispersal pattern for migrants and founders and in the number of founders, were considered. Because the demographic history is complex, we used an approximate Bayesian method, based on a rejection-regression algorithm, to formally test the relative likelihoods of the five models of expansion and to infer demographic parameters. A stepwise migration-foundation model with founder events was statistically better supported than the other four models in both expansion areas. Posterior distributions supported different dynamics of expansion in the studied areas. Populations in the eastern expansion area have a lower stable effective population size and have been founded by a smaller number of individuals than those in the northern expansion area. Once demographically stabilized, populations exchange a substantial number of effective migrants per generation in both expansion areas, and such exchanges are larger in northern than in eastern Australia. The effective number of migrants appears to be considerably lower than that of founders in both expansion areas. We found our inferences to be relatively robust to various assumptions about marker, demographic, and historical features. The method presented here is the only robust, model-based method available so far that allows inference of complex population dynamics over a short time scale. It also provides the basis for investigating the interplay between population dynamics, drift, and selection in invasive species.
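
The rejection step of such an approximate Bayesian computation (ABC) model-choice analysis can be sketched as below; the subsequent local regression adjustment is omitted, and the simulator interface and acceptance fraction are assumptions of this illustration, not the exact settings used in the study.

```python
import numpy as np

def abc_model_choice(observed_stats, simulators, n_sims=100_000,
                     accept_frac=0.01, rng=None):
    """Rejection-sampling step of ABC model choice.

    simulators: list of functions, one per demographic model; each draws its own
    parameters from the prior and returns a vector of summary statistics.
    Returns approximate posterior model probabilities from the accepted simulations.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    stats, labels = [], []
    for m, sim in enumerate(simulators):
        for _ in range(n_sims // len(simulators)):
            stats.append(sim(rng))
            labels.append(m)
    stats, labels = np.asarray(stats), np.asarray(labels)
    # standardise summary statistics and keep the simulations closest to the data
    z = (stats - stats.mean(0)) / stats.std(0)
    z_obs = (observed_stats - stats.mean(0)) / stats.std(0)
    dist = np.linalg.norm(z - z_obs, axis=1)
    keep = dist <= np.quantile(dist, accept_frac)
    return np.bincount(labels[keep], minlength=len(simulators)) / keep.sum()
```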

Relevance: 100.00%

Abstract:

Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations which work in the supply chain. As suppliers can provide a variety of services and customers demand a higher quality of service provision, organisations face the challenge of choosing the right supplier for the right needs. Existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytical hierarchy process (AHP), can automatically select competitive suppliers and decide the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected by the right criteria for the right customer needs.
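
For reference, the AHP component mentioned above derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency ratio to check the judgements. A minimal sketch follows; the example criteria and judgement values are invented for illustration.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty random consistency indices

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from an AHP pairwise comparison matrix."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                         # normalised priority weights
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    cr = ci / RI[n]                       # judgements usually acceptable if CR < 0.1
    return w, cr

# Example with three hypothetical criteria: price, quality, delivery
weights, cr = ahp_weights([[1,   3,   5],
                           [1/3, 1,   2],
                           [1/5, 1/2, 1]])
```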

Relevance: 100.00%

Abstract:

New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed on the basis of an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in such a manner that the system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
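
The flavour of the approach is forward selection over candidate RBF regressors, with the usual error-reduction ratio augmented by a design-criterion term computed from the orthogonalised candidate. In the sketch below the D-optimality contribution is assumed to enter as a weighted log of the orthogonalised column's energy; the exact combined cost used in the paper may differ.

```python
import numpy as np

def forward_ols_doptimality(P, y, n_terms, beta=1e-2):
    """Forward OLS selection of RBF regressors with an assumed D-optimality term.

    P : (N, M) candidate regressor matrix (e.g. RBF outputs at N samples)
    y : (N,) target vector
    Per-candidate cost: error-reduction ratio + beta * log(w'w), where w is the
    candidate orthogonalised against the already-selected terms.
    """
    P = P.astype(float).copy()
    selected = []
    sse = y @ y
    for _ in range(n_terms):
        best = (None, -np.inf, None)
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j]
            ww = w @ w
            if ww < 1e-12:
                continue
            err = (w @ y) ** 2 / (ww * sse)   # error-reduction ratio
            cost = err + beta * np.log(ww)    # assumed D-optimality contribution
            if cost > best[1]:
                best = (j, cost, w)
        if best[0] is None:
            break
        j, _, w = best
        selected.append(j)
        # Gram-Schmidt: orthogonalise remaining candidates against the new column
        proj = (P.T @ w) / (w @ w)
        P = P - np.outer(w, proj)
    return selected
```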

Relevance: 100.00%

Abstract:

Little attention has been focussed on a precise definition and evaluation mechanism for project management risk specifically related to contractors. When bidding, contractors traditionally price risks using unsystematic approaches. The high business failure rate recorded in the industry may indicate that the current unsystematic mechanisms contractors use for building up contingencies are inadequate. The reluctance of some contractors to include a price for risk in their tenders when bidding for work competitively may also not be a useful approach. Here, instead, we first define the meaning of contractor contingency and then develop a straightforward quantitative technique that contractors can use to estimate a price for project risk. This model will help contractors analyse their exposure to project risks and express that risk in monetary terms for management action. When bidding for work, they can then decide how to allocate contingencies strategically in a way that balances risk and reward.
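
As a point of reference only, a generic way to express project risk in monetary terms is an expected-monetary-value calculation, optionally backed by a Monte Carlo percentile. The sketch below illustrates that generic approach and is not the specific technique developed in the paper.

```python
import numpy as np

def risk_contingency(probabilities, impacts, n_draws=100_000, percentile=80, seed=0):
    """Expected monetary value of a risk register plus a simulated percentile cost.

    probabilities, impacts: per-risk probability of occurrence and cost if it occurs.
    """
    p = np.asarray(probabilities, float)
    c = np.asarray(impacts, float)
    emv = float(np.sum(p * c))                        # expected monetary value
    rng = np.random.default_rng(seed)
    totals = (rng.random((n_draws, p.size)) < p) @ c  # simulated total risk cost per draw
    return emv, float(np.percentile(totals, percentile))
```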

Relevance: 100.00%

Abstract:

Several previous studies have attempted to assess the sublimation depth-scales of ice particles from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than those seen in radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observations) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test the hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; particle density may be wrong; particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
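
The sensitivity of sublimation to humidity in such a 1-D microphysics model ultimately comes down to the standard capacitance form of the particle mass budget, dm/dt = 4*pi*C*(Si - 1)/(Fk + Fd). A single-particle sketch follows; the constants and the saturation-vapour-pressure fit are textbook values assumed for illustration, not the MetUM parametrization.

```python
import numpy as np

def sublimation_rate(r, T, RHi):
    """Mass-change rate (kg/s) of a single spherical ice particle.

    r   : particle radius (m); capacitance C = r for a sphere
    T   : temperature (K)
    RHi : relative humidity with respect to ice (0-1), taken as Si
    Constants are standard textbook values (assumed for illustration).
    """
    Ls, Rv = 2.83e6, 461.5            # latent heat of sublimation (J/kg), vapour gas constant
    K, D = 0.024, 2.2e-5              # thermal conductivity (W/m/K), vapour diffusivity (m^2/s)
    Tc = T - 273.15
    esi = 611.2 * np.exp(22.46 * Tc / (272.62 + Tc))  # sat. vapour pressure over ice (Pa)
    Fk = (Ls / (Rv * T) - 1.0) * Ls / (K * T)         # heat-conduction term
    Fd = Rv * T / (D * esi)                           # vapour-diffusion term
    return 4.0 * np.pi * r * (RHi - 1.0) / (Fk + Fd)  # negative when subsaturated (sublimation)

# e.g. a 100 micron particle at -20 C in air at 70% RH over ice:
# sublimation_rate(100e-6, 253.15, 0.7)  -> negative dm/dt
```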

Relevance: 100.00%

Abstract:

We compared output from 3 dynamic process-based models (DMs: ECOSSE, MILLENNIA and the Durham Carbon Model) and 9 bioclimatic envelope models (BCEMs, including the BBOG ensemble and PEATSTASH), ranging from simple threshold to semi-process-based models. Model simulations were run at 4 British peatland sites using historical climate data and climate projections under a medium (A1B) emissions scenario from the 11-member regional climate model (RCM) ensemble underpinning UKCP09. The models showed that blanket peatlands are vulnerable to projected climate change; however, predictions varied between models as well as between sites. All BCEMs predicted a shift from presence to absence of a climate associated with blanket peat, with the sites with the lowest total annual precipitation closest to the presence/absence threshold. The DMs showed a more variable response. ECOSSE predicted a decline in the net C sink and a shift to a net C source by the end of this century. The Durham Carbon Model predicted a smaller decline in the net C sink strength, but no shift to a net C source. MILLENNIA predicted a slight overall increase in the net C sink. In contrast to the BCEM projections, the DMs predicted that the sites with the coolest temperatures and greatest total annual precipitation showed the largest change in carbon sinks. In this model inter-comparison, the greatest variation in model output in response to climate change projections was not between the BCEMs and DMs but between the DMs themselves, because of different approaches to modelling soil organic matter pools and decomposition, amongst other processes. The difference in the sign of the response has major implications for future climate feedbacks, climate policy and peatland management. Enhanced data collection, in particular monitoring of peatland response to current change, would significantly improve model development and projections of future change.
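
The simplest of the BCEMs are threshold models: a site is classed as within the blanket-peat climate space if a few climate variables pass fixed cut-offs. A toy sketch is given below; the variables and threshold values are placeholders for illustration and are not the calibrated envelopes used by PEATSTASH or the BBOG ensemble.

```python
def blanket_peat_climate_ok(annual_precip_mm, warmest_month_temp_c, wet_days):
    """Toy threshold-style bioclimatic envelope for blanket peat (illustrative cut-offs only)."""
    return (annual_precip_mm >= 1000 and
            warmest_month_temp_c <= 15.0 and
            wet_days >= 160)
```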

Relevance: 100.00%

Abstract:

Variations in the Atlantic Meridional Overturning Circulation (MOC) exert an important influence on climate, particularly on decadal time scales. Simulation of the MOC in coupled climate models is compromised, to a degree that is unknown, by their lack of fidelity in resolving some of the key processes involved. There is an overarching need to increase the resolution and fidelity of climate models, but also to assess how increases in resolution influence the simulation of key phenomena such as the MOC. In this study we investigate the impact of significantly increasing the (ocean and atmosphere) resolution of a coupled climate model on the simulation of MOC variability by comparing high- and low-resolution versions of the same model. In both versions, decadal variability of the MOC is closely linked to density anomalies that propagate from the Labrador Sea southward along the deep western boundary. We demonstrate that the MOC adjustment proceeds more rapidly in the higher resolution model due to the increased speed of western boundary waves. However, the response of Atlantic sea surface temperatures (SSTs) to MOC variations is relatively robust, in pattern if not in magnitude, across the two resolutions. The MOC also excites a coupled ocean-atmosphere response in the tropical Atlantic in both model versions. In the higher resolution model, but not the lower resolution model, there is evidence of a significant response in the extratropical atmosphere over the North Atlantic 6 years after a maximum in the MOC. In both models there is evidence of a weak negative feedback on deep density anomalies in the Labrador Sea, and hence on the MOC (with a time scale of approximately ten years). Our results highlight the need for further work to understand the decadal variability of the MOC and its simulation in climate models.

Relevance: 100.00%

Abstract:

Motivation: The ability of a simple method (MODCHECK) to determine the sequence–structure compatibility of a set of structural models generated by fold recognition is tested in a thorough benchmark analysis. Four Model Quality Assessment Programs (MQAPs) were tested on 188 targets from the latest LiveBench-9 automated structure evaluation experiment. We systematically test and evaluate whether the MQAP methods can successfully detect native-like models. Results: We show that, compared with the other three methods tested, MODCHECK is the most reliable method for consistently selecting the best top model and for ranking the models. In addition, we show that the choice of model similarity score used to assess a model's similarity to the experimental structure can influence the overall performance of these tools. Although these MQAP methods fail to improve the model selection performance for methods that already incorporate protein three-dimensional (3D) structural information, an improvement is observed for methods that are purely sequence-based, including the best profile–profile methods. This suggests that even the best sequence-based fold recognition methods can still be improved by taking into account the 3D structural information.
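
Both evaluation views mentioned above (top-model selection and ranking of models against an observed similarity score) can be computed per target as in the sketch below; the particular loss and correlation measures here are assumptions of the illustration, not necessarily those used in the benchmark.

```python
import numpy as np
from scipy.stats import spearmanr

def mqap_evaluation(predicted_scores, observed_similarity):
    """Two simple views of MQAP performance for one target.

    predicted_scores    : MQAP score per model (higher = better predicted quality)
    observed_similarity : model-to-experimental-structure similarity per model
                          (e.g. a TM-score or GDT-type measure)
    Returns (top-model loss, ranking correlation).
    """
    top = int(np.argmax(predicted_scores))
    loss = observed_similarity.max() - observed_similarity[top]  # quality given up by the selection
    rho, _ = spearmanr(predicted_scores, observed_similarity)    # ranking agreement
    return loss, rho
```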