157 results for Simple overlap model


Relevance:

30.00%

Publisher:

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
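
As a rough illustration of the two kinds of performance measure contrasted above, the sketch below computes an intersection-over-union overlap score for binary wet/dry rasters and an r.m.s. height difference for paired waterline elevations. It is a generic stand-in under those assumptions, not the study's exact implementation; the function names and the choice of overlap statistic are placeholders.

import numpy as np

def areal_overlap_measure(observed_wet, modelled_wet):
    # Boolean rasters (True = wet pixel). Intersection over union of the two
    # wet areas: 1.0 is a perfect match, 0.0 means no overlap at all.
    both_wet = np.logical_and(observed_wet, modelled_wet).sum()
    either_wet = np.logical_or(observed_wet, modelled_wet).sum()
    return both_wet / either_wet

def rms_waterline_height_difference(observed_heights, modelled_heights):
    # Paired elevations (metres) at corresponding points along the observed
    # and modelled waterlines; lower is better.
    diff = np.asarray(observed_heights, dtype=float) - np.asarray(modelled_heights, dtype=float)
    return np.sqrt(np.mean(diff ** 2))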

Relevance:

30.00%

Publisher:

Abstract:

In the Eady model, where the meridional potential vorticity (PV) gradient is zero, perturbation energy growth can be partitioned cleanly into three mechanisms: (i) shear instability, (ii) resonance, and (iii) the Orr mechanism. Shear instability involves two-way interaction between Rossby edge waves on the ground and lid, resonance occurs as interior PV anomalies excite the edge waves, and the Orr mechanism involves only interior PV anomalies. These mechanisms have distinct implications for the structural and temporal linear evolution of perturbations. Here, a new framework is developed in which the same mechanisms can be distinguished for growth on basic states with nonzero interior PV gradients. It is further shown that the evolution from quite general initial conditions can be accurately described (peak error in perturbation total energy typically less than 10%) by a reduced system that involves only three Rossby wave components. Two of these are counterpropagating Rossby waves—that is, generalizations of the Rossby edge waves when the interior PV gradient is nonzero—whereas the other component depends on the structure of the initial condition and its PV is advected passively with the shear flow. In the cases considered, the three-component model outperforms approximate solutions based on truncating a modal or singular vector basis.

Relevance:

30.00%

Publisher:

Abstract:

A fast radiative transfer model (RTM) for computing emitted infrared radiances for the Very High Resolution Radiometer (VHRR) onboard the operational Indian geostationary satellite Kalpana has been developed and verified. This work is a step towards the assimilation of Kalpana water vapor (WV) radiances into numerical weather prediction models. The fast RTM uses a regression-based approach to parameterize channel-specific convolved level-to-space transmittances. A comparison between the fast RTM and the line-by-line RTM demonstrated that the fast RTM can simulate line-by-line radiances for the Kalpana WV channel to an accuracy better than the instrument noise, while offering more rapid radiance calculations. A comparison of clear-sky radiances of the Kalpana WV channel with the ECMWF model first-guess radiances is also presented, to demonstrate the fast RTM's performance against real observations. In order to assimilate the radiances from Kalpana, a simple bias-correction scheme is also suggested.
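
A minimal sketch of the regression-plus-layer-summation idea behind such fast RTMs follows. It is not the Kalpana fast model itself: the predictor set and coefficients are placeholders assumed to have been fitted offline against line-by-line calculations, and scattering and surface reflection are ignored.

import numpy as np

def fast_rtm_radiance(predictors, coeffs, planck_layers, planck_surface,
                      surface_emissivity=0.98):
    # predictors     : (n_layers, n_predictors) profile-dependent predictors
    # coeffs         : (n_predictors,) regression coefficients fitted offline
    #                  against line-by-line layer optical depths for this channel
    # planck_layers  : (n_layers,) Planck radiances at the layer temperatures
    # planck_surface : Planck radiance at the surface temperature
    layer_od = np.maximum(predictors @ coeffs, 0.0)        # effective layer optical depths
    tau = np.exp(-np.cumsum(layer_od))                     # level-to-space transmittances, top down
    tau_above = np.concatenate(([1.0], tau[:-1]))          # transmittance from the level above each layer
    radiance = np.sum(planck_layers * (tau_above - tau))   # atmospheric emission term
    radiance += surface_emissivity * planck_surface * tau[-1]  # surface emission attenuated to space
    return radiance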

Relevance:

30.00%

Publisher:

Abstract:

Infants' responses in speech sound discrimination tasks can be nonmonotonic over time. Stager and Werker (1997) reported such data in a bimodal habituation task. In this task, 8-month-old infants were capable of discriminations that involved minimal contrast pairs, whereas 14-month-old infants were not. It was argued that the older infants' attenuated performance was linked to their processing of the stimuli for meaning. The authors suggested that these data are diagnostic of a qualitative shift in infant cognition. We describe an associative connectionist model showing a similar decrement in discrimination without any qualitative shift in processing. The model suggests that responses to phonemic contrasts may be a nonmonotonic function of experience with language. The implications of this idea are discussed. The model also provides a formal framework for studying habituation-dishabituation behaviors in infancy.

Relevance:

30.00%

Publisher:

Abstract:

The potential of the τ-ω model for retrieving the volumetric moisture content of bare and vegetated soil from dual-polarisation passive microwave data acquired at single and multiple angles is tested. Measurement error and several additional sources of uncertainty will affect the theoretical retrieval accuracy. These include uncertainty in the soil temperature, in the vegetation structure and consequently its microwave single-scattering albedo, and in the soil microwave emissivity arising from its roughness. To test the effects of these uncertainties for simple homogeneous scenes, we attempt to retrieve soil moisture from a number of simulated microwave brightness temperature datasets generated using the τ-ω model. The uncertainties for each influence are estimated and applied to curves generated for typical scenarios, and an inverse model is used to retrieve the soil moisture content, vegetation optical depth and soil temperature. The effect of each influence on the theoretical soil moisture retrieval limit is explored, the likelihood of each sensor configuration meeting user requirements is assessed, and the most effective means of improving moisture retrieval are indicated.
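
For orientation, the sketch below writes out the zeroth-order τ-ω forward model for a single polarisation and inverts it for soil moisture, optical depth and soil temperature with a least-squares fit. The soil-moisture-to-emissivity step is a deliberately crude placeholder (a real retrieval would use a dielectric mixing model, Fresnel reflectivity and a roughness correction), and the function names, bounds and first guess are illustrative assumptions only.

import numpy as np
from scipy.optimize import least_squares

def tau_omega_tb(soil_moisture, tau, t_soil, theta_deg, omega=0.05):
    # Zeroth-order tau-omega brightness temperature for one polarisation,
    # assuming the canopy temperature equals the soil temperature.
    emissivity = 0.95 - 0.5 * soil_moisture               # placeholder, not a physical model
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # vegetation transmissivity
    reflectivity = 1.0 - emissivity
    return (t_soil * emissivity * gamma
            + t_soil * (1.0 - omega) * (1.0 - gamma) * (1.0 + reflectivity * gamma))

def retrieve(tb_obs, angles_deg, first_guess=(0.2, 0.3, 290.0)):
    # Fit (soil moisture, optical depth, soil temperature) to multi-angle
    # brightness temperatures; at least three observations are needed.
    def residuals(x):
        sm, tau, ts = x
        return [tau_omega_tb(sm, tau, ts, a) - tb for a, tb in zip(angles_deg, tb_obs)]
    fit = least_squares(residuals, first_guess,
                        bounds=([0.0, 0.0, 250.0], [0.6, 3.0, 330.0]))
    return fit.x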

Relevance:

30.00%

Publisher:

Abstract:

In the 1960s, Jacob Bjerknes suggested that if the top-of-the-atmosphere (TOA) fluxes and the oceanic heat storage did not vary too much, then the total energy transport by the climate system would not vary too much either. This implies that any large anomalies of oceanic and atmospheric energy transport should be equal and opposite. This simple scenario has become known as Bjerknes compensation. A long control run of the Third Hadley Centre Coupled Ocean-Atmosphere General Circulation Model (HadCM3) has been investigated. It was found that northern extratropical decadal anomalies of atmospheric and oceanic energy transports are significantly anticorrelated and have similar magnitudes, which is consistent with the predictions of Bjerknes compensation. The degree of compensation in the northern extratropics was found to increase with increasing time scale. Bjerknes compensation did not occur in the Tropics, primarily because large changes in the surface fluxes were associated with large changes in the TOA fluxes. In the ocean, the decadal variability of the energy transport is associated with fluctuations in the meridional overturning circulation in the Atlantic Ocean. A stronger Atlantic Ocean energy transport leads to strong warming of surface temperatures in the Greenland-Iceland-Norwegian (GIN) Seas, which results in a reduced equator-to-pole surface temperature gradient and reduced atmospheric baroclinicity. It is argued that a stronger Atlantic Ocean energy transport leads to a weakened atmospheric transient energy transport.
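
A minimal sketch of the kind of diagnostic implied by the compensation argument follows: given annual time series of northward atmospheric and oceanic energy transport anomalies at a fixed latitude, it forms running-mean decadal anomalies and reports their correlation and regression slope (perfect compensation would give both equal to -1). The window length, lack of detrending and function names are assumptions for illustration, not the paper's methodology.

import numpy as np

def decadal_anomalies(series, window=11):
    # Running-mean "decadal" anomalies of an annual time series.
    anomalies = np.asarray(series, dtype=float)
    anomalies = anomalies - anomalies.mean()
    kernel = np.ones(window) / window
    return np.convolve(anomalies, kernel, mode="valid")

def compensation_diagnostics(atmos_transport, ocean_transport, window=11):
    # Correlation and regression slope between decadal anomalies of the
    # atmospheric and oceanic energy transports at one latitude.
    a = decadal_anomalies(atmos_transport, window)
    o = decadal_anomalies(ocean_transport, window)
    correlation = np.corrcoef(a, o)[0, 1]
    slope = np.polyfit(o, a, 1)[0]   # change in atmospheric transport per unit ocean transport
    return correlation, slope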

Relevance:

30.00%

Publisher:

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. It is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques like the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures such as the observed-to-hidden ratio or the completeness of identification proportion when approaching the question of sample size choice.
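
In symbols, the conditioning idea described above is the law of total variance applied to the population size estimator, written here as \hat{N}, with n the number of units actually observed:

\operatorname{Var}(\hat{N}) = \mathbb{E}\big[\operatorname{Var}(\hat{N}\mid n)\big] + \operatorname{Var}\big(\mathbb{E}[\hat{N}\mid n]\big)

Loosely, the first term collects the variation due to estimating the model parameters from the observed units, while the second term reflects the binomial variation of n as a sample from the population of size N; the exact form of each term for the Zelterman and Chao estimators is what the note derives.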

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the first systematic chronostratigraphic study of the river terraces of the Exe catchment in South West England and a new conceptual model for terrace formation in unglaciated basins, with applicability to terrace staircase sequences elsewhere. The Exe catchment lay beyond the maximum extent of Pleistocene ice sheets, and the drainage pattern evolved from the Tertiary to the Middle Pleistocene, by which time the major valley systems were in place and downcutting began to create a staircase of strath terraces. The higher terraces (8-6) typically exhibit altitudinal overlap or appear to be draped over the landscape, whilst the middle terraces show greater altitudinal separation and the lowest terraces are of a cut-and-fill form. The terrace deposits investigated in this study were deposited in cold phases of the glacial-interglacial Milankovitch climatic cycles, with the lowest four being deposited in the Devensian, Marine Isotope Stages (MIS) 4-2. A new cascade process-response model of basin terrace evolution in the Exe valley is proposed, which emphasises the role of lateral erosion in the creation of strath terraces and the reworking of inherited resistant lithological components down through the staircase. The resultant emergent valley topography, and the reworking of artefacts along with gravel clasts, have important implications for the dating of hominin presence and the local landscapes they inhabited. Whilst the terrace chronology suggested here is still not as detailed as that for the Thames or the Solent System, it does indicate a Middle Palaeolithic hominin presence in the region, probably prior to the late Wolstonian Complex or MIS 6. This supports existing data from cave sites in South West England.

Relevance:

30.00%

Publisher:

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational, given the state of knowledge regarding the consequences of overgrazing. The phenomenon, however, appears inextricably linked with the objectives that farmers hold. In this research those objectives were elicited first, and from their ranking two of them, 'asset value of cattle' (representing cattle ownership) and 'present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that the benefits of holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware of, or uncaring about, overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple 'no overgrazing' rule is an insufficient strategy to maintain the long-term sustainability of the beef production systems in Central Brazil.
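
For reference, a two-criterion Compromise Programming model of the kind described selects the management plan x that minimises a weighted distance to the ideal point; the formulation below is the textbook one, not necessarily the study's exact notation:

L_{p}(x) = \left[\, w_{1}^{p}\!\left(\frac{f_{1}^{*} - f_{1}(x)}{f_{1}^{*} - f_{1*}}\right)^{p} + w_{2}^{p}\!\left(\frac{f_{2}^{*} - f_{2}(x)}{f_{2}^{*} - f_{2*}}\right)^{p} \right]^{1/p}

Here f_1 and f_2 are the two criteria (the asset value of cattle and the present value of economic returns), f_i^{*} and f_{i*} are their ideal and anti-ideal values used for normalisation, w_i are weights expressing the farmer's preferences, and p (commonly 1, 2 or infinity) controls how strongly the larger of the two deviations dominates the compromise.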


Relevance:

30.00%

Publisher:

Abstract:

This study presents a new simple approach for combining empirical with raw (i.e., not bias corrected) coupled model ensemble forecasts in order to make more skillful interval forecasts of ENSO. A Bayesian normal model has been used to combine empirical and raw coupled model December SST Niño-3.4 index forecasts started at the end of the preceding July (5-month lead time). The empirical forecasts were obtained by linear regression between December and the preceding July Niño-3.4 index values over the period 1950–2001. Coupled model ensemble forecasts for the period 1987–99 were provided by ECMWF, as part of the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project. Empirical and raw coupled model ensemble forecasts alone have similar mean absolute error forecast skill score, compared to climatological forecasts, of around 50% over the period 1987–99. The combined forecast gives an increased skill score of 74% and provides a well-calibrated and reliable estimate of forecast uncertainty.
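
A minimal sketch of the normal-normal ("Bayesian normal model") combination described above follows: the empirical regression forecast provides the prior, the raw ensemble provides the likelihood, and the posterior gives both a combined point forecast and a calibrated interval. The function name is illustrative and the variance treatment is a simplification; the actual scheme also calibrates the ensemble against past observations before combining.

import numpy as np

def combine_forecasts(prior_mean, prior_var, ensemble):
    # Conjugate normal-normal update: the empirical forecast (prior) is
    # combined with n coupled-model ensemble members treated as independent
    # draws from a normal likelihood with unknown mean.
    ensemble = np.asarray(ensemble, dtype=float)
    n = ensemble.size
    like_var = ensemble.var(ddof=1)                     # raw ensemble spread
    post_precision = 1.0 / prior_var + n / like_var
    post_mean = (prior_mean / prior_var + n * ensemble.mean() / like_var) / post_precision
    post_var = 1.0 / post_precision
    return post_mean, post_var

# A 95% interval forecast of the December Nino-3.4 index then follows as
# post_mean +/- 1.96 * sqrt(post_var).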

Relevance:

30.00%

Publisher:

Abstract:

The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model, machine-learning-based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/.
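
The basic idea behind a clustering/consensus MQAP such as ModFOLDclust can be sketched very simply: each model in an ensemble is scored by its mean structural similarity to all the other models, on the assumption that conformations recurring across the ensemble are more likely to be correct. The sketch below takes a precomputed pairwise similarity matrix (e.g. TM-scores from an external structure-alignment tool) and is only a schematic of that consensus step, not the program's actual scoring function.

import numpy as np

def consensus_quality_scores(similarity):
    # similarity: (n_models, n_models) symmetric matrix of pairwise structural
    # similarity scores in [0, 1]. Each model's global quality score is its
    # mean similarity to every other model in the ensemble.
    sim = np.asarray(similarity, dtype=float)
    n_models = sim.shape[0]
    off_diagonal_sum = sim.sum(axis=1) - np.diag(sim)
    return off_diagonal_sum / (n_models - 1)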


Relevance:

30.00%

Publisher:

Abstract:

Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics being frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of the model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in post-translational modification of proteins in cataract), and the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even or multiple increases. At least 60% must be below the KM (or KMs, if there is more than one dissociation constant) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of design such as substrate range, number of measurements and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
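
To make the point about point placement concrete, the sketch below uses a frequentist Fisher-information surrogate for the Bayesian utility functions described above: for a simple Michaelis-Menten model it approximates the variances of the Vmax and KM estimates from the inverse Fisher information and compares an even spread of substrate concentrations with a design that puts roughly 60% of the points below KM. The numbers, noise level and function names are illustrative assumptions only.

import numpy as np

def mm_parameter_variances(substrate, vmax=1.0, km=1.0, sigma=0.02):
    # Approximate variances of (Vmax, KM) for a Michaelis-Menten experiment
    # measured at the given substrate concentrations, from the inverse Fisher
    # information of the linearised model with i.i.d. Gaussian noise sigma.
    s = np.asarray(substrate, dtype=float)
    dv_dvmax = s / (km + s)
    dv_dkm = -vmax * s / (km + s) ** 2
    jacobian = np.column_stack([dv_dvmax, dv_dkm])
    fisher = jacobian.T @ jacobian / sigma ** 2
    return np.diag(np.linalg.inv(fisher))

# Ten measurements spread evenly, versus ten with ~60% of the points below KM.
even_design = np.linspace(0.1, 5.0, 10)
skewed_design = np.concatenate([np.linspace(0.1, 0.9, 6), np.linspace(1.5, 5.0, 4)])
print(mm_parameter_variances(even_design))     # larger KM variance
print(mm_parameter_variances(skewed_design))   # noticeably smaller KM variance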

Relevance:

30.00%

Publisher:

Abstract:

There has been great interest recently in peptide amphiphiles and block copolymers containing biomimetic peptide sequences, due to applications in bionanotechnology. We investigate the self-assembly of the peptide-PEG amphiphile FFFF-PEG5000, containing the hydrophobic sequence of four phenylalanine residues conjugated to PEG of molar mass 5000. This serves as a simple model peptide amphiphile. At very low concentration, association of hydrophobic aromatic phenylalanine residues occurs, as revealed by circular dichroism and UV/vis fluorescence experiments. A critical aggregation concentration associated with the formation of hydrophobic domains is determined through pyrene fluorescence assays. At higher concentration, defined beta-sheets develop, as revealed by FTIR spectroscopy and X-ray diffraction. Transmission electron microscopy reveals self-assembled straight fibril structures. These are much shorter than those observed for amyloid peptides; the finite length may be set by the end-cap energy due to the hydrophobicity of phenylalanine. The combination of these techniques points to different aggregation processes depending on concentration: hydrophobic association into irregular aggregates occurs at low concentration, with well-developed beta-sheets developing only at higher concentration. Drying of FFFF-PEG5000 solutions leads to crystallization of PEG, as confirmed by polarized optical microscopy (POM), FTIR and X-ray diffraction (XRD). PEG crystallization does not disrupt the local beta-sheet structure (as indicated by FTIR and XRD). However, on longer length scales the beta-sheet fibrillar structure is perturbed, as spherulites from PEG crystallization are observed by POM.