129 results for Single-process Models


Relevance: 30.00%

Abstract:

Composites of wind speeds, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter for the 40-yr ECMWF Re-Analysis (ERA-40) and the High-Resolution Global Environmental Model (HiGEM). Features of conceptual models of cyclone structure—the warm conveyor belt, cold conveyor belt, and dry intrusion—have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift, and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt: in ERA-40 the high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM this association is weaker. The cold conveyor belt is identified as rearward-flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. Its structure compares well between HiGEM and ERA-40, but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM's ability to represent the key features of extratropical cyclone structure can give confidence in future predictions from this model.
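The compositing operation itself is simple to illustrate. Below is a minimal Python sketch, assuming each of the 100 cyclones contributes one 2-D field snapshot (a numpy array) together with the grid indices of its centre; the function name and windowing choices are illustrative rather than those of the study. Averaging many such cyclone-centred windows produces exactly the smoothing of individual features that the abstract notes.

import numpy as np

def cyclone_composite(fields, centres, hw):
    """Average cyclone-centred windows of a meteorological field.

    fields  : list of 2-D arrays, one per cyclone (e.g. at the analysis
              time of maximum intensity)
    centres : list of (row, col) grid indices of each cyclone centre
    hw      : window half-width in grid points (hypothetical choice)
    """
    windows = []
    for f, (r, c) in zip(fields, centres):
        if r < hw or c < hw:
            continue  # centre too close to the grid edge
        w = f[r - hw:r + hw + 1, c - hw:c + hw + 1]
        if w.shape != (2 * hw + 1, 2 * hw + 1):
            continue  # window clipped at the far edge
        windows.append(w)
    return np.mean(windows, axis=0)  # the composite field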

Relevance: 30.00%

Abstract:

Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate digital terrain models (DTMs) of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a digital surface model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the Environment Agency (EA), which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than, say, 1m high, yet typically most of a floodplain may be covered in such vegetation. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present whether the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (MasterMap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution finite element (FE) model to act as a benchmark for a more practical lower-resolution model. A further problem discussed is how best to exploit the data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size, e.g. for a 5m-wide embankment within a raster grid model with a 15m cell size, the maximum local height of the embankment could be assigned to each cell covering the embankment; but how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
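The spatially varying friction idea in the abstract above can be sketched in a few lines. This is a minimal illustration assuming a simple linear mapping from LiDAR vegetation height to Manning's n; the functional form, coefficients and bounds are hypothetical, not values used by the authors.

import numpy as np

def friction_from_vegetation(veg_height, n_bare=0.03, k=0.05, n_max=0.15):
    """Map a vegetation-height grid (m) to a Manning's n grid.

    The linear form n = n_bare + k * height, clipped to [n_bare, n_max],
    is purely illustrative; it replaces a single calibrated global
    floodplain friction coefficient with a spatially varying field.
    """
    return np.clip(n_bare + k * np.asarray(veg_height), n_bare, n_max)

# Example: a small patch with short grass, one hedge cell and bare soil
veg = np.array([[0.2, 0.2, 0.0],
                [0.3, 2.5, 0.1],
                [0.2, 0.3, 0.0]])
print(friction_from_vegetation(veg))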

Relevance: 30.00%

Abstract:

Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine the outline of a 2006 flood event on the River Dee, North Wales, UK, from a 12.5m ERS-1 image. Points at approximately 100m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Approximating the water surface as a plane sloping from the gauged upstream water elevation to the gauged downstream water elevation, the observed water surface elevations along the flooded extent are compared to their 'expected' values. The errors between the two are roughly normally distributed; however, when plotted against coordinates they show obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed-data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a t-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this t-test are selected on the basis of their error, and the selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8m) between the approximate expected elevations and the actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
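The comparison of observed waterline elevations against a planar water surface can be sketched as follows. This assumes each outline point is characterised by its distance downstream of the upstream gauge; the names and the linear interpolation in chainage are illustrative assumptions rather than the study's exact procedure.

import numpy as np

def planar_surface_errors(chainage, z_obs, z_up, z_down, reach_length):
    """Errors between observed waterline elevations and a planar
    water surface sloping linearly between the two gauges.

    chainage     : distance of each point downstream of the upstream
                   gauge (m)
    z_obs        : LiDAR DEM elevation at each waterline point (m)
    z_up, z_down : gauged water levels (m)
    reach_length : gauge-to-gauge distance (m)
    """
    chainage = np.asarray(chainage, float)
    z_obs = np.asarray(z_obs, float)
    z_expected = z_up + (z_down - z_up) * chainage / reach_length
    err = z_obs - z_expected
    rmse = np.sqrt(np.mean(err ** 2))  # cf. the 0.8m figure quoted
    return err, rmse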

Relevance: 30.00%

Abstract:

Formal and analytical risk models prescribe how risk should be incorporated into construction bids. However, the actual process by which contractors and their clients negotiate and agree on price is complex and not clearly articulated in the literature. Using participant observation, the entire tender process was shadowed in two leading UK construction firms. This was compared with the propositions of analytical models and significant differences were found. The 670 hours of work observed across the two firms revealed three stages of the bidding process. Bidding activities were categorized and their extent estimated as deskwork (32%), calculations (19%), meetings (14%), documents (13%), off-days (11%), conversations (7%), correspondence (3%) and travel (1%). Risk allowances of 1-2% were priced into some bids, and three tiers of risk apportionment in bids were identified. However, priced risks may sometimes be excluded from the final bid price to enhance competitiveness. Thus, although risk apportionment affects a contractor's pricing strategy, other complex microeconomic factors also affect price. Rather than being priced in as contingencies, risk was mostly handled through contractual rather than price mechanisms, to reflect commercial imperatives. The findings explain why some assumptions underpinning analytical models may not be sustainable in practice, and why what actually happens in practice matters for those who seek to model the pricing of construction bids.

Relevance: 30.00%

Abstract:

The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models, and aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration; its optimal value varies slightly when different input data are used. The model was tested against a historical data set, simulating yields on a 2.5° × 2.5° grid. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield; the correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can easily be extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas.
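The two skill measures quoted (the correlation coefficient and the root mean square error as a percentage of mean yield) are straightforward to reproduce. A minimal sketch follows; note that it omits the yield technology detrending that the abstract identifies as essential for stable parameters.

import numpy as np

def yield_skill(obs, sim):
    """Correlation and RMSE (% of mean observed yield) between
    observed and simulated annual yield series."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    rmse_pct = 100.0 * np.sqrt(np.mean((sim - obs) ** 2)) / obs.mean()
    return r, rmse_pct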

Relevance: 30.00%

Abstract:

Using mixed logit models to analyse choice data is common, but requires ex ante specification of the functional forms of the preference distributions. We make the case for greater use of bounded functional forms and propose the marginal likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared with alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment on GM food types, which provides insights into the recent WTO dispute between the EU and the US, Canada and Argentina, and into whether labelling and trade regimes should be based on the production process or on product composition.
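For reference, the marginal likelihood of a specification M_k is the mixed logit likelihood integrated over the prior on its parameters, and two non-nested specifications can be compared through the ratio of their marginal likelihoods (the Bayes factor); this is the standard definition rather than anything specific to the paper:

\[
  p(y \mid M_k) = \int p(y \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, \mathrm{d}\theta_k,
  \qquad
  \mathrm{BF}_{12} = \frac{p(y \mid M_1)}{p(y \mid M_2)} .
\]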

Relevance: 30.00%

Abstract:

The reliable assessment of the quality of protein structural models is fundamental to the progress of structural bioinformatics. The ModFOLD server provides access to two accurate techniques for the global and local prediction of the quality of 3D models of proteins: ModFOLD, a fast model quality assessment program (MQAP) for the global assessment of either single or multiple models, and ModFOLDclust, a more computationally intensive method that carries out clustering of multiple models and provides per-residue local quality assessment.

Relevance: 30.00%

Abstract:

Mathematical modeling of bacterial chemotaxis systems has been influential and insightful in helping to understand experimental observations. We provide here a comprehensive overview of the range of mathematical approaches used for modeling, within a single bacterium, chemotactic processes caused by changes to external gradients in its environment. Specific areas of the bacterial system that have been studied and modeled are discussed in detail, including the modeling of adaptation in response to attractant gradients, the intracellular phosphorylation cascade, membrane receptor clustering, and spatial modeling of intracellular protein signal transduction. The importance of producing robust models that address adaptation, gain, and sensitivity is also discussed. This review highlights that while mathematical modeling has aided in understanding bacterial chemotaxis on the individual cell scale and guided experimental design, no single model succeeds in robustly describing all of the basic elements of the cell. We conclude by discussing the importance of this gap and the future of modeling in this area.
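As one concrete example of the adaptation modeling discussed, here is a minimal integral-feedback sketch in the spirit of the Barkai-Leibler robust-adaptation mechanism; the functional forms and rate constants are illustrative, not taken from any specific model in the review. After a step increase in attractant, receptor activity dips and then returns to its pre-stimulus level regardless of the new concentration.

import numpy as np

def simulate_adaptation(t_end=200.0, dt=0.01, v_r=0.1, v_b=0.2):
    """Euler integration of a toy chemotactic adaptation model.

    Receptor activity a = m / (m + K(L)): attractant L raises K and
    suppresses activity; methylation m integrates the imbalance
    v_r - v_b * a, so activity always returns to v_r / v_b
    (perfect adaptation). All forms and rates are illustrative.
    """
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    L = np.where(t < 50.0, 1.0, 10.0)  # step increase in attractant
    m = np.empty(n)
    a = np.empty(n)
    m[0] = 1.0
    for i in range(n - 1):
        K = 1.0 + L[i]                 # ligand shifts receptor state
        a[i] = m[i] / (m[i] + K)
        m[i + 1] = m[i] + dt * (v_r - v_b * a[i])  # integral feedback
    a[-1] = m[-1] / (m[-1] + 1.0 + L[-1])
    return t, L, a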

Relevance: 30.00%

Abstract:

Combinations of drugs are increasingly being used for a wide variety of diseases and conditions. A pre-clinical study may allow the investigation of the response at a large number of dose combinations. In determining the response to a drug combination, interest may lie in seeking evidence of synergism, in which the joint action is greater than the actions of the individual drugs, or of antagonism, in which it is less. Two well-known response surface models representing no interaction are Loewe additivity and Bliss independence, and Loewe or Bliss synergism or antagonism is defined relative to these. We illustrate an approach to fitting these models for the case in which the marginal single drug dose-response relationships are represented by four-parameter logistic curves with common upper and lower limits, and where the response variable is normally distributed with a common variance about the dose-response curve. When the dose-response curves are not parallel, the relative potency of the two drugs varies according to the magnitude of the desired effect and the models for Loewe additivity and synergism/antagonism cannot be explicitly expressed. We present an iterative approach to fitting these models without the assumption of parallel dose-response curves. A goodness-of-fit test based on residuals is also described. Implementation using the SAS NLIN procedure is illustrated using data from a pre-clinical study.
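The Loewe additivity benchmark can be made concrete. The sketch below assumes increasing four-parameter logistic single-drug curves with common upper and lower limits, as in the abstract; an interaction index of 1 indicates Loewe additivity, values below 1 synergism and above 1 antagonism. The paper's iterative fitting for non-parallel curves and its residual-based goodness-of-fit test are not reproduced here.

import numpy as np

def four_pl(d, low, high, ec50, h):
    """Four-parameter logistic (Hill) curve, increasing in dose d."""
    return low + (high - low) * d**h / (ec50**h + d**h)

def inverse_four_pl(E, low, high, ec50, h):
    """Dose of a single drug alone producing effect E (low < E < high)."""
    return ec50 * ((E - low) / (high - E)) ** (1.0 / h)

def loewe_index(d1, d2, E, low, high, p1, p2):
    """Interaction index d1/D1 + d2/D2 for a combination (d1, d2)
    observed to produce effect E; p1 and p2 are the (ec50, hill)
    parameters of the two single-drug curves."""
    D1 = inverse_four_pl(E, low, high, *p1)
    D2 = inverse_four_pl(E, low, high, *p2)
    return d1 / D1 + d2 / D2

# Parallel curves (equal Hill slopes): the index is exactly 1 when the
# combination produces the effect predicted by Loewe additivity.
print(loewe_index(d1=0.5, d2=1.0, E=50.0, low=0.0, high=100.0,
                  p1=(1.0, 1.0), p2=(2.0, 1.0)))  # -> 1.0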

Relevance: 30.00%

Abstract:

An example of the evolution of the interacting behaviours of parents and progeny is studied using iterative equations linking the frequencies of the gametes produced by the progeny to the frequencies of the gametes in the parental generation. This population genetics approach shows that a model in which both behaviours are determined by a single locus can lead to a stable equilibrium in which the two behaviours continue to segregate. A model in which the behaviours are determined by genes at two separate loci leads eventually to fixation of the alleles at both loci, but this can take many generations of selection. Models of the type described in this paper will be needed to understand the evolution of complex behaviour when genomic or experimental information is available about the genetic determinants of behaviour and the selective values of different genomes.
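The flavour of such iterative gamete-frequency equations can be conveyed by the standard one-locus viability-selection recursion shown below; this is a textbook simplification, since the paper's model additionally couples parental and progeny behaviours. With heterozygote advantage, the recursion settles at a stable polymorphic equilibrium, analogous to the stable segregation reported for the one-locus behavioural model.

def iterate_allele_freq(p0, w_AA, w_Aa, w_aa, generations):
    """Iterate the frequency p of allele A under constant viability
    selection at a single diploid locus:

        p' = p * (p * w_AA + q * w_Aa) / w_bar,  with q = 1 - p.
    """
    p = p0
    traj = [p]
    for _ in range(generations):
        q = 1.0 - p
        w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
        p = p * (p * w_AA + q * w_Aa) / w_bar
        traj.append(p)
    return traj

# Heterozygote advantage maintains both alleles at an equilibrium of
# (w_Aa - w_aa) / ((w_Aa - w_AA) + (w_Aa - w_aa)) = 2/3 here.
print(iterate_allele_freq(0.1, 0.9, 1.0, 0.8, 50)[-1])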

Relevance: 30.00%

Abstract:

The tagged microarray marker (TAM) method allows high-throughput differentiation between predicted alternative PCR products. Typically, the method is used as a molecular marker approach to determining the allelic states of single nucleotide polymorphisms (SNPs) or insertion-deletion (indel) alleles at genomic loci in multiple individuals. Biotin-labeled PCR products are spotted, unpurified, onto a streptavidin-coated glass slide and the alternative products are differentiated by hybridization to fluorescent detector oligonucleotides that recognize corresponding allele-specific tags on the PCR primers. The main attractions of this method are its high throughput (thousands of PCRs are analyzed per slide), flexibility of scoring (any combination, from a single marker in thousands of samples to thousands of markers in a single sample, can be analyzed) and flexibility of scale (any experimental scale, from a small lab setting to a large project). This protocol describes an experiment involving 3,072 PCRs scored on a slide. The whole process, from the start of PCR setup to receiving the data spreadsheet, takes 2 d.

Relevance: 30.00%

Abstract:

The linear isomer of dodecylbenzene (DDB), 1-phenyldodecane, was aged at temperatures of 105 and 135 °C in air and the resultant products were analyzed using a range of analytical techniques. On ageing, the 1-phenyldodecane darkened; the acid number, dielectric loss and water content increased; and significant oxidation peaks were detected in the infrared spectrum. When aged in the presence of copper, a characteristic peak at 680 nm was also detected by UV/visible spectroscopy but, compared with previous studies of a cable-grade DDB, the strength of this peak was much increased and no appreciable precipitate formation occurred. At the same time, very high values of dielectric loss were recorded. On ageing in the absence of copper, an unusually strong infrared carbonyl band was seen, which correlates well with the detection of dodecanophenone by gas chromatography/mass spectrometry and nuclear magnetic resonance spectroscopy. It was therefore concluded that the ageing process proceeds via the initial production of aromatic ketones, which may then be further oxidized to carboxylic acids. In the presence of copper, these oxidation products are present in lower quantities, most of them being combined with the copper present in the oil to give copper carboxylates. The behavior is described in terms of a complex autoxidation mechanism, in which copper acts as both an oxidizing and a reducing agent, depending on its oxidation state, and, in particular, promotes elimination via the oxidation of intermediate alkyl radical species to carbocations.

Relevance: 30.00%

Abstract:

The kinetics of the photodimerisation reactions of the 2- and 4-β-halogeno-derivatives of trans-cinnamic acid (where the halogen is fluorine, chlorine or bromine) have been investigated by infrared microspectroscopy. It is found that none of the reactions proceeds to 100% yield. This is in line with a reaction mechanism developed by Wernick and his co-workers, which postulates the formation of isolated monomers within the solid that cannot react. β-4-Bromo- and β-4-chloro-trans-cinnamic acids show approximately first-order kinetics, although in both cases the reaction accelerates somewhat as it proceeds. First-order kinetics is explained in terms of a reaction between one excited-state and one ground-state monomer molecule, while the acceleration of the reaction implies that it is promoted as defects form within the crystal. By contrast, β-2-chloro-trans-cinnamic acid shows a strongly accelerating reaction that closely follows the contracting-cube equation. β-2-Fluoro- and β-4-fluoro-trans-cinnamic acids show a close match to first-order kinetics. The 4-fluoro-derivative, however, shows a reaction that proceeds via a structural intermediate. The difference in behaviour between the 2-fluoro- and 4-fluoro-derivatives may be due to the different C–H···F hydrogen bonds observed within these single-crystalline starting materials.
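For reference, the two kinetic laws named in the abstract can be written for the fractional conversion α(t) as follows; these are the standard solid-state kinetics forms rather than expressions taken from the paper:

\[
  \text{first order:}\quad -\ln\bigl(1 - \alpha\bigr) = kt
  \;\Longleftrightarrow\; \alpha(t) = 1 - e^{-kt},
  \qquad
  \text{contracting cube:}\quad 1 - (1 - \alpha)^{1/3} = kt .
\]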

Relevance: 30.00%

Abstract:

Although the construction pollution index has been put forward and shown to be an effective approach to reducing or mitigating pollution levels during the construction planning stage, how to select the best construction plan by distinguishing the degree of its potential adverse environmental impacts remains an open research problem. This paper first reviews environmental issues and their characteristics in construction, which are critical factors in evaluating the potential adverse impacts of a construction plan. These environmental characteristics are then used to structure two decision models for environmentally conscious construction planning using an analytic network process (ANP): a complicated model and a simplified model. The two ANP models are combined into what is called the EnvironalPlanning system, which is applied to evaluate the potential adverse environmental impacts of alternative construction plans.
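The building block beneath both ANP models is a priority vector derived from a pairwise comparison matrix. The sketch below shows only that step (principal-eigenvector weighting of a Saaty-style 1-9 comparison matrix); the full ANP supermatrix with interdependent clusters used in the EnvironalPlanning system is not reproduced, and the example impacts are hypothetical.

import numpy as np

def priority_vector(A):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    taken as the normalized principal eigenvector."""
    vals, vecs = np.linalg.eig(A)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Illustrative comparison of three environmental impacts of a plan
# (dust, noise, waste water) on a 1-9 scale.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])
print(priority_vector(A))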

Relevance: 30.00%

Abstract:

The existing literature on lean construction is overwhelmingly prescriptive with little recognition of the social and politicised nature of the diffusion process. The prevailing production-engineering perspective too often assumes that organizations are unitary entities where all parties strive for the common goal of 'improved performance'. An alternative perspective is developed that considers the diffusion of lean construction across contested pluralistic arenas. Different actors mobilize different storylines to suit their own localized political agendas. Multiple storylines of lean construction continuously compete for attention with other management fashions. The conceptualization and enactment of lean construction therefore differs across contexts, often taking on different manifestations from those envisaged. However, such localized enactments of lean construction are patterned and conditioned by pre-existing social and economic structures over which individual managers have limited influence. Taking a broader view, 'leanness' can be conceptualized in terms of a quest for structural flexibility involving restructuring, downsizing and outsourcing. From this perspective, the UK construction industry can be seen to have embarked upon leaner ways of working in the mid-1970s, long before the terminology of lean thinking came into vogue. Semi-structured interviews with construction sector policy-makers provide empirical support for the view that lean construction is a multifaceted concept that defies universal definition.