917 results for Intention-based models


Relevance: 90.00%

Abstract:

In this article, we describe and compare two individual-based models constructed to investigate how genetic factors influence the development of phosphine resistance in the lesser grain borer (R. dominica). One model is based on the simplifying assumption that resistance is conferred by alleles at a single locus, while the other is based on the more realistic assumption that resistance is conferred by alleles at two separate loci. We simulated the population dynamics of R. dominica in the absence of phosphine fumigation, and under high- and low-dose phosphine treatments, and found important differences between the predictions of the two models in all three cases. In the absence of fumigation, starting from the same initial genotype frequencies, the two models tended to different stable frequencies, although both reached Hardy-Weinberg equilibrium. The one-locus model overestimated the equilibrium proportion of strongly resistant beetles by a factor of 3.6 compared to the aggregated predictions of the two-locus model. Under a low-dose treatment, the one-locus model overestimated the proportion of strongly resistant individuals within the population and underestimated total population numbers compared to the two-locus model. These results show the importance of basing resistance evolution models on realistic genetics: using oversimplified one-locus models to develop pest control strategies risks failing to identify the tactics that minimise the incidence of pest infestation.
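
As a rough illustration of the bookkeeping such individual-based genetic models involve, the sketch below iterates the allele frequency at a single resistance locus under random mating (Hardy-Weinberg proportions) followed by phosphine-induced viability selection. All parameter values are hypothetical placeholders, not values from the article.

```python
def one_locus_generation(p, fitness):
    """Advance the resistance-allele frequency one generation:
    random mating to Hardy-Weinberg genotype proportions,
    then viability selection under fumigation.

    p       : frequency of the resistance allele R
    fitness : survival of genotypes (w_SS, w_RS, w_RR)
    """
    q = 1.0 - p
    f_SS, f_RS, f_RR = q * q, 2 * p * q, p * p   # Hardy-Weinberg proportions
    w_SS, w_RS, w_RR = fitness
    mean_w = f_SS * w_SS + f_RS * w_RS + f_RR * w_RR
    # Frequency of R among survivors
    return (f_RR * w_RR + 0.5 * f_RS * w_RS) / mean_w

# Hypothetical values: rare resistance allele, strong high-dose selection
p = 0.01
for _ in range(20):
    p = one_locus_generation(p, fitness=(0.05, 0.10, 0.90))
print(f"R-allele frequency after 20 generations: {p:.3f}")
```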

Relevance: 90.00%

Abstract:

Multi- and intralake datasets of fossil midge assemblages in the surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with the physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important. Furthermore, when the dataset was geographically scaled to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the results of the intralake dataset showed that the fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, thus enabling the exploration of within-lake gradients in midge assemblages. Importantly, this within-lake heterogeneity in midge assemblages may affect midge-based temperature estimates, because samples taken from the deepest point of a lake basin may infer considerably colder temperatures than expected, as shown by the present test results. Therefore, it is suggested here that samples in fossil midge studies involving shallow boreal lakes should be taken from the sublittoral, where the assemblages are most representative of the whole-lake fauna. Transfer functions between midge assemblages and the environmental forcing factors significantly related to the assemblages, including mean TJul, water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all the other numerical approaches tested. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agreed with previous studies and historical evidence, the MCA between ca. 800 and 1300 AD in eastern Finland was characterized by warm temperatures and dry summers, but probably humid winters. The LIA prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland the cold conditions prevailed over a longer period, from ca. 1300 until 1900 AD. The recent climatic warming was clearly represented in all of the temperature reconstructions.
In terms of long-term climatology, the present results support the concept that the North Atlantic Oscillation (NAO) index correlates positively with winter precipitation and annual temperature, and negatively with summer precipitation, in eastern Finland. In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA, and a cool climate with rainy summers and dry winters during the LIA. The results of the present reconstructions and forthcoming applications of the models can be used in assessments of long-term environmental dynamics to refine the understanding of the past environmental reference conditions and natural variability that environmental scientists, ecologists and policy makers require to make decisions concerning ongoing global, regional and local changes. The midge-based models developed in this thesis for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow are available for scientific use on request.
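
A minimal sketch of the weighted-averaging (WA) calibration mentioned above, under the usual two assumptions: a taxon's optimum is the abundance-weighted mean of the environmental variable across training lakes, and a fossil reconstruction is the abundance-weighted mean of those optima. The assemblage data and temperatures below are invented placeholders; a real application would also deshrink and cross-validate.

```python
import numpy as np

# Rows = training lakes, columns = midge taxa (relative abundances); made-up data
Y = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
t_jul = np.array([11.0, 13.5, 16.0])  # observed mean July air temperature per lake

# WA step 1: taxon optima = abundance-weighted means of temperature
optima = (Y * t_jul[:, None]).sum(axis=0) / Y.sum(axis=0)

# WA step 2: infer temperature for a fossil sample from its assemblage
fossil = np.array([0.5, 0.4, 0.1])
t_hat = (fossil * optima).sum() / fossil.sum()
print(f"Inferred mean July air temperature: {t_hat:.1f} degC")
```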

Relevance: 90.00%

Abstract:

In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. Sensitivity of the numerical predictions to grid refinement, differing convective differencing schemes and the choice of (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as model predictions proved to be significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone predicted by the models over an extended range of inlet conditions. Our main findings are: (i) the standard k-epsilon model performed best compared with experiment; (ii) no one inlet specification can simultaneously optimize the performance of the models considered; (iii) the RNG models predict both single-cell and double-cell internal recirculation zone (IRZ) characteristics, the latter both with and without additional internal stagnation points. The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models a priori for optimal performance in swirl flows is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
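
For orientation, both the standard and RNG k-epsilon models compute the turbulent (eddy) viscosity from the same Boussinesq relation and differ mainly in their model constants and in an extra strain-dependent term in the epsilon equation. The sketch below uses the usual textbook constants, not values tuned for this burner.

```python
def eddy_viscosity(rho, k, eps, c_mu=0.09):
    """Eddy viscosity mu_t = rho * C_mu * k**2 / eps, shared by the
    standard (C_mu = 0.09) and RNG (C_mu ~ 0.0845) k-epsilon models."""
    return rho * c_mu * k**2 / eps

# Illustrative near-inlet values for swirling air: k in m^2/s^2, eps in m^2/s^3
print(eddy_viscosity(rho=1.2, k=0.5, eps=10.0), "kg/(m*s)")
```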

Relevance: 90.00%

Abstract:

A health-monitoring and life-estimation strategy for composite rotor blades is developed in this work. The cross-sectional stiffness reduction obtained from physics-based models is expressed as a function of the life of the structure using a recent phenomenological damage model. This stiffness reduction is further used to study the behavior of measurable system parameters such as blade deflections, loads, and strains of a composite rotor blade in static analysis and in forward flight. The simulated measurements are obtained using an aeroelastic analysis of the composite rotor blade based on the finite element in space and time, with physics-based damage models that are then linked to the life consumption of the blade. The model-based measurements are contaminated with noise to simulate real data. Genetic fuzzy systems are developed for global online prediction of physical damage and life consumption using displacement- and force-based measurement deviations between damaged and undamaged conditions. Furthermore, local online prediction of physical damage and life consumption is performed using strains measured along the blade length. It is observed that life consumption in the matrix-cracking zone is about 12-15%, and life consumption in the debonding/delamination zone is about 45-55%, of the total life of the blade. It is also observed that the success rate of the genetic fuzzy systems depends upon the number and type of measurements and on the training and testing noise levels. The genetic fuzzy systems work quite well with noisy data and are recommended for online structural health monitoring of composite helicopter rotor blades.
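
A genetic fuzzy system couples fuzzy inference with a genetic algorithm that tunes the membership functions against training data. The sketch below shows only the inference half under simplified assumptions: a single normalized measurement deviation is mapped to a damage class through triangular membership functions whose breakpoints and class labels are hypothetical, not those of the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_damage(deviation):
    """Map a normalized deviation between damaged and undamaged blade
    measurements to the damage class with the largest membership.
    Breakpoints are hypothetical; in a genetic fuzzy system a genetic
    algorithm would tune them against training data."""
    memberships = {
        "no damage":              triangular(deviation, -0.1, 0.0, 0.2),
        "matrix cracking":        triangular(deviation,  0.0, 0.3, 0.6),
        "debonding/delamination": triangular(deviation,  0.4, 0.7, 1.1),
    }
    return max(memberships, key=memberships.get)

print(classify_damage(0.35))  # -> matrix cracking
```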

Relevance: 90.00%

Abstract:

This paper deals with the quasi-static and dynamic mechanical analysis of montmorillonite-filled polypropylene composites. Nanocomposites were prepared by blending montmorillonite (nanoclay), varying from 3 to 9% by weight, with polypropylene. The dynamic mechanical properties of PP and the nanocomposites, such as storage modulus, loss modulus and mechanical loss factor, were investigated over a range of temperatures and frequencies. Results showed better mechanical and thermomechanical properties at higher nanoclay concentrations. Regression models were developed through design of experiments (DOE) to predict the storage modulus, and the DOE-based predictions were compared with those of theoretical models.
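
As a sketch of the regression step, the snippet below fits a first-order model for storage modulus over a hypothetical factorial design in nanoclay content, temperature, and frequency by ordinary least squares. The design points and responses are fabricated placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical 2^3 factorial design: nanoclay (wt%), temperature (degC), frequency (Hz)
X = np.array([[3, 30, 1], [3, 30, 10], [3, 90, 1], [3, 90, 10],
              [9, 30, 1], [9, 30, 10], [9, 90, 1], [9, 90, 10]], dtype=float)
E_store = np.array([1.9, 2.0, 1.2, 1.3, 2.5, 2.6, 1.6, 1.8])  # GPa, made up

# First-order DOE regression: E' = b0 + b1*clay + b2*T + b3*f
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, E_store, rcond=None)
print("intercept and main effects:", np.round(coef, 4))
```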

Relevance: 90.00%

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a program better suited to highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
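
The free-vibration check described above reduces to a standard calculation: successive displacement peaks of the decaying response give the damping ratio via the logarithmic decrement. A minimal sketch with a synthetic peak series:

```python
import numpy as np

def damping_from_peaks(peaks):
    """Damping ratio from successive free-vibration peak amplitudes,
    using the logarithmic decrement delta = ln(x_i / x_{i+1}) and
    zeta = delta / sqrt(4*pi^2 + delta^2)."""
    delta = np.log(peaks[:-1] / peaks[1:]).mean()
    return delta / np.sqrt(4 * np.pi**2 + delta**2)

# Synthetic peaks for a lightly damped (~2%) structure
peaks = np.array([1.000, 0.882, 0.778, 0.686])
print(f"estimated damping ratio: {damping_from_peaks(peaks):.3f}")
```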

Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.

Relevance: 90.00%

Abstract:

In this paper we explore the possibility of using the equations of a well-known compact model for CMOS transistors as a parameterized compact model for a variety of FET-based nanotechnology devices. This can turn out to be a practical preliminary solution for system-level architecture researchers, who could behaviourally simulate large-scale systems while more physically based models become available for each new device. We have used a four-parameter version of the EKV model equations and verified that the fitting errors are similar to those obtained when using them for standard CMOS FET transistors. The model has been used to fit measured data from three types of FET nanotechnology devices obeying different physics, at different fabrication steps, and under different programming conditions.
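
For context, a reduced form of the EKV long-channel current equation is sketched below with three fitted parameters plus the thermal voltage; the abstract does not specify which four parameters were retained, so this parameterization is an assumption for illustration. Fitting measured I-V data then becomes an ordinary curve-fitting problem.

```python
import numpy as np

U_T = 0.0259  # thermal voltage at room temperature, V

def ekv_current(vg, vt0, n, i_s):
    """Long-channel EKV forward current with the source grounded:
    I_D = I_S * ln(1 + exp((VG - VT0) / (2*n*U_T)))**2,
    one smooth expression spanning weak to strong inversion."""
    return i_s * np.log1p(np.exp((vg - vt0) / (2 * n * U_T)))**2

# Illustrative bias point; real parameters would come from a fit, e.g.
# scipy.optimize.curve_fit(ekv_current, vg_meas, id_meas, p0=[0.4, 1.3, 1e-7])
print(ekv_current(vg=0.9, vt0=0.4, n=1.3, i_s=1e-7), "A")
```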

Relevance: 90.00%

Abstract:

This paper presents the steps in, and the challenges of, implementing analytical, physics-based models of the insulated gate bipolar transistor (IGBT) and the PIN diode in hardware, specifically in field-programmable gate arrays (FPGAs). The models can be utilised in hardware co-simulation of complex power electronic converters and entire power systems in order to reduce simulation time without compromising the accuracy of the results. Such co-simulation allows reliable prediction of the system's performance as well as accurate investigation of the power devices' behaviour during operation. Ultimately, this will allow application-specific optimisation of device structures and circuit topologies, as well as enhancement of control and/or protection schemes.
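
Physics-based IGBT and PIN-diode models of this class typically revolve around solving the ambipolar diffusion equation for the excess-carrier profile in the drift region. The sketch below shows one explicit finite-difference update of that equation, the kind of fixed-step kernel that maps naturally onto an FPGA pipeline; the grid, coefficients, and boundary handling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ade_step(p, dx, dt, D=18e-4, tau=5e-6):
    """One explicit finite-difference step of the ambipolar diffusion
    equation dp/dt = D * d2p/dx2 - p/tau over the drift region.
    Boundary values (set by the junction currents) are assumed to be
    imposed elsewhere. Explicit stability requires dt <= dx**2 / (2*D)."""
    p_new = p.copy()
    p_new[1:-1] += dt * (D * (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
                         - p[1:-1] / tau)
    return p_new

x = np.linspace(0.0, 100e-6, 51)   # 100-um drift region, 51 nodes
p = np.full_like(x, 1e22)          # illustrative carrier density, m^-3
p = ade_step(p, dx=x[1] - x[0], dt=1e-9)
```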

Relevance: 90.00%

Abstract:

K. Rasmani and Q. Shen. Subsethood-based fuzzy modelling and classification. Proceedings of the 2004 UK Workshop on Computational Intelligence, pages 181-188.

Relevance: 90.00%

Abstract:

INTRODUCTION: We previously reported models that characterized the synergistic interaction between remifentanil and sevoflurane in blunting responses to verbal and painful stimuli. This preliminary study evaluated the ability of these models to predict a return of responsiveness during emergence from anesthesia and a response to tibial pressure when patients required analgesics in the recovery room. We hypothesized that model predictions would be consistent with observed responses. We also hypothesized that under non-steady-state conditions, accounting for the lag time between the sevoflurane effect-site concentration (Ce) and the end-tidal (ET) concentration would improve predictions. METHODS: Twenty patients received a sevoflurane, remifentanil, and fentanyl anesthetic. Two model predictions of responsiveness were recorded at emergence: an ET-based and a Ce-based prediction. Similarly, two predictions of a response to noxious stimuli were recorded when patients first required analgesics in the recovery room. Model predictions were compared with observations using graphical and temporal analyses. RESULTS: While patients were anesthetized, model predictions indicated a high likelihood that patients would be unresponsive (≥99%). However, after termination of the anesthetic, the models exhibited a wide range of predictions at emergence (1%-97%). Although wide, the Ce-based predictions of responsiveness were better distributed over a percentage ranking of observations than the ET-based predictions. For the ET-based model, 45% of the patients awoke within 2 min of the 50% model-predicted probability of unresponsiveness and 65% awoke within 4 min. For the Ce-based model, 45% of the patients awoke within 1 min of the 50% model-predicted probability of unresponsiveness and 85% awoke within 3.2 min. Predictions of a response to a painful stimulus in the recovery room were similar for the Ce- and ET-based models. DISCUSSION: The results confirmed, in part, our study hypothesis: accounting for the lag time between Ce and ET sevoflurane concentrations improved model predictions of responsiveness but had no effect on predicting a response to a noxious stimulus in the recovery room. These models may be useful in predicting events of clinical interest, but large-scale evaluations with numerous patients are needed to better characterize model performance.
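
The Ce/ET lag at the heart of the comparison is conventionally modeled as a first-order effect-site compartment, dCe/dt = ke0 * (C - Ce). A minimal sketch follows; the ke0 value is illustrative, not a published sevoflurane constant.

```python
def effect_site(measured, dt, ke0=0.4):
    """First-order effect-site model dCe/dt = ke0 * (C - Ce),
    integrated with forward Euler. `measured` holds end-tidal (or
    plasma) concentrations sampled every dt minutes; ke0 (1/min)
    is an illustrative value, not a published sevoflurane constant."""
    ce, trace = 0.0, []
    for c in measured:
        ce += dt * ke0 * (c - ce)
        trace.append(round(ce, 3))
    return trace

# Step down in end-tidal concentration at emergence: Ce lags behind ET
print(effect_site([2.0] * 10 + [0.0] * 10, dt=0.5))
```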

Relevance: 90.00%

Abstract:

Economic analyses of climate change policies frequently focus on reductions of energy-related carbon dioxide emissions via market-based, economy-wide policies. The current course of the environment and energy policy debate in the United States, however, suggests an alternative outcome: sector-based and/or inefficiently designed policies. This paper uses a collection of specialized, sector-based models in conjunction with a computable general equilibrium model of the economy to examine and compare these policies at an aggregate level. We examine the relative cost of different policies designed to achieve the same quantity of emission reductions. We find that excluding a limited number of sectors from an economy-wide policy does not significantly raise costs. Focusing policy solely on the electricity and transportation sectors doubles costs, however, and using non-market policies can raise costs by a factor of ten. These results are driven in part by, and are sensitive to, our modeling of pre-existing tax distortions.
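
The intuition behind these cost rankings can be shown with a toy two-sector example using quadratic abatement costs (all numbers invented): an economy-wide policy equalizes marginal abatement costs across sectors, while a sector-only policy forces all abatement into one sector regardless of cost.

```python
# Two sectors with abatement cost C_i(a) = 0.5 * c_i * a**2,
# hence marginal cost MC_i(a) = c_i * a. Target: abate A units in total.
c1, c2, A = 1.0, 3.0, 10.0

# Economy-wide policy: equalize marginal costs, c1*a1 = c2*a2 with a1 + a2 = A
a1 = A * c2 / (c1 + c2)
a2 = A - a1
cost_wide = 0.5 * c1 * a1**2 + 0.5 * c2 * a2**2

# Sector-only policy: the full target is forced into the high-cost sector
cost_sector = 0.5 * c2 * A**2

print(cost_wide, cost_sector)  # 37.5 vs 150.0 -> the restricted policy costs 4x more
```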

Relevance: 90.00%

Abstract:

We synthesise and update results from the suite of biophysical, larval-dispersal models developed in the Benguela Current ecosystem. Biophysical models of larval dispersal use outputs of physical hydrodynamic models as inputs to individual-based models in which biological processes acting during the larval life are included. In the Benguela, such models were first applied to simulate the dispersal of anchovy Engraulis encrasicolus and sardine Sardinops sagax ichthyoplankton, and more recently of the early life stages of chokka-squid Loligo reynaudii and Cape hakes Merluccius spp. We identify how the models have helped advance understanding of key processes for these species. We then discuss which aspects of the early life of marine species in the Benguela Current ecosystem are still not well understood and could benefit from new modelling studies.
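
At their core, such biophysical models release virtual eggs and larvae into archived current fields and advect them while biological rules (growth, mortality, settlement) act on each individual. A minimal forward-Euler advection step, with invented positions and velocities, might look like this:

```python
import numpy as np

def advect(lon, lat, u, v, dt_s):
    """Advance a virtual larva one time step. u, v are eastward and
    northward velocities (m/s) that would be interpolated from an
    archived hydrodynamic model at the particle position; metres are
    converted to degrees with a simple flat-earth factor."""
    m_per_deg = 111_000.0
    new_lat = lat + v * dt_s / m_per_deg
    new_lon = lon + u * dt_s / (m_per_deg * np.cos(np.radians(lat)))
    return new_lon, new_lat

# Hypothetical particle off the South African west coast, one-hour step
print(advect(lon=17.5, lat=-32.0, u=0.2, v=0.1, dt_s=3600.0))
```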

Relevance: 90.00%

Abstract:

We investigated 32 net primary productivity (NPP) models by assessing their skill in reproducing integrated NPP in the Arctic Ocean. The models were provided with two sources each of surface chlorophyll-a concentration (chlorophyll), photosynthetically available radiation (PAR), sea surface temperature (SST), and mixed-layer depth (MLD). The models were most sensitive to uncertainties in surface chlorophyll, generally performing better with in situ chlorophyll than with satellite-derived values. They were much less sensitive to uncertainties in PAR, SST, and MLD, possibly due to the relatively narrow ranges of the input data and/or relatively little difference between input data sources. Regardless of type or complexity, most of the models were not able to fully reproduce the variability of in situ NPP, whereas some of them exhibited almost no bias (i.e., they reproduced the mean of in situ NPP). The models performed relatively well in low-productivity seasons as well as in sea ice-covered/deep-water regions. Depth-resolved models correlated more strongly with in situ NPP than other model types but had a greater tendency to overestimate mean NPP, whereas absorption-based models exhibited the lowest bias, albeit with weaker correlation. The models performed better when a subsurface chlorophyll-a maximum (SCM) was absent. As a group, the models overestimated mean NPP; however, this was partly offset by some models underestimating NPP when an SCM was present. Our study suggests that NPP models need to be carefully tuned for the Arctic Ocean, because most of the models that performed relatively well were those that used Arctic-relevant parameters.
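
Skill in such intercomparisons is commonly summarized by the bias and correlation of log-transformed NPP, since productivity spans orders of magnitude. A minimal sketch with placeholder arrays:

```python
import numpy as np

def skill(model_npp, insitu_npp):
    """Bias and Pearson correlation of log10-transformed NPP,
    typical summary statistics in NPP model intercomparisons."""
    m, o = np.log10(model_npp), np.log10(insitu_npp)
    return np.mean(m - o), np.corrcoef(m, o)[0, 1]

model = np.array([120.0, 300.0, 80.0, 450.0])    # placeholder, mg C m-2 d-1
insitu = np.array([100.0, 250.0, 90.0, 500.0])
bias, r = skill(model, insitu)
print(f"log10 bias = {bias:.3f}, r = {r:.3f}")
```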

Relevance: 90.00%

Abstract:

Drawing on a model of spectator understanding of error in performance from the literature, we document a qualitative experiment that explores the relationships between domain knowledge, mental models, intention, and error recognition by spectators of performances with electronic instruments. Participants saw two performances with contrasting instruments, with controls on their mental model and understanding of intention. Based on data from a subsequent structured interview, we identify themes in participants' judgements and understanding of performance and in the explanations of their spectator experience. These reveal elements of both similarity and difference between the two performances and instruments, and between domain knowledge groups. From these, we suggest and discuss implications for the design of novel performative interactions with technology.

Relevance: 90.00%

Abstract:

A companion paper described the partial-interaction localised properties that require the development of pseudo properties. If the quantification through experimental testing of these pseudo properties could be removed by the use of mechanics-based models, which is the subject of this paper, then this would: (a) substantially reduce the cost of developing new reinforced concrete products by reducing the amount of testing; (b) increase the accuracy of designing existing and novel reinforced concrete members and structures, bearing in mind that experimentally derived pseudo properties are only applicable within the range of the testing from which they were derived; and (c) reduce the cost and increase the accuracy of developing reinforced concrete design rules. This paper deals with the development of pseudo properties and behaviours directly through mechanics, as opposed to experimental testing, and their incorporation into member global simulations. It also addresses the need for a fundamental shift to displacement-based analyses as opposed to strain-based analyses.
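
As a flavour of the mechanics alluded to, partial-interaction analyses derive member behaviour from a bond-slip law along the reinforcement rather than from experimentally derived pseudo properties. With a linear bond-slip law, the governing slip equation has a closed-form decaying solution near a crack; the sketch below evaluates it with illustrative (not design) values.

```python
import numpy as np

# Slip along a reinforcing bar near a crack face. With a linear bond-slip
# law tau = k_bond * delta, the governing partial-interaction equation
#   d2(delta)/dx2 = lam**2 * delta,
#   lam**2 = k_bond * L_per * (1/EA_steel + 1/EA_concrete),
# has the decaying solution delta(x) = delta_0 * exp(-lam * x).
k_bond = 40e9                    # bond stiffness per unit slip, N/m^3 (illustrative)
l_per = 0.15                     # bonded bar perimeter, m
ea_steel, ea_conc = 80e6, 6e9    # axial rigidities E*A, N

lam = np.sqrt(k_bond * l_per * (1.0 / ea_steel + 1.0 / ea_conc))
x = np.linspace(0.0, 0.2, 5)           # distance from the crack face, m
delta = 0.1e-3 * np.exp(-lam * x)      # slip profile for a 0.1 mm crack-face slip
print(np.round(delta * 1e3, 4), "mm")
```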