928 results for temperature-based models


Relevance:

100.00%

Publisher:

Abstract:

Long-term forecasts of pest pressure are central to the effective management of many agricultural insect pests. In the eastern cropping regions of Australia, serious infestations of Helicoverpa punctigera (Wallengren) and H. armigera (Hübner) (Lepidoptera: Noctuidae) are experienced annually. Regression analyses of a long series of light-trap catches of adult moths were used to describe the seasonal dynamics of both species. The size of the spring generation in eastern cropping zones could be related to rainfall in putative source areas in inland Australia. Subsequent generations could be related to the abundance of various crops in agricultural areas, rainfall and the magnitude of the spring population peak. As rainfall figured prominently as a predictor variable, and can itself be predicted using the Southern Oscillation Index (SOI), trap catches were also related to this variable. The geographic distribution of each species was modelled in relation to climate, and CLIMEX was used to predict temporal variation in abundance at putative source sites in inland Australia using historical meteorological data. These predictions were then correlated with subsequent pest abundance data in a major cropping region. The regression-based and bioclimatic-based approaches to predicting pest abundance are compared, and their utility in predicting and interpreting pest dynamics is discussed.
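The regression-based approach summarised above can be sketched as an ordinary least squares fit of log trap catch on rainfall and SOI. All data values and the two-predictor form below are placeholders for illustration, not figures from the study:

```python
import numpy as np

# Hypothetical inputs: winter rainfall (mm) in an inland source region and
# the Southern Oscillation Index, paired with log-transformed spring
# light-trap catches. Values are placeholders, not the study's data.
rainfall = np.array([12.0, 85.0, 40.0, 150.0, 60.0, 110.0])
soi = np.array([-8.0, 5.0, -2.0, 12.0, 1.0, 7.0])
log_catch = np.array([1.1, 2.6, 1.8, 3.4, 2.1, 3.0])

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(rainfall), rainfall, soi])
coef, *_ = np.linalg.lstsq(X, log_catch, rcond=None)

def predict_log_catch(rain_mm, soi_value):
    """Predicted log spring trap catch from rainfall and SOI."""
    return coef[0] + coef[1] * rain_mm + coef[2] * soi_value
```

With coefficients fitted to historical trap data, the same form could then be evaluated against subsequent seasons, as the abstract describes.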


In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
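The narrowing mechanism the model relies on can be illustrated with a toy computation (this is not the published model; the blur scale, ramp slope and threshold are arbitrary): the gradient of a Gaussian-blurred edge is itself a Gaussian, an added linear ramp shifts that gradient by a constant, and a threshold-like transducer then passes a narrower suprathreshold region.

```python
import numpy as np

# Toy illustration only. The 1st derivative of a Gaussian-blurred edge is
# a Gaussian; a linear luminance ramp adds a constant to that gradient.
x = np.linspace(-10.0, 10.0, 2001)
sigma = 2.0                                  # edge blur scale (arbitrary)
grad_edge = np.exp(-x**2 / (2 * sigma**2))   # gradient of the edge alone
ramp_slope = -0.3                            # opposite-sign ramp (arbitrary)

def response_width(gradient, threshold=0.1):
    """Spatial extent of the suprathreshold transduced gradient response."""
    response = np.maximum(gradient - threshold, 0.0)  # threshold-like transducer
    return (response > 0).sum() * (x[1] - x[0])

w_plain = response_width(grad_edge)
w_ramped = response_width(grad_edge + ramp_slope)
# The ramp narrows the internal gradient representation, which the model
# would read out as reduced blur.
```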


Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give-way line of a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules change often and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses a feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1. © 2006 Springer-Verlag Berlin Heidelberg.
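The kind of parameterised offset rule described (a give-way line within 1100-3000 mm of the crossing edge) could be captured in code roughly as follows; the class and field names are illustrative and are not the paper's GML vocabulary:

```python
from dataclasses import dataclass

@dataclass
class OffsetRule:
    """A parameterised geometric rule: one feature must lie within a
    [min_mm, max_mm] offset band of another. Names are illustrative,
    not the paper's GML schema."""
    target: str
    reference: str
    min_mm: float
    max_mm: float

    def check(self, measured_offset_mm: float) -> bool:
        """True if a measured offset satisfies the rule's parameters."""
        return self.min_mm <= measured_offset_mm <= self.max_mm

# The UK pedestrian-crossing example from the text.
give_way_rule = OffsetRule("give_way_line", "crossing_edge", 1100.0, 3000.0)
```

Because the band limits are data rather than code, per-country variants of the regulation reduce to different parameter values, which is the flexibility the paper motivates.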


Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
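For context, a plain global-best PSO, the baseline the thesis builds on, can be sketched as below. The dispersive variant's mechanism for counteracting premature convergence is the thesis's contribution and is deliberately omitted here; the parameter values are conventional defaults, not the thesis's settings:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def pso_minimise(f, dim=2, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain global-best PSO. Each particle is pulled toward its own best
    position and the swarm's best; all particles clustering on one point
    is the premature convergence the dispersive variant addresses."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)  # simple convex test function
best, best_val = pso_minimise(sphere)
```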


The behaviour of self-adaptive systems can be emergent, which means that the system's behaviour may be seen as unexpected by its customers and its developers. Therefore, a self-adaptive system needs to garner the confidence of its customers, and it also needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, we propose the use of goal-based requirements models at runtime to offer self-explanation of how a system is meeting its requirements. We demonstrate the analysis of run-time requirements models to yield a self-explanation codified in a domain-specific language, and discuss possible future work.
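One way to picture a runtime goal model offering self-explanation, purely as an illustrative sketch (the goal names and the AND-decomposition below are invented, not the paper's notation or language):

```python
class Goal:
    """A node in a runtime goal model (illustrative only): a goal with
    sub-goals is satisfied when all of them are, and any goal can
    explain its own satisfaction status."""
    def __init__(self, name, subgoals=None, satisfied=True):
        self.name = name
        self.subgoals = subgoals or []
        self._satisfied = satisfied

    def satisfied(self):
        if self.subgoals:
            return all(g.satisfied() for g in self.subgoals)
        return self._satisfied

    def explain(self):
        """Indented, human-readable trace of requirement satisfaction."""
        status = "satisfied" if self.satisfied() else "NOT satisfied"
        lines = [f"{self.name}: {status}"]
        for g in self.subgoals:
            lines += ["  " + line for line in g.explain()]
        return lines

# Invented example: a top-level requirement with two monitored sub-goals.
model = Goal("MaintainService", [
    Goal("LowLatency", satisfied=False),
    Goal("HighAvailability"),
])
explanation = "\n".join(model.explain())
```

The point of the sketch is only the traceability: the explanation is derived from the same requirements model the system adapts against, rather than from its internal mechanics.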


2000 Mathematics Subject Classification: 60K15, 60K20, 60G20, 60J75, 60J80, 60J85, 60-08, 90B15.


The composition and distribution of diatom algae inhabiting estuaries and coasts of the subtropical Americas are poorly documented, especially relative to the central role diatoms play in coastal food webs and to their potential utility as sentinels of environmental change in these threatened ecosystems. Here, we document the distribution of diatoms among the diverse habitat types and long environmental gradients represented by the shallow topographic relief of the South Florida, USA, coastline. A total of 592 species were encountered from 38 freshwater, mangrove, and marine locations in the Everglades wetland and Florida Bay during two seasonal collections, with the highest diversity occurring at sites of high salinity and low water total organic carbon (WTOC) concentration. Freshwater, mangrove, and estuarine assemblages were compositionally distinct, but seasonal differences were only detected in mangrove and estuarine sites, where solute concentration differed greatly between wet and dry seasons. Epiphytic, planktonic, and sediment assemblages were compositionally similar, implying a high degree of mixing along the shallow, tidal, and storm-prone coast. The relationships between diatom taxa and salinity, water total phosphorus (WTP), water total nitrogen (WTN), and WTOC concentrations were determined and incorporated into weighted averaging partial least squares regression models. Salinity was the most influential variable, resulting in a highly predictive model (apparent r² = 0.97, jackknife r² = 0.95) that can be used in the future to infer changes in coastal freshwater delivery or sea-level rise in South Florida and compositionally similar environments. Models predicting WTN (apparent r² = 0.75, jackknife r² = 0.46), WTP (apparent r² = 0.75, jackknife r² = 0.49), and WTOC (apparent r² = 0.79, jackknife r² = 0.57) were also strong, suggesting that diatoms can provide reliable inferences of changes in solute delivery to the coastal ecosystem.
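The weighted-averaging step at the core of WA-PLS transfer functions can be sketched as follows; the species optima and counts are invented for illustration, not the study's calibration data:

```python
import numpy as np

# Weighted-averaging inference: an environmental value for a sample is
# estimated as the abundance-weighted mean of the optima of the taxa
# present. Optima and counts below are invented, not the study's data.
salinity_optima = {"sp_marine": 34.0, "sp_estuarine": 18.0, "sp_fresh": 0.5}
sample_counts = {"sp_marine": 120, "sp_estuarine": 60, "sp_fresh": 20}

def wa_estimate(optima, counts):
    """Abundance-weighted mean of species optima over shared taxa."""
    taxa = [t for t in counts if t in optima]
    weights = np.array([counts[t] for t in taxa], dtype=float)
    values = np.array([optima[t] for t in taxa])
    return float(np.average(values, weights=weights))

inferred_salinity = wa_estimate(salinity_optima, sample_counts)
```

The full WA-PLS procedure adds partial least squares components to correct the shrinkage of plain weighted averaging, but the optimum-weighting idea above is the part that makes the species optima reported in such studies directly usable.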


The main objective of physics-based modeling of the power converter components is to design the whole converter with respect to physical and operational constraints. Therefore, all the elements and components of the energy conversion system are modeled numerically and combined together to achieve the whole-system behavioral model. Previously proposed high frequency (HF) models of power converters are based on circuit models that are only related to the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain appropriate physics-based models for power conversion systems, which not only can represent the steady-state behavior of the components, but also can predict their high frequency characteristics. The developed physics-based model would represent the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables us to accurately develop components such as effective EMI filters, switching algorithms and circuit topologies [7]. One of the applications of the developed modeling technique is the design of new sets of topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best matching topology with an inherent reduction of switching losses, which can be utilized to improve the overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints. This includes physical characteristics such as decreasing the size and weight of the package, optimized interactions with the neighboring components and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions in addition to the design of attenuation measures and enclosures.


We developed diatom-based prediction models of hydrology and periphyton abundance to inform assessment tools for a hydrologically managed wetland. Because hydrology is an important driver of ecosystem change, hydrologic alterations by restoration efforts could modify biological responses, such as periphyton characteristics. In karstic wetlands, diatoms are particularly important components of mat-forming calcareous periphyton assemblages that both respond and contribute to the structural organization and function of the periphyton matrix. We examined the distribution of diatoms across the Florida Everglades landscape and found hydroperiod and periphyton biovolume were strongly correlated with assemblage composition. We present species optima and tolerances for hydroperiod and periphyton biovolume, for use in interpreting the directionality of change in these important variables. Predictions of these variables were mapped to visualize landscape-scale spatial patterns in a dominant driver of change in this ecosystem (hydroperiod) and an ecosystem-level response metric of hydrologic change (periphyton biovolume). Specific diatom assemblages inhabiting periphyton mats of differing abundance can be used to infer past conditions and inform management decisions based on how assemblages are changing. This study captures diatom responses to wide gradients of hydrology and periphyton characteristics to inform ecosystem-scale bioassessment efforts in a large wetland.


The rainbow smelt (Osmerus mordax) is an anadromous teleost that produces type II antifreeze protein (AFP) and accumulates modest urea and high glycerol levels in plasma and tissues as adaptive cryoprotectant mechanisms in sub-zero temperatures. It is known that glyceroneogenesis occurs in liver via a branch in glycolysis and gluconeogenesis and is activated by low temperature; however, the precise mechanisms of glycerol synthesis and trafficking in smelt remain to be elucidated. The objective of this thesis was to provide further insight using functional genomic techniques [e.g. suppression subtractive hybridization (SSH) cDNA library construction, microarray analyses] and molecular analyses [e.g. cloning, quantitative reverse transcription-polymerase chain reaction (QPCR)]. Novel molecular mechanisms related to glyceroneogenesis were deciphered by comparing the transcript expression profiles of glycerol (cold temperature) and non-glycerol (warm temperature) accumulating hepatocytes (Chapter 2) and livers from intact smelt (Chapter 3). Briefly, glycerol synthesis can be initiated from both amino acids and carbohydrate; however, carbohydrate appears to be the preferred source when it is readily available. In glycerol-accumulating hepatocytes, levels of the hepatic glucose transporter (GLUT2) plummeted and transcript levels of a suite of genes (PEPCK, MDH2, AAT2, GDH and AQP9) associated with the mobilization of amino acids to fuel glycerol synthesis were all transiently higher. In contrast, in glycerol-accumulating livers from intact smelt, glycerol synthesis was primarily fuelled by glycogen degradation, with higher PGM and PFK (glycolysis) transcript levels. Whether initiated from amino acids or carbohydrate, there were common metabolic underpinnings. Increased PDK2 (an inhibitor of PDH) transcript levels would direct pyruvate derived from amino acids and/or DHAP derived from G6P to glycerol as opposed to oxidation via the citric acid cycle. Robust LIPL (triglyceride catabolism) transcript levels would provide free fatty acids that could be oxidized to fuel ATP synthesis. Increased cGPDH (glyceroneogenesis) transcript levels were not required for increased glycerol production, suggesting that regulation is more likely by post-translational modification. Finally, levels of a transcript potentially encoding glycerol-3-phosphatase, an enzyme not yet characterized in any vertebrate species, were transiently higher. These comparisons also led to the novel discoveries that increased G6Pase (glucose synthesis) and increased GS (glutamine synthesis) transcript levels were part of the low-temperature response in smelt. Glucose may provide increased colligative protection against freezing, whereas glutamine could serve to store nitrogen released from amino acid catabolism in a non-toxic form and/or be used to synthesize urea via purine synthesis-uricolysis. Novel key aspects of cryoprotectant osmolyte (glycerol and urea) trafficking were elucidated by cloning and characterizing three aquaglyceroporin (GLP)-encoding genes from smelt at the gene and cDNA levels in Chapter 4. GLPs are integral membrane proteins that facilitate passive movement of water, glycerol and urea across cellular membranes. The highlight was the discovery that AQP10ba transcript levels increase in posterior kidney only at low temperature. This AQP10b gene paralogue may have evolved to aid in the reabsorption of urea from the proximal tubule. This research has contributed significantly to a general understanding of the cold adaptation response in smelt, and more specifically to the development of a working scenario for the mechanisms involved in glycerol synthesis and trafficking in this species.
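Transcript-level comparisons like those above are commonly quantified with the 2^-ΔΔCt (Livak) method; the abstract does not state the quantification method or reference gene used, so the sketch below is generic and its Ct values are placeholders:

```python
def fold_change(ct_target_cold, ct_ref_cold, ct_target_warm, ct_ref_warm):
    """Relative expression by the 2^-ddCt (Livak) method: the target
    transcript's level in cold-acclimated tissue relative to warm tissue,
    normalised to a reference gene. Ct values are qPCR cycle thresholds,
    so lower Ct means more transcript."""
    d_ct_cold = ct_target_cold - ct_ref_cold
    d_ct_warm = ct_target_warm - ct_ref_warm
    return 2.0 ** -(d_ct_cold - d_ct_warm)

# Placeholder Ct values: the target amplifies 3 cycles earlier in cold
# tissue at equal reference-gene Ct, i.e. an 8-fold increase.
cold_induction = fold_change(22.0, 18.0, 25.0, 18.0)
```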


In establishing the reliability of performance-related design methods for concrete, which are relevant for resistance against chloride-induced corrosion, long-term experience of local materials and practices and detailed knowledge of the ambient and local micro-climate are critical. Furthermore, in the development of analytical models for performance-based design, calibration against test data representative of actual conditions in practice is required. To this end, the current study presents results from full-scale concrete pier-stems under long-term exposure to a marine environment, with work focussing on XS2 (below mid-tide level), in which the concrete is regarded as fully saturated, and XS3 (tidal, splash and spray), in which the concrete is in an unsaturated condition. These exposures represent zones where concrete structures are most susceptible to ionic ingress and deterioration. Chloride profiles and chloride transport behaviour are studied using both an empirical model (erfc function) and a physical model (ClinConc). The time dependency of surface chloride concentration (Cs) and apparent diffusivity (Da) was established for the empirical model, whereas in the ClinConc model (originally based on saturated concrete), two new environmental factors were introduced for the XS3 environmental exposure zone. Although XS3 is considered a single environmental exposure zone according to BS EN 206-1:2013, the work has highlighted that even within this zone, significant changes in chloride ingress are evident. This study aims to update the parameters of both models for predicting the long-term transport behaviour of concrete subjected to environmental exposure classes XS2 and XS3.
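The empirical erfc model referred to above takes the familiar form C(x, t) = Cs · erfc(x / (2·√(Da·t))); a minimal sketch, with illustrative parameter values rather than the pier-stem data, and with Cs and Da held constant even though the study fits their time dependence:

```python
import math

def chloride_content(x_mm, t_years, cs, da_mm2_per_year):
    """Empirical error-function ingress model: chloride content at depth
    x after exposure time t, given surface concentration Cs and apparent
    diffusivity Da, both assumed constant over the interval.
        C(x, t) = Cs * erfc( x / (2 * sqrt(Da * t)) )"""
    return cs * math.erfc(x_mm / (2.0 * math.sqrt(da_mm2_per_year * t_years)))

# Illustrative numbers only (not the study's fitted values): Cs = 0.5 %
# by mass of concrete, Da = 30 mm^2/year, depth 50 mm, 20 years exposed.
c_at_50mm = chloride_content(50.0, 20.0, 0.5, 30.0)
```

Fitting Cs(t) and Da(t) to measured profiles, as the study does, amounts to choosing those two parameters per exposure duration so the curve matches the depth profiles.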


The resilience of a social-ecological system is measured by its ability to retain core functionality when subjected to perturbation. Resilience is contextually dependent on the state of system components, the complex interactions among these components, and the timing, location, and magnitude of perturbations. The stability landscape concept provides a useful framework for considering resilience within the specified context of a particular social-ecological system but has proven difficult to operationalize. This difficulty stems largely from the complex, multidimensional nature of the systems of interest and uncertainty in system response. Agent-based models are an effective methodology for understanding how cross-scale processes within and across social and ecological domains contribute to overall system resilience. We present the results of a stylized model of agricultural land use in a small watershed that is typical of the Midwestern United States. The spatially explicit model couples land use, biophysical models, and economic drivers with an agent-based model to explore the effects of perturbations and policy adaptations on system outcomes. By applying the coupled modeling approach within the resilience and stability landscape frameworks, we (1) estimate the sensitivity of the system to context-specific perturbations, (2) determine potential outcomes of those perturbations, (3) identify possible alternative states within state space, (4) evaluate the resilience of system states, and (5) characterize changes in system-scale resilience brought on by changes in individual land use decisions.
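A drastically simplified stand-in for the kind of agent-based perturbation experiment described, with invented profit rules and prices, purely to show the mechanics of comparing pre- and post-perturbation system states:

```python
import random

random.seed(1)  # reproducible sketch

class FarmerAgent:
    """Stylised land-use agent: chooses the use with the higher expected
    profit, with an idiosyncratic cost offset. A toy stand-in for the
    coupled land-use/biophysical/economic model described above."""
    def __init__(self):
        self.cost_offset = random.uniform(-20.0, 20.0)

    def choose(self, corn_price, grass_subsidy):
        corn_profit = 100.0 * corn_price - 250.0 - self.cost_offset
        grass_profit = grass_subsidy
        return "corn" if corn_profit > grass_profit else "grassland"

agents = [FarmerAgent() for _ in range(500)]

def corn_share(corn_price, grass_subsidy=80.0):
    """System-scale outcome: fraction of agents choosing row crops."""
    choices = [a.choose(corn_price, grass_subsidy) for a in agents]
    return choices.count("corn") / len(choices)

baseline = corn_share(corn_price=3.5)    # pre-perturbation state
perturbed = corn_share(corn_price=2.5)   # a price-shock perturbation
```

Sweeping the perturbation magnitude and recording where the system-scale outcome shifts abruptly is the toy analogue of locating basin boundaries on a stability landscape.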


In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model by using standard methods. As a case study we have chosen the retail sector, and here in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
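A minimal discrete event simulation of a fitting-room queue, in the spirit of (but far simpler than) the case-study models; the arrival and try-on time distributions and all parameter values are assumptions for illustration:

```python
import heapq
import random

random.seed(42)  # reproducible sketch

def simulate_fitting_room(n_rooms, n_customers, mean_interarrival, mean_try_on):
    """Minimal discrete event simulation of a fitting-room queue:
    customers arrive at random, wait for the earliest free room, occupy
    it for a random try-on time, and leave. Returns the mean wait.
    Exponential distributions and all parameters are assumptions."""
    arrivals, t = [], 0.0
    for _ in range(n_customers):
        t += random.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    free_at = [0.0] * n_rooms      # min-heap of times each room frees up
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrive in arrivals:
        room_free = heapq.heappop(free_at)
        start = max(arrive, room_free)
        total_wait += start - arrive
        heapq.heappush(free_at, start + random.expovariate(1.0 / mean_try_on))
    return total_wait / n_customers

# A policy comparison of the kind the paper studies: does adding rooms
# cut the average wait under the same demand?
wait_2_rooms = simulate_fitting_room(2, 500, mean_interarrival=2.0, mean_try_on=5.0)
wait_4_rooms = simulate_fitting_room(4, 500, mean_interarrival=2.0, mean_try_on=5.0)
```

An agent-based version of the same system would instead give each customer and staff member individual reactive rules; the paper's finding is that, for this case study, both approaches support the policy comparison equally well.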