98 results for Generalization of Ehrenfest’s urn Model
Abstract:
Successful pest management is often hindered by the inherent complexity of the interactions of a pest with its environment. The use of genetically characterized model plants can allow investigation of chosen aspects of these interactions by limiting the number of variables during experimentation. However, it is important to study the generic nature of these model systems if the data generated are to be assessed in a wider context, for instance, with those systems of commercial significance. This study assesses the suitability of Arabidopsis thaliana (L.) Heynh. (Brassicaceae) as a model host plant to investigate plant-herbivore-natural enemy interactions, with Plutella xylostella (L.) (Lepidoptera: Plutellidae), the diamondback moth, and Cotesia plutellae (Kurdjumov) (Hymenoptera: Braconidae), a parasitoid of P. xylostella. The growth and development of P. xylostella and C. plutellae on an A. thaliana host plant (Columbia type) were compared to those on Brassica rapa var. pekinensis (L.) (Brassicaceae), a host crop that is widely cultivated and also commonly used as a laboratory host for P. xylostella rearing. The second part of the study investigated the potential effect of the different A. thaliana background lines, Columbia and Landsberg (used in wider scientific studies), on growth and development of P. xylostella and C. plutellae. Plutella xylostella life history parameters were generally found to be similar between the host plants investigated. However, C. plutellae were more affected by the differences in host plant. Fewer adult parasitoids resulted from development on A. thaliana compared to B. rapa, and those that did emerge were significantly smaller. Adult male C. plutellae developing on Columbia were also significantly smaller than those on Landsberg A. thaliana.
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of the anthropogenic forcings rather than the few highly replicated ensembles more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the greatest effect. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. An interaction between these two anthropogenic effects was also found to exist in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
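To make the factorial design concrete, the sketch below fits a linear statistical model with main effects and a pairwise interaction to a two-factor, on/off design. The factors, responses, and resulting coefficients are synthetic and purely illustrative; the paper's actual regression uses global mean radiative forcings as regressors and considers many more forcing factors.

```python
import numpy as np

# Hypothetical two-factor full-factorial design: each anthropogenic forcing is
# either "off" (0) or "on" (1), and the simulated temperature response is
# regressed on the main effects plus their interaction. Data are invented.
ghg     = np.array([0, 1, 0, 1])   # greenhouse-gas forcing switched off/on
aerosol = np.array([0, 0, 1, 1])   # indirect aerosol forcing switched off/on

# Synthetic ensemble-mean temperature responses (K) for the four combinations
y = np.array([0.00, 0.85, -0.40, 0.30])

# Design matrix: intercept, two main effects, and the pairwise interaction
X = np.column_stack([np.ones(4), ghg, aerosol, ghg * aerosol])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_ghg, b_aer, b_int = coef
print(f"GHG effect: {b_ghg:+.2f} K, aerosol effect: {b_aer:+.2f} K, "
      f"interaction: {b_int:+.2f} K")
```

Because every factor combination is present, the interaction coefficient is identifiable without replicated ensembles, which is the efficiency argument made in the abstract; in practice replicated integrations would additionally provide residual degrees of freedom for uncertainty estimates.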
Abstract:
The use of glycine to limit acrylamide formation during the heating of a potato model system was also found to alter the relative proportions of alkylpyrazines. The addition of glycine increased the quantities of several alkylpyrazines, and labeling studies using [2-C-13]glycine showed that those alkylpyrazines which increased in the presence of glycine had at least one C-13-labeled methyl substituent derived from glycine. The distribution of C-13 within the pyrazines suggested two pathways by which glycine, and other amino acids, participate in alkylpyrazine formation, and showed the relative contribution of each pathway. Alkylpyrazines that involve glycine in both formation pathways displayed the largest relative increases with glycine addition. The study provided an insight into the sensitivity of alkylpyrazine formation to the amino acid composition in a heated food and demonstrated the importance of those amino acids that are able to contribute an alkyl substituent. This may aid in estimating the impact on pyrazine formation when amino acids are added to foods for acrylamide mitigation.
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic scale features. Errors in the position of the cold front relative to the tracer release location of only 1 h resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
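As a toy illustration of the clipping and mass-conservation issues described above, the sketch below advects a 1D top-hat tracer with a semi-Lagrangian step based on cubic Lagrange interpolation, resets negative values to zero, and then applies a crude global rescaling to restore the initial mass. This is not the UM's advection or flux-corrected transport scheme; the grid, wind, and time step are arbitrary choices for the example.

```python
import numpy as np

N, L = 100, 1.0
dx = L / N
x = dx * np.arange(N)

# Top-hat tracer profile: sharp edges provoke undershoots under cubic interpolation
q = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)

u, dt = 0.5, 0.012          # constant wind and time step (Courant number 0.6)

def semi_lagrangian_step(q):
    """One semi-Lagrangian step with cubic Lagrange interpolation (periodic grid)."""
    xd = (x - u * dt) % L               # departure points
    s = xd / dx
    i = np.floor(s).astype(int)
    a = s - i                           # fractional position within the cell
    idx = lambda k: (i + k) % N
    w_m1 = -a * (a - 1) * (a - 2) / 6
    w_0  = (a + 1) * (a - 1) * (a - 2) / 2
    w_p1 = -(a + 1) * a * (a - 2) / 2
    w_p2 = (a + 1) * a * (a - 1) / 6
    return (w_m1 * q[idx(-1)] + w_0 * q[idx(0)]
            + w_p1 * q[idx(1)] + w_p2 * q[idx(2)])

mass0 = q.sum() * dx
for _ in range(50):
    q = semi_lagrangian_step(q)
    q = np.maximum(q, 0.0)   # negatives reset to zero, as described in the abstract
print(f"relative mass error after clipping:  {(q.sum() * dx - mass0) / mass0:+.3%}")

# A crude global rescaling restores the initial mass; the UM instead uses a
# flux-corrected transport method (not implemented here).
q *= mass0 / (q.sum() * dx)
print(f"relative mass error after rescaling: {(q.sum() * dx - mass0) / mass0:+.3%}")
```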
Abstract:
The effects of meson fluctuations are studied in a nonlocal generalization of the Nambu–Jona-Lasinio model, by including terms of next-to-leading order (NLO) in 1/Nc. In the model with only scalar and pseudoscalar interactions, NLO contributions to the quark condensate are found to be very small. This is a result of cancellation between virtual mesons and Fock terms, which occurs for the parameter sets of most interest. In the quark self-energy, similar cancellations arise in the tadpole diagrams, although not in other NLO pieces which contribute at the 25% level. The effects on pion properties are also found to be small. NLO contributions from real pi-pi intermediate states increase the sigma meson mass by 30%. In an extended model with vector and axial interactions, there are indications that NLO effects could be larger.
Abstract:
The work reported in this paper is directed towards the development of a mathematical model for swarm systems based on macroscopic primitives. A pattern formation and transformation model is proposed. The pattern transformation model comprises two general methods for pattern transformation, namely a macroscopic transformation method and a mathematical transformation method. The problem of transformation is formally expressed and four special cases of transformation are considered. Simulations to confirm the feasibility of the proposed models and transformation methods are presented. A comparison between the two transformation methods is also reported.
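As a purely generic illustration of transforming a target pattern mathematically (not the macroscopic or mathematical transformation methods defined in the paper), the sketch below maps a circular formation of target positions to a rotated, scaled, and translated ellipse via an affine map; all shapes and parameters are invented for the example.

```python
import numpy as np

# Hypothetical sketch only: mapping a swarm's target pattern (a circle) to a
# transformed pattern (a rotated, stretched, translated ellipse) with an affine map.
n = 12
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])   # initial target pattern

phi = np.pi / 6
rotation = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])
scale = np.diag([2.0, 0.5])            # stretch x, squash y
offset = np.array([5.0, 3.0])          # translate the whole formation

ellipse = circle @ (rotation @ scale).T + offset           # transformed pattern
print(ellipse.round(2))
```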
Abstract:
This commentary raises general questions about the parsimony and generalizability of the SIMS model, before interrogating the specific roles that the amygdala and eye contact play in it. Additionally, it situates the SIMS model alongside another model of facial expression processing, with a view to incorporating individual differences in emotion perception.
Abstract:
The major component of skeletal muscle is the myofibre. Genetic intervention inducing over-enlargement of myofibres beyond a certain threshold through acellular growth causes a reduction in the specific tension generating capacity of the muscle. However, the physiological parameters of a genetic model that harbours reduced skeletal muscle mass have yet to be analysed. Genetic deletion of Meox2 in mice leads to reduced limb muscle size and causes some patterning defects. The loss of Meox2 is not embryonically lethal and a small percentage of animals survive to adulthood, making it an excellent model with which to investigate how skeletal muscle responds to reductions in mass. In this study we have performed a detailed analysis of both late foetal and adult muscle development in the absence of Meox2. In the adult, we show that the loss of Meox2 results in smaller limb muscles that harbour reduced numbers of myofibres. However, these fibres are enlarged. These myofibres display a molecular and metabolic fibre type switch towards a more oxidative phenotype that is induced through abnormalities in foetal fibre formation. In spite of these changes, the muscle from Meox2 mutant mice is able to generate increased levels of specific tension compared to that of the wild type.
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria were used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric, while in the second stage the A-optimality design criterion is incorporated into a new composite cost function that minimises the model prediction error as well as penalising the model parameter variance. The utilisation of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
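For orientation, a minimal sketch of A-optimality used as a subset-selection metric: for a candidate regressor subset with design matrix X, the criterion is trace((X^T X)^{-1}), i.e. the total parameter variance up to the noise variance, and smaller values are preferred. The data, subset size, and exhaustive search below are illustrative only; NeuDeC itself uses a derived lower bound of the criterion and forward orthogonal least squares rather than exhaustive enumeration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X_full = rng.normal(size=(200, 6))          # candidate regressors (columns)

def a_optimality(X):
    """A-optimality criterion: sum of parameter variances (up to noise variance)."""
    return np.trace(np.linalg.inv(X.T @ X))

# Pick the 3-regressor subset with the smallest A-optimality value
best = min(combinations(range(X_full.shape[1]), 3),
           key=lambda cols: a_optimality(X_full[:, list(cols)]))
print("A-optimal 3-regressor subset:", best)
```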
Abstract:
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case it is capable of solving the transform invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses this problem. The thesis reports results pertaining to the global convergence of SDS as well as characterising its time complexity. However, the main emphasis of the work is on the resource allocation aspect of Stochastic Diffusion Search operations. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest urn model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. This model is further generalised in order to account for different search conditions: two solutions in the search space, and search for a unique solution in a noisy search space. An approximate solution in the case of two alternative solutions is also proposed and compared with the predictions of the extended Ehrenfest urn model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space. It appeared that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS which would strike a different balance between these two modes of search space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, ‘context-free’ and ‘context-sensitive’ SDS, and their properties were analysed with respect to resource allocation. It appeared that they shared some of the desired features of their predecessor but also possessed some properties not present in the classic SDS. The theory developed in the thesis was illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain enabling careful control of search conditions.
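For readers unfamiliar with the urn analogy, the sketch below simulates the classical Ehrenfest urn model: N balls are distributed between two urns and, at each step, a ball chosen uniformly at random is moved to the other urn, so the occupancy fluctuates around N/2 with a binomial stationary distribution. This is the textbook model only; the thesis works with a generalisation of it tailored to the dynamics of SDS agents.

```python
import random

def ehrenfest(n_balls=100, n_steps=100_000, seed=1):
    """Classical Ehrenfest urn model: track how long urn A spends at each occupancy."""
    random.seed(seed)
    in_urn_a = n_balls            # start with every ball in urn A
    counts = [0] * (n_balls + 1)  # time spent at each occupancy level
    for _ in range(n_steps):
        # A ball currently in urn A is picked with probability in_urn_a / n_balls
        if random.random() < in_urn_a / n_balls:
            in_urn_a -= 1
        else:
            in_urn_a += 1
        counts[in_urn_a] += 1
    return counts

counts = ehrenfest()
mean = sum(k * c for k, c in enumerate(counts)) / sum(counts)
print(f"mean occupancy of urn A: {mean:.1f} (expected ~50)")
```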
Abstract:
This paper analyzes the use of linear and neural network models for financial distress classification, with emphasis on the issues of input variable selection and model pruning. A data-driven method for selecting input variables (financial ratios, in this case) is proposed. A case study involving 60 British firms in the period 1997-2000 is used for illustration. It is shown that the use of the Optimal Brain Damage pruning technique can considerably improve the generalization ability of a neural model. Moreover, the set of financial ratios obtained with the proposed selection procedure is shown to be an appropriate alternative to the ratios usually employed by practitioners.
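As a hedged sketch of the Optimal Brain Damage idea, the example below ranks the weights of a plain linear model by the OBD saliency s_i ≈ ½ H_ii w_i², computed from the diagonal of the loss Hessian, and prunes the least salient ones. The data are synthetic and the model is linear purely for brevity; the paper applies the technique to a neural network classifier trained on financial ratios.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 500, 10
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.3])
y = X @ true_w + 0.1 * rng.normal(size=n)

# Fit by least squares (loss = 0.5/n * ||Xw - y||^2)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Diagonal of the Hessian of the loss, and the OBD saliencies 0.5 * H_ii * w_i^2
h_diag = np.einsum("ij,ij->j", X, X) / n
saliency = 0.5 * h_diag * w**2

# Prune the 5 least salient weights by setting them to zero
pruned = w.copy()
pruned[np.argsort(saliency)[:5]] = 0.0
print("kept weights at indices:", np.flatnonzero(pruned))
```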
Abstract:
To date, a number of studies have focused on the influence of sea surface temperature (SST) on global and regional rainfall variability, with the majority of these focusing on particular ocean basins, e.g. the Pacific, North Atlantic, and Indian Oceans. In contrast, relatively little work has been done on the influence of the central South Atlantic, particularly in relation to rainfall over southern Africa. Previous work by the authors, using reanalysis data and general circulation model (GCM) experiments, has suggested that cold SST anomalies in the central southern Atlantic Ocean are linked to an increase in rainfall extremes across southern Africa. In this paper we present results from idealised regional climate model (RCM) experiments forced with both positive and negative SST anomalies in the southern Atlantic Ocean. These experiments reveal an unexpected response of rainfall over southern Africa. In particular, it was found that SST anomalies of opposite sign can cause similar rainfall responses in the model experiments, with isolated increases in rainfall over central southern Africa as well as a large region of drying over the Mozambique Channel. The purpose of this paper is to highlight this finding and explore explanations for the behaviour of the climate model. It is suggested that the observed changes in rainfall might result from the redistribution of energy (associated with upper level changes to Rossby waves) or, of more concern, model error, and therefore the paper concludes that the results of idealised regional climate models forced with SST anomalies should be viewed cautiously.
Abstract:
The foundation construction process is a key determinant of success in construction engineering. Among the many deep excavation methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. The traditional approach to managing diaphragm wall units is to establish each phase of the sequence of construction activities by heuristics. However, this can bring the final phase of the works into conflict with unit construction and adversely affect the planned construction time. To avoid this situation, this study applies management science to diaphragm wall unit construction and formulates the sequencing of construction activities as a multi-objective combinatorial optimization problem. Because the mathematical model of the problem is multi-objective and combinatorially explosive (the problem is NP-complete), a 2-type Self-Learning Neural Network (SLNN) is proposed to solve the sequencing problem for N = 12, 24, and 36 diaphragm wall units. To assess the reliability of the results, this study compares the SLNN with a random search method. The SLNN is found to outperform random search in both solution quality and solving efficiency.
Abstract:
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalizations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts.
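As a generic illustration of coarse-graining a discrete, stochastic model into a continuum description (not the crypt model analysed in the paper), the sketch below compares a Gillespie simulation of a logistic birth-death process with the closed-form solution of its continuum limit, dN/dt = bN(1 - N/K). Parameters and rates are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
b, K = 1.0, 200.0          # per-capita birth rate and carrying capacity
N, t, t_end = 10, 0.0, 10.0

# Gillespie simulation of the discrete, cell-level model:
# birth rate b*N, death rate b*N^2/K
while N > 0:
    birth = b * N
    death = b * N * N / K
    dt = rng.exponential(1.0 / (birth + death))
    if t + dt > t_end:
        break
    t += dt
    N += 1 if rng.random() < birth / (birth + death) else -1

# Continuum (logistic) prediction at t_end, from the closed-form solution
N0 = 10
N_cont = K / (1 + (K / N0 - 1) * np.exp(-b * t_end))
print(f"stochastic N({t_end:.0f}) = {N}, continuum prediction = {N_cont:.1f}")
```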
Abstract:
In this paper, we show how a set of recently derived theoretical results for recurrent neural networks can be applied to produce an internal model control system for a nonlinear plant. The results include determination of the relative order of a recurrent neural network and the invertibility of such a network. A closed-loop controller is produced without the need to retrain the neural network plant model. Stability of the closed-loop controller is also demonstrated.
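For orientation, a minimal sketch of the internal model control structure, using a linear first-order plant and a perfect internal model so that the model inverse is available in closed form; the plant, gains, and disturbance below are invented for illustration, whereas the paper constructs the model and its inverse from a recurrent neural network.

```python
# Minimal internal model control (IMC) sketch: the controller inverts the
# internal model and is corrected by the plant/model mismatch signal.
a, b = 0.7, 0.5               # toy plant: y[k+1] = a*y[k] + b*u[k]
setpoint = 1.0
y_plant, y_model = 0.0, 0.0

for k in range(30):
    # IMC feedback signal: plant/model mismatch (captures disturbances)
    mismatch = y_plant - y_model
    # Controller = one-step-ahead inverse of the model applied to the corrected reference
    target = setpoint - mismatch
    u = (target - a * y_model) / b
    # Advance plant and internal model with the same input
    y_plant = a * y_plant + b * u + (0.1 if k >= 15 else 0.0)  # step disturbance
    y_model = a * y_model + b * u

print(f"plant output after 30 steps: {y_plant:.3f} (setpoint {setpoint})")
```

With a perfect model the output tracks the setpoint immediately, and after the step disturbance appears the mismatch signal gradually steers the plant back towards the setpoint, which is the disturbance-rejection property the IMC structure is designed to provide.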