116 results for one-boson-exchange models
Abstract:
Temperate-zone crops require a period of winter chilling to terminate dormancy and ensure adequate bud break the following spring. The exact chilling requirement of blackcurrant (Ribes nigrum), a commercially important crop in northern Europe, is relatively unknown. Chill unit models have been successfully utilized to determine the optimum chilling temperature of a range of crops, with one chill unit equating to 1 h of exposure to the optimum temperature for chill satisfaction. Two-year-old R. nigrum plants of the cultivars 'Ben Gairn', 'Ben Hope' and 'Ben Tirran' were exposed to temperatures of -10.1 °C, -3.4 °C, 0.1 °C, 1.5 °C, 2.1 °C, 3.4 °C or 8.9 °C (±0.7 °C) for durations of 0, 2, 4, 6, 8 or 10 weeks, and multiple regression analyses were used to determine the optimum temperature for chill satisfaction. (C) 2009 Elsevier B.V. All rights reserved.
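The chill-unit bookkeeping described above can be sketched in a few lines. This is a generic illustration, not the authors' fitted model: the triangular weighting, the 2 °C optimum and the -5 °C/10 °C effective range are invented for the example, and one chill unit is taken as 1 h at the optimum temperature, as the abstract states.

```python
# Illustrative chill-unit accumulation (assumed weighting, not the paper's model).

def chill_units(hourly_temps_c, t_opt=2.0, t_min=-5.0, t_max=10.0):
    """Sum fractional chill units over a series of hourly temperatures:
    1 unit per hour at t_opt, tapering linearly to 0 at t_min and t_max."""
    total = 0.0
    for t in hourly_temps_c:
        if t_min < t < t_max:
            if t <= t_opt:
                total += (t - t_min) / (t_opt - t_min)  # rises to 1 at the optimum
            else:
                total += (t_max - t) / (t_max - t_opt)  # falls back to 0
    return total

# Six weeks held at the optimum accumulates 6 * 7 * 24 = 1008 chill units.
print(chill_units([2.0] * (6 * 7 * 24)))
```

Under this scheme the experiment's fixed-temperature treatments map directly to chill-unit totals, which is what makes a regression of bud-break response on accumulated units possible.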
Abstract:
A comparison of the models of Vitti et al. (2000, J. Anim. Sci. 78, 2706-2712) and Fernández (1995c, Livest. Prod. Sci. 41, 255-261) was carried out using two data sets on growing pigs as input. The two models compared were based on similar basic principles, although their aims and calculations differed. The Vitti model employs the rate:state formalism and describes phosphorus (P) flow between four pools representing P content in gut, blood, bone and soft tissue in growing goats. The Fernández model describes flow and fractional recirculation between P pools in gut, blood and bone in growing pigs. The results from both models showed similar trends for P absorption from gut to blood and net retention in bone with increasing P intake, with the exception of the 65 kg results from Data Set 2 calculated using the Fernández model. Endogenous loss from blood back to gut increased faster with increasing P intake in the Fernández model than in the Vitti model for Data Set 1. However, for Data Set 2, endogenous loss increased with increasing P intake using the Vitti model, but decreased when calculated using the Fernández model. Incorporation of P into bone was not influenced by intake in the Fernández model, while in the Vitti model there was an increasing trend. The Fernández model produced a pattern of decreasing resorption in bone with increasing P intake with one of the data sets, which was not observed when using the Vitti model. The pigs maintained their P homeostasis in blood by regulation of P excretion in urine. (c) 2005 Elsevier Ltd. All rights reserved.
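The rate:state formalism mentioned above means each flow is a rate constant multiplied by the size of its source pool. A minimal four-pool sketch in that spirit is given below; every rate constant and pool size is invented for illustration, and dietary intake and urinary excretion are omitted, so total P is conserved in this toy (unlike in the real models).

```python
import numpy as np

# Toy rate:state phosphorus flow between four pools (gut, blood, bone, soft
# tissue). All rate constants (per day) and pool sizes are illustrative only.
K = np.array([
    [-0.50,  0.05,  0.00,  0.00],   # gut: absorption out, endogenous return in
    [ 0.50, -0.25,  0.02,  0.03],   # blood: gains from gut, bone resorption, tissue
    [ 0.00,  0.10, -0.02,  0.00],   # bone: accretion from blood, resorption out
    [ 0.00,  0.10,  0.00, -0.03],   # soft tissue
])  # columns sum to zero: every outflow arrives in another pool

def step(pools, dt=0.01):
    """One forward-Euler step of d(pools)/dt = K @ pools."""
    return pools + dt * (K @ pools)

pools = np.array([10.0, 5.0, 100.0, 20.0])  # g P per pool (illustrative)
for _ in range(1000):                       # integrate 10 time units
    pools = step(pools)
```

Because each column of `K` sums to zero, the scheme moves P between pools without creating or destroying it, which is the bookkeeping property the compartment comparison in the paper relies on.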
Abstract:
This contribution investigates the problem of estimating the size of a population, also known as the missing-cases problem. Suppose a registration system aims to identify all cases having a certain characteristic, such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a certain notification history, in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Typically, cases which have never been listed on any occasion are left out, and it is this frequency one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models like the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
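The simplest version of this estimation idea can be sketched concretely. The example below fits a zero-truncated Poisson to invented notification counts (each observed case identified at least once) and estimates the unobserved zero class; the finite-mixture/EM machinery of the paper is replaced here by the one-component special case.

```python
import math

def fit_ztp_lambda(counts, iters=200):
    """MLE of the Poisson rate under zero truncation, via the fixed point
    lambda = mean(counts) * (1 - exp(-lambda))."""
    m = sum(counts) / len(counts)
    lam = m
    for _ in range(iters):
        lam = m * (1.0 - math.exp(-lam))
    return lam

def estimate_population(counts):
    """Horvitz-Thompson style total: each observed case is seen with
    probability 1 - exp(-lambda), so N-hat = n / (1 - exp(-lambda))."""
    lam = fit_ztp_lambda(counts)
    return len(counts) / (1.0 - math.exp(-lam))

# Invented counting distribution: 60 cases seen once, 25 twice, 10 thrice, 5 four times.
counts = [1] * 60 + [2] * 25 + [3] * 10 + [4] * 5
print(estimate_population(counts))  # exceeds the 100 observed cases
```

Replacing the single Poisson with a finite mixture of Poissons, fitted by EM, gives the flexible version the paper actually uses; the population estimate has the same form, with the zero-class probability computed from the fitted mixture.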
Abstract:
Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
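The discrete mechanism described above, cell boundaries connected by linear springs with drag-dominated dynamics, can be sketched as a toy relaxation. This is an illustration of the setup, not the authors' derivation: the spring constant, drag coefficient and the initially compressed region are invented, and cell division is omitted.

```python
import numpy as np

# 1D chain of cell boundaries joined by linear springs, overdamped dynamics:
#   eta * dx_i/dt = k * (x_{i+1} - 2 x_i + x_{i-1}),
# whose continuum limit is a diffusion equation for the cell number density.
k, eta, dt, steps = 1.0, 1.0, 0.05, 5000
spacings = np.ones(40)
spacings[15:25] = 0.5                              # a compressed, high-density region
x = np.concatenate(([0.0], np.cumsum(spacings)))   # boundary positions, ends held fixed

for _ in range(steps):
    x[1:-1] += dt * (k / eta) * (x[2:] - 2.0 * x[1:-1] + x[:-2])

final = np.diff(x)       # cell lengths after relaxation
print(final.std())       # spacing variation shrinks as the density spike diffuses
```

The update is exactly a discrete heat equation for the boundary positions, which is why the compressed region spreads out diffusively, the behavior the continuum (Rouse-type) limit captures.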
Abstract:
We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. 
We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
Abstract:
The recent emergence of novel pathogenic human and animal coronaviruses has highlighted the need for antiviral therapies that are effective against a spectrum of these viruses. We have used several strains of murine hepatitis virus (MHV) in cell culture and in vivo in mouse models to investigate the antiviral characteristics of peptide-conjugated antisense phosphorodiamidate morpholino oligomers (P-PMOs). Ten P-PMOs directed against various target sites in the viral genome were tested in cell culture, and one of these (5TERM), which was complementary to the 5' terminus of the genomic RNA, was effective against six strains of MHV. Further studies were carried out with various arginine-rich peptides conjugated to the 5TERM PMO sequence in order to evaluate efficacy and toxicity and thereby select candidates for in vivo testing. In uninfected mice, prolonged P-PMO treatment did not result in weight loss or detectable histopathologic changes. 5TERM P-PMO treatment reduced viral titers in target organs and protected mice against virus-induced tissue damage. Prophylactic 5TERM P-PMO treatment decreased the amount of weight loss associated with infection under most experimental conditions. Treatment also prolonged survival in two lethal challenge models. In some cases of high-dose viral inoculation followed by delayed treatment, 5TERM P-PMO treatment was not protective and increased morbidity in the treated group, suggesting that P-PMO may cause toxic effects in diseased mice that were not apparent in the uninfected animals. However, the strong antiviral effect observed suggests that with further development, P-PMO may provide an effective therapeutic approach against a broad range of coronavirus infections.
Abstract:
The presented study examined the opinions of in-service and prospective chemistry teachers about the importance of using molecular and crystal models in secondary-level school practice, and investigated some of the reasons for their (non-)usage. The majority of participants stated that the use of models plays an important role in chemistry education and that they would use them more often if circumstances were more favourable. Many teachers claimed that three-dimensional (3D) models are still not available in sufficient numbers at their schools; they also pointed to the lack of available computer facilities during chemistry lessons. The research revealed that, besides the inadequate material circumstances, less than one third of participants are able to use simple (freeware) computer programs for drawing molecular structures and presenting them in virtual space; however, both groups of teachers expressed a willingness to improve their knowledge in this subject area. The investigation points to several actions which could be undertaken to improve the current situation.
Abstract:
Controlled human intervention trials are required to confirm the hypothesis that dietary fat quality may influence insulin action. The aim was to develop a food-exchange model, suitable for use in free-living volunteers, to investigate the effects of four experimental diets distinct in fat quantity and quality: high SFA (HSFA); high MUFA (HMUFA); and two low-fat (LF) diets, one supplemented with 1.24 g EPA and DHA/d (LFn-3). A theoretical food-exchange model was developed. The average quantity of exchangeable fat was calculated as the sum of fat provided by added fats (spreads and oils), milk, cheese, biscuits, cakes, buns and pastries, using data from the National Diet and Nutrition Survey of UK adults. Most of the exchangeable fat was replaced by specifically designed study foods. Also critical to the model was the use of carbohydrate exchanges to ensure the diets were isoenergetic. Volunteers from eight centres across Europe completed the dietary intervention. Results indicated that compositional targets were largely achieved, with significant differences in fat quantity between the high-fat diets (39.9 (SEM 0.6) and 38.9 (SEM 0.51) percentage energy (%E) from fat for the HSFA and HMUFA diets respectively) and the low-fat diets (29.6 (SEM 0.6) and 29.1 (SEM 0.5) %E from fat for the LF and LFn-3 diets respectively), and in fat quality (17.5 (SEM 0.3) and 10.4 (SEM 0.2) %E from SFA and 12.7 (SEM 0.3) and 18.7 (SEM 0.4) %E from MUFA for the HSFA and HMUFA diets respectively). In conclusion, a robust, flexible food-exchange model was developed and implemented successfully in the LIPGENE dietary intervention trial.
Nonlinear system identification using particle swarm optimisation tuned radial basis function models
Abstract:
A novel particle swarm optimisation (PSO) tuned radial basis function (RBF) network model is proposed for identification of non-linear systems. At each stage of the orthogonal forward regression (OFR) model construction process, PSO is adopted to tune one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO-tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is often more efficient in model construction. The effectiveness of the proposed PSO-aided OFR algorithm for constructing tunable-node RBF models is demonstrated using three real data sets.
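The core step above, PSO tuning one RBF unit's parameters, can be sketched on a toy problem. The paper tunes a centre vector and diagonal covariance inside the OFR loop against the LOO MSE; this standalone illustration tunes a single Gaussian unit's scalar centre and width against plain training MSE, with conventional PSO constants (w = 0.7, c1 = c2 = 1.5) as assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a single Gaussian bump; the swarm must recover its centre and width.
X = np.linspace(-3, 3, 61)
y = np.exp(-((X - 1.2) ** 2) / (2 * 0.5 ** 2))

def mse(params):
    c, s = params
    pred = np.exp(-((X - c) ** 2) / (2 * s ** 2))
    return np.mean((y - pred) ** 2)

n_particles, iters = 20, 100
pos = rng.uniform([-3.0, 0.1], [3.0, 2.0], size=(n_particles, 2))  # (centre, width)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    pos[:, 1] = np.clip(pos[:, 1], 0.05, 3.0)   # keep widths positive
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # best (centre, width) found by the swarm
```

In the full algorithm this inner search runs once per OFR stage, each added unit being tuned against the LOO MSE of the growing model, and construction stops when adding a unit no longer reduces that criterion.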
Abstract:
Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
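The distinguishing tuning criterion above, choosing each regressor's own kernel parameters by maximising its correlation with the training data, can be illustrated simply. The paper uses a boosting-based guided random search over a full diagonal covariance; this toy does a plain grid search over a scalar width, and the data set is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(50)  # noisy targets

def kernel_outputs(centre, width):
    """Outputs of one Gaussian kernel regressor over the training inputs."""
    return np.exp(-((X - centre) ** 2) / (2 * width ** 2))

def tune_width(centre, widths=np.linspace(0.02, 1.0, 50)):
    """Pick the width maximising |corr(kernel outputs, targets)| -- the
    individually-tuned-regressor idea, reduced to a 1D grid search."""
    corrs = [abs(np.corrcoef(kernel_outputs(centre, w), y)[0, 1]) for w in widths]
    return widths[int(np.argmax(corrs))]

w_star = tune_width(centre=0.25)  # a regressor centred near the sine peak
print(w_star)
```

After every candidate regressor is tuned this way, the OFR-with-LOO-and-local-regularization stage selects a small subset of them, which is where the parsimony of the final model comes from.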
Abstract:
This paper is concerned with the selection of inputs for classification models based on ratios of measured quantities. For this purpose, all possible ratios are built from the quantities involved and variable selection techniques are used to choose a convenient subset of ratios. In this context, two selection techniques are proposed: one based on a pre-selection procedure and another based on a genetic algorithm. In an example involving the financial distress prediction of companies, the models obtained from ratios selected by the proposed techniques compare favorably to a model using ratios usually found in the financial distress literature.
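The ratio-construction and pre-selection steps above can be sketched as follows. The genetic-algorithm variant is not reproduced; the pre-selection score here is a simple absolute point-biserial correlation with the class label, and the data, in which one ratio genuinely drives the class, are simulated for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# 4 positive measured quantities for 200 companies; the class label is driven
# by one ratio (q0/q1 > 1), mimicking a financial-distress style problem.
n = 200
Q = rng.lognormal(mean=0.0, sigma=0.5, size=(n, 4))
label = (Q[:, 0] / Q[:, 1] > 1.0).astype(float)

# Build every pairwise ratio as a candidate model input.
ratios, names = [], []
for i, j in combinations(range(Q.shape[1]), 2):
    ratios.append(Q[:, i] / Q[:, j])
    names.append(f"q{i}/q{j}")
R = np.column_stack(ratios)

# Pre-selection: rank ratios by |correlation| with the class label.
scores = [abs(np.corrcoef(R[:, k], label)[0, 1]) for k in range(R.shape[1])]
best = names[int(np.argmax(scores))]
print(best)  # the informative ratio q0/q1 should rank first
```

A genetic algorithm replaces the univariate ranking with a search over subsets of ratios, scoring each subset by the performance of the resulting classification model, which is the second selection technique the paper proposes.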
Abstract:
Two quantum-kinetic models of ultrafast electron transport in quantum wires are derived from the generalized electron-phonon Wigner equation. The various assumptions and approximations allowing one to find closed equations for the reduced electron Wigner function are discussed with an emphasis on their physical relevance. The models correspond to the Levinson and Barker-Ferry equations, now generalized to account for a space-dependent evolution. They are applied to study the quantum effects in the dynamics of an initial packet of highly nonequilibrium carriers, locally generated in the wire. The properties of the two model equations are compared and analyzed.
Abstract:
Large-scale air pollution models are powerful tools, designed to meet the increasing demand in different environmental studies. The atmosphere is the most dynamic component of the environment, where pollutants can be transported quickly over long distances. Therefore, air pollution modeling must be done in a large computational domain. Moreover, all relevant physical, chemical and photochemical processes must be taken into account. In such complex models, operator splitting is very often applied in order to achieve sufficient accuracy as well as efficiency of the numerical solution. The Danish Eulerian Model (DEM) is one of the most advanced such models. Its space domain (4800 × 4800 km) covers Europe, most of the Mediterranean and neighboring parts of Asia and the Atlantic Ocean. Efficient parallelization is crucial for the performance and practical capabilities of this huge computational model. Different splitting schemes, based on the main processes mentioned above, have been implemented and tested with respect to accuracy and performance in the new version of DEM. Some numerical results of these experiments are presented in this paper.
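The operator-splitting idea itself is easy to demonstrate on a toy 1D problem: transport and chemistry are advanced by separate sub-solvers, combined here in the symmetric Strang order (half chemistry, full advection, half chemistry). The grid, wind speed and decay rate below are illustrative values, not DEM parameters, and the "chemistry" is reduced to first-order decay.

```python
import numpy as np

nx, dx, dt = 100, 1.0, 0.5
u, kdecay = 1.0, 0.05                                     # wind speed, decay rate
c = np.exp(-0.5 * ((np.arange(nx) - 20.0) / 5.0) ** 2)    # initial pollutant plume

def advect(c, dt):
    """First-order upwind advection on a periodic domain (u > 0); CFL = u*dt/dx."""
    return c - u * dt / dx * (c - np.roll(c, 1))

def react(c, dt):
    """Exact sub-step solution of the 'chemistry' dc/dt = -kdecay * c."""
    return c * np.exp(-kdecay * dt)

for _ in range(40):              # Strang splitting: half react, advect, half react
    c = react(c, dt / 2)
    c = advect(c, dt)
    c = react(c, dt / 2)

print(np.argmax(c))  # the plume peak has moved downwind from cell 20
```

Splitting pays off because each sub-problem gets a solver suited to it (here an exact exponential for the chemistry), and in a model like DEM the sub-operators can also be parallelized independently; the ordering and step size of the splitting are exactly what the accuracy experiments in the paper compare.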
Abstract:
Purpose – To describe some research done, as part of an EPSRC funded project, to assist engineers working together on collaborative tasks. Design/methodology/approach – Distributed finite state modelling and agent techniques are used successfully in a new hybrid self-organising decision making system applied to collaborative work support. For the particular application, analysis of the tasks involved has been performed and these tasks are modelled. The system then employs a novel generic agent model, where task and domain knowledge are isolated from the support system, which provides relevant information to the engineers. Findings – The method is applied in the despatch of transmission commands within the control room of The National Grid Company Plc (NGC) – tasks are completed significantly faster when the system is utilised. Research limitations/implications – The paper describes a generic approach and it would be interesting to investigate how well it works in other applications. Practical implications – Although only one application has been studied, the methodology could equally be applied to a general class of cooperative work environments. Originality/value – One key part of the work is the novel generic agent model that enables the task and domain knowledge, which are application specific, to be isolated from the support system, and hence allows the method to be applied in other domains.
Abstract:
Current e-learning systems are increasing in importance in higher education. However, neither the state of the art nor the state of the practice of e-learning applications achieves the level of interactivity that current learning theories advocate. In this paper, the possibility of enhancing e-learning systems to achieve deep learning has been studied by replicating an experiment in which students had to learn basic software engineering principles. One group learned these principles using a static approach, while the other group learned the same principles using a system-dynamics-based approach, which provided interactivity and feedback. The results show that, quantitatively, the latter group achieved a better understanding of the principles; furthermore, qualitatively, they enjoyed the learning experience.