50 results for Lattice theory - Computer simulation
Abstract:
In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid–fluid interaction. The range of well-depth considered in our GCMC simulation is from 10 to 100 K, which is wide enough to cover all the carbon surfaces we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in an integral form. Thus, given experimental data for nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by an inversion process using the regularization method. The total surface area is then obtained as the area under this distribution. We test this approach against a number of data sets from the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the two surface areas differ by about 10%, underlining the need for a consistent method of reliably determining surface area. We therefore suggest the approach of this paper as an alternative to the BET method, given the long-recognized unrealistic assumptions of BET theory. Besides the surface area, the method also provides the differential area distribution versus well-depth. This information could be used as a microscopic fingerprint of the carbon surface: samples prepared from different precursors and under different activation conditions are expected to have distinct fingerprints. We illustrate this with Cabot BP120, 280 and 460 samples; the differential area distributions obtained from the adsorption of argon and of nitrogen, both at 77 K, show exactly the same patterns, suggesting that the distribution is characteristic of the carbon.
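The inversion step lends itself to a compact numerical sketch. Below is a minimal illustration of regularized, non-negative inversion of an overall isotherm into an area distribution. It is a sketch under stated assumptions: the Langmuir-form kernel stands in for the paper's GCMC local isotherms, and the well-depth grid and regularization weight are arbitrary choices.

```python
# Sketch: inverting an overall adsorption isotherm into a differential area
# distribution via Tikhonov-regularized non-negative least squares.
# The kernel is a generic Langmuir-like local isotherm (illustrative only),
# NOT the GCMC local surface-excess isotherms used in the paper.
import numpy as np
from scipy.optimize import nnls

pressures = np.logspace(-4, 0, 40)           # relative pressures P/P0
well_depths = np.linspace(10.0, 100.0, 30)   # solid-fluid well-depth grid (K)
T = 77.0                                     # adsorption temperature (K)

# Kernel matrix: local isotherm theta(P; eps) per unit area, with a
# well-depth-dependent affinity (stronger wells adsorb at lower pressure).
K = np.empty((pressures.size, well_depths.size))
for j, eps in enumerate(well_depths):
    b = np.exp(eps / T)
    K[:, j] = b * pressures / (1.0 + b * pressures)

# Synthetic "experimental" overall isotherm from a known bimodal distribution.
true_f = np.exp(-0.5 * ((well_depths - 45) / 8) ** 2) \
       + 0.5 * np.exp(-0.5 * ((well_depths - 75) / 6) ** 2)
data = K @ true_f

# Tikhonov regularization with non-negativity: augment the least-squares
# system with sqrt(lambda)*I rows, then solve with NNLS.
lam = 1e-3
A = np.vstack([K, np.sqrt(lam) * np.eye(well_depths.size)])
b_vec = np.concatenate([data, np.zeros(well_depths.size)])
f_est, _ = nnls(A, b_vec)

dw = well_depths[1] - well_depths[0]
print(f"recovered distribution mass (total-area analogue): {f_est.sum() * dw:.3f}")
```

In practice the regularization weight would be chosen by a criterion such as L-curve or cross-validation rather than fixed a priori.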
Abstract:
Brugada syndrome (BS) is a genetic disease identified by an abnormal electrocardiogram (ECG), mainly ECGs showing right bundle branch block and ST-elevation in the right precordial leads. BS can lead to an increased risk of sudden cardiac death. Experimental studies on human ventricular myocardium with BS have been limited by difficulties in obtaining data, so computer simulation is an important alternative. Most previous BS simulations were based on animal heart cell models; however, because of species differences, human heart cell models are needed, especially models with three-dimensional whole-heart anatomical structure. In this study, we developed a model of the human ventricular action potential (AP) by refining the ten Tusscher et al (2004 Am. J. Physiol. Heart Circ. Physiol. 286 H1573-89) model to incorporate newly available experimental data on several major ionic currents of human ventricular myocytes. The modified channels include the L-type calcium current (I_CaL), fast sodium current (I_Na), transient outward potassium current (I_to), rapid and slow delayed rectifier potassium currents (I_Kr and I_Ks) and inward rectifier potassium current (I_K1). Transmural heterogeneity of APs for epicardial, endocardial and mid-myocardial (M) cells was simulated by varying the maximum conductances of I_Ks and I_to. The modified AP models were then used to simulate the effects of BS on the cellular AP and on body surface potentials using a three-dimensional dynamic heart-torso model. Our main findings are as follows. (1) BS has little effect on the AP of endocardial or mid-myocardial cells, but a large impact on the AP of epicardial cells. (2) A likely region of BS-related abnormal cell APs is near the right ventricular outflow tract, and the resulting ST-segment elevation appears over the mid-precordial area. These simulation results are consistent with experimental findings reported in the literature. The model reproduces a variety of electrophysiological behaviors and provides a good basis for understanding the genesis of the abnormal ECG in BS.
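A hedged sketch of how transmural heterogeneity is typically encoded in such models: scale the maximum conductance of a repolarizing current per cell type and integrate the membrane equation. The code below uses the classic Hodgkin-Huxley equations as a stand-in, NOT the ten Tusscher formulation, and the epicardial/M/endocardial scale factors are invented for illustration.

```python
# Toy illustration of cell-type-specific conductance scaling.
# Hodgkin-Huxley stand-in model; scale factors are hypothetical.
import numpy as np

def hh_ap(gK_scale, t_max=50.0, dt=0.01):
    """Forward-Euler integration of Hodgkin-Huxley with a scaled gK (ms, mV)."""
    gNa, gK, gL = 120.0, 36.0 * gK_scale, 0.3       # mS/cm^2
    ENa, EK, EL, Cm = 50.0, -77.0, -54.387, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for step in range(int(t_max / dt)):
        t = step * dt
        I_stim = 20.0 if t < 2.0 else 0.0           # brief stimulus (uA/cm^2)
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        I_ion = (gNa * m**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK) + gL * (V - EL))
        V += dt * (I_stim - I_ion) / Cm
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        trace.append(V)
    return np.array(trace)

# Hypothetical scale factors standing in for epi/endo/M differences in
# repolarizing current density (smaller gK -> slower repolarization).
for cell_type, scale in [("epi", 1.0), ("endo", 0.8), ("M", 0.6)]:
    ap = hh_ap(scale)
    apd = (ap > -20.0).sum() * 0.01                 # ms spent above -20 mV
    print(f"{cell_type}: peak {ap.max():.1f} mV, time above -20 mV {apd:.1f} ms")
```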
Abstract:
Rail corrugation consists of undesirable periodic fluctuations in wear on railway track, and its removal by regrinding costs the railway industry substantially. Much research has been performed on this problem, particularly over the past two decades; however, a reliable cure for wear-type corrugations remains elusive. Recently, the growth behaviour of wear-type rail corrugation has been investigated using theoretical and experimental models as part of the RailCRC Project (#18). A critical part of this work is the tuning and validation of these models via an extensive field-testing program. Rail corrugations have been monitored for two years at sites throughout Australia. Measured rail surface profiles are used to determine corrugation growth rates at each site, and these growth rates and other characteristics are compared with theoretical predictions from a computer model for validation. The results from several pertinent sites are presented and discussed.
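As a rough sketch of how growth rates can be extracted from repeated profile measurements, the following assumes exponential growth of the roughness amplitude within a corrugation wavelength band; the sampling interval, band limits, and the synthetic profiles are invented for illustration, not taken from the study.

```python
# Sketch: corrugation growth rate from rail profiles measured at two dates,
# assuming A(t) = A0 * exp(g * t) for the band-limited roughness amplitude.
import numpy as np

def band_amplitude(profile, dx, lam_lo=0.02, lam_hi=0.08):
    """RMS amplitude of the profile within a wavelength band (m)."""
    spec = np.fft.rfft(profile - profile.mean())
    freqs = np.fft.rfftfreq(profile.size, d=dx)      # spatial frequency (1/m)
    mask = (freqs > 1.0 / lam_hi) & (freqs < 1.0 / lam_lo)
    return np.sqrt(2.0 * np.sum(np.abs(spec[mask]) ** 2)) / profile.size

dx = 0.005                                           # 5 mm sampling interval
x = np.arange(0, 10, dx)                             # 10 m of track
rng = np.random.default_rng(0)
profile_y1 = 20e-6 * np.sin(2 * np.pi * x / 0.05) + 5e-6 * rng.standard_normal(x.size)
profile_y2 = 35e-6 * np.sin(2 * np.pi * x / 0.05) + 5e-6 * rng.standard_normal(x.size)

a1, a2 = band_amplitude(profile_y1, dx), band_amplitude(profile_y2, dx)
years = 1.0
growth_rate = np.log(a2 / a1) / years                # 1/year
print(f"band amplitudes: {a1*1e6:.1f} um -> {a2*1e6:.1f} um")
print(f"estimated exponential growth rate: {growth_rate:.2f} per year")
```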
Abstract:
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.
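The abstract does not specify how the normal-hearing simulation was built; a common approach for simulating combined electric and acoustic stimulation is to add low-pass filtered speech (the residual acoustic hearing) to a noise-vocoded high-frequency band (the electric hearing). The sketch below follows that convention; the channel count, cutoffs, and filter orders are illustrative assumptions, not the study's values.

```python
# Sketch of an electric-plus-acoustic (EAS) simulation for normal-hearing
# listeners: low-pass "acoustic" speech + a noise-vocoded "electric" band.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.random.default_rng(1).standard_normal(t.size)  # stand-in signal

def bandpass(sig, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfilt(sos, sig)

# "Acoustic" part: keep everything below 500 Hz (hypothetical cutoff).
low_sos = butter(4, 500, btype="low", fs=fs, output="sos")
acoustic = sosfilt(low_sos, speech)

# "Electric" part: 4-channel noise vocoder over 500-6000 Hz.
edges = np.geomspace(500, 6000, 5)
rng = np.random.default_rng(2)
electric = np.zeros_like(speech)
for lo, hi in zip(edges[:-1], edges[1:]):
    band = bandpass(speech, lo, hi)
    envelope = np.abs(hilbert(band))          # temporal envelope of the band
    carrier = bandpass(rng.standard_normal(t.size), lo, hi)
    electric += envelope * carrier            # envelope-modulated noise

eas_simulation = acoustic + electric
print("simulated EAS signal RMS:", float(np.sqrt(np.mean(eas_simulation**2))))
```

The low-pass branch preserves voice-pitch cues, which is the mechanism the authors propose for the multitalker advantage.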
Abstract:
Quantifying mass and energy exchanges within tropical forests is essential for understanding their role in the global carbon budget and how they will respond to perturbations in climate. This study reviews ecosystem process models designed to predict the growth and productivity of temperate and tropical forest ecosystems. Temperate forest models were included because few models have been developed specifically for tropical forests. The review provides a multiscale assessment, enabling potential users to select a model suited to the scale and type of information they require in tropical forests. Process models are reviewed in relation to their input and output parameters, minimum spatial and temporal units of operation, and maximum spatial extent and time period of application at each organizational level of modelling. Organizational levels include leaf-tree, plot-stand, regional and ecosystem levels, with model complexity decreasing as the time-step and spatial extent of model operation increase. All ecosystem models are simplified versions of reality and are typically aspatial. Remotely sensed data sets and derived products may be used to initialize, drive and validate ecosystem process models. At the simplest level, remotely sensed data are used to delimit the location, extent and changes over time of vegetation communities. At a more advanced level, remotely sensed data products have been used to estimate key structural and biophysical properties associated with ecosystem processes in tropical and temperate forests. Combining ecological models and image data enables the development of carbon-accounting systems that will contribute to understanding greenhouse-gas budgets at biome and global scales.
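To make the model-plus-imagery idea concrete, here is a minimal sketch of the simplest class of remotely driven production model, a light-use-efficiency formulation (GPP = epsilon x fAPAR x PAR). The coefficient and driver values are placeholders, not calibrated tropical-forest parameters.

```python
# Sketch: light-use-efficiency (LUE) gross primary production,
# driven by a satellite-style fAPAR product. All values illustrative.
import numpy as np

epsilon = 1.8                               # g C per MJ APAR (illustrative LUE)
par = np.array([9.5, 10.2, 11.0, 10.6])     # daily PAR, MJ m-2 d-1
fapar = np.array([0.88, 0.90, 0.87, 0.91])  # e.g. from a satellite product

gpp = epsilon * fapar * par                 # g C m-2 d-1
print("daily GPP (g C m-2 d-1):", np.round(gpp, 1))
print("period total:", round(float(gpp.sum()), 1), "g C m-2")
```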
Abstract:
The problem of negative values of the interaction parameter in the Frumkin equation has been analyzed with respect to the adsorption of nonionic molecules on an energetically homogeneous surface. For this purpose, the adsorption states of a homologous series of ethoxylated nonionic surfactants at the air/water interface have been determined using four different models and literature data (surface tension isotherms). The results obtained with the Frumkin adsorption isotherm imply repulsion between the adsorbed species (corresponding to negative values of the interaction parameter), whereas the classical lattice theory for an energetically homogeneous surface (e.g., water/air) admits attraction alone. It appears that this serious contradiction can be overcome by assuming heterogeneity in the adsorption layer, that is, effects of partial condensation (formation of aggregates) on the surface. Such a phenomenon is suggested in the Fainerman-Lucassen-Reynders-Miller (FLM) 'aggregation model'. Despite the limitations of the latter model (e.g., monodispersity of the aggregates), we have been able to estimate the sign and order of magnitude of Frumkin's interaction parameter and the range of aggregation numbers of the surface species.
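For reference, in one common convention the Frumkin isotherm and the corresponding surface equation of state can be written as follows (surface coverage theta, interaction parameter a); the sign convention is an assumption, as conventions differ between authors.

```latex
% Frumkin isotherm and surface equation of state, one common convention
% (\theta = \Gamma/\Gamma_\infty is the surface coverage):
\begin{align}
  b\,c &= \frac{\theta}{1-\theta}\,\exp(-2a\theta), \\
  \Pi  &= -RT\,\Gamma_\infty\left[\ln(1-\theta) + a\theta^2\right].
\end{align}
% In this convention a > 0 corresponds to attraction between adsorbed
% molecules and a < 0 to repulsion; fits of surface-tension isotherms that
% return a < 0 motivate the surface-aggregation interpretation above.
```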
Abstract:
The dynamics of mechanical milling in a vibratory mill have been studied by means of mechanical vibration, shock measurements, computer simulation and microstructural evolution measurements. Two distinct modes of ball motion during milling, periodic and chaotic vibration, were observed. Mill operation in the regime of periodic vibration, in which each collision provides a constant energy input to milled powders, enabled a quantitative description of the effect of process parameters on system dynamics. An investigation of the effect of process parameters on microstructural development in an austenitic stainless steel showed that the impact force associated with collision events is an important process parameter for characterizing microstructural evolution.
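A toy numerical sketch of the periodic-versus-chaotic distinction: a single ball bouncing on a sinusoidally vibrating surface, a common idealization of vibratory-mill dynamics. The amplitude, frequency, and restitution coefficient below are illustrative assumptions, not the paper's mill parameters.

```python
# Toy vibratory-mill model: a ball bouncing on a vibrating surface.
# A tight cluster of impact phases indicates periodic motion (constant
# energy per collision); a broad spread suggests chaotic bouncing.
import numpy as np

A, freq, g, e = 2e-3, 20.0, 9.81, 0.6    # amplitude (m), Hz, gravity, restitution
omega = 2 * np.pi * freq
dt = 1e-5

y, v = A + 1e-3, 0.0                     # ball starts just above the surface
impact_phases = []
t = 0.0
while t < 5.0:                           # simulate 5 s of shaking
    y_s = A * np.sin(omega * t)          # vibrating surface position
    v_s = A * omega * np.cos(omega * t)  # surface velocity
    if y <= y_s and v < v_s:             # impact: ball meets surface from above
        v = v_s - e * (v - v_s)          # restitution on the relative velocity
        impact_phases.append((omega * t) % (2 * np.pi))
    v -= g * dt                          # free flight under gravity
    y += v * dt
    t += dt

phases = np.array(impact_phases[len(impact_phases) // 2:])  # drop transient
print(f"{len(impact_phases)} impacts, impact-phase std = {phases.std():.2f} rad")
```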
Abstract:
Computer-assisted learning has an important role in the teaching of pharmacokinetics to health-sciences students because it shifts the emphasis from the purely mathematical domain to an 'experiential' domain in which graphical and symbolic representations of actions and their consequences form the major focus for learning. Basic pharmacokinetic concepts can be taught by experimenting with how dose and dosage interval interact with drug absorption (e.g. absorption rate, bioavailability), drug distribution (e.g. volume of distribution, protein binding) and drug elimination (e.g. clearance) to determine drug concentrations, using library ('canned') pharmacokinetic models. Such 'what if' approaches are found in calculator-simulators such as PharmaCalc, Practical Pharmacokinetics and PK Solutions. Others, such as SAAM II, ModelMaker, and Stella, represent the 'systems dynamics' genre, which requires the user to conceptualise a problem and formulate the model on-screen using symbols, icons, and directional arrows. The choice of software should be determined by the aims of the subject/course, the experience and background of the students in pharmacokinetics, and institutional factors including the price and networking capabilities of the package(s). Enhanced learning may result if the computer teaching of pharmacokinetics is supported by tutorials, especially where the techniques are applied to solving problems in which the link with healthcare practices is clearly established.
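The 'what if' style of experiment is easy to convey with a minimal sketch: a one-compartment model with first-order absorption, superposed over repeated oral doses. All parameter values below are illustrative, not drawn from any of the packages named.

```python
# Sketch: one-compartment oral dosing with superposition over repeated doses.
import numpy as np

F, dose = 0.9, 500.0          # bioavailability, dose (mg)
V, CL = 40.0, 4.0             # volume of distribution (L), clearance (L/h)
ka = 1.2                      # absorption rate constant (1/h)
ke = CL / V                   # elimination rate constant (1/h)
tau, n_doses = 12.0, 6        # dosing interval (h), number of doses

t = np.linspace(0, tau * n_doses, 500)

def conc_single(t_after):
    """Concentration-time profile for a single oral dose (mg/L)."""
    c = (F * dose * ka) / (V * (ka - ke)) * (
        np.exp(-ke * t_after) - np.exp(-ka * t_after))
    return np.where(t_after >= 0, c, 0.0)   # nothing before the dose

# Superposition: total concentration is the sum over all past doses.
c_total = sum(conc_single(t - i * tau) for i in range(n_doses))
print(f"Cmax ~= {c_total.max():.2f} mg/L, final trough ~= {c_total[-1]:.2f} mg/L")
```

Re-running with a halved dosing interval or a lower clearance shows the accumulation effects such tools let students explore interactively.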
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). Joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest power was observed for genetic models with simple inheritance; e.g., the power was greater than 90% for the one-major-gene model, regardless of population size and major-gene heritability. Lower power was observed for genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two-major-gene model with a heritability of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that two major genes were segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
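A compact sketch of the data-generating side of such a power study: one additive major gene plus a normally distributed polygenic background, sampled for the six basic generations. The effect size, variance components, and population size are arbitrary illustration values, not those of the paper.

```python
# Sketch: simulating the six basic generations under a mixed-inheritance
# model (one additive major gene + polygenes + environment).
import numpy as np

rng = np.random.default_rng(42)
a_major = 2.0                  # additive effect of the major gene
sd_poly, sd_env = 1.0, 0.8     # polygenic and environmental SDs
n = 200                        # individuals per generation

def phenotypes(geno_probs):
    """Sample n phenotypes given P(genotype = aa, Aa, AA)."""
    geno = rng.choice([-1, 0, 1], size=n, p=geno_probs)  # additive score
    return (a_major * geno
            + rng.normal(0, sd_poly, n)                  # polygenes
            + rng.normal(0, sd_env, n))                  # environment

generations = {
    "P1": phenotypes([0.0, 0.0, 1.0]),    # AA parent
    "P2": phenotypes([1.0, 0.0, 0.0]),    # aa parent
    "F1": phenotypes([0.0, 1.0, 0.0]),    # all Aa
    "F2": phenotypes([0.25, 0.5, 0.25]),  # 1:2:1 segregation
    "B1": phenotypes([0.0, 0.5, 0.5]),    # F1 x P1
    "B2": phenotypes([0.5, 0.5, 0.0]),    # F1 x P2
}
for name, y in generations.items():
    print(f"{name}: mean={y.mean():+.2f}  var={y.var():.2f}")
# A power study repeats this many times, fits competing segregation models
# by maximum likelihood, and counts how often the true model is selected.
```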
Abstract:
Computer simulation was used to suggest potential selection strategies for beef cattle breeders with different mixes of clients between two potential markets. The traditional market paid on the basis of carcass weight (CWT), while a new market considered marbling grade in addition to CWT as a basis for payment. Both markets instituted discounts for CWT in excess of 340 kg and light carcasses below 300 kg. Herds were simulated for each price category on the carcass weight grid for the new market. This enabled the establishment of phenotypic relationships among the traits examined [CWT, percent intramuscular fat (IMF), carcass value in the traditional market, carcass value in the new market, and the expected proportion of progeny in elite price cells in the new market pricing grid]. The appropriateness of breeding goals was assessed on the basis of client satisfaction. Satisfaction was determined by the equitable distribution of available stock between markets combined with the assessment of the utility of the animal within the market to which it was assigned. The best goal for breeders with predominantly traditional clients was a CWT in excess of 330 kg, while that for breeders with predominantly new market clients was a CWT of between 310 and 329 kg and with a marbling grade of AAA in the Ontario carcass pricing system. For breeders who wished to satisfy both new and traditional clients, the optimal CWT was 310-329 kg and the optimal marbling grade was AA-AAA. This combination resulted in satisfaction levels of greater than 75% among clients, regardless of the distribution of the clients between the traditional and new marketplaces.
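A minimal sketch of how such a pricing grid can be applied to simulated carcasses, using the weight discounts described above (light below 300 kg, heavy above 340 kg). The base prices, discount sizes, marbling premium, and trait distributions are hypothetical figures, not the study's grid.

```python
# Sketch: valuing simulated carcasses on a two-market pricing grid.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
cwt = rng.normal(320, 18, n)                         # carcass weight (kg)
imf = np.clip(rng.normal(5.0, 1.5, n), 0, None)      # % intramuscular fat

def weight_adjust(price_per_kg):
    """Apply light/heavy carcass discounts to a base $/kg price."""
    p = np.full(n, price_per_kg)
    p[cwt < 300] -= 0.40                             # light-carcass discount
    p[cwt > 340] -= 0.40                             # heavy-carcass discount
    return p

value_traditional = weight_adjust(4.00) * cwt
marbling_premium = np.where(imf >= 6.0, 0.50, 0.0)   # AAA-style premium ($/kg)
value_new = (weight_adjust(4.00) + marbling_premium) * cwt

in_sweet_spot = (cwt >= 310) & (cwt <= 329) & (imf >= 6.0)
print(f"mean carcass value: traditional ${value_traditional.mean():.0f}, "
      f"new ${value_new.mean():.0f}")
print(f"share in the 310-329 kg / high-marbling cell: {in_sweet_spot.mean():.1%}")
```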
Abstract:
Genetic research on the risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap between risk factors for initiation of use and risk factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases in estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor the selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, and under certain distributional assumptions, recovery of the simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.
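The selection problem at the heart of these biases can be shown in a few lines: dependence liability is observable only among initiators, so naive cross-twin correlations are computed on a selected subsample. The heritabilities, genetic correlation, and thresholds below are illustrative, not estimates from the paper.

```python
# Sketch: MZ-twin liabilities for Initiation and Dependence, with Dependence
# observable only in initiators -- the missing-data structure discussed above.
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 10000
h2_init, h2_dep, r_g = 0.5, 0.4, 0.3   # heritabilities, genetic correlation

def twin_liabilities(shared_a):
    """Initiation/dependence liabilities for one twin, given the
    additive-genetic factors (which MZ twins share fully)."""
    a_i, a_d = shared_a
    e_i, e_d = rng.standard_normal(2 * n_pairs).reshape(2, -1)
    init = np.sqrt(h2_init) * a_i + np.sqrt(1 - h2_init) * e_i
    dep = (np.sqrt(h2_dep) * (r_g * a_i + np.sqrt(1 - r_g**2) * a_d)
           + np.sqrt(1 - h2_dep) * e_d)
    return init, dep

a = rng.standard_normal(2 * n_pairs).reshape(2, -1)   # shared genetic factors
init1, dep1 = twin_liabilities(a)
init2, dep2 = twin_liabilities(a)                     # same genes, new environment

initiated1, initiated2 = init1 > 0.0, init2 > 0.0     # threshold at the median
both = initiated1 & initiated2
# Dependence is observable only among initiators: correlations computed on
# this selected subsample are biased relative to the full (latent) sample.
r_full = np.corrcoef(dep1, dep2)[0, 1]
r_selected = np.corrcoef(dep1[both], dep2[both])[0, 1]
print(f"MZ dependence-liability correlation: full sample {r_full:.2f}, "
      f"initiators only {r_selected:.2f}")
```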