Abstract:
This thesis presents ab initio studies of two kinds of physical systems, quantum dots and bosons, using two program packages, of which the bosonic one has mainly been developed by the author. The implemented models, i.e., configuration interaction (CI) and coupled cluster (CC), take the correlated motion of the particles into account and provide a hierarchy of computational schemes, at the top of which the exact solution, within the limit of the single-particle basis set, is obtained. The theory underlying the models is presented in some detail, in order to provide insight into the approximations made and the circumstances under which they hold. Some of the computational methods are also highlighted. In the final sections the results are summarized. The CI and CC calculations on multiexciton complexes in self-assembled semiconductor quantum dots are presented and compared, along with radiative and non-radiative transition rates. Full CI calculations on quantum rings and double quantum rings are also presented. In the latter case, experimental and theoretical results from the literature are re-examined and an alternative explanation for the reported photoluminescence spectra is found. The boson program is first applied to a fictitious model system consisting of bosonic electrons in a central Coulomb field, for which CI at the singles and doubles level is found to account for almost all of the correlation energy. Finally, the boson program is employed to study Bose-Einstein condensates confined in different anisotropic trap potentials. The effects of the anisotropy on the relative correlation energy are examined, as well as the effect of varying the interaction potential.
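For orientation, the CI and CC hierarchies referred to here can be written schematically in standard quantum-chemistry notation (a generic sketch, not the author's specific implementation):

```latex
\Psi_{\mathrm{CI}} = \bigl(1 + \hat{C}_1 + \hat{C}_2 + \dots\bigr)\,\Phi_0,
\qquad
\Psi_{\mathrm{CC}} = e^{\hat{T}}\,\Phi_0,
\quad
\hat{T} = \hat{T}_1 + \hat{T}_2 + \dots
```

Truncating after the doubles operators gives the CISD and CCSD levels; retaining all excitation operators recovers full CI, the exact solution within the chosen single-particle basis set.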
Abstract:
In this work, a physically based analytical quantum threshold voltage model for the triple-gate long-channel metal oxide semiconductor field effect transistor is developed. The proposed model is based on the analytical solution of the two-dimensional Poisson and two-dimensional Schrödinger equations. The proposed model is extended to short-channel devices by including a semi-empirical correction. The impact of effective-mass variation with film thickness is also discussed using the proposed model. All models are fully validated against a professional numerical device simulator for a wide range of device geometries.
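The paper's analytical model is not reproduced in the abstract. As a rough, generic illustration of why quantum confinement raises the threshold voltage in a thin-film device, a textbook particle-in-a-box estimate of the ground-subband energy can be sketched; the silicon transverse effective mass of 0.19 m0 and the film thicknesses are illustrative assumptions, not values from the paper:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M0   = 9.1093837015e-31  # electron rest mass, kg
Q    = 1.602176634e-19   # elementary charge, C

def subband_shift_eV(t_film_nm, m_eff=0.19):
    """Ground-subband energy (eV) of an infinite square well of width
    t_film: E1 = (hbar*pi/t)^2 / (2 m*). A crude proxy for the quantum
    contribution to the threshold voltage of a thin-film transistor."""
    t = t_film_nm * 1e-9
    e1 = (HBAR * math.pi / t) ** 2 / (2.0 * m_eff * M0)
    return e1 / Q
```

The 1/t² scaling of this estimate is why both the film thickness and the effective-mass variation with thickness matter to the threshold voltage, as the abstract emphasizes.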
Abstract:
Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measure of belief and plausibility in the results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications.
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change.
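Dempster's rule of combination, the default evidence-pooling rule in D-S theory, can be sketched as follows; the two-hypothesis frame {drought, wet} and the mass values used below are illustrative, not the paper's data:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: pool two basic probability assignments (bpa),
    each a dict mapping frozenset focal elements to masses summing to 1.
    Mass from pairs with a non-empty intersection is kept; mass from
    disjoint pairs is the conflict K, renormalized away by 1/(1 - K)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    norm = 1.0 / (1.0 - conflict)
    return {s: norm * w for s, w in combined.items()}
```

Because mass can sit on the composite set {drought, wet} itself, ignorance is represented explicitly rather than being split between the singletons, which is the advantage over a purely probabilistic treatment that the abstract highlights.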
Abstract:
Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking the folding pathways and in creating the energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method against known results in the literature and data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
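In the HP lattice models used for validation, a conformation is conventionally scored at -1 for every pair of H residues that sit on adjacent lattice sites without being adjacent along the chain. A minimal 2D scorer in that spirit (the function name and the unit-energy convention are the usual textbook choices, not taken from the paper):

```python
def hp_energy(seq, coords):
    """HP-model energy on a 2D square lattice: -1 for every pair of H
    residues that are lattice neighbors but not chain neighbors.
    seq is a string of 'H'/'P'; coords a list of (x, y) lattice sites."""
    pos = {tuple(c): i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != 'H':
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = pos.get((x + dx, y + dy))
            # count each contact once (j > i + 1 skips bonded neighbors)
            if j is not None and j > i + 1 and seq[j] == 'H':
                energy -= 1
    return energy
```

Exhaustive enumeration over all self-avoiding compact conformations of short chains, as mentioned in the abstract, amounts to minimizing this score over conformations (for folding) or over sequences (for design).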
Abstract:
Homomorphic analysis and pole-zero modeling of electrocardiogram (ECG) signals are presented in this paper. Four typical ECG signals are considered and deconvolved into their minimum- and maximum-phase components through cepstral filtering, with a view to studying the possibility of more efficient feature selection from the component signals for diagnostic purposes. The complex cepstra of the signals are linearly filtered to extract the basic wavelet and the excitation function. The ECG signals are, in general, mixed phase; hence, exponential weighting is applied to aid deconvolution of the signals. The basic wavelet for a normal ECG approximates the action potential of the muscle fiber of the heart, and the excitation function corresponds to the excitation pattern of the heart muscles during a cardiac cycle. The ECG signals and their components are pole-zero modeled, and the pole-zero pattern of the models can give a clue for classifying normal and abnormal signals. Moreover, storing only the parameters of the model can yield a data reduction of more than 3:1 for normal signals sampled at a moderate 128 samples/s.
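The deconvolution step rests on the convolution property of the complex cepstrum: convolution in time becomes addition in the cepstral domain, so linear filtering (liftering) there can separate a smooth basic wavelet from its excitation. A minimal NumPy sketch of the transform itself (the naive phase unwrapping here omits the exponential weighting the authors apply to mixed-phase signals):

```python
import numpy as np

def complex_cepstrum(x):
    """Complex cepstrum: inverse FFT of log|X| + j * unwrapped phase.
    For signals whose convolution is given, the cepstra add, which is
    what makes liftering a deconvolution tool."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real
```

In the homomorphic scheme described above, a low-quefrency window applied to this cepstrum recovers the basic wavelet, and the high-quefrency remainder carries the excitation function.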
Abstract:
The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and subsequently the physical mechanisms leading to its initiation and sustenance are explored. As a case study, the heavy precipitation event that led to 65 cm of rainfall accumulation in a span of around 6 h (1430 LT-2030 LT) over Santacruz (Mumbai, India), on 26 July, 2005, is selected. Three sets of numerical experiments have been conducted. The first set of experiments (EXP1) consisted of a four-member ensemble, and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble, with a four-level one-way nested integration and grid spacing of 54, 18, 6 and 1 km. The model was able to simulate a realistic spatial structure with the 54, 18, and 6 km grids; however, with the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set of experiments (EXP3) consisted of a five-member ensemble, with a four-level one-way nesting and grid spacing of 54, 18, 6, and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to observations, as compared to EXP2. Specifically, among all experiments, the timing of maximum rainfall, the abrupt increase in rainfall intensities, which was a major feature of this event, and the rainfall intensities simulated in EXP3 (at 6 km resolution) were closest to observations. Analysis of the physical mechanisms causing the initiation and sustenance of the event reveals some interesting aspects. Deep convection was found to be initiated by mid-tropospheric convergence that extended to lower levels during the later stage. 
In addition, there was a high negative vertical gradient of equivalent potential temperature suggesting strong atmospheric instability prior to and during the occurrence of the event. Finally, the presence of a conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
Abstract:
Computational fluid dynamics has reached a stage where the flow field in practical situations can be predicted, both to aid design and to probe the fundamental flow physics in order to understand and resolve issues in fundamental fluid mechanics. The study examines the computation of reacting flows. After exploring the conservation equations for species and energy, the methods of closing the reaction rate terms in turbulent flow are examined briefly. Two cases of computation in which combustion-flow interaction plays an important role are discussed, to illustrate the computational aspects and the physical insight that can be gained from reacting flow computation.
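The closure problem for the reaction rate terms stems from the strong nonlinearity of the rate in temperature: averaging the governing equations over turbulent fluctuations does not let one simply insert the mean temperature into the rate law. A toy demonstration (the Arrhenius parameters and the Gaussian temperature fluctuations are illustrative assumptions):

```python
import math
import random

def arrhenius(T, A=1e10, Ta=15000.0):
    """Arrhenius rate k(T) = A * exp(-Ta / T), with an assumed
    activation temperature Ta = Ea/R (kelvin) and pre-exponential A."""
    return A * math.exp(-Ta / T)

# Average the rate over Gaussian temperature fluctuations. Because k(T)
# is strongly convex in this regime, the mean of the rate exceeds the
# rate at the mean temperature -- the essence of the closure problem.
random.seed(0)
T_mean, T_rms = 1500.0, 150.0
samples = [random.gauss(T_mean, T_rms) for _ in range(100_000)]
k_at_mean_T = arrhenius(T_mean)
mean_k = sum(arrhenius(T) for T in samples) / len(samples)
```

Turbulent combustion models close this gap with assumed probability density functions of temperature or with flamelet-type approaches rather than using mean quantities directly.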
Abstract:
Even research models of helicopter dynamics often lead to a large number of equations of motion with periodic coefficients; and Floquet theory is a widely used mathematical tool for dynamic analysis. Presently, three approaches are used in generating the equations of motion. These are (1) general-purpose symbolic processors such as REDUCE and MACSYMA, (2) a special-purpose symbolic processor, DEHIM (Dynamic Equations for Helicopter Interpretive Models), and (3) completely numerical approaches. In this paper, comparative aspects of the first two purely algebraic approaches are studied by applying REDUCE and DEHIM to the same set of problems. These problems range from a linear model with one degree of freedom to a mildly non-linear multi-bladed rotor model with several degrees of freedom. Further, computational issues in applying Floquet theory are also studied, which refer to (1) the equilibrium solution for periodic forced response together with the transition matrix for perturbations about that response and (2) a small number of eigenvalues and eigenvectors of the unsymmetric transition matrix. The study showed the following: (1) compared to REDUCE, DEHIM is far more portable and economical, but it is also less user-friendly, particularly during learning phases; (2) the problems of finding the periodic response and eigenvalues are well conditioned.
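The Floquet computation referred to above reduces, in its simplest form, to integrating the state-transition (monodromy) matrix over one period and taking its eigenvalues, the Floquet multipliers, which govern the stability of perturbations about the periodic response. A minimal fixed-step RK4 sketch (generic, not the machinery of DEHIM or REDUCE):

```python
import numpy as np

def monodromy(A, T, steps=2000):
    """Integrate the matrix ODE X'(t) = A(t) X(t) over one period T with
    fixed-step RK4, starting from the identity. The eigenvalues of the
    result are the Floquet multipliers; all inside the unit circle
    means the periodic solution is stable to small perturbations."""
    n = A(0.0).shape[0]
    X = np.eye(n)
    h = T / steps
    for k in range(steps):
        t = k * h
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

# Undamped oscillator x'' = -x: multipliers must lie on the unit circle
A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
multipliers = np.linalg.eigvals(monodromy(A, 2 * np.pi))
```

For the large periodic-coefficient systems the abstract describes, the practical issues are exactly those it lists: obtaining the periodic forced response, the transition matrix for perturbations about it, and a few eigenvalues of that unsymmetric matrix.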
Abstract:
Molecular dynamics simulation studies on the polyene antifungal antibiotic amphotericin B, its head-to-tail dimeric structure and the lipid-amphotericin B complex demonstrate interesting features of the flexibilities within the molecule, and define the optimal interactions for the formation of a stable dimeric structure and of a stable complex with phospholipid.
Abstract:
Various factors controlling the preferred facial selectivity in the reductions of a number of sterically unbiased ketones have been evaluated using a semiempirical MO procedure. MNDO-optimized geometries do not reveal any significant ground-state distortions which can be correlated with the observed face selectivities. Electrostatic effects due to an approaching reagent were modeled by placing a test negative charge at a fixed distance from the carbonyl carbon on each of the two faces. A second series of calculations was carried out using the hydride ion as a test nucleophile. The latter calculations effectively include orbital interactions involving the σ and σ* orbitals of the newly formed bond in the reaction. The computed energy differences with the charge model are generally much larger than those with the hydride ion. However, both models lead to predictions which are qualitatively consistent with the experimentally determined facial preferences for most of the systems. Thus, electrostatic interactions between the nucleophile and the substrate seem to effectively determine the face selectivities in these molecules. However, there are a few exceptions in which orbital interactions are found to contribute significantly and occasionally reverse the preference dictated by electrostatic effects. The remarkable success of the hydride model calculations, in spite of retaining the unperturbed geometries of the substrates, points to the unimportance of torsional effects and orbital distortions associated with the pyramidalized carbonyl unit in the transition state in most of the substrates considered. Additional experimental results are reported which provide useful calibration for the present computational approach.
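The test-charge model described above amounts to comparing the Coulomb interaction energy of a probe charge placed on either face of the carbonyl. A toy sketch of that comparison; the partial charges, geometry, and the bare 1/r form are invented illustrative values, not the MNDO-derived quantities:

```python
import math

def coulomb_energy(q_test, r_test, charges):
    """Coulomb interaction energy (arbitrary units, bare 1/r) of a test
    charge at r_test with a list of (x, y, z, q) point charges."""
    return sum(q * q_test / math.dist(r_test, (x, y, z))
               for x, y, z, q in charges)

# Hypothetical substrate: a C=O dipole plus one remote polar group that
# desymmetrizes the two carbonyl faces (all charges/positions invented)
substrate = [
    (0.0, 0.0, 0.0,  0.4),   # carbonyl C, partial positive
    (0.0, 0.0, 1.2, -0.4),   # carbonyl O, partial negative
    (0.0, 3.0, -1.0, -0.3),  # remote electron-rich substituent
]
# Probe each face with a negative test charge at a fixed distance
top    = coulomb_energy(-1.0, (0.0,  2.0, 0.0), substrate)
bottom = coulomb_energy(-1.0, (0.0, -2.0, 0.0), substrate)
# The face farther from the electron-rich group is electrostatically
# favored for nucleophilic attack (lower interaction energy)
```

The sign of the energy difference between the two faces is the model's prediction of the preferred face, mirroring the comparison made in the abstract.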
Abstract:
The modes of binding of Gp(2',5')A, Gp(2',5')C, Gp(2',5')G and Gp(2',5')U to RNase T1 have been determined by computer modelling studies. All these dinucleoside phosphates assume extended conformations in the active site, leading to better interactions with the enzyme. The 5'-terminal guanine of all these ligands is placed in the primary base binding site of the enzyme in an orientation similar to that of 2'-GMP in the RNase T1-2'-GMP complex. The 2'-terminal purines are placed close to the hydrophobic pocket formed by the residues Gly71, Ser72, Pro73 and Gly74, which occur in a loop region. However, the orientation of the 2'-terminal pyrimidines is different from that of the 2'-terminal purines. This perhaps explains the higher binding affinity of the 2',5'-linked guanine dinucleoside phosphates with 2'-terminal purines compared to those with 2'-terminal pyrimidines. A comparison of the binding of the guanine dinucleoside phosphates with 2',5'- and 3',5'-linkages suggests significant differences in the ribose pucker and in the hydrogen bonding interactions between the catalytic residues and the bound nucleoside phosphate, implying that 2',5'-linked dinucleoside phosphates may not be the ideal ligands to probe the role of the catalytic amino acid residues. A change in the amino acid sequence in the surface loop region formed by the residues Gly71 to Gly74 drastically affects the conformation of the base binding subsite, and this may account for the inactivity of the enzyme with the altered sequence, i.e., with Pro, Gly and Ser at positions 71 to 73, respectively. These results thus suggest that, in addition to recognition and catalytic sites, interactions at the loop regions which constitute the subsite for base binding are also crucial in determining the substrate specificity.
Abstract:
We make an assessment of the impact of projected climate change on forest ecosystems in India. This assessment is based on climate projections of the Regional Climate Model of the Hadley Centre (HadRM3) and the dynamic global vegetation model IBIS for the A2 and B2 scenarios. According to the model projections, 39% of forest grids are likely to undergo vegetation type change under the A2 scenario and 34% under the B2 scenario by the end of this century. However, in many forest-dominant states such as Chhattisgarh, Karnataka and Andhra Pradesh, up to 73%, 67% and 62% of forested grids, respectively, are projected to undergo change. Net Primary Productivity (NPP) is projected to increase by 68.8% and 51.2% under the A2 and B2 scenarios, respectively, and soil organic carbon (SOC) by 37.5% for the A2 and 30.2% for the B2 scenario. Based on the dynamic global vegetation modeling, we present a forest vulnerability index for India, which is based on observed datasets of forest density and forest biodiversity as well as model-predicted vegetation type shift estimates for forested grids. The vulnerability index suggests that the upper Himalayas, the northern and central parts of the Western Ghats and parts of central India are most vulnerable to the projected impacts of climate change, while the Northeastern forests are more resilient. Thus, our study points to the need for developing and implementing adaptation strategies to reduce the vulnerability of forests to projected climate change.