966 results for self-consistent calculation
Abstract:
We investigate the dynamics of localized solutions of the relativistic cold-fluid plasma model in the small but finite amplitude limit, for slightly overcritical plasma density. Adopting a multiple scale analysis, we derive a perturbed nonlinear Schrödinger equation that describes the evolution of the envelope of a circularly polarized electromagnetic field. Retaining terms up to fifth order in the small perturbation parameter, we derive a self-consistent framework for the description of the plasma response in the presence of a localized electromagnetic field. The formalism is applied to standing electromagnetic soliton interactions, and the results are validated by simulations of the full cold-fluid model. To lowest order, a cubic nonlinear Schrödinger equation with a focusing nonlinearity is recovered. Classical quasiparticle theory is used to obtain analytical estimates for the collision time and the minimum distance of approach between solitons. For larger soliton amplitudes, the inclusion of the fifth-order terms is essential for a qualitatively correct description of soliton interactions. The defocusing quintic nonlinearity leads to inelastic soliton collisions, while bound states of solitons do not persist under perturbations in the initial phase or amplitude.
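To the orders retained, the envelope equation has the generic cubic-quintic nonlinear Schrödinger form sketched below; the positive coefficients α and β are placeholders, not the values derived in the paper, which depend on the plasma parameters.

```latex
% Schematic cubic-quintic NLS for the slowly varying envelope a(\xi,\tau):
% the cubic term (+\alpha) is focusing, the quintic term (-\beta) defocusing.
i\,\partial_\tau a + \partial_\xi^2 a
  + \alpha\,\lvert a\rvert^2 a - \beta\,\lvert a\rvert^4 a = 0,
  \qquad \alpha,\beta > 0
```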
Abstract:
The beam properties of tapered semiconductor optical amplifiers emitting at 1.57 μm are analyzed by means of simulations with a self-consistent steady state electro-optical and thermal simulator. The results indicate that the self-focusing caused by carrier lensing is delayed to higher currents for devices with taper angle slightly higher than the free diffraction angle.
Abstract:
An important aspect of Process Simulators for photovoltaics is prediction of defect evolution during device fabrication. Over the last twenty years, these tools have accelerated process optimization, and several Process Simulators for iron, a ubiquitous and deleterious impurity in silicon, have been developed. The diversity of these tools can make it difficult to build intuition about the physics governing iron behavior during processing. Thus, in one unified software environment and using self-consistent terminology, we combine and describe three of these Simulators. We vary structural defect distribution and iron precipitation equations to create eight distinct Models, which we then use to simulate different stages of processing. We find that the structural defect distribution influences the final interstitial iron concentration ([Fe-i]) more strongly than the iron precipitation equations. We identify two regimes of iron behavior: (1) diffusivity-limited, in which iron evolution is kinetically limited and bulk [Fe-i] predictions can vary by an order of magnitude or more, and (2) solubility-limited, in which iron evolution is near thermodynamic equilibrium and the Models yield similar results. This rigorous analysis provides new intuition that can inform Process Simulation, material, and process development, and it enables scientists and engineers to choose an appropriate level of Model complexity based on wafer type and quality, processing conditions, and available computation time.
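As a rough illustration of the two regimes (a minimal sketch, not one of the reviewed Simulators): when the iron diffusion length over a process step exceeds the typical spacing between precipitation sites, iron can equilibrate with the precipitates (solubility-limited); otherwise its evolution remains kinetically limited (diffusivity-limited). The Arrhenius parameters below are representative literature-style values for interstitial iron in silicon, used here only for illustration.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def fe_diffusivity(t_kelvin, d0=1.3e-3, ea=0.68):
    """Arrhenius diffusivity of interstitial Fe in Si, cm^2/s.
    d0 and ea are illustrative values, not those of any specific
    Process Simulator."""
    return d0 * math.exp(-ea / (K_B * t_kelvin))

def regime(t_kelvin, t_seconds, defect_spacing_cm):
    """Crude classifier: if Fe can diffuse to the nearest structural
    defect within the process step, treat the step as
    solubility-limited; otherwise as diffusivity-limited."""
    diffusion_length = math.sqrt(fe_diffusivity(t_kelvin) * t_seconds)
    return ("solubility-limited" if diffusion_length > defect_spacing_cm
            else "diffusivity-limited")

# Example: a 30 min anneal at 600 C with 100 um spacing between defects.
print(regime(873.0, 1800.0, 100e-4))
```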
Abstract:
The gas phase and aqueous thermochemistry and reactivity of nitroxyl (nitrosyl hydride, HNO) were elucidated with multiconfigurational self-consistent field and hybrid density functional theory calculations and continuum solvation methods. The pKa of HNO is predicted to be 7.2 ± 1.0, considerably different from the value of 4.7 reported from pulse radiolysis experiments. The ground-state triplet nature of NO⁻ affects the rates of acid-base chemistry of the HNO/NO⁻ couple. HNO is highly reactive toward dimerization and addition of soft nucleophiles but is predicted to undergo negligible hydration (Keq = 6.9 × 10⁻⁵). HNO is predicted to exist as a discrete species in solution and is a viable participant in the chemical biology of nitric oxide and derivatives.
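A back-of-envelope Henderson-Hasselbalch estimate (not part of the original calculations) shows what the revised pKa implies at physiological pH; note that, because of the spin change between singlet HNO and triplet NO⁻, these equilibrium ratios are approached only slowly.

```python
# Henderson-Hasselbalch estimate of the equilibrium [NO-]/[HNO] ratio;
# an illustration of why the revised pKa matters, not the paper's method.
def base_to_acid_ratio(ph: float, pka: float) -> float:
    return 10.0 ** (ph - pka)

for pka in (7.2, 4.7):  # predicted value vs pulse-radiolysis value
    ratio = base_to_acid_ratio(7.4, pka)
    print(f"pKa = {pka}: [NO-]/[HNO] at pH 7.4 ~ {ratio:.3g}")
# pKa 7.2 -> ~1.6 (HNO and NO- comparable); pKa 4.7 -> ~500 (mostly NO-)
```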
Abstract:
In TJ-II stellarator plasmas, in the electron cyclotron heating regime, an increase in the ion temperature is observed, synchronized with that of the electron temperature, during the transition to the core electron-root confinement (CERC) regime. This rise in ion temperature should be attributed to the joint action of the electron–ion energy transfer (which changes slightly during the CERC formation) and an enhancement of the ion confinement. This improvement must be related to the increase in the positive electric field in the core region. In this paper, we confirm this hypothesis by estimating the ion collisional transport in TJ-II under the physical conditions established before and after the transition to CERC. We calculate a large number of ion orbits in the guiding-centre approximation, taking into account collisions with a background plasma composed of electrons and ions. The ion temperature profile and the thermal flux are calculated in a self-consistent way, so that the change in the ion heat transport can be assessed.
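Such a self-consistent profile calculation can be pictured as a fixed-point iteration between the transport calculation and the temperature profile. The sketch below is structural only: a toy temperature-dependent conductivity stands in for the Monte Carlo guiding-centre orbit calculation with collisions, and all numbers are placeholders.

```python
import numpy as np

rho = np.linspace(0.0, 1.0, 100)             # normalized minor radius
drho = rho[1] - rho[0]
q_source = 0.05 * np.exp(-(rho / 0.3) ** 2)  # toy heating deposition profile

def conductivity(ti):
    # Toy temperature-dependent heat conductivity; a placeholder for
    # the orbit-averaged collisional transport, not the TJ-II code.
    return 0.2 / np.sqrt(np.maximum(ti, 1e-3))

flux = np.cumsum(q_source) * drho            # cumulative flux from the source

ti = np.full_like(rho, 0.05)                 # initial T_i guess (keV)
for iteration in range(500):
    grad_t = -flux / conductivity(ti)        # flux = -chi(T) dT/drho
    # Integrate the gradient inward from a fixed edge value T_i(1) = 0.02.
    ti_new = 0.02 - np.cumsum(grad_t[::-1])[::-1] * drho
    ti_new = 0.5 * ti + 0.5 * ti_new         # relaxed fixed-point update
    if np.max(np.abs(ti_new - ti)) < 1e-8:
        break
    ti = ti_new

print(f"T_i(0) = {ti[0]:.3f} keV after {iteration + 1} iterations")
```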
Abstract:
The study of long-term evolution of neutron star (NS) magnetic fields is key to understanding the rich diversity of NS observations, and to unifying their nature despite the different emission mechanisms and observed properties. Such studies in principle permit a deeper understanding of the most important parameters driving the apparent variety of NS classes (e.g. radio pulsars, magnetars, X-ray dim isolated NSs, gamma-ray pulsars). We describe, for the first time, the results from self-consistent magnetothermal simulations that consider not only the effects of Hall-driven field dissipation in the crust but also include a complete set of proposed driving forces in a superconducting core. We emphasize how each of these core-field processes drives magnetic evolution and affects observables, and show that when all forces are considered together in vectorial form, the net expulsion of core magnetic flux is negligible, and will have no observable effect in the crust (and consequently on the observed surface emission) on megayear time-scales. Our new simulations suggest that strong magnetic fields in NS cores (and their signatures on the NS surface) will persist long after the crustal magnetic field has evolved and decayed, due to the weak combined effects of dissipation and expulsion in the stellar core.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A recent all-object spectroscopic survey centred on the Fornax cluster of galaxies has discovered a population of subluminous and extremely compact members, called 'ultra-compact dwarf' (UCD) galaxies. In order to clarify the origin of these objects, we have used self-consistent numerical simulations to study the dynamical evolution that a nucleated dwarf galaxy would undergo if orbiting the centre of the Fornax cluster and suffering from its strong tidal gravitational field. We find that the outer stellar components of a nucleated dwarf are removed by the strong tidal field of the cluster, whereas the nucleus manages to survive as a result of its initially compact nature. The resulting naked nucleus is found to have physical properties (e.g. size and mass) similar to those observed for UCDs. We also find that although this formation process does not depend strongly on the initial total luminosity of the nucleated dwarf, it does depend on the radial density profile of the dark halo, in the sense that UCDs are less likely to be formed from dwarfs embedded in dark matter haloes with central 'cuspy' density profiles. Our simulations also suggest that very massive and compact stellar systems can be rapidly and efficiently formed in the central regions of dwarfs through the merging of smaller star clusters. We provide some theoretical predictions on the total number and radial number density profile of UCDs in a cluster and their dependence on cluster mass.
Abstract:
Understanding and explaining emergent constitutive laws in the multi-scale evolution from point defects, dislocations and two-dimensional defects to plate tectonic scales is an arduous challenge in condensed matter physics. The Earth appears to be the only planet known to have developed stable plate tectonics as a means of getting rid of its heat. The emergence of plate tectonics out of mantle convection appears to rely intrinsically on the capacity to form extremely weak faults in the top 100 km of the planet. These faults have a memory of at least several hundred million years, yet they appear to rely on the effects of water on line defects. This important phenomenon was first discovered in the laboratory and dubbed 'hydrolytic weakening'. At the large scale it explains cycles of co-located resurgence of plate generation and consumption (the Wilson cycle), but the exact physics underlying the process itself, and the enormous span of scales involved, remain unclear. We present an attempt to use the multi-scale non-equilibrium thermodynamic energy evolution inside the deforming lithosphere to replace phenomenological laws with laws derived from basic scaling quantities, develop self-consistent weakening laws at the lithospheric scale, and give a fully coupled deformation-weakening constitutive framework. From the meso- to the plate scale, we encounter in a stepwise manner three basic domains governed by the diffusion/reaction time scales of grain growth, thermal diffusion and, finally, water mobility through point defects in the crystalline lattice. The latter process governs the planetary scale and controls the stability of the planet's heat transfer mode.
Abstract:
A total pressure apparatus has been developed to measure vapour-liquid equilibrium data on binary mixtures at atmospheric and sub-atmospheric pressures. The method gives isothermal data which can be obtained rapidly. Only measurements of total pressure are made, as a direct function of the synthetic liquid-phase composition; the vapour-phase composition is deduced through the Gibbs-Duhem relationship. The need to analyse either phase is thus eliminated, and with it the errors introduced by sampling and analysis. One essential requirement is that the pure components be degassed completely, since any deficiency in degassing would introduce errors into the measured pressures. A similarly essential requirement is that the central apparatus be absolutely leak-tight, as any leakage of air either into or out of the apparatus would introduce erroneous pressure readings. The apparatus was commissioned by measuring the saturated vapour pressures of both degassed water and ethanol as a function of temperature. The measured pressure-temperature data for degassed water were compared directly with data in the literature, with good agreement. Pressure-temperature data were similarly measured for ethanol, methanol and cyclohexane, and where possible a direct comparison was made with the literature data. Good agreement between the pure-component data of this work and those available in the literature demonstrates, first, that a satisfactory degassing procedure was achieved and, second, that the pressure-temperature measurements are consistent for any one component; since this is true for a number of components, the measurements of both temperature and pressure are self-consistent and of sufficient accuracy, with the precision of the separate means of measuring pressure and temperature mutually compatible. The liquid mixtures studied were ethanol-water, methanol-water and ethanol-cyclohexane. The total pressure was measured as the composition inside the equilibrium cell was varied at a set temperature. This gave P-T-x data sets for each mixture at a range of temperatures. A standard fitting package from the literature was used to reduce the raw data to yield y-values, completing the x-y-P-T data sets. A conventional consistency test could not be applied to the P-T-x data sets, as no y-values were obtained during the experimental measurements. In general, satisfactory agreement was found between the data of this work and those available in the literature. For some runs discrepancies were observed, and further work is recommended to eliminate the problems identified.
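The data-reduction step (obtaining y-values from P-T-x data via the Gibbs-Duhem relationship) is conventionally done by a Barker-type regression. The sketch below is a minimal illustration assuming a two-parameter Margules activity model, an ideal vapour phase and made-up pressures; it is not the fitting package actually used in this work.

```python
import numpy as np
from scipy.optimize import least_squares

def margules_gamma(x1, a12, a21):
    """Activity coefficients from the two-parameter Margules model."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1**2 * (a21 + 2.0 * (a12 - a21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

def total_pressure(x1, a12, a21, p1_sat, p2_sat):
    """Modified Raoult's law with an ideal vapour phase."""
    g1, g2 = margules_gamma(x1, a12, a21)
    return x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat

# Illustrative isothermal P-x data (kPa); placeholders, not measured values.
x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
p_meas = np.array([12.1, 15.8, 17.6, 18.3, 18.0])
p1_sat, p2_sat = 17.0, 9.0  # pure-component vapour pressures at this T

# Barker's method: fit the model parameters to the measured pressures.
fit = least_squares(
    lambda a: total_pressure(x1, a[0], a[1], p1_sat, p2_sat) - p_meas,
    x0=[0.5, 0.5])
a12, a21 = fit.x

# Complete the x-y-P-T set: vapour composition from the fitted model.
g1, _ = margules_gamma(x1, a12, a21)
y1 = x1 * g1 * p1_sat / total_pressure(x1, a12, a21, p1_sat, p2_sat)
print(np.round(y1, 3))
```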
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment is in the main philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because, as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model; this is worsened by the minimal provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which probably results in reducing the number of degrees of freedom of its user. Fourth, it increases the dependence of its user upon its supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed which is consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments: first, the user's characteristics and decision-making practices and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding support, control and operation. The framework is built gradually as the discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. These are believed to constitute a nucleus around which not only can packages be more effectively designed, but which is also applicable to the design of many other man-machine systems.
Abstract:
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC)-binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method, a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, and an iterative self-consistent (ISC) PLS-based additive method. The first two can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets; the third is a recently developed extension to the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d), and I-E(k). In this chapter we give a step-by-step guide to the prediction process and its reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, are freely available online at http://www.jenner.ac.uk/MHCPred.
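The additive method models log affinity as a constant plus one contribution per amino acid per sequence position. The sketch below illustrates that idea with scikit-learn's PLSRegression on binary indicator variables; the peptides and affinities are toy placeholders, not AntiJen data, and the published work used the SYBYL package rather than this code.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide):
    """One-hot indicator vector: 20 variables per sequence position."""
    vec = np.zeros(len(peptide) * 20)
    for pos, aa in enumerate(peptide):
        vec[pos * 20 + AMINO_ACIDS.index(aa)] = 1.0
    return vec

# Toy 9-mer peptides with made-up pIC50 values for illustration only.
peptides = ["ALAKAAAAM", "ALAKAAAAN", "ALAKAAAAR", "GLAKAAAAM",
            "ILAKAAAAM", "ALAKAAAAV", "KLAKAAAAM", "ALAKAAAAT"]
pic50 = np.array([6.2, 5.9, 5.1, 6.5, 6.8, 6.0, 5.5, 5.8])

# Fit the additive model by PLS: affinity = const + sum of per-position
# amino-acid contributions encoded in the indicator matrix.
X = np.array([encode(p) for p in peptides])
model = PLSRegression(n_components=2).fit(X, pic50)

# Predict the affinity of an unseen peptide.
print(model.predict(encode("VLAKAAAAM").reshape(1, -1))[0, 0])
```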
Abstract:
The relative distribution of rare-earth ions R3+ (Dy3+ or Ho3+) in the phosphate glass RAl0.30P3.05O9.62 was measured by employing the method of isomorphic substitution in neutron diffraction and, by taking the role of Al into explicit account, a self-consistent model of the glass structure was developed. The glass network is found to be made from corner sharing PO4 tetrahedra in which there are, on average, 2.32(9) terminal oxygen atoms, OT, at 1.50(1) Å and 1.68(9) bridging oxygen atoms, OB, at 1.60(1) Å. The network modifying R3+ ions bind to an average of 6.7(1) OT and are distributed such that 7.9(7) R–R nearest neighbours reside at 5.62(6) Å. The Al3+ ion also has a network modifying role in which it helps to strengthen the glass through the formation of OT–Al–OT linkages. The connectivity of the R-centred coordination polyhedra in (M2O3)x(P2O5)1−x glasses, where M3+ denotes a network modifying cation (R3+ or Al3+), is quantified in terms of a parameter fs. Methods for reducing the clustering of rare-earth ions in these materials are then discussed, based on a reduction of fs via the replacement of R3+ by Al3+ at fixed total modifier content or via a change of x to increase the number of OT available per network modifying M3+ cation.
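The quoted oxygen speciation can be checked directly against the glass stoichiometry: since each bridging oxygen is shared between two PO4 tetrahedra, the oxygen-to-phosphorus ratio implied by the coordination numbers should match the composition, as the short worked check below shows (numbers taken from the abstract).

```latex
% Consistency check against the composition RAl_{0.30}P_{3.05}O_{9.62}:
\frac{n_\mathrm{O}}{n_\mathrm{P}}
  = n_{\mathrm{O_T}} + \tfrac{1}{2}\,n_{\mathrm{O_B}}
  = 2.32 + \tfrac{1}{2}(1.68) = 3.16
  \;\approx\; \frac{9.62}{3.05} = 3.15
```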
Abstract:
Neutron diffraction was used to measure the total structure factors for several rare-earth ion R3+ (La3+ or Ce3+) phosphate glasses with composition close to RAl0.35P3.24O10.12. By assuming isomorphic structures, difference function methods were employed to separate, essentially, those correlations involving R3+ from the remainder. A self-consistent model of the glass structure was thereby developed in which the Al correlations were taken into explicit account. The glass network was found to be made from interlinked PO4 tetrahedra having 2.2(1) terminal oxygen atoms, OT, at 1.51(1) Å, and 1.8(1) bridging oxygen atoms, OB, at 1.60(1) Å. Rare-earth cations bonded to an average of 7.5(2) OT nearest neighbors in a broad and asymmetric distribution. The Al3+ ion acted as a network modifier and formed OT-Al-OT linkages that helped strengthen the glass. The connectivity of the R-centered coordination polyhedra was quantified in terms of a parameter fs and used to develop a model for the dependence on composition of the Al-OT coordination number in R-Al-P-O glasses. By using recent 27Al nuclear-magnetic-resonance data, it was shown that this connectivity decreases monotonically with increasing Al content. The chemical durability of the glasses appeared to be at a maximum when the connectivity of the R-centered coordination polyhedra was at a minimum. The relation of fs to the glass transition temperature, Tg, was discussed.
Abstract:
The relative distribution of rare-earth ions R3+ (Dy3+ or Ho3+) in the phosphate glass RAl0.30P3.05O9.62 was measured by employing the method of isomorphic substitution in neutron diffraction. It is found that 7.9(7) R-R nearest neighbors reside at 5.62(6) Å in a network made from interlinked PO4 tetrahedra. Provided that the role of Al is explicitly considered, a self-consistent account of the local matrix atom correlations can be developed in which there are 1.68(9) bridging and 2.32(9) terminal oxygen atoms per phosphorus.