943 results for self-consistent calculation
Abstract:
In TJ-II stellarator plasmas, in the electron cyclotron heating regime, an increase in the ion temperature is observed during the transition to the core electron-root confinement (CERC) regime, synchronized with the rise in electron temperature. This rise in ion temperature should be attributed to the joint action of electron-ion energy transfer (which changes only slightly during CERC formation) and an enhancement of ion confinement, the latter related to the increase in the positive electric field in the core region. In this paper, we confirm this hypothesis by estimating the ion collisional transport in TJ-II under the physical conditions established before and after the transition to CERC. We calculate a large number of ion orbits in the guiding-centre approximation, including collisions with a background plasma composed of electrons and ions. The ion temperature profile and the thermal flux are calculated self-consistently, so that the change in the ion heat transport can be assessed.
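The abstract gives no implementation details; as an illustration of the collision modelling it describes, here is a minimal sketch of the standard Monte Carlo pitch-angle scattering (Lorentz) operator commonly used in guiding-centre orbit calculations. All parameter values are hypothetical, not TJ-II values.

```python
import numpy as np

rng = np.random.default_rng(0)

def pitch_angle_scatter(lam, nu, dt, rng):
    """One Monte Carlo pitch-angle scattering step (Lorentz operator).

    lam : pitch v_parallel / v; nu : deflection frequency; dt : time step.
    Standard Boozer-Kuo-Petravic form; valid for nu * dt << 1.
    """
    sigma = rng.choice((-1.0, 1.0), size=np.shape(lam))
    lam_new = lam * (1.0 - nu * dt) + sigma * np.sqrt((1.0 - lam**2) * nu * dt)
    return np.clip(lam_new, -1.0, 1.0)  # guard against slight overshoot past |lam| = 1

# Relax an initially beam-like pitch distribution toward isotropy.
lam = np.full(10000, 0.9)          # ensemble of test ions
nu, dt = 1.0e3, 1.0e-6             # illustrative values only
for _ in range(5000):
    lam = pitch_angle_scatter(lam, nu, dt, rng)
print(np.mean(lam), np.var(lam))   # mean -> 0, variance -> 1/3 for an isotropic distribution
```

In a full calculation of this kind, a step like this is applied between guiding-centre pushes, together with an energy drag/diffusion step, and the thermal flux is accumulated from the ensemble of orbits.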
Abstract:
The study of the long-term evolution of neutron star (NS) magnetic fields is key to understanding the rich diversity of NS observations, and to unifying their nature despite the different emission mechanisms and observed properties. Such studies in principle permit a deeper understanding of the most important parameters driving their apparent variety, e.g. radio pulsars, magnetars, X-ray dim isolated NSs, and gamma-ray pulsars. We describe, for the first time, results from self-consistent magnetothermal simulations that include not only Hall-driven field dissipation in the crust but also a complete set of proposed driving forces in a superconducting core. We emphasize how each of these core-field processes drives magnetic evolution and affects observables, and show that when all forces are considered together in vectorial form, the net expulsion of core magnetic flux is negligible and will have no observable effect in the crust (and consequently in the observed surface emission) on megayear time-scales. Our new simulations suggest that strong magnetic fields in NS cores (and their signatures on the NS surface) will persist long after the crustal magnetic field has evolved and decayed, owing to the weak combined effects of dissipation and expulsion in the stellar core.
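For context, crustal Hall-driven field evolution of the kind referred to above is conventionally modelled with the electron-MHD induction equation (standard form in Gaussian units, shown here as background rather than quoted from the paper):

$$\frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \left[ \eta\, \nabla \times \mathbf{B} + \frac{c}{4\pi e n_e} \left( \nabla \times \mathbf{B} \right) \times \mathbf{B} \right], \qquad \eta = \frac{c^2}{4\pi\sigma},$$

where $n_e$ is the electron density and $\sigma$ the electrical conductivity; the first term is Ohmic dissipation and the second the Hall drift. The core-field processes discussed in the paper add further flux-transport terms to this crustal evolution.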
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A recent all-object spectroscopic survey centred on the Fornax cluster of galaxies has discovered a population of subluminous and extremely compact members, called 'ultra-compact dwarf' (UCD) galaxies. In order to clarify the origin of these objects, we have used self-consistent numerical simulations to study the dynamical evolution a nucleated dwarf galaxy would undergo if it orbited the centre of the Fornax cluster and suffered its strong tidal gravitational field. We find that the outer stellar components of a nucleated dwarf are removed by the strong tidal field of the cluster, whereas the nucleus manages to survive as a result of its initially compact nature. The resulting naked nucleus is found to have physical properties (e.g. size and mass) similar to those observed for UCDs. We also find that although this formation process does not depend strongly on the initial total luminosity of the nucleated dwarf, it does depend on the radial density profile of the dark halo, in the sense that UCDs are less likely to be formed from dwarfs embedded in dark matter haloes with central 'cuspy' density profiles. Our simulations also suggest that very massive and compact stellar systems can be rapidly and efficiently formed in the central regions of dwarfs through the merging of smaller star clusters. We provide some theoretical predictions for the total number and radial number density profile of UCDs in a cluster, and their dependence on cluster mass.
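A rough way to see why the nucleus survives while the envelope is stripped is the standard Jacobi (tidal) radius estimate, not taken from the paper: for a satellite of mass $m$ at distance $R$ from a cluster of enclosed mass $M(<R)$,

$$r_t \simeq R \left( \frac{m}{3\,M(<R)} \right)^{1/3},$$

so material outside $r_t$ is removed on an orbital time-scale, while a dense nucleus whose radius lies well inside $r_t$ is retained.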
Abstract:
Understanding and explaining emergent constitutive laws in the multi-scale evolution from point defects, dislocations and two-dimensional defects to plate-tectonic scales is an arduous challenge in condensed matter physics. The Earth appears to be the only planet known to have developed stable plate tectonics as a means of removing its heat. The emergence of plate tectonics out of mantle convection appears to rely intrinsically on the capacity to form extremely weak faults in the top 100 km of the planet. These faults have a memory of at least several hundred million years, yet they appear to rely on the effects of water on line defects. This important phenomenon was first discovered in the laboratory and dubbed 'hydrolytic weakening'. At the large scale it explains cycles of co-located resurgence of plate generation and consumption (the Wilson cycle), but the exact physics underlying the process itself, and the enormous span of scales, remains unclear. We present an attempt to use the multi-scale non-equilibrium thermodynamic energy evolution inside the deforming lithosphere to replace phenomenological laws with laws derived from basic scaling quantities, to develop self-consistent weakening laws at lithospheric scale, and to give a fully coupled deformation-weakening constitutive framework. From the meso- to the plate scale we encounter, in a stepwise manner, three basic domains governed by the diffusion/reaction time scales of grain growth, thermal diffusion and, finally, water mobility through point defects in the crystalline lattice. The latter process governs the planetary scale and controls the stability of the planet's heat-transfer mode.
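The three domains referred to above are separated by time scales of the generic diffusive form (standard expressions, not quoted from the paper):

$$\tau \sim \frac{L^2}{D}, \qquad D = D_0\, e^{-E_a / RT},$$

where $L$ is the relevant length scale and $D$ the thermally activated diffusivity of, respectively, grain-boundary mobility, heat, or water-related point defects; whichever process has the shortest $\tau$ at a given scale governs the weakening there.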
Abstract:
A total pressure apparatus has been developed to measure vapour-liquid equilibrium data on binary mixtures at atmospheric and sub-atmospheric pressures. The method gives isothermal data which can be obtained rapidly. Only the total pressure is measured, as a direct function of the synthetic liquid-phase composition; the vapour-phase composition is deduced through the Gibbs-Duhem relationship. The need to analyse either phase is thus eliminated, removing the errors introduced by sampling and analysis. The essential requirements are that the pure components be degassed completely, since any deficiency in degassing would introduce errors into the measured pressures, and that the central apparatus be absolutely leak-tight, as any leakage of air into or out of the apparatus would give erroneous pressure readings. The apparatus was commissioned by measuring the saturated vapour pressures of degassed water and ethanol as a function of temperature. The measured pressure-temperature data for degassed water were compared directly with data from the literature, with good agreement. Pressure-temperature data were likewise measured for ethanol, methanol and cyclohexane and, where possible, compared directly with literature data. Good agreement between the pure-component data of this work and those available in the literature demonstrates, firstly, that a satisfactory degassing procedure has been achieved and, secondly, that the pressure-temperature measurements are consistent for any one component. Since this holds for a number of components, the temperature and pressure measurements are self-consistent and of sufficient accuracy, and the precision of the separate means of measuring pressure and temperature is compatible. The liquid mixtures studied were ethanol-water, methanol-water and ethanol-cyclohexane. The total pressure was measured as the composition inside the equilibrium cell was varied at a set temperature, giving P-T-x data sets for each mixture over a range of temperatures. A standard fitting package from the literature was used to reduce the raw data and yield the y-values that complete the x-y-P-T data sets. A consistency test could not be applied to the P-T-x data, since no y-values were obtained during the experimental measurements. In general, satisfactory agreement was found between the data of this work and those available in the literature. For some runs discrepancies were observed, and further work is recommended to eliminate the problems identified.
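The abstract does not name the fitting package used; as an illustration of the reduction step it describes, here is a minimal Barker-style sketch using a two-parameter Margules model for the liquid-phase activity coefficients under modified Raoult's law. All numerical data below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def margules_gamma(x1, A12, A21):
    """Two-parameter Margules activity coefficients for a binary mixture."""
    x2 = 1.0 - x1
    g1 = np.exp(x2**2 * (A12 + 2.0 * (A21 - A12) * x1))
    g2 = np.exp(x1**2 * (A21 + 2.0 * (A12 - A21) * x2))
    return g1, g2

def total_pressure(x1, A12, A21, p1sat, p2sat):
    """Modified Raoult's law: ideal vapour phase, non-ideal liquid phase."""
    g1, g2 = margules_gamma(x1, A12, A21)
    return x1 * g1 * p1sat + (1.0 - x1) * g2 * p2sat

# Hypothetical isothermal P-x data (kPa) and pure-component vapour pressures.
x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
P = np.array([12.1, 15.8, 17.6, 18.2, 17.4])
p1sat, p2sat = 16.9, 9.6

# Fit the Margules parameters to the measured total-pressure curve.
fit = least_squares(
    lambda a: total_pressure(x1, a[0], a[1], p1sat, p2sat) - P,
    x0=[0.5, 0.5],
)
A12, A21 = fit.x

# Deduce the unmeasured vapour compositions (y-values).
g1, _ = margules_gamma(x1, A12, A21)
Pcalc = total_pressure(x1, A12, A21, p1sat, p2sat)
y1 = x1 * g1 * p1sat / Pcalc
print(A12, A21, y1)
```

Fitting the excess Gibbs energy model to the P-x curve and back-calculating y is exactly the step that turns P-T-x measurements into complete x-y-P-T data sets without ever sampling the vapour phase.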
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment is in the main philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because, as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, however diverse, into one model; this is worsened by the scant provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which tends to reduce the user's degrees of freedom. Fourth, it increases the dependence of the user upon the supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed that is consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments: first, the user's characteristics and decision-making practices and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding support, control and operation. The framework is built gradually as the discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute the nucleus around which not only can packages be more effectively designed, but which is also applicable to the design of many man-machine systems.
Abstract:
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC)-binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative, classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method, a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, and an iterative self-consistent (ISC) PLS-based additive method. The first two can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets; the third is a recently developed extension of the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(s), I-E(d), and I-E(k). In this chapter we give a step-by-step guide to building such models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online at http://www.jenner.ac.uk/MHCPred.
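As an illustration of the additive idea (a sketch, not the authors' code): binding affinity, expressed as log(1/IC50), is modelled as a sum of position-wise amino-acid contributions, with the coefficients obtained by PLS regression. The sequences and affinity values below are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptides):
    """Additive-method design matrix: one indicator column per
    (position, amino acid) pair, so the predicted affinity is a sum
    of position-wise residue contributions."""
    L = len(peptides[0])
    X = np.zeros((len(peptides), L * len(AA)))
    for i, pep in enumerate(peptides):
        for pos, aa in enumerate(pep):
            X[i, pos * len(AA) + AA.index(aa)] = 1.0
    return X

# Hypothetical 9-mer training data: sequences and log(1/IC50) values.
peps = ["SIINFEKLA", "GILGFVFTL", "LLFGYPVYV", "YLEPGPVTA", "KTWGQYWQV"]
y = np.array([6.1, 7.9, 8.3, 5.2, 7.0])

model = PLSRegression(n_components=2).fit(encode(peps), y)
print(model.predict(encode(["SIINFEKLA"])))
```

The fitted PLS coefficients play the role of the per-position amino-acid contributions that such additive models report.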
Abstract:
The relative distribution of rare-earth ions R3+ (Dy3+ or Ho3+) in the phosphate glass RAl0.30P3.05O9.62 was measured by employing the method of isomorphic substitution in neutron diffraction and, by taking the role of Al into explicit account, a self-consistent model of the glass structure was developed. The glass network is found to be made from corner sharing PO4 tetrahedra in which there are, on average, 2.32(9) terminal oxygen atoms, OT, at 1.50(1) Å and 1.68(9) bridging oxygen atoms, OB, at 1.60(1) Å. The network modifying R3+ ions bind to an average of 6.7(1) OT and are distributed such that 7.9(7) R–R nearest neighbours reside at 5.62(6) Å. The Al3+ ion also has a network modifying role in which it helps to strengthen the glass through the formation of OT–Al–OT linkages. The connectivity of the R-centred coordination polyhedra in (M2O3)x(P2O5)1−x glasses, where M3+ denotes a network modifying cation (R3+ or Al3+), is quantified in terms of a parameter fs. Methods for reducing the clustering of rare-earth ions in these materials are then discussed, based on a reduction of fs via the replacement of R3+ by Al3+ at fixed total modifier content or via a change of x to increase the number of OT available per network modifying M3+ cation.
Abstract:
Neutron diffraction was used to measure the total structure factors for several rare-earth ion R3+ (La3+ or Ce3+) phosphate glasses with composition close to RAl0.35P3.24O10.12. By assuming isomorphic structures, difference function methods were employed to separate, essentially, those correlations involving R3+ from the remainder. A self-consistent model of the glass structure was thereby developed in which the Al correlations were taken into explicit account. The glass network was found to be made from interlinked PO4 tetrahedra having 2.2(1) terminal oxygen atoms, OT, at 1.51(1) Å, and 1.8(1) bridging oxygen atoms, OB, at 1.60(1) Å. Rare-earth cations bonded to an average of 7.5(2) OT nearest neighbours in a broad and asymmetric distribution. The Al3+ ion acted as a network modifier and formed OT-Al-OT linkages that helped strengthen the glass. The connectivity of the R-centred coordination polyhedra was quantified in terms of a parameter fs and used to develop a model for the composition dependence of the Al-OT coordination number in R-Al-P-O glasses. By using recent 27Al nuclear-magnetic-resonance data, it was shown that this connectivity decreases monotonically with increasing Al content. The chemical durability of the glasses appeared to be at a maximum when the connectivity of the R-centred coordination polyhedra was at a minimum. The relation of fs to the glass transition temperature, Tg, was discussed.
Abstract:
The relative distribution of rare-earth ions R3+ (Dy3+ or Ho3+) in the phosphate glass RAl0.30P3.05O9.62 was measured by employing the method of isomorphic substitution in neutron diffraction. It is found that 7.9(7) R-R nearest neighbours reside at 5.62(6) Å in a network made from interlinked PO4 tetrahedra. Provided that the role of Al is explicitly considered, a self-consistent account of the local matrix atom correlations can be developed in which there are 1.68(9) bridging and 2.32(9) terminal oxygen atoms per phosphorus.
Abstract:
This thesis presents an investigation of a two-dimensional water model and the development of a multiscale method for modelling large systems, such as a virus in water or a peptide immersed in solvent. We have implemented a two-dimensional 'Mercedes Benz' (MB), or BN2D, water model using molecular dynamics, and have studied the dependence of its dynamical and structural properties on the model's parameters. For the first time, we derive formulas for the thermodynamic properties of the MB model in the microcanonical (NVE) ensemble, and we also derive equations of motion in the isothermal-isobaric (NPT) ensemble. The rotational degree of freedom of the model is analysed in both ensembles. We have developed and implemented a self-consistent multiscale method which is able to communicate between micro- and macroscales. The method assumes that matter consists of two phases, one related to the microscale and the other to the macroscale. We simulate the macroscale using Landau-Lifshitz fluctuating hydrodynamics, while the microscale is described using molecular dynamics. We demonstrate that communication between the disparate scales is possible without introducing a fictitious interface or approximations which reduce the accuracy of the information exchange between the scales. We investigate the control parameters introduced to control the contribution of each phase to the behaviour of the matter. We show that the microscale inherits dynamical properties of the macroscale, and vice versa, depending on the concentration of each phase; that the radial distribution function is not altered; and that the velocity autocorrelation functions are gradually transformed from the molecular dynamics description to the fluctuating hydrodynamics one as the phase balance is changed. We test our multiscale method on liquid argon, BN2D, and SPC/E water models. For the SPC/E water model we investigate microscale fluctuations, computed using the advanced technique for mapping small scales to large scales developed by Voulgarakis et al.
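A minimal sketch of how the velocity autocorrelation functions mentioned above are computed from a trajectory, averaged over particles and time origins. The demo trajectory is synthetic, not taken from the thesis.

```python
import numpy as np

def vacf(v, max_lag=None):
    """Normalized velocity autocorrelation C(t) = <v(0).v(t)> / <v(0).v(0)>
    from a trajectory v of shape (n_steps, n_particles, n_dim),
    averaged over particles and over all available time origins."""
    n = v.shape[0]
    m = n if max_lag is None else min(max_lag, n)
    c = np.zeros(m)
    for lag in range(m):
        c[lag] = np.mean(np.sum(v[: n - lag] * v[lag:], axis=-1))
    return c / c[0]

# Demo on synthetic Ornstein-Uhlenbeck-like velocities (exponential decay).
rng = np.random.default_rng(1)
v = np.zeros((2000, 100, 2))
v[0] = rng.normal(size=(100, 2))
for t in range(1, 2000):
    v[t] = 0.99 * v[t - 1] + 0.1 * rng.normal(size=(100, 2))
print(vacf(v, max_lag=5))  # decays roughly like 0.99**lag
```

Applying the same estimator to MD, hybrid, and fluctuating-hydrodynamics trajectories is what makes the gradual transformation of the VACF between descriptions directly comparable.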
Abstract:
We suggest a variant of the nonlinear σ model for the description of disordered superconductors. The main distinction from existing models lies in the fact that the saddle-point equation is solved nonperturbatively in the superconducting pairing field. This allows one to use the model both in the vicinity of the metal-superconductor transition and well below the critical temperature, with full account taken of the self-consistency conditions. We show that the model reproduces a set of known results in different limiting cases, and apply it to a self-consistent description of the proximity effect at the superconductor-metal interface.
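For orientation, in quasiclassical (Usadel-type) descriptions the self-consistency condition on the pairing field takes the standard regularized Matsubara form (shown as background, not quoted from the paper):

$$\Delta(\mathbf{r})\,\ln\frac{T_c}{T} = 2\pi T \sum_{\omega_n > 0} \left[ \frac{\Delta(\mathbf{r})}{\omega_n} - f(\mathbf{r}, \omega_n) \right], \qquad \omega_n = \pi T (2n + 1),$$

where $f$ is the anomalous quasiclassical Green function. In the homogeneous bulk limit $f = \Delta/\sqrt{\omega_n^2 + \Delta^2}$ and the usual BCS gap equation is recovered; near an interface, $f(\mathbf{r}, \omega_n)$ and $\Delta(\mathbf{r})$ must be iterated to mutual consistency, which is the kind of condition the proposed σ-model variant is built to respect.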
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity; once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistics q2, SEP, and NC ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistics r2 and SEE ranged between 0.98 and 0.995 and between 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
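A schematic sketch of the iterative self-consistent loop as described above: each variable-length class II peptide contributes one 9-mer core, the model is refit on the currently selected cores, and the selection is repeated until it converges. The frame-selection rule (choosing each peptide's highest-scoring 9-mer) and all data below are illustrative assumptions, not the published implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AA = "ACDEFGHIKLMNPQRSTVWY"
CORE = 9  # assumed class II binding-core length

def encode(cores):
    """One indicator column per (position, amino acid) pair of a 9-mer core."""
    X = np.zeros((len(cores), CORE * len(AA)))
    for i, c in enumerate(cores):
        for p, aa in enumerate(c):
            X[i, p * len(AA) + AA.index(aa)] = 1.0
    return X

def isc_fit(peptides, y, n_components=2, max_iter=20, seed=0):
    """Iterative self-consistent PLS: alternate between picking each
    peptide's best-scoring 9-mer subsequence and refitting the model."""
    rng = np.random.default_rng(seed)
    sel = [int(rng.integers(0, len(p) - CORE + 1)) for p in peptides]
    for _ in range(max_iter):
        cores = [p[s:s + CORE] for p, s in zip(peptides, sel)]
        model = PLSRegression(n_components=n_components).fit(encode(cores), y)
        new_sel = []
        for p in peptides:
            frames = [p[s:s + CORE] for s in range(len(p) - CORE + 1)]
            pred = model.predict(encode(frames)).ravel()
            new_sel.append(int(np.argmax(pred)))  # highest predicted affinity
        if new_sel == sel:  # subsequence selection has converged
            break
        sel = new_sel
    return model, sel

# Hypothetical mixed-length peptides and log(1/IC50) values.
peps = ["GILGFVFTLTV", "SIINFEKLAGY", "LLFGYPVYVKT", "YLEPGPVTAQV", "KTWGQYWQVLS"]
y = np.array([7.5, 6.0, 8.1, 5.4, 6.8])
model, sel = isc_fit(peps, y)
print(sel)
```

The convergence check on the selected subsequences mirrors the abstract's criterion that the final model is accepted "once the set of selected peptide subsequences had converged".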
Abstract:
A two-phase, three-dimensional computational model of an intermediate-temperature (120-190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate-temperature membranes, in this case phosphoric acid-doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low-temperature operation, especially to cells employing Nafion® membranes, while research on PBI as an intermediate-temperature membrane has been solely experimental; this work therefore advances the state of the art in both fields. With a growing trend toward higher-temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused. The model accounts for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results are presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat-management strategies. The model predictions matched published experimental data well and were self-consistent. The major finding of this research is that, owing to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely the low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, catalyst utilization is very low (~1-2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the thickness of the catalyst layer and thereby improve catalyst utilization. The model also predicts an increase in power output on the order of 50% if alternative doping agents to phosphoric acid can be found which afford better transport properties for dissolved gases, reduce anion adsorption onto catalyst sites, and maintain stability and conductivity at elevated temperatures.
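For context, electrode polarization in fuel-cell models of this kind is conventionally closed with Butler-Volmer kinetics (standard form, shown as background rather than quoted from this model):

$$i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{RT} \right) - \exp\!\left( -\frac{\alpha_c F \eta}{RT} \right) \right],$$

where $\eta$ is the activation overpotential and the exchange current density $i_0$ scales with the electrochemically active catalyst area; this is why a catalyst utilization of only ~1-2% translates directly into large activation losses, and why improving utilization or the dopant's transport properties feeds straight into the predicted power output.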