7 results for Alpha-cluster model
in CaltechTHESIS
Abstract:
A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density $\rho(m) \sim c\,m^{-3}\,e^{\beta_0 m}$ at high masses.
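To make the statement concrete (a standard consequence of this spectral form, spelled out here rather than taken from the abstract): the partition function built on such a spectrum,

$$Z(\beta) \propto \int^{\infty} dm\, \rho(m)\, e^{-\beta m} \sim \int^{\infty} dm\, c\, m^{-3}\, e^{-(\beta - \beta_0)m},$$

is exponentially damped only for $\beta > \beta_0$, so $T_0 = 1/\beta_0$ plays the role of a limiting temperature for hadronic matter.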
In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.
A statistical model for hadron reactions is then put forward, analogous to the Bohr compound nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of $N\bar{N}$ annihilation at rest with overall success: it explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless, it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of $\beta_0$ required in all these applications is consistently around $(120\ \mathrm{MeV})^{-1}$, corresponding to a "resonance volume" whose radius is very close to ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.
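For reference, the exponential suppression just mentioned takes the schematic form (standard Hagedorn reasoning, with the numerical value from the abstract; prefactors are omitted):

$$N(m) \propto e^{-\beta_0 m}, \qquad \beta_0^{-1} \approx 120\ \mathrm{MeV},$$

so, up to slowly varying prefactors, every additional 120 MeV of rest mass costs roughly a factor of $e$ in production frequency.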
Abstract:
Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.
We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, examining the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several other factors.
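To make the staggered structure concrete, here is a minimal runnable sketch in Python/NumPy, reduced to a single martensite variant, no plasticity, and a uniform modulus; the basic Moulinec-Suquet fixed point is substituted for the Augmented Lagrangian FFT solver, and every name and parameter value below is an illustrative assumption rather than a number from the thesis:

```python
# Minimal sketch of a staggered implicit/explicit update for periodic
# anti-plane shear: an FFT (basic-scheme) equilibrium solve, followed
# by an explicit stick-slip update of the martensite volume fraction.
import numpy as np

n = 64                                  # grid points per side
mu = 1.0                                # shear modulus (uniform, for simplicity)
gamma_t = 0.05                          # transformation shear of the variant
f_c, eta, dt = 1e-3, 0.01, 1.0          # stick-slip threshold, viscosity, step
gbar = np.array([0.04, 0.0])            # applied mean anti-plane shear strain

lam = np.zeros((n, n))                  # martensite volume fraction field
xi = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
xi2 = xi[0] ** 2 + xi[1] ** 2
xi2[0, 0] = 1.0                         # avoid dividing by zero at the mean mode

def fft_equilibrium(lam, iters=200, tol=1e-10):
    """Implicit step: solve for the shear strain field gamma_i given the
    transformation eigenstrain gamma_t*lam (basic-scheme fixed point)."""
    gamma = np.broadcast_to(gbar[:, None, None], (2, n, n)).copy()
    for _ in range(iters):
        tau = mu * (gamma - np.array([gamma_t * lam, np.zeros_like(lam)]))
        tau_h = np.fft.fft2(tau, axes=(1, 2))
        div = xi[0] * tau_h[0] + xi[1] * tau_h[1]
        corr = np.array([xi[0] * div, xi[1] * div]) / (mu * xi2)
        corr[:, 0, 0] = 0.0             # keep the prescribed mean strain
        step = np.real(np.fft.ifft2(corr, axes=(1, 2)))
        gamma = gamma - step
        if np.max(np.abs(step)) < tol:
            break
    return gamma

for _ in range(100):                    # staggered update loop
    gamma = fft_equilibrium(lam)        # implicit: mechanical equilibrium
    tau1 = mu * (gamma[0] - gamma_t * lam)
    f = tau1 * gamma_t                  # driving force on the volume fraction
    lam += dt / eta * np.maximum(np.abs(f) - f_c, 0.0) * np.sign(f)
    lam = np.clip(lam, 0.0, 1.0)        # explicit: stick-slip kinetics
print("mean martensite fraction:", lam.mean())   # ~0.4 for these parameters
```

The staggered character is the point: equilibrium is re-solved implicitly before each explicit, threshold-gated update of the internal variable.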
We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete-time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the $\alpha$--$\gamma$ and $\alpha$--$\varepsilon$ transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large-scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the $\alpha$--$\varepsilon$ model.
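Schematically, the variational principle in question is of the standard incremental type (the notation below is assumed for illustration, not taken from the thesis): the discrete-time update is characterized as

$$(u^{n+1}, \lambda^{n+1}) \in \arg\min_{u,\,\lambda} \int_{\Omega} \Big[ W(\nabla u, \lambda) + \Delta t\, \psi^*\!\Big(\tfrac{\lambda - \lambda^n}{\Delta t}\Big) \Big]\, dV,$$

where $W$ is the convexified internal energy density and $\psi^*$ is a dissipation potential encoding the stick-slip kinetics of transformation and slip.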
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate $m/\Lambda_{\overline{\mathrm{MS}}} = 3.52(6)$ at this β and the recent exact analytical result, $m/\Lambda_{\overline{\mathrm{MS}}} = 2.943$. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z′ = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
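For readers unfamiliar with the move, a minimal sketch of one overrelaxation sweep for this model follows (a generic checkerboard formulation; the lattice size and seed are illustrative, and the Metropolis interleaving described above is omitted):

```python
# Minimal sketch of an overrelaxation sweep for the 2d O(3) model with
# the standard action: each spin is reflected about its local field,
# s -> 2(s.h)h/|h|^2 - s, which preserves the action exactly and so
# must be interleaved with an ergodic update such as Metropolis.
import numpy as np

L = 32
rng = np.random.default_rng(0)
s = rng.normal(size=(L, L, 3))
s /= np.linalg.norm(s, axis=-1, keepdims=True)      # unit-length spins
parity = np.indices((L, L)).sum(axis=0) % 2         # checkerboard mask

def local_field(s):
    """Sum of the four nearest-neighbour spins, periodic boundaries."""
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

def overrelax_sweep(s):
    """One checkerboard sweep: reflecting a whole sublattice at once is
    valid because its neighbours all sit on the opposite sublattice."""
    for p in (0, 1):
        h = local_field(s)
        proj = np.sum(s * h, axis=-1, keepdims=True)
        h2 = np.sum(h * h, axis=-1, keepdims=True)
        s = np.where((parity == p)[..., None], 2 * proj * h / h2 - s, s)
    return s

energy = lambda s: -np.sum(s * local_field(s)) / 2  # bond sum, halved
e0 = energy(s)
s = overrelax_sweep(s)
print(e0, energy(s))    # equal up to rounding: the move is microcanonical
```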
We also use cluster Monte Carlo algorithms: non-local update schemes that can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, which identifies clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two-dimensional Ising spin model.
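Connected component labeling itself is easy to state serially; the sketch below uses a generic union-find pass for a Swendsen-Wang-type cluster update of the 2d Ising model, purely for contrast with the SIMD algorithms developed in the thesis (which are organized quite differently):

```python
# Generic serial union-find labeling of clusters of aligned, bonded
# Ising spins. Bonds between equal neighbouring spins are activated
# with probability p = 1 - exp(-2*beta); components then flip together.
import numpy as np

L, beta = 16, 0.44
rng = np.random.default_rng(1)
spin = rng.choice([-1, 1], size=(L, L))
parent = np.arange(L * L)                       # union-find forest

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]           # path halving
        i = parent[i]
    return i

def union(i, j):
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj

p = 1.0 - np.exp(-2.0 * beta)
for x in range(L):
    for y in range(L):
        i = x * L + y
        for dx, dy in ((1, 0), (0, 1)):         # right and down bonds
            xn, yn = (x + dx) % L, (y + dy) % L
            if spin[x, y] == spin[xn, yn] and rng.random() < p:
                union(i, xn * L + yn)

labels = np.array([find(i) for i in range(L * L)]).reshape(L, L)
flip = {r: rng.choice([-1, 1]) for r in np.unique(labels)}
spin = spin * np.vectorize(flip.get)(labels)    # flip each cluster as one
print("clusters:", len(np.unique(labels)))
```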
Finally, we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it is much closer to the Standard Action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for $m/\Lambda_{\overline{\mathrm{MS}}}$ at β = 2.14, 2.26, 2.38 and 2.50. To three loops, $m/\Lambda_{\overline{\mathrm{MS}}} = 3.047(35)$ at β = 2.50, which is very close to the exact value $m/\Lambda_{\overline{\mathrm{MS}}} = 2.943$. Our last point, at β = 2.62, however, disagrees with this estimate.
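As a reminder, the discrete β-function measured this way is the standard two-lattice object (a scale factor of 2 per blocking level is assumed here):

$$\Delta\beta(\beta) = \beta - \beta', \qquad \text{where } \xi(\beta') = \tfrac{1}{2}\,\xi(\beta),$$

i.e., the shift in coupling that halves the correlation length, which is then compared against the perturbative (loop-expansion) prediction.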
Abstract:
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads such as air conditioners, dishwashers, and electric vehicles will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time, and can thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.
In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods of high renewable generation. The algorithm is model-predictive in nature: at every time step, it minimizes the expected variance-to-go under updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
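As a toy illustration of the receding-horizon structure (a sketch only: one aggregate deferrable load, a water-filling solution of each variance-minimization step, and synthetic data, none of it from the thesis):

```python
# Toy model-predictive deferrable load control: at each step, re-solve
# for the consumption profile that flattens predicted net load (base
# load minus renewables) while delivering the remaining energy E.
# With a total-energy constraint and u >= 0, the variance-minimizing
# profile is water-filling: u_t = max(0, theta - d_t).
import numpy as np

T = 24
rng = np.random.default_rng(2)
base = 10 + 2 * np.sin(np.linspace(0, 2 * np.pi, T))   # synthetic base load
E = 20.0                                               # deferrable energy left
u_hist = []

def waterfill(d, E, iters=60):
    """Bisect for the water level theta with sum(max(0, theta-d)) = E."""
    lo, hi = d.min(), d.max() + E
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.maximum(theta - d, 0).sum() > E:
            hi = theta
        else:
            lo = theta
    return np.maximum(theta - d, 0)

for t in range(T):
    renew = 3 + rng.normal(0, 0.5, T - t)   # updated renewable prediction
    d = base[t:] - renew                    # predicted net load, t..T-1
    u = waterfill(d, E)                     # minimize variance-to-go
    u_hist.append(u[0])                     # commit only the current step
    E -= u[0]                               # energy still to be delivered

agg = base - (3 + rng.normal(0, 0.5, T)) + np.array(u_hist)
print("aggregate load variance:", agg.var())
```

Committing only the first step of each solution and re-solving with fresh predictions is exactly the receding-horizon idea the abstract describes.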
Abstract:
The structural specificity of α-chymotrypsin for polypeptides and denatured proteins has been examined. The primary specificity of the enzyme for these natural substrates is shown to closely correspond to that observed for model substrates. A pattern of secondary specificity is proposed.
A series of N-acetylated peptide esters of varying length have been evaluated as substrates of α-chymotrypsin. The results are interpreted in terms of proposed specificity theories.
The α-chymotrypsin-catalyzed hydrolyses of a number of N-acetylated dipeptide methyl esters were studied. The results are interpreted in terms of the available specificity theories and are compared with results obtained in the study of polypeptide substrates. The importance of non-productive binding in determining the kinetic parameters of these substrates is discussed. A partial model is proposed for the locus of the active site which interacts with the R′₁CONH– group of a substrate of the form R′₁CONHCHR₂COR′₃.
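For context, the classical effect of non-productive binding on observed Michaelis-Menten parameters is worth recalling (a textbook result; $K_S$ and $K_S'$ denote the productive and non-productive dissociation constants, notation assumed here):

$$k_{\mathrm{cat}}^{\mathrm{obs}} = \frac{k_{\mathrm{cat}}}{1 + K_S/K_S'}, \qquad K_m^{\mathrm{obs}} = \frac{K_S}{1 + K_S/K_S'}, \qquad \frac{k_{\mathrm{cat}}^{\mathrm{obs}}}{K_m^{\mathrm{obs}}} = \frac{k_{\mathrm{cat}}}{K_S},$$

so non-productive binding depresses $k_{\mathrm{cat}}$ and $K_m$ in the same proportion while leaving their ratio unchanged.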
Finally, some reactive esters of N-acetylated amino acids have been evaluated as substrates of α-chymotrypsin. Their reactivity and stereochemical behavior are discussed in terms of the available specificity theories. The importance of a binding interaction between the carboxyl function of the substrate and the enzyme is suggested by the results obtained.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and its lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the use of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a program more capable of conducting highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
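For reference, damping from a free vibration analysis is conventionally extracted from the decay of successive displacement peaks (the standard logarithmic-decrement relation, stated here as background; the abstract does not specify the exact procedure used):

$$\delta = \frac{1}{n} \ln\frac{x_i}{x_{i+n}}, \qquad \zeta = \frac{\delta}{\sqrt{4\pi^2 + \delta^2}},$$

where $x_i$ and $x_{i+n}$ are peak amplitudes $n$ cycles apart and $\zeta$ is the equivalent viscous damping ratio.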
Following this, a final study was conducted on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared, and the differences caused by variations in software implementation were explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. Such problems could, however, be alleviated by choosing a simpler material model.
Abstract:
Cross sections for the reaction ¹²C(α,γ)¹⁶O have been measured for a range of center-of-mass alpha-particle energies extending from 1.72 MeV to 2.94 MeV. Two 8″ × 5″ NaI(Tl) crystals were used to detect gamma rays; a time-of-flight technique was employed to suppress cosmic-ray background and background due to neutrons arising mainly from the ¹³C(α,n)¹⁶O reaction. Angular distributions were measured at center-of-mass alpha energies of 2.18, 2.42, 2.56 and 2.83 MeV. Upper limits were placed on the amount of radiation cascading through the 6.92- or 7.12-MeV states in ¹⁶O. By means of theoretical fits to the measured electric dipole component of the total cross section, in which interference between the 1⁻ states in ¹⁶O at 7.12 MeV and at 9.60 MeV is taken into account, it is possible to extract the dimensionless reduced alpha width of the 7.12-MeV state in ¹⁶O. A three-level R-matrix parameterization of the data yields the width $\Theta^2_{\alpha,F} = 0.14^{+0.10}_{-0.08}$. A "hybrid" R-matrix/optical-model parameterization yields $\Theta^2_{\alpha,F} = 0.11^{+0.11}_{-0.07}$. This quantity is of crucial importance in determining the abundances of ¹²C and ¹⁶O at the end of helium burning in stars.
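For reference, the dimensionless reduced width quoted above is conventionally the ratio of the level's reduced width to the Wigner single-particle limit (standard R-matrix usage; the channel radius $a$ and reduced mass $\mu$ are assumed notation here, and conventions differ by factors of order unity):

$$\Theta^2_{\alpha} = \frac{\gamma^2_{\alpha}}{\gamma^2_W}, \qquad \gamma^2_W = \frac{3\hbar^2}{2\mu a^2},$$

so a value near 0.1 means the 7.12-MeV state carries roughly a tenth of the full single-particle alpha width at the chosen channel radius.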