967 results for Models : mixing length
Abstract:
Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. To this end, a complete renormalization scheme for models that allow for fermion mixing is needed. The correct treatment of unstable particles makes this task difficult, and as yet no satisfactory general solution can be found in the literature. In the present work, we study the renormalization of the fermion Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second applies it to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave function renormalization constants are determined such that the subtracted propagator is diagonal on shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from those that should renormalize the outgoing fermion and the incoming antifermion, and are not related by hermiticity, as would be desired. Instead of defining field renormalization constants identical to the wave function renormalization ones, we let the two differ by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions, the constraints on the model are less restrictive.
Here one might have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for fermion mixing matrices cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counterterm. Therefore, we propose to determine the mixing matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. For each of the chosen models we provide sample calculations that can be easily extended to other theories.
Abstract:
Tracking activities during daily life and assessing movement parameters is essential for complementing the information gathered in confined environments, such as clinical and physical activity laboratories, for the assessment of mobility. Inertial measurement units (IMUs) are used to monitor human movement for prolonged periods of time and without space limitations. The focus of this study was to provide a robust, low-cost and unobtrusive solution for evaluating human motion using a single IMU. The first part of the study focused on monitoring and classifying daily life activities. A simple method that analyses variations in the signal was developed to distinguish two types of activity intervals: active and inactive. A neural classifier was used to classify active intervals; the angle with respect to gravity was used to classify inactive intervals. The second part of the study focused on the extraction of gait parameters using a single IMU attached to the pelvis. Two complementary methods were proposed for estimating gait parameters: the first, a wavelet-based method, estimates gait events; the second estimates step and stride length during level walking using the output of the first. A special integration algorithm was extended to operate on each gait cycle using a specially designed Kalman filter. The developed methods were also applied in various scenarios. The activity monitoring method was used in a PRIN'07 project to assess the mobility levels of individuals living in an urban area, and the same method was applied to volleyball players to analyze their fitness levels by monitoring their daily life activities. The methods proposed in these studies provide a simple, unobtrusive and low-cost solution for monitoring and assessing activities outside of controlled environments.
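The two-stage interval classification described above can be sketched as follows. This is a minimal illustration: a variance threshold stands in for the activity detector, a simple tilt rule replaces the role of the neural classifier for labelling, and the window length and thresholds are illustrative assumptions, not values from the thesis.

```python
import math

def classify_windows(acc, fs=100, win_s=1.0, var_thresh=0.05):
    """Split a tri-axial accelerometer trace (units of g) into windows,
    label each window active/inactive from the variance of the signal
    magnitude, and use the angle to gravity to infer the posture of
    inactive windows. Thresholds and window length are illustrative."""
    n = int(fs * win_s)
    labels = []
    for i in range(0, len(acc) - n + 1, n):
        win = acc[i:i + n]
        # variance of the acceleration magnitude over the window
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in win]
        mean = sum(mags) / n
        var = sum((m - mean) ** 2 for m in mags) / n
        if var > var_thresh:
            labels.append("active")  # would be passed on to a classifier
        else:
            # inclination of the mean acceleration vector w.r.t. gravity (z-axis)
            mx = sum(x for x, _, _ in win) / n
            my = sum(y for _, y, _ in win) / n
            mz = sum(z for _, _, z in win) / n
            norm = math.sqrt(mx * mx + my * my + mz * mz)
            angle = math.degrees(math.acos(mz / norm))
            labels.append("upright" if angle < 45 else "lying")
    return labels
```

A still sensor aligned with gravity yields an "upright" window, a still sensor lying on its side yields "lying", and a window with large magnitude fluctuations is flagged "active".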
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as were found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system agree on a qualitative level with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, a comparison between the experimental partial structure factors and those from computer simulations demonstrates that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D. For such mixtures of point particles, MCT always predicts a plasticization effect, i.e. a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. The predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
Abstract:
The aim of the thesis is to propose a Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures, considering that the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
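For reference, a graded response model assigns category probabilities as differences of adjacent cumulative response curves. The sketch below uses a standard unidimensional logistic parameterization, which may differ in detail from the multiunidimensional and additive models estimated in the thesis:

```python
import math

def grm_probs(theta, a, b):
    """Category probabilities for one graded response model item.
    theta: latent trait value; a: discrimination; b: ordered category
    thresholds (len(b) + 1 response categories).
    P(X = k) = P*(k) - P*(k + 1), where P*(k) = P(X >= k) is a
    logistic curve in theta (standard parameterization)."""
    def cum(k):
        # cumulative probability P(X >= k); boundary cases are 1 and 0
        if k == 0:
            return 1.0
        if k > len(b):
            return 0.0
        return 1.0 / (1.0 + math.exp(-a * (theta - b[k - 1])))
    return [cum(k) - cum(k + 1) for k in range(len(b) + 1)]
```

With ordered thresholds the cumulative curves are nested, so the category probabilities are positive and sum to one; an MCMC sampler would evaluate such probabilities inside the likelihood at every iteration.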
Abstract:
Small-scale dynamic stochastic general equilibrium (DSGE) models have been treated as the benchmark of much of the monetary policy literature, given their ability to explain the impact of monetary policy on output, inflation and financial markets. The empirical failure of New Keynesian models is partly due to the Rational Expectations (RE) paradigm, which entails a tight structure on the dynamics of the system: under this hypothesis, agents are assumed to know the data generating process. In this paper, we propose the econometric analysis of New Keynesian DSGE models under an alternative expectations-generating paradigm, which can be regarded as an intermediate position between rational expectations and learning, namely an adapted version of the "Quasi-Rational" Expectations (QRE) hypothesis. Given the agents' statistical model, we build a pseudo-structural form from the baseline system of Euler equations, imposing that the length of the reduced form is the same as in the `best' statistical model.
Abstract:
In recent years, several previously unknown phenomena have been observed experimentally, such as the existence of distinct pre-nucleation structures. These observations have contributed to a new understanding of the processes occurring at the molecular level during the nucleation and growth of crystals. The effects of such pre-nucleation structures on the process of biomineralization are not yet sufficiently understood. The mechanisms by which biomolecular modifiers, such as peptides, interact with pre-nucleation structures and may thereby influence the nucleation process of minerals are manifold. Molecular simulations are well suited to analyzing the formation of pre-nucleation structures in the presence of modifiers. This thesis describes an approach to analyzing the interaction of peptides with the dissolved constituents of the emerging crystals by means of molecular dynamics simulations.
To enable informative simulations, the quality of existing force fields was first examined with respect to their description of oligoglutamates interacting with calcium ions in aqueous solution. Large discrepancies between established force fields became apparent, and none of the force fields examined provided a realistic description of the ion pairing of these complex ions. A strategy was therefore developed to optimize existing biomolecular force fields in this respect. Relatively small changes to the parameters governing the ion-peptide van der Waals interactions sufficed to obtain a reliable model for the system under study.
Comprehensive sampling of the phase space of these systems poses a particular challenge owing to the numerous degrees of freedom and the strong interactions between calcium ions and glutamate in solution. The method of biasing-potential replica exchange molecular dynamics simulations was therefore tuned for the sampling of oligoglutamates, and peptides of various chain lengths were simulated in the presence of calcium ions. Using sketch-map analysis, numerous stable ion-peptide complexes were identified in the simulations which could influence the formation of pre-nucleation structures. Depending on the chain length of the peptide, these complexes exhibit characteristic distances between the calcium ions. These resemble certain calcium-calcium distances in those phases of calcium oxalate crystals grown in the presence of oligoglutamates. The analogy between the calcium-calcium distances in dissolved ion-peptide complexes and in calcium oxalate crystals may point to the importance of ion-peptide complexes in the nucleation and growth of biominerals, and offers a possible explanation for the experimentally observed ability of oligoglutamates to influence the phase of the forming crystal.
Abstract:
This work is focused on axions and axion-like particles (ALPs) and their possible relation with the 3.55 keV photon line detected, in recent years, from galaxy clusters and other astrophysical objects. We focus on axions that come from string compactification and we study the vacuum structure of the resulting low-energy 4D N=1 supergravity effective field theory. We then provide a model which might explain the 3.55 keV line through the following processes. A 7.1 keV dark matter axion decays into two light axions, which, in turn, are transformed into photons thanks to the Primakoff effect and the existence of a kinetic mixing between two U(1) gauge symmetries belonging to the hidden and the visible sector, respectively. We present two models: the first gives an outcome inconsistent with experimental data, while the second can yield the desired result.
Abstract:
Adaptive radiation is usually thought to be associated with speciation, but the evolution of intraspecific polymorphisms without speciation is also possible. The radiation of cichlid fish in Lake Victoria (LV) is perhaps the most impressive example of a recent rapid adaptive radiation, with 600+ very young species. Key questions about its origin remain poorly characterized, such as the importance of speciation versus polymorphism, whether species persist on evolutionary time scales, and whether speciation happens more commonly in small isolated or in large connected populations. We used 320 individuals from 105 putative species from Lakes Victoria, Edward, Kivu, Albert, Nabugabo and Saka in a radiation-wide amplified fragment length polymorphism (AFLP) genome scan to address some of these questions. We demonstrate pervasive signatures of speciation, supporting the classical model of adaptive radiation associated with speciation. A positive relationship between the age of the lakes and the average genomic differentiation of their species, together with a significant fraction of molecular variance explained by above-species-level taxonomy, suggests the persistence of species on evolutionary time scales, with radiation proceeding through sequential speciation rather than a single starburst. Finally, the large gene diversity retained from colonization through to individual species in every radiation suggests large effective population sizes and makes speciation in small geographical isolates unlikely.
Abstract:
With the advent of cheaper and faster DNA sequencing technologies, assembly methods have changed greatly. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing read lengths between 35 and 400 base pairs. Reconstructing an organism's genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short, fixed-length DNA sequences of length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was used to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16 to 20x coverage. We were able to successfully re-assemble entire genomes up to 100,000 nucleotides in length.
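The indexing idea can be sketched as follows. This is a minimal illustration of n-gram-based overlap detection, not the thesis's actual probabilistic metric or implementation; the values of `n` and `min_overlap` are arbitrary choices for the sketch:

```python
from collections import defaultdict

def build_ngram_index(reads, n=8):
    """Map every length-n substring (n-gram) of each read to the
    (read id, offset) pairs at which it occurs."""
    index = defaultdict(list)
    for rid, read in enumerate(reads):
        for i in range(len(read) - n + 1):
            index[read[i:i + n]].append((rid, i))
    return index

def candidate_joins(reads, n=8, min_overlap=12):
    """Use the index to propose read pairs whose suffix/prefix regions
    share an n-gram, instead of comparing all read pairs exhaustively.
    Returns (left read id, right read id, overlap length) triples."""
    index = build_ngram_index(reads, n)
    joins = []
    for rid, read in enumerate(reads):
        seed = read[-n:]                   # last n-gram of this read
        for other, off in index.get(seed, []):
            if other == rid:
                continue
            overlap = off + n              # putative overlap length
            # verify the full suffix/prefix match before accepting
            if overlap >= min_overlap and read.endswith(reads[other][:overlap]):
                joins.append((rid, other, overlap))
    return joins
```

Only reads sharing a seed n-gram are compared in full, which is what removes the quadratic all-pairs search; a probabilistic assembler would then rank these candidates by a join-likelihood metric rather than accepting them outright.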
Abstract:
Cold-formed steel (CFS) framing combined with wood sheathing, such as oriented strand board (OSB), forms shear walls that can provide lateral resistance to seismic forces. The ability to accurately predict building deformations in damaged states under seismic excitation is essential for modern performance-based seismic design. However, few static or dynamic tests have been conducted on the non-linear behavior of CFS shear walls. Thus, the purpose of this research is to provide and demonstrate a fastener-based computational model of CFS shear walls that incorporates the essential nonlinearities, which may eventually lead to improvement of the current seismic design requirements. The approach is based on the understanding that the complex interaction of the fasteners with the sheathing is an important factor in the non-linear behavior of the shear wall. The computational model consists of beam-column elements for the CFS framing and a rigid diaphragm for the sheathing. The framing and sheathing are connected with non-linear zero-length fastener elements to capture the OSB sheathing damage surrounding the fastener area. Employing computational programs such as OpenSees and MATLAB, 4 ft. x 9 ft., 8 ft. x 9 ft. and 12 ft. x 9 ft. shear wall models are created, and monotonic lateral forces are applied to the computer models. The output data are then compared with the available results of physical testing. The results indicate that the OpenSees model can accurately capture the initial stiffness, strength and non-linear behavior of the shear walls.
Abstract:
Internal combustion engines are, and will continue to be, a primary mode of power generation for ground transportation. As fuel prices rise and resource depletion and environmental impacts become increasing concerns, challenges exist in meeting fuel consumption regulations and emission standards while upholding performance. Diesel engines are advantageous due to their inherent efficiency advantage over spark ignition engines; however, their NOx and soot emissions can be difficult to control and reduce due to an inherent tradeoff. Diesel combustion is spray- and mixing-controlled, providing an intrinsic link between spray and emissions and motivating detailed, fundamental studies of spray, vaporization, mixing, and combustion characteristics under engine-relevant conditions. An optical combustion vessel facility has been developed at Michigan Technological University for these studies, with detailed tests and analysis being conducted. In this facility a preburn procedure for thermodynamic state generation is used and validated using chemical kinetics modeling, both for the MTU vessel and for the institutions comprising the Engine Combustion Network international collaborative research initiative. It is shown that the minor species produced are representative of modern diesel engines running exhaust gas recirculation and do not impact the autoignition of n-heptane. Diesel spray testing of a high-pressure (2000 bar) multi-hole injector is undertaken, including non-vaporizing, vaporizing, and combusting tests, with sprays characterized using Mie back-scatter imaging diagnostics. Liquid-phase spray parameter trends agree with the literature. Fluctuations in liquid length about a quasi-steady value are quantified, along with plume-to-plume variations. Hypotheses are developed for their causes, including fuel pressure fluctuations, nozzle cavitation, internal injector flow and geometry, chamber temperature gradients, and turbulence.
These are explored using a mixing-limited vaporization model with an equation-of-state approach for thermophysical properties. This model is also applied to single- and multi-component surrogates. Results include the development of the combustion research facility and a validated thermodynamic state generation procedure. The developed equation-of-state approach provides a means of improving surrogate fuels, both single- and multi-component, in terms of diesel spray liquid length, with knowledge of only critical fuel properties. Experimental studies are coupled with modeling incorporating improved thermodynamic non-ideal gas and fuel
Abstract:
The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating and testing the models. Large amounts of data are necessary for describing the hydrostratigraphy in areas with complex geology. Increasingly, states are making spatial data available that can be used as input to groundwater flow models, but the appropriateness of these data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data, (2) development and calibration of a regional flow model for the site, (3) conversion of the regional model to a more highly resolved local model, (4) simulation of the dioxane plume, and (5) evaluation of the model's ability to simulate field data, with estimation of the possible dioxane sources and their subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can represent recharge, pumping, and surface-water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach was successful at simulating the groundwater flows by calibrating recharge and hydraulic conductivities.
The plume transport was adequately simulated using literature dispersivity and sorption coefficients, although the plume geometries were not well constrained.
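The transport physics that MT3D solves can be illustrated with a minimal one-dimensional advection-dispersion sketch (explicit finite differences with upwind advection). The grid spacing, velocity and dispersion values are purely illustrative, not site parameters from this study, and real MT3D solves the full 3D equation with reactions and source/sink terms:

```python
def advect_disperse(c, v, D, dx, dt, steps):
    """Explicit 1D advection-dispersion update for concentrations c:
        dC/dt = D * d2C/dx2 - v * dC/dx
    using central differences for dispersion and upwind differences
    for advection (v > 0). Stability of this explicit sketch requires
    v*dt/dx <= 1 and D*dt/dx**2 <= 0.5. Boundary cells are held fixed."""
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            adv = -v * (c[i] - c[i - 1]) / dx  # upwind for v > 0
            new[i] = c[i] + dt * (disp + adv)
        c = new
    return c
```

Starting from a unit pulse, the scheme carries the plume downstream at the seepage velocity while spreading it by dispersion, which is the qualitative behavior calibrated against field data in the study above.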
Abstract:
Free-radical retrograde-precipitation polymerization, FRRPP in short, is a novel polymerization process discovered by Dr. Gerard Caneba in the late 1980s. The current study is aimed at gaining a better understanding of the reaction mechanism of the FRRPP and its thermodynamically driven features that are predominant in controlling the chain reaction. A previously developed mathematical model representing free radical polymerization kinetics was used to simulate a classic bulk polymerization system from the literature. Unlike other existing models, such a sparse-matrix-based representation allows one to explicitly accommodate chain-length-dependent kinetic parameters. Extrapolating from past results, mixing was experimentally shown to exert a significant influence on reaction control in FRRPP systems. Mixing alone drives the otherwise severely diffusion-controlled reaction propagation in phase-separated polymer domains. Therefore, in a quiescent system, in the absence of mixing, it is possible to retard the growth of phase-separated domains, thus producing isolated polymer nanoparticles (globules). Such a diffusion-controlled, self-limiting phenomenon of chain growth was also observed using time-resolved small-angle x-ray scattering studies of reaction kinetics in quiescent FRRPP systems. Combining the concept of self-limiting chain growth in quiescent FRRPP systems with the spatioselective reaction initiation of lithography, microgel structures were synthesized in a single step, without the use of molds or additives. Hard x-rays from the bending magnet radiation of a synchrotron were used as an initiation source, instead of the more statistically oriented chemical initiators. Such a spatially defined reaction was shown to be self-limiting to the irradiated regions, following a polymerization-induced self-assembly phenomenon.
The pattern transfer aspects of this technique were, therefore, studied in the FRRP polymerization of N-isopropylacrylamide (NIPAm) and methacrylic acid (MAA), a thermoreversible and an ionic hydrogel, respectively. Raising the reaction temperature increases the contrast between the exposed and unexposed zones of the formed microgels, while the extent of phase separation is directly proportional to the irradiation dose. The response of poly(NIPAm) microgels prepared by the technique described in this study was also characterized by small-angle neutron scattering.
Abstract:
The study is based on experimental work conducted in alpine snow. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to its specific surface area (SSA), and converted the SSA into a correlation length. From measurements of snow radiances at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature from cold-laboratory work. The technique shows great potential for determining the snow correlation length under field conditions.
Abstract:
The African great lakes are of utmost importance for the local economy (fishing), and are essential to the survival of the local people. During the past decades, these lakes experienced fast changes in ecosystem structure and functioning, and their future evolution is a major concern. In this study, a set of one-dimensional lake models is evaluated for the first time for Lake Kivu (2.28°S; 28.98°E), East Africa. The unique limnology of this meromictic lake, with the importance of salinity and subsurface springs in a tropical high-altitude climate, presents a worthy challenge to the seven models involved in the Lake Model Intercomparison Project (LakeMIP). Meteorological observations from two automatic weather stations are used to drive the models, whereas a unique dataset containing over 150 temperature profiles recorded since 2002 is used to assess the models' performance. Simulations are performed over the freshwater layer only (60 m) and over the average lake depth (240 m), since salinity increases with depth below 60 m in Lake Kivu and some lake models do not account for the influence of salinity on lake stratification. All models are able to reproduce the seasonality of mixing in Lake Kivu, as well as the magnitude and seasonal cycle of the lake enthalpy change. Differences between the models can be ascribed to variations in the treatment of the radiative forcing and the computation of the turbulent heat fluxes. Fluctuations in wind velocity and solar radiation explain the inter-annual variability of observed water column temperatures. The good agreement between the deep simulations and the observed meromictic stratification also shows that a subset of the models is able to account for the salinity- and geothermal-induced effects upon deep-water stratification. Finally, based on the strengths and weaknesses discerned in this study, an informed choice of a one-dimensional lake model for a given research purpose becomes possible.