947 results for Non-Standard Model Higgs bosons
Abstract:
Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed-integer non-linear programming (MINLP) model for the design of STHEs that rigorously follows the standards of the Tubular Exchanger Manufacturers Association (TEMA). The Bell–Delaware method is used for the shell-side calculations. This approach produces a large, non-convex model that cannot be solved to global optimality with current state-of-the-art solvers. Nevertheless, a sequential optimization approach over partial objective targets is proposed, dividing the problem into sets of related equations that are easier to solve. For each of these sub-problems a heuristic objective function is selected based on the physical behavior of the problem. The global optimum of the original problem cannot be ensured even when each of the sub-problems is solved to global optimality, but a very good solution is always guaranteed. Three cases taken from the literature were studied. The results show that in all cases the values obtained with the proposed MINLP model and its multiple objective functions improve on the values reported in the literature.
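As a schematic illustration of the sequential strategy described above (a hypothetical toy decomposition with made-up bounds, not the paper's TEMA-based MINLP), each sub-problem is solved with its own heuristic objective and its solution fixes variables for the next stage:

```python
# Minimal sketch, assuming a toy two-stage decomposition of an exchanger design.
# Variable names and bounds are placeholders, not TEMA data.
from scipy.optimize import minimize

def stage1_area(x):
    # heuristic target for stage 1: a proxy for heat-transfer area
    d_shell, n_tubes = x
    return d_shell**2 / n_tubes

def stage2_pressure_drop(x, fixed):
    # heuristic target for stage 2: a proxy for shell-side pressure drop,
    # with the geometry found in stage 1 held fixed
    baffle_spacing = x[0]
    d_shell, n_tubes = fixed
    return n_tubes / (baffle_spacing * d_shell)

# Stage 1: geometry sub-problem
res1 = minimize(stage1_area, x0=[1.0, 100.0],
                bounds=[(0.2, 2.0), (20.0, 1200.0)])

# Stage 2: hydraulic sub-problem, conditioned on the stage-1 solution
res2 = minimize(stage2_pressure_drop, x0=[0.3], args=(res1.x,),
                bounds=[(0.1, 1.0)])

print("stage 1 geometry:", res1.x, "stage 2 baffle spacing:", res2.x)
```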
Abstract:
The first part of the paper addresses the theoretical background of economic growth and competitive advantage models. Although there is a whole body of research on the relationship between foreign direct investment and economic growth, little has been said on foreign direct investment and national competitive advantage with respect to the economic growth of the oil- and gas-abundant countries of the Middle East and Central Asia. The second part of our paper introduces the framework of the so-called "Dubai Model" in detail and outlines the key components necessary to develop sustainable comparative advantage for the oil-rich economies. The third part proceeds with the methodology employed to measure the success of the "Dubai Model" in the UAE and in its application to other regions. The last part presents the results and investigates the degree to which other oil and gas countries in the region (i.e. Saudi Arabia, Kuwait, Qatar, Iran) have adopted the so-called "Dubai Model". It also examines whether the Dubai Model is being employed in the Eurasian (Central Asian) oil and gas regions of Kazakhstan, Azerbaijan, Turkmenistan and Uzbekistan. The objective is to gauge whether the Eurasian economies are employing the traditional growth strategies of oil-rich non-OECD countries in managing their natural resources or are adopting the newer non-traditional model of economic growth, the "Dubai Model."
Abstract:
This paper explores the effects of non-standard monetary policies on international yield relationships. Based on a descriptive analysis of international long-term yields, we find evidence that long-term rates followed a global downward trend prior to as well as during the financial crisis. Comparing interest rate developments in the US and the eurozone, it is difficult to detect a distinct impact of the first round of the Fed’s quantitative easing programme (QE1) on US interest rates for which the global environment – the global downward trend in interest rates – does not account. Motivated by these findings, we analyse the impact of the Fed’s QE1 programme on the stability of the US-euro long-term interest rate relationship by using a CVAR (cointegrated vector autoregressive) model and, in particular, recursive estimation methods. Using data gathered between 2002 and 2014, we find limited evidence that QE1 caused a break-up of, or destabilised, the transatlantic interest rate relationship. Taking global interest rate developments into account, we thus find no significant evidence that QE had any independent, distinct impact on US interest rates.
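A simple way to picture the recursive estimation idea (synthetic yields and an expanding-window regression, not the paper's data or its full CVAR specification) is to track how the estimated long-run relation between two rates evolves over time:

```python
# Hedged sketch: recursive (expanding-window) estimation of a long-run relation
# between two yield series. The series below are synthetic stand-ins for the
# US and euro long-term rates.
import numpy as np

rng = np.random.default_rng(0)
n = 300
euro = np.cumsum(rng.normal(0, 0.05, n)) + 3.0     # random-walk "euro" yield
us = 0.5 + 0.9 * euro + rng.normal(0, 0.1, n)      # cointegrated "US" yield

slopes = []
for t in range(60, n):                              # expanding estimation window
    X = np.column_stack([np.ones(t), euro[:t]])
    beta, *_ = np.linalg.lstsq(X, us[:t], rcond=None)
    slopes.append(beta[1])

# A stable long-run relation shows up as recursively estimated slopes that
# settle down rather than drift after a policy event such as QE1.
print("first and last recursive slope estimates:", slopes[0], slopes[-1])
```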
Abstract:
The goal of this study is to determine if various measures of contraction rate are regionally patterned in written Standard American English. In order to answer this question, this study employs a corpus-based approach to data collection and a statistical approach to data analysis. Based on a spatial autocorrelation analysis of the values of eleven measures of contraction across a 25 million word corpus of letters to the editor representing the language of 200 cities from across the contiguous United States, two primary regional patterns were identified: easterners tend to produce relatively few standard contractions (not contraction, verb contraction) compared to westerners, and northeasterners tend to produce relatively few non-standard contractions (to contraction, non-standard not contraction) compared to southeasterners. These findings demonstrate that regional linguistic variation exists in written Standard American English and that regional linguistic variation is more common than is generally assumed.
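As a rough illustration of the spatial autocorrelation analysis mentioned above (synthetic city locations and rates, not the study's corpus), a global Moran's I statistic can be computed for a single contraction measure:

```python
# Illustrative sketch: global Moran's I with inverse-distance spatial weights.
# Coordinates and contraction rates are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))      # hypothetical city locations
rate = rng.normal(0.5, 0.1, size=50)           # hypothetical contraction rates

# inverse-distance weights, zero on the diagonal
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
w = np.where(d > 0, 1.0 / d, 0.0)

z = rate - rate.mean()
moran_i = (len(rate) / w.sum()) * (z @ w @ z) / (z @ z)
print("Moran's I:", moran_i)   # values above E[I] = -1/(n-1) suggest spatial clustering
```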
Abstract:
The use of high-chromium cast irons for abrasive wear resistance is restricted due to their poor fracture toughness properties. An attempt was made to improve the fracture characteristics by altering the distribution, size and shape of the eutectic carbide phase without sacrificing their excellent wear resistance. This was achieved by additions of molybdenum or tungsten followed by high temperature heat treatments. The absence of these alloying elements or replacement of them with vanadium or manganese did not show any significant effect and the continuous eutectic carbide morphology remained the same after application of high temperature heat treatments. The fracture characteristics of the alloys with these metallurgical variables were evaluated for both sharp cracks and blunt notches. The results were used in conjunction with metallographic and fractographic observations to establish possible failure mechanisms. The fracture mechanism of the austenitic alloys was found to be controlled not only by the volume percent but was also greatly influenced by the size and distribution of the eutectic carbides. On the other hand, the fracture mechanism of martensitic alloys was independent of the eutectic carbide morphology. The uniformity of the secondary carbide precipitation during hardening heat treatments was shown to be a reason for consistent fracture toughness results being obtained with this series of alloys although their eutectic carbide morphologies were different. The collected data were applied to a model which incorporated the microstructural parameters and correlated them with the experimentally obtained valid stress intensity factors. The stress intensity coefficients of different short-bar fracture toughness test specimens were evaluated from analytical and experimental compliance studies. The validity and applicability of this non-standard testing technique for determination of the fracture toughness of high-chromium cast irons were investigated. The results obtained correlated well with the valid results obtained from standard fracture toughness tests.
Abstract:
This study focuses on the interactional functions of non-standard spelling, in particular letter repetition, used in text-based computer-mediated communication as a means of non-verbal signalling. The aim of this paper is to assess the current state of non-verbal cue research in computer-mediated discourse and to demonstrate the need for a more comprehensive and methodologically rigorous exploration of written non-verbal signalling. The study proposes a contextual and usage-centered view of written paralanguage. Through illustrative, close linguistic analyses, the study shows that previous approaches to non-standard spelling, based on its relation to the spoken word, may not account for the complexities of this CMC cue, and that in order to further our understanding of its interactional functions it is more fruitful to describe the role it plays in the contextualisation of verbal messages. The interactional sociolinguistic approach taken in the analysis demonstrates the range of interactional functions letter repetition can achieve, including contributing to the inscription of socio-emotional information into writing, evoking auditory cues, and displaying informality through a relaxed writing style.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as n grows enormously in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
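As a toy illustration of the latent structure (PARAFAC-type) representation discussed above, with synthetic dimensions and parameters rather than anything from the thesis, the probability tensor of several categorical variables can be assembled from rank-one components:

```python
# Sketch of a latent class / PARAFAC representation of a categorical pmf:
#   P(y1,...,yp) = sum_h lambda_h * prod_j psi_hj(y_j),
# i.e. the probability tensor is a mixture of k rank-one components.
import numpy as np

rng = np.random.default_rng(2)
p, d, k = 3, 4, 2                              # 3 variables, 4 levels each, 2 classes

lam = rng.dirichlet(np.ones(k))                # latent class weights
psi = rng.dirichlet(np.ones(d), size=(k, p))   # psi[h, j]: distribution over levels

# Build the full d**p probability tensor from the rank-one components
tensor = np.zeros((d,) * p)
for h in range(k):
    component = psi[h, 0]
    for j in range(1, p):
        component = np.multiply.outer(component, psi[h, j])
    tensor += lam[h] * component

print("tensor shape:", tensor.shape, "sums to", tensor.sum())   # ~1.0
```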
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
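The chapter derives the optimal Gaussian approximation under Diaconis–Ylvisaker priors; the sketch below is the simpler Laplace approximation of a Poisson log-linear posterior under a Gaussian prior, shown only to convey the general idea of replacing an intractable posterior with a Gaussian centred at the mode:

```python
# Hedged sketch: Laplace (Gaussian) approximation N(mode, H^{-1}) of a Poisson
# log-linear posterior with a Gaussian prior. Design matrix and data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))                     # hypothetical design matrix
beta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 10.0                                       # prior variance

def neg_log_post(beta):
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta) + 0.5 * beta @ beta / tau2

opt = minimize(neg_log_post, x0=np.zeros(3))      # posterior mode (MAP)
mu = np.exp(X @ opt.x)
hessian = X.T @ (mu[:, None] * X) + np.eye(3) / tau2

post_mean = opt.x                                 # Gaussian approximation mean
post_cov = np.linalg.inv(hessian)                 # Gaussian approximation covariance
print("approximate posterior mean:", post_mean)
```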
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
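A minimal illustration of the basic quantity this chapter builds inference on, waiting times between exceedances of a high threshold, on a synthetic heavy-tailed series rather than the climatological, financial, or electrophysiology data:

```python
# Waiting times between threshold exceedances in a (synthetic) time series.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=5000)          # heavy-tailed synthetic series
u = np.quantile(x, 0.99)                     # high threshold

exceedance_times = np.flatnonzero(x > u)
waiting_times = np.diff(exceedance_times)    # gaps between successive exceedances
print("mean waiting time between exceedances:", waiting_times.mean())
```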
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
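As a hypothetical example of the sort of approximate kernel such a framework covers (not the thesis's own construction), consider a Metropolis-Hastings chain whose acceptance ratio is computed on a random subset of the data, trading kernel accuracy for per-iteration cost, here targeting a toy Gaussian-mean posterior:

```python
# Sketch of subsampled Metropolis-Hastings on a toy Gaussian-mean posterior.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)
m = 500                                        # subsample size per iteration

def approx_log_post(theta, batch):
    # subsampled log-likelihood, rescaled to the full data size (flat prior)
    return -0.5 * (len(data) / len(batch)) * np.sum((batch - theta) ** 2)

theta, chain = 0.0, []
for _ in range(3_000):
    batch = rng.choice(data, size=m, replace=False)
    prop = theta + rng.normal(scale=0.02)
    log_alpha = approx_log_post(prop, batch) - approx_log_post(theta, batch)
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
    chain.append(theta)

# The subsampling noise perturbs the invariant distribution; quantifying how much
# of this kernel error can be tolerated is exactly the question the framework asks.
print("approximate posterior mean of theta:", np.mean(chain[1_000:]))
```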
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
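A compact sketch of the truncated-normal (Albert and Chib) data augmentation sampler for an intercept-only probit model with few successes relative to n, the regime in which the slow mixing described above appears; sample sizes here are illustrative, not the advertising data:

```python
# Sketch of the Albert-Chib data augmentation Gibbs sampler, rare-event regime.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(6)
n, n_success = 5_000, 10                       # large n, very few successes
y = np.zeros(n)
y[:n_success] = 1.0

beta, draws = 0.0, []
for _ in range(1_000):
    # 1. Draw latent utilities z_i ~ N(beta, 1), truncated by the sign of y_i
    lower = np.where(y == 1.0, -beta, -np.inf)     # z > 0 where y = 1
    upper = np.where(y == 1.0, np.inf, -beta)      # z < 0 where y = 0
    z = beta + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2. Draw beta | z under a flat prior: N(mean(z), 1/n)
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)

# Slow mixing shows up as high lag-1 autocorrelation in the beta draws
d = np.array(draws[200:])
print("lag-1 autocorrelation:", np.corrcoef(d[:-1], d[1:])[0, 1])
```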
Abstract:
The conventional mechanism of fermion mass generation in the Standard Model involves spontaneous symmetry breaking (SSB). In this thesis, we study an alternative mechanism for the generation of fermion masses that does not require SSB, in the context of lattice field theories. Because this mechanism is inherently strongly coupled, it requires a non-perturbative approach such as the lattice.
In order to explore this mechanism, we study a simple lattice model with a four-fermion interaction that has massless fermions at weak couplings and massive fermions at strong couplings, but without any spontaneous symmetry breaking. Prior work on this type of mass generation mechanism in 4D was done long ago using either mean-field theory or Monte Carlo calculations on small lattices. In this thesis, we have developed a new computational approach that enables us to perform large-scale quantum Monte Carlo calculations to study the phase structure of this theory. In 4D, our results confirm prior results, but differ in some quantitative details of the phase diagram. In contrast, in 3D, we discover a new second-order critical point using calculations on lattices of size up to $60^3$. Such large-scale calculations are unprecedented. The presence of the critical point implies the existence of an alternative mechanism of fermion mass generation without any SSB that could be of interest in continuum quantum field theory.
Abstract:
The diverse kinds of legal temporary contracts and the employment forms that do not comply with legal requirements both facilitate employment adjustment to firms' requirements and entail labour cost reductions. Their employment incidence depends not only on economic and labour market developments but also on other factors, in particular the historical trajectories followed by labour legislation, state enforcement, and the degree of compliance. To contribute to the understanding of the determinants of the degree of utilization of different employment practices, the study reported in this article explores the use made of the various legal temporary contracts and of precarious employment relationships by private enterprises in three Latin American countries (Argentina, Chile and Peru) during 2003-2012, a period of economic growth, and the explanatory role of diverse factors.
Abstract:
Context: Model atmosphere analyses have been previously undertaken for both Galactic and extragalactic B-type supergiants. By contrast, little attention has been given to a comparison of the properties of single supergiants and those that are members of multiple systems.
Aims: Atmospheric parameters and nitrogen abundances have been estimated for all the B-type supergiants identified in the VLT-FLAMES Tarantula survey. These include both single targets and binary candidates. The results have been analysed to investigate the role of binarity in the evolutionary history of supergiants.
Methods: TLUSTY non-local thermodynamic equilibrium (non-LTE) model atmosphere calculations have been used to determine atmospheric parameters and nitrogen abundances for 34 single and 18 binary supergiants. Effective temperatures were deduced using the silicon balance technique, complemented by the helium ionisation balance in the hotter spectra. Surface gravities were estimated using Balmer line profiles, and microturbulent velocities were deduced using the silicon spectrum. Nitrogen abundances or upper limits were estimated from the N II spectrum. The effects of a flux contribution from an unseen secondary were considered for the binary sample.
Results: We present the first systematic study of the incidence of binarity for a sample of B-type supergiants across the theoretical terminal-age main sequence (TAMS). To account for the distribution of effective temperatures of the B-type supergiants, it may be necessary to extend the TAMS to lower temperatures. This is also consistent with the derived distribution of mass discrepancies, projected rotational velocities and nitrogen abundances, provided that stars cooler than this temperature are post-red-supergiant objects. For all the supergiants in the Tarantula and in a previous FLAMES survey, the majority have small projected rotational velocities. The distribution peaks at about 50 km s⁻¹, with 65% in the range 30 km s⁻¹ ≤ ve sin i ≤ 60 km s⁻¹. About ten per cent have larger ve sin i (≥100 km s⁻¹), but surprisingly these show little or no nitrogen enhancement. All the cooler supergiants have low projected rotational velocities of ≤70 km s⁻¹ and high nitrogen abundance estimates, implying that either bi-stability braking or evolution on a blue loop may be important. Additionally, there is a lack of cooler binaries, possibly reflecting the small sample sizes. Single-star evolutionary models, which include rotation, can account for all of the nitrogen enhancement in both the single and binary samples. The detailed distribution of nitrogen abundances in the single and binary samples may be different, possibly reflecting differences in their evolutionary history.
Conclusions: The first comparative study of single and binary B-type supergiants has revealed that the main sequence may be significantly wider than previously assumed, extending to Teff = 20 000 K. Some marginal differences in single and binary atmospheric parameters and abundances have been identified, possibly implying non-standard evolution for some of the sample. This sample as a whole has implications for several aspects of our understanding of the evolutionary status of blue supergiants.
Abstract:
Searches for the supersymmetric partner of the top quark (stop) are motivated by natural supersymmetry, where the stop has to be light to cancel the large radiative corrections to the Higgs boson mass. This thesis presents three different searches for the stop at √s = 8 TeV and √s = 13 TeV using data from the ATLAS experiment at CERN’s Large Hadron Collider. The thesis also includes a study of the primary vertex reconstruction performance in data and simulation at √s = 7 TeV using tt̄ and Z events. All stop searches presented are carried out in final states with a single lepton, four or more jets and large missing transverse energy. A search for direct stop pair production is conducted with 20.3 fb⁻¹ of data at a center-of-mass energy of √s = 8 TeV. Several stop decay scenarios are considered, including those to a top quark and the lightest neutralino and to a bottom quark and the lightest chargino. The sensitivity of the analysis is also studied in the context of various phenomenological MSSM models in which more complex decay scenarios can be present. Two different analyses are carried out at √s = 13 TeV. The first one is a search for both gluino-mediated and direct stop pair production with 3.2 fb⁻¹ of data while the second one is a search for direct stop pair production with 13.2 fb⁻¹ of data in the decay scenario to a bottom quark and the lightest chargino. The results of the analyses show no significant excess over the Standard Model predictions in the observed data. Consequently, exclusion limits are set at 95% CL on the masses of the stop and the lightest neutralino.
Abstract:
This thesis studies the cross-validation criterion for the selection of small area models. The study is limited to unit-level small area models. The basic small area model was introduced by Battese, Harter and Fuller in 1988. It is a linear mixed regression model with a random intercept. It comprises a number of parameters: the fixed-effect parameter β, the random component, and the variances associated with the residual error. The model of Battese et al. is used in a survey to predict the mean of a variable of interest y in each small area using an administrative auxiliary variable x that is known for the whole population. The estimation method uses a normal distribution to model the residual component of the model. Considering a general residual dependence, that is, one other than the normal distribution, yields a more flexible methodology. This generalization leads to a new class of exchangeable models. Indeed, the generalization lies in the modelling of the residual dependence, which can be either normal (as in the model of Battese et al.) or non-normal. The objective is to determine the parameters specific to the small areas as precisely as possible. This hinges on choosing the right residual dependence to use in the model. The cross-validation criterion is studied for this purpose.
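For reference, the Battese–Harter–Fuller unit-level model described above has the standard form (notation supplied here, not quoted from the thesis):

\[
y_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + v_i + e_{ij},
\qquad v_i \sim \mathcal{N}(0, \sigma_v^2), \quad e_{ij} \sim \mathcal{N}(0, \sigma_e^2),
\]

where i indexes the small areas and j the sampled units; the generalisation discussed above replaces the normality assumption on the residual components with other exchangeable dependence structures.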
Abstract:
The main objective of this dissertation is the production of light charginos (charged supersymmetric particles) at the future international linear e+e− collider (ILC) for different supersymmetry-breaking scenarios. Charginos are particles formed by the mixing of the charged wino field with the charged higgsino field. The main motivation for studying supersymmetric theories is the large number of Standard Model (SM) problems they can solve, among them neutrino masses, cold dark matter, and fine-tuning. In addition, we study the fundamental principles that guide particle physics, namely the gauge principle and the Higgs mechanism.
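For context (standard MSSM notation added here, not taken from the dissertation), the chargino sector arises from the mass matrix that mixes the charged wino and higgsino fields:

\[
X = \begin{pmatrix} M_2 & \sqrt{2}\, m_W \sin\beta \\ \sqrt{2}\, m_W \cos\beta & \mu \end{pmatrix},
\]

whose diagonalisation yields the two chargino mass eigenstates \(\tilde{\chi}^{\pm}_{1,2}\); their masses and wino-higgsino composition depend on \(M_2\), \(\mu\) and \(\tan\beta\), which is what changes between supersymmetry-breaking scenarios.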
A new age of fuel performance code criteria studied through advanced atomistic simulation techniques
Abstract:
A fundamental step in understanding the effects of irradiation on metallic uranium and uranium dioxide ceramic fuels, or any material, must start with the nature of radiation damage on the atomic level. The atomic damage displacement results in a multitude of defects that influence the fuel performance. Nuclear reactions are coupled, in that changing one variable will alter others through feedback. In the field of fuel performance modeling, these difficulties are addressed through the use of empirical models rather than models based on first principles. Empirical models can be used as a predictive code through the careful manipulation of input variables for the limited circumstances that are closely tied to the data used to create the model. While empirical models are efficient and give acceptable results, these results are only applicable within the range of the existing data. This narrow window prevents modeling changes in operating conditions that would invalidate the model as the new operating conditions would not be within the calibration data set. This work is part of a larger effort to correct for this modeling deficiency. Uranium dioxide and metallic uranium fuels are analyzed through a kinetic Monte Carlo (kMC) code as part of an overall effort to generate a stochastic and predictive fuel code. The kMC investigations include sensitivity analysis of point defect concentrations, thermal gradients implemented through a temperature variation mesh-grid, and migration energy values. In this work, fission damage is primarily represented through defects on the oxygen anion sublattice. Results were also compared between the various models. Past studies of kMC point defect migration have not adequately addressed non-standard migration events such as clustering and dissociation of vacancies. As such, the General Utility Lattice Program (GULP) code was utilized to generate new migration energies so that additional non-migration events could be included in the kMC code in the future for more comprehensive studies. Defect energies were calculated to generate barrier heights for single vacancy migration, clustering and dissociation of two vacancies, and vacancy migration while under the influence of both an additional oxygen and uranium vacancy.
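For concreteness, a minimal sketch of the kMC event-selection step that such studies rely on: event rates follow an Arrhenius form with the migration energies as barriers, one event is chosen in proportion to its rate, and the clock advances by an exponentially distributed residence time. The barrier values below are placeholders, not the GULP-derived energies from this work.

```python
# Residence-time (BKL-style) kinetic Monte Carlo step with placeholder barriers.
import numpy as np

rng = np.random.default_rng(7)
k_B = 8.617e-5                     # Boltzmann constant, eV/K
T = 1000.0                         # temperature, K
nu = 1e13                          # attempt frequency, 1/s
barriers = np.array([0.5, 0.7, 1.1, 1.1])   # hypothetical migration energies, eV

def kmc_step(barriers):
    rates = nu * np.exp(-barriers / (k_B * T))
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)   # pick an event by its rate
    dt = -np.log(rng.uniform()) / total               # residence time
    return event, dt

t = 0.0
for _ in range(10):
    event, dt = kmc_step(barriers)
    t += dt
print("simulated time after 10 hops:", t, "s")
```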
Abstract:
Simplifying the Einstein field equation by assuming the cosmological principle yields a set of differential equations that governs the dynamics of the universe as described in the cosmological standard model. The cosmological principle assumes that space appears the same everywhere and in every direction; moreover, the principle has earned its position as a fundamental assumption in cosmology by being compatible with the observations of the 20th century. It was not until the current century that observations on cosmological scales showed significant deviations from isotropy and homogeneity, implying a violation of the principle. Among these observations are the inconsistency between local and non-local evaluations of the Hubble parameter, the baryon acoustic features of the Lyman-α forest, and the anomalies of the cosmic microwave background radiation. As a consequence, cosmological models beyond the cosmological principle have been studied extensively; after all, the principle is a hypothesis and as such should be tested as frequently as any other assumption in physics. In this thesis, the effects of inhomogeneity and anisotropy, which arise as a consequence of discarding the cosmological principle, are investigated. The geometry and matter content of the universe become more cumbersome, and the resulting effects on the Einstein field equation are introduced. The cosmological standard model and its issues, both fundamental and observational, are presented. Particular interest is given to the local Hubble parameter, supernova explosion, baryon acoustic oscillation, and cosmic microwave background observations, as well as to the cosmological constant problems. Explored and proposed resolutions that emerge from violating the cosmological principle are reviewed. The thesis concludes with a summary and an outlook on the included research papers.
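For reference, the differential equations referred to above are the Friedmann equations for the scale factor a(t) of the FLRW metric (standard form, with c = 1):

\[
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}.
\]

Dropping the cosmological principle means the metric can no longer be reduced to this single function a(t), which is what makes the geometry and matter content more cumbersome, as described above.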