997 results for Model violation
Abstract:
The explosion in the number of available sequences has allowed phylogenomics, that is, the study of relationships among species from large multi-gene alignments, to take off. It is unquestionably a way to overcome the stochastic errors of single-gene phylogenies, but many problems remain despite progress in modelling the evolutionary process. In this thesis, we set out to characterize certain aspects of the misfit between model and data and to study their impact on the accuracy of inference. Unlike heterotachy, variation over time of the amino-acid substitution process has so far received little attention. We show not only that this heterogeneity is widespread among animals, but also that its existence can harm the quality of phylogenomic inference. Thus, in the absence of an adequate model, removing the heterogeneous columns, which the model handles poorly, can make a reconstruction artifact disappear. In a phylogenomic setting, the sequencing techniques used often imply that not all genes are present for all species. The controversy over the impact of the amount of empty cells has recently been revived, but most studies of missing data are performed on small simulated sequence datasets. We therefore set out to quantify this impact in the case of a large alignment of real data. For a reasonable proportion of missing data, it appears that the incompleteness of the alignment affects the accuracy of inference less than the choice of model does. Conversely, adding an incomplete sequence that breaks a long branch can restore, at least partially, an erroneous phylogeny. Since model violations remain the major limitation on the accuracy of phylogenetic inference, improving the sampling of species and genes remains a useful alternative in the absence of an adequate model. We therefore developed sequence-selection software that builds reproducible datasets based on the amount of data present, the evolutionary rate, and compositional biases. In the course of this study we showed that human expertise still provides indispensable knowledge. The various analyses carried out for this thesis point to the paramount importance of the evolutionary model.
Abstract:
Lepton masses and mixing angles via localization of 5-dimensional fields in the bulk are revisited in the context of Randall-Sundrum models. The Higgs is assumed to be localized on the IR brane. Three cases for neutrino masses are considered: (a) the higher-dimensional neutrino mass operator (LH.LH), (b) Dirac masses, and (c) Type I seesaw with bulk Majorana mass terms. Neutrino masses and mixing as well as charged-lepton masses are fit in the first two cases using χ² minimization for the bulk mass parameters, while varying the O(1) Yukawa couplings between 0.1 and 4. Lepton flavor violation is studied for all three cases. It is shown that large negative bulk mass parameters are required for the right-handed fields to fit the data in the LH.LH case. This case is characterized by a very large Kaluza-Klein (KK) spectrum and relatively weak flavor-violating constraints at leading order. The zero modes for the charged singlets are composite in this case, and their corresponding effective 4-dimensional Yukawa couplings to the KK modes could be large. For the Dirac case, good fits can be obtained for the bulk mass parameters, c_i, lying between 0 and 1. However, most of the "best-fit regions" are ruled out by flavor-violating constraints. In the case with bulk Majorana terms, we have solved the profile equations numerically. We give example points for inverted and normal hierarchies of neutrino masses. Lepton flavor violating rates are large for these points. We then discuss various minimal flavor violation schemes for the Dirac and bulk Majorana cases. In the Dirac case with the minimal-flavor-violation hypothesis, it is possible to simultaneously fit leptonic masses and mixing angles and alleviate lepton flavor violating constraints for KK modes with masses of around 3 TeV. Similar examples are also provided in the Majorana case.
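The fitting procedure described above reduces to a standard χ² minimization over the bulk mass parameters. The following is a minimal illustrative sketch only: predict() is a hypothetical stand-in for the actual RS zero-mode mass formulae, and the targets and tolerances are placeholders, not values from the paper.

```python
# Minimal sketch of a chi^2 fit over bulk mass parameters c_i.
# predict() is a hypothetical stand-in for the RS zero-mode mass formulae;
# targets and tolerances are placeholders, not values from the paper.
import numpy as np
from scipy.optimize import minimize

targets = np.array([0.511e-3, 0.1057, 1.777])  # charged-lepton masses (GeV)
errors = 0.01 * targets                        # assumed 1% fit tolerance

def predict(c):
    # Toy model: effective Yukawas fall off exponentially with the bulk
    # parameters c_i (a real fit would use the exact profile overlaps).
    return 174.0 * np.exp(-10.0 * c)

def chi2(c):
    return np.sum(((predict(c) - targets) / errors) ** 2)

best = minimize(chi2, x0=np.array([1.2, 0.8, 0.5]), method="Nelder-Mead")
print("best-fit c_i:", best.x, "chi2:", best.fun)
```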
Abstract:
This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the standard model's failure to explain the generation of the baryon asymmetry of the universe, and from the absence of experimental evidence for proton decay. However, the lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of those models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on models containing new scalars responsible for baryon number breaking. A model with new color-sextet scalars is analyzed in more detail. Apart from generating the cosmological baryon number, it yields nontrivial predictions for neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral-meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon-number-violating nonrenormalizable dimension-five operators that trigger proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis concludes by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.
Abstract:
We implement the mechanism of spontaneous CP violation in the 3-3-1 model with right-handed neutrinos and identify its sources of CP violation. Our main result is that the mechanism works already in the minimal version of the model, and new sources of CP violation emerge as an effect of new physics at energies higher than the electroweak scale.
Abstract:
We show that it is possible to implement soft superweak CP violation in the context of a 3-3-1 model with only three triplets. All CP violation effects come from the exchange of singly and doubly charged scalars. We consider the implications of this mechanism in the quark and lepton sectors. In particular, it is shown that in this model, as in most of those which incorporate scalar-mediated CP violation, it is possible to have large electric dipole moments for the muon and the tau lepton while keeping small those of the electron and neutron. The CKM mixing matrix is real at the tree level but acquires a phase at the one-loop level. ©1999 The American Physical Society.
Abstract:
We analyse the production of multileptons in the simplest supergravity model with bilinear violation of R parity at the Fermilab Tevatron. Despite the small R-parity violating couplings needed to generate the neutrino masses indicated by current atmospheric neutrino data, the lightest supersymmetric particle is unstable and can decay inside the detector. This leads to a phenomenology quite distinct from that of the R-parity conserving scenario. We quantify by how much the supersymmetric multilepton signals differ from the R-parity conserving expectations, displaying our results in the (m0, m1/2) plane. We show that the presence of bilinear R-parity violating interactions enhances the supersymmetric multilepton signals over most of the parameter space, especially at moderate and large m0. © SISSA/ISAS 2003.
Abstract:
We study lepton flavor observables in the Standard Model (SM) extended with all dimension-6 operators which are invariant under the SM gauge group. We calculate the complete one-loop predictions for the radiative lepton decays μ → eγ, τ → μγ and τ → eγ, as well as for the closely related anomalous magnetic moments and electric dipole moments of charged leptons, taking into account all dimension-6 operators which can generate lepton flavor violation. The three-body flavor-violating charged lepton decays τ± → μ±μ⁺μ⁻, τ± → e±e⁺e⁻, τ± → e±μ⁺μ⁻, τ± → μ±e⁺e⁻, τ± → e∓μ±μ±, τ± → μ∓e±e± and μ± → e±e⁺e⁻, and the Z⁰ decays Z⁰ → ℓᵢ⁺ℓⱼ⁻, are also considered, taking into account all tree-level contributions.
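For orientation, the radiative decays are governed by effective dipole operators. With the common normalization below (an assumption on our part; the paper's conventions may differ) and the mass of the lighter lepton neglected, the width takes the standard form:

```latex
% Effective dipole Lagrangian and the resulting radiative decay width
% (a common textbook normalization; conventions vary between papers).
\mathcal{L}_{\rm eff} = C_R\,\bar{\ell}_j \sigma^{\mu\nu} P_R \ell_i\, F_{\mu\nu}
                      + C_L\,\bar{\ell}_j \sigma^{\mu\nu} P_L \ell_i\, F_{\mu\nu}
                      + \text{h.c.},
\qquad
\Gamma(\ell_i \to \ell_j \gamma) = \frac{m_{\ell_i}^3}{4\pi}
                                   \left(|C_L|^2 + |C_R|^2\right).
```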
Abstract:
Hydraulic excavators in the mining industry are widely used owing to the large payload capabilities these machines can achieve. However, there are very few optimisation studies for producing efficient hydraulic excavator buckets. An efficient bucket avoids unnecessary weight, strongly influences the payload, and improves the efficiency of hydraulic mining excavators. This paper presents a framework for the development of a scaled hydraulic excavator by examining the geometry and force relationships. A small hydraulic excavator was purchased and fitted with a boom scaled to a factor. Geometric and force relationships of the model were derived to assist computer instrumentation in retrieving the variable inputs necessary for bucket design.
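Geometric and force relationships between a full-size machine and a scale model typically follow similarity laws. The sketch below illustrates the idea under standard geometric-similarity assumptions (length ~ s, pressure-limited cylinder force ~ s², volume ~ s³); the parameter names and numbers are hypothetical and not taken from the paper.

```python
# Minimal sketch of geometric-similarity scaling for a scale-model excavator.
# Scaling laws (length ~ s, cylinder force ~ s^2 at equal hydraulic pressure,
# bucket volume ~ s^3) are standard similarity assumptions, not paper values.
def scale_model(full_size: dict, s: float) -> dict:
    """Scale full-size excavator parameters by linear factor s (0 < s < 1)."""
    return {
        "boom_length_m": full_size["boom_length_m"] * s,          # ~ s
        "breakout_force_kN": full_size["breakout_force_kN"] * s**2,  # ~ s^2
        "bucket_volume_m3": full_size["bucket_volume_m3"] * s**3,    # ~ s^3
    }

full = {"boom_length_m": 10.0, "breakout_force_kN": 600.0, "bucket_volume_m3": 12.0}
print(scale_model(full, s=0.1))
```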
Abstract:
This article presents a novel approach to confidentiality violation detection based on taint marking. Information flows are dynamically tracked between applications and objects of the operating system such as files, processes and sockets. A confidentiality policy is defined by labelling sensitive information and defining which information may leave the local system through network exchanges. Furthermore, per-application profiles can be defined to restrict the sets of information each application may access and/or send over the network. In previous work, we focused on the use of mandatory access control mechanisms for information flow tracking. In the present work, we have extended the earlier information flow model to track network exchanges, and we are able to define a policy attached to network sockets. We show an example application of this extension in the context of a compromised web browser: our implementation detects a confidentiality violation when the browser attempts to leak private information to a remote host over the network.
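The core mechanism can be pictured as label propagation plus a policy check at the socket boundary. The following minimal sketch is illustrative only; the objects, labels, and policy are hypothetical and do not reflect the authors' OS-level implementation.

```python
# Minimal sketch of taint-marking confidentiality tracking: reading a
# labelled file taints the process; sending on a socket is checked against
# the labels allowed to leave the host. All names and labels are illustrative.
CONFIDENTIAL = {"secret"}                           # labels that must not leak

file_taint = {"/home/user/wallet.dat": {"secret"}}  # labelled sensitive file

def read_file(proc_taint, path):
    """Propagate the file's taint labels to the reading process."""
    return proc_taint | file_taint.get(path, set())

def send_on_socket(proc_taint, allowed_labels):
    """Raise if a confidential label would leave the system on this socket."""
    leaked = (proc_taint & CONFIDENTIAL) - allowed_labels
    if leaked:
        raise PermissionError(f"confidentiality violation: {leaked} leaving host")

browser_taint = set()
browser_taint = read_file(browser_taint, "/home/user/wallet.dat")
try:
    send_on_socket(browser_taint, allowed_labels=set())
except PermissionError as err:
    print(err)  # confidentiality violation: {'secret'} leaving host
```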
Abstract:
Background: In-depth investigations of crash risks inform prevention and safety promotion programmes. Traditionally, they are conducted using an exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. Such data are not readily available and often require extensive data collection on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group for the same set of explanatory variables. The combination of the two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interacting factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further comparison between the proposed methodology and the case-control method would be useful.
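The quasi-induced exposure idea can be summarized in a few lines: in two-unit crashes, the distribution of not-at-fault parties approximates the exposure of each road-user group, so a group's relative risk is its at-fault share divided by its not-at-fault share. The sketch below uses made-up counts purely for illustration.

```python
# Minimal sketch of the quasi-induced exposure technique described above.
# In two-unit crashes, not-at-fault parties approximate the exposure of a
# road-user group, so relative risk ~ (at-fault share) / (not-at-fault share).
# The counts below are made up for illustration.
from collections import Counter

at_fault = Counter({"motorcycle": 120, "car": 700, "truck": 80})
not_at_fault = Counter({"motorcycle": 40, "car": 800, "truck": 60})

def relative_risk(group):
    p_fault = at_fault[group] / sum(at_fault.values())
    p_exposure = not_at_fault[group] / sum(not_at_fault.values())
    return p_fault / p_exposure

for g in at_fault:
    print(g, round(relative_risk(g), 2))
# A ratio well above 1 (motorcycle here) indicates over-involvement
# relative to exposure.
```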
Abstract:
The Standard Model of particle physics comprises quantum electrodynamics (QED) and the weak and strong nuclear interactions. QED is the basis for molecular properties, and thus it defines much of the world we see. The weak nuclear interaction is responsible for decays of nuclei, among other things, and in principle it should also have effects at the molecular scale. The strong nuclear interaction is hidden in interactions inside nuclei. From high-energy and atomic experiments it is known that the weak interaction does not conserve parity. Consequently, the weak interaction, and specifically the exchange of the Z⁰ boson between a nucleon and an electron, induces small energy shifts of different sign for mirror-image molecules. This in turn makes one enantiomer of a molecule energetically more favorable than the other, and also shifts the spectral lines of the mirror-image pair of molecules in opposite directions, creating a splitting. Parity violation (PV) in molecules, however, has not been observed. The topic of this thesis is how the weak interaction affects certain molecular magnetic properties, namely certain parameters of nuclear magnetic resonance (NMR) and electron spin resonance (ESR) spectroscopies. The thesis consists of numerical estimates of NMR and ESR spectral parameters and investigations of how different aspects of the quantum chemical computations affect them. PV contributions to the NMR shielding and spin-spin coupling constants are investigated from the computational point of view. All aspects of the quantum chemical electronic structure computations are found to be very important, which makes accurate computation challenging. Effects of molecular geometry are also investigated using a model system of polysilylene chains. The PV contribution to the NMR shielding constant is found to saturate once the chain reaches a certain length, but the effects of local geometry can be large. Rigorous vibrational averaging is also performed for a relatively small and rigid molecule; vibrational corrections to the PV contribution are found to be only a couple of per cent. PV contributions to the ESR g-tensor are also evaluated for a series of molecules. Unfortunately, all the estimates are below the experimental limits, but PV in some of the heavier molecules comes close to present-day experimental resolution.
Abstract:
The Fuzzy Waste Load Allocation Model (FWLAM), developed in an earlier study, derives the optimal fractional removal levels for base flow conditions, considering the goals of the Pollution Control Agency (PCA) and the dischargers. The Modified Fuzzy Waste Load Allocation Model (MFWLAM), developed subsequently, is a stochastic model that considers the moments (mean, variance and skewness) of water quality indicators, incorporating uncertainty due to randomness of the input variables along with uncertainty due to imprecision. The risk of low water quality is reduced significantly by this modified model, but the inclusion of new constraints leads to a low value of the acceptability level, λ, interpreted as the maximized minimum satisfaction in the system. To improve this value, a new model, which is a combination of FWLAM and MFWLAM, is presented, allowing for some violations of the constraints of MFWLAM. This combined model is a multiobjective optimization model with two objectives: maximization of the acceptability level and minimization of the violation of constraints. Fuzzy multiobjective programming, goal programming and fuzzy goal programming are used to find the solutions. Probabilistic Global Search Lausanne (PGSL) is used as the nonlinear optimization tool. The methodology is applied to a case study of the Tunga-Bhadra river system in south India. The model yields a compromise solution with a higher acceptability level than MFWLAM and a satisfactory value of risk. Thus the goal of risk minimization is achieved with a comparatively better acceptability level.
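The max-min fuzzy formulation underlying this family of models can be illustrated in a few lines: maximize the minimum satisfaction λ subject to membership constraints μ_k(x) ≥ λ. The sketch below uses two linear membership functions with made-up goal bounds, not the Tunga-Bhadra case-study data.

```python
# Minimal sketch of the max-min fuzzy formulation: maximize the minimum
# satisfaction lambda subject to linear membership functions mu_k(x) >= lambda.
# The goals and bounds are illustrative, not case-study values.
from scipy.optimize import linprog

# Decision variables: x (fractional removal level) and lambda.
# Goal 1 (discharger): satisfaction rises as x falls:  mu1 = (0.9 - x) / 0.6
# Goal 2 (PCA):        satisfaction rises as x rises:  mu2 = (x - 0.3) / 0.6
# linprog minimizes, so maximize lambda via objective (0, -1).
c = [0.0, -1.0]
# mu1 >= lambda  ->   x/0.6 + lambda <=  0.9/0.6
# mu2 >= lambda  ->  -x/0.6 + lambda <= -0.3/0.6
A_ub = [[1 / 0.6, 1.0], [-1 / 0.6, 1.0]]
b_ub = [0.9 / 0.6, -0.3 / 0.6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.3, 0.9), (0.0, 1.0)])
x, lam = res.x
print(f"x = {x:.2f}, acceptability level = {lam:.2f}")  # x = 0.60, lambda = 0.50
```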