8 results for Large Hadron Collider (France and Switzerland)
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
We discuss theoretical and phenomenological aspects of two-Higgs-doublet extensions of the Standard Model. In general, these extensions have scalar-mediated flavour-changing neutral currents, which are strongly constrained by experiment. Various strategies are discussed to control these flavour-changing scalar currents, and their phenomenological consequences are analysed. In particular, scenarios with natural flavour conservation are investigated, including the so-called type I and type II models as well as lepton-specific and inert models. Type III models are then discussed, where scalar flavour-changing neutral currents are present at tree level but are suppressed either by a specific ansatz for the Yukawa couplings or by the introduction of family symmetries leading to a natural suppression mechanism. We also consider the phenomenology of charged scalars in these models. Next we turn to the role of symmetries in the scalar sector. We discuss the six symmetry-constrained scalar potentials and their extension into the fermion sector. The vacuum structure of the scalar potential is analysed, including a study of the vacuum stability conditions on the potential; the renormalization-group improvement of these conditions is also presented. The stability of the tree-level minimum of the scalar potential in connection with electric charge conservation, and its behaviour under CP, is analysed. The question of CP violation is addressed in detail, including the cases of explicit CP violation and spontaneous CP violation. We present a detailed study of weak-basis invariants which are odd under CP. These invariants allow for the possibility of studying the CP properties of any two-Higgs-doublet model in an arbitrary Higgs basis. A careful study of spontaneous CP violation is presented, including an analysis of the conditions which have to be satisfied in order for a vacuum to violate CP. We present minimal models of CP violation where the vacuum phase is sufficient to generate a complex CKM matrix, which is at present a requirement for any realistic model of spontaneous CP violation.
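As context for the symmetry discussion above, the general renormalizable 2HDM scalar potential is conventionally parameterized as follows (a standard form from the 2HDM literature, not quoted from this abstract):

V(\Phi_1,\Phi_2) = m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2 - \left(m_{12}^2\,\Phi_1^\dagger\Phi_2 + \mathrm{h.c.}\right)
  + \tfrac{\lambda_1}{2}(\Phi_1^\dagger\Phi_1)^2 + \tfrac{\lambda_2}{2}(\Phi_2^\dagger\Phi_2)^2
  + \lambda_3(\Phi_1^\dagger\Phi_1)(\Phi_2^\dagger\Phi_2) + \lambda_4(\Phi_1^\dagger\Phi_2)(\Phi_2^\dagger\Phi_1)
  + \left[\tfrac{\lambda_5}{2}(\Phi_1^\dagger\Phi_2)^2 + \lambda_6(\Phi_1^\dagger\Phi_1)(\Phi_1^\dagger\Phi_2) + \lambda_7(\Phi_2^\dagger\Phi_2)(\Phi_1^\dagger\Phi_2) + \mathrm{h.c.}\right],

where m_{12}^2 and \lambda_{5,6,7} are in general complex; CP is explicitly violated when no basis choice renders them simultaneously real, and the symmetry-constrained potentials mentioned above correspond to different restrictions on these parameters.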
Abstract:
We show that a light charged Higgs boson signal via τ±ν decay can also be established at the Large Hadron Collider (LHC) in the case of single top production. This process complements searches for the same signal in the case of charged Higgs bosons emerging from tt̄ production. The models accessible include the Minimal Supersymmetric Standard Model (MSSM) as well as a variety of 2-Higgs-Doublet Models (2HDMs). High energies and luminosities are, however, required, thereby restricting interest in this mode to the case of the LHC running at 14 TeV in its design configuration.
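As an illustrative sketch of the signal chain (the specific t-channel partonic mode is an assumption on our part, not stated in the abstract), a light charged Higgs boson arising in single-top events would proceed as

q\,b \to q'\,t, \qquad t \to b\,H^{+}, \qquad H^{+} \to \tau^{+}\nu_{\tau},

together with the charge-conjugate process, in contrast with the tt̄ search, where the charged Higgs emerges from one of the pair-produced top quarks.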
Abstract:
With the discovery of the Higgs boson at the Large Hadron Collider, the high-energy physics community's attention has now turned to understanding the properties of the Higgs boson, together with the hope of finding more scalars during run 2. In this work we discuss scenarios in which a combination of three decays, involving the 125 GeV Higgs boson, the Z boson and at least one additional scalar, yields an indisputable signal of CP violation. We use a complex two-Higgs-doublet model as a reference model and present some benchmark points that have passed all current experimental and theoretical constraints, and that have cross sections large enough to be probed during run 2.
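One illustrative combination (our inference from the general CP argument, not quoted from the abstract): for neutral scalars h_1 = h(125) and a heavier h_2, the simultaneous observation of

h_1 \to Z Z, \qquad h_2 \to Z Z, \qquad h_2 \to h_1 Z

is inconsistent with CP conservation: the first two decays require both h_1 and h_2 to carry CP-even components, while the third requires h_1 and h_2 to have opposite CP parities, so no consistent assignment of CP quantum numbers exists.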
Abstract:
The Higgs boson recently discovered at the Large Hadron Collider has been shown to have couplings to the remaining particles well within what is predicted by the Standard Model. The search for other new heavy scalar states has so far proved fruitless, imposing constraints on the existence of new scalar particles. However, it is still possible that any existing heavy scalars would preferentially decay to final states involving the light Higgs boson, thus evading the current LHC bounds on heavy scalar states. Moreover, decays of the heavy scalars could increase the number of light Higgs bosons being produced. Since the number of light Higgs bosons decaying to Standard Model particles is within the predicted range, this could mean that part of the light Higgs bosons have their origin in heavy scalar decays. This situation would occur if the light Higgs couplings to Standard Model particles were reduced by a concomitant amount. Using a very simple extension of the SM, the two-Higgs-doublet model, we show that in fact we could already be observing the effect of the heavy scalar states, even if all results related to the Higgs are in excellent agreement with the Standard Model predictions.
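A minimal bookkeeping sketch of this compensation (the notation is ours, for illustration): writing κ for the rescaling of the light-Higgs couplings, the event rate into a final state f scales roughly as

N_f \;\propto\; \left[\kappa^2\,\sigma_{\mathrm{SM}}(pp \to h) + 2\,\sigma(pp \to H)\,\mathrm{BR}(H \to hh)\right]\mathrm{BR}(h \to f),

so a deficit in direct production (κ² < 1) can be masked by the extra light Higgs bosons supplied by H → hh decays, leaving the measured rates close to their Standard Model values.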
Abstract:
This study explores a large set of OC and EC measurements in PM10 and PM2.5 aerosol samples, undertaken with a long-term constant analytical methodology, to evaluate the capability of the OC/EC minimum ratio to represent the ratio between the OC and EC aerosol components resulting from fossil fuel combustion (OCff/ECff). The data set covers a wide geographical area in Europe, with a particular focus upon Portugal, Spain and the United Kingdom, and includes a great variety of sites: urban (background, kerbside and tunnel), industrial, rural and remote. The highest minimum ratios were found in samples from remote and rural sites. Urban background sites have shown spatially and temporally consistent minimum ratios of around 1.0 for PM10 and 0.7 for PM2.5. The consistency of results has suggested that the method could be used as a tool to derive the ratio between OC and EC from fossil fuel combustion and consequently to differentiate OC from primary and secondary sources. To explore this capability, OC and EC measurements were performed in a busy roadway tunnel in central Lisbon. The OC/EC ratio, which reflected the composition of vehicle combustion emissions, was in the range of 0.3-0.4. Ratios of OC/EC in roadside increment air (roadside minus urban background) in Birmingham, UK, also lie within the range 0.3-0.4. Additional measurements were performed under heavy traffic conditions at two double kerbside sites located in the centres of Lisbon and Madrid. The OC/EC minimum ratios observed at both sites were found to be between those of the tunnel and those of urban background air, suggesting that minimum values commonly obtained for this parameter in open urban atmospheres over-predict the direct emissions of OCff from road transport. Possible reasons for this discrepancy are explored.
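As a worked illustration of how such a minimum ratio is applied in the EC-tracer approach (the numbers below are hypothetical, chosen within the ranges reported above):

OC_{\mathrm{sec}} \;=\; OC_{\mathrm{total}} \;-\; EC \times \left(\frac{OC}{EC}\right)_{\mathrm{min}}.

For an urban background PM2.5 sample with OC_total = 5.0 µg/m³, EC = 2.0 µg/m³ and a minimum ratio of 0.7, the primary combustion OC is estimated as 2.0 × 0.7 = 1.4 µg/m³, leaving 5.0 − 1.4 = 3.6 µg/m³ attributed to secondary (and other non-combustion) sources.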
Abstract:
Dissertation prepared in partial fulfilment of the requirements for the Master's Degree in Civil Engineering, in the speciality area of Hydraulics.
Abstract:
Toluene hydrogenation was studied over catalysts based on Pt supported on large-pore zeolites (HUSY and HBEA) with different metal/acid ratios. The acidity of the zeolites was assessed by pyridine adsorption followed by FTIR, showing only small changes before and after Pt introduction. Metal dispersion was determined by H2–O2 titration and verified by a linear correlation with the intensity of the Pt0–CO band obtained by in situ FTIR. It was also observed that the electronic properties of the Pt0 clusters were similar for the different catalysts. Catalytic tests showed rapid catalyst deactivation, with an activity loss of 80–95% after 60 min of reaction. The turnover frequency of the fresh catalysts depended both on metal dispersion and on the support. For the same support, it changed 1.7-fold (HBEA) and 4.0-fold (HUSY), showing that toluene hydrogenation is structure-sensitive, i.e. the hydrogenating activity is not a unique function of the accessible metal. This was proposed to be due to the contribution to the overall activity of the hydrogenation of toluene adsorbed on acid sites via hydrogen spillover. Taking into account the role of zeolite acidity, the catalyst series were compared by the activity per total adsorbing site, which was observed to increase steadily with nPt/(nPt + nA). An increase in the number of accessible Pt atoms leads to an increase in the amount of spilled-over hydrogen available at acid sites, therefore increasing the overall activity. Pt/HBEA catalysts were found to be more active per total adsorbing site than Pt/HUSY, which is proposed to be due to an increase in the efficiency of spilled-over hydrogen diffusion related to the proximity between Pt clusters and acid sites. The intervention of Lewis acid sites to a greater extent than that measured by pyridine adsorption may also contribute to this higher activity of the Pt/HBEA catalysts. These results reinforce the importance of model reactions as a closer probe of the relevant catalyst properties under reaction conditions.
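For clarity, the comparison metric quoted above can be written explicitly (notation inferred from the abstract):

\frac{n_{\mathrm{Pt}}}{n_{\mathrm{Pt}} + n_{A}},

where n_Pt is the number of accessible Pt atoms (from H2–O2 titration) and n_A the number of acid sites (from pyridine adsorption); the activity per total adsorbing site is then the measured rate divided by (n_Pt + n_A), and the study reports that this normalized activity increases steadily with the Pt fraction.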
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13], whereas the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]; the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
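In the standard notation for the linear mixing model just described (a conventional formulation, consistent with the chapter's setting):

\mathbf{x} \;=\; \mathbf{M}\,\mathbf{s} + \mathbf{n}, \qquad s_i \ge 0, \qquad \sum_{i=1}^{p} s_i = 1,

where x is the L-dimensional observed spectral vector, M is the L × p mixing matrix whose columns are the p endmember signatures, s is the vector of abundance fractions, and n accounts for system noise; the nonnegativity and full-additivity constraints on s are precisely the source of the statistical dependence discussed next.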
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular; a minimal PCA sketch is given after this passage. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on the performance of ICA and IFA. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
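A minimal sketch of the PCA/SVD dimensionality-reduction step referred to above, assuming the hyperspectral cube has been flattened to an (n_pixels × n_bands) matrix; the function and variable names are illustrative, not taken from the chapter:

import numpy as np

def pca_reduce(X, k):
    """Project observed spectra onto the top-k principal components.

    X : (n_pixels, n_bands) array of spectral vectors.
    Returns the (n_pixels, k) scores and the (n_bands, k) basis.
    """
    Xc = X - X.mean(axis=0)            # remove the mean spectrum
    # Economy-size SVD of the centered data: Xc = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k].T                   # top-k right singular vectors
    return Xc @ basis, basis

# Toy usage: 10000 pixels, 224 bands, reduced to a 5-dimensional subspace
X = np.random.rand(10000, 224)
scores, basis = pca_reduce(X, 5)
print(scores.shape, basis.shape)       # (10000, 5) (224, 5)

In practice, k is chosen near the estimated number of endmembers, so that subsequent unmixing operates in the signal subspace rather than in the full band space.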
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
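For concreteness, the standard Dirichlet density underlying the proposed prior (a textbook formula, not quoted from the chapter) is

p(\mathbf{s}\mid\boldsymbol{\theta}) \;=\; \frac{\Gamma\!\left(\sum_{i=1}^{p}\theta_i\right)}{\prod_{i=1}^{p}\Gamma(\theta_i)}\;\prod_{i=1}^{p} s_i^{\,\theta_i-1}, \qquad s_i \ge 0, \quad \sum_{i=1}^{p} s_i = 1,

so positivity and full additivity of the abundance fractions hold by construction; a mixture of such densities therefore provides a flexible prior on the simplex without requiring pure pixels in the data.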