984 results for ELECTROMAGNETIC DECAYS
Abstract:
In this paper, we present a multilayer device based on a-Si:H/a-SiC:H that operates as a photodetector and optical filter. Such a device is relevant for protein detection in Fluorescence Resonance Energy Transfer (FRET) measurements, a method that demands the detection of fluorescent signals located in specific wavelength bands in the visible part of the electromagnetic spectrum. The device operates in the visible range with a selective sensitivity dependent on electrical and optical bias. Several nanosensors were tested with a commercial spectrophotometer to assess the performance of FRET signals using glucose solutions of different concentrations. The proposed device was used to demonstrate the possibility of FRET signal detection using visible signals of similar wavelength and intensity. The device sensitivity was tuned to enhance the wavelength band of interest using steady-state optical bias at 400 nm. Results show the ability of the device to detect signals in this range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
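As background on the FRET method itself (a textbook relation, not a formula from the paper), transfer efficiency falls off steeply with donor-acceptor distance; a minimal sketch with an assumed Förster radius:

```python
# Minimal sketch of the standard Foerster relation for FRET efficiency,
# E = 1 / (1 + (r/R0)**6) -- textbook background, not a formula from the
# paper. The Foerster radius R0 and the distances are assumed values.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Energy-transfer efficiency at donor-acceptor distance r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 5.0, 8.0):
    print(f"r = {r:.1f} nm -> E = {fret_efficiency(r):.3f}")
```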
Abstract:
Master's degree in Radiation Applied to Health Technologies
Abstract:
The Higgs boson recently discovered at the Large Hadron Collider has been shown to have couplings to the remaining particles well within what is predicted by the Standard Model. The search for other new heavy scalar states has so far proved fruitless, imposing constraints on the existence of new scalar particles. However, it is still possible that any existing heavy scalars would preferentially decay to final states involving the light Higgs boson, thus evading the current LHC bounds on heavy scalar states. Moreover, decays of the heavy scalars could increase the number of light Higgs bosons being produced. Since the number of light Higgs bosons decaying to Standard Model particles is within the predicted range, this could mean that part of the light Higgs bosons have their origin in heavy scalar decays. This situation would occur if the light Higgs couplings to Standard Model particles were reduced by a compensating amount. Using a very simple extension of the SM, the two-Higgs-doublet model, we show that we could in fact already be observing the effect of the heavy scalar states even if all results related to the Higgs are in excellent agreement with the Standard Model predictions.
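A toy arithmetic illustration of this compensation argument (hypothetical rates, not the paper's 2HDM calculation): a reduced coupling $\kappa^2 < 1$ in direct production can be offset by extra light Higgs bosons from $H \to hh$ decays, leaving the total yield SM-like.

```python
# Toy illustration (not the paper's calculation) of the compensation
# argument above: direct light-Higgs production scales as kappa**2,
# while each heavy-scalar event with H -> hh adds two light Higgses.
# All rates are hypothetical numbers in arbitrary units.

sigma_sm_h = 50.0    # assumed direct h production rate for kappa = 1
sigma_H    = 5.0     # assumed heavy-scalar production rate
br_H_to_hh = 0.5     # assumed branching ratio for H -> hh

# Choose kappa^2 so that direct plus heavy-scalar-fed production
# reproduces the SM-like total yield:
kappa2 = (sigma_sm_h - 2 * sigma_H * br_H_to_hh) / sigma_sm_h
total = kappa2 * sigma_sm_h + 2 * sigma_H * br_H_to_hh
print(f"kappa^2 = {kappa2:.2f}, total h yield = {total:.1f} (SM-like: {sigma_sm_h:.1f})")
```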
Abstract:
The latest LHC data confirmed the existence of a Higgs-like particle and provided interesting measurements of its decays into $\gamma\gamma$, $ZZ^*$, $WW^*$, $\tau^+\tau^-$, and $b\bar{b}$. It is expected that a decay into $Z\gamma$ might be measured in the next LHC run, for which an upper bound already exists. The Higgs-like particle could be a mixture of scalar with a relatively large component of pseudoscalar. We compute the decay of such a mixed state into $Z\gamma$, and we study its properties in the context of the complex two-Higgs-doublet model, analysing the effect of the current measurements on the four versions of this model. We show that a measurement of the $h \to Z\gamma$ rate at a level consistent with the SM can be used to place interesting constraints on the pseudoscalar component. We also comment on the issue of a wrong-sign Yukawa coupling for the bottom quark in Type II models.
Abstract:
We look for minimal chiral sets of fermions beyond the Standard Model that are anomaly free and, simultaneously, vectorlike with respect to color SU(3) and electromagnetic U(1). We then study whether adding such particles to the Standard Model particle content allows for the unification of the gauge couplings at a high energy scale, above $5.0 \times 10^{15}$ GeV, so as to be safely consistent with proton decay bounds. The possibility of unification at the string scale is also considered. Inspired by grand unified theories, we also search for minimal chiral fermion sets that belong to SU(5) multiplets, restricted to representations up to dimension 50. It is shown that, in various cases, it is possible to achieve gauge unification provided that some of the extra fermions decouple at relatively high intermediate scales.
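The unification test behind such an analysis is typically the standard one-loop running $\alpha_i^{-1}(\mu) = \alpha_i^{-1}(M_Z) - \frac{b_i}{2\pi}\ln(\mu/M_Z)$; a minimal sketch (the extra-fermion contribution $\Delta b_i$ and its decoupling threshold are hypothetical example values):

```python
import numpy as np

# Standard one-loop running of the inverse gauge couplings,
# alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ), as commonly
# used in unification studies; a sketch, not the paper's full analysis.
# b_i are SM one-loop coefficients in GUT normalization; Delta b and its
# threshold below are hypothetical example values for extra fermions.

MZ = 91.19                                    # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.45])   # approx. U(1)_Y (GUT norm.), SU(2)_L, SU(3)_c
b_sm = np.array([41 / 10, -19 / 6, -7.0])

def alpha_inv(mu, delta_b=np.zeros(3), mu_thresh=MZ):
    """Run the couplings to scale mu; extra matter contributes above mu_thresh."""
    run = alpha_inv_MZ - b_sm / (2 * np.pi) * np.log(mu / MZ)
    if mu > mu_thresh:
        run -= delta_b / (2 * np.pi) * np.log(mu / mu_thresh)
    return run

print(alpha_inv(5.0e15))                                   # SM alone: no common value
print(alpha_inv(5.0e15, np.array([0.0, 2.0, 1.0]), 1e3))   # hypothetical extra fermions
```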
Abstract:
We present the modeling efforts on antenna design and frequency selection to monitor brain temperature during prolonged surgery using noninvasive microwave radiometry. A tapered log-spiral antenna design is chosen for its wideband characteristics, which allow higher power collection from deep brain. Parametric analysis with the software HFSS is used to optimize antenna performance for deep brain temperature sensing. Radiometric antenna efficiency (η) is evaluated in terms of the ratio of power collected from the brain to the total power received by the antenna. Anatomical information extracted from several adult computed tomography scans is used to establish design parameters for constructing an accurate layered 3-D tissue phantom. This head phantom includes separate brain and scalp regions, with tissue-equivalent liquids circulating at independent temperatures on either side of an intact skull. The optimized frequency band is 1.1-1.6 GHz, producing an average antenna efficiency of 50.3% for a two-turn log-spiral antenna. The entire sensor package is contained in a lightweight and low-profile assembly, 2.8 cm in diameter by 1.5 cm high, that can be held in place over the skin with an electromagnetic interference shielding adhesive patch. The calculated radiometric equivalent brain temperature tracks within 0.4 °C of the measured brain phantom temperature when the brain phantom is lowered by 10 °C and then returned to the original temperature (37 °C) over a 4.6-h experiment. The numerical and experimental results demonstrate that the optimized 2.5-cm log-spiral antenna is well suited for noninvasive radiometric sensing of deep brain temperature.
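The efficiency definition above, combined with a simple weighted mix of tissue temperatures, gives a rough picture of what the radiometer reads; a minimal sketch (the two-term weighting is a common radiometry approximation, not necessarily the authors' exact model):

```python
# Sketch of the efficiency definition used above and of a simple
# eta-weighted radiometric temperature. eta and the brain temperature
# follow the abstract; the scalp temperature is an assumed example value.

def antenna_efficiency(p_brain_W: float, p_total_W: float) -> float:
    """eta = power collected from the brain / total power received."""
    return p_brain_W / p_total_W

def equivalent_temperature(eta: float, t_brain_C: float, t_scalp_C: float) -> float:
    """Brightness temperature seen by the radiometer as an eta-weighted mix."""
    return eta * t_brain_C + (1.0 - eta) * t_scalp_C

eta = 0.503                                  # average efficiency from the abstract
print(equivalent_temperature(eta, t_brain_C=37.0, t_scalp_C=33.0))   # ~35.0 C
```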
Abstract:
We discuss theoretical and phenomenological aspects of two-Higgs-doublet extensions of the Standard Model. In general, these extensions have scalar-mediated flavour-changing neutral currents, which are strongly constrained by experiment. Various strategies are discussed to control these flavour-changing scalar currents, and their phenomenological consequences are analysed. In particular, scenarios with natural flavour conservation are investigated, including the so-called type I and type II models as well as lepton-specific and inert models. Type III models are then discussed, where scalar flavour-changing neutral currents are present at tree level but are suppressed either by a specific ansatz for the Yukawa couplings or by the introduction of family symmetries leading to a natural suppression mechanism. We also consider the phenomenology of charged scalars in these models. Next we turn to the role of symmetries in the scalar sector. We discuss the six symmetry-constrained scalar potentials and their extension into the fermion sector. The vacuum structure of the scalar potential is analysed, including a study of the vacuum stability conditions on the potential, and the renormalization-group improvement of these conditions is also presented. The stability of the tree-level minimum of the scalar potential, in connection with electric charge conservation and its behaviour under CP, is analysed. The question of CP violation is addressed in detail, including the cases of explicit CP violation and spontaneous CP violation. We present a detailed study of weak-basis invariants which are odd under CP. These invariants allow for the possibility of studying the CP properties of any two-Higgs-doublet model in an arbitrary Higgs basis. A careful study of spontaneous CP violation is presented, including an analysis of the conditions which have to be satisfied in order for a vacuum to violate CP. We present minimal models of CP violation where the vacuum phase is sufficient to generate a complex CKM matrix, which is at present a requirement for any realistic model of spontaneous CP violation.
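For concreteness, the standard tree-level bounded-from-below conditions for the Z2-symmetric two-Higgs-doublet potential, the kind of constraint the vacuum-stability analysis above refers to, can be checked mechanically (a minimal sketch; the λ values are example numbers, not fitted points):

```python
import math

# Standard tree-level bounded-from-below conditions for the (softly
# broken) Z2-symmetric two-Higgs-doublet potential -- a minimal check of
# the kind the review's vacuum-stability analysis refers to. The lambda
# values passed below are example numbers for illustration.

def is_bounded_from_below(l1, l2, l3, l4, l5):
    return (l1 > 0 and l2 > 0
            and l3 + math.sqrt(l1 * l2) > 0
            and l3 + l4 - abs(l5) + math.sqrt(l1 * l2) > 0)

print(is_bounded_from_below(0.26, 0.26, 0.5, -0.3, 0.1))   # True: stable potential
print(is_bounded_from_below(0.26, 0.26, -2.0, 0.0, 0.0))   # False: unbounded direction
```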
Abstract:
To increase the amount of logic available to users of SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce costs, making their use more attractive. However, these technological improvements also make FPGAs particularly vulnerable to configuration memory bit-flips caused by power fluctuations, strong electromagnetic fields, and radiation. This issue is particularly sensitive because of the increasing number of configuration memory cells needed to define FPGA functionality. A short survey of the most recent publications is presented to support the options assumed during the definition of a framework for implementing circuits immune to bit-flip induction mechanisms in memory cells, based on a customized redundant infrastructure and on a detection-and-fix controller.
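A common building block for such redundant infrastructures is triple modular redundancy (TMR) with a majority voter; the following is a generic sketch of that idea, not the paper's actual framework:

```python
# Minimal majority-voter sketch for triple modular redundancy (TMR), a
# common bit-flip mitigation in SRAM-based FPGAs. This illustrates the
# general idea only; it is not the framework proposed in the paper.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies."""
    return (a & b) | (a & c) | (b & c)

def detect_mismatch(a: int, b: int, c: int) -> bool:
    """A detection-and-fix controller would flag any disagreement."""
    return not (a == b == c)

copy_a, copy_b, copy_c = 0b1011, 0b1011, 0b1111   # one copy hit by a bit-flip
print(bin(majority(copy_a, copy_b, copy_c)))       # 0b1011: correct value restored
print(detect_mismatch(copy_a, copy_b, copy_c))     # True: repair should be triggered
```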
Abstract:
The Maxwell equations constitute a formalism for the development of models describing electromagnetic phenomena. The four Maxwell laws have been adopted successfully in many applications and involve only integer-order differential calculus. Recently, a closer look at transmission lines, electrical motors, and transformers, which reveal the so-called skin effect, has motivated a new perspective towards replacing classical models with fractional-order mathematical descriptions. Bearing these facts in mind, this paper addresses the concept of a static fractional electric potential. The fractional potential was suggested some years ago; however, the idea was not fully explored and practical methods of implementation were not proposed. In this line of thought, this paper develops a new approximation algorithm for establishing the fractional-order electric potential and analyzes its characteristics.
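As generic background (not the authors' specific algorithm), fractional-order operators are commonly realized numerically with the Grünwald-Letnikov discretization; a minimal sketch with a standard sanity check:

```python
import math

# Gruenwald-Letnikov (GL) discretization of a fractional derivative of
# order alpha, the standard numerical realization of fractional-order
# operators. Generic background for the paper's topic, not the authors'
# approximation algorithm for the fractional potential.

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    """Truncated GL sum with lower terminal 0 (f is taken as 0 for t < 0)."""
    total, w = 0.0, 1.0                       # w_0 = 1
    for k in range(int(x / h) + 1):
        total += w * f(x - k * h)
        w *= 1.0 - (alpha + 1.0) / (k + 1)    # recurrence for (-1)^k * C(alpha, k)
    return total / h**alpha

# Sanity check: the half derivative of f(t) = t is 2*sqrt(t/pi).
print(gl_fractional_derivative(lambda t: t, 1.0, 0.5))   # ~1.128
print(2 * math.sqrt(1.0 / math.pi))                      # 1.1283791...
```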
Abstract:
The associated production of a Higgs boson and a top-quark pair, $t\bar{t}H$, in proton-proton collisions is addressed in this paper for a center-of-mass energy of 13 TeV at the LHC. Dileptonic final states of $t\bar{t}H$ events with two oppositely charged leptons and four jets from the decays $t \to bW^+ \to b\ell^+\nu_\ell$, $\bar{t} \to \bar{b}W^- \to \bar{b}\ell^-\bar{\nu}_\ell$, and $h \to b\bar{b}$ are used. Signal events, generated with MadGraph5_aMC@NLO, are fully reconstructed by applying a kinematic fit. New angular distributions of the decay products as well as angular asymmetries are explored in order to improve the discrimination of $t\bar{t}H$ signal events over the dominant irreducible background contribution, $t\bar{t}b\bar{b}$. Even after the full kinematic fit reconstruction of the events, the proposed angular distributions and asymmetries remain quite different for the $t\bar{t}H$ signal and the dominant background $t\bar{t}b\bar{b}$.
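Angular asymmetries of the kind referred to above are typically counting ratios, $A = [N(\cos\theta > 0) - N(\cos\theta < 0)]/N_{\mathrm{total}}$; a toy sketch (the event shapes below are hypothetical stand-ins, not the paper's distributions):

```python
import numpy as np

# Generic angular-asymmetry estimator of the kind explored in the paper.
# The toy shapes stand in for reconstructed ttH and ttbb events and are
# purely hypothetical, for illustration only.

rng = np.random.default_rng(0)

def asymmetry(cos_theta: np.ndarray) -> float:
    n_plus = np.count_nonzero(cos_theta > 0)
    n_minus = np.count_nonzero(cos_theta < 0)
    return (n_plus - n_minus) / (n_plus + n_minus)

def sample_linear(n: int, a: float) -> np.ndarray:
    """Rejection-sample cos(theta) from a pdf proportional to 1 + a*cos(theta)."""
    out = np.empty(0)
    while out.size < n:
        c = rng.uniform(-1, 1, n)
        keep = rng.uniform(0, 1, n) < (1 + a * c) / (1 + abs(a))
        out = np.concatenate([out, c[keep]])
    return out[:n]

signal = sample_linear(10_000, a=0.3)        # hypothetical signal shape, A ~ +0.15
background = sample_linear(10_000, a=-0.1)   # hypothetical background shape, A ~ -0.05
print(f"A_signal = {asymmetry(signal):+.3f}, A_background = {asymmetry(background):+.3f}")
```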
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Mechanical Engineering
Abstract:
Dissertation for the degree of Master in Electrical Engineering, Energy branch
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7].

Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
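A minimal numerical sketch of the linear mixing model just described (synthetic signatures and Dirichlet-distributed abundances, assumed for illustration): each pixel is a convex combination of endmember signatures plus noise, so the noise-free data lie in a simplex.

```python
import numpy as np

# Minimal sketch of the linear mixing model described above: each pixel
# is a convex combination of endmember signatures plus noise. The
# signatures and abundances below are synthetic, for illustration only.

rng = np.random.default_rng(0)
n_bands, n_endmembers, n_pixels = 200, 3, 1000

M = rng.random((n_bands, n_endmembers))               # endmember signature matrix
A = rng.dirichlet(np.ones(n_endmembers), n_pixels).T  # abundances: nonnegative, sum to 1
noise = 0.01 * rng.standard_normal((n_bands, n_pixels))

X = M @ A + noise                                     # observed mixed pixels
print(X.shape)                                        # (200, 1000)
print(A.sum(axis=0)[:5])                              # each column sums to 1 (simplex)
```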
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of $n$ data points in a $d$-dimensional space with a computational complexity of $O(n^{\lfloor d/2 \rfloor + 1})$, where $\lfloor x \rfloor$ is the largest integer less than or equal to $x$ and $n$ is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in $p$ spectral dimensions, the $p$-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
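The skewer-projection scoring at the heart of PPI can be sketched as follows (synthetic data; the MNF preprocessing and score thresholding are omitted, so this illustrates the idea rather than reproducing the published algorithm):

```python
import numpy as np

# Sketch of the skewer-projection scoring behind PPI: project every
# spectral vector onto many random directions ("skewers") and count how
# often each pixel is an extreme. Synthetic data; MNF preprocessing and
# score thresholding are omitted from this illustration.

rng = np.random.default_rng(1)
spectra = rng.random((5000, 200))                  # 5000 pixels x 200 bands (synthetic)

def ppi_scores(x: np.ndarray, n_skewers: int = 1000) -> np.ndarray:
    skewers = rng.standard_normal((n_skewers, x.shape[1]))
    proj = x @ skewers.T                           # (pixels, skewers)
    extremes = np.concatenate([proj.argmax(axis=0), proj.argmin(axis=0)])
    scores = np.zeros(len(x), dtype=int)
    np.add.at(scores, extremes, 1)                 # cumulative extreme counts
    return scores

scores = ppi_scores(spectra)
print("purest pixel candidates:", np.argsort(scores)[-5:])
```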
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection (a minimal sketch of this step is given after the section outline below). The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
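The orthogonal-projection step mentioned above can be sketched in a few lines, in the spirit of VCA and under the pure-pixel assumption (a simplified illustration, not the published algorithm):

```python
import numpy as np

# Rough sketch of the projection step described above, in the spirit of
# VCA under the pure-pixel assumption: repeatedly project the data onto
# a direction orthogonal to the span of the endmembers found so far and
# take the extreme pixel as the next endmember. A simplified
# illustration, not the published VCA algorithm.

def extract_endmembers(X: np.ndarray, p: int, seed: int = 0) -> list:
    """X: (bands, pixels) data matrix. Returns indices of p candidate pure pixels."""
    rng = np.random.default_rng(seed)
    indices = []
    E = np.zeros((X.shape[0], 0))              # endmembers found so far
    for _ in range(p):
        w = rng.standard_normal(X.shape[0])    # random direction
        if E.shape[1] > 0:                     # make it orthogonal to span(E)
            Q, _ = np.linalg.qr(E)
            w -= Q @ (Q.T @ w)
        idx = int(np.argmax(np.abs(w @ X)))    # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, X[:, idx]])
    return indices

# Synthetic linear mixture with 3 endmembers and forced pure pixels:
rng = np.random.default_rng(1)
M = rng.random((50, 3))                        # endmember signatures
A = rng.dirichlet(np.ones(3), 500).T           # abundances on the simplex
A[:, :3] = np.eye(3)                           # force three pure pixels
X = M @ A
print(extract_endmembers(X, p=3))              # recovers indices 0, 1, 2 (in some order)
```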
Abstract:
Rail transport is a mode of transport in which movement takes place on railway tracks, carrying, among other things, people and freight. It is one of the oldest modes of transport, and its origin is directly linked to the First Industrial Revolution, the historical event that took place in Europe at the end of the 18th century and the beginning of the 19th century. A railway network is a unique system from the point of view of the use of electric traction, as well as in the way it is embedded in society, being a safe, fast means of transport widely used by the population. The power supply networks (transmission and distribution) and the high-speed network have dictated new solutions for railway electric power supply, contributing to its technical evolution, to safety, and also to electromagnetic compatibility, in the sense of establishing criteria for controlling and preventing the undesirable effects caused by magnetic interference. The present work aims to analyse and study, from a technical standpoint, how the networks that feed electric traction vehicles behave, from the substations to the supply of the locomotives. Given the complexity of this analysis, more or less complex simulation tools are required; in the present work, MATLAB™ was used, namely MATLAB™/Simulink. The main electrical quantities were analysed in distinct scenarios for the 1x25 kV and 2x25 kV catenary power supply systems.
Abstract:
A Thesis submitted for the co-tutelle degree of Doctor in Physics at Universidade Nova de Lisboa and Université Pierre et Marie Curie