32 results for Constrained Local Models, Non-rigid Face Alignment, Active Appearance Models
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Final Master's project submitted for the degree of Master in Mechanical Engineering
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON) - Nov 10-14, 2013
Abstract:
The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.
Abstract:
We write down the renormalization-group equations for the Yukawa-coupling matrices in a general multi-Higgs-doublet model. We then assume that the matrices of the Yukawa couplings of the various Higgs doublets to right-handed fermions of fixed quantum numbers are all proportional to each other. We demonstrate that, in the case of the two-Higgs-doublet model, this proportionality is preserved by the renormalization-group running only in the cases of the standard type-I, II, X, and Y models. We furthermore show that a similar result holds even when there are more than two Higgs doublets: the Yukawa-coupling matrices to fermions of a given electric charge remain proportional under the renormalization-group running if and only if there is a basis for the Higgs doublets in which all the fermions of a given electric charge couple to only one Higgs doublet.
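The proportionality assumption above can be stated compactly. The notation below (Yukawa matrices \(\Gamma_a\), one per doublet, coupling to the fermions of a fixed electric charge) is introduced here for illustration and is not taken from the record:

```latex
% Proportionality ansatz for the Yukawa matrices of the N doublets:
\Gamma_a = c_a\,\Gamma \,, \qquad a = 1, \dots, N .
% Claimed RG-stability condition: there exists a basis for the Higgs
% doublets in which only one of them, say \Phi_1, couples to the
% fermions of that electric charge,
\Gamma_1 \neq 0 \,, \qquad \Gamma_2 = \dots = \Gamma_N = 0 \,,
% in which case the proportionality is preserved under RG running.
```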
Abstract:
We analyse the possibility that, in two-Higgs-doublet models, one or more of the Higgs couplings to fermions or to gauge bosons change sign relative to the corresponding Standard Model Higgs couplings. Possible sign changes in the coupling of a neutral scalar to charged ones are also discussed. These wrong signs can have important physical consequences, manifesting themselves in Higgs production via gluon fusion or in Higgs decay into two gluons or into two photons. We consider all possible wrong-sign scenarios, as well as the symmetric limit, in all possible Yukawa implementations of the two-Higgs-doublet model, for two different possibilities: the observed Higgs boson is the lightest CP-even scalar, or the heaviest one. We also analyse thoroughly the impact of the currently available LHC data on such scenarios. With all 8 TeV data analysed, all wrong-sign scenarios are allowed in all Yukawa types, even at the 1 sigma level. However, we show that B-physics constraints are crucial in excluding the wrong-sign scenarios in the case where tan beta is below 1. We also discuss the future prospects for probing the wrong-sign scenarios at the next LHC run. Finally, we present a scenario where the alignment limit could be excluded due to non-decoupling, in the case where the heavy CP-even Higgs is the one discovered at the LHC.
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry, when applied in its simplest form. This reduction is achieved through a hierarchical strategy which gradually discards candidate locations and by exploring two simplifying assumptions about the training data. The potential of the non-quantized representation is exploited by resorting to the entropy-discriminativity relation. The idea behind this approach is that the non-quantized representation facilitates the assessment of the distinctiveness of features, through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to the entropy measure. Experimental results support the effectiveness of this approach, as well as the validity of the proposed computation reduction methods.
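The entropy-discriminativity idea above can be sketched as follows. This is a minimal illustration, not the paper's code: the exact weighting function, variable names, and the normalised-similarity construction are assumptions.

```python
import numpy as np

def entropy_weights(similarities, eps=1e-12):
    """similarities: (n_features, n_places) raw descriptor similarity scores.
    A feature whose similarity mass is spread uniformly over places is
    uninformative (high entropy); a feature concentrated on few places is
    distinctive (low entropy) and gets a larger weight."""
    p = similarities / (similarities.sum(axis=1, keepdims=True) + eps)
    entropy = -(p * np.log(p + eps)).sum(axis=1)   # per-feature entropy
    max_ent = np.log(similarities.shape[1])        # entropy of the uniform case
    return 1.0 - entropy / max_ent                 # ~1 = most distinctive

def localize(similarities):
    """Score each place by the entropy-weighted sum of feature similarities."""
    w = entropy_weights(similarities)
    return int((w[:, None] * similarities).sum(axis=0).argmax())
```

Modulating feature importance this way is only possible because the raw (non-quantized) descriptors retain the similarity distribution that the entropy measure needs.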
Abstract:
We directly visualize the response of nematic liquid crystal drops of toroidal topology threaded in cellulosic fibers, suspended in air, to an AC electric field and at different temperatures over the N-I transition. This new liquid crystal system can exhibit non-trivial point defects, which can be energetically unstable against expanding into ring defects depending on the fiber constraining geometries. The director anchoring tangentially near the fiber surface and homeotropically at the air interface makes a hybrid shell distribution that in turn causes a ring disclination line around the main axis of the fiber at the center of the droplet. Upon application of an electric field, E, the disclination ring first expands and moves along the fiber main axis, followed by the appearance of a stable "spherical particle" object orbiting around the fiber at the center of the liquid crystal drop. The rotation speed of this particle was found to vary linearly with the applied voltage. This constrained liquid crystal geometry seems to meet the essential requirements in which soliton-like deformations can develop and exhibit stable orbiting in three dimensions upon application of an external electric field. On changing the temperature the system remains stable and allows the study of the defect evolution near the nematic-isotropic transition, showing qualitatively different behaviour on cooling and heating processes. The necklaces of such liquid crystal drops constitute excellent systems for the study of topological defects and their evolution and open new perspectives for application in microelectronics and photonics.
Abstract:
In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, in this paper we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the threshold, as well as the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.
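The non-trained combiners compared above admit a very short sketch. This is an illustrative reconstruction under my own conventions (per-classifier score matrices, equal-length weight vectors), not the paper's implementation:

```python
import numpy as np

def sum_rule(posteriors):
    """posteriors: (n_classifiers, n_places) per-classifier scores; average."""
    return posteriors.mean(axis=0)

def product_rule(posteriors, eps=1e-12):
    """Multiply scores; a single low score can veto a place."""
    return np.prod(posteriors + eps, axis=0)

def weighted_sum_rule(posteriors, weights):
    """Weighted modification of the sum rule."""
    weights = np.asarray(weights, dtype=float)
    return weights @ posteriors / weights.sum()

def vote(posteriors):
    """Each classifier votes for its top-scoring place."""
    top = posteriors.argmax(axis=1)
    return np.bincount(top, minlength=posteriors.shape[1])
```

On the same inputs the rules can disagree: the product rule punishes a place that any single classifier scores low, which is one reason the sum rule tends to be the more robust combiner.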
Abstract:
This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework proposes the use of competitive learning for curve estimation, i.e., a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the use of the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving the model performance in the presence of concavities and allowing close surfaces to be discriminated. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound images).
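A minimal winner-take-all sketch of the competitive-learning idea, shown in 2D for a curve rather than a surface; the initialization, learning rate, and epoch count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def competitive_fit(data, n_units=8, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: each data point attracts only
    its nearest model unit, so units spread out to cover the data, including
    concavities where a snake's smoothness prior would resist bending."""
    rng = np.random.default_rng(seed)
    units = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(((units - x) ** 2).sum(axis=1))
            units[winner] += lr * (x - units[winner])  # only the winner moves
    return units
```

Because each unit adapts only to the data it wins, units are pulled into concave regions instead of being held back by neighbour interactions, which is the behaviour the comparison with snakes refers to.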
Abstract:
Dissertation presented to the Universidade de Cabo Verde and the Escola Superior de Educação de Lisboa for the degree of Master in Educational Sciences - specialization in Special Education
Abstract:
A new active-contraction visco-elastic numerical model of the pelvic floor (skeletal) muscle is presented. Our model includes all elements that represent the muscle constitutive behavior, contraction and relaxation. In contrast with previous models, the activation function can be null. The complete equations are shown and exactly linearized. Small verification and validation tests are performed, and the pelvis is modeled using data from the intra-abdominal pressure tests.
Abstract:
Since collaborative networked organisations are usually formed by independent and heterogeneous entities, it is natural that each member holds his own set of values, and that conflicts among partners might emerge because of some misalignment of values. In contrast, it is often stated in literature that the alignment between the value systems of members involved in collaborative processes is a prerequisite for successful co-working. As a result, the issue of core value alignment in collaborative networks started to attract attention. However, methods to analyse such alignment are lacking mainly because the concept of 'alignment' in this context is still ill defined and shows a multifaceted nature. As a contribution to the area, this article introduces an approach based on causal models and graph theory for the analysis of core value alignment in collaborative networks. The potential application of the approach is then discussed in the virtual organisations' breeding environment context.
Abstract:
In this review paper, different designs based on stacked p-i'-n-p-i-n heterojunctions are presented and compared with the single p-i-n sensing structures. The imagers utilise self-field-induced depletion layers for light detection and a modulated laser beam for sequential readout. The effects of the sensing element structure, cell configuration (single or tandem), and light source properties (intensity and wavelength) are correlated with the sensor output characteristics (light-to-dark sensitivity, spatial resolution, linearity and S/N ratio). The readout frequency is optimized, showing that scan speeds of up to 10^4 lines per second can be achieved without degradation of the resolution. Multilayered p-i'-n-p-i-n heterostructures can also be used as wavelength-division multiplexing/demultiplexing devices in the visible range. Here the sensor element faces the modulated light from different input colour channels, each with a specific wavelength and bit rate. By reading out the photocurrent at the appropriate applied bias, the information is multiplexed or demultiplexed and can be transmitted or recovered. Electrical models are presented to support the sensing methodologies.
Abstract:
We are concerned with providing more empirical evidence on forecast failure, developing forecast models, and examining the impact of events such as audit reports. A joint consideration of classic financial ratios and relevant external indicators leads us to build a basic prediction model focused on non-financial Galician SMEs. The explanatory variables are financial indicators that are relevant from the viewpoint of financial logic and financial failure theory. The paper explores three mathematical models: discriminant analysis, Logit, and linear multivariate regression. We conclude that, even though both offer high explanatory and predictive abilities, the Logit and MDA models should be used and interpreted jointly.
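The Logit model mentioned above can be sketched with plain gradient descent on toy financial-ratio features. The data, feature choice, and hyperparameters below are illustrative assumptions, not the paper's dataset or estimation procedure:

```python
import numpy as np

def fit_logit(X, y, lr=0.5, epochs=2000):
    """Logistic regression ('Logit'): P(failure) = sigmoid(w . x),
    fitted by gradient descent on the log-loss."""
    X = np.c_[np.ones(len(X)), X]         # prepend an intercept column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted failure probabilities
        w -= lr * X.T @ (p - y) / len(y)  # gradient of the mean log-loss
    return w

def predict_failure(w, X):
    X = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
```

Unlike MDA, the Logit model makes no normality assumption on the ratios and yields a probability of failure directly, which is one reason the two are usefully interpreted side by side.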
Abstract:
We study the implications for two-Higgs-doublet models of the recent announcement at the LHC giving a tantalizing hint for a Higgs boson of mass 125 GeV decaying into two photons. We require that the experimental result be within a factor of 2 of the theoretical Standard Model prediction, and analyze the type I and type II models as well as the lepton-specific and flipped models, subject to this requirement. It is assumed that there is no new physics other than two Higgs doublets. In all of the models, we display the allowed region of parameter space taking the recent LHC announcement at face value, and we analyze the W+W-, ZZ, b b-bar, and tau+ tau- expectations in these allowed regions. Throughout the entire range of parameter space allowed by the gamma gamma constraint, the numbers of events for Higgs decays into WW, ZZ, and b b-bar are not changed from the Standard Model by more than a factor of 2. In contrast, in the lepton-specific model, decays to tau+ tau- are very sensitive across the entire gamma gamma-allowed region.