31 results for Discrete Data Models



Relevance: 100.00%

Abstract:

In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
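
As an illustration of the kind of criteria involved (not the paper's proposed criteria, whose exact form is not given in this abstract), the following sketch computes two of the baseline relevance measures mentioned above, mutual information and the Fisher ratio, from the bin-class histogram of a single discretized feature; the data and names are hypothetical.

```python
import numpy as np

def bin_class_histogram(x_disc, y, n_bins, n_classes):
    """Counts of (bin, class) co-occurrences for one discretized feature."""
    hist = np.zeros((n_bins, n_classes))
    for b, c in zip(x_disc, y):
        hist[b, c] += 1
    return hist

def mutual_information(hist):
    """Mutual information I(X;Y) estimated from the joint bin-class counts."""
    p_xy = hist / hist.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def fisher_ratio(x, y):
    """Fisher ratio of a (possibly discretized) feature in a two-class problem."""
    x0, x1 = x[y == 0], x[y == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

# Example: a 6-bin discretized feature with binary labels
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
x_disc = np.clip(y * 2 + rng.integers(0, 4, 200), 0, 5)
h = bin_class_histogram(x_disc, y, n_bins=6, n_classes=2)
print(mutual_information(h), fisher_ratio(x_disc.astype(float), y))
```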

Relevance: 90.00%

Abstract:

We consider the quark sector of theories containing three scalar SU(2)_L doublets in the triplet representation of A_4 (or S_4) and three generations of quarks in arbitrary A_4 (or S_4) representations. We show that, for all possible choices of quark field representations and for all possible alignments of the Higgs vacuum expectation values that can constitute global minima of the scalar potential, it is not possible to obtain simultaneously nonvanishing quark masses and a nonvanishing CP-violating phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix. As a result, in this minimal form, models with three scalar fields in the triplet representation of A_4 or S_4 cannot be extended to the quark sector in a way consistent with experiment. DOI: 10.1103/PhysRevD.87.055010.
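
For context (a standard fact, not taken from the paper itself): the CP-violating phase of the CKM matrix is rephasing-invariantly encoded in the Jarlskog invariant, and its observability is tied to non-degenerate, non-vanishing quark masses,

$$J=\operatorname{Im}\!\left(V_{us}V_{cb}V_{ub}^{*}V_{cs}^{*}\right),\qquad \det\!\left[M_{u}M_{u}^{\dagger},\,M_{d}M_{d}^{\dagger}\right]\;\propto\; i\,J\,(m_{t}^{2}-m_{c}^{2})(m_{t}^{2}-m_{u}^{2})(m_{c}^{2}-m_{u}^{2})(m_{b}^{2}-m_{s}^{2})(m_{b}^{2}-m_{d}^{2})(m_{s}^{2}-m_{d}^{2}),$$

so a vanishing phase (J = 0) or any pair of degenerate masses removes CP violation from the quark sector, which is the combination the paper shows cannot be avoided in these models.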

Relevance: 80.00%

Abstract:

Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features.
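
A minimal sketch of the filter-style incremental idea, under assumptions of my own (equal-width binning and mutual information with the class as the relevance criterion; the paper's actual procedures are not reproduced here): the number of bins for a feature is grown until the criterion stops improving.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(x, n_bins):
    """Equal-width discretization of a single feature into n_bins bins."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    return np.digitize(x, edges)

def incremental_fd(x, y, max_bins=16, tol=1e-3):
    """Increase the number of bins while the relevance criterion keeps improving."""
    best_bins, best_mi = 2, mutual_info_score(y, discretize(x, 2))
    for b in range(3, max_bins + 1):
        mi = mutual_info_score(y, discretize(x, b))
        if mi - best_mi < tol:          # stop when the gain becomes negligible
            break
        best_bins, best_mi = b, mi
    return best_bins, best_mi

rng = np.random.default_rng(0)
x = rng.normal(size=500) + 0.8 * rng.integers(0, 2, 500)
y = (x + rng.normal(scale=0.5, size=500) > 0.4).astype(int)
print(incremental_fd(x, y))
```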

Relevance: 40.00%

Abstract:

Recent literature has proved that many classical pricing models (Black-Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to “pathological, meaningless situations”, since traders can build sequences of portfolios whose risk level tends to −∞ and whose expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a “good deal”. This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is also provided. We point out that a critical property is the absence of short sales. In that case we first construct a “shadow riskless asset” (SRA) without short sales, and then the good deal is obtained by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
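
For reference, under one common convention for a loss variable L and confidence level α (these definitions are standard, not specific to the paper):

$$\mathrm{VaR}_{\alpha}(L)=\inf\{\ell\in\mathbb{R}: P(L\le \ell)\ge\alpha\},\qquad \mathrm{CVaR}_{\alpha}(L)=\mathbb{E}\!\left[L\mid L\ge \mathrm{VaR}_{\alpha}(L)\right].$$

A “good deal” in the sense above is then a sequence of portfolios along which the chosen risk measure diverges to −∞ while the expected return diverges to +∞.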

Relevance: 40.00%

Abstract:

We produce five flavour models for the lepton sector. All five models fit the existing data on the neutrino mass-squared differences and on the lepton mixing angles perfectly well, at the 1σ level. The models are based on the type I seesaw mechanism, on a Z_2 symmetry for each lepton flavour, and either on a (spontaneously broken) symmetry under the interchange of two lepton flavours or on a (spontaneously broken) CP symmetry incorporating that interchange, or on both symmetries simultaneously. Each model makes definite predictions both for the scale of the neutrino masses and for the phase δ in lepton mixing; the fifth model also predicts a correlation between the lepton mixing angles θ_12 and θ_23.
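
For reference, the standard type I seesaw expression for the light-neutrino mass matrix on which such models are built (a textbook formula, not a result of the paper):

$$m_{\nu}\simeq -\,m_{D}\,M_{R}^{-1}\,m_{D}^{T},$$

where m_D is the Dirac mass matrix and M_R the Majorana mass matrix of the heavy right-handed neutrinos.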

Relevance: 40.00%

Abstract:

To study a flavour model with a non-minimal Higgs sector one must first define the symmetries of the fields, then identify what types of vacua exist and how they may break the symmetries, and finally determine whether the remnant symmetries are compatible with the experimental data. Here we address all these issues in the context of flavour models with any number of Higgs doublets. We stress the importance of analysing the Higgs vacuum expectation values that are pseudo-invariant under the generators of all subgroups. It is shown that the only way of obtaining a physical CKM mixing matrix and, simultaneously, non-degenerate and non-zero quark masses is to require the vacuum expectation values of the Higgs fields to break the full flavour group completely, except possibly for some symmetry belonging to baryon number. The application of this technique to some illustrative examples, such as the flavour groups Δ(27), A_4 and S_3, is also presented.
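
The notion of pseudo-invariance used here can be stated compactly (standard definition, not specific to this paper): a vacuum expectation value v is pseudo-invariant under a flavour-group element represented by a matrix S if

$$S\,v = e^{i\alpha}\,v \quad\text{for some phase } \alpha,$$

i.e., the vacuum is left invariant up to an overall phase, which can frequently be compensated by a global rephasing such as hypercharge or baryon number.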

Relevance: 30.00%

Abstract:

The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.
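
For orientation, a common choice of gauge-invariant bilinears in the two-Higgs-doublet model (one standard parametrization; conventions vary across the literature):

$$K_{0}=\Phi_{1}^{\dagger}\Phi_{1}+\Phi_{2}^{\dagger}\Phi_{2},\quad K_{1}=2\,\mathrm{Re}\!\left(\Phi_{1}^{\dagger}\Phi_{2}\right),\quad K_{2}=2\,\mathrm{Im}\!\left(\Phi_{1}^{\dagger}\Phi_{2}\right),\quad K_{3}=\Phi_{1}^{\dagger}\Phi_{1}-\Phi_{2}^{\dagger}\Phi_{2},$$

in terms of which Higgs-family and generalized CP transformations act as rotations and reflections of the vector (K_1, K_2, K_3), which is what makes the bilinear formalism convenient for analysing CP properties.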

Relevance: 30.00%

Abstract:

A package of B-spline finite strip models is developed for the linear analysis of piezolaminated plates and shells. This package is coupled with a global optimization technique in order to enhance the performance of these types of structures, subject to various types of objective functions and/or constraints, with discrete and continuous design variables. The models considered are based on a higher-order displacement field and can be applied to the static, free vibration and buckling analyses of laminated adaptive structures with arbitrary lay-ups, loading and boundary conditions. Genetic algorithms, with either binary or floating-point encoding of the design variables, were used to find optimal locations of piezoelectric actuators as well as to determine the best voltages applied to them in order to obtain a desired structure shape. These models provide an overall economy of computing effort for static and vibration problems.
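
A toy sketch of the optimization layer only (not the authors' B-spline finite strip package): a genetic algorithm with a mixed encoding, binary genes for actuator placement and floating-point genes for the applied voltages, minimizing a hypothetical shape-error objective; INFLUENCE, TARGET and shape_error are placeholders standing in for the structural model.

```python
import numpy as np

rng = np.random.default_rng(1)
N_POS, POP, GENS, V_MAX = 8, 40, 60, 100.0
TARGET = np.zeros(5)                       # desired shape (placeholder)
INFLUENCE = rng.normal(size=(5, N_POS))    # hypothetical actuator-influence matrix

def shape_error(mask, volts):
    """Deviation of the achieved shape from the target, plus a cost per actuator."""
    return np.linalg.norm(INFLUENCE @ (mask * volts) - TARGET) + 0.1 * mask.sum()

def random_individual():
    return rng.integers(0, 2, N_POS).astype(float), rng.uniform(-V_MAX, V_MAX, N_POS)

def crossover(a, b):
    cut = rng.integers(1, N_POS)           # one-point crossover on both gene strings
    return (np.concatenate([a[0][:cut], b[0][cut:]]),
            np.concatenate([a[1][:cut], b[1][cut:]]))

def mutate(ind, rate=0.1):
    mask, volts = ind[0].copy(), ind[1].copy()
    flip = rng.random(N_POS) < rate
    mask[flip] = 1.0 - mask[flip]                        # flip binary placement genes
    volts += rng.normal(scale=rate * V_MAX, size=N_POS)  # jitter voltage genes
    return mask, np.clip(volts, -V_MAX, V_MAX)

pop = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=lambda ind: shape_error(*ind))
    parents = pop[: POP // 2]
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = min(pop, key=lambda ind: shape_error(*ind))
print("placement:", best[0], "voltages:", np.round(best[1], 1))
```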

Relevance: 30.00%

Abstract:

The objective of this work is the development of a dynamic radio resource simulation tool for LTE in the downlink, using the OMNeT++ framework. The tool developed allows base station planning, simulation and analysis of results. The main aspects of the radio access technology are described, namely the network architecture, coding, the definition of radio resources, the transmission rates supported at channel level and the admission control mechanism. A radio resource usage scenario was defined, including the definition of traffic models and of packet- and circuit-oriented services. A reference scenario was also considered for the verification and validation of the simulation model. The simulation is carried out at system level, supported by a dynamic, stochastic, discrete event-driven model so as to cover the different mechanisms characteristic of OFDMA technology. The results obtained allow the performance analysis of services, base stations and the overall system in terms of mean network throughput, mean throughput per eNodeB and mean throughput per mobile, in addition to allowing the analysis of the contribution of other parameters, namely bandwidth, coverage radius, service profile and modulation scheme, among others. From the results obtained it was possible to verify that, in a scenario with base stations with a coverage radius of 100 m, a throughput at the end-user level of 4.69494 Mbps was obtained, i.e., 7 times higher than with base stations with coverage radii of 200 m.
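
For illustration of the discrete-event principle only (the actual tool is built on the OMNeT++ framework, which is C++ based; this is a hypothetical, much-simplified Python analogue): packets arrive at a single eNodeB queue and are served in timestamp order from an event heap.

```python
import heapq, random

random.seed(0)
SIM_TIME, ARRIVAL_RATE, SERVICE_TIME = 10.0, 50.0, 0.015   # s, pkt/s, s/pkt
events = [(random.expovariate(ARRIVAL_RATE), "arrival")]   # (time, kind) event heap
queue, busy_until, served = 0, 0.0, 0

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_TIME:
        break
    if kind == "arrival":
        queue += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if t >= busy_until:                       # server idle: start service now
            heapq.heappush(events, (t + SERVICE_TIME, "departure"))
            busy_until = t + SERVICE_TIME
    else:                                         # departure
        queue -= 1
        served += 1
        if queue > 0:                             # keep serving the backlog
            heapq.heappush(events, (t + SERVICE_TIME, "departure"))
            busy_until = t + SERVICE_TIME

print(f"served {served} packets in {SIM_TIME} s, backlog {queue}")
```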

Relevance: 30.00%

Abstract:

Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become determinant, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index returns (DJI, FTSE 100 and NIKKEI 225), representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (within the in-sample estimation, this is so for the right tail of the distribution of returns).
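
A sketch of the McNeil and Frey (2000) two-step idea on synthetic data (assuming the third-party arch and scipy packages are available; the dataset, threshold and parameter choices are illustrative, not those of the paper): fit a GARCH(1,1) with Student-t innovations, fit a generalized Pareto distribution to the lower tail of the standardized residuals, and combine the two into a conditional quantile estimate.

```python
import numpy as np
from arch import arch_model
from scipy.stats import genpareto

rng = np.random.default_rng(0)
returns = 0.05 + rng.standard_t(df=5, size=2000)              # synthetic % returns

res = arch_model(returns, vol="Garch", p=1, q=1, dist="t").fit(disp="off")
z = res.resid / res.conditional_volatility                     # standardized residuals

# EVT step: peaks-over-threshold on the losses (negated residuals)
losses = -z
u = np.quantile(losses, 0.90)                                  # tail threshold
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0)                    # GPD shape and scale

def tail_quantile(q):
    """GPD-based quantile of the standardized loss distribution, q close to 1."""
    n, n_u = len(losses), len(exceed)
    return u + beta / xi * (((1 - q) / (n_u / n)) ** (-xi) - 1)

# One-step-ahead 99% VaR (as a positive loss, in return units)
sigma_next = float(np.sqrt(res.forecast(horizon=1).variance.values[-1, 0]))
mu_next = float(res.params["mu"])
var_99 = sigma_next * tail_quantile(0.99) - mu_next
print(f"conditional 99% VaR estimate: {var_99:.3f}%")
```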

Relevance: 30.00%

Abstract:

The aim of this paper is to analyze the forecasting ability of the CARR model proposed by Chou (2005) using the S&P 500. We extend the data sample, allowing for the analysis of different stock market circumstances, and propose the use of various range estimators in order to analyze their forecasting performance. Our results show that there are two range-based models that outperform the forecasting ability of the GARCH model: the Parkinson model is better for upward trends and for volatilities above and below the mean, while the CARR model is better for downward trends and volatilities close to the mean.
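
For reference, the Parkinson range-based variance estimator and the CARR recursion as usually written (standard formulations; the exact specification used in the paper may differ in details):

$$\hat{\sigma}^{2}_{P,t}=\frac{\left[\ln\!\left(H_{t}/L_{t}\right)\right]^{2}}{4\ln 2},\qquad R_{t}=\lambda_{t}\,\varepsilon_{t},\quad \lambda_{t}=\omega+\alpha R_{t-1}+\beta\lambda_{t-1},\quad \mathbb{E}[\varepsilon_{t}]=1,$$

where H_t and L_t are the intraday high and low, R_t is the observed range and λ_t its conditional mean.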

Relevance: 30.00%

Abstract:

We consider a simple extension of the Standard Model by adding two Higgs triplets and a complex scalar singlet to its particle content. In this framework, the CP symmetry is spontaneously broken at high energies by the complex vacuum expectation value of the scalar singlet. Such a breaking leads to leptonic CP violation at low energies. The model also exhibits an A_4 × Z_4 flavor symmetry which, after being spontaneously broken at a high-energy scale, yields a tribimaximal pattern in the lepton sector. We consider small perturbations around the tribimaximal vacuum alignment condition in order to generate nonzero values of θ_13, as required by the latest neutrino oscillation data. It is shown that the value of θ_13 recently measured by the Daya Bay Reactor Neutrino Experiment can be accommodated in our framework together with large Dirac-type CP violation. We also address the viability of leptogenesis in our model through the out-of-equilibrium decays of the Higgs triplets. In particular, the CP asymmetries in the triplet decays into two leptons are computed and it is shown that the effective leptogenesis and low-energy CP-violating phases are directly linked.
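
For reference, the tribimaximal mixing pattern referred to above is (up to phase and ordering conventions)

$$U_{\mathrm{TBM}}=\begin{pmatrix}\sqrt{2/3} & 1/\sqrt{3} & 0\\ -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2}\\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}\end{pmatrix},$$

which corresponds to sin²θ_12 = 1/3, sin²θ_23 = 1/2 and θ_13 = 0; the small perturbations around the vacuum alignment are what generate the nonzero θ_13.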

Relevance: 30.00%

Abstract:

Solubility measurements of quinizarin (1,4-dihydroxyanthraquinone), disperse red 9 (1-(methylamino)anthraquinone), and disperse blue 14 (1,4-bis(methylamino)anthraquinone) in supercritical carbon dioxide (SC CO2) were carried out in a flow-type apparatus, at temperatures from (333.2 to 393.2) K and at pressures from (12.0 to 40.0) MPa. The mole fraction solubility of the three dyes decreases in the order quinizarin (2.9 × 10^-6 to 2.9 × 10^-4), red 9 (1.4 × 10^-6 to 3.2 × 10^-4), and blue 14 (7.8 × 10^-8 to 2.2 × 10^-5). Four semiempirical density-based models were used to correlate the solubility of the dyes in the SC CO2. From the correlation results, the total heat of reaction (the heat of vaporization plus the heat of solvation of the solute) was calculated and compared with the results presented in the literature. The solubilities of the three dyes were also correlated by applying the Soave-Redlich-Kwong cubic equation of state (SRK CEoS) with classical mixing rules, and the physical properties required for the modeling were estimated and reported.
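
One widely used semiempirical density-based correlation of this type is Chrastil's equation (quoted here as a representative example; the abstract does not specify which four models were actually used):

$$\ln S = k\,\ln\rho + \frac{a}{T} + b,$$

where S is the solubility, ρ the density of the supercritical CO2 and T the temperature; the temperature coefficient a is what allows the total heat (vaporization plus solvation) to be extracted from the fit.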

Relevance: 30.00%

Abstract:

We present a new dynamical approach to Blumberg's equation, a family of unimodal maps. These maps are proportional to Beta(p, q) probability density functions. Using the symmetry of the Beta(p, q) distribution and symbolic dynamics techniques, a new concept of mirror symmetry is defined for this family of maps. Kneading theory is used to analyze the effect of such symmetry in the presented models. The main result proves that two mirror-symmetric unimodal maps have the same topological entropy. Different population dynamics regimes are identified when the intrinsic growth rate is modified: extinctions, stabilities, bifurcations, chaos and the Allee effect. To illustrate our results, we present a numerical analysis demonstrating the monotonicity of the topological entropy with the variation of the intrinsic growth rate, the existence of isentropic sets in parameter space, and mirror symmetry.
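
The mirror symmetry can be made explicit through the Beta density itself (this reformulation is mine, based on standard properties of the Beta distribution and of topological conjugacy):

$$f_{p,q}(x)\;\propto\; x^{\,p-1}(1-x)^{\,q-1},\qquad f_{p,q}(1-x)=f_{q,p}(x),$$

so maps proportional to the Beta(p, q) and Beta(q, p) densities are topologically conjugate under x ↦ 1 − x, and topological conjugacy preserves topological entropy, consistent with the main result quoted above.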

Relevance: 30.00%

Abstract:

We study neutrino masses and mixing in the context of flavor models with A_4 symmetry, three scalar doublets in the triplet representation, and three lepton families. We show that there is no representation assignment that yields a dimension-5 mass operator consistent with experiment. We then consider a type-I seesaw with three heavy right-handed neutrinos, explaining in detail why it fails, and allowing us to show that agreement with the present neutrino oscillation data can be recovered with the inclusion of dimension-3 heavy-neutrino mass terms that softly break the A_4 symmetry.
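
For reference, the dimension-5 (Weinberg) mass operator and the resulting light-neutrino mass scale can be written schematically as (standard expressions, not taken from the paper):

$$\mathcal{O}_{5}=\frac{c_{\alpha\beta}}{\Lambda}\,\big(L_{\alpha}H\big)\big(L_{\beta}H\big)\;\;\Longrightarrow\;\;(m_{\nu})_{\alpha\beta}\sim c_{\alpha\beta}\,\frac{v^{2}}{\Lambda},$$

where v is the electroweak vacuum expectation value and Λ the scale at which the operator is generated; in the type-I seesaw discussed above, Λ is set by the heavy right-handed neutrino masses.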