29 results for Random matrix theory
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Topology optimization consists of finding the spatial distribution of a given total volume of material such that the resulting structure has some optimal property, for instance, maximum structural stiffness or maximum fundamental eigenfrequency. In this paper a Genetic Algorithm (GA) employing a tree-based representation method is developed to generate initial feasible individuals that remain feasible upon crossover and mutation, and as such do not require any repairing operator to ensure feasibility. Several application examples are studied involving the topology optimization of structures, where the objective function is either the maximization of the stiffness or the maximization of the first and second eigenfrequencies of a plate, all cases having a prescribed material volume constraint.
Abstract:
We discuss theoretical and phenomenological aspects of two-Higgs-doublet extensions of the Standard Model. In general, these extensions have scalar-mediated flavour-changing neutral currents which are strongly constrained by experiment. Various strategies are discussed to control these flavour-changing scalar currents, and their phenomenological consequences are analysed. In particular, scenarios with natural flavour conservation are investigated, including the so-called type I and type II models as well as lepton-specific and inert models. Type III models are then discussed, where scalar flavour-changing neutral currents are present at tree level but are suppressed either by a specific ansatz for the Yukawa couplings or by the introduction of family symmetries leading to a natural suppression mechanism. We also consider the phenomenology of charged scalars in these models. Next we turn to the role of symmetries in the scalar sector. We discuss the six symmetry-constrained scalar potentials and their extension into the fermion sector. The vacuum structure of the scalar potential is analysed, including a study of the vacuum stability conditions on the potential, and the renormalization-group improvement of these conditions is also presented. The stability of the tree-level minimum of the scalar potential, in connection with electric charge conservation and its behaviour under CP, is analysed. The question of CP violation is addressed in detail, including the cases of explicit CP violation and spontaneous CP violation. We present a detailed study of weak-basis invariants which are odd under CP. These invariants allow for the possibility of studying the CP properties of any two-Higgs-doublet model in an arbitrary Higgs basis. A careful study of spontaneous CP violation is presented, including an analysis of the conditions which have to be satisfied in order for a vacuum to violate CP.
We present minimal models of CP violation where the vacuum phase is sufficient to generate a complex CKM matrix, which is at present a requirement for any realistic model of spontaneous CP violation.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
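The linear mixing model described above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up signatures and abundances, not the chapter's algorithm: the endmember matrix, the abundance vector, and the noise level are all invented for demonstration, and the sum-to-one and non-negativity constraints of real unmixing are ignored in the recovery step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: 3 endmembers observed in 50 spectral bands.
n_bands, n_endmembers = 50, 3
M = rng.random((n_bands, n_endmembers))   # endmember signature matrix (columns)
a = np.array([0.6, 0.3, 0.1])             # abundance fractions, summing to one

# Linear mixing model: the observed pixel spectrum is a convex
# combination of the endmember signatures, plus additive noise.
noise = 0.001 * rng.standard_normal(n_bands)
pixel = M @ a + noise

# With M known, unmixing reduces to a least-squares problem
# (constrained least squares would additionally enforce a >= 0
# and sum(a) == 1, as in Ref. [26] of the text).
a_hat, *_ = np.linalg.lstsq(M, pixel, rcond=None)
print(np.round(a_hat, 2))
```

With low noise the unconstrained estimate already lands close to the true abundances; the constrained variants cited in the abstract matter when noise or model mismatch pushes the solution outside the simplex.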
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^{⌊d/2⌋+1}), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
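The skewer-projection step of PPI described above can be sketched as follows. This is an illustrative toy, not the published algorithm: random data stand in for MNF-preprocessed spectra, and the data size, band count, and number of skewers are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 mixed pixels in 10 bands (rows are spectral vectors).
# In the real PPI, an MNF transform would first reduce dimensionality
# and improve the SNR.
X = rng.random((200, 10))

n_skewers = 500
scores = np.zeros(len(X), dtype=int)

# Project every spectral vector onto each random skewer and record
# which pixel lands at each extreme of the projection.
for _ in range(n_skewers):
    skewer = rng.standard_normal(X.shape[1])
    proj = X @ skewer
    scores[proj.argmax()] += 1
    scores[proj.argmin()] += 1

# The cumulative account: pixels hit most often at the extremes are
# taken as the purest candidates.
purest = np.argsort(scores)[::-1][:5]
print(purest, scores[purest])
```

The quadratic growth in work with the number of skewers and pixels is what motivates the lower-complexity alternatives the chapter goes on to develop.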
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
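The iterated orthogonal-projection idea can be sketched as below. This is a simplified stand-in for VCA under stated assumptions, not the published algorithm: it omits the SNR-dependent preprocessing and subspace estimation, uses random toy data, and the function name `vca_like` is our own.

```python
import numpy as np

rng = np.random.default_rng(2)

def vca_like(X, p):
    """Pick p candidate endmembers by iterated orthogonal projection.

    X: (n_pixels, n_bands) data matrix; p: number of endmembers.
    Sketch of the projection step only, not the full VCA algorithm.
    """
    n, d = X.shape
    indices = []
    E = np.zeros((d, 0))            # endmember signatures found so far
    f = rng.standard_normal(d)      # initial random projection direction
    for _ in range(p):
        if E.shape[1] > 0:
            # Project a random vector onto the orthogonal complement of
            # the subspace spanned by the endmembers already determined.
            w = rng.standard_normal(d)
            f = w - E @ np.linalg.lstsq(E, w, rcond=None)[0]
        proj = X @ f
        k = int(np.argmax(np.abs(proj)))   # extreme of the projection
        indices.append(k)
        E = np.column_stack([E, X[k]])
    return indices

X = rng.random((100, 8))
print(vca_like(X, 3))
```

Each iteration costs one matrix-vector product and one small least-squares solve, which is the intuition behind the complexity advantage over N-FINDR's volume inflation.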
Abstract:
This paper presents the Direct Power Control of Three-Phase Matrix Converters (DPC-MC) operating as Unified Power Flow Controllers (UPFC). Since matrix converters allow direct AC/AC power conversion without an intermediate energy storage link, the resulting UPFC has reduced volume and cost, together with higher reliability. Theoretical principles of the DPC-MC method are established based on a UPFC model, together with a new direct power control approach based on sliding mode control techniques. As a result, active and reactive power can be directly controlled by selecting an appropriate switching state of the matrix converter. This new direct power control approach, associated with matrix converter technology, guarantees decoupled active and reactive power control, zero-error tracking, fast response times and timely control actions. Simulation results show the good performance of the proposed system.
Abstract:
This paper presents a predictive optimal matrix converter controller for a flywheel energy storage system used as a Dynamic Voltage Restorer (DVR). The flywheel energy storage device is based on a steel seamless tube mounted as a vertical-axis flywheel to store kinetic energy. The motor/generator is a Permanent Magnet Synchronous Machine driven by the AC-AC Matrix Converter. The matrix control method uses a discrete-time model of the converter system to predict the expected values of the input and output currents for all 27 possible vectors generated by the matrix converter. An optimal controller minimizes control errors using a weighted cost functional. The flywheel and control process were tested as a DVR to mitigate voltage sags and swells. Simulation results show that the DVR is able to compensate the critical load voltage without delays, voltage undershoots or overshoots, overcoming the input/output coupling of matrix converters.
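The selection step of such a predictive controller, evaluating a weighted cost over all 27 candidate switching vectors and applying the minimizer, can be sketched as follows. Everything here is illustrative: the predicted currents are random stand-ins for the paper's discrete-time converter model, and the reference values and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# 27 candidate switching vectors, each mapping to a predicted
# output-current vector via the converter model (here replaced by
# random placeholder predictions).
n_vectors = 27
i_ref = np.array([1.0, -0.5])            # reference currents (toy values)
predictions = rng.uniform(-2, 2, size=(n_vectors, 2))

# Weighted quadratic cost on the tracking error, mimicking the
# weighted cost functional mentioned in the abstract (weights arbitrary).
W = np.diag([1.0, 0.5])

def cost(i_pred):
    e = i_pred - i_ref
    return e @ W @ e

# Exhaustively evaluate all candidates and apply the cheapest one
# at the next switching instant.
costs = np.array([cost(p) for p in predictions])
best = int(np.argmin(costs))
print(best, costs[best])
```

With only 27 candidates, exhaustive evaluation every sampling period is cheap, which is what makes this finite-set predictive scheme practical for matrix converters.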
Abstract:
We analyze generalized CP symmetries of two-Higgs doublet models, extending them from the scalar to the fermion sector of the theory. We show that, other than the usual CP transformation, there is only one of those symmetries which does not imply massless charged fermions. That single model which accommodates a fermionic mass spectrum compatible with experimental data possesses a remarkable feature. Through a soft breaking of the symmetry it displays a new type of spontaneous CP violation, which does not occur in the scalar sector responsible for the symmetry breaking mechanism but, rather, in the fermion sector.
Abstract:
We generalize the Flory-Stockmayer theory of percolation to a model of associating (patchy) colloids, which consists of hard spherical particles having on their surfaces f short-ranged attractive sites of m different types. These sites can form bonds between particles and thus promote self-assembly. It is shown that the percolation threshold is given in terms of the eigenvalues of an m x m matrix, which describes the recursive relations for the number of bonded particles on the i-th level of a cluster with no loops; percolation occurs when the largest of these eigenvalues equals unity. Expressions for the probability that a particle is not bonded to the giant cluster, for the average cluster size, and for the average size of a cluster to which a randomly chosen particle belongs are also derived. Explicit results for these quantities are computed for the case f = 3 and m = 2. We show how these structural properties are related to the thermodynamics of the associating system by regarding bond formation as an (equilibrium) chemical reaction. This solution of the percolation problem, combined with Wertheim's thermodynamic first-order perturbation theory, allows the investigation of the interplay between phase behavior and cluster formation for general models of patchy colloids.
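The eigenvalue criterion stated in the abstract is easy to illustrate numerically. The recursion matrices below are made-up examples for m = 2, not values derived from the paper's model; the point is only the test "largest eigenvalue versus unity".

```python
import numpy as np

def percolates(T):
    """Eigenvalue criterion from the text: a giant cluster exists when
    the largest eigenvalue of the recursion matrix exceeds 1; the
    percolation threshold is where it equals 1."""
    lam = np.max(np.abs(np.linalg.eigvals(T)))
    return lam >= 1.0

# T[i, j]: mean number of bonds of type j emanating from a particle
# reached through a bond of type i (entries here are illustrative only).
T_below = np.array([[0.3, 0.2],
                    [0.1, 0.4]])   # largest eigenvalue 0.5: finite clusters
T_above = np.array([[0.9, 0.5],
                    [0.6, 0.8]])   # largest eigenvalue 1.4: percolation
print(percolates(T_below), percolates(T_above))
```

In the paper's setting the entries of T depend on the bonding probabilities, so sweeping temperature or density moves the largest eigenvalue through 1 and locates the percolation threshold.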
Abstract:
Topological defects in foam, either isolated (disclinations and dislocations) or in pairs, affect the energy and stress, and play an important role in foam deformation. Surface Evolver simulations were performed on large finite clusters of bubbles. These allow us to evaluate the effect of the topology of the defects, and the distance between defects, on the energy and pressure of foam clusters of different sizes. The energy of such defects follows trends similar to known analytical results for a continuous medium.
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become determinant, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index returns, DJI, FTSE 100 and NIKKEI 225, representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (within the in-sample estimation, this is so for the right tail of the distribution of returns).
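The EVT tail-estimation step can be sketched with the standard peaks-over-threshold approach. This is a generic illustration, not the paper's pipeline: simulated Student-t draws stand in for GARCH-filtered residuals, and the threshold and quantile levels are arbitrary choices.

```python
import numpy as np
from scipy.stats import genpareto, t

rng = np.random.default_rng(4)

# Simulated heavy-tailed "returns" (Student-t) standing in for the
# GARCH-filtered residuals used in the McNeil-Frey approach.
returns = t.rvs(df=4, size=10_000, random_state=rng)
losses = -returns                     # work with the loss (left) tail

# Peaks-over-threshold: fit a generalized Pareto distribution to the
# exceedances above a high threshold u.
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0)

# EVT estimate of the 99% loss quantile (VaR), standard POT formula:
# VaR_q = u + (beta/xi) * (((1-q) * n / n_u)^(-xi) - 1).
q = 0.99
n, n_u = len(losses), len(exceedances)
var_evt = u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1)

# Compare with the plain empirical quantile for reference.
print(round(var_evt, 2), round(np.quantile(losses, q), 2))
```

The payoff of the parametric tail fit comes at quantiles beyond the range of the observed data, where empirical quantiles break down but the GPD extrapolation remains usable.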
Abstract:
This paper presents a direct power control (DPC) method for three-phase matrix converters operating as unified power flow controllers (UPFCs). Matrix converters (MCs) allow direct ac/ac power conversion without dc energy storage links; therefore, the MC-based UPFC (MC-UPFC) has reduced volume and cost and reduced capacitor power losses, together with higher reliability. Theoretical principles of DPC based on sliding mode control techniques are established for an MC-UPFC dynamic model including the input filter. As a result, line active and reactive power, together with ac supply reactive power, can be directly controlled by selecting an appropriate matrix converter switching state, guaranteeing good steady-state and dynamic responses. Experimental results of DPC controllers for the MC-UPFC show decoupled active and reactive power control, zero steady-state tracking error, and fast response times. Compared to an MC-UPFC using active and reactive power linear controllers based on a modified Venturini high-frequency PWM modulator, the experimental results of the advanced DPC-MC show faster responses without overshoot and no steady-state error, presenting no cross-coupling in dynamic and steady-state responses.
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better rate-distortion (RD) performance when compared to the popular turbo codes.
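The row-merging operation the abstract refers to can be illustrated on a toy parity-check matrix. This sketch shows only the GF(2) mechanics of merging two check nodes; the toy matrix is invented, and the paper's actual construction also has to preserve properties of the code graph that this snippet ignores.

```python
import numpy as np

# Toy parity-check matrix over GF(2): 4 checks on 8 bits.
H = np.array([[1, 0, 1, 0, 1, 0, 0, 1],
              [0, 1, 1, 1, 0, 0, 1, 0],
              [1, 1, 0, 0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1, 1, 0, 1]], dtype=np.uint8)

def merge_checks(H, i, j):
    """Merge parity-check nodes i and j into one: replace the two rows
    by their GF(2) sum (XOR), dropping one row of H.  Fewer rows means
    fewer syndrome bits, i.e. a higher compression ratio for the source."""
    merged = H[i] ^ H[j]
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    return np.vstack([H[keep], merged])

H2 = merge_checks(H, 0, 1)
print(H2.shape)
```

Merging more or fewer check nodes thus tunes the syndrome length to the measured correlation between the source bitplane and the side information, which is the graceful rate adaptation the abstract describes.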
Abstract:
The development of children's school achievements in mathematics is one of the most important aims of education in Poland. The results of research monitoring school achievements in maths are not optimistic. We can observe low levels of children's understanding of the merits of maths, of self-developed strategies for solving problems, and of the practical use of maths skills. This article frames the discussion of this problem in its psychological and didactic context and analyses the causes as they relate to school practice in teaching maths.
Abstract:
We have generalized earlier work on anchoring of nematic liquid crystals by Sullivan, and Sluckin and Poniewierski, in order to study transitions which may occur in binary mixtures of nematic liquid crystals as a function of composition. Microscopic expressions have been obtained for the anchoring energy of (i) a liquid crystal in contact with a solid aligning surface; (ii) a liquid crystal in contact with an immiscible isotropic medium; (iii) a liquid crystal mixture in contact with a solid aligning surface. For (iii), possible phase diagrams of anchoring angle versus dopant concentration have been calculated using a simple liquid crystal model. These exhibit some interesting features including re-entrant conical anchoring, for what are believed to be realistic values of the molecular parameters. A way of relaxing the most drastic approximation implicit in the above approach is also briefly discussed.
Abstract:
We suggest that the weak-basis independent condition det(M_ν) = 0 for the effective neutrino mass matrix can be used in order to remove the ambiguities in the reconstruction of the neutrino mass matrix from input data available from present and future feasible experiments. In this framework, we study the full reconstruction of M_ν with special emphasis on the correlation between the Majorana CP-violating phase and the various mixing angles. The impact of the recent KamLAND results on the effective neutrino mass parameter is also briefly discussed.