11 results for Linear Codes over Finite Fields

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Relevance: 40.00%

Abstract:

There exist striking analogies in the behaviour of eigenvalues of Hermitian compact operators, singular values of compact operators and invariant factors of homomorphisms of modules over principal ideal domains, namely diagonalization theorems, interlacing inequalities and Courant-Fischer type formulae. Carlson and Sa [D. Carlson and E.M. Sa, Generalized minimax and interlacing inequalities, Linear Multilinear Algebra 15 (1984) pp. 77-103.] introduced an abstract structure, the s-space, where they proved unified versions of these theorems in the finite-dimensional case. We show that this unification can be done using modular lattices with Goldie dimension, which have a natural structure of s-space in the finite-dimensional case, and extend the unification to the countable-dimensional case.
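For orientation, the classical finite-dimensional prototypes that the s-space framework unifies can be written in their standard matrix forms (these are the textbook statements for Hermitian eigenvalues, not the lattice-theoretic versions developed in the paper):

```latex
% Courant-Fischer min-max formula for a Hermitian matrix A with
% eigenvalues \lambda_1 \ge \dots \ge \lambda_n:
\lambda_k(A) = \max_{\substack{S \subseteq \mathbb{C}^n \\ \dim S = k}}
               \; \min_{\substack{x \in S \\ x \neq 0}}
               \frac{x^{*} A x}{x^{*} x}

% Cauchy interlacing: if B is an (n-1) x (n-1) principal submatrix of A,
\lambda_{k+1}(A) \le \lambda_k(B) \le \lambda_k(A),
\qquad k = 1, \dots, n-1.
```

Analogous min-max formulae and interlacing inequalities hold for singular values of compact operators and for invariant factors over principal ideal domains, which is the analogy the abstract refers to.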

Relevance: 30.00%

Abstract:

A package of B-spline finite strip models is developed for the linear analysis of piezolaminated plates and shells. This package is coupled with a global optimization technique in order to enhance the performance of these types of structures, subjected to various types of objective functions and/or constraints, with discrete and continuous design variables. The models considered are based on a higher-order displacement field and can be applied to the static, free vibration and buckling analyses of laminated adaptive structures with arbitrary lay-ups, loading and boundary conditions. Genetic algorithms, with either binary or floating-point encoding of the design variables, were used to find optimal locations for the piezoelectric actuators, as well as to determine the best voltages to apply to them in order to obtain a desired structural shape. These models provide an overall economy of computing effort for static and vibration problems.
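As an illustration of the optimization layer described above, the sketch below is a minimal binary-encoded genetic algorithm selecting actuator sites. The influence matrix, target shape and GA parameters are hypothetical stand-ins for the finite strip model, not the authors' formulation.

```python
import random

random.seed(1)

# Toy stand-in for the structural model: a desired deflection profile at 8
# points, and 8 candidate actuator sites, each contributing a fixed
# (hypothetical) influence shape when its on/off gene is set.
TARGET = [0.0, 0.2, 0.5, 0.8, 1.0, 0.8, 0.5, 0.2]
INFLUENCE = [[0.1 * min(i + 1, j + 1) for j in range(8)] for i in range(8)]

def fitness(genes):
    """Negative squared error between achieved and desired shape."""
    shape = [sum(g * INFLUENCE[i][j] for i, g in enumerate(genes))
             for j in range(8)]
    return -sum((s - t) ** 2 for s, t in zip(shape, TARGET))

def evolve(pop_size=30, n_gen=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randint(1, 7)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

A floating-point encoding, as also used in the paper, would replace the bit vector with actuator voltages and the bit-flip mutation with a Gaussian perturbation; the selection/crossover skeleton is unchanged.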

Relevance: 30.00%

Abstract:

This dissertation aims to simulate the dynamic behaviour of a reinforced concrete slab by applying the Finite Element Method through its implementation in the FreeFEM++ program. This program allows the analysis of the three-dimensional mathematical model of the Theory of Linear Elasticity, encompassing the equilibrium equation, the compatibility equation and the constitutive relations. Since this is a dynamic problem, direct time-integration numerical methods are required to obtain the response in terms of displacement over time. For this work we chose the Newmark method and the Euler method for the temporal discretization, one for its popularity and the other for its simplicity of implementation. The results obtained with FreeFEM++ are validated by comparison with results obtained from SAP2000 and with theoretical solutions, when possible.
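The Newmark method mentioned above can be sketched for a single-degree-of-freedom system (average-acceleration variant, beta = 1/4, gamma = 1/2, which is unconditionally stable). The oscillator parameters are illustrative and unrelated to the slab model in the dissertation.

```python
import math

def newmark_sdof(m, c, k, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration for m*u'' + c*u' + k*u = f(t).

    Returns the displacement history [u(0), u(dt), ..., u(n_steps*dt)].
    """
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m              # initial acceleration
    history = [u]
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(1, n_steps + 1):
        t = i * dt
        # Effective load combining f(t) with history terms
        rhs = (f(t)
               + m * (u / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma * u / (beta * dt)
                      + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        v_new = ((gamma / (beta * dt)) * (u_new - u)
                 + (1 - gamma / beta) * v
                 + dt * (1 - gamma / (2 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt ** 2)
                 - v / (beta * dt) - (1 / (2 * beta) - 1) * a)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Free vibration of an undamped oscillator with natural period T = 1 s:
# the exact solution is u(t) = u0 * cos(2*pi*t).
hist = newmark_sdof(m=1.0, c=0.0, k=4.0 * math.pi ** 2, f=lambda t: 0.0,
                    u0=1.0, v0=0.0, dt=0.001, n_steps=1000)
```

After one full period the computed displacement returns to the initial value to within the scheme's (small) period-elongation error, which is one of the checks against theoretical solutions that the dissertation performs.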

Relevance: 30.00%

Abstract:

New K/Ar dating and geochemical analyses have been carried out on the WNW-ESE elongated oceanic island of S. Jorge to reconstruct the volcanic evolution of a linear ridge developed close to the Azores triple junction. We show that S. Jorge sub-aerial construction encompasses the last 1.3 Myr, a time interval far longer than previously reported. The early development of the ridge involved a sub-aerial building phase exposed in the southeast end of the island and now constrained between 1.32 +/- 0.02 and 1.21 +/- 0.02 Ma. Basic lavas from this older stage are alkaline and enriched in incompatible elements, reflecting partial melting of an enriched mantle source. At least three differentiation cycles from alkaline basalts to mugearites are documented within this stage. The successive episodes of magma rising, storage and evolution suggest an intermittent reopening of the magma feeding system, possibly due to recurrent tensional or trans-tensional tectonic events. The present data show a gap in sub-aerial volcanism before a second, ongoing main building phase starting at about 750 ka. Sub-aerial construction of the S. Jorge ridge migrated progressively towards the west, but involved several overlapping volcanic episodes constrained along the main WNW-ESE structural axis of the island. Mafic magmas erupted during the second phase were also generated by partial melting of an enriched mantle source. Trace element data suggest, however, variable and lower degrees of partial melting of a shallower mantle domain, which is interpreted as an increasing control of lithospheric deformation on the genesis and extraction of primitive melts during the last 750 kyr. The multi-stage development of the S. Jorge volcanic ridge over the last 1.3 Myr has most likely been greatly influenced by regional tectonics, controlled by deformation along the diffuse boundary between the Nubian and the Eurasian plates, and by the increasing effect of sea-floor spreading at the Mid-Atlantic Ridge.

Relevance: 30.00%

Abstract:

ABSTRACT: Introduction – Radiotherapy (RT) is a therapeutic approach for the treatment of breast cancer. However, different irradiation techniques (IT) can be used. Objectives – To compare four ITs, considering the irradiation of the planning target volume (PTV) and of the organs at risk (OAR). Methodology – Seven patients with an indication for RT of the left breast were selected. The PTV and OAR contours were delineated on computed tomography images. Four treatment plans per patient were calculated for the ITs: external conformal radiotherapy (EBRT), intensity-modulated radiotherapy with 2 (IMRT2) and 5 fields (IMRT5), and dynamic arc (DART). Results – Dose-volume histograms were compared for all ITs using the statistical analysis software IBM SPSS v20. With IMRT5 and DART, the OARs receive more low doses. However, IMRT5 shows better conformity and homogeneity indices for the PTV. Conclusions – IMRT5 shows the best conformity index; EBRT and IMRT2 yield better results than DART. There are statistically significant differences between the ITs, mainly at lower doses in the OARs.
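The dose-volume histograms on which the comparison rests reduce, per structure, to the fraction of volume receiving at least each dose level. A minimal sketch with entirely hypothetical voxel doses (the clinical data are not reproduced here) shows the "low-dose bath" effect described for the multi-field techniques:

```python
def cumulative_dvh(voxel_doses, dose_levels):
    """Fraction of the structure's volume receiving at least each dose level
    (one point of a cumulative dose-volume histogram per level)."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= level) / n
            for level in dose_levels]

# Hypothetical OAR voxel doses (Gy) from two plans: a "low-dose-bath" plan
# spreads moderate dose widely; a conformal plan concentrates it.
bath_plan = [5, 6, 7, 8, 9, 10, 11, 12]
conformal = [0, 0, 1, 2, 3, 18, 20, 22]

v5_bath = cumulative_dvh(bath_plan, [5])[0]   # V5: fraction receiving >= 5 Gy
v5_conf = cumulative_dvh(conformal, [5])[0]
```

Here the bath plan has V5 = 1.0 while the conformal plan has V5 = 0.375, mirroring the abstract's observation that the differences between techniques appear mainly at the lower OAR dose levels.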

Relevance: 30.00%

Abstract:

We use a simple model of associating fluids which consists of spherical particles having a hard-core repulsion, complemented by three short-ranged attractive sites on the surface (sticky spots). Two of the spots are of type A and one is of type B; the bonding interactions between each pair of spots have strengths ε_AA, ε_BB, and ε_AB. The theory is applied over the whole range of bonding strengths and the results are interpreted in terms of the equilibrium cluster structures of the phases. In addition to our numerical results, we derive asymptotic expansions for the free energy in the limits for which there is no liquid-vapor critical point: linear chains (ε_AA ≠ 0, ε_AB = ε_BB = 0), hyperbranched polymers (ε_AB ≠ 0, ε_AA = ε_BB = 0), and dimers (ε_BB ≠ 0, ε_AA = ε_AB = 0). These expansions also allow us to calculate the structure of the critical fluid by perturbing around the above limits, yielding three different types of condensation: of linear chains (AA clusters connected by a few AB or BB bonds); of hyperbranched polymers (AB clusters connected by AA bonds); or of dimers (BB clusters connected by AA bonds). Interestingly, there is no critical point when ε_AA vanishes, despite the fact that AA bonds alone cannot drive condensation.

Relevance: 30.00%

Abstract:

Final project submitted in fulfilment of the requirements for the degree of Master in Civil Engineering

Relevance: 30.00%

Abstract:

In this work a mixed integer linear programming (MILP) model was applied to mixed line rate (MLR) IP over WDM and IP over OTN over WDM (with and without OTN grooming) networks, with the aim of reducing network energy consumption. Energy-aware and energy-aware & shortest-path routing techniques were used. Simulations were based on a real network topology as well as on traffic-matrix forecasts derived from statistical data from 2005 to 2017. The energy-aware routing optimization model on the IPoWDM network showed the lowest energy consumption across all years and, compared with energy-aware & shortest-path routing, led to an overall reduction in energy consumption of up to 29%, with even larger savings expected relative to plain shortest-path routing. © 2014 IEEE.
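The paper formulates routing as a MILP; the toy sketch below replaces the solver with brute-force enumeration on a hypothetical four-node topology, but optimizes the same kind of objective: a link consumes its energy once powered on, so grooming demands onto shared links reduces total consumption relative to shortest-path routing.

```python
from itertools import product

# Hypothetical 4-node network: per-link energy cost when the link is active.
LINK_ENERGY = {("A", "B"): 3.0, ("A", "C"): 2.0, ("B", "C"): 1.0,
               ("B", "D"): 3.0, ("C", "D"): 2.0}

# Candidate paths for two A->D demands (e.g. two wavelengths to route).
PATHS = {
    "d1": [["A", "B", "D"], ["A", "C", "D"]],
    "d2": [["A", "B", "D"], ["A", "C", "D"], ["A", "B", "C", "D"]],
}

def links(path):
    """Undirected links traversed by a path, as sorted node pairs."""
    return {tuple(sorted(e)) for e in zip(path, path[1:])}

def total_energy(choice):
    """Energy of a routing choice {demand: path index}: each link counts
    its energy once if it carries at least one demand (the grooming gain)."""
    active = set()
    for demand, idx in choice.items():
        active |= links(PATHS[demand][idx])
    return sum(LINK_ENERGY[l] for l in active)

# Exhaustive search over all path combinations (a stand-in for the MILP solver).
best = min(
    (dict(zip(PATHS, combo))
     for combo in product(*(range(len(p)) for p in PATHS.values()))),
    key=total_energy,
)
```

On this toy instance the optimum routes both demands over A-C-D, powering only two links (energy 4.0), whereas splitting the demands over disjoint paths would power four links.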

Relevance: 30.00%

Abstract:

Dual-phase functionally graded materials are a particular type of composite material whose properties are tailored to vary continuously, depending on the composition distribution of its two constituents, and whose use is increasing in the most diverse application fields. These materials are known to provide superior thermal and mechanical performance when compared to traditional laminated composites, precisely because of this continuous property variation, which enables, among other advantages, a smoother stress distribution profile. In this paper we study the influence of different homogenization schemes, namely those due to Voigt, Hashin-Shtrikman and Mori-Tanaka, which can be used to obtain bounds estimates for the material properties of particulate composite structures. To achieve this goal we also use a set of finite element models based on higher-order shear deformation theories and also on first-order theory. From the studies carried out, on linear static analyses and on free vibration analyses, it is shown that the bounds estimates are as important as the deformation kinematics basis assumed to analyse these types of multifunctional structures. Regarding the homogenization schemes studied, it is shown that the Mori-Tanaka and Hashin-Shtrikman estimates lead to less conservative results than the Voigt rule of mixtures.
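The bounds estimates discussed above can be illustrated for the effective bulk modulus of a two-phase composite. The functions below implement the classical Voigt, Reuss and Hashin-Shtrikman expressions; the phase properties are hypothetical, not taken from the paper.

```python
def voigt(f1, m1, m2):
    """Voigt (rule-of-mixtures) upper bound for any modulus."""
    return f1 * m1 + (1 - f1) * m2

def reuss(f1, m1, m2):
    """Reuss (inverse rule-of-mixtures) lower bound."""
    return 1.0 / (f1 / m1 + (1 - f1) / m2)

def hashin_shtrikman_K(f1, K1, G1, K2, G2):
    """Hashin-Shtrikman bounds on the effective bulk modulus of a two-phase
    isotropic composite; phase 1 (volume fraction f1) is the softer phase."""
    f2 = 1 - f1
    lower = K1 + f2 / (1 / (K2 - K1) + 3 * f1 / (3 * K1 + 4 * G1))
    upper = K2 + f1 / (1 / (K1 - K2) + 3 * f2 / (3 * K2 + 4 * G2))
    return lower, upper

# Hypothetical metal/ceramic phase pair (moduli in GPa), 40% of phase 1.
f1, K1, G1, K2, G2 = 0.4, 76.0, 26.0, 165.0, 80.0
lo, hi = hashin_shtrikman_K(f1, K1, G1, K2, G2)
```

The Hashin-Shtrikman interval always sits strictly inside the Voigt-Reuss interval, which is the sense in which those estimates are "less conservative" than the Voigt rule of mixtures; the Mori-Tanaka estimate for spherical particles coincides with one of the Hashin-Shtrikman bounds.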

Relevance: 30.00%

Abstract:

The associated production of a Higgs boson and a top-quark pair, tt̄H, in proton-proton collisions is addressed in this paper for a center-of-mass energy of 13 TeV at the LHC. Dileptonic final states of tt̄H events with two oppositely charged leptons and four jets from the decays t → bW⁺ → bℓ⁺ν_ℓ, t̄ → b̄W⁻ → b̄ℓ⁻ν̄_ℓ, and h → bb̄ are used. Signal events, generated with MadGraph5_aMC@NLO, are fully reconstructed by applying a kinematic fit. New angular distributions of the decay products as well as angular asymmetries are explored in order to improve the discrimination of tt̄H signal events over the dominant irreducible background contribution, tt̄bb̄. Even after the full kinematic fit reconstruction of the events, the proposed angular distributions and asymmetries remain quite different for the tt̄H signal and the dominant background (tt̄bb̄).

Relevance: 30.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
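In the simplest two-endmember case, the constrained least-squares unmixing mentioned above has a closed form. The sketch below uses hypothetical five-band signatures; real pipelines solve the analogous problem for many endmembers and bands.

```python
def unmix_two_endmembers(y, m1, m2):
    """Least-squares abundance of endmember m1 in pixel y under the
    sum-to-one constraint a1 + a2 = 1 (so a2 = 1 - a1), clipped to [0, 1]
    to enforce nonnegativity.  Closed form of
    min_a || y - (a*m1 + (1-a)*m2) ||^2."""
    d = [p - q for p, q in zip(m1, m2)]
    num = sum((yi - qi) * di for yi, qi, di in zip(y, m2, d))
    den = sum(di * di for di in d)
    return min(1.0, max(0.0, num / den))

# Two hypothetical endmember signatures over 5 bands and a 30/70 mixture.
m1 = [0.10, 0.30, 0.60, 0.80, 0.90]
m2 = [0.50, 0.40, 0.30, 0.20, 0.10]
pixel = [0.3 * a + 0.7 * b for a, b in zip(m1, m2)]

a1 = unmix_two_endmembers(pixel, m1, m2)   # recovers the 0.3 abundance
```

The clipping step is what distinguishes the constrained estimate from plain least squares: noisy pixels can otherwise yield negative abundances or abundances above one, which have no physical meaning as fractions.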
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
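The PCA projection used for dimensionality reduction can be illustrated in closed form for two-dimensional data; this is a toy stand-in for the many-band case, where the same idea is applied to the full band covariance matrix.

```python
import math

def principal_axis(points):
    """First principal component of 2-D data via the closed-form
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy) + math.hypot(0.5 * (sxx - syy), sxy)
    # Corresponding eigenvector (handle the axis-aligned case sxy == 0)
    if abs(sxy) < 1e-15:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    else:
        v = (lam - syy, sxy)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

# Points scattered tightly along the line y = 2x: the principal axis
# should come out proportional to (1, 2).
pts = [(x, 2 * x + 0.01 * ((-1) ** i))
       for i, x in enumerate([-2, -1, -0.5, 0, 0.5, 1, 2])]
axis = principal_axis(pts)
```

Projecting each point onto this axis keeps almost all of the variance in one coordinate, which is exactly the compression that makes the subsequent unmixing step cheaper and less noise-sensitive.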
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.