923 results for Linear matrix inequalities
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been addressed through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers that are characteristic of the SAM approach. Third, we extend the analysis to related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
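The linear-aggregation machinery the abstract describes can be made concrete with a toy example. Below is a minimal sketch in which the 4-account propensity matrix, the grouping into two macro-accounts, and the base-year weights are all illustrative assumptions, not values from the paper:

```python
import numpy as np

# Average expenditure propensities of a 4-account SAM
# (endogenous accounts only); all values are illustrative.
A = np.array([
    [0.10, 0.20, 0.05, 0.15],
    [0.30, 0.05, 0.10, 0.10],
    [0.05, 0.25, 0.10, 0.05],
    [0.20, 0.10, 0.30, 0.10],
])
n = A.shape[0]

# Disaggregated accounting multipliers: M = (I - A)^{-1}.
M = np.linalg.inv(np.eye(n) - A)

# Aggregate accounts {0,1} and {2,3}: S sums accounts into groups,
# W spreads a group over its members with assumed base-year shares.
S = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
W = np.array([[0.6, 0.0],
              [0.4, 0.0],
              [0.0, 0.5],
              [0.0, 0.5]])

# Aggregated propensities and multipliers: A* = S A W, M* = (I - A*)^{-1}.
A_star = S @ A @ W
M_star = np.linalg.inv(np.eye(S.shape[0]) - A_star)

# Aggregation bias for an exogenous injection x: aggregate the detailed
# response, S M x, and compare with the aggregated model's response, M* S x.
x = np.array([1.0, 0.0, 0.0, 0.0])
print("aggregation bias:", S @ M @ x - M_star @ S @ x)
```

In the classic input-output aggregation literature, consistency in aggregation holds when S A = A* S, in which case the printed bias vanishes for every injection x.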
Abstract:
The present paper describes an integrated micro/macro mechanical study of the elastic-viscoplastic behavior of unidirectional metal matrix composites (MMC). The micromechanical analysis of the elastic moduli is based on the Composite Cylinder Assemblage (CCA) model, with comparisons also drawn with a Representative Unit Cell (RUC) technique. These "homogenization" techniques are later incorporated into the Vanishing Fiber Diameter (VFD) model and a new formulation is proposed. The concept of a smeared element procedure is employed in conjunction with two different versions of the Bodner and Partom elastic-viscoplastic constitutive model for the associated macroscopic analysis. The formulations developed are also compared against experimental and analytical results available in the literature.
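For orientation, the homogenization step these models refine can be illustrated with the elementary rule-of-mixtures bounds. The sketch below uses textbook Voigt/Reuss estimates with assumed material constants; it is not the paper's CCA or VFD formulation:

```python
# Elementary homogenization bounds for a unidirectional composite.
# Material constants and fiber volume fraction are assumed for
# illustration, not taken from the paper.
E_f, E_m = 400e9, 70e9   # fiber and matrix Young's moduli (Pa), assumed
V_f = 0.35               # fiber volume fraction, assumed
V_m = 1.0 - V_f

# Longitudinal modulus (Voigt, iso-strain): fibers and matrix in parallel.
E_L = V_f * E_f + V_m * E_m

# Transverse modulus (Reuss, iso-stress): fibers and matrix in series.
E_T = 1.0 / (V_f / E_f + V_m / E_m)

print(f"E_L = {E_L/1e9:.1f} GPa, E_T = {E_T/1e9:.1f} GPa")
```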
Abstract:
Modern GPUs are well suited to intensive computational tasks and massively parallel computation. Sparse matrix-vector multiplication and the linear triangular solve are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel combining the two modules are investigated. The main interest is to solve linear systems derived from elliptic equations discretized with triangular elements; the resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which variables the linear triangular solver can compute in parallel, and, to increase the number of parallel threads, a graph-coloring algorithm reorders the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method improves the computational cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed only once. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods, with the proposed method achieving significantly smaller execution times.
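The abstract's central kernel is easy to state in code. Below is a minimal serial sketch of matrix-vector multiplication performed directly on the CSR arrays; a typical CUDA version assigns the rows of the outer loop to parallel threads (the 3x3 SPD example matrix is an illustrative assumption):

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """Compute y = A @ x for A stored in compressed sparse row format."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        # Nonzeros of this row live in data[indptr[row]:indptr[row+1]].
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# Example: the SPD matrix [[4, 1, 0], [1, 3, 0], [0, 0, 2]].
indptr  = np.array([0, 2, 4, 5])
indices = np.array([0, 1, 0, 1, 2])
data    = np.array([4.0, 1.0, 1.0, 3.0, 2.0])
print(csr_matvec(indptr, indices, data, np.array([1.0, 2.0, 3.0])))  # [6. 7. 6.]
```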
Abstract:
In the present thesis, we discuss the main notions of an axiomatic approach to an invariant Harnack inequality. This procedure, which originated in techniques for fully nonlinear elliptic operators, was developed by Di Fazio, Gutiérrez, and Lanconelli in the general setting of doubling Hölder quasi-metric spaces. The main tools of the approach are the so-called double ball property and critical density property: the validity of these properties implies an invariant Harnack inequality. We are mainly interested in horizontally elliptic operators, i.e., second-order linear degenerate-elliptic operators which are elliptic with respect to the horizontal directions of a Carnot group. An invariant Harnack inequality of Krylov-Safonov type is still an open problem in this context. In the thesis we show how the double ball property is related to the solvability of a kind of exterior Dirichlet problem for these operators. More precisely, it is a consequence of the existence of suitable interior barrier functions of Bouligand type. Following these ideas, we prove the double ball property for a generic step-two Carnot group. Regarding the critical density, we generalize to the setting of H-type groups some arguments by Gutiérrez and Tournier for the Heisenberg group. We show that the critical density holds true in these particular contexts under a Cordes-Landis type condition on the coefficient matrix of the operator. By the axiomatic approach, we thus prove an invariant Harnack inequality in H-type groups which is uniform in the class of coefficient matrices with prescribed eigenvalue bounds satisfying such a Cordes-Landis condition.
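For reference, the kind of invariant Harnack inequality the axiomatic approach delivers can be stated schematically as follows (the exact hypotheses and the dependence of the constant are simplified assumptions here):

```latex
% Invariant Harnack inequality of Krylov-Safonov type (schematic):
% for every nonnegative solution u of Lu = 0 in B(x_0, 2r),
\sup_{B(x_0, r)} u \;\le\; C \inf_{B(x_0, r)} u,
% where C > 0 depends only on structural constants (the doubling
% constant and the eigenvalue bounds on the coefficient matrix),
% and not on u, x_0, or r -- this uniformity is the "invariance".
```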
Abstract:
A unified solution framework is presented for the one-, two-, or three-dimensional complex non-symmetric eigenvalue problems governing linear modal instability of incompressible fluid flows in rectangular domains having two, one, or no homogeneous spatial directions, respectively. The solution algorithm is based on subspace iteration, in which the spatial discretization matrix is formed, stored, and inverted serially. Results delivered by spectral collocation based on the Chebyshev-Gauss-Lobatto (CGL) points and by a suite of high-order finite-difference methods, comprising the Dispersion-Relation-Preserving (DRP) and Padé finite-difference schemes previously employed for this type of work, as well as the Summation-by-Parts (SBP) scheme and the new high-order finite-difference scheme of order q (FD-q), have been compared from the point of view of accuracy and efficiency in standard validation cases of temporal local and BiGlobal linear instability. The FD-q method has been found to significantly outperform all other finite-difference schemes in solving classic linear local, BiGlobal, and TriGlobal eigenvalue problems, as regards both memory and CPU-time requirements. The results of the present study disprove the paradigm that spectral methods are superior to finite-difference methods in terms of computational cost at equal accuracy: the FD-q spatial discretization delivers a speedup of O(10^4). Consequently, accurate solutions of the three-dimensional (TriGlobal) eigenvalue problems may be obtained on typical desktop computers with modest computational effort.
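The computational core, forming a discretization matrix and extracting a few eigenvalues by an inverse-iteration-type method, can be sketched briefly. The example below uses a 1D finite-difference Laplacian as a stand-in for the linearized stability operator (an assumption; the paper's operators are far larger) and shift-and-invert Arnoldi in place of the paper's subspace iteration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

N = 200                       # number of interior grid points, assumed
h = 1.0 / (N + 1)
# Second-order finite-difference Laplacian on (0, 1), Dirichlet BCs.
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2

# Chebyshev-Gauss-Lobatto points, the spectral-collocation grid the
# abstract compares against: x_j = cos(pi * j / N).
x_cgl = np.cos(np.pi * np.arange(N + 1) / N)

# Few eigenvalues closest to the shift sigma (shift-and-invert mode),
# as a stand-in for the paper's serial subspace iteration.
vals = eigs(A.tocsc(), k=4, sigma=0.0, return_eigenvectors=False)
print(np.sort(vals.real))     # compare with -(k*pi)^2, k = 1..4
```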
Abstract:
In this paper we propose a novel fast random search clustering (RSC) algorithm for mixing matrix identification in multiple input multiple output (MIMO) linear blind inverse problems with sparse inputs. The proposed approach is based on the clustering of the observations around the directions given by the columns of the mixing matrix that occurs typically for sparse inputs. Exploiting this fact, the RSC algorithm proceeds by parameterizing the mixing matrix using hyperspherical coordinates, randomly selecting candidate basis vectors (i.e. clustering directions) from the observations, and accepting or rejecting them according to a binary hypothesis test based on the Neyman-Pearson criterion. The RSC algorithm is not tailored to any specific distribution for the sources, can deal with an arbitrary number of inputs and outputs (thus solving the difficult under-determined problem), and is applicable to both instantaneous and convolutive mixtures. Extensive simulations for synthetic and real data with different numbers of inputs and outputs, data sizes, sparsity factors of the inputs, and signal-to-noise ratios confirm the good performance of the proposed approach under moderate/high signal-to-noise ratios. ABSTRACT: A blind source separation method for sparse signals based on identifying the mixing matrix through random clustering techniques.
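A minimal sketch of the random-search idea follows. Sparse inputs make high-energy observations cluster along the mixing-matrix columns, so candidate directions drawn from the data can be accepted or rejected by how many observations align with them; the simple angular threshold below is an assumed stand-in for the paper's Neyman-Pearson test:

```python
import numpy as np

rng = np.random.default_rng(0)

def rsc_directions(X, n_candidates=500, cos_thresh=0.98, min_frac=0.05):
    """X: observations of shape (m, T). Returns accepted unit directions."""
    # Keep only high-energy observations: low-energy columns are mostly
    # noise (no active source) and carry no directional information.
    norms = np.linalg.norm(X, axis=0)
    Xn = X[:, norms > np.percentile(norms, 75)]
    Xn /= np.linalg.norm(Xn, axis=0, keepdims=True)
    accepted = []
    for _ in range(n_candidates):
        d = Xn[:, rng.integers(Xn.shape[1])]          # candidate from the data
        score = np.mean(np.abs(d @ Xn) > cos_thresh)  # fraction aligned with d
        if score > min_frac and all(abs(d @ a) < cos_thresh for a in accepted):
            accepted.append(d)                        # new clustering direction
    return np.column_stack(accepted) if accepted else np.empty((X.shape[0], 0))

# Toy under-determined 2x3 instantaneous mixture with sparse sources.
A = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
A /= np.linalg.norm(A, axis=0)
S = rng.laplace(size=(3, 5000)) * (rng.random((3, 5000)) < 0.1)
X = A @ S + 0.01 * rng.standard_normal((2, 5000))
print(rsc_directions(X))   # columns should align with columns of A (up to sign)
```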
Abstract:
"(This is being submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics, June 1959.)"
Abstract:
Spectral unmixing (SU) is a technique to characterize the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present at each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model through two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known and the main problem is to find the minimum number of endmembers under a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main goal of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods via such approximations. The resulting methods are shown to reconstruct the fractional abundances of endmembers considerably better than state-of-the-art methods, with lower reconstruction errors. In the second part of the thesis, the first scenario (the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, since it naturally enforces the nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the energy of SSoM concentrated in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, including smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
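For the first (semiblind) scenario, the basic computational task can be sketched as follows. Plain nonnegative least squares is used here as a simple stand-in for the thesis's $\ell_0$-approximation methods (an assumption; NNLS often produces sparse abundances when the library is overcomplete), and the library and pixel are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

L, m = 50, 12                 # spectral bands and library size, assumed
D = rng.random((L, m))        # assumed spectral library (columns = signatures)
a_true = np.zeros(m)
a_true[[2, 7]] = [0.6, 0.4]   # two active endmembers, abundances sum to one
y = D @ a_true + 0.001 * rng.standard_normal(L)   # mixed pixel + noise

a_hat, _ = nnls(D, y)         # nonnegative least-squares abundances
print(np.round(a_hat, 3))     # mass concentrates on entries 2 and 7
```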
Abstract:
OBJECTIVES: To assess risk and protective factors for chronic noncommunicable diseases (CNCD) and to identify social inequalities in their distribution among Brazilian adults. METHODS: The data used were collected in 2007 through VIGITEL, an ongoing population-based telephone survey. This surveillance system was implemented in all Brazilian state capitals, and over 54,000 interviews were analyzed. Age-adjusted prevalence ratios for trends across schooling levels were calculated using Poisson regression with linear models. RESULTS: The analyses showed differences in the prevalence of risk and protective factors for CNCD by gender and schooling. Among men, the prevalences of overweight, consumption of meat with visible fat, and dyslipidemia were higher among those with more schooling, while tobacco use, sedentary lifestyle, and high blood pressure were lower. Among women, tobacco use, overweight, obesity, high blood pressure, and diabetes were lower among those with more schooling, while consumption of meat with visible fat and sedentary lifestyle were higher. As for protective factors, fruit and vegetable intake and physical activity were higher among both men and women with more schooling. CONCLUSION: Gender and schooling influence risk and protective factors for CNCD, with less favorable values for men. VIGITEL is a useful tool for monitoring these factors among the Brazilian population.
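The trend analysis described in the methods is commonly implemented as follows. The sketch below fits a Poisson regression with a linear schooling term and robust standard errors on a fabricated data frame (the variables and data are illustrative assumptions, not VIGITEL data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
schooling = rng.integers(0, 3, n)        # 0: 0-8, 1: 9-11, 2: 12+ years, assumed
age = rng.integers(18, 80, n)
# Fabricated outcome: smoking prevalence decreasing with schooling.
p = 0.35 - 0.08 * schooling + 0.001 * (age - 40)
smoker = (rng.random(n) < p).astype(int)
df = pd.DataFrame({"smoker": smoker, "schooling": schooling, "age": age})

# Poisson regression on a binary outcome estimates prevalence ratios;
# robust (HC) errors correct the Poisson variance misspecification.
res = smf.glm("smoker ~ schooling + age", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(res.params["schooling"]))   # PR per one-level increase in schooling
```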
Abstract:
Background: Large inequalities in mortality from most cancers in general, and from mouth and pharynx cancer in particular, have been associated with behavioural and geopolitical factors. Assessing the socioeconomic covariates of cancer mortality may be relevant to a full comprehension of the distal determinants of the disease and to the appraisal of opportune interventions. The objective of this study was to compare socioeconomic inequalities in male mortality from oral and pharyngeal cancer in two major cities of Europe and South America. Methods: The official mortality information system provided data on deaths in each city; general censuses provided population data. Age-adjusted death rates from oral and pharyngeal cancer in men were independently assessed for neighbourhoods of Barcelona, Spain, and Sao Paulo, Brazil, from 1995 to 2003. Uniform methodological criteria guided the comparative assessment of the magnitude, trends, and spatial distribution of mortality. General linear models assessed ecologic correlations between death rates and socioeconomic indices (unemployment, schooling levels, and the human development index) at the inner-city area level. The results obtained for each city were subsequently compared. Results: Mortality of men from oral and pharyngeal cancer ranked higher in Barcelona (9.45 yearly deaths per 100,000 male inhabitants) than in Spain and Europe as a whole, and rates were decreasing. Sao Paulo presented a poorer profile, with a higher magnitude (11.86) and a stationary trend. The appraisal of ecologic correlations indicated an unequal and inequitably distributed burden of disease in both cities, with poorer areas tending to present higher mortality. Barcelona had a larger gradient of mortality than Sao Paulo, indicating a higher inequality of cancer deaths across its neighbourhoods. Conclusion: The quantitative monitoring of inequalities in health may contribute to the formulation of redistributive policies aimed at the concurrent promotion of wellbeing and social justice. The assessment of groups experiencing a higher burden of disease can help health services provide additional resources to expand preventive actions and facilities aimed at early diagnosis, standardized treatment, and rehabilitation.
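The age adjustment underlying the reported rates follows the standard direct-standardization recipe, sketched below with fabricated counts and an assumed standard population:

```python
import numpy as np

# Fabricated stratum-level data for one neighbourhood (assumptions).
deaths     = np.array([2, 5, 14, 30])            # deaths by age stratum
population = np.array([40000, 30000, 20000, 10000])
standard   = np.array([0.4, 0.3, 0.2, 0.1])      # standard age structure

rates = deaths / population                      # stratum-specific rates
adjusted = np.sum(rates * standard) * 100000     # direct standardization
print(f"age-adjusted rate: {adjusted:.2f} per 100,000")
```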
Abstract:
This paper proposes a physically non-linear formulation to model steel-fiber-reinforced concrete with the finite element method. The proposed formulation allows the consideration of short or long fibers placed arbitrarily inside a continuum domain (matrix). Its most important feature is that no additional degrees of freedom are introduced into the pre-existing finite element system to account for any distribution or quantity of fiber inclusions; in other words, the system of equations used to solve a non-reinforced medium has the same size as the one used to solve the reinforced counterpart. Another important characteristic of the formulation is the reduced effort required from the user to introduce reinforcements, avoiding "rebar" elements, node-by-node geometrical definitions, or complex mesh generation. Bonded connection between long fibers and the continuum is considered, while for short fibers a simplified approach is proposed to account for splitting. Non-associative plasticity is adopted for the continuum and one-dimensional plasticity is adopted to model the fibers. Examples are presented in order to show the capabilities of the formulation.
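The formulation's key property, fiber stiffness condensed onto the matrix element's existing degrees of freedom, can be illustrated for a single bilinear quad. The sketch below assumes perfect bond and linear elasticity (simplifications relative to the paper) and is meant only to show why no new degrees of freedom appear:

```python
import numpy as np

def shape_q4(xi, eta):
    """Bilinear shape functions of a 4-node quad at natural coords (xi, eta)."""
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def embedded_fiber_stiffness(xy_elem, nat1, nat2, EA):
    """8x8 stiffness contribution of a bonded fiber whose endpoints lie at
    natural coordinates nat1, nat2 inside the element (DOF order u1,v1,...,u4,v4).
    Perfect bond and linear elasticity are simplifying assumptions."""
    N1, N2 = shape_q4(*nat1), shape_q4(*nat2)
    p1, p2 = N1 @ xy_elem, N2 @ xy_elem        # global endpoint coordinates
    L = np.linalg.norm(p2 - p1)
    t = (p2 - p1) / L                          # fiber axis direction
    # B maps element DOFs to fiber axial strain: eps = t . (d2 - d1) / L,
    # with endpoint displacements interpolated by the shape functions.
    B = np.zeros(8)
    for i in range(4):
        B[2 * i:2 * i + 2] = (N2[i] - N1[i]) * t / L
    return EA * L * np.outer(B, B)             # k = EA * L * B B^T

# Unit-square element; fiber from its center to the midpoint of the right edge.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
K_fiber = embedded_fiber_stiffness(xy, (0.0, 0.0), (1.0, 0.0), EA=210e9 * 1e-4)
# K_fiber is simply added into the element (and global) stiffness matrix:
# the system keeps its original size.
print(K_fiber.shape, np.allclose(K_fiber, K_fiber.T))
```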
Abstract:
One of the objectives of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on that boundary. One of the methods used for dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, the extended Kalman filter (EKF) achieves poor tracking ability. An analytically derived evolution model is not feasible at this moment. This paper investigates identifying the evolution model in parallel to the EKF and updating the evolution model with a certain periodicity. The transition matrix of the evolution model is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed through numerical simulations of a domain with time-varying resistivity and through experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected values for the tissues of a human chest, and the EKF results suggest that the tracking ability is significantly improved with this approach.
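The evolution-model identification loop can be sketched in a simplified linear setting. Below, an ordinary least-squares fit stands in for the Ibrahim time-domain method and a fixed linear observation matrix stands in for the EIT forward problem (both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, T = 4, 3, 400                      # state size, measurements, steps (assumed)
H = rng.standard_normal((m, n))          # assumed linear observation matrix
F_true = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics

x = rng.standard_normal(n)
F = np.eye(n)                            # start from a random-walk evolution model
P, Q, R = np.eye(n), 0.01 * np.eye(n), 0.05 * np.eye(m)
x_hat, history = np.zeros(n), []

for k in range(T):
    x = F_true @ x + 0.1 * rng.standard_normal(n)     # true (unknown) dynamics
    y = H @ x + 0.05 * rng.standard_normal(m)         # boundary measurements
    # Kalman predict / update using the current evolution model F.
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(n) - K @ H) @ P
    history.append(x_hat.copy())
    if k % 100 == 99:                    # periodically re-identify the model
        X = np.array(history)            # rows are past state estimates
        # Least-squares fit of x[k+1] ~ F x[k] over the estimate history
        # (a simple stand-in for the Ibrahim time-domain method).
        F = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
        history = history[-1:]           # restart the identification window

print("||F - F_true|| =", np.linalg.norm(F - F_true))
```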