252 results for Eigenvalue


Relevance:

10.00%

Publisher:

Abstract:

This work aims to evaluate the performance of the MECID (Método dos Elementos de Contorno com Interpolação Direta, a boundary element method with direct interpolation) in solving the integral term related to inertia in the Helmholtz equation, thereby allowing the eigenvalue problem to be modeled and the natural frequencies to be computed, and compares it with the results obtained with the FEM (Finite Element Method) based on the classical Galerkin formulation. First, some problems governed by the Poisson equation are addressed, making it possible to begin the performance comparison between the numerical methods considered here. The problems solved apply to different and important areas of engineering, such as heat transfer, electromagnetism, and particular elastic problems. In numerical terms, the difficulties of accurately approximating more complex distributions of loads, sources, or sinks inside the domain are well known for any boundary technique. Nevertheless, this work shows that, despite such difficulties, the performance of the Boundary Element Method is superior, both in computing the basic variable and in computing its derivative. To this end, two-dimensional problems are solved concerning elastic membranes, stresses in bars under self-weight, and the determination of natural frequencies in acoustic problems in closed domains, among others presented, using meshes with different levels of refinement, with linear elements and radial basis functions for the MECID and first-degree polynomial interpolation basis functions for the FEM. Performance curves are generated by computing the mean percentage error for each mesh, demonstrating the convergence and accuracy of each method. The results are also compared with analytical solutions, when available, for each example solved in this work.
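
Both methods compared in this abstract ultimately reduce the free-vibration problem to a discrete generalized eigenvalue problem K u = ω² M u for the natural frequencies. As a purely illustrative sketch (not the authors' code), the snippet below assembles linear finite elements for a fixed-fixed one-dimensional domain and reports the mean percentage error of the first frequencies against the analytical values, mirroring the kind of convergence curves described above; the mesh size and wave speed are assumed values.

```python
# Minimal sketch (not the authors' code): natural frequencies of a fixed-fixed
# 1D domain via linear finite elements, i.e., the generalized eigenvalue
# problem K u = omega^2 M u that the FEM/MECID comparison above targets.
import numpy as np
from scipy.linalg import eigh

L, c, n_el = 1.0, 1.0, 50          # length, wave speed, number of elements (assumed)
h = L / n_el
n_nodes = n_el + 1

K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
M = np.zeros((n_nodes, n_nodes))   # global consistent mass matrix
ke = (c**2 / h) * np.array([[1, -1], [-1, 1]])
me = (h / 6.0) * np.array([[2, 1], [1, 2]])
for e in range(n_el):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    M[np.ix_(idx, idx)] += me

# Homogeneous Dirichlet conditions at both ends (fixed-membrane analogue).
K, M = K[1:-1, 1:-1], M[1:-1, 1:-1]

omega2 = eigh(K, M, eigvals_only=True)               # eigenvalues omega^2, ascending
omega_num = np.sqrt(omega2[:5])
omega_exact = np.array([n * np.pi * c / L for n in range(1, 6)])
print("mean % error:", 100 * np.mean(np.abs(omega_num - omega_exact) / omega_exact))
```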

Relevance:

10.00%

Publisher:

Abstract:

A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for large-dimensional problems. It consists of low-rank subblocks, leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large-scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type; it is defined through the first exponential-integral function and is, hence, weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
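
The data-sparse idea behind hierarchical matrices is that admissible (well-separated) blocks of the kernel matrix are numerically low rank. The sketch below is a hedged illustration rather than the authors' construction: it builds one far-field block of a kernel matrix from the first exponential integral E1 and compresses it with a truncated SVD. An actual H-matrix code would typically use the degenerate-kernel or adaptive cross approximations discussed in the abstract; the cluster geometry and tolerance here are assumptions.

```python
# Illustrative sketch: low-rank compression of a well-separated block of a
# kernel matrix built from the first exponential integral E1 (the weakly
# singular convolution kernel mentioned above).
import numpy as np
from scipy.special import exp1

x = np.linspace(0.0, 1.0, 400)              # source cluster
y = np.linspace(3.0, 4.0, 400)              # target cluster, well separated from x
A = exp1(np.abs(y[:, None] - x[None, :]))   # admissible (far-field) block

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8
rank = int(np.sum(s > tol * s[0]))           # numerical rank of the block
A_k = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-k representation

full_storage = A.size
lowrank_storage = rank * (A.shape[0] + A.shape[1])
print("rank:", rank,
      "relative error:", np.linalg.norm(A - A_k) / np.linalg.norm(A),
      "storage ratio:", lowrank_storage / full_storage)
```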

Relevance:

10.00%

Publisher:

Abstract:

Principal component analysis is a multivariate statistical technique used to examine the interdependence among variables. Its main feature is its capacity for data reduction, and it has been used in the development of psychiatric research instruments and in the classification of psychiatric disorders. This technique was used to study the factor structure of the Adult Psychiatric Morbidity Questionnaire (QMPA). The questionnaire consists of 45 yes/no questions that identify psychiatric symptoms, use of services, and use of psychotropic drugs. It was administered to 6,470 individuals over 15 years of age, in representative samples of the population of three Brazilian cities (Brasília, São Paulo, and Porto Alegre). The aim of the study was to compare the factor structure of the questionnaire across the three Brazilian urban regions. Seven factors were found, explaining 42.7% of the total variance of the sample: factor 1, Anxiety/Somatization (eigenvalue (EV) = 3.812, explained variance (VE) = 10.9%); factor 2, Irritability/Depression (EV = 2.412, VE = 6.9%); factor 3, Mental Deficiency (EV = 2.014, VE = 5.8%); factor 4, Alcoholism (EV = 1.903, VE = 5.4%); factor 5, Mood Elation (EV = 1.621, VE = 4.6%); factor 6, Perception Disorder (EV = 1.599, VE = 4.6%); and factor 7, Treatment (EV = 1.592, VE = 4.5%). The QMPA showed similar factor structures in the three cities. Based on these findings, suggestions are made to modify some questions and to exclude others in a future version of the questionnaire.
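
For readers unfamiliar with the EV/VE figures quoted above, the short sketch below (on synthetic yes/no answers, not the QMPA sample) computes the eigenvalues of an item correlation matrix and the percentage of total variance each component explains.

```python
# Minimal PCA sketch on synthetic binary items: eigenvalues of the item
# correlation matrix and the percentage of total variance each explains.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_items = 6470, 45
X = (rng.random((n_subjects, n_items)) < 0.3).astype(float)  # fake yes/no answers

R = np.corrcoef(X, rowvar=False)              # 45 x 45 item correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]         # sorted, largest first
explained = 100 * eigvals / eigvals.sum()     # % of total variance per component
for i in range(7):
    print(f"factor {i+1}: EV = {eigvals[i]:.3f}, VE = {explained[i]:.1f}%")
```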

Relevance:

10.00%

Publisher:

Abstract:

Preliminary version

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this work was to investigate the application of experimental design techniques for the identification of Michaelis-Menten kinetic parameters. More specifically, this study attempts to elucidate the relative advantages/disadvantages of employing complex experimental design techniques, in relation to equidistant sampling, when applied to different reactor operation modes. All studies were supported by simulation data of a generic enzymatic process that obeys the Michaelis-Menten kinetic equation. Different aspects were investigated, such as the influence of the reactor operation mode (batch, fed-batch with pulse-wise feeding and fed-batch with continuous feeding) and of the experimental design optimality criterion on the effectiveness of kinetic parameter identification. The following experimental design optimality criteria were investigated: 1) minimization of the sum of the diagonal of the Fisher information matrix (FIM) inverse (A-criterion), 2) maximization of the determinant of the FIM (D-criterion), 3) maximization of the smallest eigenvalue of the FIM (E-criterion) and 4) minimization of the quotient between the largest and the smallest eigenvalue (modified E-criterion). The comparison and assessment of the different methodologies was made on the basis of the Cramér-Rao lower bound (CRLB) errors with respect to the parameters vmax and Km of the Michaelis-Menten kinetic equation. Regarding the reactor operation mode, it was concluded that fed-batch (pulses) operation is better than batch operation for parameter identification: when the former is adopted, the vmax CRLB error is lowered by 18.6 % and the Km CRLB error by 26.4 % compared to the batch operation mode. Regarding the optimality criteria, the best method was the A-criterion, with an average vmax CRLB of 6.34 % and 5.27 % for batch and fed-batch (pulses), respectively, and a Km CRLB of 25.1 % and 18.1 % for batch and fed-batch (pulses), respectively. As a general conclusion, experimental design is justified if the starting parameter CRLB errors are below 19.5 % (vmax) and 45 % (Km) for batch processes, and below 42 % and 50 %, respectively, for the fed-batch (pulses) process; otherwise equidistant sampling is a more rational decision. This conclusion clearly supports that, for fed-batch operation, the use of experimental design is likely to largely improve the identification of Michaelis-Menten kinetic parameters.
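
For concreteness, the following hedged sketch computes the four optimality criteria listed above from the Fisher information matrix of a toy Michaelis-Menten experiment in which the rate v = vmax·S/(Km + S) is measured at a chosen set of substrate concentrations; the parameter values, design points, and noise level are assumptions for illustration, not the simulation settings of the study.

```python
# Sketch: FIM-based design criteria (A, D, E, modified E) for a toy
# Michaelis-Menten measurement design.
import numpy as np

vmax, Km, sigma = 1.0, 0.5, 0.05               # assumed parameters and noise level
S = np.linspace(0.05, 2.0, 10)                 # candidate sampling design

# Sensitivities of the model output v = vmax*S/(Km+S) w.r.t. (vmax, Km).
dv_dvmax = S / (Km + S)
dv_dKm = -vmax * S / (Km + S) ** 2
J = np.column_stack([dv_dvmax, dv_dKm])

FIM = J.T @ J / sigma**2
eig = np.linalg.eigvalsh(FIM)                  # eigenvalues in ascending order

A_criterion = np.trace(np.linalg.inv(FIM))     # to be minimized (trace of FIM^-1)
D_criterion = np.linalg.det(FIM)               # to be maximized
E_criterion = eig[0]                           # smallest eigenvalue, to be maximized
modE_criterion = eig[-1] / eig[0]              # condition number, to be minimized

print(A_criterion, D_criterion, E_criterion, modE_criterion)
```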

Relevance:

10.00%

Publisher:

Abstract:

We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions, we give an estimate of the convergence time to the equilibrium distribution using the second largest eigenvalue of some matrices associated with the system.
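
As a minimal illustration of the role of the second largest eigenvalue (here for a single, autonomous stochastic matrix rather than the paper's non-autonomous setting), the sketch below estimates a convergence time from the second largest eigenvalue modulus; the transition matrix is an assumed toy example.

```python
# Sketch: the second largest eigenvalue modulus (SLEM) of a row-stochastic
# matrix controls how fast the chain approaches its equilibrium distribution.
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])            # assumed transition matrix on a small graph

eigvals = np.linalg.eigvals(P)
mods = np.sort(np.abs(eigvals))[::-1]      # 1.0 comes first for a stochastic matrix
slem = mods[1]                             # second largest eigenvalue modulus

eps = 1e-6
t_mix = int(np.ceil(np.log(eps) / np.log(slem)))   # rough time until distance ~ slem^t < eps
print("SLEM:", slem, "approximate convergence time:", t_mix)
```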

Relevance:

10.00%

Publisher:

Abstract:

The relationship between phonological awareness and morphological awareness, and the independent contribution of each to learning to read, is still not consensual in the literature. Some authors argue that morphological awareness does not contribute to reading acquisition independently of phonological awareness. However, others have found data indicating that morphological awareness plays a specific role in the progression of reading acquisition. Moreover, besides the variety of tasks used not allowing results to be compared, the absence of previous studies on their validity and reliability leads to results whose trustworthiness can be questioned. This study aims to present an analysis of the psychometric qualities of the PCM - Prova de Consciência Morfológica (Morphological Awareness Test). The sample consists of 243 children from the 2nd (n = 79), 3rd (n = 83), and 4th (n = 81) grades attending urban public schools in the district of Porto (northern Portugal). The results revealed that the PCM has high internal consistency (α = .95). In the principal components analysis, a single factor was extracted, with an eigenvalue of 10.88, explaining 54.42% of the total variance of the results. All items load on this factor, with factor loadings ranging from a minimum of .42 to a maximum of .91.
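
As an aside for readers of the psychometric figures above, the sketch below computes Cronbach's alpha on synthetic item data (not the PCM sample) using the standard formula α = k/(k−1)·(1 − Σ item variances / variance of the total score).

```python
# Small sketch: Cronbach's alpha, the internal-consistency index reported above,
# computed on synthetic correlated items.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(243, 1))
items = latent + 0.5 * rng.normal(size=(243, 20))   # 20 correlated fake items

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()          # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)           # variance of the total score
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```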

Relevance:

10.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. Linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in the last years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the correspondent abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24,25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known and, then, hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case of hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. Minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. 
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used shall follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
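
The projection step described above can be sketched as follows; this is a simplified, hedged illustration of the idea (random orthogonal directions and planted pure pixels on toy data), not the published VCA implementation.

```python
# Sketch of the iterative-projection idea: project the data onto a direction
# orthogonal to the subspace spanned by the endmembers found so far and keep
# the pixel with the most extreme projection.
import numpy as np

def extract_endmembers(Y, p, rng=np.random.default_rng(0)):
    """Y: (bands x pixels) data assumed to contain pure pixels; p: number of endmembers."""
    E = np.zeros((Y.shape[0], 0))            # endmember signatures found so far
    for _ in range(p):
        if E.shape[1] == 0:
            P = np.eye(Y.shape[0])           # nothing found yet: project onto full space
        else:
            Q, _ = np.linalg.qr(E)
            P = np.eye(Y.shape[0]) - Q @ Q.T # projector onto the complement of span(E)
        d = P @ rng.normal(size=Y.shape[0])  # random direction orthogonal to span(E)
        idx = np.argmax(np.abs(d @ Y))       # pixel with the most extreme projection
        E = np.column_stack([E, Y[:, idx]])
    return E

# Toy usage: 3-band data that are convex combinations of 3 known signatures.
rng = np.random.default_rng(0)
M = np.array([[0.9, 0.1, 0.2], [0.1, 0.8, 0.3], [0.2, 0.2, 0.9]])   # true endmembers (columns)
A = rng.dirichlet(np.ones(3), size=2000).T                          # abundances summing to one
Y = M @ A
Y[:, :3] = M                                                        # plant one pure pixel per endmember
print(extract_endmembers(Y, 3).round(2))
```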

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: This paper presents results concerning the psychometric properties of the "Scale of attitudes toward HIV/AIDS." The data come from a sample of 549 students at the university, secondary, and primary education levels. METHODS: The data were treated by the principal components method of factor analysis. The final analysis, assuming a minimum eigenvalue of 2, yielded five factors. Items with factor loadings below 0.30 were eliminated. In this study, the lowest alpha observed was 0.79; it is therefore likely that all 47 items of the final instrument measure the same construct: attitude toward HIV/AIDS. RESULTS: Scores below 96 were considered "low degree of knowledge about HIV/AIDS"; between 96 and 192, "moderate degree of knowledge"; and above 192, "high degree of knowledge about HIV/AIDS." Factors 1, 2, and 3 were established as a "general factor of perception of technical-scientific information," a "factor of perception of technical-scientific information versus sexuality and prejudice," and a "factor of perception of technical-scientific information regarding drug use," respectively. CONCLUSIONS: The Cronbach's alpha found for the scale as a whole was 0.859, strongly suggesting the reliability of the instrument, which proved useful for assessing the degree of knowledge about HIV/AIDS, and the risk arising from lack of knowledge, among students.

Relevance:

10.00%

Publisher:

Abstract:

Distribution systems, eigenvalue analysis, nodal admittance matrix, power quality, spectral decomposition

Relevance:

10.00%

Publisher:

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc.}; the CI coefficients in S0 remain always free to vary. S1 accommodates Ks with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One μhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
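
Since the scheme relies on repeated calls to Davidson's eigensolver, the sketch below shows a generic, textbook Davidson iteration for the lowest eigenvalue of a diagonally dominant symmetric matrix; it is not the authors' select-divide-and-conquer code, and the test matrix is an assumed toy example.

```python
# Sketch: basic Davidson iteration with a diagonal (Jacobi) preconditioner for
# the lowest eigenvalue of a large, diagonally dominant symmetric matrix.
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=200):
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 0))                          # orthonormal search subspace
    t = np.eye(n, 1).ravel()                      # start from the first unit vector
    for _ in range(max_iter):
        # Orthogonalize the new direction against the current subspace and append it.
        t -= V @ (V.T @ t)
        t /= np.linalg.norm(t)
        V = np.column_stack([V, t])
        # Rayleigh-Ritz step in the small subspace.
        T = V.T @ A @ V
        theta, S = np.linalg.eigh(T)
        u = V @ S[:, 0]                           # Ritz vector for the lowest Ritz value
        r = A @ u - theta[0] * u                  # residual
        if np.linalg.norm(r) < tol:
            return theta[0], u
        # Diagonal preconditioner for the correction vector.
        denom = diag - theta[0]
        t = r / np.where(np.abs(denom) > 1e-12, denom, 1e-12)
    return theta[0], u

# Toy usage on a small diagonally dominant matrix.
rng = np.random.default_rng(0)
n = 400
A = np.diag(np.arange(1.0, n + 1)) + 1e-3 * rng.normal(size=(n, n))
A = (A + A.T) / 2
lam, _ = davidson_lowest(A)
print(lam, np.linalg.eigvalsh(A)[0])              # Davidson vs. dense reference
```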

Relevance:

10.00%

Publisher:

Abstract:

Recently, graph theory and complex networks have been widely used as a means to model the functionality of the brain. Among the different neuroimaging techniques available for constructing brain functional networks, electroencephalography (EEG), with its high temporal resolution, is a useful instrument for the analysis of functional interdependencies between different brain regions. Alzheimer's disease (AD) is a neurodegenerative disease which leads to substantial cognitive decline and, eventually, dementia in aged people. To achieve a deeper insight into the behavior of functional cerebral networks in AD, here we study their synchronizability in 17 newly diagnosed AD patients compared to 17 healthy control subjects in a no-task, eyes-closed condition. The cross-correlation of artifact-free EEGs was used to construct the brain functional networks. The extracted networks were then tested for their synchronization properties by calculating the eigenratio of the Laplacian matrix of the connection graph, i.e., the largest eigenvalue divided by the second smallest one. In AD patients, we found an increase in the eigenratio, i.e., a decrease in the synchronizability of brain networks, across the delta, alpha, beta, and gamma EEG frequencies over a wide range of network costs. This finding indicates the disruption of functional brain networks in early AD.
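
The synchronizability index used above is easy to reproduce on a toy network. The sketch below (with a surrogate random correlation matrix standing in for the EEG cross-correlations, and an assumed electrode count and link density) thresholds the correlations into an adjacency matrix and computes the Laplacian eigenratio, the largest eigenvalue divided by the second smallest.

```python
# Sketch: Laplacian eigenratio (largest / second smallest eigenvalue) of a
# thresholded correlation network; larger eigenratio means poorer synchronizability.
import numpy as np

rng = np.random.default_rng(0)
n = 19                                                # assumed number of EEG channels
C = np.abs(np.corrcoef(rng.normal(size=(n, 500))))    # surrogate cross-correlation matrix
np.fill_diagonal(C, 0.0)

thr = np.quantile(C[np.triu_indices(n, 1)], 0.5)      # keep the strongest 50% of links
A = (C > thr).astype(float)
A = np.maximum(A, A.T)                                # enforce exact symmetry

L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L))
eigenratio = eigvals[-1] / eigvals[1]                 # lambda_max / lambda_2
print("eigenratio:", eigenratio)
```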

Relevance:

10.00%

Publisher:

Abstract:

The present study discusses retention criteria for principal components analysis (PCA) applied to Likert-scale items typical of psychological questionnaires. The main aim is to recommend that applied researchers refrain from relying only on the eigenvalue-greater-than-one criterion; alternative procedures are suggested for adjusting for sampling error. An additional objective is to add evidence on the consequences of applying this rule when PCA is used with discrete variables. The experimental conditions were studied by means of Monte Carlo sampling, including several sample sizes, different numbers of variables and answer alternatives, and four non-normal distributions. The results suggest that even when all the items, and thus the underlying dimensions, are independent, eigenvalues greater than one are frequent and can explain up to 80% of the variance in the data, meeting the empirical criterion. The consequences of using Kaiser's rule are illustrated with a clinical psychology example. The size of the eigenvalues turned out to be a function of the sample size and the number of variables, which is also the case for parallel analysis, as previous research shows. To enhance the application of alternative criteria, an R package was developed for deciding the number of principal components to retain by means of confidence intervals constructed around the eigenvalues corresponding to a lack of relationship between discrete variables.
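
A minimal sketch of the kind of sampling-error adjustment advocated above (a parallel-analysis-style comparison rather than the R package itself, with assumed sample size, item count, and percentile cut-off) shows how independent Likert-type items still produce several eigenvalues greater than one while the resampling criterion retains essentially none.

```python
# Sketch: reference eigenvalues under independence vs. Kaiser's rule for
# discrete Likert-type items.
import numpy as np

rng = np.random.default_rng(0)
n, p, n_rep = 300, 20, 500                   # sample size, items, Monte Carlo replicates
levels = np.arange(1, 6)                     # 5-point Likert scale

def sorted_eigs(X):
    return np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Reference distribution of eigenvalues under independence.
ref = np.array([sorted_eigs(rng.choice(levels, size=(n, p))) for _ in range(n_rep)])
upper = np.percentile(ref, 95, axis=0)       # 95th percentile per eigenvalue position

observed = sorted_eigs(rng.choice(levels, size=(n, p)))   # here also independent data
print("eigenvalues > 1 (Kaiser):", int(np.sum(observed > 1)),
      "| components retained by the resampling criterion:", int(np.sum(observed > upper)))
```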

Relevance:

10.00%

Publisher:

Abstract:

Assessing the contribution of promoters and coding sequences to gene evolution is an important step toward discovering the major genetic determinants of human evolution. Many specific examples have revealed the evolutionary importance of cis-regulatory regions. However, the relative contribution of regulatory and coding regions to the evolutionary process and whether systemic factors differentially influence their evolution remains unclear. To address these questions, we carried out an analysis at the genome scale to identify signatures of positive selection in human proximal promoters. Next, we examined whether genes with positively selected promoters (Prom+ genes) show systemic differences with respect to a set of genes with positively selected protein-coding regions (Cod+ genes). We found that the number of genes in each set was not significantly different (8.1% and 8.5%, respectively). Furthermore, a functional analysis showed that, in both cases, positive selection affects almost all biological processes and only a few genes of each group are located in enriched categories, indicating that promoters and coding regions are not evolutionarily specialized with respect to gene function. On the other hand, we show that the topology of the human protein network has a different influence on the molecular evolution of proximal promoters and coding regions. Notably, Prom+ genes have an unexpectedly high centrality when compared with a reference distribution (P = 0.008, for Eigenvalue centrality). Moreover, the frequency of Prom+ genes increases from the periphery to the center of the protein network (P = 0.02, for the logistic regression coefficient). This means that gene centrality does not constrain the evolution of proximal promoters, unlike the case with coding regions, and further indicates that the evolution of proximal promoters is more efficient in the center of the protein network than in the periphery. These results show that proximal promoters have had a systemic contribution to human evolution by increasing the participation of central genes in the evolutionary process.
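
For readers unfamiliar with the centrality measure reported above, the sketch below computes eigenvector centrality (the centrality derived from the leading eigenvalue's eigenvector) by power iteration on a small, assumed toy network; it is an illustration of the measure, not the protein-network analysis of the study.

```python
# Sketch: eigenvector centrality by power iteration on a small undirected network.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)   # assumed toy adjacency matrix

x = np.ones(A.shape[0])
for _ in range(200):                            # power iteration converges to the
    x = A @ x                                   # eigenvector of the largest eigenvalue
    x /= np.linalg.norm(x)

centrality = x / x.sum()                        # normalized eigenvector centrality
print(np.round(centrality, 3))                  # node 2 (the hub) scores highest
```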

Relevance:

10.00%

Publisher:

Abstract:

Falls are common in the elderly and potentially result in injury and disability. Thus, preventing falls as early as possible in older adults is a public health priority, yet there is no specific marker that is predictive of the first fall onset. We hypothesized that gait features should be the most relevant variables for predicting the first fall. Clinical baseline characteristics (e.g., gender, cognitive function) were assessed in 259 home-dwelling people aged 66 to 75 who had never fallen. Likewise, the global kinetic behavior of gait was recorded from 22 variables in 1036 walking tests with an accelerometric gait analysis system. Afterward, monthly telephone monitoring reported the date of the first fall over 24 months. A principal components analysis was used to assess the relationship between gait variables and fall status in four groups: non-fallers, fallers from 0 to 6 months, fallers from 6 to 12 months, and fallers from 12 to 24 months. The association of significant principal components (PC) with an increased risk of a first fall was then evaluated using the area under the Receiver Operating Characteristic curve (ROC). No effect of clinical confounding variables was shown as a function of group. An eigenvalue decomposition of the correlation matrix identified a large statistical PC1 (termed "Global kinetics of gait pattern"), which accounted for 36.7% of the total variance. The principal component loadings also revealed a PC2 (12.6% of the total variance), related to "Global gait regularity." Subsequent ANOVAs showed that only PC1 discriminated fall status during the first 6 months, while PC2 discriminated first fall onset between 6 and 12 months. After one year, no PC was associated with falls. These results were bolstered by the ROC analyses, showing good predictive models of the first fall during the first six months or from 6 to 12 months. Overall, these findings suggest that performing a standardized walking test at least once a year is essential for fall prevention.
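
As a hedged illustration of the ROC-based evaluation described above (simulated PC1 scores, not the cohort data), the sketch below computes the area under the ROC curve via its rank-based (Mann-Whitney) formulation.

```python
# Sketch: AUC of a single principal-component score as a fall predictor, using
# the rank-based equivalence AUC = P(score of a random faller > score of a random non-faller).
import numpy as np

rng = np.random.default_rng(0)
pc1_fallers = rng.normal(loc=0.8, scale=1.0, size=60)      # assumed PC1 scores of fallers
pc1_nonfallers = rng.normal(loc=0.0, scale=1.0, size=199)  # assumed PC1 scores of non-fallers

scores = np.concatenate([pc1_fallers, pc1_nonfallers])
labels = np.concatenate([np.ones_like(pc1_fallers), np.zeros_like(pc1_nonfallers)])

order = np.argsort(scores)
ranks = np.empty_like(order, dtype=float)
ranks[order] = np.arange(1, len(scores) + 1)               # ranks of all scores (no ties expected)
n_pos, n_neg = labels.sum(), len(labels) - labels.sum()
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"AUC = {auc:.2f}")
```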