982 results for singular-value decomposition
Abstract:
The ambiguity in the inversion of well-logging geophysical data is studied through Q-mode factor analysis. This method is based on the analysis of a finite number of acceptable solutions, which are ordered, in solution space, along the direction of greatest ambiguity. Analyzing how the parameters vary along these ordered solutions makes it possible to identify those with the greatest influence on the ambiguity. Because Q-mode analysis is based on determining an ambiguity region, obtained empirically from a finite number of acceptable solutions, it is possible to analyze the ambiguity due not only to errors in the observations but also to small errors in the interpretive model. Moreover, the analysis can be applied even when the interpretive models or the relationships between the parameters are nonlinear. The factor analysis is carried out on synthetic data and then compared with singular value decomposition analysis, proving more effective because it requires less restrictive assumptions, thereby characterizing the ambiguity more realistically. Once the parameters with the greatest influence on the model's ambiguity are identified, the model can be reparameterized by grouping them into a single parameter, thus redefining the interpretive model. Although this reparameterization entails a loss of resolution for the grouped parameters, the new model has its ambiguity greatly reduced.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
A natural generalization of the classical Moore-Penrose inverse is presented. The so-called S-Moore-Penrose inverse of an m × n complex matrix A, denoted by As, is defined for any linear subspace S of the matrix vector space C^(n×m). The S-Moore-Penrose inverse As is characterized using either the singular value decomposition or (for the nonsingular square case) the orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.
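As a point of reference, the classical Moore-Penrose inverse that the paper generalizes can be computed directly from the SVD. A minimal NumPy sketch (illustrative only; it does not implement the paper's S-variant):

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Classical Moore-Penrose inverse of A from its SVD.

    A = U @ diag(s) @ Vh  =>  A^+ = V @ diag(1/s) @ U^H,
    inverting only singular values above `tol`.
    """
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    return Vh.T.conj() @ np.diag(s_inv) @ U.T.conj()

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 x 2, full column rank
Ap = pinv_via_svd(A)
print(np.allclose(A @ Ap @ A, A))  # True: one of the four Penrose conditions
```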
Abstract:
We establish a fundamental equivalence between singular value decomposition (SVD) and functional principal components analysis (FPCA) models. This constructive relationship allows the numerical efficiency of SVD to be deployed to fully estimate the components of FPCA, even for extremely high-dimensional functional objects, such as brain images. As an example, a functional mixed effects model is fitted to high-resolution morphometric (RAVENS) images. The main directions of morphometric variation in brain volumes are identified and discussed.
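The SVD-PCA correspondence underlying this equivalence can be illustrated with a small NumPy sketch (synthetic data standing in for the functional objects; not the RAVENS pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))          # 50 "curves" sampled at 1000 points
Xc = X - X.mean(axis=0)                   # center across observations

# Principal components via the thin SVD of the centered data matrix:
U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
components = Vh                           # eigenfunctions (one per row)
scores = U * s                            # PC scores of each observation
var_explained = s**2 / (X.shape[0] - 1)   # eigenvalues of the covariance

# The 1000 x 1000 covariance matrix is never formed explicitly,
# which is the source of the numerical efficiency the abstract exploits.
```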
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path, and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the source (forces) acting on the system. It becomes necessary to predict these forces to know how they influence the responses. This requires inverting the transfer path. The singular value decomposition (SVD) method is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction rejects the small singular values obtained during the SVD by setting a threshold, as these small values dominate the inverse matrix. This thresholding, however, risks rejecting important singular values, severely affecting force prediction. The new approach discussed in this report looks at the column space of the transfer path matrix, which is the basis for the predicted response. The response participation indicates how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important to establish confidence in the force vector prediction. The goal of this report is to suggest, through examples, a solution that is mathematically feasible, physically meaningful, and numerically more efficient. This understanding adds new insight into the effects of the current code and into how to apply these algorithms and this understanding to new codes.
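The threshold-based rejection of small singular values described above amounts to a truncated SVD pseudoinverse. A hedged NumPy sketch (synthetic transfer matrix, not the report's data; real FRF matrices are complex-valued):

```python
import numpy as np

def forces_from_responses(H, x, threshold=1e-3):
    """Estimate forces f from measured responses x = H @ f.

    H is the transfer-path matrix. Singular values below `threshold`
    times the largest one are rejected before inversion, since their
    reciprocals would otherwise dominate the pseudoinverse and amplify
    measurement noise.
    """
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    keep = s >= threshold * s[0]
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vh.T @ (s_inv * (U.T @ x))

# Nearly rank-deficient synthetic transfer matrix:
H = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
x = np.array([2.0, 2.0])
f = forces_from_responses(H, x, threshold=1e-4)  # tiny singular value rejected
```

Without the truncation, the 5e-9-sized second singular value would blow the force estimate up by orders of magnitude; with it, the estimate stays on the dominant column-space direction.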
Abstract:
In this paper, we investigate how a multilinear model can be used to represent human motion data. Based on technical modes (referring to degrees of freedom and number of frames) and natural modes that typically appear in the context of a motion capture session (referring to actor, style, and repetition), the motion data is encoded in the form of a high-order tensor. This tensor is then reduced using N-mode singular value decomposition. Our experiments show that the reduced model approximates the original motion better than previously introduced PCA-based approaches. Furthermore, we discuss how the tensor representation may be used as a valuable tool for the synthesis of new motions.
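The N-mode SVD reduction can be sketched as a truncated higher-order SVD in plain NumPy (toy tensor dimensions standing in for the motion-capture modes; not the paper's implementation):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated N-mode (higher-order) SVD of tensor T.

    Returns the core tensor and one orthonormal factor matrix per mode;
    multiplying the core by the factors approximates T.
    """
    factors = []
    core = T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy "motion tensor": frames x degrees-of-freedom x repetitions
T = np.random.default_rng(1).normal(size=(30, 12, 4))
core, factors = hosvd(T, ranks=(10, 6, 3))  # reduced multilinear model
```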
Abstract:
The goal of acute stroke treatment with intravenous thrombolysis or endovascular recanalization techniques is to rescue the penumbral tissue. Therefore, knowing the factors that influence the loss of penumbral tissue is of major interest. In this study we aimed to identify factors that determine the evolution of the penumbra in patients with proximal (M1 or M2) middle cerebral artery occlusion. Among these factors, collaterals as seen on angiography were of special interest. Forty-four patients were included in this analysis. They had all received endovascular therapy and at least minimal reperfusion was achieved. Their penumbra was assessed with perfusion- and diffusion-weighted imaging. Perfusion-weighted imaging volumes were defined by circular singular value decomposition deconvolution maps (Tmax > 6 s) and results were compared with volumes obtained with non-deconvolved maps (time to peak > 4 s). Loss of penumbral volume was defined as the difference of post- minus pretreatment diffusion-weighted imaging volumes and calculated in per cent of pretreatment penumbral volume. Correlations between baseline characteristics, reperfusion, collaterals, time to reperfusion, and penumbral volume loss were assessed using analysis of covariance. Collaterals (P = 0.021), reperfusion (P = 0.003) and their interaction (P = 0.031) independently influenced penumbral tissue loss, but not time from magnetic resonance (P = 0.254) or from symptom onset (P = 0.360) to reperfusion. Good collaterals markedly slowed down and reduced the penumbra loss: in patients with Thrombolysis in Cerebral Infarction (TICI) 2b-3 reperfusion and without any haemorrhage, 27% of the penumbra was lost, at 8.9 ml/h, with grade 0 collaterals, whereas 11% was lost, at 3.4 ml/h, with grade 1 collaterals. With grade 2 collaterals the penumbral volume change was -2% with -1.5 ml/h, indicating an overall diffusion-weighted imaging lesion reversal.
We conclude that collaterals and reperfusion are the main factors determining loss of penumbral tissue in patients with middle cerebral artery occlusions. Collaterals markedly reduce and slow down penumbra loss. In patients with good collaterals, time to successful reperfusion accounts only for a minor fraction of penumbra loss. These results support the hypothesis that good collaterals extend the time window for acute stroke treatment.
Abstract:
X-ray diffraction analyses of the clay-sized fraction of sediments from the Nankai Trough and Shikoku Basin (Sites 1173, 1174, and 1177 of the Ocean Drilling Program) reveal spatial and temporal trends in clay minerals and diagenesis. More detrital smectite was transported into the Shikoku Basin during the early-middle Miocene than what we observe today, and smectite input decreased progressively through the late Miocene and Pliocene. Volcanic ash has been altered to dioctahedral smectite in the upper Shikoku Basin facies at Site 1173; the ash alteration front shifts upsection to the outer trench-wedge facies at Site 1174. At greater depths (lower Shikoku Basin facies), smectite alters to illite/smectite mixed-layer clay, but reaction progress is incomplete. Using ambient geothermal conditions, a kinetic model overpredicts the amount of illite in illite/smectite clays by 15%-20% at Site 1174. Numerical simulations come closer to observations if the concentration of potassium in pore water is reduced or the time of burial is shortened. Model results match X-ray diffraction results fairly well at Site 1173. The geothermal gradient at Site 1177 is substantially lower than at Sites 1173 and 1174; consequently, volcanic ash alters to smectite in lower Shikoku Basin deposits but smectite-illite diagenesis has not started. The absolute abundance of smectite in mudstones from Site 1177 is sufficient (30-60 wt%) to influence the strata's shear strength and hydrogeology as they subduct along the Ashizuri Transect.
Abstract:
This data report documents the acquisition of two new sets of normalization factors for semiquantitative X-ray diffraction analyses. One set of factors is for bulk sediment powders, and the other applies to oriented aggregates of clay-sized fractions (<2 µm). We analyzed mixtures of standard minerals with known weight percentages of each component and solved for the normalization factors using matrix singular value decomposition. The components in bulk powders include total clay minerals (a mixture of smectite, illite, and chlorite), quartz, plagioclase, and calcite. For clay-sized fractions, the minerals are smectite, illite, chlorite, and quartz. We tested the utility of the method by analyzing natural mudstone specimens from Site 297 of the Deep Sea Drilling Project, which is located in the Shikoku Basin south of Site 1177 of the Ocean Drilling Program (Ashizuri transect).
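The SVD-based least-squares step for the normalization factors can be sketched generically (all numbers are hypothetical; the model is simply that intensities times the unknown factors reproduce the known weight percentages over the standard mixtures):

```python
import numpy as np

# Hypothetical setup: rows = standard mixtures, columns = minerals.
# `I` holds measured XRD peak intensities, `w` the known weight data
# each equation must reproduce; the unknown normalization factors `f`
# satisfy I @ f ~= w in the least-squares sense (overdetermined: more
# mixtures than minerals).
rng = np.random.default_rng(3)
I = rng.uniform(1.0, 10.0, size=(8, 4))   # 8 mixtures, 4 minerals
f_true = np.array([1.2, 0.7, 2.0, 1.5])   # hypothetical true factors
w = I @ f_true

# Solve with the thin SVD (equivalent here to np.linalg.lstsq):
U, s, Vh = np.linalg.svd(I, full_matrices=False)
f = Vh.T @ ((U.T @ w) / s)
```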
Abstract:
Multiuser multiple-input multiple-output (MIMO) downlink (DL) transmission schemes experience both multiuser interference and inter-antenna interference. The singular value decomposition provides an appropriate means of processing channel information and allows us to take the individual user's channel characteristics into account, rather than treating all users' channels jointly as in zero-forcing (ZF) multiuser transmission techniques. However, while uncorrelated MIMO channels have attracted a lot of attention and reached a state of maturity, the performance analysis in the presence of antenna fading correlation, which decreases the channel capacity, requires substantial further research. The joint optimization of the number of activated MIMO layers and the number of bits per symbol, along with the appropriate allocation of the transmit power, shows that not necessarily all user-specific MIMO layers have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
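The per-layer channel decomposition the abstract builds on can be sketched in NumPy (random Rayleigh-fading channel and QPSK symbols; power allocation and bit loading are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
nt = nr = 4
# Complex Gaussian channel matrix (uncorrelated Rayleigh fading):
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

# The SVD turns the MIMO channel into parallel, interference-free layers:
U, s, Vh = np.linalg.svd(H)
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
x = rng.choice(symbols, size=nt)   # one QPSK symbol per layer

tx = Vh.conj().T @ x               # precode with V at the transmitter
rx = U.conj().T @ (H @ tx)         # filter with U^H at the receiver
# rx[k] == s[k] * x[k]: each layer is scaled by its own singular value,
# so weak layers (small s[k]) may be left deactivated, as in the abstract.
```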
Abstract:
We propose a new methodology to evaluate the balance between segregation and integration in functional brain networks by using singular value decomposition techniques. By means of magnetoencephalography, we obtain the brain activity of a control group of 19 individuals during a memory task. Next, we project the node-to-node correlations into a complex network that is analyzed from the perspective of its modular structure, encoded in the contribution matrix. In this way, we are able to study the role that nodes play inside and outside their communities and to identify connector and local hubs. At the mesoscale level, the analysis of the contribution matrix allows us to measure the degree of overlap between communities and to quantify how far the functional networks are from the configuration that best balances integrated and segregated activity.
Abstract:
Realistic operation of helicopter flight simulators in complex topographies (such as urban environments) requires appropriate prediction of the incoming wind, and this prediction should be made in real time. Unfortunately, the wind topology around complex topographies shows time-dependent, fully nonlinear, turbulent patterns (i.e., wakes) whose simulation cannot be made using computationally inexpensive tools based on corrected potential approximations. Instead, the full Navier-Stokes equations plus some kind of turbulence modeling are necessary, which is quite computationally expensive. The complete unsteady flow depends on two parameters, namely the velocity and orientation of the free stream flow. The aim of this MSc thesis is to develop a methodology for the real-time simulation of these complex flows. For simplicity, the flow around a single building (20 m × 20 m cross section and 100 m height) is considered, with free stream velocity in the range 5-25 m/s. Because of the square cross section, the problem shows two reflection symmetries, which allows for restricting the orientations to the range 0° < α < 45°. The methodology includes an offline preprocess and the online operation. The preprocess consists of three steps. An appropriate, unstructured mesh is selected in which the flow is simulated using OpenFOAM, and this is done for 33 combinations of 3 free stream intensities and 11 orientations. For each of these, the simulation proceeds for a sufficiently long time to eliminate transients. This step is quite computationally expensive. Each flow field is post-processed using a combination of proper orthogonal decomposition, fast Fourier transform, and a convenient optimization tool, which identifies the relevant frequencies (namely, both the basic frequencies and their harmonics) and modes in the computational mesh. This combination includes several new ingredients to filter errors out and identify the relevant spatio-temporal patterns.
Note that, in principle, the basic frequencies depend on both the intensity and the orientation of the free stream flow. The outcome of this step is a set of modes (vectors containing the three velocity components at all mesh points) for the various Fourier components, intensities, and orientations, which can be organized as a third-order tensor. This step is fairly computationally inexpensive. The above-mentioned tensor is treated using a combination of truncated high-order singular value decomposition and appropriate one-dimensional interpolation (as in Lorente, Velazquez, Vega, J. Aircraft, 45 (2008) 1779-1788). The outcome is a tensor representation of both the relevant frequencies and the associated Fourier modes for a given pair of values of the free stream flow intensity and orientation. This step is fairly computationally inexpensive. The online operation requires just reconstructing the time-dependent flow field from its Fourier representation, which is extremely computationally inexpensive. The whole method is quite robust.
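The inexpensive online step can be sketched as a simple Fourier-mode synthesis (random stand-in modes and frequencies; in the thesis these would come from the HOSVD-interpolated tensor for the requested intensity and orientation):

```python
import numpy as np

# Hypothetical reduced model: a few retained frequencies (base frequency
# plus harmonics) and their complex spatial Fourier modes.
rng = np.random.default_rng(5)
n_points, n_modes = 200, 5
omegas = np.array([0.0, 1.3, 2.6, 3.9, 5.2])   # rad/s, assumed values
modes = (rng.normal(size=(n_points, n_modes))
         + 1j * rng.normal(size=(n_points, n_modes)))

def reconstruct(t):
    """Velocity field at time t from its truncated Fourier representation.

    Cost is O(n_points * n_modes) per evaluation, which is what makes
    the online operation real-time capable.
    """
    return np.real(modes @ np.exp(1j * omegas * t))

snapshot = reconstruct(0.5)   # one full-field evaluation
```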