968 results for singular value decomposition (SVD)


Relevance:

100.00%

Publisher:

Abstract:

We propose a new methodology to evaluate the balance between segregation and integration in functional brain networks by using singular value decomposition techniques. By means of magnetoencephalography, we obtain the brain activity of a control group of 19 individuals during a memory task. Next, we project the node-to-node correlations into a complex network that is analyzed from the perspective of its modular structure, which is encoded in the contribution matrix. In this way, we are able to study the role that nodes play within and outside their communities and to identify connector and local hubs. At the mesoscale level, the analysis of the contribution matrix allows us to measure the degree of overlap between communities and to quantify how far the functional networks are from the configuration that best balances integrated and segregated activity.
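
As a rough illustration of the idea (not the authors' exact formulation), the sketch below builds a hypothetical contribution matrix, takes its SVD, and computes a simple participation index to flag connector versus local hubs; all names, sizes, and data are stand-ins.

import numpy as np

# Hypothetical contribution matrix: C[i, a] = number of links that node i
# sends into community a (nodes x communities); values are random stand-ins.
rng = np.random.default_rng(0)
n_nodes, n_comms = 90, 5
C = rng.poisson(lam=3.0, size=(n_nodes, n_comms)).astype(float)

# Row-normalise so each row describes how a node spreads its links.
P = C / np.maximum(C.sum(axis=1, keepdims=True), 1e-12)

# SVD of the normalised contribution matrix.
U, s, Vt = np.linalg.svd(P, full_matrices=False)

# If one singular value dominates, the modular structure is weakly expressed
# (more integrated); a flatter spectrum suggests more segregated communities.
print("normalised singular values:", np.round(s / s.sum(), 3))

# Participation index: nodes spreading links over many communities are
# candidate connector hubs; concentrated nodes are candidate local hubs.
participation = 1.0 - np.sum(P**2, axis=1)
print("candidate connector hubs:", np.argsort(participation)[-5:])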

Relevance:

100.00%

Publisher:

Abstract:

Realistic operation of helicopter flight simulators in complex topographies (such as urban environments) requires appropriate prediction of the incoming wind, and this prediction must be made in real time. Unfortunately, the wind topology around complex topographies shows time-dependent, fully nonlinear, turbulent patterns (i.e., wakes) whose simulation cannot be performed with computationally inexpensive tools based on corrected potential approximations. Instead, the full Navier-Stokes equations plus some kind of turbulence modeling are necessary, which is computationally expensive. The complete unsteady flow depends on two parameters, namely the velocity and orientation of the free stream. The aim of this MSc thesis is to develop a methodology for the real-time simulation of these complex flows. For simplicity, the flow around a single building (20 m × 20 m cross section and 100 m height) is considered, with free stream velocity in the range 5-25 m/s. Because of the square cross section, the problem shows two reflection symmetries, which allows the orientations to be restricted to the range 0° < α < 45°. The methodology includes an offline preprocess and the online operation. The preprocess consists of three steps: (i) An appropriate, unstructured mesh is selected in which the flow is simulated using OpenFOAM, and this is done for 33 combinations of 3 free stream intensities and 11 orientations. For each of these, the simulation proceeds for a sufficiently long time to eliminate transients. This step is quite computationally expensive. (ii) Each flow field is post-processed using a combination of proper orthogonal decomposition, fast Fourier transform, and a convenient optimization tool, which identifies the relevant frequencies (namely, both the basic frequencies and their harmonics) and modes in the computational mesh. This combination includes several new ingredients to filter errors out and identify the relevant spatio-temporal patterns. Note that, in principle, the basic frequencies depend on both the intensity and the orientation of the free stream. The outcome of this step is a set of modes (vectors containing the three velocity components at all mesh points) for the various Fourier components, intensities, and orientations, which can be organized as a third-order tensor. This step is fairly computationally inexpensive. (iii) The above-mentioned tensor is treated using a combination of truncated high-order singular value decomposition and appropriate one-dimensional interpolation (as in Lorente, Velazquez, Vega, J. Aircraft, 45 (2008) 1779-1788). The outcome is a tensor representation of both the relevant frequencies and the associated Fourier modes for a given pair of values of the free stream intensity and orientation. This step is fairly computationally inexpensive. The online operation requires just reconstructing the time-dependent flow field from its Fourier representation, which is extremely computationally inexpensive. The whole method is quite robust.
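
The offline tensor step (iii) can be sketched as a truncated higher-order SVD of a (mesh degrees of freedom × Fourier component × flight condition) tensor, followed by one-dimensional interpolation along the condition direction. This is a minimal illustration with random stand-in data and hypothetical sizes, not the thesis code.

import numpy as np

def truncated_hosvd(T, ranks):
    # Truncated higher-order SVD: one SVD per mode-unfolding of T, then
    # project T onto the leading singular vectors of each mode.
    factors = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1)
        core = np.moveaxis(core, 0, mode)
    return core, factors

# Hypothetical data: n_dof velocity DOFs, n_freq Fourier components,
# n_cond flight conditions (intensities x orientations); values are stand-ins.
rng = np.random.default_rng(1)
n_dof, n_freq, n_cond = 2000, 8, 33
T = rng.standard_normal((n_dof, n_freq, n_cond))

core, (U_dof, U_freq, U_cond) = truncated_hosvd(T, ranks=(20, 8, 10))

# Online step (illustrative): interpolate the condition factor between two
# precomputed conditions, then reconstruct the modes for the new condition.
w = 0.3
u_new = (1 - w) * U_cond[4] + w * U_cond[5]
modes_new = np.einsum('abc,ia,jb,c->ij', core, U_dof, U_freq, u_new)
print(modes_new.shape)   # (n_dof, n_freq)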

Relevance:

100.00%

Publisher:

Abstract:

Analysis of previously published sets of DNA microarray gene expression data by singular value decomposition has uncovered underlying patterns or “characteristic modes” in their temporal profiles. These patterns contribute unequally to the structure of the expression profiles. Moreover, the essential features of a given set of expression profiles are captured using just a small number of characteristic modes. This leads to the striking conclusion that the transcriptional response of a genome is orchestrated in a few fundamental patterns of gene expression change. These patterns are both simple and robust, dominating the alterations in expression of genes throughout the genome. Moreover, the characteristic modes of gene expression change in response to environmental perturbations are similar in such distant organisms as yeast and human cells. This analysis reveals simple regularities in the seemingly complex transcriptional transitions of diverse cells to new states, and these provide insights into the operation of the underlying genetic networks.
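
A minimal sketch of how such characteristic modes are extracted: take the SVD of a (genes × time points) expression matrix; the right singular vectors are the temporal modes, and the squared singular values measure how much each mode contributes. The data and sizes below are random stand-ins, not a published data set.

import numpy as np

# Hypothetical expression matrix: rows = genes, columns = time points.
rng = np.random.default_rng(2)
X = rng.standard_normal((5000, 12))

# Center each gene's profile, then SVD: rows of Vt are the characteristic
# modes (temporal patterns); singular values give their relative weight.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

modes = Vt                       # each row is a temporal pattern
weights = s**2 / np.sum(s**2)    # fraction of variance carried by each mode
print("variance captured by first 3 modes:", round(weights[:3].sum(), 3))

# Low-rank approximation of the centered data using only a few modes.
k = 3
X_approx = U[:, :k] * s[:k] @ Vt[:k]
print("relative error of rank-3 approximation:",
      round(np.linalg.norm(Xc - X_approx) / np.linalg.norm(Xc), 3))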

Relevance:

100.00%

Publisher:

Abstract:

We describe the time evolution of gene expression levels by using a time translation matrix to predict future expression levels of genes from their expression levels at some initial time. We deduce the time translation matrix for previously published DNA microarray gene expression data sets by modeling them within a linear framework, using the characteristic modes obtained by singular value decomposition. The resulting time translation matrix provides a measure of the relationships among the modes and governs their time evolution. We show that a truncated matrix linking just a few modes is a good approximation of the full time translation matrix. This finding suggests that the number of essential connections among the genes is small.
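
A minimal sketch of the linear framework, assuming the mode amplitudes have already been obtained by SVD: estimate a time translation matrix M with a(t+1) ≈ M a(t) by least squares over consecutive snapshots, then truncate it to a few strong couplings. All values are stand-ins.

import numpy as np

# Hypothetical mode-amplitude time series: A[:, t] = amplitudes of the first
# k characteristic modes at time t (random stand-ins, not measured data).
rng = np.random.default_rng(3)
k, n_times = 4, 20
A = rng.standard_normal((k, n_times))

# Linear framework: a(t+1) ~= M a(t). Estimate M by least squares over all
# consecutive snapshot pairs, then truncate small entries to expose the
# dominant couplings between modes.
A0, A1 = A[:, :-1], A[:, 1:]
M = A1 @ np.linalg.pinv(A0)                    # k x k time translation matrix

M_trunc = np.where(np.abs(M) > 0.2, M, 0.0)    # keep only strong links
pred = M_trunc @ A0                            # one-step-ahead prediction
err = np.linalg.norm(pred - A1) / np.linalg.norm(A1)
print("relative one-step prediction error:", round(err, 3))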

Relevance:

100.00%

Publisher:

Abstract:

We have undertaken two-dimensional gel electrophoresis proteomic profiling on a series of cell lines with different recombinant antibody production rates. Due to the nature of gel-based experiments, not all protein spots are detected across all samples in an experiment, and hence datasets are invariably incomplete. New approaches are therefore required for the analysis of such graduated datasets. We approached this problem in two ways. Firstly, we applied a missing value imputation technique to calculate missing data points. Secondly, we combined singular value decomposition-based hierarchical clustering with the expression variability test to identify protein spots whose expression correlates with increased antibody production. The results show that, while imputation of missing data was a useful method to improve the statistical analysis of such data sets, it was of limited use in differentiating between the samples investigated, and it highlighted a small number of candidate proteins for further investigation.
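
One common way to impute missing spots is iterative low-rank SVD reconstruction; the sketch below illustrates that idea on stand-in data and is not necessarily the imputation method used in the study.

import numpy as np

def svd_impute(X, rank=3, n_iter=50):
    # Fill missing entries (NaN) by iterating a low-rank SVD reconstruction.
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = U[:, :rank] * s[:rank] @ Vt[:rank]
        filled[missing] = approx[missing]                   # update only the gaps
    return filled

# Hypothetical spot-intensity matrix: rows = protein spots, columns = cell
# lines; NaN marks spots not detected on a given gel (values are stand-ins).
rng = np.random.default_rng(4)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(200, 8))
X[rng.random(X.shape) < 0.15] = np.nan

X_complete = svd_impute(X, rank=3)
print("remaining NaNs:", np.isnan(X_complete).sum())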

Relevance:

100.00%

Publisher:

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system. This system was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities, and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
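
For intuition, the RGA and an SVD-based conditioning check used in such pairing analyses can be computed from a steady-state gain matrix as below; the gains are illustrative stand-ins, not the values identified in the thesis.

import numpy as np

# Hypothetical 2x2 steady-state gain matrix for the column:
# inputs  = [rotor speed, solvent flowrate]
# outputs = [raffinate concentration, extract concentration]
# (numbers are stand-ins, not the thesis's identified gains)
G = np.array([[-1.8,  0.4],
              [ 0.3,  1.1]])

# Relative Gain Array: element-wise product of G and the transpose of inv(G).
RGA = G * np.linalg.inv(G).T
print("RGA:\n", np.round(RGA, 3))   # diagonal close to 1 => pair diagonally

# Singular value analysis: a small condition number indicates the
# multivariable problem is well conditioned for decentralised control.
s = np.linalg.svd(G, compute_uv=False)
print("condition number:", round(s[0] / s[-1], 2))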

Relevance:

100.00%

Publisher:

Abstract:

The method of isotope substitution in neutron diffraction was used to measure the structure of liquid ZnCl2 at 332(5) °C and glassy ZnCl2 at 25(1) °C. The partial structure factors were obtained from the measured diffraction patterns by using the method of singular value decomposition and by using the reverse Monte Carlo procedure. The partial structure factors reproduce the diffraction patterns measured by high-energy x-ray diffraction once a correction for the resolution function of the neutron diffractometer has been made. The results show that the predominant structural motif in both phases is the corner-sharing ZnCl4 tetrahedron and that there is a small number of edge-sharing configurations, these being more abundant in the liquid. The tetrahedra organize on an intermediate length scale to give a first sharp diffraction peak in the measured diffraction patterns at a scattering vector k_FSDP ≈ 1 Å-1 that is most prominent for the Zn-Zn correlations. The results support the notion that the relative fragility of tetrahedral glass-forming MX2 liquids is related to the occurrence of edge-sharing units.
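
Conceptually, the SVD route to the partial structure factors solves, at each scattering vector, a small linear system relating the measured total structure factors of the isotopically distinct samples to the three partials. The sketch below uses made-up weighting coefficients and synthetic data, not the actual ZnCl2 concentrations or scattering lengths.

import numpy as np

# Each isotopically distinct sample gives one total structure factor
# F_i(k) = sum_j W[i, j] * S_j(k), with j running over the partials
# (Zn-Zn, Zn-Cl, Cl-Cl). The weighting matrix below is an illustrative
# stand-in, not the real weights.
W = np.array([[0.10, 0.45, 0.45],
              [0.25, 0.30, 0.45],
              [0.40, 0.45, 0.15]])

rng = np.random.default_rng(5)
n_k = 400
S_true = rng.standard_normal((3, n_k))                       # stand-in partials vs. k
F_meas = W @ S_true + 0.01 * rng.standard_normal((3, n_k))   # "measured" totals

# SVD-based (pseudo-inverse) solution at every k; small singular values of W
# flag combinations of partials that amplify the measurement noise.
s = np.linalg.svd(W, compute_uv=False)
print("condition number of W:", round(s[0] / s[-1], 1))
S_est = np.linalg.pinv(W) @ F_meas
print("rms recovery error:", round(float(np.sqrt(np.mean((S_est - S_true) ** 2))), 3))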

Relevance:

100.00%

Publisher:

Abstract:

Measurement and variation control of geometrical Key Characteristics (KCs), such as the flatness and gap of joint faces and the coaxiality of cabin sections, is a crucial issue in the assembly of large components in the aerospace industry. Aiming to control geometrical KCs and to attain the best-fit posture, an optimization algorithm based on KCs for large-component assembly is proposed. This approach regards posture best fit, which is a key activity in Measurement Aided Assembly (MAA), as a two-phase optimization problem. In the first phase, the global measurement coordinate system of the digital model and the shop floor is unified with minimum error based on singular value decomposition, and the current posture of the components being assembled is optimally solved in terms of the minimum variation of all reference points. In the second phase, the best posture of the movable component is optimally determined by minimizing the variation of multiple KCs, with the constraints that every KC conforms to its product specification. The optimization models and the process procedures for these two-phase problems, based on Particle Swarm Optimization (PSO), are proposed. In each model, every posture to be calculated is modeled as a 6-dimensional particle (three translation and three rotation parameters). Finally, an example in which two cabin sections of a satellite mainframe structure are assembled is selected to verify the effectiveness of the proposed approach, models, and algorithms. The experimental results show that the approach is promising and will provide a foundation for further study and application.
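
The first-phase coordinate unification is essentially an SVD-based rigid registration of paired reference points (the Kabsch construction); the sketch below shows that step only, with hypothetical points, and omits the PSO-based KC optimization.

import numpy as np

def best_fit_transform(P, Q):
    # SVD (Kabsch) rigid registration: rotation R and translation t
    # minimising ||R p_i + t - q_i|| over paired reference points.
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    H = Pc.T @ Qc
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Hypothetical reference points: digital-model coordinates vs. the same
# points measured on the shop floor (stand-in values with noise).
rng = np.random.default_rng(6)
P = rng.uniform(-1, 1, size=(8, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.02, -0.01, 0.03]) + 1e-4 * rng.standard_normal((8, 3))

R, t = best_fit_transform(P, Q)
residual = np.linalg.norm(P @ R.T + t - Q, axis=1)
print("max reference-point deviation:", residual.max())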

Relevance:

100.00%

Publisher:

Abstract:

Digital systems can generate left and right audio channels that create the effect of virtual sound source placement (spatialization) by processing an audio signal through pairs of Head-Related Transfer Functions (HRTFs) or, equivalently, Head-Related Impulse Responses (HRIRs). The spatialization effect is better when individually measured HRTFs or HRIRs are used than when generic ones (e.g., from a mannequin) are used. However, the measurement process is not available to the majority of users. There is ongoing interest in finding mechanisms to customize HRTFs or HRIRs to a specific user in order to achieve an improved spatialization effect for that subject. Unfortunately, the current models used for HRTFs and HRIRs contain over a hundred parameters, and none of those parameters can be easily related to the characteristics of the subject. This dissertation proposes an alternative model for the representation of HRTFs, which contains at most 30 parameters, all of which have a defined functional significance. It also presents methods to obtain the values of the parameters in the model that make it approximately equivalent to an individually measured HRTF. This conversion is achieved by the systematic deconstruction of HRIR sequences through an augmented version of the Hankel Total Least Squares (HTLS) decomposition approach. An average 95% match (fit) was observed between the original HRIRs and those reconstructed from the Damped and Delayed Sinusoids (DDSs) found by the decomposition process, for ipsilateral source locations. The dissertation also introduces and evaluates an HRIR customization procedure, based on a multilinear model implemented through a 3-mode tensor, for the mapping of anatomical data from the subjects to the HRIR sequences at different sound source locations. This model uses the Higher-Order Singular Value Decomposition (HOSVD) method to represent the HRIRs and is capable of generating customized HRIRs from easily attainable anatomical measurements of a new intended user of the system. Listening tests were performed to compare the spatialization performance of customized, generic, and individually measured HRIRs when they are used for synthesized spatial audio. Statistical analysis of the results confirms that the type of HRIRs used for spatialization is a significant factor in spatialization success, with the customized HRIRs yielding better results than generic HRIRs.
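
A simplified flavor of the Hankel-based decomposition step (using ordinary least squares in place of total least squares) is sketched below: build a Hankel matrix from an impulse response, take its SVD, and recover damped-sinusoid poles from the shift invariance of the signal subspace. The test signal is a synthetic stand-in, not a measured HRIR, and this is not the dissertation's augmented HTLS procedure.

import numpy as np

def hankel_poles(x, order):
    # Estimate damped-sinusoid poles from a signal via Hankel-matrix SVD
    # (simplified HSVD/HTLS-style step: least squares instead of TLS).
    n = len(x)
    L = n // 2
    H = np.array([x[i:i + L] for i in range(n - L + 1)])   # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :order]                                      # signal subspace
    # Shift invariance: Us[1:] ~= Us[:-1] @ A; eigenvalues of A are the poles.
    A, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    return np.linalg.eigvals(A)

# Hypothetical "impulse response": two damped sinusoids plus noise (stand-in).
fs = 44100
t = np.arange(256) / fs
x = (np.exp(-500 * t) * np.cos(2 * np.pi * 3000 * t)
     + 0.5 * np.exp(-800 * t) * np.cos(2 * np.pi * 7000 * t))
x += 0.001 * np.random.default_rng(7).standard_normal(t.size)

poles = hankel_poles(x, order=4)
freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)   # recovered frequencies, Hz
damps = -np.log(np.abs(poles)) * fs                  # recovered damping rates
print(np.sort(freqs))                                # ~ [3000, 3000, 7000, 7000]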

Relevance:

100.00%

Publisher:

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix, or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
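
A stripped-down sketch of the component-extraction step, under the assumption of a few noisy measured channels: stack Hankel blocks of each channel row-wise and take the SVD of the stacked matrix; the leading right singular vectors are the extracted components. The interpolation and mass-matrix steps are omitted, and the data are synthetic stand-ins rather than recorded floor accelerations.

import numpy as np

def hankel_block(y, n_rows):
    # Hankel matrix of lagged copies of one measured acceleration channel.
    n_cols = len(y) - n_rows + 1
    return np.array([y[i:i + n_cols] for i in range(n_rows)])

# Hypothetical measured horizontal floor accelerations: three instrumented
# floors, 2000 samples each (stand-in sinusoids plus noise).
rng = np.random.default_rng(8)
t = np.linspace(0, 20, 2000)
measured = [np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)
            for f in (0.8, 1.3, 2.1)]

# Assemble the Hankel blocks row-wise and take the SVD of the stacked matrix;
# the leading right singular vectors are the extracted response components.
H = np.vstack([hankel_block(y, n_rows=50) for y in measured])
U, s, Vt = np.linalg.svd(H, full_matrices=False)

n_keep = 6
components = Vt[:n_keep]        # time histories of the dominant components
print("energy in retained components:", round(np.sum(s[:n_keep]**2) / np.sum(s**2), 3))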

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extraction and interpolation of the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from the OpenSees simulations with the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.

Relevance:

100.00%

Publisher:

Abstract:

Electrostatic interactions are of fundamental importance in determining the structure and stability of macromolecules. For example, charge-charge interactions modulate the folding and binding of proteins and influence protein solubility. Electrostatic interactions are highly variable and can be both favorable and unfavorable. The ability to quantify these interactions is challenging but vital to understanding the detailed balance and major roles that they have in different proteins and biological processes. Measuring pKa values of ionizable groups provides a sensitive method for experimentally probing the electrostatic properties of a protein.

pKa values report the free energy of site-specific proton binding and provide a direct means of studying protein folding and pH-dependent stability. Using a combination of NMR, circular dichroism, and fluorescence spectroscopy along with singular value decomposition, we investigated the contributions of electrostatic interactions to the thermodynamic stability and folding of the protein subunit of Bacillus subtilis ribonuclease P, P protein. Taken together, the results suggest that unfavorable electrostatics alone do not account for the fact that P protein is intrinsically unfolded in the absence of ligand, because the pKa differences observed between the folded and unfolded states are small. Presumably, multiple factors encoded in the P protein sequence account for its intrinsically unfolded protein (IUP) property, which may play an important role in its function.
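
As a generic illustration of how SVD is used with such spectroscopic titrations (not this study's specific pipeline), the sketch below builds a synthetic two-state pH titration and uses the singular value spectrum to count the independent spectral species; all spectra and the transition midpoint are stand-ins.

import numpy as np

# Hypothetical spectra collected across a pH titration: rows = wavelengths,
# columns = pH points, built from two stand-in basis spectra plus noise.
rng = np.random.default_rng(9)
wl = np.linspace(200, 260, 120)
basis = np.vstack([np.exp(-((wl - 222) / 12)**2),    # "folded-like" spectrum
                   np.exp(-((wl - 205) / 10)**2)])   # "unfolded-like" spectrum
pH = np.linspace(2, 10, 25)
frac = 1.0 / (1.0 + 10**(5.0 - pH))                  # stand-in pH transition
D = basis.T @ np.vstack([frac, 1 - frac]) + 0.01 * rng.standard_normal((120, 25))

# SVD: the number of singular values above the noise floor estimates the
# number of independent spectral species; columns of U are basis spectra and
# rows of Vt their pH-dependent amplitudes (which can then be fit for pKa).
U, s, Vt = np.linalg.svd(D, full_matrices=False)
print("leading singular values:", np.round(s[:4], 2))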

Relevance:

100.00%

Publisher:

Abstract:

Three sites were cored on the landward slope of the Nankai margin of southwest Japan during Leg 190 of the Ocean Drilling Program. Sites 1175 and 1176 are located in a trench-slope basin that was constructed during the early Pleistocene (~1 Ma) by frontal offscraping of coarse-grained trench-wedge deposits. Rapid uplift elevated the substrate above the calcite compensation depth and rerouted a transverse canyon-channel system that had delivered most of the trench sediment during the late Pliocene (1.06-1.95 Ma). The basin's depth is now ~3000 to 3020 m below sea level. Clay-sized detritus (<2 µm) did not change significantly in composition during the transition from trench-floor to slope-basin environment. Relative mineral abundances for the two slope-basin sites average 36-37 wt% illite, 25 wt% smectite, 22-24 wt% chlorite, and 15-16 wt% quartz. Site 1178 is located higher up the landward slope at a water depth of 1741 m, ~70 km from the present-day deformation front. There is a pronounced discontinuity ~200 m below seafloor between muddy slope-apron deposits (Quaternary-late Miocene) and sandier trench-wedge deposits (late Miocene; 6.8-9.63 Ma). Clay minerals change downsection from an illite-chlorite assemblage (similar to Sites 1175 and 1176) to one that contains substantial amounts of smectite (average = 45 wt% of the clay-sized fraction; maximum = 76 wt%). Mixing in the water column homogenizes fine-grained suspended sediment eroded from the Izu-Bonin volcanic arc, the Izu-Honshu collision zone, and the Outer Zone of Kyushu and Shikoku, but the spatial balance among those contributors has shifted through time. Closure of the Central America Seaway at ~3 Ma was particularly important because it triggered intensification of the Kuroshio Current. With stronger and deeper flow of surface water toward the northeast, the flux of smectite from the Izu-Bonin volcanic arc was dampened and more detrital illite and chlorite were transported into the Shikoku-Nankai system from the Outer Zone of Japan.

Relevance:

100.00%

Publisher:

Abstract:

The dependency of word similarity in vector space models on the frequency of words has been noted in a few studies, but has received very little attention. We study the influence of word frequency in a set of 10 000 randomly selected word pairs for a number of different combinations of feature weighting schemes and similarity measures. We find that the similarity of word pairs for all methods, except for the one using singular value decomposition to reduce the dimensionality of the feature space, is determined to a large extent by the frequency of the words. In a binary classification task of pairs of synonyms and unrelated words we find that for all similarity measures the results can be improved when we correct for the frequency bias.
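
A minimal sketch of the kind of analysis described, on random stand-in counts: compare cosine similarities computed in the raw feature space and after truncated-SVD dimensionality reduction, and correlate pair similarity with word frequency to expose the bias. The weighting scheme, corpus, and word-pair set are not those of the study.

import numpy as np

rng = np.random.default_rng(10)
n_words, n_feats, k = 500, 300, 50

# Stand-in word-by-feature co-occurrence counts with a wide spread of word
# frequencies (a real study would use corpus counts and a weighting scheme
# such as PPMI).
word_rate = rng.pareto(1.5, size=n_words) + 0.1
counts = rng.poisson(lam=np.outer(word_rate, np.full(n_feats, 1.0))).astype(float)
freq = counts.sum(axis=1)

def cosine_matrix(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
    return (M / norms) @ (M / norms).T

# Similarities in the raw feature space vs. in a truncated-SVD space.
sim_raw = cosine_matrix(counts)
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
sim_svd = cosine_matrix(U[:, :k] * s[:k])

# Frequency bias: correlate each pair's similarity with the log product of
# the two words' frequencies over a random sample of pairs.
idx = rng.integers(0, n_words, size=(10_000, 2))
pair_freq = np.log(freq[idx[:, 0]] * freq[idx[:, 1]] + 1)
for name, sim in (("raw", sim_raw), ("svd", sim_svd)):
    vals = sim[idx[:, 0], idx[:, 1]]
    r = np.corrcoef(pair_freq, vals)[0, 1]
    print(f"{name}: correlation with frequency = {r:.3f}")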