982 results for singular-value decomposition
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 until well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies that overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error procedures are needed. It requires a mass matrix, or at least an estimate of the floor masses; a stiffness matrix may be used but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. The interpolation step can make use of a reduced-order stiffness matrix, a backward-difference matrix, or a central-difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
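As a rough illustration of the Hankel-matrix step described above, the following Python sketch builds Hankel matrices from two synthetic acceleration channels, stacks them row-wise, and extracts principal components by SVD. The signals, the depth parameter n_lags, and the number of retained components are all invented for illustration; they are not taken from the dissertation.

    # Sketch: principal components from stacked Hankel matrices of
    # measured floor accelerations. Synthetic data, illustrative only.
    import numpy as np
    from scipy.linalg import hankel

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 20.0, 2000)                  # time vector [s]
    # Two measured floor-acceleration channels (synthetic stand-ins)
    acc = np.vstack([np.sin(2 * np.pi * 1.2 * t),
                     0.5 * np.sin(2 * np.pi * 3.4 * t)])
    acc += 0.05 * rng.standard_normal(acc.shape)      # measurement noise

    n_lags = 100                                      # Hankel block depth
    blocks = [hankel(a[:n_lags], a[n_lags - 1:]) for a in acc]
    H = np.vstack(blocks)                             # assemble row-wise

    # Leading right singular vectors play the role of the extracted
    # response components.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    n_keep = 4
    components = Vt[:n_keep]
    print("leading singular values:", np.round(s[:n_keep], 2))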
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear-wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from OpenSees simulations against the recorded measurements. General explanations and implications of the effects of soil-structure interaction are described, supported by story drifts, floor acceleration and displacement responses, and force-displacement relations.
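The hysteretic force-displacement law implemented by Bouc-Wen bearing elements can be sketched independently of OpenSees. The snippet below integrates a standard Bouc-Wen model under an imposed sinusoidal displacement; every parameter value is a hypothetical placeholder, not a property of the hospital model.

    # Sketch: Bouc-Wen hysteresis of a lead rubber bearing under an
    # imposed displacement history. Parameters are illustrative.
    import numpy as np

    k0, alpha = 2.0e6, 0.15       # initial stiffness [N/m], post/pre-yield ratio
    uy = 0.01                     # yield displacement [m]
    A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0    # Bouc-Wen shape parameters

    dt = 1.0e-3
    t = np.arange(0.0, 10.0, dt)
    u = 0.05 * np.sin(2 * np.pi * 0.5 * t)    # imposed displacement [m]
    du = np.gradient(u, dt)

    z = np.zeros_like(u)          # hysteretic variable
    for i in range(1, len(t)):    # explicit Euler integration
        dz = du[i] / uy * (A - abs(z[i - 1])**n *
                           (gamma + beta * np.sign(du[i] * z[i - 1])))
        z[i] = z[i - 1] + dz * dt

    f = alpha * k0 * u + (1.0 - alpha) * k0 * uy * z   # bearing shear force
    print("peak force [kN]:", round(f.max() / 1e3, 1))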
Abstract:
Electrostatic interactions are of fundamental importance in determining the structure and stability of macromolecules. For example, charge-charge interactions modulate the folding and binding of proteins and influence protein solubility. Electrostatic interactions are highly variable and can be both favorable and unfavorable. Quantifying these interactions is challenging but vital to understanding the detailed balance and the major roles they play in different proteins and biological processes. Measuring pKa values of ionizable groups provides a sensitive method for experimentally probing the electrostatic properties of a protein.
pKa values report the free energy of site-specific proton binding and provide a direct means of studying protein folding and pH-dependent stability. Using a combination of NMR, circular dichroism, and fluorescence spectroscopy along with singular value decomposition, we investigated the contributions of electrostatic interactions to the thermodynamic stability and folding of the protein subunit of Bacillus subtilis ribonuclease P, P protein. Taken together, the results suggest that unfavorable electrostatics alone do not account for the fact that P protein is intrinsically unfolded in the absence of ligand, because the pKa differences observed between the folded and unfolded states are small. Presumably, multiple factors encoded in the P protein sequence account for its intrinsically unfolded protein (IUP) property, which may play an important role in its function.
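As a hedged illustration of how SVD can support pKa estimation from spectroscopic titration data, the sketch below decomposes a synthetic pH-by-wavelength absorbance matrix and fits a Henderson-Hasselbalch curve to one component amplitude. The data, the basis spectra, and the choice of the second component are assumptions made for illustration and do not reproduce this study's analysis.

    # Sketch: SVD of synthetic titration spectra, then a pKa fit.
    import numpy as np
    from scipy.optimize import curve_fit

    pH = np.linspace(4.0, 9.0, 21)
    wl = np.linspace(200.0, 300.0, 101)              # wavelengths [nm]
    true_pKa = 6.3
    frac = 1.0 / (1.0 + 10.0**(true_pKa - pH))       # deprotonated fraction
    shape_a = np.exp(-((wl - 230.0) / 15.0)**2)      # protonated-state spectrum
    shape_b = np.exp(-((wl - 260.0) / 15.0)**2)      # deprotonated-state spectrum
    D = np.outer(shape_a, 1.0 - frac) + np.outer(shape_b, frac)
    D += 0.01 * np.random.default_rng(1).standard_normal(D.shape)

    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    amp = Vt[1]                                      # pH profile of component 2

    def hh(x, pKa, lo, hi):                          # Henderson-Hasselbalch form
        return lo + (hi - lo) / (1.0 + 10.0**(pKa - x))

    popt, _ = curve_fit(hh, pH, amp, p0=[6.0, amp.min(), amp.max()])
    print("fitted pKa:", round(popt[0], 2))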
Abstract:
Three sites were cored on the landward slope of the Nankai margin of southwest Japan during Leg 190 of the Ocean Drilling Program. Sites 1175 and 1176 are located in a trench-slope basin that was constructed during the early Pleistocene (~1 Ma) by frontal offscraping of coarse-grained trench-wedge deposits. Rapid uplift elevated the substrate above the calcite compensation depth and rerouted a transverse canyon-channel system that had delivered most of the trench sediment during the late Pliocene (1.06-1.95 Ma). The basin's depth is now ~3000 to 3020 m below sea level. Clay-sized detritus (<2 µm) did not change significantly in composition during the transition from trench-floor to slope-basin environment. Relative mineral abundances for the two slope-basin sites average 36-37 wt% illite, 25 wt% smectite, 22-24 wt% chlorite, and 15-16 wt% quartz. Site 1178 is located higher up the landward slope at a water depth of 1741 m, ~70 km from the present-day deformation front. There is a pronounced discontinuity ~200 m below seafloor between muddy slope-apron deposits (Quaternary-late Miocene) and sandier trench-wedge deposits (late Miocene; 6.8-9.63 Ma). Clay minerals change downsection from an illite-chlorite assemblage (similar to Sites 1175 and 1176) to one that contains substantial amounts of smectite (average = 45 wt% of the clay-sized fraction; maximum = 76 wt%). Mixing in the water column homogenizes fine-grained suspended sediment eroded from the Izu-Bonin volcanic arc, the Izu-Honshu collision zone, and the Outer Zone of Kyushu and Shikoku, but the spatial balance among those contributors has shifted through time. Closure of the Central America Seaway at ~3 Ma was particularly important because it triggered intensification of the Kuroshio Current. With stronger and deeper flow of surface water toward the northeast, the flux of smectite from the Izu-Bonin volcanic arc was dampened and more detrital illite and chlorite were transported into the Shikoku-Nankai system from the Outer Zone of Japan.
Abstract:
The dependency of word similarity in vector space models on the frequency of words has been noted in a few studies, but has received very little attention. We study the influence of word frequency in a set of 10,000 randomly selected word pairs for a number of different combinations of feature weighting schemes and similarity measures. We find that the similarity of word pairs for all methods, except for the one using singular value decomposition to reduce the dimensionality of the feature space, is determined to a large extent by the frequency of the words. In a binary classification task of pairs of synonyms and unrelated words, we find that for all similarity measures the results can be improved when we correct for the frequency bias.
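The kind of comparison the study describes can be sketched in a few lines: cosine similarity of word vectors before and after SVD dimensionality reduction, correlated against word frequency. The toy co-occurrence counts, the PPMI weighting choice, and the rank k below are invented for illustration and are not the paper's experimental setup.

    # Sketch: does pair similarity track word frequency, with and
    # without SVD reduction? Toy data, illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    n_words, n_contexts = 50, 200
    freq = rng.zipf(1.5, n_words).astype(float)      # skewed word frequencies
    freq = np.minimum(freq, 1e5)                     # clip the heavy tail
    counts = rng.poisson(np.outer(freq, rng.random(n_contexts)))

    # PPMI weighting, one common feature-weighting scheme
    total = counts.sum()
    pw = counts.sum(1, keepdims=True) / total
    pc = counts.sum(0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (pw * pc))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

    def cos_matrix(M):
        X = M / np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)
        return X @ X.T

    sim_raw = cos_matrix(ppmi)
    U, s, Vt = np.linalg.svd(ppmi, full_matrices=False)
    k = 10
    sim_svd = cos_matrix(U[:, :k] * s[:k])           # rank-k word vectors

    iu = np.triu_indices(n_words, 1)
    pair_freq = np.log(freq[iu[0]] * freq[iu[1]])
    print("corr with frequency, raw PPMI:",
          round(np.corrcoef(pair_freq, sim_raw[iu])[0, 1], 2))
    print("corr with frequency, SVD-reduced:",
          round(np.corrcoef(pair_freq, sim_svd[iu])[0, 1], 2))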
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In geophysics there are several steps in the study of the Earth, one of which is the processing of seismic records. These records are obtained from observations made at the Earth's surface and provide information about the structure and composition of inaccessible regions at great depth. Most of the tools and techniques developed for such studies have been applied in academic projects. The central problem is that unwanted energy recorded by receivers, which carries no information related to the reflectors, can mask the useful information and/or generate erroneous information about the subsurface. This energy is known as unwanted seismic noise. Reducing this noise and enhancing the reflection signal without losing desirable signals is often a difficult problem. This project aims to remove ground-roll noise, which shows a pattern characterized by low frequency, low rate of decay, low velocity, and high amplitude. The Karhunen-Loève transform is a powerful tool for identifying patterns based on eigenvalues and eigenvectors. Together with the Karhunen-Loève transform we use the singular value decomposition, a well-suited mathematical technique for manipulating these data.
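A minimal Python sketch of SVD-based coherent-noise attenuation, in the spirit of the Karhunen-Loève approach described above: the synthetic "gather" mixes a laterally coherent low-frequency event (a stand-in for ground roll) with weaker reflections, and subtracting the leading singular image attenuates the coherent noise. Shapes and parameter values are illustrative, not from this project.

    # Sketch: remove the leading singular components of a seismic gather.
    import numpy as np

    rng = np.random.default_rng(3)
    n_traces, n_samples = 48, 500
    t = np.arange(n_samples)

    gather = np.zeros((n_traces, n_samples))
    gather += np.sin(2 * np.pi * 0.01 * t)       # coherent low-frequency noise
    for i in range(n_traces):                    # weak, moveout-varying reflections
        gather[i, 150 + 2 * i] += 0.3
    gather += 0.02 * rng.standard_normal(gather.shape)

    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    k = 1                                        # leading components to drop
    filtered = gather - (U[:, :k] * s[:k]) @ Vt[:k]

    print("energy before:", round(np.sum(gather**2), 1),
          "after:", round(np.sum(filtered**2), 1))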
Abstract:
Matrix factorization (MF) has evolved into one of the best practices for handling sparse data in the field of recommender systems. Funk singular value decomposition (SVD), a variant of MF, is a state-of-the-art method that contributed to winning the Netflix Prize competition, and it is widely used, with modifications, in present-day recommender-systems research. With data points potentially growing at very high velocity, it is prudent to devise newer methods that can handle such data more accurately and efficiently than Funk-SVD in the context of recommender systems. In view of the growing number of data points, I propose a latent factor model that caters to both accuracy and efficiency by reducing the number of latent features of either users or items, making it less complex than Funk-SVD, where the latent features of users and items are equal in number and often larger. A comprehensive empirical evaluation of accuracy on two publicly available datasets, Amazon and ml-100k, reveals comparable accuracy and lower complexity of the proposed methods relative to Funk-SVD.
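For reference, plain Funk-SVD can be written as a few lines of stochastic gradient descent over observed ratings. The toy ratings, factor count, and hyperparameters below are illustrative only; this is the baseline method, not the reduced-latent-feature model the abstract proposes.

    # Sketch: Funk-SVD trained by SGD on a toy rating list.
    import numpy as np

    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0),
               (1, 2, 1.0), (2, 0, 4.0), (2, 2, 2.0)]   # (user, item, rating)
    n_users, n_items, k = 3, 3, 2                       # k latent features

    rng = np.random.default_rng(4)
    P = 0.1 * rng.standard_normal((n_users, k))         # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))         # item factors

    lr, reg = 0.01, 0.05                                # learning rate, L2 penalty
    for epoch in range(200):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])

    print("predicted rating (user 0, item 2):", round(float(P[0] @ Q[2]), 2))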
Abstract:
A finite-strain solid–shell element is proposed. It is based on least-squares in-plane assumed strains and assumed natural transverse shear and normal strains. The singular value decomposition (SVD) is used to define local (integration-point) orthogonal frames of reference solely from the Jacobian matrix. The complete finite-strain formulation is derived and tested. Assumed strains obtained from least-squares fitting are an alternative to enhanced-assumed-strain (EAS) formulations and, in contrast with these, the result is an element that satisfies the patch test. Unlike the EAS case, there are no additional degrees of freedom, not even through static condensation. Least-squares fitting produces invariant finite-strain elements which are free of shear locking and amenable to incorporation in large-scale codes. With that goal, we use automatically generated code produced by AceGen and Mathematica. All benchmarks show excellent results, similar to the best available shell and hybrid solid elements, with significantly lower computational cost.
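One natural way to obtain an orthogonal frame from a (generally non-orthogonal) Jacobian via SVD is to take the rotation factor U Vᵀ; the sketch below shows this construction, though the paper's exact choice of frame may differ. The Jacobian entries are arbitrary sample values.

    # Sketch: orthogonal local frame from an integration-point Jacobian.
    import numpy as np

    J = np.array([[1.00, 0.20, 0.05],     # sample Jacobian, illustrative values
                  [0.10, 0.90, 0.15],
                  [0.00, 0.05, 1.10]])

    U, s, Vt = np.linalg.svd(J)
    R = U @ Vt                            # closest rotation to J (orthogonal)
    print("R orthogonal:", np.allclose(R @ R.T, np.eye(3)))
    print("det(R):", round(float(np.linalg.det(R)), 6))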
Abstract:
In this article, we describe a novel methodology for extracting semantic characteristics from protein structures using linear algebra, in order to compose structural signature vectors that can be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Treating proteins as documents and contacts as terms, we have built a retrieval system that is able to find conserved contacts in samples of the myoglobin fold family and to retrieve these proteins from among proteins of varied folds with a precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB entry, view and compare their contact maps, and browse their structures using a Jmol plug-in.
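The contacts-as-terms idea can be illustrated with a small LSI retrieval sketch: a contact-by-chain matrix is factored by SVD, a query chain is folded into the latent space, and chains are ranked by cosine similarity. The contact matrix and query vector below are invented, and the latent dimension k is an arbitrary choice.

    # Sketch: LSI retrieval over a toy contact-by-protein matrix.
    import numpy as np

    # rows = hydrophobic contacts ("terms"), cols = protein chains ("documents")
    C = np.array([[3, 2, 0, 0],
                  [2, 3, 0, 1],
                  [0, 0, 4, 3],
                  [0, 1, 3, 4],
                  [1, 0, 0, 0]], dtype=float)

    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    k = 2                                        # latent dimensions retained
    docs = Vt[:k].T                              # chains in LSI space

    q = np.array([2.0, 3.0, 0.0, 1.0, 0.0])     # query chain's contact vector
    q_lat = (q @ U[:, :k]) / s[:k]               # standard LSI query fold-in

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = [cos(q_lat, d) for d in docs]
    print("chains ranked by similarity:", list(np.argsort(scores)[::-1]))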
Abstract:
This paper presents an analysis of the numerical conditioning of the Hessian of the Lagrangian in the modified barrier function Lagrangian (MBFL) method and the primal-dual logarithmic barrier (PDLB) method, as obtained in the course of solving an optimal power flow (OPF) problem. The analysis is a comparative study based on the singular value decomposition (SVD) of those matrices. In the MBFL method the inequality constraints are treated by the modified barrier and PDLB approaches: they are transformed into equalities by introducing positive auxiliary variables and are perturbed by the barrier parameter. The first-order necessary conditions of the Lagrangian function are solved by Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The IEEE 14-, 162-, and 300-bus test systems were used in the comparative analysis. ©2007 IEEE.
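The core diagnostic here, comparing conditioning through singular values, reduces to computing the ratio of the largest to the smallest singular value. The sketch below does this for two random stand-in matrices, not actual OPF Hessians; the artificial column scaling is just a way to manufacture ill-conditioning.

    # Sketch: conditioning comparison via singular values.
    import numpy as np

    rng = np.random.default_rng(5)
    H1 = rng.standard_normal((100, 100))
    H2 = H1 * np.logspace(0, 8, 100)     # same matrix with badly scaled columns

    for name, H in (("well-scaled", H1), ("ill-scaled", H2)):
        s = np.linalg.svd(H, compute_uv=False)
        print(name, "condition number = %.3e" % (s[0] / s[-1]))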
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and of the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests, but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained, together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
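A compositional SVD of the staying-in-the-simplex kind is commonly computed on centred log-ratio (clr) transformed data; the sketch below shows that step on a small invented composition matrix. The specimens and parts are placeholders, not the Scottish limestone data.

    # Sketch: centred log-ratio transform followed by SVD.
    import numpy as np

    X = np.array([[0.60, 0.25, 0.10, 0.05],     # rows: specimens
                  [0.55, 0.30, 0.10, 0.05],     # cols: parts, each row sums to 1
                  [0.20, 0.20, 0.35, 0.25],
                  [0.15, 0.25, 0.40, 0.20]])

    logX = np.log(X)
    clr = logX - logX.mean(axis=1, keepdims=True)   # centred log-ratio transform
    clr -= clr.mean(axis=0)                         # centre over specimens

    U, s, Vt = np.linalg.svd(clr, full_matrices=False)
    print("share of compositional variability per component:",
          np.round(s**2 / np.sum(s**2), 3))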
Abstract:
We present a new unifying framework for investigating throughput-WIP (work-in-process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing-intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing-returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
Abstract:
Although correspondence analysis is now widely available in statistical software packages and applied in a variety of contexts, notably the social and environmental sciences, there are still some misconceptions about this method, as well as unresolved issues that remain controversial to this day. In this paper we hope to settle these matters, namely (i) the way CA measures variance in a two-way table and how to compare variances between tables of different sizes, (ii) the influence, or rather lack of influence, of outliers in the usual CA maps, (iii) the scaling issue and the biplot interpretation of maps, (iv) whether or not to rotate a solution, and (v) the statistical significance of results.
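The variance measure in point (i) is the total inertia, which falls directly out of the SVD computation underlying correspondence analysis. The sketch below shows that computation on a small invented contingency table; it illustrates the standard textbook algorithm, not this paper's specific proposals.

    # Sketch: correspondence analysis of a two-way table via SVD.
    import numpy as np

    N = np.array([[20.0, 10.0,  5.0],    # invented contingency table
                  [ 5.0, 15.0, 10.0],
                  [10.0,  5.0, 20.0]])

    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals

    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    print("total inertia (variance):", round(float(np.sum(s**2)), 4))
    rows = (U * s) / np.sqrt(r)[:, None] # principal row coordinates
    print("row coordinates (2D):\n", np.round(rows[:, :2], 3))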