972 results for Laplace-Metropolis estimator
Abstract:
Liquid-liquid extraction is a separation technique widely used in chemical engineering. It offers high selectivity, good separation performance, and broad adaptability. The two-phase flow and interphase mass transfer in liquid-liquid extraction are extremely complex, and many factors, including the density difference between the phases, viscosity, mutual solubility, interfacial tension, and system purity, strongly influence the process. The Marangoni effect is an important phenomenon in liquid-liquid extraction, yet among studies of the Marangoni effect in droplet mass transfer in liquid-liquid systems, no numerical simulation work has been reported to date. This thesis presents a numerical simulation of the motion and mass transfer of a single droplet in an immiscible medium. The problem is treated as axisymmetric, and an orthogonal body-fitted coordinate transformation is used: by solving covariant Laplace equations, the solution domains inside and outside the droplet are mapped onto geometrically regular square regions in the computational plane. The algebraic equations obtained by discretizing the momentum equations in the orthogonal body-fitted coordinate system are solved with the ADI method of Ryskin et al. The convection-diffusion equation for concentration is discretized with the control-volume method proposed by Patankar, with the convective terms discretized by the power-law scheme.
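As a point of reference for the discretization mentioned above, Patankar's power-law scheme weights the diffusive conductance at a control-volume face by the function A(|P|) = max(0, (1 - 0.1|P|)^5) of the cell Peclet number. A minimal sketch (illustrative only, not code from the thesis):

```python
def power_law_coefficient(peclet):
    """Patankar's power-law weighting A(|P|) = max(0, (1 - 0.1*|P|)**5).

    P is the cell Peclet number (convective strength relative to diffusive
    strength) at a control-volume face; A(|P|) multiplies the diffusive
    conductance in the discretized convection-diffusion equation.
    """
    return max(0.0, (1.0 - 0.1 * abs(peclet)) ** 5)

# At small Peclet number the scheme approaches central differencing,
# and for |P| >= 10 the diffusive contribution is switched off entirely.
print(power_law_coefficient(0.5))   # ~0.774
print(power_law_coefficient(12.0))  # 0.0
```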
Abstract:
This thesis studies the transient response of interface cracks in viscoelastic materials to impact loading and their steady-state scattering of generalized plane waves. Compared with the extensively studied transient response and scattering of cracks in elastic materials, this work has three distinguishing features: (1) viscoelastic materials; (2) interface cracks; (3) generalized plane-wave incidence. These problems had not previously been studied for viscoelastic interface cracks; building on the corresponding results for elastic materials, this thesis addresses them for the first time.
For the transient response of a viscoelastic interface crack under impact loading, the Laplace transform is used to convert the convolution-type constitutive equations of the viscoelastic material into algebraic constitutive equations in the transform domain. The mixed boundary-value problem can then be treated in the Laplace domain as in the elastic case: it is reduced to dual integral equations for the crack opening displacement (COD), and, by introducing the crack dislocation density (CDD), the dual integral equations are converted into a singular integral equation (SIE) for the CDD. The SIE is solved numerically to obtain the dynamic stress intensity factor in the transform domain, and a numerical inverse Laplace transform then yields its time history. Two crack models are analyzed, the Griffith interface crack and the cylindrical circular-arc interface crack, under both antiplane and in-plane impact loading. For in-plane impact loading, a singularity analysis of the crack-tip stress field shows, for the first time, that the singularity exponent of the dynamic crack-tip stress field of a viscoelastic interface crack is not the constant 0.5 but, like the oscillation index, depends on the material parameters. A numerical example is given for antiplane impact loading: the time history of the dynamic stress intensity factor is computed and compared with the elastic result, showing that viscoelasticity not only lowers the overshoot peak but also delays it; when the viscous effect is strong, the overshoot may not appear at all.
For the scattering of generalized plane waves by a viscoelastic interface crack, the reflection and transmission of generalized plane waves at a perfect, crack-free interface are studied first, followed by the additional scattered field produced by the interface crack. Using complex-modulus theory for viscoelastic materials, the convolution-type constitutive equations are converted into algebraic constitutive equations in the frequency domain. As in the treatment of elastic plane waves, the problem is finally reduced to a singular integral equation for the CDD in the frequency domain. Solving the SIE numerically gives the scattered field, from which the dynamic stress intensity factors at the crack tips, the far-field displacement pattern functions, and the scattering cross section are obtained. The same two crack models are analyzed, with generalized SH-wave and P-wave incidence. For generalized plane P-wave incidence, the singularity analysis likewise shows that the singularity exponent of the crack-tip stress field is not the constant 0.5 but depends on the material parameters, as the oscillation index does. An asymptotic analysis of the far field scattered by the cylindrical crack shows that, in addition to the geometric attenuation factor, the far-field displacements and stresses contain a material attenuation factor; because of this factor the scattering cross section depends on the scattering radius, and a correction is proposed so that the cross section remains meaningful. A numerical example of generalized plane SH-wave incidence is given for the Griffith interface crack, and an example of generalized plane P-wave incidence for the cylindrical interface crack; the crack opening displacements and dynamic stress intensity factors are computed for different incidence angles and frequencies, and their dependence is analyzed.
The numerical solution of singular integral equations and the numerical inversion of the Laplace transform are the basic numerical tools of this thesis. Both are surveyed extensively and studied systematically: on the basis of comparative analysis, the existing methods are summarized in terms of principle, range of applicability, computational efficiency, advantages, and characteristics. The latest numerical method for singular integral equations, the piecewise continuous function method, is also tried, and its applicability and convenience are confirmed.
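The abstract relies on numerical inversion of the Laplace transform to recover time-domain stress intensity factors but does not name a specific scheme here; the Gaver-Stehfest method below is only one common choice, shown as an illustration rather than the thesis's method:

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), sampled at real s."""
    V = stehfest_coefficients(N)
    a = math.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check against a known pair: F(s) = 1/(s+1) has inverse f(t) = exp(-t).
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))  # close to exp(-1) ~ 0.368
```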
Abstract:
A sliding-mode position control for high-performance real-time applications of induction motors is developed in this work. The design also incorporates a simple flux estimator in order to avoid flux sensors. The proposed control scheme therefore has a low computational cost and can easily be implemented in real-time applications using a low-cost DSP processor. The stability of the controller under parameter uncertainties and load disturbances is analyzed using Lyapunov stability theory. Finally, simulation and experimental results show that the proposed controller with the proposed observer provides good trajectory tracking and is robust with respect to plant parameter variations and external load disturbances.
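As a rough illustration of the kind of control law involved (a generic sliding-mode position loop; the gains, names, and structure below are assumptions for illustration, not the induction-motor scheme of the paper):

```python
import numpy as np

def sliding_mode_torque(theta, omega, theta_ref, omega_ref, lam=20.0, k=5.0):
    """One evaluation of a generic sliding-mode position law (illustrative only).

    Sliding surface: s = (omega_ref - omega) + lam * (theta_ref - theta).
    The switching term k * sign(s) pushes the state onto s = 0, where the
    position error decays with time constant 1/lam despite bounded
    parameter uncertainty and load disturbances.
    """
    e = theta_ref - theta          # position tracking error
    e_dot = omega_ref - omega      # velocity tracking error
    s = e_dot + lam * e            # sliding variable
    return k * np.sign(s)          # commanded torque (discontinuous control)
```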
Abstract:
ICEM 2010
Abstract:
Advanced Fracture Mechanics (《高等断裂力学》) systematically presents the basic concepts, theoretical foundations, mechanical principles, and analytical methods of fracture mechanics, together with its experimental measurement and engineering applications. It explains in depth the novel academic ideas and original work of each major stage in the development of fracture mechanics, and integrates the creative contributions of Chinese researchers in several areas familiar to the authors. The book consists of 14 chapters. Chapter 1 reviews the historical background and development of fracture mechanics; Chapters 2-5 cover linear elastic fracture mechanics; Chapters 6-8 treat elastic-plastic fracture mechanics; Chapters 9 and 10 cover fatigue crack growth and interface cracks, respectively; Chapters 11-14 address the elastodynamics of cracked bodies and dynamic crack propagation. The book is intended for researchers and engineers working on fracture mechanics and its applications, and can also serve as a reference for senior undergraduate and graduate students in mechanics.
Abstract:
Droplets are a ubiquitous form of matter in nature, and discrete microfluidics (droplet-based microfluidics) has become one of the important directions in microfluidic technology in recent years. The study of droplet generation, actuation, transport, coalescence, splitting, and collision has important applications in aerospace, micro/nano systems, electronic displays, computer cooling, inkjet printing, and biomedicine. A droplet is a soft-matter object whose mechanical behavior lies between that of a fluid and that of a solid; its solid-like behavior arises from the Laplace pressure produced by curvature and the constraint of surface tension. The study of droplet dynamics therefore has considerable academic value.
The main work of this thesis is an experimental study of droplet dynamics on substrates of polydimethylsiloxane (PDMS), a material commonly used in bio-microelectromechanical systems (Bio-MEMS) and in flexible micro/nano electronic fabrication.
A droplet is an ideal microreactor: many experiments can be integrated within one or several droplets, and the dynamics of the droplet itself strongly affect the efficiency and quality of such experiments. Droplet manipulation techniques include multiphase-flow methods, electrowetting, thermocapillary methods, and dielectrophoresis. Droplet dynamics are strongly influenced by the substrate: the fundamental frequency, the vibration modes, and the motion all vary with the substrate's wettability and elastic modulus.
PDMS plays an increasingly important role in Bio-MEMS and flexible micro/nano electronic fabrication, particularly through its wettability and electrowetting properties. At present PDMS is mainly used in Bio-MEMS to fabricate microchannels. Two common problems are that PDMS is hydrophobic, which hinders fluid transport, and that liquids mix poorly at such low Reynolds numbers, giving low reaction efficiency. This thesis proposes sputtering a nanometer-thick gold layer onto the PDMS surface to reduce its apparent contact angle. For a particular amount of sputtered gold, this produces multi-scale compressive-stress wrinkles on the PDMS surface, which are very useful for flexible micro/nano electronic fabrication and for accelerating mixing in microchannels.
Electrowetting is another way to render PDMS hydrophilic, and experiments show that PDMS has good electrowetting properties; electrowetting is also currently the main means of manipulating droplets. Common problems are that dielectric breakdown prevents the drive voltage from being lowered, that droplet mixing is inefficient at low Reynolds numbers, and that slight electrolysis corrodes the electrodes and contaminates the liquid sample. This thesis proposes a contact-mode electrowetting scheme: as the electrode gradually touches the droplet, the droplet undergoes an instability oscillation at around a hundred hertz, and after it stabilizes the contact angle decreases. This electrowetting mode effectively raises the critical breakdown voltage, avoids contamination of the droplet by corroded electrodes, and speeds up droplet mixing. The characteristic instability time is on the order of 10 ms, which is precisely the fastest response time of an electrowetting device for droplets of the characteristic size used here (about 1 mm). The instability time is estimated with droplet-vibration theory, and the influence of substrate wettability on the droplet vibration process is also considered.
Droplet start-up is an important step in electrowetting-based droplet manipulation. Droplets are usually actuated on discontinuous substrates by potential changes generated by logic circuits, and both the design of the logic circuits and the fabrication of the drive devices are complex. This thesis demonstrates, for the first time, droplet start-up on a superhydrophobic biological sample, the lotus leaf, with start-up speeds of tens of millimeters per second and start-up times on the order of 10 ms. The lotus-leaf structure was successfully replicated in PDMS to obtain a superhydrophobic PDMS surface whose wettability is close to that of the lotus leaf.
In digital microfluidic droplet manipulation, droplet coalescence involves droplet collision, and droplet-impact cooling experiments have already been carried out in MEMS systems. Understanding droplet collision is also important in many fields, including biology, chemistry, inkjet printing, and atmospheric physics. This thesis experimentally studies the influence of the Weber number and the capillary number on the droplet collision process, and four different response modes are obtained by varying these two numbers.
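For orientation, the dimensionless groups and the Laplace pressure referred to above are simple to evaluate; the numbers below are generic values for a millimetre-scale water droplet, not data from the thesis:

```python
def weber_number(rho, v, d, sigma):
    """We = rho * v**2 * d / sigma: droplet inertia relative to surface tension."""
    return rho * v ** 2 * d / sigma

def capillary_number(mu, v, sigma):
    """Ca = mu * v / sigma: viscous forces relative to surface tension."""
    return mu * v / sigma

def laplace_pressure(sigma, r):
    """Pressure jump across a spherical droplet surface: dP = 2 * sigma / r."""
    return 2.0 * sigma / r

# A 1 mm water droplet moving at 0.5 m/s (illustrative values only):
print(weber_number(1000.0, 0.5, 1e-3, 0.072))   # ~3.5
print(capillary_number(1e-3, 0.5, 0.072))       # ~0.007
print(laplace_pressure(0.072, 0.5e-3))          # ~288 Pa
```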
Abstract:
The involvement of women in the marketing of frozen fish in Lagos State (Nigeria) was examined in this study. Two hundred questionnaires were administered to fish marketers in five markets randomly selected within the Lagos metropolis on the basis of their storage capacities: Balogun (500 tonnes), Idumagbo and Idumota (250 tonnes each), and Obalende and Epetedo (37.5 tonnes each). The results show that a large percentage of women (64.2%) are actively involved in the marketing of frozen fish in the study areas. Over 56% of these traders are retailers and about 33% are wholesalers, and more than 91% of the marketers were found to be literate. A high percentage of the frozen fish is imported (68%), with 27% from coastal fishing and 5% from riverine fishing. The commonest fish in the markets were titus (34%), sardine (32%), hake (19%), catfish (10%) and argentine (5%); catfish has the highest profit margin. The greatest problem facing these traders is the lack of modern storage facilities and, where facilities are available, the erratic power supply.
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
This thesis also presents a new method for action selection involving touch. This next-best-touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information-gain metric based on a probabilistic belief suitable for the task. An estimation framework maintains this belief over time, and kinesthetic measurements such as contact and tactile measurements update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically of a door handle on a door. The next-best-touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates these parameters. Simulation results are then presented for localizing a screwdriver and determining one of its parameters.
Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.
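A minimal sketch of the expected-information-gain selection described above, assuming a discretized pose belief and a binary contact/no-contact measurement model (the function names and discretization are illustrative, not the thesis's implementation):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_best_touch(belief, actions, likelihood):
    """Pick the touch action with the largest expected information gain.

    belief:     probabilities over discretized object poses (sums to 1)
    actions:    candidate touch actions
    likelihood: likelihood(z, a) -> array of p(z | pose, a) over all poses,
                for each measurement outcome z in {0, 1} (no contact / contact)
    """
    h0 = entropy(belief)
    best_action, best_gain = None, -np.inf
    for a in actions:
        gain = h0
        for z in (0, 1):
            lz = likelihood(z, a)              # p(z | pose, a)
            pz = float(np.dot(lz, belief))     # predictive probability of outcome z
            if pz <= 0.0:
                continue
            posterior = lz * belief / pz       # Bayes update for this outcome
            gain -= pz * entropy(posterior)    # expected posterior entropy
        if gain > best_gain:
            best_action, best_gain = a, gain
    return best_action, best_gain
```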
Abstract:
Some aspects of wave propagation in thin elastic shells are considered. The governing equations are derived by a method which makes their relationship to the exact equations of linear elasticity quite clear. Finite wave propagation speeds are ensured by the inclusion of the appropriate physical effects.
The problem of a constant pressure front moving with constant velocity along a semi-infinite circular cylindrical shell is studied. The behavior of the solution immediately under the leading wave is found, as well as the short time solution behind the characteristic wavefronts. The main long time disturbance is found to travel with the velocity of very long longitudinal waves in a bar and an expression for this part of the solution is given.
When a constant moment is applied to the lip of an open spherical shell, there is an interesting effect due to the focusing of the waves. This phenomenon is studied and an expression is derived for the wavefront behavior for the first passage of the leading wave and its first reflection.
For the two problems mentioned, the method used involves reducing the governing partial differential equations to ordinary differential equations by means of a Laplace transform in time. The information sought is then extracted by doing the appropriate asymptotic expansion with the Laplace variable as parameter.
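As a schematic of this strategy, using the one-dimensional wave equation in place of the shell equations: with quiescent initial data, the Laplace transform in time converts the partial differential equation into an ordinary differential equation in x,

$$
w_{tt} = c^{2} w_{xx}
\;\longrightarrow\;
s^{2}\hat{w}(x,s) = c^{2}\hat{w}_{xx}(x,s),
\qquad
\hat{w}(x,s) = \hat{w}(0,s)\, e^{-sx/c},
$$

and expanding $\hat{w}(0,s)$ for large $s$ and inverting term by term yields the behavior just behind the wavefront $t = x/c$.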
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate $m/\Lambda_{\overline{MS}} = 3.52(6)$ at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
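For concreteness, a serial Metropolis sweep for the standard-action O(3) model looks roughly as follows (a minimal illustration of the local update that is interleaved with overrelaxation in the thesis, not the vectorized production code; the proposal width is an arbitrary choice):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng, delta=0.5):
    """One serial Metropolis sweep of the 2d O(3) model with the standard action.

    spins: array of shape (L, L, 3) holding unit vectors; the energy is
    E = -sum over nearest-neighbour bonds of s_x . s_y, and a proposed move
    is accepted with probability min(1, exp(-beta * dE)).
    """
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            s_old = spins[x, y].copy()
            s_new = s_old + delta * rng.normal(size=3)   # propose a nearby spin
            s_new /= np.linalg.norm(s_new)               # project back onto the sphere
            nbr_sum = (spins[(x + 1) % L, y] + spins[(x - 1) % L, y]
                       + spins[x, (y + 1) % L] + spins[x, (y - 1) % L])
            dE = -np.dot(s_new - s_old, nbr_sum)         # change in bond energy
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[x, y] = s_new
    return spins
```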
We also use the cluster Monte Carlo algorithms, which are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two dimensional Ising spin model.
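The labeling task itself can be stated compactly with a serial union-find pass; the SIMD algorithms devised in the thesis are parallel reformulations of this same task, so the sketch below is only the sequential baseline:

```python
def label_clusters(bond_x, bond_y):
    """Sequential union-find labeling of bond-connected clusters on an L x L grid.

    bond_x[x][y] is True if the bond from site (x, y) to (x+1, y) is activated,
    bond_y[x][y] if the bond to (x, y+1) is activated (open boundaries for brevity).
    Returns an L x L list of cluster labels (the root site index of each cluster).
    """
    L = len(bond_x)
    parent = list(range(L * L))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L and bond_x[x][y]:
                union(i, (x + 1) * L + y)
            if y + 1 < L and bond_y[x][y]:
                union(i, x * L + (y + 1))

    return [[find(x * L + y) for y in range(L)] for x in range(L)]
```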
Finally we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT, showing that it is much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for $m/\Lambda_{\overline{MS}}$ at β = 2.14, 2.26, 2.38 and 2.50. To three loops, $m/\Lambda_{\overline{MS}} = 3.047(35)$ at β = 2.50, which is very close to the exact value $m/\Lambda_{\overline{MS}} = 2.943$. Our last point, at β = 2.62, disagrees with this estimate however.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
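For reference, the geometric mean decomposition that GGMD generalizes factors a rank-$k$ channel matrix $\mathbf{H}$ with singular values $\sigma_1,\dots,\sigma_k$ as

$$
\mathbf{H} = \mathbf{Q}\,\mathbf{R}\,\mathbf{P}^{H}, \qquad
[\mathbf{R}]_{ii} = \Bigl(\prod_{j=1}^{k} \sigma_j\Bigr)^{1/k}, \quad i = 1,\dots,k,
$$

where $\mathbf{Q}$ and $\mathbf{P}$ have orthonormal columns and $\mathbf{R}$ is upper triangular; it is the equal diagonal of $\mathbf{R}$ that gives every DFE subchannel the same SINR, which is why no bit allocation is needed.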
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is up to O(M^2) theoretically. With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
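The difference co-array idea mentioned above can be illustrated in a few lines: the distinct pairwise differences of the pilot positions act as virtual pilots, and a well-chosen placement of M physical pilots yields on the order of M^2 distinct lags (the pilot pattern below is purely illustrative, not the alternating placement of the thesis):

```python
import numpy as np

def difference_coarray(pilot_positions):
    """Distinct lags {p_i - p_j} generated by a set of physical pilot tones."""
    p = np.asarray(pilot_positions)
    return np.unique((p[:, None] - p[None, :]).ravel())

# An illustrative sparse pilot pattern: 5 physical pilots give 21 distinct lags.
print(difference_coarray([0, 1, 4, 9, 11]))
```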
Abstract:
Cosmic birefringence (CB), a rotation of the photon-polarization plane in vacuum, is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.
In this thesis, we develop a minimum-variance estimator formalism for reconstructing direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of a cross-correlation between CMB temperature and the rotation for detecting the CB signal, for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.
We then consider other parity-violating physics beyond standard models, in particular a chiral inflationary gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance-limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.
We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.
In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.
Finally, we discuss prospects for related future investigations.
Abstract:
This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices.
Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
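A minimal sketch of nonuniform randomized sparsification, with entries kept independently and rescaled so that the sparsifier is unbiased (the keep-probability rule here is an arbitrary illustration, not a scheme analyzed in the thesis):

```python
import numpy as np

def randomized_sparsify(A, keep_prob, rng):
    """Keep entry A[i, j] with probability keep_prob[i, j] and rescale it by
    1 / keep_prob[i, j], so the sparsified matrix satisfies E[S] = A."""
    mask = rng.random(A.shape) < keep_prob
    S = np.zeros_like(A)
    S[mask] = A[mask] / keep_prob[mask]
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
# Keep large entries with higher probability (an arbitrary illustrative rule).
p = np.clip(np.abs(A) / np.abs(A).max(), 0.05, 1.0)
S = randomized_sparsify(A, p, rng)
print(np.mean(S != 0.0))                                 # fraction of entries kept
print(np.linalg.norm(A - S, 2) / np.linalg.norm(A, 2))   # relative spectral-norm error
```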
Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
The last class of algorithms considered comprises "sketching" algorithms for symmetric positive-semidefinite (SPSD) matrices. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
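One common SPSD sketching scheme, column-sampling Nystrom approximation, illustrates the idea; the thesis evaluates a family of such sketches, so this particular variant and the test matrix are only illustrative:

```python
import numpy as np

def nystrom_sketch(A, k, rng):
    """Column-sampling Nystrom sketch of a PSD matrix: A ~ C pinv(W) C^T,
    where C holds k sampled columns and W the corresponding principal submatrix."""
    idx = rng.choice(A.shape[0], size=k, replace=False)
    C = A[:, idx]
    W = A[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
A = X @ X.T                                    # a rank-20 PSD test matrix
A_hat = nystrom_sketch(A, 40, rng)
# Nearly exact here because 40 random columns almost surely span the rank-20 range.
print(np.linalg.norm(A - A_hat, 2) / np.linalg.norm(A, 2))
```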
In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
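For context (a known reference point that results of this type extend, not a bound proved in the thesis), the classical matrix Chernoff inequality controls the largest eigenvalue of a sum of independent random PSD matrices $\mathbf{X}_k$ of dimension $d$ with $\lambda_{\max}(\mathbf{X}_k) \le R$ almost surely:

$$
\Pr\Bigl\{\lambda_{\max}\Bigl(\sum_{k}\mathbf{X}_k\Bigr) \ge (1+\delta)\,\mu_{\max}\Bigr\}
\;\le\; d \left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu_{\max}/R},
\qquad \delta \ge 0,\quad
\mu_{\max} := \lambda_{\max}\Bigl(\sum_{k}\mathbb{E}\,\mathbf{X}_k\Bigr).
$$

Extensions of the kind described above provide analogous control for the interior eigenvalues as well.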
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compares favorably with that of existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
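For a scalar diffusion $dX = a(X)\,dt + b(X)\,dW$, the stationary probability density $p(x)$ being approximated satisfies the time-independent Fokker-Planck equation, shown here in one dimension only as an illustration of the object of the approximation:

$$
0 \;=\; -\frac{d}{dx}\bigl[a(x)\,p(x)\bigr] \;+\; \frac{1}{2}\,\frac{d^{2}}{dx^{2}}\bigl[b^{2}(x)\,p(x)\bigr].
$$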
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases it may be computationally expensive to transform the variables, and an approximation is therefore also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and results are compared with existing approximations.
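In its standard form, the asymptotic approximation referred to above reads as follows: if $g$ has a unique interior minimum at $\theta^{*}$ with positive-definite Hessian there, then as the large parameter $\lambda$ grows,

$$
\int_{\mathbb{R}^{n}} h(\theta)\, e^{-\lambda g(\theta)}\, d\theta
\;\sim\;
h(\theta^{*})\, e^{-\lambda g(\theta^{*})}
\left(\frac{2\pi}{\lambda}\right)^{n/2}
\bigl[\det \nabla^{2} g(\theta^{*})\bigr]^{-1/2},
$$

so evaluating the multidimensional probability integral reduces to minimizing $g$, exactly as stated above.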