931 results for Generalized inverse
Abstract:
2000 Mathematics Subject Classification: Primary 46F25, 26A33; Secondary: 46G20
Abstract:
2000 Mathematics Subject Classification: 41A25, 41A27, 41A36.
Abstract:
2000 Mathematics Subject Classification: 15A29.
Abstract:
In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise-shaping for any centre frequency from DC to Nyquist. The proposed topologies are well-suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency, bandwidth as well as the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need to use faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators are designed using this technique. The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies are implemented in VHDL and synthesized on a Xilinx® Spartan™-3 Development Kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated, including the decimation process, using MATLAB routines developed by the author. The experiments undertaken by the author further validated the design methodology presented in the work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate either in single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
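As a rough illustration of the kind of behavioural-level simulation discussed above, the Python sketch below runs a conventional first-order lowpass sigma-delta modulator on a sinusoidal input. It is not the author's topology: the thesis concerns higher-order, time-interleaved, variable bandpass designs whose loop-filter coefficients are not reproduced here.

```python
# Minimal behavioural sketch of a first-order lowpass sigma-delta modulator.
# Illustrative only: the thesis concerns higher-order, time-interleaved,
# variable *bandpass* topologies with coefficient sets not shown here.
import numpy as np

def first_order_sdm(x):
    """Quantise the input sequence x with a single-bit, first-order loop."""
    v = 0.0                                       # integrator state
    y = np.zeros_like(x)
    for n, xn in enumerate(x):
        v += xn - (y[n - 1] if n > 0 else 0.0)    # integrate the error
        y[n] = 1.0 if v >= 0.0 else -1.0          # single-bit quantiser
    return y

fs, f0, N = 1.0, 0.01, 4096                       # normalised rates (assumed values)
t = np.arange(N)
x = 0.5 * np.sin(2 * np.pi * f0 * t / fs)         # test sinusoid
y = first_order_sdm(x)
spectrum = np.abs(np.fft.rfft(y * np.hanning(N))) # noise shaping visible here
```

Plotting `spectrum` shows the quantisation noise pushed away from the signal band; the generalized bandpass topologies relocate this noise-shaping notch to an arbitrary centre frequency between DC and Nyquist.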
Abstract:
This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
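To give a flavour of the fixed-point-style recovery algorithms mentioned above, the following Python sketch applies iterative soft-thresholding of singular values to a low-rank matrix completion problem; the thesis treats the tensor case (for example via unfoldings) together with recovery guarantees, and the step size, threshold and synthetic data here are illustrative assumptions only.

```python
# Sketch of a fixed-point scheme (iterative soft-thresholding of singular
# values) for low-rank *matrix* completion; the tensor algorithms in the
# thesis generalise this idea.  mu, the step size and the synthetic data
# are illustrative assumptions, not values from the work.
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r truth
mask = rng.random((n, n)) < 0.4                                 # observed entries
Y = np.where(mask, M, 0.0)

def svt(A, tau):
    """Soft-threshold the singular values of A by tau."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X, mu, step = np.zeros_like(Y), 1.0, 1.0
for _ in range(200):
    grad = mask * (X - Y)                 # gradient of the data-fit term
    X = svt(X - step * grad, step * mu)   # proximal (low-rank) step

rel_err = np.linalg.norm(mask * (X - M)) / np.linalg.norm(mask * M)
```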
Abstract:
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model, which provides a mathematical representation of the rate of gaseous fuel production from condensed-phase fuels given a heat flux incident on the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied in this work to four common composites that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests, generated from the resultant models using the ThermaKin modeling environment, were compared to experimental data to independently validate the models.
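As a toy illustration of the reaction-kinetics component typically extracted from thermogravimetric data, the sketch below integrates a single first-order Arrhenius decomposition under a constant heating rate; the actual multi-reaction schemes, parameter values and ThermaKin models in this work are material-specific and not reproduced here.

```python
# Toy single-step, first-order Arrhenius decomposition under a constant
# heating rate, of the kind fitted to thermogravimetric (TGA) data.
# A, E and the heating rate are illustrative assumptions only.
import numpy as np

A, E, R = 1.0e12, 180e3, 8.314       # 1/s, J/mol, J/(mol K)  (assumed)
beta = 10.0 / 60.0                   # heating rate: 10 K/min expressed in K/s
T0, T1, dt = 300.0, 900.0, 0.05      # start/end temperature [K], time step [s]

m, t = 1.0, 0.0                      # normalised remaining mass, time
masses, temps = [], []
while T0 + beta * t < T1:
    T = T0 + beta * t
    dmdt = -A * np.exp(-E / (R * T)) * m   # first-order rate law
    m += dmdt * dt                          # explicit Euler step
    t += dt
    masses.append(m)
    temps.append(T)
# The resulting mass-loss curve (m versus T) is the kind of observable a
# kinetics fit would target.
```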
Abstract:
We study how the crossover exponent, phi, between directed percolation (DP) and compact directed percolation (CDP) behaves as a function of the diffusion rate in a model that generalizes the contact process. Our conclusions are based on results obtained from perturbative series expansions and numerical simulations, and are consistent with a value phi = 2 for finite diffusion rates and phi = 1 in the limit of infinite diffusion rate.
Abstract:
We consider a nontrivial one-species population dynamics model with finite and infinite carrying capacities. Time-dependent intrinsic and extrinsic growth rates are considered in these models. Through the model's per capita growth rate we obtain a heuristic general procedure for generating scaling functions that collapse data onto a simple linear behavior even if an extrinsic growth rate is included. With this data collapse, all the models studied become independent of the parameters and the initial condition. Analytical solutions are found when time-dependent coefficients are considered. These solutions allow us to identify nontrivial transitions between species extinction and survival and to calculate the transitions' critical exponents. Considering an extrinsic growth rate as a cancer treatment, we show that the relevant quantity depends not only on the intensity of the treatment, but also on when the cancerous cell growth is maximum.
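The data-collapse idea can be illustrated, in a much simpler setting than the paper's, with the classical logistic (Verhulst) model: plotting ln(K/p - 1) - ln(K/p0 - 1) against r*t sends every parameter choice onto the same straight line of slope -1. The Python sketch below (with arbitrary parameter values) checks this numerically; the paper's scaling functions also accommodate time-dependent and extrinsic growth rates.

```python
# Data collapse for the plain logistic model: every (r, K, p0) falls on the
# single line y = -x after rescaling.  Parameter values are arbitrary.
import numpy as np

def logistic(t, r, K, p0):
    return K / (1.0 + (K / p0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 10.0, 200)
for r, K, p0 in [(0.5, 1.0, 0.05), (1.2, 3.0, 0.2), (2.0, 10.0, 0.5)]:
    p = logistic(t, r, K, p0)
    x = r * t                                          # rescaled time
    y = np.log(K / p - 1.0) - np.log(K / p0 - 1.0)     # rescaled population
    assert np.allclose(y, -x, atol=1e-6)               # collapse onto y = -x
```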
Abstract:
This article focuses on the identification of the number of paths of different lengths between pairs of nodes in complex networks and on how these paths can be used for the characterization of topological properties of theoretical and real-world complex networks. This analysis revealed that the number of paths can provide a better discrimination of network models than traditional network measurements. In addition, the analysis of real-world networks suggests that long-range connectivity tends to be limited in these networks and may be strongly related to network growth and organization.
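As a small illustration of the quantity in question, the following sketch counts the simple paths of each length (up to a cutoff) between a pair of nodes in a toy random graph using networkx; the graph model, node pair and cutoff are arbitrary choices, not those of the article.

```python
# Count simple paths of each length between a pair of nodes (toy example).
from collections import Counter
import networkx as nx

G = nx.erdos_renyi_graph(n=50, p=0.08, seed=1)   # toy network model
source, target, max_len = 0, 1, 5

lengths = Counter(
    len(path) - 1                                 # number of edges in the path
    for path in nx.all_simple_paths(G, source, target, cutoff=max_len)
)
# lengths[k] = number of simple paths of length k between source and target.
```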
Abstract:
In the last decade the Sznajd model has been successfully employed in modeling some properties and scale features of both proportional and majority elections. We propose a version of the Sznajd model with a generalized bounded confidence rule, a rule that limits the convincing capability of agents and that is essential to allow coexistence of opinions in the stationary state. With an appropriate choice of parameters it can be reduced to previous models. We solved this model both in a mean-field approach (for an arbitrary number of opinions) and numerically in a Barabási-Albert network (for three and four opinions), studying the transient and the possible stationary states. We built the phase portrait for the special cases of three and four opinions, defining the attractors and their basins of attraction. Through this analysis, we were able to understand and explain discrepancies between mean-field and simulation results obtained in previous works for the usual Sznajd model with bounded confidence and three opinions. Both the dynamical-system approach and our generalized bounded confidence rule are quite general, and we think they can be useful for the understanding of other similar models.
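A minimal behavioural sketch of a Sznajd-type update with a bounded-confidence restriction on a Barabási-Albert network is given below; the exact update rule, confidence rule and parameters used in the paper differ, so this only conveys the general mechanism.

```python
# Behavioural sketch of a Sznajd-type update with a bounded-confidence rule
# on a Barabási-Albert network.  M, epsilon, the network size and the update
# rule details are illustrative assumptions, not the paper's choices.
import random
import networkx as nx

random.seed(0)
M, epsilon, steps = 4, 1, 200_000          # opinions, confidence range, updates
G = nx.barabasi_albert_graph(n=500, m=3, seed=0)
opinion = {v: random.randrange(M) for v in G}

nodes = list(G.nodes)
for _ in range(steps):
    i = random.choice(nodes)
    neigh_i = list(G[i])
    if not neigh_i:
        continue
    j = random.choice(neigh_i)
    if opinion[i] == opinion[j]:                     # agreeing Sznajd pair
        for k in set(G[i]) | set(G[j]):
            # bounded confidence: only "close enough" opinions can be convinced
            if abs(opinion[k] - opinion[i]) <= epsilon:
                opinion[k] = opinion[i]
# The long-time distribution of `opinion` values indicates which opinions
# coexist (if any) in the stationary state.
```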
Abstract:
The Sznajd model is a sociophysics model that mimics the propagation of opinions in a closed society, where the interactions favor groups of agreeing people. It is based on the Ising and Potts ferromagnetic models and, although the original model used only linear chains, it has since been adapted to general networks. This model has a very rich transient, which has been used to model several aspects of elections, but its stationary states are always consensus states. In order to model more complex behaviors, we have, in a recent work, introduced the idea of biases and prejudices into the Sznajd model by generalizing the bounded confidence rule, which is common to many continuous opinion models, to what we called confidence rules. In that work we found that the mean-field version of this model (corresponding to a complete network) allows for stationary states where noninteracting opinions survive, but never for the coexistence of interacting opinions. In the present work, we provide networks that allow for the coexistence of interacting opinions for certain confidence rules. Moreover, we show that the model does not become inactive; that is, the opinions keep changing even in the stationary regime. This is an important result in the context of understanding how a rule that breeds local conformity is still able to sustain global diversity while avoiding a frozen stationary state. We also provide results that give some insight into how this behavior approaches the mean-field behavior as the networks are changed.
Abstract:
A simple and completely general representation of the exact exchange-correlation functional of density-functional theory is derived from the universal Lieb-Oxford bound, which holds for any Coulomb-interacting system. This representation leads to an alternative point of view on popular hybrid functionals, providing a rationale for why they work and how they can be constructed. A similar representation of the exact correlation functional allows the construction of fully nonempirical hyper-generalized-gradient approximations (HGGAs), radically departing from established paradigms of functional construction. Numerical tests of these HGGAs for atomic and molecular correlation energies and molecular atomization energies show that even simple HGGAs match or outperform state-of-the-art correlation functionals currently used in solid-state physics and quantum chemistry.
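For reference, the Lieb-Oxford bound invoked above is commonly quoted (in Hartree atomic units) as
\[
E_{xc}[n] \;\ge\; -\,C_{\mathrm{LO}} \int n^{4/3}(\mathbf r)\, d^3 r ,
\qquad C_{\mathrm{LO}} \approx 1.68 ,
\]
or, equivalently, $E_{xc}[n] \ge \lambda\, E_x^{\mathrm{LDA}}[n]$ with $\lambda \approx 2.27$; the constant has been tightened over the years, so these values should be read as approximate.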
Abstract:
Inverse analysis is currently an important subject of study in several fields of science and engineering. The identification of physical and geometric parameters using experimental measurements is required in many applications. In this work a boundary element formulation to identify boundary and interface values as well as material properties is proposed. In particular, the proposed formulation is dedicated to identifying material parameters when a cohesive crack model is assumed for 2D problems. A computer code is developed and implemented using the BEM multi-region technique and regularisation methods to perform the inverse analysis. Several examples are shown to demonstrate the efficiency of the proposed model.
Abstract:
The Generalized Finite Element Method (GFEM) is employed in this paper for the numerical analysis of three-dimensional solids under nonlinear behavior. A brief summary of the GFEM as well as a description of the formulation of the hexahedral element based on the proposed enrichment strategy are initially presented. Next, in order to introduce the nonlinear analysis of solids, two constitutive models are briefly reviewed: Lemaitre's model, in which damage and plasticity are coupled, and Mazars's damage model, suitable for concrete under increasing loading. Both models are employed in the framework of a nonlocal approach to ensure solution objectivity. In the numerical analyses carried out, a selective enrichment of the approximation at regions of concern in the domain (mainly those with high strain and damage gradients) is exploited. Such a possibility makes the three-dimensional analysis less expensive and practicable since re-meshing resources, characteristic of h-adaptivity, can be minimized. Moreover, the combination of three-dimensional analysis and selective enrichment provides a valuable tool for a better description of both damage and plastic strain distributions.
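For orientation, the classical Mazars damage law referred to above is usually written in the following standard textbook form (notation and the nonlocal regularisation may differ in the paper's implementation):
\[
\tilde\varepsilon=\sqrt{\sum_{i=1}^{3}\langle\varepsilon_i\rangle_+^{\,2}},\qquad
\boldsymbol\sigma=(1-D)\,\mathbb{C}:\boldsymbol\varepsilon,\qquad
D=\alpha_t D_t+\alpha_c D_c,
\]
\[
D_{t,c}=1-\frac{\varepsilon_{d0}\,(1-A_{t,c})}{\tilde\varepsilon}
-\frac{A_{t,c}}{\exp\!\bigl[B_{t,c}(\tilde\varepsilon-\varepsilon_{d0})\bigr]}
\quad\text{for }\tilde\varepsilon>\varepsilon_{d0},
\]
where $\langle\cdot\rangle_+$ denotes the positive part, $\alpha_t$ and $\alpha_c$ are weighting coefficients depending on the strain state, and $A_{t,c}$, $B_{t,c}$, $\varepsilon_{d0}$ are material parameters.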
Abstract:
Leakage reduction in water supply systems and distribution networks has been an increasingly important issue in the water industry, since leaks and ruptures result in major physical and economic losses. Hydraulic transient solvers can be used in system operational diagnosis, namely for leak detection purposes, due to their capability to describe the dynamic behaviour of the systems and to provide substantial amounts of data. In this research work, the association of hydraulic transient analysis with an optimisation model, through inverse transient analysis (ITA), has been used for leak detection and location in an experimental facility containing PVC pipes. Observed transient pressure data have been used for testing ITA. A key factor for the success of the leak detection technique used is the accurate calibration of the transient solver, namely adequate boundary conditions and the description of energy dissipation effects, since PVC pipes are characterised by a viscoelastic mechanical response. Results have shown that leaks were located with an accuracy of between 4 and 15% of the total length of the pipeline, depending on the discretisation of the system model.
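Schematically, ITA amounts to minimising the mismatch between observed and simulated transient pressures over the candidate leak parameters. The Python sketch below uses a toy forward model as a stand-in for the calibrated transient solver (which in this work must also capture the viscoelastic pipe-wall behaviour); all function names and parameter values are illustrative assumptions.

```python
# Schematic inverse transient analysis (ITA) for leak localisation.
# The forward model is a toy stand-in for a calibrated hydraulic transient
# solver; all parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 2.0, 400)                  # time vector [s]

def simulate_pressures(leak_location, leak_size):
    """Toy forward model: a damped pressure oscillation whose damping and
    reflection pattern depend on the leak parameters."""
    damping = 0.5 + 5.0 * leak_size
    return np.exp(-damping * t) * np.cos(2 * np.pi * (1.0 + leak_location) * t)

true_params = (0.37, 0.08)                      # synthetic "true" leak
p_obs = simulate_pressures(*true_params) \
        + 0.01 * np.random.default_rng(0).standard_normal(t.size)

def misfit(params):
    """Least-squares mismatch between simulated and observed pressures."""
    return float(np.sum((simulate_pressures(*params) - p_obs) ** 2))

bounds = [(0.0, 1.0), (0.0, 0.2)]               # leak location (fraction), leak size
result = differential_evolution(misfit, bounds, seed=0)
# result.x should recover values close to true_params.
```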