932 results for Generalized linear models
Abstract:
Purely data-driven approaches for machine learning present difficulties when data are scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modeling with a physical model of the system. We show how different, physically inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from motion capture, computational biology, and geostatistics.
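The flavour of the approach can be sketched numerically (a toy, assumed setup, not the paper's code): draw a latent force from a Gaussian process, push it through a simple mechanistic ODE, and observe that the output inherits a physically informed covariance.

```python
import numpy as np

# Hypothetical sketch: latent force f ~ GP(0, k_SE) driving a first-order
# ODE  dx/dt = -lam * x + f(t).  Sampling f and integrating the ODE many
# times exposes the physically inspired covariance of the output x(t).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 100)
dt = t[1] - t[0]

def k_se(a, b, ell=0.5):
    """Squared-exponential kernel for the latent force."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = k_se(t, t) + 1e-8 * np.eye(t.size)    # jitter for numerical stability
L = np.linalg.cholesky(K)
lam = 1.0                                 # assumed decay rate

samples = []
for _ in range(2000):
    f = L @ rng.standard_normal(t.size)   # draw one latent force
    x = np.zeros_like(t)
    for i in range(1, t.size):            # explicit Euler integration
        x[i] = x[i - 1] + dt * (-lam * x[i - 1] + f[i - 1])
    samples.append(x)

C = np.cov(np.array(samples).T)           # empirical output covariance
print(C[50, 40:60].round(3))              # one row of the induced kernel
```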
Abstract:
Studies of earthquakes over the last 50 years and the examination of dynamic soil behavior reveal that soil behavior is highly nonlinear and hysteretic even at small strains. The nonlinear behavior of soils during a seismic event plays a predominant role in current site response analysis. One-dimensional seismic ground response analyses are often performed using equivalent-linear procedures, which require few, generally well-known parameters. Nonlinear analyses have the potential to simulate soil behavior more accurately, but their adoption in practice has been limited by poorly documented and unclear parameter selection, as well as inadequate documentation of the benefits of nonlinear modeling relative to equivalent-linear modeling. In site response analysis, soil behavior is approximated as a Kelvin-Voigt solid with an elastic shear modulus and viscous damping. Both linear and nonlinear analyses are moving toward more complex geometries and richer rheological models: the former by considering richer parametrizations of the linearized behavior, the latter by using multi-mode spring-dashpot elements with eventual fractional damping. The use of fractional calculus is motivated in large part by the fact that fewer parameters are required to achieve an accurate approximation of experimental data. Starting from the Kelvin-Voigt model, viscoelastodynamics is revisited from its most standard formulation to more advanced descriptions involving frequency-dependent damping (or viscosity), analyzing the effects of using fractional derivatives to represent these viscous contributions. We show that such a choice results in richer models that can accommodate different constraints related to the dissipated power, response amplitude, and phase angle. Moreover, fractional derivatives make it possible to place many dashpots in parallel within a generalized Kelvin-Voigt analog, increasing the modeling flexibility for describing experimental findings. These richer models naturally involve many parameters: those associated with the behavior itself and those related to the fractional derivatives. The parametric analysis of these models requires numerical techniques efficient enough to simulate complex behaviors. The Proper Generalized Decomposition (PGD) is a natural candidate for producing such parametric solutions. The parametric solution for the soil deposit can be computed off-line, for all parameters of the model; once it is available, the problem can be solved in real time because no new calculation is needed: the solver only particularizes on-line the parametric solution computed off-line, which significantly alleviates the solution procedure. Within the PGD framework, material parameters and the different derivation powers can be introduced as extra-coordinates in the solution procedure. Fractional calculus and the Proper Generalized Decomposition model reduction method have been applied in this thesis to both linear analysis and nonlinear soil response analysis using an equivalent-linear method.
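For orientation, a fractional Kelvin-Voigt element replaces the Newtonian dashpot with a fractional one; in a standard notation (assumed here, not quoted from the thesis) the constitutive law and the resulting complex modulus read:

```latex
% Fractional Kelvin-Voigt: spring G in parallel with a fractional
% dashpot of order 0 < \alpha \le 1 (\alpha = 1 recovers viscous damping).
\sigma(t) = G\,\varepsilon(t) + \eta\,D^{\alpha}\varepsilon(t),
\qquad
G^{*}(\omega) = G + \eta\,(\mathrm{i}\omega)^{\alpha}
```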
Abstract:
In the present thesis we develop a framework for the numerical simulation of the mechanical behaviour of the human aorta using non-linear finite element models. Special attention is paid to three key aspects of soft-tissue biomechanics. First, the modelling of the characteristic anisotropic behaviour of soft tissue due to the collagen fibre families. Secondly, the modelling of the damage-related softening that blood vessels exhibit when subjected to loads beyond their physiological range. And finally, the inclusion of residual stresses in the simulations in accordance with the opening-angle experiment. The modelling of damage is addressed with two different approaches. In the first approach, a continuum local damage formulation with regularisation is presented. This formulation has two principal ingredients. On the one hand, it makes use of the principles of smeared crack theory to avoid the mesh-size dependence of the structural response in softening. On the other hand, it uses a Hodge-Petruska bidimensional model to describe the fibrils as staggered arrays of tropocollagen molecules; from this mesoscopic model the macroscopic material properties of the collagen fibres are obtained through a homogenisation process. In the second approach, a non-local gradient-enhanced damage formulation is introduced. The model is built around the enhancement of the free energy function by a term that contains the referential gradient of the non-local damage variable. The inclusion of this term ensures an implicit regularisation of the finite element implementation, yielding mesh-objective simulation results. The applicability of the latter model to biomechanical problems is studied by means of the simulation of a typical surgical procedure, namely balloon angioplasty.
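A typical form of such a gradient enhancement (generic notation, assumed rather than taken from the thesis) adds a term quadratic in the referential gradient of the non-local damage variable D to the free energy:

```latex
% (1 - D) degrades the effective strain energy \Psi_0; the gradient term,
% weighted by a length-scale parameter c_d > 0, provides the implicit
% regularisation that makes the FE results mesh-objective.
\Psi(\mathbf{C}, D, \nabla_0 D)
  = (1 - D)\,\Psi_0(\mathbf{C})
  + \tfrac{c_d}{2}\,\lVert \nabla_0 D \rVert^{2}
```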
Evaluation of numerical methods for the linear stability analysis of cold-formed steel members.
Abstract:
For the design of structures with cold-formed steel members, understanding the phenomena of local and global instability is fundamental, since these members exhibit high slenderness and low torsional stiffness. Determining the critical load and identifying the instability mode contribute to understanding the behaviour of these structures. This work evaluates three methodologies for the linear stability analysis of isolated cold-formed steel members, with the goal of determining the elastic critical bifurcation loads and the associated instability modes. Specifically, isolated lipped channel and lipped Z sections are analysed, for several lengths and different support and loading conditions. The elastic critical bifurcation loads and the global and local instability modes are determined by means of: (i) analysis with the Finite Strip Method (FSM), using the CUFSM software; (ii) analysis with beam finite elements based on Generalised Beam Theory (GBT-FEM), using the GBTUL software; and (iii) analysis with shell finite elements (shell FEM), using ABAQUS. Some restrictions and caveats regarding the use of the FSM are presented, as well as limitations of Generalised Beam Theory and precautions to be taken in the shell models. The influence of the degree of discretisation of the cross-section is also analysed. However, no assessment is made against design-code procedures, nor are nonlinear analyses performed that account for initial geometric imperfections, residual stresses, and the elastoplastic behaviour of the material.
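Whatever the discretisation (finite strips, GBT beam elements, or shells), the linear stability analysis reduces to a generalized eigenvalue problem (K - lambda*Kg)*phi = 0 for the elastic critical load factor. A toy sketch with assumed, not section-specific, matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Toy linear (bifurcation) stability analysis: solve (K - lam*Kg) phi = 0.
# K  : elastic stiffness matrix (assumed, symmetric positive definite)
# Kg : geometric stiffness matrix for a unit reference load
# The smallest positive eigenvalue lam is the elastic critical load
# factor; phi is the associated buckling mode.
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
Kg = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])

lam, phi = eigh(K, Kg)            # generalized symmetric eigenproblem
crit = lam[lam > 0].min()         # elastic critical load factor
mode = phi[:, np.argmin(np.abs(lam - crit))]
print(f"critical load factor = {crit:.4f}, mode = {mode.round(3)}")
```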
Abstract:
In recent years, fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper considers a class of models generated by Gegenbauer polynomials, incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the statistical properties of the new model, suggest using spectral likelihood estimation for long memory processes, and investigate the finite sample properties via Monte Carlo experiments. We apply the model to three exchange rate return series. Overall, the results of the out-of-sample forecasts show the adequacy of the new GLMSV model.
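For reference, the Gegenbauer filter that generates such long-memory processes (standard notation, not quoted from the paper) generalises ordinary fractional differencing, with B the backshift operator:

```latex
% For |u| <= 1 and 0 < d < 1/2 the process y_t has long memory with a
% spectral pole at omega_0 = arccos(u); u = 1 recovers (1 - B)^{2d}.
(1 - 2uB + B^{2})^{d}\, y_t = \varepsilon_t
```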
Abstract:
In this paper, we investigate the effects of potential models on the description of the adsorption equilibria of linear molecules (ethylene and ethane) on graphitized thermal carbon black. GCMC simulation is used as a tool to produce adsorption isotherms, isosteric heats of adsorption, and the microscopic configurations of these molecules. At the heart of GCMC are the potential models describing the fluid-fluid and solid-fluid interactions. Here we study two potential models recently proposed in the literature, UA-TraPPE and AUA4, and discuss their impact on the description of the adsorption behavior of the pure components. Mixtures of these components with nitrogen and argon are also studied. Nitrogen is modeled as a two-site molecule with discrete charges, while argon is modeled as a spherical particle. GCMC simulation is also used to generate mixture isotherms. It is found that co-operation between species occurs when the surface is fractionally covered, while competition becomes important when the surface is fully loaded.
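At the heart of such GCMC simulations are pairwise site-site potentials; a minimal sketch of the ubiquitous Lennard-Jones 12-6 form with Lorentz-Berthelot combining rules (parameter values below are illustrative, not those of UA-TraPPE or AUA4):

```python
import numpy as np

# Lennard-Jones 12-6 site-site potential, the usual workhorse of
# fluid-fluid interactions in GCMC adsorption simulations.
def lj(r, eps, sigma):
    """Pair energy at separation r (same length units as sigma)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Lorentz-Berthelot combining rules for unlike sites (i, j):
def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j):
    return np.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)

# Illustrative (not paper-specific) parameters, eps/k_B in K, sigma in A:
eps_ch2, sig_ch2 = 46.0, 3.95     # a united-atom CH2-like site
eps_n,   sig_n   = 36.0, 3.31     # an N2-like site
eps_ij, sig_ij = lorentz_berthelot(eps_ch2, sig_ch2, eps_n, sig_n)

r = np.linspace(3.0, 10.0, 5)
print(lj(r, eps_ij, sig_ij).round(2))  # cross-interaction energies along r
```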
Abstract:
One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerant-threshold estimates.
Abstract:
In this paper, the exchange rate forecasting performance of neural network models is evaluated against the random walk, autoregressive moving average and generalised autoregressive conditional heteroskedasticity models. There are no guidelines available for choosing the parameters of neural network models, and therefore the parameters are chosen according to what the researcher considers to be the best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why in many studies neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, thereby giving them a better chance of performing well. The results show that, in general, neural network models perform better than the traditionally used time series models in forecasting exchange rates.
Abstract:
In this paper, the exchange rate forecasting performance of neural network models is evaluated against the random walk and a range of time series models. There are no guidelines available for choosing the parameters of neural network models, and therefore the parameters are chosen according to what the researcher considers to be the best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why in many studies neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, thereby giving them a better chance of performing well. Our results show that, in general, neural network models perform better than traditionally used time series models in forecasting exchange rates.
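As a toy illustration of the benchmark (an assumed setup, not the papers' experiments): fit a small feedforward network on lagged returns and compare its one-step-ahead RMSE with the random-walk forecast of a zero return:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy benchmark: neural network on lagged returns vs a random walk.
rng = np.random.default_rng(1)
r = rng.standard_normal(600) * 0.01        # stand-in for daily FX returns
p = 5                                      # number of lags (assumed)
X = np.column_stack([r[i:len(r) - p + i] for i in range(p)])
y = r[p:]                                  # one-step-ahead target
split = 400
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
nn.fit(X[:split], y[:split])

rmse_nn = np.sqrt(np.mean((nn.predict(X[split:]) - y[split:]) ** 2))
rmse_rw = np.sqrt(np.mean(y[split:] ** 2))  # random walk: forecast 0 return
print(f"NN RMSE = {rmse_nn:.5f}, random-walk RMSE = {rmse_rw:.5f}")
```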
Abstract:
MSC 2010: 46F30, 46F10
Abstract:
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes such as flooding and landslides. The critical step in generating a DTM is separating ground from non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying LIDAR data at the point level, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with gentle slopes and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrain in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input; therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
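A minimal sketch of the PM filter's core idea on a rasterized elevation grid (the study itself works at the point level, and all parameters below are illustrative): grey-scale openings with progressively larger windows, with an elevation threshold that grows with window size through a constant slope term:

```python
import numpy as np
from scipy.ndimage import grey_opening

# Progressive morphological (PM) ground filter, raster version: cells
# whose elevation drops by more than a slope-dependent threshold under
# an opening of growing window size are flagged as non-ground.
def pm_filter(z, cell=1.0, windows=(3, 5, 9, 17),
              slope=0.3, dh0=0.2, dh_max=2.5):
    ground = np.ones(z.shape, dtype=bool)
    surface = z.copy()
    for w in windows:
        opened = grey_opening(surface, size=(w, w))
        # elevation threshold grows with window size via the slope term
        dh = min(dh0 + slope * (w - 1) * cell, dh_max)
        ground &= (surface - opened) <= dh
        surface = opened
    return ground  # True where the cell is kept as ground

z = np.random.default_rng(2).random((50, 50)) * 0.3  # gentle terrain
z[20:25, 20:25] += 5.0                               # a building-like blob
print(pm_filter(z).sum(), "of", z.size, "cells kept as ground")
```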
Abstract:
This doctoral thesis sets out to understand, analyse and, above all, model the statistical behaviour of financial time series. In this respect, the models that best capture the special characteristics of these series are conditional heteroskedasticity models in discrete time, when the sampling intervals of the data permit it, and in continuous time when daily or intraday data are available. To this end, this thesis proposes several Bayesian estimators for the parameters of GARCH models in discrete time (Bollerslev (1986)) and COGARCH models in continuous time (Kluppelberg et al. (2004)). Chapter 1 introduces the characteristics of financial time series and presents the ARCH, GARCH and COGARCH models, together with their main properties. Mandelbrot (1963) noted that financial series are not stationary and that their increments show no autocorrelation, although their squares are correlated. He also pointed out that their volatility is not constant and that volatility clusters appear. He observed the lack of normality of financial series, due mainly to their leptokurtic behaviour, and also highlighted the seasonal effects these series exhibit, analysing how they are affected by the time of year or the day of the week. Later, Black (1976) completed the list of special characteristics by including the so-called leverage effects, relating to how positive and negative fluctuations in asset prices affect the volatility of the series in different ways.
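For orientation, the discrete-time GARCH(1,1) recursion targeted by the Bayesian estimators can be simulated in a few lines (parameter values are illustrative, not estimates from the thesis):

```python
import numpy as np

# Simulate a GARCH(1,1) process (Bollerslev, 1986):
#   r_t = sigma_t * z_t,  z_t ~ N(0, 1)
#   sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
rng = np.random.default_rng(3)
omega, alpha, beta = 0.05, 0.08, 0.90     # illustrative; alpha + beta < 1
T = 1000
r = np.zeros(T)
sig2 = np.full(T, omega / (1.0 - alpha - beta))  # start at stationary var.
for t in range(1, T):
    sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Volatility clustering shows up as autocorrelated squared returns:
sq = r ** 2 - (r ** 2).mean()
print("lag-1 autocorr of r_t^2:", (sq[1:] @ sq[:-1] / (sq @ sq)).round(3))
```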
Abstract:
Spectral unmixing (SU) is a technique to characterize the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present at each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. Our main contribution in the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on these approximations. The resulting methods show considerable improvements over state-of-the-art methods in reconstructing the fractional abundances of endmembers, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages, such as enforcing the nonnegativity constraints on the two factor matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the energy of SSoM concentrated in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations on synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
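One plausible baseline for the semiblind scenario (a common relaxation, not necessarily the thesis's proposed method) replaces the $\ell_0$ term with a nonnegative $\ell_1$ penalty and solves it by projected ISTA:

```python
import numpy as np

# Sparse unmixing via the l1 relaxation of the l0 problem:
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1   s.t.  x >= 0
# A: spectral library (bands x endmembers), y: observed mixed pixel.
def ista_unmix(A, y, lam=0.01, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                  # gradient of the data term
        x = np.maximum(x - step * (g + lam), 0.0)  # prox + nonnegativity
    return x

rng = np.random.default_rng(4)
A = np.abs(rng.standard_normal((50, 20)))      # toy 20-signature library
x_true = np.zeros(20)
x_true[[3, 11]] = [0.7, 0.3]                   # two active endmembers
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista_unmix(A, y)
print("recovered support:", np.flatnonzero(x_hat > 1e-3))
```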
Abstract:
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
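The reaction kinetics extracted from the thermogravimetric data are typically of Arrhenius form; a toy sketch of the resulting mass-loss prediction for a single first-order reaction under a constant heating rate (an assumed kinetic triplet, not values from the work):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-step pyrolysis kinetics, as fitted from TGA data:
#   d(alpha)/dt = A * exp(-E / (R * T)) * (1 - alpha)**n
# with a constant heating rate T(t) = T0 + beta * t.
A_pre, E, n = 1.0e12, 1.6e5, 1.0       # illustrative kinetic triplet
R = 8.314                              # gas constant, J / (mol K)
T0, beta = 300.0, 10.0 / 60.0          # start at 300 K, 10 K/min heating

def rhs(t, alpha):
    T = T0 + beta * t
    return A_pre * np.exp(-E / (R * T)) * (1.0 - alpha) ** n

sol = solve_ivp(rhs, (0.0, 3.0e4), [0.0], max_step=10.0)
mass = 1.0 - sol.y[0]                  # normalized residual mass
T = T0 + beta * sol.t
print(f"T at 50% conversion ~ {T[np.argmin(np.abs(mass - 0.5))]:.0f} K")
```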