988 results for Mathematics modeling
Abstract:
Period-adding cascades have been observed experimentally and numerically in the dynamics of neurons and pancreatic cells, lasers, electric circuits, chemical reactions, oceanic internal waves, and also in air bubbling. We show that the period-adding cascades appearing in bubbling from a nozzle submerged in a viscous liquid can be reproduced by a simple model, based on hydrodynamical principles, that describes the time evolution of two variables, bubble position and air-chamber pressure, through a system of differential equations with a detachment rule based on force balance. The model further reduces to an iterated one-dimensional map giving the pressures at the detachments, with the time between bubbles emerging as an observable of the dynamics. The model not only shows good agreement with experimental data, but is also able to predict the influence of the main parameters involved, such as the length of the hose connecting the air supply to the needle, the needle radius, and the needle length. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.3695345]
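The reduction to an iterated one-dimensional map can be illustrated with a short sketch. The sine circle map used below is a generic map known for mode-locking structure; it is an illustrative stand-in, not the pressure map derived in the paper, and all parameter values are arbitrary.

```python
import math

def circle_map(theta, omega=0.2, k=0.9):
    """One step of the standard sine circle map, a generic 1D return map.
    Illustrative stand-in for the pressure-at-detachment map of the paper."""
    return (theta + omega - k / (2.0 * math.pi) * math.sin(2.0 * math.pi * theta)) % 1.0

def iterate(p0, n, transient=200):
    """Discard a transient, then record n successive iterates (the analogue of
    the sequence of chamber pressures at bubble detachments)."""
    p = p0
    for _ in range(transient):
        p = circle_map(p)
    orbit = []
    for _ in range(n):
        p = circle_map(p)
        orbit.append(p)
    return orbit

orbit = iterate(0.1, 50)
```

Sweeping `omega` or `k` and recording the orbit period would reveal the locking windows whose succession produces period-adding-like sequences in such maps.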
Abstract:
In this paper, we carry out robust modeling and influence diagnostics in Birnbaum-Saunders (BS) regression models. Specifically, we present some aspects of BS and log-BS distributions and their generalizations based on the Student-t distribution, and we develop BS-t regression models, including maximum likelihood estimation based on the EM algorithm and diagnostic tools. In addition, we apply the results to real insurance data, which illustrates the usefulness of the proposed model. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
This work presents the major results from a novel dynamic model intended to deterministically represent the complex relation between HIV-1 and the human immune system. The structure of the model extends previous work by representing different host anatomic compartments with a more detailed cellular and molecular immunological phenomenology. Recently identified mechanisms related to HIV-1 infection, as well as other well-known relevant mechanisms typically ignored in mathematical models of HIV-1 pathogenesis and immunology, such as cell-to-cell transmission, are also addressed. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper deals with the numerical analysis of saturated porous media, taking into account damage phenomena in the solid skeleton. The porous medium is treated within a poro-elastic framework, under fully saturated conditions, based on Biot's theory. A scalar damage model is assumed for the analysis. An implicit boundary element method (BEM) formulation, based on time-independent fundamental solutions, is developed and implemented to couple the fluid-flow and two-dimensional elastostatic problems. The integration over boundary elements is evaluated using a numerical Gauss procedure. A semi-analytical scheme for triangular domain cells is followed to carry out the relevant domain integrals. The nonlinear problem is solved by a Newton-Raphson procedure. Numerical examples are presented in order to validate the implemented formulation and to illustrate its efficacy. (C) 2011 Elsevier Ltd. All rights reserved.
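The Newton-Raphson solution of a damage-coupled nonlinear equation can be sketched in one dimension. The exponential damage law, the threshold strain, and the material constants below are illustrative assumptions, not those of the paper's scalar damage model.

```python
import math

def damage(eps, eps0=1e-3):
    """Illustrative exponential scalar damage law (an assumption, not the
    paper's specific model): no damage below the threshold strain eps0."""
    if eps <= eps0:
        return 0.0
    return 1.0 - (eps0 / eps) * math.exp(-(eps - eps0) / eps0)

def residual(eps, sigma_applied, E=30e9):
    """Out-of-balance stress for a 1-D bar: (1 - d) * E * eps - sigma."""
    return (1.0 - damage(eps)) * E * eps - sigma_applied

def newton(f, x0, tol=1e-6, max_iter=100, h=1e-10):
    """Newton-Raphson iteration with a forward-difference derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = (f(x + h) - f(x)) / h
        x -= fx / dfx
    return x

# elastic example: applied stress low enough that no damage is triggered
eps_sol = newton(lambda e: residual(e, 1.0e6), 1.0e-4)
```

In the BEM setting the same iteration is applied to the discretized residual vector with a consistent tangent instead of a finite difference; the scalar version above only shows the structure of the loop.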
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations.

The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S=0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study.

The mean CCN number concentrations (NCCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S=0.068% to 16 000 cm−3 at S=1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fraction.
At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of NCCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ=0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that the aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for efficient description of the CCN activity of atmospheric aerosols.
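The single-proxy κ-Köhler description can be sketched as follows, using the standard analytic approximation for the critical supersaturation of a dry particle of diameter D and hygroscopicity κ. The water properties at 298 K and the stated validity range (roughly κ ≳ 0.1) are general assumptions, not values taken from the study.

```python
import math

def critical_supersaturation(d_dry_m, kappa, T=298.15):
    """Approximate critical supersaturation (in %) from kappa-Koehler theory:
    s_c = sqrt(4 A^3 / (27 kappa D^3)), valid for kappa >~ 0.1.
    Standard water properties near 298 K are assumed."""
    sigma = 0.072      # surface tension of water, J m^-2 (assumed)
    M_w = 0.018015     # molar mass of water, kg mol^-1
    rho_w = 997.0      # density of water, kg m^-3
    R = 8.314          # universal gas constant, J mol^-1 K^-1
    A = 4.0 * sigma * M_w / (R * T * rho_w)           # Kelvin parameter, m
    s_c = math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))
    return 100.0 * s_c

# a 100 nm particle with kappa = 0.3 activates near S ~ 0.2 %
s_c = critical_supersaturation(100e-9, 0.3)
```

Combined with a measured number size distribution, integrating the particle counts above the size whose critical supersaturation equals the applied S yields the predicted NCCN,S, which is the essence of the single-proxy closure described above.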
Abstract:
Ozone (O3) is an important oxidizing and greenhouse gas in the Earth's atmosphere. It influences climate, air quality, human health, and vegetation. Ecosystems such as forests are sinks for tropospheric ozone and will become more heterogeneous in the future as a result of storms, plant pests, and changes in land use. These heterogeneities are expected to reduce the uptake of greenhouse gases and to cause significant feedbacks on the climate system. The atmosphere-biosphere exchange of ozone is controlled by stomatal uptake, deposition on plant surfaces and soils, and chemical transformations. Understanding these processes and quantifying ozone exchange for different ecosystems are prerequisites for scaling up from local measurements to regional ozone fluxes.

The eddy covariance method is used to measure vertical turbulent ozone fluxes. The use of closed-path eddy covariance systems based on fast chemiluminescence ozone sensors can lead to errors in the flux measurement. A direct comparison of ozone sensors mounted side by side provided insight into the factors that influence the accuracy of the measurements. Systematic differences between individual sensors and the influence of different inlet tube lengths were investigated by analyzing frequency spectra and determining correction factors for the ozone fluxes.
The experimentally determined correction factors showed no significant difference from correction factors derived using theoretical transfer functions, confirming the applicability of the theoretically derived factors for correcting ozone fluxes.

In summer 2011, measurements were carried out within the EGER (ExchanGE processes in mountainous Regions) project to contribute to a better understanding of atmosphere-biosphere ozone exchange in disturbed ecosystems. Ozone fluxes were measured on both sides of a forest edge separating a spruce forest from a windthrow. On the road-like clearing created by the storm "Kyrill" (2007), a secondary vegetation developed that differed in phenology and leaf physiology from the originally dominant spruce forest. The mean nighttime flux above the spruce forest was −6 to −7 nmol m⁻² s⁻¹ and decreased to −13 nmol m⁻² s⁻¹ around noon. The ozone fluxes showed a clear relationship to plant transpiration and CO2 uptake, indicating that during the day most of the ozone was taken up by plant stomata. The relatively high nighttime deposition was caused by non-stomatal processes. Throughout the day, deposition above the forest was roughly twice as high as above the clearing. This ratio agreed with the ratio of the plant area index (PAI). The disturbance of the ecosystem thus reduced the ability of the vegetation to act as a sink for tropospheric ozone. The clear difference between the ozone fluxes of the two vegetation types highlighted the challenge of regionalizing ozone fluxes in heterogeneously forested areas.

The measured fluxes were furthermore compared with simulations performed with the chemistry model MLC-CHEM. To evaluate the model with respect to the calculation of ozone fluxes, measured and modeled fluxes from two positions in the EGER area were used. Although the magnitudes of the fluxes agreed, the results showed a significant difference between measured and modeled fluxes. In addition, the difference depended clearly on relative humidity, decreasing with increasing humidity, which showed that the model requires further improvement before it can be used for comprehensive studies of ozone fluxes.
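The spectral correction of attenuated closed-path fluxes can be sketched numerically: the correction factor is the ratio of the unattenuated to the attenuated cospectral integral. The Lorentzian model cospectrum and the first-order low-pass transfer function below are common simplifying assumptions, not the specific transfer functions determined in this work.

```python
import math

def correction_factor(tau, fm=0.1, f_hi=10.0, n=20000):
    """Spectral correction factor for flux loss in a closed-path eddy
    covariance system, modeling tube/sensor attenuation as a first-order
    low-pass filter with response time tau (s). A Lorentzian model
    cospectrum peaked near fm (Hz) is an illustrative assumption."""
    df = f_hi / n
    num = den = 0.0
    for k in range(1, n + 1):
        f = k * df
        co = 1.0 / (1.0 + (f / fm) ** 2)                    # model cospectrum
        tf = 1.0 / (1.0 + (2.0 * math.pi * f * tau) ** 2)   # squared filter gain
        num += co * df
        den += co * tf * df
    return num / den
```

A perfect sensor (tau = 0) gives a factor of exactly 1; longer inlet tubes (larger effective tau) push the factor further above 1, which is the qualitative behavior the side-by-side sensor comparison quantified.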
Abstract:
Proteins are linear chain molecules made out of amino acids. They become functional only when they fold into their native states. This dissertation aims to model the solvent (environment) effect and to develop and implement enhanced sampling methods that enable a reliable study of the protein folding problem in silico. We have developed an enhanced solvation model based on the solution to the Poisson-Boltzmann equation in order to describe the solvent effect. Following the quantum mechanical Polarizable Continuum Model (PCM), we decomposed the net solvation free energy into three physical terms: polarization, dispersion, and cavitation. All the terms were implemented, analyzed, and parametrized individually to obtain a high level of accuracy. In order to describe the thermodynamics of proteins, their conformational space needs to be sampled thoroughly. Simulations of proteins are hampered by slow relaxation due to their rugged free-energy landscape, with the barriers between minima being higher than the thermal energy at physiological temperatures. To overcome this problem a number of approaches have been proposed, of which the replica exchange method (REM) is the most popular. In this dissertation we describe a new variant of the canonical replica exchange method in the context of molecular dynamics simulation. The advantage of this new method is its easily tunable, high acceptance rate for the replica exchange. We call our method Microcanonical Replica Exchange Molecular Dynamics (MREMD). We describe the theoretical framework, comment on its implementation, and present its application to the Trp-cage mini-protein in implicit solvent. We have been able to correctly predict the folding thermodynamics of this protein using our approach.
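For contrast with the microcanonical variant, the standard canonical replica-exchange (parallel tempering) acceptance criterion can be sketched as follows. This is the textbook rule, not the MREMD exchange rule developed in the dissertation.

```python
import math
import random

def exchange_accept(beta_i, beta_j, E_i, E_j, rng=random.random):
    """Standard canonical replica-exchange Metropolis criterion: swap the
    configurations of replicas i and j with probability
    min(1, exp[(beta_i - beta_j)(E_i - E_j)]).
    `rng` is injectable so the decision can be made deterministic in tests."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng() < math.exp(delta)

# typical case: the colder replica (larger beta) holds the higher energy,
# so delta >= 0 and the swap is always accepted
accepted = exchange_accept(1.0, 0.5, 2.0, 1.0)
```

In canonical REM the acceptance rate is governed by the overlap of neighboring energy distributions, which is exactly the quantity the microcanonical variant makes easier to tune.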
Abstract:
In recent years, bio-conjugated nanostructured materials have emerged as a new class of materials for bio-sensing and medical diagnostics applications. In spite of their multi-directional applications, interfacing nanomaterials with bio-molecules has been a challenge, due to limited knowledge of the underlying physics and chemistry of these interactions and due to the complexity of biomolecules. The main objective of this dissertation is to provide such detailed knowledge of bioconjugated nanomaterials toward their application in designing the next generation of sensing devices. Specifically, we investigate the changes in the electronic properties of a boron nitride nanotube (BNNT) due to the adsorption of different bio-molecules, ranging from neutral (DNA/RNA nucleobases) to polar (amino acid) molecules. BNNT is a typical member of the III-V compound semiconductors, with a morphology similar to that of carbon nanotubes (CNTs) but with its own distinct properties. More specifically, the natural affinity of BNNTs toward living cells, with no apparent toxicity, motivates applications of BNNTs in drug delivery and cell therapy. Our results predict that the adsorption of DNA/RNA nucleobases on BNNTs leads to different degrees of modulation of the band gap of BNNTs, which can be exploited to distinguish these nucleobases from each other. Interestingly, for the polar amino acid molecules, the nature of the interaction varied among Coulombic, van der Waals, and covalent, depending on the polarity of the individual molecules, each with a different binding strength and amount of charge transfer involved in the interaction. The strong binding of amino acid molecules to BNNTs explains the observed protein wrapping onto BNNTs without any linkers, unlike carbon nanotubes (CNTs).
Additionally, the widely varying binding energies of different amino acid molecules toward BNNTs indicate the suitability of BNNTs for biosensing applications, in contrast to metallic CNTs. The calculated I-V characteristics of these bioconjugated nanotubes predict notable changes in the conductivity of BNNTs due to the physisorption of DNA/RNA nucleobases. This is not the case with metallic CNTs, whose transport properties remained unaltered upon conjugation with the nucleobases. Collectively, bioconjugated BNNTs are found to be an excellent system for next-generation sensing devices.
DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS
Abstract:
Principal Component Analysis (PCA) is a popular method for dimension reduction that can be used in many fields, including data compression, image processing, and exploratory data analysis. However, traditional PCA has several drawbacks: it is not efficient for high-dimensional data, and it cannot compute sufficiently accurate principal components when a relatively large portion of the data is missing. In this report, we propose to use the EM-PCA method for dimension reduction of power system measurements with missing data, and we provide a comparative study of the traditional PCA and EM-PCA methods. Our extensive experimental results show that EM-PCA is more effective and more accurate than traditional PCA for dimension reduction of power system measurement data when a large portion of the data set is missing.
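The EM-PCA idea (alternately imputing the missing entries from a low-rank model and re-estimating the principal components) can be sketched in pure Python for a single component. This is a toy illustration of the principle, not the report's implementation.

```python
import math

def em_pca_rank1(X, missing, n_iter=200):
    """Toy EM-style PCA (one component) tolerating missing entries.
    X is a list of rows; `missing` is a set of (row, col) pairs treated as
    unobserved. Illustrative sketch only."""
    n, m = len(X), len(X[0])
    Z = [row[:] for row in X]
    # initialize unobserved cells with the column means of the observed data
    for j in range(m):
        obs = [Z[i][j] for i in range(n) if (i, j) not in missing]
        mu = sum(obs) / len(obs)
        for i in range(n):
            if (i, j) in missing:
                Z[i][j] = mu
    v = [1.0] * m
    for _ in range(n_iter):
        mean = [sum(Z[i][j] for i in range(n)) / n for j in range(m)]
        C = [[Z[i][j] - mean[j] for j in range(m)] for i in range(n)]
        # M-step: one power-iteration update of the leading direction
        s = [sum(C[i][j] * v[j] for j in range(m)) for i in range(n)]
        w = [sum(s[i] * C[i][j] for i in range(n)) for j in range(m)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
        # E-step: re-impute unobserved cells from the rank-1 reconstruction
        s = [sum(C[i][j] * v[j] for j in range(m)) for i in range(n)]
        for (i, j) in missing:
            Z[i][j] = mean[j] + s[i] * v[j]
    return v, Z
```

On exactly low-rank data the imputed values converge to the values consistent with the low-rank model, which is why EM-PCA degrades far more gracefully than plain PCA with deleted rows when much of the measurement matrix is missing.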
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
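The score-space error model can be sketched for a single principal component: reduce both curve sets, learn a linear map between proxy and exact scores, and reconstruct the exact curve from a new proxy response. The one-component restriction and the regression through the origin are illustrative simplifications, not the paper's full FPCA and machine-learning machinery.

```python
import math

def pc1(curves, n_iter=200):
    """Mean curve and leading principal direction of a set of sampled curves,
    via power iteration. A rank-1 stand-in for full FPCA."""
    n, m = len(curves), len(curves[0])
    mean = [sum(c[j] for c in curves) / n for j in range(m)]
    C = [[c[j] - mean[j] for j in range(m)] for c in curves]
    v = [1.0] * m
    for _ in range(n_iter):
        s = [sum(C[i][j] * v[j] for j in range(m)) for i in range(n)]
        w = [sum(s[i] * C[i][j] for i in range(n)) for j in range(m)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return mean, v

def score(curve, mean, v):
    """Projection of a curve onto the leading direction."""
    return sum((curve[j] - mean[j]) * v[j] for j in range(len(v)))

def fit_error_model(proxy_set, exact_set):
    """Learn a linear map from proxy scores to exact scores on a learning set
    where both solvers were run; return a predictor of the exact curve."""
    mp, vp = pc1(proxy_set)
    me, ve = pc1(exact_set)
    xs = [score(c, mp, vp) for c in proxy_set]
    ys = [score(c, me, ve) for c in exact_set]
    sxx = sum(x * x for x in xs) or 1.0
    a = sum(x * y for x, y in zip(xs, ys)) / sxx  # least squares through origin
    def predict(proxy_curve):
        s = a * score(proxy_curve, mp, vp)
        return [me[j] + s * ve[j] for j in range(len(me))]
    return predict
```

For a new realization only the cheap proxy needs to be run; its score is mapped to an exact-model score and re-expanded on the exact basis, which is the mechanism that removes the proxy-induced bias.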
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Ecologists and economists both use models to help develop strategies for biodiversity management. The practical use of disciplinary models, however, can be limited because ecological models tend not to address the socioeconomic dimension of biodiversity management, whereas economic models tend to neglect the ecological dimension. Given these shortcomings of disciplinary models, there is a necessity to combine ecological and economic knowledge into ecological-economic models. It is insufficient if scientists work separately in their own disciplines and combine their knowledge only when it comes to formulating management recommendations. Such an approach does not capture feedback loops between the ecological and the socioeconomic systems. Furthermore, each discipline poses the management problem in its own way and comes up with its own most appropriate solution. These disciplinary solutions, however, are likely to be so different that a combined solution considering aspects of both disciplines cannot be found. Preconditions for a successful model-based integration of ecology and economics include (1) an in-depth knowledge of the two disciplines, (2) the adequate identification and framing of the problem to be investigated, and (3) a common understanding between economists and ecologists of modeling and scale. To further advance ecological-economic modeling, the development of common benchmarks, quality controls, and refereeing standards for ecological-economic models is desirable.