918 results for Physics Based Modeling
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning also applies under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
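The relationship the abstract leans on, that maximum likelihood can be framed as posterior-mode inference under a flat prior, can be made concrete with a minimal sketch. The binomial example below is illustrative only and is not taken from the articles.

```python
# Binomial toy example of the claim that maximum likelihood is
# posterior-mode inference under a flat prior: with a Beta(1, 1)
# prior, the posterior mode equals the MLE. Counts are made up.
successes, failures = 7, 3

# Posterior under the flat prior is Beta(1 + s, 1 + f); for a, b > 1
# its mode is (a - 1) / (a + b - 2).
a, b = 1 + successes, 1 + failures
posterior_mode = (a - 1) / (a + b - 2)

# Maximum likelihood estimate of the binomial proportion.
mle = successes / (successes + failures)

assert posterior_mode == mle
```

With an informative prior (a, b not both 1) the mode and the MLE would no longer coincide, which is exactly the regime where the choice of inference framework starts to matter.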
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distributions. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients.
The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
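The volatility clustering these models capture can be illustrated with a minimal sketch. The following simulates a plain Gaussian GARCH(1,1), not the GARCH-GH or MCARR specifications of the thesis, and the parameter values are arbitrary but chosen so that the process is covariance stationary.

```python
import math
import random
import statistics

# Plain Gaussian GARCH(1,1): sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
# alpha + beta < 1, so the unconditional variance is omega / (1 - alpha - beta) = 1.
omega, alpha, beta = 0.1, 0.1, 0.8

random.seed(42)
sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
returns = []
for _ in range(5000):
    r = math.sqrt(sigma2) * random.gauss(0.0, 1.0)
    returns.append(r)
    sigma2 = omega + alpha * r * r + beta * sigma2  # variance feedback

sample_var = statistics.pvariance(returns)
```

The feedback through `sigma2` makes large absolute returns cluster in time, and the resulting unconditional distribution is leptokurtic even though each shock is Gaussian.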
Abstract:
We propose a new type of high-order element that incorporates mesh-free Galerkin formulations into the framework of the finite element method. Traditional polynomial interpolation is replaced by mesh-free interpolation in the present high-order elements, and the strain smoothing technique is used to integrate the governing equations over smoothing cells. The properties of the high-order elements, which are influenced by the basis function of the mesh-free interpolation and the boundary nodes, are discussed through numerical examples. It is found that the basis function has a significant influence on the computational accuracy and the upper and lower bounds of the energy norm, while the strain smoothing technique retains the softening phenomenon. The new high-order elements perform well when quadratic basis functions are used in the mesh-free interpolation, and they prove advantageous in adaptive mesh and node refinement schemes. Furthermore, they are less sensitive to element quality because they use mesh-free interpolation and obey the Weakened Weak (W2) formulation introduced in [3, 5].
Abstract:
This work reports on the fabrication of a superhydrophobic nylon textile based on the organic charge transfer complex CuTCNAQ (TCNAQ = 11,11,12,12-tetracyanoanthraquinodimethane). Nylon fabric metallized with copper undergoes a spontaneous chemical reaction with TCNAQ dissolved in acetonitrile to form nanorods of CuTCNAQ that are intertwined over the entire surface of the fabric. This creates the micro- and nanoscale roughness required for the Cassie-Baxter state, thereby achieving a superhydrophobic/superoleophilic surface without the need for a fluorinated surface. The material is characterised by SEM, FT-IR and XPS spectroscopy and investigated for its ability to separate oil and water in two modes, namely under gravity and as an absorbent. It is found that the fabric can separate dichloromethane, olive oil and crude oil from water and in fact reduces the water content of the oil during the separation process. The fabric is reusable and tolerant of conditions such as seawater, hydrochloric acid and extended periods on the shelf. Given that CuTCNAQ is a copper-based semiconductor, the material may also open up other applications in areas such as photocatalysis and antibacterial surfaces.
Spray deposition of exfoliated MoS2 flakes as hole transport layer in perovskite-based photovoltaics
Abstract:
We propose the use of solution-processed molybdenum disulfide (MoS2) flakes as the hole transport layer (HTL) for metal-organic perovskite solar cells. MoS2 bulk crystals are exfoliated in 2-propanol and deposited on perovskite layers by spray coating. We fabricated cells with a glass/FTO/compact-TiO2/mesoporous-TiO2/CH3NH3PbI3/spiro-OMeTAD/Au structure, and cells with the same structure but with MoS2 flakes as the HTL instead of spiro-OMeTAD, the most widely used HTL. The electrical characterization of the cells with MoS2 as the HTL shows a promising power conversion efficiency (η) of 3.9%, compared with cells using pristine spiro-OMeTAD (η = 3.1%). An endurance test over an 800-hour shelf life showed higher stability for the MoS2-based cells (ΔPCE/PCE = -17%) than for the doped spiro-OMeTAD-based ones (ΔPCE/PCE = -45%). Further improvements are expected with the optimization of the MoS2 deposition process.
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method of addressing such uncertainty, and it is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with the individual GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output.
This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, 2020s, 2050s and 2080s, with A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
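The envelope idea described above can be sketched in a few lines: given downscaled rainfall samples from several GCMs, build each model's empirical CDF on a common grid and keep the pointwise minimum and maximum as the envelope bounds. The samples below are synthetic placeholders, not the study's data.

```python
# "Imprecise CDF" envelope sketch over a small synthetic GCM ensemble.
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(v <= x for v in sample) / len(sample)

gcm_samples = {
    "gcm_a": [2.0, 3.0, 5.0, 7.0],
    "gcm_b": [1.0, 4.0, 6.0, 8.0],
    "gcm_c": [3.0, 3.5, 5.5, 9.0],
}

grid = [0.5 * k for k in range(21)]  # rainfall grid: 0.0, 0.5, ..., 10.0
lower = [min(ecdf(s, x) for s in gcm_samples.values()) for x in grid]
upper = [max(ecdf(s, x) for s in gcm_samples.values()) for x in grid]

# By construction every individual model CDF lies inside the envelope.
for s in gcm_samples.values():
    assert all(lo <= ecdf(s, x) <= up
               for x, lo, up in zip(grid, lower, upper))
```

The pair `(lower, upper)` is the interval-valued (gray) CDF; a weighted mean of the individual CDFs would collapse this band to a single curve and hide the intermodel spread.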
Abstract:
Many species inhabit fragmented landscapes, resulting either from anthropogenic or from natural processes. The ecological and evolutionary dynamics of spatially structured populations are affected by a complex interplay between endogenous and exogenous factors. The metapopulation approach, simplifying the landscape to a discrete set of patches of breeding habitat surrounded by unsuitable matrix, has become a widely applied paradigm for the study of species inhabiting highly fragmented landscapes. In this thesis, I focus on the construction of biologically realistic models and their parameterization with empirical data, with the general objective of understanding how the interactions between individuals and their spatially structured environment affect ecological and evolutionary processes in fragmented landscapes. I study two hierarchically structured model systems: the Glanville fritillary butterfly in the Åland Islands, and a system of two interacting aphid species in the Tvärminne archipelago, both located in south-western Finland. The interesting and challenging feature of both study systems is that the population dynamics occur over multiple spatial scales that are linked by various processes. My main emphasis is on the development of mathematical and statistical methodologies. For the Glanville fritillary case study, I first build a Bayesian framework for the estimation of death rates and capture probabilities from mark-recapture data, with the novelty of accounting for variation among individuals in capture probabilities and survival. I then characterize the dispersal phase of the butterflies by deriving a mathematical approximation of a diffusion-based movement model applied to a network of patches. I use the movement model as a building block to construct an individual-based evolutionary model for the Glanville fritillary butterfly metapopulation.
I parameterize the evolutionary model using a pattern-oriented approach, and use it to study how the landscape structure affects the evolution of dispersal. For the aphid case study, I develop a Bayesian model of hierarchical multi-scale metapopulation dynamics, where the observed extinction and colonization rates are decomposed into intrinsic rates operating specifically at each spatial scale. In summary, I show how analytical approaches, hierarchical Bayesian methods and individual-based simulations can be used individually or in combination to tackle complex problems from many different viewpoints. In particular, hierarchical Bayesian methods provide a useful tool for decomposing ecological complexity into more tractable components.
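The extinction-colonization dynamics at the heart of the metapopulation models above can be caricatured with a stochastic patch occupancy sketch. This is a drastic simplification of the hierarchical multi-scale models described in the abstract; the rates and patch count are illustrative, not estimates from the data.

```python
import random

# Toy stochastic patch occupancy model: each occupied patch goes
# extinct with probability e per step, and each empty patch is
# colonised with probability proportional to the current occupancy.
random.seed(1)
e, c = 0.1, 0.4
occupied = [True] * 10 + [False] * 10   # 20 habitat patches

for _ in range(100):
    frac = sum(occupied) / len(occupied)   # current fraction occupied
    occupied = [(random.random() > e) if occ else (random.random() < c * frac)
                for occ in occupied]
```

At these rates the mean-field equilibrium occupancy would be 1 - e/c = 0.75, though any single stochastic run can drift well away from it; hierarchical Bayesian fitting works backwards from observed turnover to rates like `e` and `c` at each spatial scale.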
Abstract:
Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
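The MAP decoding step described above can be illustrated with a minimal Viterbi sketch for a two-state (dry/wet) chain. The transition and emission scores below are made-up log-potentials, not fitted CRF parameters.

```python
# Viterbi decoding over a linear chain with dict-based log-scores.
def viterbi(obs_scores, trans):
    """obs_scores: per-time dicts {state: log-score};
    trans: {(prev, cur): log-score}. Returns the best state path."""
    states = list(obs_scores[0])
    best = {s: obs_scores[0][s] for s in states}
    back = []
    for scores in obs_scores[1:]:
        ptr, new = {}, {}
        for cur in states:
            prev = max(states, key=lambda p: best[p] + trans[(p, cur)])
            ptr[cur] = prev
            new[cur] = best[prev] + trans[(prev, cur)] + scores[cur]
        back.append(ptr)
        best = new
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):           # follow back-pointers
        path.append(ptr[path[-1]])
    return path[::-1]

obs = [{"dry": 0.0, "wet": -1.0},
       {"dry": -1.0, "wet": 0.0},
       {"dry": -1.0, "wet": 0.0}]
trans = {("dry", "dry"): 0.0, ("dry", "wet"): -0.5,
         ("wet", "wet"): 0.0, ("wet", "dry"): -0.5}
path = viterbi(obs, trans)
```

The persistence bonus in `trans` is what makes the decoder prefer runs of dry or wet days; the n-best variant mentioned in the abstract keeps the top n partial paths per state instead of only the single best.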
Abstract:
Solidification processes are complex in nature, involving multiple phases and several length scales. The properties of solidified products are dictated by the microstructure, the macrostructure, and various defects present in the casting. These, in turn, are governed by the multiphase transport phenomena occurring at different length scales. In order to control and improve the quality of cast products, it is important to have a thorough understanding of the various physical and physicochemical phenomena occurring at these length scales, preferably through predictive models and controlled experiments. In this context, the modeling of transport phenomena during alloy solidification has evolved over the last few decades due to the complex multiscale nature of the problem. Despite this, a model accounting for all the important length scales directly is computationally prohibitive. Thus, in the past, single-phase continuum models have often been employed at a single length scale to model solidification processing. However, continuous development in understanding the physics of solidification at various length scales on one hand, and the phenomenal growth of computational power on the other, have allowed researchers to use increasingly complex multiphase/multiscale models in recent times. These models have allowed greater understanding of the coupled micro/macro nature of the process and have made it possible to predict solute segregation and microstructure evolution at different length scales. In this paper, a brief overview of the current status of modeling of convection and macrosegregation in alloy solidification processing is presented.
Abstract:
Cyclostationary analysis has proven effective in identifying signal components for diagnostic purposes. A key descriptor in this framework is the cyclic power spectrum, traditionally estimated by the averaged cyclic periodogram and the smoothed cyclic periodogram. A lengthy debate about the best estimator finally found a solution in a cornerstone work by Antoni, who proposed a unified form for the two families, thus allowing a detailed statistical study of their properties. Since then, the focus of cyclostationary research has shifted towards algorithms, in terms of computational efficiency and simplicity of implementation. Traditional algorithms have proven computationally inefficient, and the sophisticated "cyclostationary" definition of these estimators slowed their spread in industry. The only attempt to increase the computational efficiency of cyclostationary estimators is represented by the cyclic modulation spectrum. This indicator exploits the relationship between cyclostationarity and envelope analysis. The link with envelope analysis allows a leap in computational efficiency and provides a "way in" for understanding by industrial engineers. However, this estimator lies outside the unified form described above, and an unbiased version of the indicator has not been proposed. This paper will therefore extend the analysis of envelope-based estimators of the cyclic spectrum, proposing a new approach to include them in the unified form of cyclostationary estimators. This will enable the definition of a new envelope-based algorithm and a detailed analysis of the properties of the cyclic modulation spectrum. The computational efficiency of envelope-based algorithms will also be discussed quantitatively, for the first time, in comparison with the averaged cyclic periodogram. Finally, the algorithms will be validated with numerical and experimental examples.
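The envelope-analysis route to cyclic content can be sketched as follows: take the squared envelope of the signal (from the analytic signal) and Fourier transform it, and a peak appears at the modulation (cyclic) frequency. This is only in the spirit of the cyclic modulation spectrum, not Antoni's exact estimator, and the signal is synthetic: a 200 Hz carrier amplitude-modulated at 10 Hz.

```python
import numpy as np

# Synthetic amplitude-modulated signal: 10 Hz modulation on a 200 Hz carrier.
fs, n = 1000.0, 4096
t = np.arange(n) / fs
x = (1.0 + 0.8 * np.cos(2 * np.pi * 10.0 * t)) * np.cos(2 * np.pi * 200.0 * t)

# Analytic signal via the one-sided spectrum (a discrete Hilbert transform).
X = np.fft.fft(x)
h = np.zeros(n)
h[0] = 1.0
h[1:n // 2] = 2.0
h[n // 2] = 1.0
analytic = np.fft.ifft(X * h)

env2 = np.abs(analytic) ** 2                     # squared envelope
spec = np.abs(np.fft.rfft(env2 - env2.mean()))   # spectrum of the envelope
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
peak_freq = freqs[np.argmax(spec)]               # expected near 10 Hz
```

The computational appeal is visible here: one FFT, one inverse FFT and one real FFT recover the dominant cyclic frequency, whereas a full cyclic periodogram scans a two-dimensional (spectral frequency, cyclic frequency) plane.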
Abstract:
Genetic engineering of Bacillus thuringiensis (Bt) Cry proteins has resulted in the synthesis of various novel toxin proteins with enhanced insecticidal activity and specificity towards different insect pests. In this study, a fusion protein consisting of the DI–DII domains of Cry1Ac and garlic lectin (ASAL) has been designed in silico by replacing the DIII domain of Cry1Ac with ASAL. The binding interface between the DI–DII domains of Cry1Ac and lectin has been identified using protein–protein docking studies. Free energy of binding calculations and interaction profiles between the Cry1Ac and lectin domains confirmed the stability of the fusion protein. A total of 18 hydrogen bonds were observed in the DI–DII–lectin fusion protein, compared to 11 hydrogen bonds in the Cry1Ac (DI–DII–DIII) protein. Molecular mechanics/Poisson–Boltzmann (generalized-Born) surface area [MM/PB (GB) SA] methods were used for predicting the free energy of interactions of the fusion proteins. Protein–protein docking studies based on the number of hydrogen bonds, hydrophobic interactions, aromatic–aromatic, aromatic–sulphur and cation–pi interactions, and the binding energy of the Cry1Ac/fusion proteins with the aminopeptidase N (APN) of Manduca sexta, rationalised the higher binding affinity of the fusion protein for the APN receptor compared to that of the Cry1Ac–APN complex, as predicted by ZDOCK, Rosetta and ClusPro analysis. The molecular binding interface between the fusion protein and the APN receptor is well packed, analogously to that of the Cry1Ac–APN complex. These findings offer scope for the design and development of customized fusion molecules for improved pest management in crop plants.
Abstract:
Graphene oxide (GO) sheets can form liquid crystals (LCs) in their aqueous dispersions; such dispersions are more viscous when the LC feature is stronger. In this work we combine the viscous LC-GO solution with the blade-coating technique to make GO films for constructing graphene-based supercapacitors in a scalable way. Reduced GO (rGO) films are prepared by wet chemical methods, using either hydrazine (HZ) or hydroiodic acid (HI). Solid-state supercapacitors with rGO films as electrodes and highly conductive carbon nanotube films as current collectors are fabricated, and the capacitive properties of the different rGO films are compared. It is found that the HZ-rGO film is superior to the HI-rGO film in achieving high capacitance, owing to the 3D structure of the graphene sheets in the electrode. Compared to a gelled electrolyte, the use of a liquid electrolyte (H2SO4) can further increase the capacitance to 265 F per gram (corresponding to 52 mF per cm2) of the HZ-rGO film.
Abstract:
This paper highlights the Hybrid agent construction model being developed, which allows the description and development of autonomous agents in SAGE (Scalable, fault Tolerant Agent Grooming Environment), a second-generation FIPA-compliant multi-agent system. We aim to provide the programmer with a generic and well-defined agent architecture enabling the development of sophisticated agents on SAGE, possessing the desired properties of autonomous agents: reactivity, pro-activity, social ability and knowledge-based reasoning. © Springer-Verlag Berlin Heidelberg 2005.
Abstract:
A geodesic-based approach using Lamb waves is proposed to locate the acoustic emission (AE) source and damage in an isotropic metallic structure. In the case of the AE (passive) technique, the elastic waves take the shortest path from the source to the sensors distributed in the structure. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from these sensors along the geodesic paths and locating the first intersection point of these waves, one can obtain the AE source location. The same approach is extended to the detection of damage in a structure. The wave response matrix of the given sensor configuration is obtained experimentally for both the healthy and the damaged structure. The healthy and damaged response matrices are compared, and their difference gives information about the reflection of waves from the damage. These waves are backpropagated from the sensors, and the above method is used to locate the damage by finding the point where the geodesics intersect. In this work, the geodesic approach is shown to be suitable for obtaining a practicable source location solution in a more general set-up on any arbitrary surface containing finite discontinuities. Experiments were conducted on aluminum specimens of simple and complex geometry to validate this new method.
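The back-propagation idea can be illustrated on a graph: Dijkstra's algorithm gives geodesic distances on the meshed surface, and the source is the node whose distances to the sensors best match the measured arrival delays. The 4 x 4 unit-weight grid and sensor placement below are made up, standing in for a meshed plate.

```python
import heapq

def dijkstra(adj, start):
    """Shortest-path distances from `start` over a weighted adjacency dict."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Unit-weight 4 x 4 grid standing in for a meshed surface.
n = 4
adj = {(i, j): [] for i in range(n) for j in range(n)}
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            if 0 <= i + di < n and 0 <= j + dj < n:
                adj[(i, j)].append(((i + di, j + dj), 1.0))

sensors = [(0, 0), (0, 3), (3, 0)]
true_source = (2, 2)
dist_from = {s: dijkstra(adj, s) for s in sensors}
arrivals = [dist_from[s][true_source] for s in sensors]  # simulated delays

def mismatch(node):
    """Squared error between the node's sensor distances and the arrivals."""
    return sum((dist_from[s][node] - t) ** 2 for s, t in zip(sensors, arrivals))

estimate = min(adj, key=mismatch)
```

With exact arrival data the mismatch is zero only at the true source, which is the graph analogue of the geodesics from all sensors intersecting at one point.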
Pi-turns in proteins and peptides: Classification, conformation, occurrence, hydration and sequence.
Abstract:
The i + 5-->i hydrogen bonded turn conformation (pi-turn), with the fifth residue adopting the alpha L conformation, is frequently found at the C-terminus of helices in proteins and hence has been speculated to be a "helix termination signal." An analysis of the occurrence of the i + 5-->i hydrogen bonded turn conformation at any general position in proteins (not specifically at the helix C-terminus), using coordinates of 228 protein crystal structures determined by X-ray crystallography to better than 2.5 Å resolution, is reported in this paper. Of 486 detected pi-turn conformations, 367 have the (i + 4)th residue in the alpha L conformation, generally occurring at the C-terminus of alpha-helices, consistent with previous observations. However, a significant number (111) of pi-turn conformations also occur with the (i + 4)th residue in the alpha R conformation, generally occurring in alpha-helices as distortions either at the termini or in the middle, a novel finding. These two sets of pi-turn conformations are referred to as pi alpha L- and pi alpha R-turns, respectively, depending upon whether the (i + 4)th residue adopts the alpha L or alpha R conformation. Four pi-turns, named pi alpha L'-turns, were found to be mirror images of pi alpha L-turns, and four more pi-turns, which have the (i + 4)th residue in the beta conformation and are denoted pi beta-turns, occur as part of hairpin bends connecting twisted beta-strands. Consecutive pi-turns occur, but only with pi alpha R-turns. The preference for amino acid residues is different in pi alpha L- and pi alpha R-turns. However, both show a preference for Pro after the C-termini. Hydrophilic residues are preferred at positions i + 1, i + 2, and i + 3 of pi alpha L-turns, whereas positions i and i + 5 prefer hydrophobic residues. Residue i + 4 in pi alpha L-turns is mainly Gly and less often Asn. Although pi alpha R-turns generally occur as distortions in helices, their amino acid preference is different from that of helices.
Poor helix formers, such as His, Tyr, and Asn, were also found to be preferred in pi alpha R-turns, whereas the good helix former Ala is not preferred. Pi-turns in peptides provide a picture of the pi-turn at atomic resolution. Only nine peptide-based pi-turns have been reported so far, and all of them belong to the pi alpha L-turn type with an achiral residue in position i + 4. The results are of importance for structure prediction, modeling, and de novo design of proteins.
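The naming scheme above hinges on which Ramachandran region the (i+4)th residue occupies. A toy classifier makes the distinction concrete; the region boundaries below are rough textbook values, not the paper's exact criteria.

```python
# Toy Ramachandran-region classifier for a backbone (phi, psi) pair
# in degrees; decides the alpha R / alpha L / beta label that names
# the pi-turn subtypes.
def conformation(phi, psi):
    if -100 <= phi <= -30 and -80 <= psi <= -5:
        return "alpha_R"
    if 30 <= phi <= 100 and 5 <= psi <= 80:
        return "alpha_L"
    if -180 <= phi <= -60 and 60 <= psi <= 180:
        return "beta"
    return "other"

# Typical values: a right-handed helical residue, a left-handed one,
# and an extended (beta) residue.
examples = [(-60, -45), (60, 45), (-120, 130)]
labels = [conformation(phi, psi) for phi, psi in examples]
```

A survey like the one in the abstract would apply such a classification to the (i+4)th residue of every detected i + 5-->i hydrogen bonded turn, then tally occurrences by subtype.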