Abstract:
The interaction effect is an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is through a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
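A minimal sketch of the two conventional approaches discussed above (the cross-product term in linear regression and two-way ANOVA on discretized covariates) on simulated data is given below; the dissertation's own p-value-combining procedure is not reproduced here, and all variable names and numbers are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated data with a true x1-by-x2 interaction.
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

# Conventional approach 1: interaction as a cross-product term in OLS.
ols_fit = smf.ols("y ~ x1 * x2", data=df).fit()
print("cross-product term p-value:", ols_fit.pvalues["x1:x2"])

# Conventional approach 2: two-way ANOVA on the discretized covariates.
df["x1_level"] = pd.qcut(df["x1"], 3, labels=["low", "mid", "high"])
df["x2_level"] = pd.qcut(df["x2"], 3, labels=["low", "mid", "high"])
anova_fit = smf.ols("y ~ C(x1_level) * C(x2_level)", data=df).fit()
print(sm.stats.anova_lm(anova_fit))
```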
Abstract:
This thesis project is motivated by the potential problem of using observational data to draw inferences about a causal relationship in observational epidemiology research when controlled randomization is not applicable. The instrumental variable (IV) method is one of the statistical tools for overcoming this problem. A Mendelian randomization study uses genetic variants as IVs in a genetic association study. In this thesis, the IV method, as well as standard logistic and linear regression models, is used to investigate the causal association between the risk of pancreatic cancer and circulating levels of soluble receptor for advanced glycation end-products (sRAGE). Higher levels of serum sRAGE were found to be associated with a lower risk of pancreatic cancer in a previous observational study (255 cases and 485 controls). However, such a novel association may be biased by unknown confounding factors. In a case-control study, we aimed to use the IV approach to confirm or refute this observation in a subset of study subjects for whom genotyping data were available (178 cases and 177 controls). A two-stage IV analysis using generalized method of moments-structural mean models (GMM-SMM) was conducted and the relative risk (RR) was calculated. In the first-stage analysis, we found that the single nucleotide polymorphism (SNP) rs2070600 of the receptor for advanced glycation end-products (AGER) gene meets all three general assumptions for a genetic IV in examining the causal association between sRAGE and risk of pancreatic cancer. The variant allele of SNP rs2070600 of the AGER gene was associated with lower levels of sRAGE, and it was associated neither with the risk of pancreatic cancer nor with the confounding factors. It was a potentially strong IV (F statistic = 29.2). However, in the second-stage analysis, the GMM-SMM model failed to converge due to non-concavity, probably because of the small sample size. Therefore, the IV analysis could not support the causality of the association between serum sRAGE levels and risk of pancreatic cancer. Nevertheless, these analyses suggest that rs2070600 is a potentially good genetic IV for testing the causality between the risk of pancreatic cancer and sRAGE levels. A larger sample size is required to conduct a credible IV analysis.
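For illustration, a hedged sketch of a generic two-stage ("predictor substitution") IV analysis on simulated data is shown below; it is not the GMM-SMM estimator used in the thesis, and the variable names (snp, srage, case) and numbers are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 355                                   # roughly the genotyped subset size
snp = rng.binomial(2, 0.25, size=n)       # instrument: variant allele count
u = rng.normal(size=n)                    # unmeasured confounder
srage = 1.0 - 0.4 * snp + 0.5 * u + rng.normal(size=n)    # exposure
logit_p = -0.5 - 0.3 * srage + 0.5 * u
case = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))        # outcome

# Stage 1: regress the exposure on the instrument; the F statistic gauges
# instrument strength (F > 10 is the usual rule of thumb).
stage1 = sm.OLS(srage, sm.add_constant(snp)).fit()
print("stage-1 F statistic:", stage1.fvalue)

# Stage 2: regress the outcome on the stage-1 fitted exposure values.
stage2 = sm.Logit(case, sm.add_constant(stage1.fittedvalues)).fit(disp=0)
print("causal log-odds estimate:", stage2.params[1])
```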
Abstract:
The Two State model describes how drugs activate receptors by inducing or supporting a conformational change in the receptor from “off” to “on”. The beta 2 adrenergic receptor system is the model system that was used to formalize the concept of two states, and the mechanism of hormone agonist stimulation of this receptor is similar to ligand activation of other seven-transmembrane receptors. Hormone binding to beta 2 adrenergic receptors stimulates the intracellular production of cyclic adenosine monophosphate (cAMP), which is mediated through the stimulatory guanyl nucleotide binding protein (Gs) interacting with the membrane-bound enzyme adenylyl cyclase (AC). The effects of cAMP include protein phosphorylation, metabolic regulation and transcriptional regulation. The beta 2 adrenergic receptor system is the best-characterized member of its family of G protein coupled receptors. Ligands at this receptor have been scrutinized extensively in search of more effective therapeutic agents as well as for insight into the biochemical mechanism of receptor activation. Hormone binding to the receptor is thought to induce a conformational change in the receptor that increases its affinity for inactive Gs, catalyzes the release of GDP and the subsequent binding of GTP, and activates Gs. However, some beta 2 ligands are more efficient at this transformation than others, and the underlying mechanism for this drug specificity is not fully understood. The central problem in pharmacology is the characterization of drugs by their effects on physiological systems; consequently, the search for a rational scale of drug effectiveness has occupied many investigators and continues to the present time as models are proposed, tested and modified. The major results of this thesis show that for many beta 2 adrenergic ligands the Two State model is quite adequate to explain their activity, but dobutamine ((±)-3,4-dihydroxy-N-[3-(4-hydroxyphenyl)-1-methylpropyl]-β-phenethylamine) fails to conform to the predictions of the Two State model. It is a weak partial agonist, yet it forms a large amount of high-affinity complexes, and these complexes are formed at low concentrations much better than at higher concentrations. Finally, dobutamine causes the beta 2 adrenergic receptor to form high-affinity complexes at a much faster rate than can be accounted for by its low efficiency in activating AC. Because the Two State model fails to predict the activity of dobutamine in three different ways, it has been disproven in its strictest form.
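The quantitative content of the Two State model can be summarized in a short sketch: with an equilibrium constant L between inactive and active receptor and a ligand that binds the active state alpha-fold more tightly, the active receptor fraction follows directly from the binding equilibria. The parameter values below are illustrative, not fitted to the data of this thesis.

```python
import numpy as np

def active_fraction(conc, K=1e6, alpha=50.0, L=1e-3):
    """Fraction of receptors in the active state at ligand concentration conc (M).

    K     -- association constant of the ligand for the inactive state (1/M)
    alpha -- ratio of active-state to inactive-state affinity (>1 for agonists)
    L     -- basal equilibrium constant [R*]/[R] in the absence of ligand
    """
    occ = K * conc                       # ligand occupancy factor for the inactive state
    return L * (1 + alpha * occ) / (1 + occ + L * (1 + alpha * occ))

for c in [0, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5]:
    print(f"[ligand] = {c:.0e} M  ->  active fraction = {active_fraction(c):.4f}")
```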
Abstract:
Ocean acidification, the result of increased dissolution of carbon dioxide (CO2) in seawater, is a leading subject of current research. The effects of acidification on non-calcifying macroalgae are, however, still unclear. The current study reports two 1-month experiments using two different macroalgae, the red alga Palmaria palmata (Rhodophyta) and the kelp Saccharina latissima (Phaeophyta), exposed to control (pH(NBS) = 8.04) and increased (pH(NBS) = 7.82) levels of CO2-induced seawater acidification. The impacts of both increased acidification and time of exposure on net primary production (NPP), respiration (R), dimethylsulphoniopropionate (DMSP) concentrations, and algal growth were assessed. In P. palmata, although NPP significantly increased during the testing period, it significantly decreased with acidification, whereas R showed a significant decrease with acidification only. In S. latissima, NPP significantly increased with acidification but not with time, and R significantly increased with both acidification and time, suggesting a concomitant increase in gross primary production. The DMSP concentrations of both species remained unchanged by acidification and through time during the experimental period. In contrast, algal growth differed markedly between the two experiments: P. palmata showed very little growth throughout the experiment, while S. latissima showed substantial growth during the course of the study, with the latter showing a significant difference between the acidified and control treatments. These two experiments suggest that the study species were resistant to short-term exposure to ocean acidification, with some of the differences seen between species possibly linked to different nutrient concentrations between the experiments.
Abstract:
Chinese scientists will start to drill a deep ice core at Kunlun station near Dome A in the near future. Recent work has predicted that Dome A is a location where ice older than 1 million years can be found. We model flow, temperature and the age of the ice by applying a three-dimensional, thermomechanically coupled full-Stokes model to a 70 × 70 km² domain around Kunlun station, using isotropic non-linear rheology and different prescribed anisotropic ice fabrics that vary the evolution from isotropic to single maximum at 1/3 or 2/3 depths. The variation in fabric is about as important as the uncertainties in geothermal heat flux in determining the vertical advection which in consequence controls both the basal temperature and the age profile. We find strongly variable basal ages across the domain since the ice varies greatly in thickness, and any basal melting effectively removes very old ice in the deepest parts of the subglacial valleys. Comparison with dated radar isochrones in the upper one third of the ice sheet cannot sufficiently constrain the age of the deeper ice, with uncertainties as large as 500 000 years in the basal age. We also assess basal age and thermal state sensitivities to geothermal heat flux and surface conditions. Despite expectations of modest changes in surface height over a glacial cycle at Dome A, even small variations in the evolution of surface conditions cause large variation in basal conditions, which is consistent with basal accretion features seen in radar surveys.
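As a much-simplified illustration of why vertical advection controls the age profile (not the 3-D full-Stokes model used in the study), the age at a given depth can be obtained by integrating the reciprocal of an assumed vertical velocity profile; all numbers below are illustrative.

```python
import numpy as np

H = 3000.0            # ice thickness (m), illustrative
acc = 0.02            # surface accumulation (m ice / yr), illustrative
z = np.linspace(0.0, 0.999 * H, 5000)    # depth below surface (m)

# Simple analytical profile: vertical velocity decreases from the accumulation
# rate at the surface toward ~0 at the bed.
w = acc * (1.0 - z / H) ** 1.5 + 1e-6    # small offset avoids division by zero

# Age at depth z is the integral of dz'/w(z') from the surface downwards.
age = np.concatenate(([0.0], np.cumsum(np.diff(z) / w[1:])))  # years

for frac in (0.5, 0.9, 0.99):
    idx = np.searchsorted(z, frac * H)
    print(f"age at {frac:.0%} depth: {age[idx] / 1e3:.0f} ka")
```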
Abstract:
Matlab script file of a two-dimensional (2-D) peat microtopographical model together with other supplementary files that are required to run the model.
Abstract:
Greenland ice core records indicate that the last deglaciation (~7-21 ka) was punctuated by numerous abrupt climate reversals involving temperature changes of up to 5°C-10°C within decades. However, the cause behind many of these events is uncertain. A likely candidate may have been the input of deglacial meltwater, from the Laurentide ice sheet (LIS), to the high-latitude North Atlantic, which disrupted ocean circulation and triggered cooling. Yet the direct evidence of meltwater input for many of these events has so far remained undetected. In this study, we use the geochemistry (paired Mg/Ca-δ18O) of planktonic foraminifera from a sediment core south of Iceland to reconstruct the input of freshwater to the northern North Atlantic during abrupt deglacial climate change. Our record can be placed on the same timescale as ice cores and therefore provides a direct comparison between the timing of freshwater input and climate variability. Meltwater events coincide with the onset of numerous cold intervals, including the Older Dryas (14.0 ka), two events during the Allerød (at ~13.1 and 13.6 ka), the Younger Dryas (12.9 ka), and the 8.2 ka event, supporting a causal link between these abrupt climate changes and meltwater input. During the Bølling-Allerød warm interval, we find that periods of warming are associated with an increased meltwater flux to the northern North Atlantic, which in turn induces abrupt cooling, a cessation in meltwater input, and eventual climate recovery. This implies that feedback between climate and meltwater input produced a highly variable climate. A comparison to published data sets suggests that this feedback likely included fluctuations in the southern margin of the LIS causing rerouting of LIS meltwater between southern and eastern drainage outlets, as proposed by Clark et al. (2001, doi:10.1126/science.1062517).
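The paired Mg/Ca-δ18O approach can be sketched in a few lines: Mg/Ca yields calcification temperature, which is then used to remove the temperature component from calcite δ18O and recover seawater δ18O, the freshwater/meltwater tracer. The calibration constants below are illustrative placeholders and not necessarily those used in this study.

```python
import numpy as np

def temperature_from_mgca(mgca, a=0.38, b=0.09):
    """Invert an exponential Mg/Ca calibration of the form Mg/Ca = a * exp(b * T)."""
    return np.log(mgca / a) / b

def d18o_seawater(d18o_calcite, temp_c, slope=4.8, intercept=16.5):
    """Rearranged linear palaeotemperature equation T = intercept - slope*(d18Oc - d18Ow)."""
    return d18o_calcite - (intercept - temp_c) / slope

mgca = np.array([1.10, 1.05, 0.95])    # mmol/mol, hypothetical downcore values
d18oc = np.array([2.1, 2.3, 2.6])      # per mil VPDB, hypothetical

temp = temperature_from_mgca(mgca)
d18osw = d18o_seawater(d18oc, temp)
print("temperature (degC):", np.round(temp, 2))
print("seawater d18O (per mil):", np.round(d18osw, 2))  # more negative => more meltwater
```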
Abstract:
A two-dimensional finite element model of current flow in the front surface of a PV cell is presented. In order to validate this model, we perform an experimental test. Particular attention is then paid to the effects of non-uniform illumination in the finger direction, which is typical of a linear concentrator system. The fill factor, open circuit voltage and efficiency are shown to decrease with an increasing degree of non-uniform illumination. It is shown that these detrimental effects can be mitigated significantly by re-optimizing the number of front surface metallization fingers to suit the degree of non-uniformity. The behavior of current flow in the front surface of a cell operating at open circuit voltage under non-uniform illumination is discussed in detail.
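As a rough illustration of the kind of calculation involved (a coarse finite-difference analogue, not the paper's finite element model), the sketch below solves a Poisson-type equation for the emitter sheet voltage under a non-uniform illumination profile along the finger direction; the geometry, sheet resistance and illumination values are illustrative.

```python
import numpy as np

nx, ny = 60, 40
dx = 2e-5                      # grid spacing (m), illustrative
rho_sheet = 100.0              # emitter sheet resistance (ohm/sq), illustrative
x = np.linspace(0.0, 1.0, nx)

# Non-uniform illumination along the finger direction (Gaussian profile),
# expressed as a photogenerated current density (A/m^2).
J = 400.0 * np.exp(-((x - 0.5) / 0.2) ** 2)
J = np.tile(J, (ny, 1))

# Solve div(grad V) = -rho_sheet * J by Jacobi relaxation.
V = np.zeros((ny, nx))
for _ in range(20_000):
    Vn = V.copy()
    Vn[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                             V[1:-1, 2:] + V[1:-1, :-2] +
                             dx**2 * rho_sheet * J[1:-1, 1:-1])
    Vn[:, 0] = 0.0                            # finger/busbar edge held at fixed potential
    Vn[:, -1] = Vn[:, -2]                     # remaining edges: zero lateral current
    Vn[0, :] = Vn[1, :]
    Vn[-1, :] = Vn[-2, :]
    diff, V = np.max(np.abs(Vn - V)), Vn
    if diff < 1e-9:
        break

print("max lateral voltage drop in the sheet: %.1f mV" % (1e3 * V.max()))
```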
Abstract:
The influence of applying European default traffic values when producing a noise map was evaluated in a typical environment, Palma de Mallorca. To assess these default traffic values, a first model was created and compared with measured noise levels. Subsequently, a second traffic model, improving the input data used for the first one, was created and validated according to the observed deviations. Different methodologies for collecting higher-quality model input data were also examined, by analysing how much each reduced the uncertainty that road traffic noise emission introduces into the noise map.
Abstract:
Independent Component Analysis is a Blind Source Separation method that aims to find the pure source signals mixed together in unknown proportions in the observed signals under study. It does this by searching for factors which are mutually statistically independent, and it can thus be classified among the latent-variable based methods. Like other methods based on latent variables, a careful investigation has to be carried out to find out which factors are significant and which are not. It is therefore important to have a validation procedure to decide on the optimal number of independent components to include in the final model. This can be complicated by the fact that two consecutive models may differ in the order and signs of similarly-indexed ICs. Moreover, the structure of the extracted sources can change as a function of the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
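The order/sign ambiguity mentioned above can be illustrated with a short sketch: two FastICA models computed from different random starts are compared by matching components through their absolute correlations. The data are simulated, and this is not the validation procedure proposed in the article.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]  # sources
A = rng.normal(size=(3, 3))                                                 # mixing matrix
X = S @ A.T                                                                 # observed mixtures

est1 = FastICA(n_components=3, random_state=0).fit_transform(X)
est2 = FastICA(n_components=3, random_state=1).fit_transform(X)

# Correlate every component of model 1 with every component of model 2; a value
# near |r| = 1 means "same source", regardless of its index or its sign.
corr = np.corrcoef(est1.T, est2.T)[:3, 3:]
print(np.round(np.abs(corr), 2))
```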
Abstract:
Research has been carried out on two-lane highways in the Madrid Region to propose an alternative model for the speed-flow relationship using regular loop data. The model is different in shape and, in some cases, in slope with respect to the contents of the Highway Capacity Manual (HCM). A model is proposed for a mountainous area road, something for which the HCM does not explicitly provide a solution. The problem of a mountain road carrying high flows to a popular recreational area is discussed, and some solutions are proposed. Seven one-way sections of two-lane highways were selected, aiming to cover a significant number of different characteristics and to verify the proposed method on the different classes of highway into which the Manual classifies them. In order to formulate the model and to verify the basic variables of these types of roads, a large amount of data was used. The counts were collected in the same way that the Madrid Region Highway Agency performs its counts. A total of 1,471 hours were collected, in 5-minute periods. The models have been verified by means of specific statistical tests (R², Student's t, Durbin-Watson, ANOVA, etc.) and with diagnostics of the model assumptions (normality, linearity, homoscedasticity and independence). The model proposed for this type of highway under base conditions can explain the different behaviors as traffic volumes increase, and follows a third-order (S-shaped) polynomial multiple regression model. As secondary results of this research, the levels of service and the capacities of this road have been measured with the 2000 HCM methodology, and the results are discussed.
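A minimal sketch of fitting such a third-order (S-shaped) speed-flow curve to 5-minute aggregates is shown below; the data are synthetic and the numbers are hypothetical, not the Madrid Region measurements.

```python
import numpy as np

# Synthetic 5-minute speed-flow observations (hypothetical values).
rng = np.random.default_rng(3)
flow = rng.uniform(50, 1400, size=800)                      # veh/h per direction
speed = (95 - 0.02 * flow + 1.5e-5 * flow**2 - 1.2e-8 * flow**3
         + rng.normal(scale=3.0, size=flow.size))           # km/h

# Cubic (order-3) polynomial regression of speed on flow.
coeffs = np.polyfit(flow, speed, deg=3)
pred = np.polyval(coeffs, flow)
ss_res = np.sum((speed - pred) ** 2)
ss_tot = np.sum((speed - speed.mean()) ** 2)
print("coefficients (deg 3 .. 0):", np.round(coeffs, 10))
print("R^2:", round(1 - ss_res / ss_tot, 3))
```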
Abstract:
This paper describes a model of persistence in (C)LP languages and two different and practically very useful ways to implement this model in current systems. The fundamental idea is that persistence is a characteristic of certain dynamic predicates (i.e., those which encapsulate state). The main effect of declaring a predicate persistent is that the dynamic changes made to such predicates persist from one execution to the next. After proposing a syntax for declaring persistent predicates, a simple, file-based implementation of the concept is presented and some examples are shown. An additional implementation is presented which stores persistent predicates in an external database. The abstraction of the concept of persistence from its implementation allows developing applications which can store their persistent predicates alternatively in files or databases with only a few simple changes to a declaration stating the location and modality used for persistent storage. The paper presents the model, the implementation approach in both the file-based and relational database cases, a number of optimizations of the process (using information obtained from static global analysis and goal clustering), and performance results from an implementation of these ideas.
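As a loose analogy only (the paper's mechanism is a (C)LP declaration, not this Python API), the sketch below shows the same abstraction: a set of persistent facts whose updates survive between executions, with interchangeable file-based and relational-database backends selected by a single declaration-like line.

```python
import json
import sqlite3
from pathlib import Path

class FilePersistentFacts:
    """File-based backend: facts are kept in a JSON file between runs."""
    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []
    def assertz(self, fact):
        self.facts.append(list(fact))
        self.path.write_text(json.dumps(self.facts))
    def query(self):
        return [tuple(f) for f in self.facts]

class DbPersistentFacts:
    """Database backend with the same interface, so callers need not change."""
    def __init__(self, db, table):
        self.con, self.table = sqlite3.connect(db), table
        self.con.execute(f"CREATE TABLE IF NOT EXISTS {table} (a TEXT, b TEXT)")
    def assertz(self, fact):
        self.con.execute(f"INSERT INTO {self.table} VALUES (?, ?)", fact)
        self.con.commit()
    def query(self):
        return list(self.con.execute(f"SELECT a, b FROM {self.table}"))

# Only this "declaration" changes when switching storage location/modality:
phone = FilePersistentFacts("phone_facts.json")   # or DbPersistentFacts("app.db", "phone")
phone.assertz(("alice", "555-0100"))
print(phone.query())                              # facts are still there on the next run
```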
Abstract:
The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated taking into account models with different degrees of detail. In particular, we have studied this matter from a two-dimensional (2D) finite element model up to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions and are taken into account in the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem arising from the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
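The non-linear Hertz contact mechanism mentioned above can be illustrated with a one-line law: the normal wheel-rail force grows as the 3/2 power of the interpenetration and vanishes on loss of contact. The contact constant in the sketch below is only an order-of-magnitude illustration, not a value from this work.

```python
import numpy as np

C_H = 1.0e11        # Hertzian constant (N/m^1.5), illustrative wheel-rail magnitude

def hertz_force(delta):
    """Normal contact force for interpenetration delta (m); zero if wheel and rail separate."""
    return np.where(delta > 0.0, C_H * np.maximum(delta, 0.0) ** 1.5, 0.0)

for delta in [-1e-5, 0.0, 2e-5, 5e-5, 1e-4]:
    F = float(hertz_force(delta))
    print(f"delta = {delta: .0e} m  ->  F = {F / 1e3:8.1f} kN")
```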
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small to medium dimensionality, using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
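A compact sketch of one generation of an EDA with ℓ1-regularized (sparse) Gaussian model estimation, in the spirit described above, is given below using scikit-learn's graphical lasso; the objective function, population sizes and regularization strength are illustrative choices, not those of the thesis.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)
dim, pop_size, n_gen = 30, 100, 10
sphere = lambda x: np.sum(x ** 2, axis=1)          # toy objective to minimize

pop = rng.uniform(-5, 5, size=(pop_size, dim))
for gen in range(n_gen):
    fitness = sphere(pop)
    elite = pop[np.argsort(fitness)[: pop_size // 2]]        # truncation selection
    model = GraphicalLasso(alpha=0.1).fit(elite)             # sparse Gaussian estimate
    pop = rng.multivariate_normal(model.location_, model.covariance_, size=pop_size)

print("best fitness after", n_gen, "generations:", sphere(pop).min())
```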
Abstract:
The purpose of this paper is to use predictive control to take advantage of future information in order to improve reference tracking. The control attempts to increase the bandwidth of conventional regulators by using future information about the reference, which is assumed to be known in advance. A method for designing such a controller is also proposed. A comparison in simulation with a conventional regulator is made by controlling a four-phase buck converter. Advantages and disadvantages are analyzed based on simulation results.
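A minimal sketch of the idea, assuming an averaged buck-converter model and an unconstrained finite-horizon formulation, is shown below: the controller uses a preview of the future reference, taken as known in advance, to compute the duty cycle. Component values, horizon and weights are illustrative, and this is not the specific design method proposed in the paper.

```python
import numpy as np

# Averaged buck output filter: states x = [inductor current, capacitor voltage].
L_f, C_f, R_load, Vin, Ts = 50e-6, 100e-6, 1.0, 12.0, 10e-6
A = np.array([[1.0, -Ts / L_f],
              [Ts / C_f, 1.0 - Ts / (R_load * C_f)]])
B = np.array([[Ts * Vin / L_f], [0.0]])
C = np.array([[0.0, 1.0]])

N = 20                                   # prediction horizon (samples)
# Prediction matrices so that y_future = F x_k + G u_future.
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

lam = 1e-3                               # control-effort weight
x = np.zeros((2, 1))
ref = np.concatenate([np.full(200, 3.3), np.full(300, 5.0)])   # future reference (V)
out = []
for k in range(len(ref) - N):
    r_future = ref[k + 1 : k + 1 + N].reshape(-1, 1)           # preview of the reference
    # Unconstrained finite-horizon least squares: min ||G u + F x - r||^2 + lam ||u||^2.
    u_seq = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r_future - F @ x))
    u = u_seq[0, 0]                                            # apply the first move only
    x = A @ x + B * u
    out.append((C @ x)[0, 0])

print("output near the reference step:", [round(v, 2) for v in out[195:205]])
```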