960 results for dynamic causal modeling
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
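As a rough illustration of the kernel-smoothing step, the sketch below estimates a conditional moment E[y_t | y_{t-1}] from a long simulation via Nadaraya-Watson regression. The AR(1) process, bandwidth, and function names are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def kernel_conditional_moment(x, y, x0, bw):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / bw) ** 2)
    return np.sum(w * y) / np.sum(w)

# Long simulation of an illustrative AR(1) model: y_t = 0.8 y_{t-1} + e_t.
rng = np.random.default_rng(0)
n = 200_000
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + eps[t]

# Conditional moment at a trial point: E[y_t | y_{t-1} = 1] is 0.8 here.
m_hat = kernel_conditional_moment(y[:-1], y[1:], x0=1.0, bw=0.1)
```

Because the smoother only needs (x, y) pairs from the simulation, no conditional simulation is required, which is the point the abstract makes about general DLV models.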
Dynamic Stackelberg game with risk-averse players: optimal risk-sharing under asymmetric information
Abstract:
The objective of this paper is to clarify the interactive nature of the leader-follower relationship when both players are endogenously risk-averse. The analysis is placed in the context of a dynamic closed-loop Stackelberg game with private information. The case of a risk-neutral leader, often discussed in the literature, is only a borderline possibility in the present study. Each player in the game is characterized by a risk-aversion type which is unknown to his opponent. The goal of the leader is to implement an optimal incentive-compatible risk-sharing contract. The proposed approach provides a qualitative analysis of adaptive risk behavior profiles for asymmetrically informed players in the context of dynamic strategic interactions modelled as incentive Stackelberg games.
Abstract:
The objective of this paper is to re-examine risk and effort attitudes in the context of strategic dynamic interactions stated as a discrete-time finite-horizon Nash game. The analysis is based on the assumption that players are endogenously risk- and effort-averse. Each player is characterized by distinct risk- and effort-aversion types that are unknown to his opponent. The goal of the game is optimal risk- and effort-sharing between the players. It generally depends on the individual strategies adopted and, implicitly, on the players' types or characteristics.
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity--which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error--renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that render causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
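One of the remedies listed above, the instrumental-variable estimator, can be sketched numerically. In the simulation below (the data-generating process and variable names are hypothetical), OLS is biased by an unobserved confounder while a simple IV estimator recovers the true coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
z = rng.standard_normal(n)           # instrument: shifts x, no direct effect on y
u = rng.standard_normal(n)           # unobserved confounder (omitted variable)
x = z + u + rng.standard_normal(n)   # endogenous regressor
y = 1.0 * x + u                      # true causal effect of x on y is 1.0

beta_ols = np.sum(x * y) / np.sum(x * x)  # biased: picks up cov(x, u)
beta_iv = np.sum(z * y) / np.sum(z * x)   # simple IV (Wald) estimator
```

With this design, cov(x, u) inflates the OLS slope by roughly 1/3, whereas the IV estimator is consistent because z is uncorrelated with u.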
Abstract:
Jervell and Lange-Nielsen syndrome (JLNS) is an autosomal recessive disorder, clinically characterized by severe cardiac arrhythmias [due to prolonged QTc interval in electrocardiogram (ECG)] and bilateral sensorineural deafness. Molecular defects causal to JLNS are either homozygous or compound heterozygous mutations, predominantly in the KCNQ1 gene and occasionally in the KCNE1 gene. As the molecular defect is bi-allelic, JLNS patients inherit one pathogenic mutation causal to the disorder from each parent. In this report, we show for the first time that such a disorder could also occur due to a spontaneous de novo mutation in the affected individual, not inherited from either parent, which makes this case unique among previously reported JLNS cases.
Abstract:
This study investigates the in vitro growth of human urinary tract smooth muscle cells under static conditions and mechanical stimulation. The cells were cultured on collagen type I- and laminin-coated silicone membranes. Using a Flexcell device for mechanical stimulation, a cyclic strain of 0-20% was applied in a strain-stress-time model (stretch 104 min, relaxation 15 s), imitating physiological bladder filling and voiding. Cell proliferation and alpha-actin, calponin, and caldesmon phenotype marker expression were analyzed. Nonstretched cells showed significantly better growth on laminin during the first 8 days, thereafter becoming comparable to cells grown on collagen type I. Cyclic strain significantly reduced cell growth on both surfaces; however, better growth was observed on laminin. Neither the type of surface nor mechanical stimulation influenced the expression pattern of phenotype markers; alpha-actin was predominantly expressed. Coating with the extracellular matrix protein laminin improved the in vitro growth of human urinary tract smooth muscle cells.
Abstract:
1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
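The error treatment described in point 2 amounts to jittering each occurrence coordinate with zero-mean Gaussian noise. A minimal sketch, assuming planar coordinates in kilometres (function and parameter names are ours):

```python
import numpy as np

def degrade_coordinates(coords_km, sd_km=5.0, rng=None):
    """Simulate locational error: shift each coordinate axis independently
    by a draw from N(0, sd_km), as in the paper's error treatment."""
    rng = np.random.default_rng(0) if rng is None else rng
    return coords_km + rng.normal(0.0, sd_km, size=coords_km.shape)

# Hypothetical "true" occurrence coordinates, degraded with sd = 5 km.
occurrences = np.zeros((50_000, 2))
degraded = degrade_coordinates(occurrences)
```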
Abstract:
The life-cycle parameters of the snail Lymnaea (Radix) luteola and the factors influencing them have been studied under laboratory conditions. In each month from July 1990 to June 1991, a batch of 100 zero-day-old individuals was considered for study. The snails of the April batch survived for 19.42 days, while those of the December batch survived for 87.45 days. The May batch individuals, though surviving for 65.67 days, gained the maximum shell size (15.84 mm in length) and body weight (419.87 mg). All individuals of the April batch died prior to attaining sexual maturity. In the remaining 11 batches the snails became sexually mature between 32 and 53 days, at which stage their shell lengths varied from 9.3 mm to 13.11 mm across batches. The reproduction period varied from 1 to 67 days. An individual laid, on average, 0.25 (March batch) to 443.67 (May batch) eggs in its life-span. The batches would leave, respectively, 24312, 22520, 720268, 80408, 76067, 418165, 214, 9202, 0, 0, 2459386 and 127894 individuals at the end of the 352nd day. Since the environmental conditions were almost similar, the 'dynamic' of population dynamics seems to involve the 'strain' of the snail individuals of the batches concerned.
Abstract:
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
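The notions of asynchronous updating and state transition graphs can be illustrated with a toy two-gene mutual-inhibition switch. This is our own minimal example, not the CD4+ T cell model, and it detects fixed-point attractors only; cyclic attractors require a terminal strongly-connected-component search, as performed in GINsim:

```python
from itertools import product

# Toy two-gene mutual-inhibition switch: each gene is ON (1) iff its
# inhibitor is OFF (0).
rules = [lambda s: 1 - s[1], lambda s: 1 - s[0]]

def async_successors(state):
    """Asynchronous updating: at most one component changes per transition."""
    succ = []
    for i, f in enumerate(rules):
        v = f(state)
        if v != state[i]:
            succ.append(state[:i] + (v,) + state[i + 1:])
    return succ

# State transition graph over all 2^2 states.
stg = {s: async_successors(s) for s in product((0, 1), repeat=2)}

# Fixed-point attractors: states with no outgoing transitions.
fixed_points = [s for s, succ in stg.items() if not succ]
```

The two fixed points (0, 1) and (1, 0) are the two stable expression patterns of the switch, a standard result for mutual inhibition.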
Abstract:
The main benefit of having a representation of causal power (Cheng, 1997) is that it provides a context-independent description of the influence of a given cause on the effect. Accordingly, a suitable way to test the existence of these mental models is to create situations in which people observe or predict the effectiveness of target causes in multiple contexts. The trans-situational nature of power carries a series of testable consequences, which we examined across three experimental series. In the first experimental series we investigated the transfer of causal strength, learned in a specific context, to a context in which the probability or base rate of the effect is different. Participants had to predict the probability of the effect given the introduction of the cause in the new context. In the second experimental series we studied the strategies people use when discovering causal relations. According to the causal power model, if we aim to discover the power of a cause, the most appropriate course is to introduce it in the most informative and least ambiguous context possible. Across the experiments of this series we combined both probabilistic and deterministic contexts and causes. In the third experimental series we attempted to extend the findings of Liljeholm & Cheng (2007), who found that generalization between contexts occurs as predicted by the power model. It seems likely that the two-phase procedure used by those authors promotes a tendency to ignore some trials, artificially generating results consistent with those expected under power. Moreover, when we controlled P(E|C) independently of power, the pattern of results reversed, contradicting the predictions of Cheng's model.
In conclusion, there is some evidence supporting the existence of causal models, but adequate ways of testing these models still need to be found.
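For reference, Cheng's (1997) generative causal power and its noisy-OR transfer to a context with a different base rate can be written in a few lines. This is a sketch of the standard formulas; the function names are ours:

```python
def causal_power(p_e_c, p_e_notc):
    """Cheng's (1997) generative causal power:
    (P(E|C) - P(E|~C)) / (1 - P(E|~C))."""
    return (p_e_c - p_e_notc) / (1.0 - p_e_notc)

def predict_p_e(power, base_rate):
    """Noisy-OR transfer of a learned power to a new base rate:
    P(E|C) = power + base_rate * (1 - power)."""
    return power + base_rate * (1.0 - power)
```

For example, a cause with power 0.5 yields P(E|C) = 0.5 in a zero-base-rate context but P(E|C) = 0.7 when the base rate is 0.4, which is exactly the context-to-context prediction the first experimental series tests.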
Abstract:
Fuel cells efficiently transform the chemical energy of certain fuels into electrical energy through an electrochemical process. Among the different fuel cell technologies, PEM fuel cells are the most competitive and have a wide variety of applications. However, they must be fed exclusively with hydrogen. Ethanol, an attractive fuel in the context of renewable fuels, is a possible source of hydrogen. This work studies ethanol reforming to obtain hydrogen to feed PEM fuel cells. Only a few publications deal with obtaining hydrogen from ethanol, and these do not include a dynamic study of the system. The objectives of this work are the modeling and dynamic study of low-temperature ethanol reformers. Specifically, it proposes a dynamic model of a catalytic ethanol steam reformer based on a cobalt catalyst. This reforming process achieves high efficiency and carbon monoxide levels low enough to avoid poisoning a PEM fuel cell. The nonlinear model is based on kinetics obtained from laboratory experiments. The modeled reformer operates in three stages: dehydrogenation of ethanol to acetaldehyde and hydrogen, steam reforming of acetaldehyde, and the water-gas shift (WGS) reaction. The work also studies the sensitivity and controllability of the system, thereby characterizing the system to be controlled. The controllability analysis is performed on the fast dynamic response obtained from the reformer's mass balance. The nonlinear model is linearized in order to apply analysis tools such as RGA, CN and MRI. The work provides the information needed to evaluate the possible laboratory implementation of PEM fuel cells fed by hydrogen from an ethanol reformer.
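Of the controllability tools mentioned, the Relative Gain Array (RGA) is straightforward to compute from a steady-state gain matrix. A minimal sketch; the example gain matrix is hypothetical, not the reformer's:

```python
import numpy as np

def rga(G):
    """Relative Gain Array of a square gain matrix G:
    the elementwise product of G with the transpose of its inverse."""
    return G * np.linalg.inv(G).T

# Hypothetical 2x2 steady-state gain matrix for illustration only.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Lam = rga(G)
```

Rows and columns of the RGA always sum to 1; diagonal entries near 1 suggest a decentralized input-output pairing with little interaction.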
Abstract:
An accurate sense of time contributes to functions ranging from the perception and anticipation of sensory events to the production of coordinated movements. However, accumulating evidence demonstrates that time perception is subject to strong illusory distortion. In two experiments, we investigated whether the subjective speed of temporal perception is dependent on our visual environment. By presenting human observers with speed-altered movies of a crowded street scene, we modulated performance on subsequent production of "20s" elapsed intervals. Our results indicate that one's visual environment significantly contributes to calibrating our sense of time, independently of any modulation of arousal. This plasticity generates an assay for the integrity of our sense of time and its rehabilitation in clinical pathologies.
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and prior knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels that incorporates the notion that several models can be supported by the data and makes it possible to evaluate, to a certain extent, model selection uncertainty, which is seldom mentioned in current practice.
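The Akaike-weight computation at the core of the Burnham-Anderson approach is short enough to sketch directly; the AIC values and per-model effect estimates below are illustrative, not the Lausanne data:

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: relative support for each model in the set,
    computed from AIC differences Delta_i = AIC_i - min(AIC)."""
    delta = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values and effect estimates for one determinant.
aic = [100.0, 102.0, 110.0]
beta = [0.50, 0.62, 0.40]
w = akaike_weights(aic)
beta_avg = float(np.dot(w, beta))  # multimodel-averaged effect
```

The weights sum to 1 and decay by a factor exp(-Delta/2), so a model 10 AIC units behind the best contributes almost nothing to the averaged effect.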
Abstract:
Aim: To investigate static and dynamic visuospatial working memory (VSWM) processes in first-episode psychosis (FEP) patients and explore the validity of such measures as specific trait markers of schizophrenia. Methods: Twenty FEP patients and 20 age-, sex-, laterality- and education-matched controls carried out a dynamic and static VSWM paradigm. At 2-year follow-up, 13 patients met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria for schizophrenia, 1 for bipolar disorder, 1 for brief psychotic episode and 5 for schizotypal personality disorder. Results: Compared with controls, the 20 FEP patients showed severe impairment in the dynamic VSWM condition but much less impairment in the static condition. No specific bias in stimulus selection was detected in the two tasks. Two-year follow-up evaluations suggested that poorer baseline scores on the dynamic task clearly differentiated the 13 FEP patients who developed schizophrenia from the seven who did not. Conclusions: Results suggest deficits in VSWM in FEP patients. Specific exploratory analyses further suggest that a deficit in monitoring-manipulation VSWM processes, especially involved in our dynamic VSWM task, can be a reliable marker of schizophrenia.