888 results for data-driven simulation
Abstract:
This paper attempts to estimate the impact of population ageing on house prices. There is considerable debate about whether population ageing puts downward or upward pressure on house prices. The empirical approach differs from earlier studies of this relationship, which are mainly regression analyses of macro time-series data. A micro-simulation methodology is adopted that combines a macro-level house price model with a micro-level household formation model. The case study is Scotland, a country that is expected to age rapidly in the future. The parameters of the household formation model are estimated with panel data from the British Household Panel Survey covering the period 1999-2008. The estimates are then used to carry out a set of simulations. The simulations are based on a set of population projections that represent a considerable range in the rate of population ageing. The main finding from the simulations is that population ageing, or more generally a change in age structure, is unlikely to be a main determinant of house prices, at least in Scotland.
Abstract:
Using survey expectations data and Markov-switching models, this paper evaluates the characteristics and evolution of investors' forecast errors about the yen/dollar exchange rate. Since our model is derived from the uncovered interest rate parity (UIRP) condition and our data cover a period of low interest rates, this study is also related to the forward premium puzzle and the currency carry trade strategy. We obtain the following results. First, with the same forecast horizon, exchange rate forecasts are homogeneous among different industry types, but within the same industry, exchange rate forecasts differ if the forecast time horizon is different. In particular, investors tend to undervalue the future exchange rate for long term forecast horizons; however, in the short run they tend to overvalue the future exchange rate. Second, while forecast errors are found to be partly driven by interest rate spreads, evidence against the UIRP is provided regardless of the forecasting time horizon; the forward premium puzzle becomes more significant in shorter term forecasting errors. Consistent with this finding, our coefficients on interest rate spreads provide indirect evidence of the yen carry trade over only a short term forecast horizon. Furthermore, the carry trade seems to be active when there is a clear indication that the interest rate will be low in the future.
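As a pointer to the relationship being tested, a generic statement of the UIRP condition and the associated forecast regression is sketched below; the notation is ours and not necessarily the paper's exact specification.

```latex
% Generic UIRP condition and test regression (a sketch; notation is ours)
\begin{align}
  \text{UIRP:}\quad & E_t[s_{t+k}] - s_t = i_{t,k} - i^{*}_{t,k},\\
  \text{test:}\quad & s_{t+k} - s_t = \alpha + \beta\,(i_{t,k} - i^{*}_{t,k}) + \varepsilon_{t+k},
\end{align}
```

Here $s_t$ is the log spot rate and $i_{t,k}$, $i^{*}_{t,k}$ are the domestic and foreign interest rates over horizon $k$; UIRP with rational expectations implies $\alpha = 0$ and $\beta = 1$, while the forward premium puzzle corresponds to estimates of $\beta$ well below one, often negative.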
Abstract:
Excessive exposure to solar ultraviolet (UV) is the main cause of skin cancer. Specific prevention should be further developed to target overexposed or highly vulnerable populations. A better characterisation of anatomical UV exposure patterns is, however, needed for specific prevention. The aim was to develop a regression model for predicting the UV exposure ratio (ER, the ratio between the anatomical dose and the corresponding ground level dose) for each body site without requiring individual measurements. A 3D numeric model (SimUVEx) was used to compute ER for various body sites and postures. A multiple fractional polynomial regression analysis was performed to identify predictors of ER. The regression model used simulation data and its performance was tested on an independent data set. Two input variables were sufficient to explain ER: the cosine of the maximal daily solar zenith angle and the fraction of the sky visible from the body site. The regression model was in good agreement with the simulated ER (R^2 = 0.988). Relative errors up to +20% and -10% were found in daily dose predictions, whereas an average relative error of only 2.4% (-0.03% to 5.4%) was found in yearly dose predictions. The regression model accurately predicts ER and UV doses on the basis of readily available data such as global UV erythemal irradiance measured at ground surface stations or inferred from satellite information. It renders the development of exposure data on a wide temporal and geographical scale possible and opens broad perspectives for epidemiological studies and skin cancer prevention.
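To make the structure of such a two-predictor exposure-ratio model concrete, the sketch below fits a plain polynomial least-squares model on synthetic data; the published model is a multiple fractional polynomial regression fitted to SimUVEx output, so the data, functional form, and coefficients here are illustrative assumptions only.

```python
# Hedged sketch: a simplified two-predictor model for the exposure ratio (ER).
# The paper fits a multiple fractional polynomial regression to SimUVEx output;
# here we use plain polynomial OLS on synthetic data purely to illustrate the
# structure (predictors: cosine of max daily solar zenith angle, sky-view fraction).
import numpy as np

rng = np.random.default_rng(0)
n = 500
cos_sza = rng.uniform(0.2, 1.0, n)     # cosine of maximal daily solar zenith angle
sky_view = rng.uniform(0.1, 1.0, n)    # fraction of sky visible from the body site
# Synthetic "true" ER, loosely shaped for illustration only (not SimUVEx output)
er = 0.1 + 0.5 * sky_view + 0.3 * cos_sza * sky_view + rng.normal(0, 0.02, n)

# Design matrix with low-order polynomial terms of the two predictors
X = np.column_stack([
    np.ones(n), cos_sza, sky_view, cos_sza * sky_view, sky_view ** 2,
])
coef, *_ = np.linalg.lstsq(X, er, rcond=None)
er_hat = X @ coef
r2 = 1 - np.sum((er - er_hat) ** 2) / np.sum((er - np.mean(er)) ** 2)
print("fitted coefficients:", np.round(coef, 3))
print("in-sample R^2:", round(r2, 3))
```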
Abstract:
Many new gene copies emerged by gene duplication in hominoids, but little is known with respect to their functional evolution. Glutamate dehydrogenase (GLUD) is an enzyme central to the glutamate and energy metabolism of the cell. In addition to the single, GLUD-encoding gene present in all mammals (GLUD1), humans and apes acquired a second GLUD gene (GLUD2) through retroduplication of GLUD1, which codes for an enzyme with unique, potentially brain-adapted properties. Here we show that whereas the GLUD1 parental protein localizes to mitochondria and the cytoplasm, GLUD2 is specifically targeted to mitochondria. Using evolutionary analysis and resurrected ancestral protein variants, we demonstrate that the enhanced mitochondrial targeting specificity of GLUD2 is due to a single positively selected glutamic acid-to-lysine substitution, which was fixed in the N-terminal mitochondrial targeting sequence (MTS) of GLUD2 soon after the duplication event in the hominoid ancestor approximately 18-25 million years ago. This MTS substitution arose in parallel with two crucial adaptive amino acid changes in the enzyme and likely contributed to the functional adaptation of GLUD2 to the glutamate metabolism of the hominoid brain and other tissues. We suggest that rapid, selectively driven subcellular adaptation, as exemplified by GLUD2, represents a common route underlying the emergence of new gene functions.
Abstract:
ABSTRACT: BACKGROUND: Cardiovascular magnetic resonance (CMR) has favorable characteristics for diagnostic evaluation and risk stratification of patients with known or suspected CAD. CMR utilization in CAD detection is growing rapidly. However, data on its cost-effectiveness are scarce. The goal of this study is to compare the costs of two strategies for detection of significant coronary artery stenoses in patients with suspected coronary artery disease (CAD): 1) performing CMR first to assess myocardial ischemia and/or infarct scar, and referring positive patients (defined as the presence of ischemia and/or infarct scar) to coronary angiography (CXA), versus 2) a hypothetical CXA performed in all patients as a single test to detect CAD. METHODS: A subgroup of the European CMR pilot registry was used, including 2,717 consecutive patients who underwent stress-CMR. Of these patients, 21% were positive for CAD (ischemia and/or infarct scar), 73% were negative, and 6% were uncertain and underwent additional testing. The diagnostic costs were evaluated using the invoicing costs of each test performed. Cost analysis was performed from a health care payer perspective in German, United Kingdom, Swiss, and United States health care settings. RESULTS: In the public sectors of the German, United Kingdom, and Swiss health care systems, cost savings from the CMR-driven strategy were 50%, 25%, and 23%, respectively, versus outpatient CXA. If CXA was carried out as an inpatient procedure, cost savings were 46%, 50%, and 48%, respectively. In the United States context, cost savings were 51% compared with inpatient CXA, whereas CMR was 8% more costly than outpatient CXA. CONCLUSION: This analysis suggests that, from an economic perspective, the use of CMR should be encouraged as a management option for patients with suspected CAD.
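The expected-cost arithmetic behind the comparison can be sketched as follows; only the branch probabilities (21% positive, 6% uncertain) come from the abstract, and the unit costs are hypothetical placeholders rather than the registry's invoicing figures.

```python
# Hedged sketch of the decision arithmetic: expected cost per patient of a
# CMR-first strategy versus CXA performed in all patients. The unit costs below
# are hypothetical placeholders, NOT the invoicing figures of the study; only
# the branch probabilities (21% positive, 6% uncertain) come from the abstract.
cost_cmr = 700.0          # hypothetical cost of a stress-CMR exam
cost_cxa = 2500.0         # hypothetical cost of (outpatient) coronary angiography
cost_additional = 400.0   # hypothetical cost of further testing after uncertain CMR

p_positive, p_uncertain = 0.21, 0.06

cmr_first = cost_cmr + p_positive * cost_cxa + p_uncertain * cost_additional
cxa_all = cost_cxa

print(f"CMR-first strategy: {cmr_first:.0f} per patient")
print(f"CXA-for-all strategy: {cxa_all:.0f} per patient")
print(f"relative saving: {100 * (1 - cmr_first / cxa_all):.0f}%")
```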
Abstract:
BACKGROUND: The ambition of most molecular biologists is the understanding of the intricate network of molecular interactions that control biological systems. As scientists uncover the components and the connectivity of these networks, it becomes possible to study their dynamical behavior as a whole and discover the specific role of each of their components. Since the behavior of a network is by no means intuitive, it becomes necessary to use computational models to understand its behavior and to be able to make predictions about it. Unfortunately, most current computational models describe small networks due to the scarcity of kinetic data available. To overcome this problem, we previously published a methodology to convert a signaling network into a dynamical system, even in the total absence of kinetic information. In this paper we present a software implementation of that methodology. RESULTS: We developed SQUAD, a software tool for the dynamic simulation of signaling networks using the standardized qualitative dynamical systems approach. SQUAD converts the network into a discrete dynamical system and uses a binary decision diagram algorithm to identify all the steady states of the system. The software then creates a continuous dynamical system and localizes its steady states, which lie near the steady states of the discrete system. The software permits simulations of the continuous system, allowing for the modification of several parameters. Importantly, SQUAD includes a framework for perturbing networks in a manner similar to what is performed in experimental laboratory protocols, for example by activating receptors or knocking out molecular components. Using this software we have been able to successfully reproduce the behavior of the regulatory network implicated in T-helper cell differentiation. CONCLUSION: The simulation of regulatory networks aims at predicting the behavior of a whole system when subjected to stimuli, such as drugs, or at determining the role of specific components within the network. The predictions can then be used to interpret and/or drive laboratory experiments. SQUAD provides a user-friendly graphical interface, accessible to both computational and experimental biologists, for the fast qualitative simulation of large regulatory networks for which kinetic data are not necessarily available.
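The following is a minimal sketch of the general idea behind turning a Boolean wiring diagram into a continuous dynamical system on [0, 1]; it is a simplified illustration, not SQUAD's exact equations or its binary decision diagram steady-state search.

```python
# Hedged sketch of the general idea behind a standardized qualitative dynamical
# system: a Boolean-style wiring diagram is recast as a continuous ODE on [0, 1]
# per node, using a sigmoid of a normalized input plus first-order decay. This is
# a simplified illustration of the approach, not SQUAD's exact equations or its
# binary-decision-diagram steady-state search.
import math

# Toy wiring: a two-node toggle switch, A -| B and B -| A (mutual inhibition)
activators = {"A": [], "B": []}
inhibitors = {"A": ["B"], "B": ["A"]}

def omega(node, x):
    """Normalized input in [0, 1]: activation attenuated by inhibition."""
    act = (sum(x[a] for a in activators[node]) / len(activators[node])
           if activators[node] else 1.0)
    inh = (sum(x[i] for i in inhibitors[node]) / len(inhibitors[node])
           if inhibitors[node] else 0.0)
    return act * (1.0 - inh)

def step(x, h=10.0, gamma=1.0, dt=0.05):
    """One explicit Euler step of dx/dt = sigmoid(h * (omega - 0.5)) - gamma * x."""
    return {node: val + dt * (1.0 / (1.0 + math.exp(-h * (omega(node, x) - 0.5)))
                              - gamma * val)
            for node, val in x.items()}

x = {"A": 0.9, "B": 0.1}          # perturbation analogous to "activating" node A
for _ in range(400):
    x = step(x)
print({k: round(v, 3) for k, v in x.items()})
# Converges near the discrete fixed point A=1, B=0; starting with B high instead
# selects the mirror-image steady state, as expected for a bistable toggle.
```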
Abstract:
PSIP1 (PC4 and SFRS1 interacting protein 1) encodes two splice variants: lens epithelium-derived growth factor or p75 (LEDGF/p75) and p52. PSIP1 gene products were shown to be involved in transcriptional regulation, affecting a plethora of cellular processes, including cell proliferation, cell survival, and stress response. Furthermore, LEDGF/p75 has implications for various diseases and infections, including autoimmunity, leukemia, embryo development, psoriasis, and human immunodeficiency virus integration. Here, we reported the first characterization of the PSIP1 promoter. Using 5' RNA ligase-mediated rapid amplification of cDNA ends, we identified novel transcription start sites in different cell types. Using a luciferase reporter system, we identified regulatory elements controlling the expression of LEDGF/p75 and p52. These include (i) minimal promoters (-112/+59 and +609/+781) that drive the basal expression of LEDGF/p75 and of the shorter splice variant p52, respectively; (ii) a sequence (+319/+397) that may control the ratio of LEDGF/p75 expression to p52 expression; and (iii) a strong enhancer (-320/-207) implicated in the modulation of LEDGF/p75 transcriptional activity. Computational, biochemical, and genetic approaches enabled us to identify the transcription factor Sp1 as a key modulator of the PSIP1 promoter, controlling LEDGF/p75 transcription through two binding sites at -72/-64 and -46/-36. Overall, our results provide initial data concerning LEDGF/p75 promoter regulation, giving new insights to further understand its biological function and opening the door for new therapeutic strategies in which LEDGF/p75 is involved.
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
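A minimal sketch of the alternating ("ping-pong") projection idea between two matrices sharing a sample dimension is given below; the normalization, threshold, and stopping rule are arbitrary illustrative choices and not the published algorithm's exact update.

```python
# Hedged sketch of the alternating ("ping-pong") idea for identifying co-modules
# in two data sets that share a sample dimension (e.g. gene expression and drug
# response measured on the same cell lines). The normalization, threshold and
# stopping rule are arbitrary illustrative choices, not the published Ping-Pong
# Algorithm's exact update.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_drugs, n_lines = 200, 50, 60
E = rng.normal(size=(n_genes, n_lines))   # gene expression (genes x cell lines)
D = rng.normal(size=(n_drugs, n_lines))   # drug response   (drugs x cell lines)

# Plant a co-module: genes 0-19 and drugs 0-9 co-vary over cell lines 0-14
signal = np.zeros(n_lines)
signal[:15] = 2.0
E[:20] += signal
D[:10] += signal

def threshold(v, z=1.5):
    """Standardize and keep entries more than z standard deviations above the mean."""
    s = (v - v.mean()) / v.std()
    return np.where(s > z, s, 0.0)

g = np.zeros(n_genes)
g[:5] = 1.0                               # seed with a few genes of interest
for _ in range(50):                       # alternate ("ping-pong") between data sets
    c = threshold(E.T @ g)                # cell-line scores implied by current genes
    d = threshold(D @ c)                  # drugs responding on those cell lines
    c = threshold(D.T @ d)                # cell lines implied by those drugs
    g_new = threshold(E @ c)              # genes co-expressed on those cell lines
    if np.allclose(g_new, g):
        break
    g = g_new

print("genes in co-module:", np.flatnonzero(g))
print("drugs in co-module:", np.flatnonzero(d))
```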
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. 
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
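As an illustration of the gradual-deformation idea mentioned above, here is a minimal gradual-deformation (preconditioned Crank-Nicolson-type) proposal inside a Metropolis sampler for a Gaussian random field; the forward model, likelihood, and tuning are toy assumptions and do not reproduce the thesis' actual scheme.

```python
# Hedged sketch of a gradual-deformation proposal inside a Metropolis sampler for
# a Gaussian random field. It illustrates the generic move m' = m*cos(t) + u*sin(t),
# which preserves the Gaussian prior, so the prior cancels in the acceptance ratio.
# The forward model, likelihood and tuning are toy assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)     # exponential covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

def prior_draw():
    return L @ rng.normal(size=n)                        # zero-mean field ~ N(0, C)

m_true = prior_draw()
obs_idx = np.arange(0, n, 10)                            # sparse "borehole" observations
sigma = 0.1
d_obs = m_true[obs_idx] + sigma * rng.normal(size=obs_idx.size)

def log_lik(m):
    r = m[obs_idx] - d_obs
    return -0.5 * np.sum(r**2) / sigma**2

m = prior_draw()
ll = log_lik(m)
theta, accepted = 0.2, 0                                 # perturbation strength (radians)
for it in range(5000):
    u = prior_draw()                                     # independent prior realization
    m_prop = m * np.cos(theta) + u * np.sin(theta)       # gradual deformation move
    ll_prop = log_lik(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll:             # prior cancels in the ratio
        m, ll = m_prop, ll_prop
        accepted += 1

print(f"acceptance rate: {accepted / 5000:.2f}")
print(f"rmse at observation points: {np.sqrt(np.mean((m[obs_idx] - d_obs)**2)):.3f}")
```

Smaller values of theta give gentler perturbations and higher acceptance but stronger correlation between successive samples; this trade-off is what the perturbation-strength flexibility described above addresses.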
Abstract:
In this work we present numerical simulations of continuous-flow left ventricular assist device implantation with the aim of comparing differences in flow rates and pressure patterns depending on the location of the anastomosis and the rotational speed of the device. Despite the fact that the descending aorta anastomosis approach is less invasive, since it does not require a sternotomy and a cardiopulmonary bypass, its benefits are still controversial. Moreover, the device rotational speed should be correctly chosen to avoid anomalous flow rates and pressure distributions in specific locations of the cardiovascular tree. With the aim of assessing the differences between these two approaches and device rotational speeds in terms of flow rate and pressure waveforms, we set up numerical simulations on a network of one-dimensional models in which we account for the presence of an outflow cannula anastomosed to different locations of the aorta. Then, we use the resulting network to compare the results of the two different cannulations for several stages of heart failure and different rotational speeds of the device. The inflow boundary data for the heart and the cannulas are obtained from a lumped parameter model of the entire circulatory system with an assist device, which is validated with clinical data. The results show that ascending and descending aorta cannulations lead to similar waveforms and mean flow rates in all the considered cases. Moreover, regardless of the anastomosis region, the rotational speed of the device has an important impact on wave profiles; this effect is more pronounced at high RPM.
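A toy lumped-parameter sketch of the kind of reasoning involved, with a three-element Windkessel afterload driven by a residual pulsatile ejection plus an assumed speed-dependent continuous pump flow, is shown below; all parameter values and the pump-flow law are illustrative assumptions, not the authors' validated closed-loop model or their one-dimensional network.

```python
# Hedged toy model: a 3-element Windkessel afterload driven by the sum of a
# residual pulsatile ventricular outflow and a (speed-dependent) continuous pump
# flow. It only illustrates the kind of lumped-parameter reasoning referred to in
# the abstract; parameter values and the pump-flow law are illustrative
# assumptions, not the authors' validated closed-loop model.
import numpy as np

R_prox, R_dist, C = 0.05, 1.0, 1.5       # mmHg*s/ml, mmHg*s/ml, ml/mmHg (typical orders)
T, dt = 0.8, 1e-3                        # cardiac period [s], time step [s]
t = np.arange(0, 10 * T, dt)

def ventricular_flow(t, q_peak=60.0):
    """Residual pulsatile ejection: half-sine during the first 0.3 s of each beat."""
    phase = t % T
    return np.where(phase < 0.3, q_peak * np.sin(np.pi * phase / 0.3), 0.0)

def pump_flow(rpm):
    """Crude assumption: mean pump flow grows roughly linearly with speed (ml/s)."""
    return 0.01 * (rpm - 6000.0) + 50.0

def simulate(rpm):
    q_in = ventricular_flow(t) + pump_flow(rpm)
    p_c = np.zeros_like(t)               # pressure on the compliance (distal) node
    p_c[0] = 70.0
    for i in range(len(t) - 1):          # explicit Euler: C dp/dt = q_in - p/R_dist
        p_c[i + 1] = p_c[i] + dt * (q_in[i] - p_c[i] / R_dist) / C
    p_art = p_c + R_prox * q_in          # arterial pressure includes proximal drop
    last = t > 8 * T                     # discard the initial transient
    return p_art[last].mean(), p_art[last].max() - p_art[last].min()

for rpm in (7000, 9000, 11000):
    mean_p, pulse_p = simulate(rpm)
    print(f"{rpm} RPM: mean pressure ~{mean_p:.0f} mmHg, pulse pressure ~{pulse_p:.0f} mmHg")
```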
Abstract:
The identification of genetically homogeneous groups of individuals is a long standing issue in population genetics. A recent Bayesian algorithm implemented in the software STRUCTURE allows the identification of such groups. However, the ability of this algorithm to detect the true number of clusters (K) in a sample of individuals when patterns of dispersal among populations are not homogeneous has not been tested. The goal of this study is to carry out such tests, using various dispersal scenarios from data generated with an individual-based model. We found that in most cases the estimated 'log probability of data' does not provide a correct estimation of the number of clusters, K. However, using an ad hoc statistic DeltaK based on the rate of change in the log probability of data between successive K values, we found that STRUCTURE accurately detects the uppermost hierarchical level of structure for the scenarios we tested. As might be expected, the results are sensitive to the type of genetic marker used (AFLP vs. microsatellite), the number of loci scored, the number of populations sampled, and the number of individuals typed in each sample.
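The DeltaK idea described here can be computed directly from the estimated log probabilities of replicate runs; the sketch below uses hypothetical example values, not data from the study.

```python
# Hedged sketch of the DeltaK idea: the second-order rate of change of the
# estimated log probability of the data between successive K values, scaled by
# the variability across replicate runs. The lnP values below are hypothetical
# example output, not results from the study.
import numpy as np

# Hypothetical STRUCTURE-style output: rows = K values (K = 1..6),
# columns = replicate runs, entries = estimated ln P(X | K)
lnP = np.array([
    [-5230, -5231, -5229],
    [-4875, -4880, -4872],
    [-4610, -4615, -4605],
    [-4600, -4598, -4603],
    [-4595, -4597, -4592],
    [-4593, -4590, -4596],
], dtype=float)
K = np.arange(1, lnP.shape[0] + 1)

sd_L = lnP.std(axis=1, ddof=1)

# Per-run second differences |L(K+1) - 2 L(K) + L(K-1)|, averaged over runs and
# scaled by the standard deviation of L(K); defined for interior K only
delta_k = np.abs(lnP[2:] - 2 * lnP[1:-1] + lnP[:-2]).mean(axis=1) / sd_L[1:-1]

for k, dk in zip(K[1:-1], delta_k):
    print(f"K = {k}: DeltaK = {dk:.1f}")
print("suggested number of clusters:", K[1:-1][np.argmax(delta_k)])
```

With these example values, DeltaK peaks at K = 3, i.e. at the point where the log probability of the data stops increasing sharply, which is how the uppermost level of structure is read off.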
Abstract:
Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inference about population parameters. When the missing-data process is related to the trait of interest, valid inference requires explicit modeling of that process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing-data process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls Tyto alba, our model indicates that the missing individuals would display large black spots; we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
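A generic shared-parameter formulation of the kind described, written in our own notation rather than the paper's exact parameterization, is:

```latex
% Hedged sketch of a generic shared-parameter model (notation ours)
\begin{align}
  y_i &= \mu + a_i + e_i, \qquad \mathbf{a} \sim \mathcal{N}(\mathbf{0}, \sigma_a^{2}\mathbf{A}), \qquad e_i \sim \mathcal{N}(0, \sigma_e^{2}),\\
  \operatorname{logit} \Pr(y_i \text{ observed}) &= \gamma_0 + \lambda\, a_i,
\end{align}
```

where $\mathbf{A}$ is the additive genetic relationship matrix and the shared breeding values $a_i$ link the trait model to the missingness model; $\lambda \neq 0$ makes the data missing not at random, and ignoring the second equation then biases the estimate of $\sigma_a^{2}$.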
Abstract:
This study proposes a theoretical model describing the electrostatically driven step of the alpha 1 b-adrenergic receptor (AR)-G protein recognition. The comparative analysis of the structural-dynamics features of functionally different receptor forms, i.e., the wild type (ground state) and its constitutively active mutants D142A and A293E, was instrumental to gain insight on the receptor-G protein electrostatic and steric complementarity. Rigid body docking simulations between the different forms of the alpha 1 b-AR and the heterotrimeric G alpha q, G alpha s, G alpha i1, and G alpha t suggest that the cytosolic crevice shared by the active receptor and including the second and the third intracellular loops as well as the cytosolic extension of helices 5 and 6, represents the receptor surface with docking complementarity with the G protein. On the other hand, the G protein solvent-exposed portions that recognize the intracellular loops of the activated receptors are the N-terminal portion of alpha 3, alpha G, the alpha G/alpha 4 loop, alpha 4, the alpha 4/beta 6 loop, alpha 5, and the C-terminus. Docking simulations suggest that the two constitutively active mutants D142A and A293E recognize different G proteins with similar selectivity orders, i.e., G alpha q approximately equal to G alpha s > G alpha i > G alpha t. The theoretical models herein proposed might provide useful suggestions for new experiments aiming at exploring the receptor-G protein interface.
Abstract:
The hypothalamic damage induced by neonatal treatment with monosodium l-glutamate (MSG) induces several metabolic abnormalities, resulting in a rat hyperleptinemic-hyperadipose phenotype. This study was conducted to explore the impact of the neonatal MSG treatment, in the adult (120 days old) female rat on: (a) the in vivo and in vitro mineralocorticoid responses to ACTH and angiotensin II (AII); (b) the effect of leptin on ACTH- and AII-stimulated mineralocorticoid secretions by isolated corticoadrenal cells; and (c) abdominal adiposity characteristics. Our data indicate that, compared with age-matched controls, MSG rats displayed: (1) enhanced and reduced mineralocorticoid responses to ACTH and AII treatments, respectively, effects observed in both in vivo and in vitro conditions; (2) adrenal refractoriness to the inhibitory effect of exogenous leptin on ACTH-stimulated aldosterone output by isolated adrenocortical cells; and (3) distorted omental adiposity morphology and function. This study supports that the adult hyperleptinemic MSG female rat is characterized by enhanced ACTH-driven mineralocorticoid function, impaired adrenal leptin sensitivity, and disrupted abdominal adiposity function. MSG rats could counteract undesirable effects of glucocorticoid excess, by developing a reduced AII-driven mineralocorticoid function. Thus, chronic hyperleptinemia could play a protective role against ACTH-mediated allostatic loads in the adrenal leptin resistant, MSG female rat phenotype.
Abstract:
The integration of geophysical data into the subsurface characterization problem has been shown in many cases to significantly improve hydrological knowledge by providing information at spatial scales and locations that is unattainable using conventional hydrological measurement techniques. The investigation of exactly how much benefit can be brought by geophysical data in terms of its effect on hydrological predictions, however, has received considerably less attention in the literature. Here, we examine the potential hydrological benefits brought by a recently introduced simulated annealing (SA) conditional stochastic simulation method designed for the assimilation of diverse hydrogeophysical data sets. We consider the specific case of integrating crosshole ground-penetrating radar (GPR) and borehole porosity log data to characterize the porosity distribution in saturated heterogeneous aquifers. In many cases, porosity is linked to hydraulic conductivity and thus to flow and transport behavior. To perform our evaluation, we first generate a number of synthetic porosity fields exhibiting varying degrees of spatial continuity and structural complexity. Next, we simulate the collection of crosshole GPR data between several boreholes in these fields, and the collection of porosity log data at the borehole locations. The inverted GPR data, together with the porosity logs, are then used to reconstruct the porosity field using the SA-based method, along with a number of other more elementary approaches. Assuming that the grid-cell-scale relationship between porosity and hydraulic conductivity is unique and known, the porosity realizations are then used in groundwater flow and contaminant transport simulations to assess the benefits and limitations of the different approaches.
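A minimal one-dimensional sketch of simulated-annealing conditional simulation, combining a target variogram with a soft low-resolution constraint while honoring "borehole" values, is given below; the objective weights, cooling schedule, and variogram model are illustrative assumptions, not the published method's settings.

```python
# Hedged sketch of simulated-annealing conditional simulation on a 1-D profile:
# a porosity realization is perturbed by swapping values at unconditioned cells
# and accepted according to an annealing schedule, driving an objective that
# combines (i) a target variogram and (ii) a soft, low-resolution "geophysical"
# constraint (block averages). Weights, schedule and variogram are illustrative
# assumptions, not the published SA method's settings.
import numpy as np

rng = np.random.default_rng(3)
n, n_lags = 200, 15
target_gamma = 0.01 * (1 - np.exp(-np.arange(1, n_lags + 1) / 10.0))  # exponential model

# "Hard" data: porosity logs at a few borehole cells; "soft" data: 20-cell block means
hard_idx = np.array([10, 60, 120, 180])
hard_val = np.array([0.28, 0.33, 0.25, 0.31])
block = 20
soft_mean = 0.30 + 0.02 * np.sin(np.arange(n // block))               # pseudo-GPR trend

def variogram(m):
    return np.array([0.5 * np.mean((m[h:] - m[:-h]) ** 2) for h in range(1, n_lags + 1)])

def objective(m):
    j_vario = np.sum((variogram(m) - target_gamma) ** 2)
    j_soft = np.sum((m.reshape(-1, block).mean(axis=1) - soft_mean) ** 2)
    return j_vario + 0.1 * j_soft

# Initial realization: random porosity values, with the hard data imposed
m = rng.normal(0.30, 0.1, n).clip(0.05, 0.45)
m[hard_idx] = hard_val
free = np.setdiff1d(np.arange(n), hard_idx)

J, T = objective(m), 1e-3
for it in range(20000):
    i, j = rng.choice(free, size=2, replace=False)
    m[i], m[j] = m[j], m[i]                      # swap keeps histogram and hard data
    J_new = objective(m)
    if J_new < J or rng.uniform() < np.exp((J - J_new) / T):
        J = J_new                                # accept
    else:
        m[i], m[j] = m[j], m[i]                  # reject: undo the swap
    T *= 0.9997                                  # geometric cooling

print(f"final objective: {J:.6f}")
print("block means:", np.round(m.reshape(-1, block).mean(axis=1), 3))
print("soft data:  ", np.round(soft_mean, 3))
```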