960 results for Spatial models
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie’s direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data, with spatial and temporal resolution, acquired from free online sources.
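The simulation engine above builds on Gillespie's direct method. As a minimal, self-contained sketch of that algorithm, applied to a generic birth-death toy system rather than the MS-BioNET engine or its optimised many-species/many-channels variant, one might write:

```python
import random

def gillespie_direct(x, reactions, t_end, rng=random.Random(42)):
    """Gillespie's direct method: exact stochastic simulation of a
    well-mixed reaction system.  `x` maps species names to counts;
    `reactions` is a list of (propensity_fn, state_update_fn) pairs."""
    t, trajectory = 0.0, [(0.0, dict(x))]
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                        # no reaction can fire
            break
        t += rng.expovariate(a0)             # waiting time to next event
        r, cum = rng.random() * a0, 0.0      # choose which reaction fires
        for (_, update), a in zip(reactions, props):
            cum += a
            if r < cum:
                update(x)
                break
        trajectory.append((t, dict(x)))
    return trajectory

# Hypothetical birth-death system: 0 -> A at rate 2.0, A -> 0 at rate 0.1*A
state = {"A": 10}
reactions = [
    (lambda s: 2.0,           lambda s: s.__setitem__("A", s["A"] + 1)),
    (lambda s: 0.1 * s["A"],  lambda s: s.__setitem__("A", s["A"] - 1)),
]
traj = gillespie_direct(state, reactions, t_end=50.0)
```

The same two-step structure (sample an exponential waiting time from the total propensity, then pick a channel proportionally to its propensity) underlies optimised many-channel variants, which differ mainly in how the propensity sums are maintained.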
Abstract:
Urban systems consist of several interlinked sub-systems - social, economic, institutional and environmental - each representing a complex system of its own and affecting all the others at various structural and functional levels. An urban system is represented by a number of “human” agents, such as individuals and households, and “non-human” agents, such as buildings, establishments, transport, vehicles and infrastructure. These two categories of agents interact with one another and simultaneously affect the system they interact with. Understanding the types of interaction and their spatial and temporal localisation, so as to allow very detailed simulation through models, is a considerable challenge and is the topic this research deals with. An analysis of urban system complexity is presented here, together with a state-of-the-art review of the field of urban models. Finally, six international models - MATSim, MobiSim, ANTONIN, TRANSIMS, UrbanSim, ILUTE - are illustrated and then compared.
Abstract:
Alzheimer’s disease (AD), the most prevalent form of age-related dementia, is a multifactorial and heterogeneous neurodegenerative disease. The molecular mechanisms underlying the pathogenesis of AD are still largely unknown. However, the etiopathogenesis of AD likely resides in the interaction between genetic and environmental risk factors. Among the different factors that contribute to the pathogenesis of AD, amyloid-beta peptides and the genetic risk factor apoE4 are prominent on the basis of genetic evidence and experimental data. ApoE4 transgenic mice have deficits in spatial learning and memory associated with inflammation and brain atrophy. Evidence suggests that apoE4 is implicated in amyloid-beta accumulation, in the imbalance of the cellular antioxidant system and in apoptotic phenomena. The mechanisms by which apoE4 interacts with other AD risk factors, leading to an increased susceptibility to dementia, are still unknown. The aim of this research was to provide new insights into the molecular mechanisms of AD neurodegeneration by investigating the effect of amyloid-beta peptides and the apoE4 genotype on the modulation of genes and proteins differently involved in cellular processes related to aging and oxidative balance, such as PIN1, SIRT1, PSEN1, BDNF, TRX1 and GRX1. In particular, we used human neuroblastoma cells exposed to amyloid-beta or to apoE3 and apoE4 proteins at different time points, and selected brain regions of human apoE3 and apoE4 targeted replacement mice, as in vitro and in vivo models, respectively. All the genes and proteins studied in the present investigation are modulated by amyloid-beta and apoE4 in different ways, suggesting their involvement in the neurodegenerative mechanisms underlying AD. Finally, these proteins might represent novel potential diagnostic and therapeutic targets in AD.
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as have been found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are in qualitative agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, it is demonstrated that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D, through a comparison between the experimental partial structure factors and those from computer simulations. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e. 
a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. It is demonstrated that the predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and by the right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar in the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for rainfall positive amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions is investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). 
Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
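The two-part semicontinuous structure described above (probit link for rain occurrence, Gamma distribution for positive amounts) can be illustrated by drawing from such a model at a single site. In the sketch below the coefficients, the Gamma shape, and the linear forms are purely illustrative assumptions, not the fitted model of this work:

```python
import math
import random

def sample_rainfall(log_radar, alpha, beta, shape, rng):
    """One draw from a two-part semicontinuous rainfall model: a probit
    regression decides whether it rains; given rain, the positive amount
    is Gamma-distributed with a log-mean linear in the radar covariate."""
    # Part 1: occurrence, P(rain) = Phi(alpha0 + alpha1 * log_radar)
    eta = alpha[0] + alpha[1] * log_radar
    p_rain = 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))  # std normal CDF
    if rng.random() >= p_rain:
        return 0.0
    # Part 2: positive amount, Gamma with mean exp(beta0 + beta1 * log_radar)
    mean = math.exp(beta[0] + beta[1] * log_radar)
    scale = mean / shape              # Gamma mean = shape * scale
    return rng.gammavariate(shape, scale)

rng = random.Random(0)
draws = [sample_rainfall(1.2, (-0.3, 0.8), (0.1, 0.5), 2.0, rng)
         for _ in range(1000)]
zero_frac = sum(d == 0.0 for d in draws) / len(draws)
```

The resulting sample mixes a point mass at zero with right-skewed positive amounts, which is exactly the kind of distribution the non-randomized PIT histogram mentioned above is designed to assess.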
Abstract:
A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow the different sources generating the observed displacements to be discerned and characterised. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. However, PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Independent Component Analysis (ICA) is a popular technique adopted to approach this problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, I use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here I present the application of the vbICA technique to GPS position time series. First, I use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise) and a volcanic source, and I study the ability of the algorithm to recover the original (known) sources of deformation. Secondly, I apply vbICA to different tectonically active scenarios, such as the 2009 L'Aquila (central Italy) earthquake, the 2012 Emilia (northern Italy) seismic sequence, and the 2006 Guerrero (Mexico) Slow Slip Event (SSE).
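The limitation noted above, that PCA decorrelates the data but does not separate the sources, can be seen in a toy two-variable example. The sketch below (plain Python with a closed-form 2x2 eigendecomposition; the sources and mixing coefficients are illustrative and unrelated to the vbICA implementation) projects two linearly mixed non-Gaussian signals onto their principal axes:

```python
import math
import random

def pca_2d(xs, ys):
    """PCA for two variables: centre the data, build the 2x2 covariance
    matrix, and rotate onto its eigenvectors.  The projected components
    are uncorrelated by construction."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    xs = [x - mx for x in xs]
    ys = [y - my for y in ys]
    a = sum(x * x for x in xs) / n
    b = sum(x * y for x, y in zip(xs, ys)) / n
    c = sum(y * y for y in ys) / n
    # Largest eigenvalue of [[a, b], [b, c]] and its eigenvector
    lam1 = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4 * b * b))
    v = (b, lam1 - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(*v)
    u1 = (v[0] / norm, v[1] / norm)      # first principal axis
    u2 = (-u1[1], u1[0])                 # orthogonal second axis
    pc1 = [x * u1[0] + y * u1[1] for x, y in zip(xs, ys)]
    pc2 = [x * u2[0] + y * u2[1] for x, y in zip(xs, ys)]
    return pc1, pc2

# Two illustrative non-Gaussian sources, mixed linearly (a toy BSS setting)
rng = random.Random(1)
s1 = [rng.uniform(-1, 1) for _ in range(2000)]      # uniform source
s2 = [rng.expovariate(1.0) for _ in range(2000)]    # skewed source
obs_x = [0.8 * a + 0.6 * b for a, b in zip(s1, s2)]
obs_y = [0.3 * a + 0.9 * b for a, b in zip(s1, s2)]
pc1, pc2 = pca_2d(obs_x, obs_y)
corr = sum(p * q for p, q in zip(pc1, pc2))  # ~0: decorrelated, yet pc1/pc2
# are still rotations of the mixtures, not the original sources s1, s2.
```

The components come out uncorrelated, but each is still a linear combination of both sources; recovering s1 and s2 themselves requires an independence criterion, which is what ICA-type methods such as vbICA impose.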
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. (Q)SAR model validation is therefore essential to ensure the future predictivity of a model on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving a model's use in real-world scenarios as an alternative testing method. However, at the same time, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data, and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge. 
Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically, and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
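For reference, the k-fold cross-validation scheme compared above can be sketched generically: each fold is held out once for testing while the remaining folds train the model, and the k scores are averaged. The "model" below is a toy majority-vote classifier; this is an illustration of the scheme, not the workflow or software used in this work:

```python
import random

def k_fold_indices(n, k, rng):
    """Shuffle indices 0..n-1 and split them into k near-equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(data, labels, train_fn, score_fn, k=5, seed=0):
    """Generic k-fold cross-validation: average the score over k
    train/test splits, each fold serving once as the test set."""
    folds = k_fold_indices(len(data), k, random.Random(seed))
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for m, f in enumerate(folds) if m != i for j in f]
        model = train_fn([data[j] for j in train_idx],
                         [labels[j] for j in train_idx])
        scores.append(score_fn(model,
                               [data[j] for j in test_idx],
                               [labels[j] for j in test_idx]))
    return sum(scores) / k

# Toy classifier: always predict the majority training label
def train_majority(xs, ys):
    return max(set(ys), key=ys.count)

def accuracy(model, xs, ys):
    return sum(y == model for y in ys) / len(ys)

data = list(range(100))
labels = [0] * 70 + [1] * 30
mean_acc = cross_validate(data, labels, train_majority, accuracy, k=5)
```

External test set validation corresponds to a single fixed split instead of the loop, which is why it yields only one score and hence a higher-variance estimate.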
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work, two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), while the other is the explicit algebraic SGS model (EASSM), by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (Chevalier et al., 2007) are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow, with friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, at relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. Results have shown that, in contrast to the DEVM, the EASSM has promising potential for flow predictions at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has a clear potential for industrial CFD usage.
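For context, the eddy-viscosity closure underlying dynamic models such as the DEVM relates the deviatoric SGS stress to the resolved strain rate; in standard Smagorinsky-type notation (not reproduced from this thesis) it reads:

```latex
\tau_{ij} - \frac{\delta_{ij}}{3}\,\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t = (C_s \Delta)^2\,\lvert\bar{S}\rvert,
\qquad
\lvert\bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\,\bar{S}_{ij}},
```

where $\bar{S}_{ij}$ is the resolved strain-rate tensor and $\Delta$ the filter width. The dynamic procedure determines the coefficient $C_s$ from the resolved field via the Germano identity rather than fixing it a priori, while explicit algebraic models such as the EASSM go beyond this scalar-eddy-viscosity form to represent anisotropy in the SGS stresses.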
Abstract:
Road traffic accidents (RTA) are an important cause of premature death. We examined socio-demographic and geographical determinants of RTA mortality in Switzerland by linking 2000 census data to RTA mortality records for 2000-2005 (ICD-10 codes V00-V99). Data from 5.5 million residents aged 18-94 years, 1744 study areas, and 1620 RTA deaths were analyzed, including 978 deaths (60.4%) in motor vehicle occupants, 254 (15.7%) in motorcyclists, 107 (6.6%) in cyclists, and 259 (16.0%) in pedestrians. Weibull survival models and Bayesian methods were used to calculate hazard ratios (HR) and standardized mortality ratios (SMR) across study areas. Adjusted HR comparing women with men ranged from 0.04 (95% CI 0.02-0.07) in motorcyclists to 0.43 (95% CI 0.32-0.56) in pedestrians. There was a U-shaped relationship with age in motor vehicle occupants and motorcyclists. In cyclists and pedestrians, mortality increased after age 55 years. Mortality was higher in individuals with primary education (HR 1.53; 95% CI 1.29-1.81), and higher in single (HR 1.24; 95% CI 1.05-1.46), widowed (HR 1.31; 95% CI 1.05-1.65) and divorced individuals (HR 1.62; 95% CI 1.33-1.97), compared to persons with tertiary education or married persons, respectively. The association with education was particularly strong for pedestrians (HR 1.87; 95% CI 1.20-2.91). RTA mortality increased with decreasing population density of study areas for motor vehicle occupants (test for trend p<0.0001) and motorcyclists (p=0.0021) but not for cyclists (p=0.39) or pedestrians (p=0.29). SMRs standardized for socio-demographic and geographical variables ranged from 82 to 190. Prevention efforts should aim to reduce inequities across socio-demographic and educational groups, and across geographical areas, with interventions targeted at high-risk groups and areas, and at different traffic users, including pedestrians.
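A standardized mortality ratio of the kind reported above compares observed deaths in an area with the deaths expected if stratum-specific reference rates applied to that area's population. A minimal sketch of this indirect standardization follows; the strata, rates and counts are illustrative numbers, not the Swiss study data:

```python
def smr(observed_deaths, person_years_by_stratum, reference_rates):
    """Indirectly standardized mortality ratio: observed deaths divided
    by the deaths expected under stratum-specific reference rates,
    conventionally scaled by 100 (100 = no excess mortality)."""
    expected = sum(person_years_by_stratum[s] * reference_rates[s]
                   for s in person_years_by_stratum)
    return 100.0 * observed_deaths / expected

# Illustrative numbers: two age strata in a hypothetical study area
py = {"18-54": 40000.0, "55-94": 10000.0}      # person-years at risk
ref = {"18-54": 1.0e-4, "55-94": 3.0e-4}       # reference RTA death rates
value = smr(12, py, ref)    # expected = 4 + 3 = 7 deaths
```

An SMR above 100 (as in the areas reaching 190 above) indicates more deaths than expected from the reference rates; below 100 (as in the areas at 82), fewer.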
Abstract:
Farm animals may serve as models for evaluating social networks in a controlled environment. We used an automated system to track, at fine temporal and spatial resolution (once per minute, +/- 50 cm), every individual in six herds of dairy cows (Bos taurus). We then analysed the data using social network analyses. Relationships were based on non-random attachment and avoidance relationships with respect to synchronous use of, and distances observed in, three different functional areas (activity, feeding and lying). We found that neither synchrony nor distance between cows was strongly predictable among the three functional areas. The emerging social networks were tightly knit for attachment relationships and less dense for avoidance relationships. These networks loosened up from the feeding and lying areas to the activity area, and were less dense for relationships based on synchronicity than for those based on median distance, with respect to node degree, relative size of the largest cluster, density and diameter of the network. In addition, synchronicity was higher in dyads of dairy cows that had grown up together and shared their last dry period. This effect disappeared with increasing herd size. Dairy herds can be characterized by one strongly clustered network including most of the herd members, with many non-random attachment and avoidance relationships. Closely synchronous dyads were composed of cows with more intense previous contact. The automatic tracking of a large number of individuals proved promising for acquiring the data necessary to tackle social network analyses.
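The network measures compared above (node degree, density, relative size of the largest cluster, and diameter) can all be computed from an undirected edge list. A generic sketch, not the software used in the study, is:

```python
from collections import deque

def network_stats(n_nodes, edges):
    """Degree, density, largest connected component, and the diameter of
    that component for an undirected graph given as an edge list."""
    adj = {v: set() for v in range(n_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(adj[v]) for v in adj}
    # Density: realised edges over the n*(n-1)/2 possible edges
    density = 2 * len(edges) / (n_nodes * (n_nodes - 1))
    # Largest connected component via breadth-first search
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u] - comp:
                comp.add(w)
                queue.append(w)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    # Diameter of the largest component: max BFS eccentricity
    def ecc(src):
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return max(dist.values())
    diameter = max(ecc(v) for v in best)
    return degree, density, best, diameter

# Toy herd of 6 "cows" with 4 attachment edges
deg, dens, comp, diam = network_stats(6, [(0, 1), (1, 2), (2, 3), (4, 5)])
```

In the study's terms, a tightly knit attachment network corresponds to high density and a large dominant cluster, while a loose avoidance network has lower density and a smaller largest component.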
Abstract:
An automated algorithm for detection of the acetabular rim was developed. Accuracy of the algorithm was validated in a sawbone study and compared against manually conducted digitization attempts, which were established as the ground truth. The latter proved to be reliable and reproducible, demonstrated by almost perfect intra- and interobserver reliability. Validation of the automated algorithm showed no significant difference compared to the manually acquired data in terms of detected version and inclination. Automated detection of the acetabular rim contour and the spatial orientation of the acetabular opening plane can be accurately achieved with this algorithm.
Abstract:
Increasingly, regression models are used when residuals are spatially correlated. Prominent examples include studies in environmental epidemiology to understand the chronic health effects of pollutants. I consider the effects of residual spatial structure on the bias and precision of regression coefficients, developing a simple framework in which to understand the key issues and derive informative analytic results. When the spatial residual is induced by an unmeasured confounder, regression models with spatial random effects and closely related models such as kriging and penalized splines are biased, even when the residual variance components are known. Analytic and simulation results show how the bias depends on the spatial scales of the covariate and the residual; bias is reduced only when there is variation in the covariate at a scale smaller than the scale of the unmeasured confounding. I also discuss how the scales of the residual and the covariate affect efficiency and uncertainty estimation when the residuals can be considered independent of the covariate. In an application to the association between black carbon particulate matter air pollution and birth weight, controlling for large-scale spatial variation appears to reduce bias from unmeasured confounders, while increasing uncertainty in the estimated pollution effect.
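The scale argument above can be illustrated with a small one-dimensional simulation: a least-squares fit that omits a smooth confounder is badly biased when the covariate varies at the same large scale, and nearly unbiased when the covariate varies at a much finer scale. All functional forms and coefficients below are illustrative assumptions, not the paper's analytic setup:

```python
import math
import random

def ols_slope(x, y):
    """Simple least-squares slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(3)
n, beta = 500, 1.0
s = [i / n for i in range(n)]                       # spatial locations
u = [math.sin(2 * math.pi * t) for t in s]          # smooth unmeasured confounder

# Covariate at the SAME large scale as the confounder (correlated with it)
x_large = [math.sin(2 * math.pi * t + 0.3) for t in s]
# Covariate at a much FINER scale (nearly orthogonal to the confounder)
x_fine = [math.sin(2 * math.pi * 25 * t) for t in s]

def fit(x):
    """True model y = beta*x + u + noise, but regress y on x alone."""
    y = [beta * xi + ui + rng.gauss(0, 0.1) for xi, ui in zip(x, u)]
    return ols_slope(x, y)

bias_large = abs(fit(x_large) - beta)   # substantial confounding bias
bias_fine = abs(fit(x_fine) - beta)     # near zero: the scales separate
```

Only the fine-scale covariate, varying below the scale of the confounding, allows the slope to be recovered, which is the mechanism behind the scale condition stated above.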
Abstract:
Recent research highlights the promise of remotely sensed aerosol optical depth (AOD) as a proxy for ground-level PM2.5. Particular interest lies in the information on spatial heterogeneity potentially provided by AOD, with important applications to estimating and monitoring pollution exposure for public health purposes. Given the temporal and spatio-temporal correlations reported between AOD and PM2.5, it is tempting to interpret the spatial patterns in AOD as reflecting patterns in PM2.5. Here we find only limited spatial associations of AOD from three satellite retrievals with PM2.5 over the eastern U.S. at the daily and yearly levels in 2004. We then use statistical modeling to show that the patterns in monthly average AOD poorly reflect patterns in PM2.5 because of systematic, spatially correlated error in AOD as a proxy for PM2.5. Furthermore, AOD provides little additional information to improve predictions of monthly PM2.5 when included in a statistical prediction model that already accounts for land use, emission sources, meteorology and regional variability. These results suggest caution in using spatial variation in AOD to stand in for spatial variation in ground-level PM2.5 in epidemiological analyses, and indicate that when PM2.5 monitoring is available, careful statistical modeling outperforms the use of AOD.