134 results for Linear models (Statistics)
Abstract:
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of the resting state is obscured by the complexity of large-scale models, and a close structure-function relation is still an open problem. Thus, a realistic but sufficiently simple description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low-firing-activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to understand explicitly how FC builds up through neuronal dynamics underpinned by anatomical connections, and to drive hypotheses in task-evoked studies and for clinical applications.
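A minimal Python sketch of the linearization described above: near a stable fixed point, fluctuations follow a noise-driven linear system whose stationary covariance solves a Lyapunov equation, so model FC follows directly from the anatomical connectivity matrix. The Jacobian parameterization A = -I + w*C and all numbers are illustrative assumptions, not the paper's fitted model.

import numpy as np

def simulated_fc(C, w=0.9, sigma=0.1):
    """Linear-fluctuation approximation of resting-state FC.

    Around a stable fixed point, fluctuations x obey dx = A x dt + sigma dW
    with A = -I + w*C (hypothetical parameterization; C is the structural
    connectivity matrix). The stationary covariance S solves the Lyapunov
    equation A S + S A^T + sigma^2 I = 0, solved here via Kronecker products.
    """
    n = C.shape[0]
    A = -np.eye(n) + w * C
    assert np.all(np.linalg.eigvals(A).real < 0), "fixed point must be stable"
    Q = sigma**2 * np.eye(n)
    # vec(A S + S A^T) = (I (x) A + A (x) I) vec(S) = -vec(Q)
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    S = np.linalg.solve(K, -Q.ravel()).reshape(n, n)
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)          # covariance -> correlation = model FC

rng = np.random.default_rng(0)
C = rng.random((10, 10)) * 0.05        # toy structural connectivity
np.fill_diagonal(C, 0.0)
print(simulated_fc(C).round(2))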
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
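A hedged sketch of the workflow contrasted above: a data-driven trend model plus stochastic simulation of residuals yields multiple equally probable realizations and hence probabilistic maps, which a single-value regression prediction cannot provide. The RBF-ridge trend below stands in for a neural network, the residual simulation is unconditional for brevity, and every value is invented.

import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for 'hard' measurements: locations and activity (kBq/m^2).
xy = rng.uniform(0, 10, size=(200, 2))
z = 50 * np.exp(-0.1 * np.sum((xy - 5) ** 2, axis=1)) + rng.normal(0, 2, 200)

# 1) Trend model: RBF-feature ridge regression as a minimal stand-in
#    for the neural-network predictors discussed in the text.
centers = rng.uniform(0, 10, size=(30, 2))
def features(p):
    d2 = ((p[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / 4.0)

F = features(xy)
w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(30), F.T @ z)

# 2) Stochastic simulation: add spatially correlated residual fields to
#    the trend to obtain multiple realizations of the contamination map.
grid = np.stack(np.meshgrid(np.linspace(0, 10, 25),
                            np.linspace(0, 10, 25)), -1).reshape(-1, 2)
trend = features(grid) @ w
d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
cov = 4.0 * np.exp(-d2 / 2.0)                  # assumed residual covariance
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(grid)))
sims = trend + (L @ rng.standard_normal((len(grid), 100))).T

# Probabilistic mapping: per-cell probability of exceeding 40 kBq/m^2.
p_exceed = (sims > 40).mean(axis=0)
print("max exceedance probability:", p_exceed.max().round(2))
print("mean exceedance probability:", p_exceed.mean().round(3))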
Abstract:
It has been repeatedly debated which strategies people rely on in inference. These debates have been difficult to resolve, partially because hypotheses about the decision processes assumed by these strategies have typically been formulated qualitatively, making it hard to test precise quantitative predictions about response times and other behavioral data. One way to make strategies more precise is to implement them in cognitive architectures such as ACT-R. Often, however, a given strategy can be implemented in several ways, with each implementation yielding different behavioral predictions. We present a study with an experimental paradigm that can help to identify the correct implementations of classic compensatory and non-compensatory strategies such as the take-the-best and tallying heuristics, and the weighted-linear model.
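The three strategies named above are easy to state as running code, which is what makes their implementations testable; a minimal Python sketch (binary cues, with cue order and weights assumed for illustration):

from typing import Sequence

def take_the_best(a: Sequence[int], b: Sequence[int],
                  validity_order: Sequence[int]) -> int:
    """Non-compensatory: inspect cues in order of validity; the first
    discriminating cue decides. Returns 1 if A is chosen, -1 if B,
    0 if the strategy must guess."""
    for i in validity_order:
        if a[i] != b[i]:
            return 1 if a[i] > b[i] else -1
    return 0

def tallying(a, b):
    """Compensatory, unit weights: count positive cues on each side."""
    diff = sum(a) - sum(b)
    return 0 if diff == 0 else (1 if diff > 0 else -1)

def weighted_linear(a, b, weights):
    """Compensatory: weight each cue (e.g., by validity) before summing."""
    score = sum(w * (x - y) for w, x, y in zip(weights, a, b))
    return 0 if score == 0 else (1 if score > 0 else -1)

# Two options described by three binary cues, ordered by cue validity.
A, B = (1, 0, 1), (1, 1, 0)
print(take_the_best(A, B, validity_order=[0, 1, 2]))   # cue 2 decides: B
print(tallying(A, B))                                  # tie: guess
print(weighted_linear(A, B, weights=[0.8, 0.7, 0.6]))  # B wins narrowly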
Abstract:
Aim: To evaluate the effects of using distinct alternative sets of climatic predictor variables on the performance, spatial predictions and future projections of species distribution models (SDMs) for rare plants in an arid environment. Location: Atacama and Peruvian Deserts, South America (18°30'S-31°30'S, 0-3,000 m). Methods: We modelled the present and future potential distributions of 13 species of Heliotropium sect. Cochranea, a plant group with a centre of diversity in the Atacama Desert. We developed and applied a sequential procedure, starting from monthly climate variables, to derive six alternative sets of climatic predictor variables. We used them to fit models with eight modelling techniques within an ensemble forecasting framework, and derived climate change projections for each of them. We evaluated the effects of using these alternative sets of predictor variables on the performance, spatial predictions and projections of the SDMs using Generalised Linear Mixed Models (GLMM). Results: The use of distinct sets of climatic predictor variables did not have a significant effect on overall metrics of model performance, but had significant effects on present and future spatial predictions. Main conclusion: Using different sets of climatic predictors can yield the same model fits but different spatial predictions of current and future species distributions. This represents a new form of uncertainty in model-based estimates of extinction risk that may need to be better acknowledged and quantified in future SDM studies.
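To illustrate the core finding, a toy Python sketch (a logistic regression standing in for the eight modelling techniques; the variable names bio1/bio12 and all data are hypothetical): two predictor sets derived from the same climate data give similar fit metrics yet disagree in their spatial predictions.

import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(X, y, iters=500, lr=0.1):
    """Minimal logistic-regression SDM fitted by gradient ascent."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def predict(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-X @ w))

# Two correlated 'alternative sets' derived from the same monthly data.
n = 300
bio1 = rng.normal(0, 1, n)                 # e.g., annual mean temperature
bio12 = rng.normal(0, 1, n)                # e.g., annual precipitation
set_a = np.column_stack([bio1, bio12])
set_b = np.column_stack([0.9 * bio1 + 0.3 * rng.normal(0, 1, n),
                         0.9 * bio12 + 0.3 * rng.normal(0, 1, n)])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(bio1 - bio12)))).astype(float)

wa, wb = fit_logistic(set_a, y), fit_logistic(set_b, y)
pa, pb = predict(wa, set_a), predict(wb, set_b)
for p, tag in [(pa, "set A"), (pb, "set B")]:
    print(tag, "accuracy:", ((p > .5) == y).mean().round(2))
print("prediction disagreement:", np.abs(pa - pb).mean().round(3))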
Abstract:
In dynamic models of energy allocation, assimilated energy is allocated to reproduction, somatic growth, maintenance or storage, and the allocation pattern can change with age. The expected evolutionary outcome is an optimal allocation pattern, but this depends on the environment experienced during the evolutionary process and on the fitness costs and benefits incurred by allocating resources in different ways. Here we review existing treatments that encompass some of the possibilities regarding constant or variable environments and their predictability or unpredictability, and the ways in which production rates and mortality rates depend on body size, composition and age, and on the pattern of energy allocation. The optimal policy is to allocate resources where selection pressures are highest, and simultaneous allocation to several body subsystems and reproduction can be optimal if these pressures are equal. This may explain the balanced growth commonly observed during ontogeny. Growth ceases at maturity in many models; factors favouring growth after maturity include non-linear trade-offs, variable season length, and production and mortality rates that are both increasing (or decreasing) functions of body size. We cannot yet say whether these factors are sufficient to account for the many known cases of growth after maturity, and not all reasonable models have yet been explored. Factors favouring storage are also reviewed.
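A hedged dynamic-programming sketch of the allocation problem reviewed above, with purely illustrative parameters; under these simple linear trade-offs it recovers the classic grow-early, reproduce-late (bang-bang) schedule that many of the reviewed models predict.

import numpy as np

T = 20                       # steps in the season
smax = 60                    # largest body-size class
prod = lambda s: 0.5 * s     # production increases with body size
surv = 0.95                  # per-step survival probability

V = np.zeros((T + 1, smax + 1))      # V[t, s]: expected future offspring
policy = np.zeros((T, smax + 1), dtype=int)
for t in range(T - 1, -1, -1):
    for s in range(1, smax + 1):
        grown = min(s + max(1, int(prod(s) / 5)), smax)
        v_grow = surv * V[t + 1, grown]          # allocate to growth
        v_repr = prod(s) + surv * V[t + 1, s]    # allocate to reproduction
        policy[t, s] = int(v_repr >= v_grow)
        V[t, s] = max(v_grow, v_repr)

# Typical outcome: grow early, reproduce late (a bang-bang schedule).
print(policy[:, 10])         # 0 = grow, 1 = reproduce, for size class 10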
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underscoring the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially across the different approaches to risk estimation. In some cases the ratio of risk between two techniques ranged from values smaller than one to values larger than one, translating into inconsistent conclusions about which technique carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the approach used. Based on the epidemiological data available, a reasonable approach to risk estimation is to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (the lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
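A schematic Python comparison of the two families of risk models discussed above; all coefficients and doses are illustrative placeholders, not values from the ICRP or BEIR VII reports.

import numpy as np

# Hypothetical mean organ doses (Gy) and excess-risk coefficients (per Gy).
organ_dose = {"lung": 2.0, "contralateral_breast": 1.5,
              "stomach": 0.3, "thyroid": 0.1}
beta = {"lung": 0.010, "contralateral_breast": 0.008,
        "stomach": 0.006, "thyroid": 0.002}

def linear_risk(doses):
    """Linear model: risk proportional to mean organ dose."""
    return sum(beta[o] * d for o, d in doses.items())

def nonlinear_risk(doses, alpha=0.3):
    """Non-linear model: linear induction damped by exponential
    cell kill at high dose, beta*D*exp(-alpha*D)."""
    return sum(beta[o] * d * np.exp(-alpha * d) for o, d in doses.items())

print(f"linear:     {linear_risk(organ_dose):.4f}")
print(f"non-linear: {nonlinear_risk(organ_dose):.4f}")
# The ratio of risk between two techniques can differ between the two
# model families, which is why technique rankings can be inconsistent.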
Abstract:
Experimental research has identified many putative agents of amphibian decline, yet the population-level consequences of these agents remain unknown, owing to a lack of information on compensatory density dependence in natural populations. Here, we investigate the relative importance of intrinsic (density-dependent) and extrinsic (climatic) factors impacting the dynamics of a tree frog (Hyla arborea) population over 22 years. A combination of log-linear density dependence and rainfall (with a 2-year time lag corresponding to development time) explains 75% of the variance in the rate of increase. Such fluctuations around a variable return point might be responsible for the seemingly erratic demography and disequilibrium dynamics of many amphibian populations.
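The model form implied above is a Gompertz-type regression, r_t = log(N_{t+1}/N_t) = a + b*log(N_t) + c*rain_{t-2}; a Python sketch with simulated data (only the model form follows the text, all numbers are invented):

import numpy as np

rng = np.random.default_rng(3)
T = 22
rain = rng.gamma(5, 20, T)                     # annual rainfall (mm)
logN = np.empty(T); logN[0] = np.log(100)
a, b, c = 2.0, -0.4, 0.005
for t in range(T - 1):                         # first two lags wrap; those
    r = a + b * logN[t] + c * rain[t - 2] + rng.normal(0, 0.1)
    logN[t + 1] = logN[t] + r                  # rates are dropped below

# Least-squares fit of the rate of increase on log density and
# rainfall lagged by the 2-year development time.
r_obs = np.diff(logN)[2:]
X = np.column_stack([np.ones(len(r_obs)), logN[2:-1], rain[:len(r_obs)]])
coef, res, *_ = np.linalg.lstsq(X, r_obs, rcond=None)
print("a, b, c estimates:", coef.round(3))
ss_tot = np.sum((r_obs - r_obs.mean()) ** 2)
print("variance explained:", 1 - res[0] / ss_tot if res.size else None)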
Abstract:
As modern molecular biology moves towards the analysis of biological systems as opposed to their individual components, the need for appropriate mathematical and computational techniques for understanding the dynamics and structure of such systems is becoming more pressing. For example, the modeling of biochemical systems using ordinary differential equations (ODEs) based on high-throughput, time-dense profiles is becoming more commonplace, necessitating the development of improved techniques to estimate model parameters from such data. Due to the high dimensionality of this estimation problem, straightforward optimization strategies rarely produce correct parameter values, and hence current methods tend to utilize genetic/evolutionary algorithms to perform non-linear parameter fitting. Here, we describe a completely deterministic approach based on interval analysis. This allows us to examine entire sets of parameters, and thus to exhaust the global search within a finite number of steps. In particular, we show how our method may be applied to a generic class of ODEs used for modeling biochemical systems called Generalized Mass Action models (GMAs). In addition, we show that for GMAs our method is amenable to a technique in interval arithmetic called constraint propagation, which greatly improves its efficiency. To illustrate the applicability of our method, we apply it to several networks of biochemical reactions appearing in the literature, showing in particular that, in addition to estimating system parameters in the absence of noise, our method may also be used to recover the topology of these networks.
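A minimal deterministic branch-and-prune sketch in Python for the simplest one-term GMA, dx/dt = -k*x, using its closed-form solution so the interval enclosure stays exact; real GMA systems require validated ODE enclosures and the constraint propagation mentioned above.

import math

x0, ts = 10.0, [0.5, 1.0, 2.0]
k_true = 0.7
data = [x0 * math.exp(-k_true * t) for t in ts]   # noiseless observations
tol_param, tol_data = 1e-4, 1e-6

def consistent(lo, hi):
    """x(t) = x0*exp(-k t) is decreasing in k, so the box [lo, hi] maps
    to an interval of predictions; prune if any datum falls outside."""
    for t, d in zip(ts, data):
        x_hi, x_lo = x0 * math.exp(-lo * t), x0 * math.exp(-hi * t)
        if d < x_lo - tol_data or d > x_hi + tol_data:
            return False
    return True

boxes, accepted = [(0.0, 5.0)], []
while boxes:
    lo, hi = boxes.pop()
    if not consistent(lo, hi):
        continue                      # whole box excluded: prune it
    if hi - lo < tol_param:
        accepted.append((lo, hi))     # box small enough: accept it
    else:
        mid = (lo + hi) / 2           # bisect and recurse on both halves
        boxes += [(lo, mid), (mid, hi)]

print("enclosures for k:", accepted)  # brackets k_true = 0.7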
Abstract:
The Polochic-Motagua fault systems (PMFS) are part of the sinistral transform boundary between the North American and Caribbean plates. To the west, these systems interact with the subduction zone of the Cocos plate, forming a subduction-subduction-transform triple junction. The North American plate moves westward relative to the Caribbean plate. This movement does not affect the geometry of the subducted Cocos plate, which implies that deformation is accommodated entirely in the two overriding plates. Structural data, fault kinematic analysis, and geomorphic observations provide new elements that help to understand the late Cenozoic evolution of this triple junction. In the Miocene, extension and shortening occurred south and north of the Motagua fault, respectively. This strain regime migrated northward to the Polochic fault after the late Miocene. This shift is interpreted as a "pull-up" of North American blocks into the Caribbean realm. To the west, the PMFS interact with a trench-parallel fault zone that links the Tonala fault to the Jalpatagua fault. These faults bound a fore-arc sliver that is shared by the two overriding plates. We propose that the dextral Jalpatagua fault merges with the sinistral PMFS, leaving behind a suturing structure, the Tonala fault. This tectonic "zipper" allows the migration of the triple junction. As a result, the fore-arc sliver comes into contact with the North American plate and helps to maintain a linear subduction zone along the trailing edge of the Caribbean plate. All these processes currently make the triple junction increasingly diffuse as it propagates eastward and inland within both overriding plates.
Abstract:
This paper suggests a method for obtaining efficiency bounds in models containing either only infinite-dimensional parameters or both finite- and infinite-dimensional parameters (semiparametric models). The method is based on a theory of random linear functionals applied to the gradient of the log-likelihood functional and is illustrated by computing the lower bound for Cox's regression model.
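For concreteness, the standard form of the bound in the illustrating example, assuming the Cox model with hazard \(\lambda(t \mid Z) = \lambda_0(t) e^{\beta' Z}\) (a textbook result that such methods recover, not a quotation from the paper):

% Semiparametric information for beta in the Cox model; Y(t) is the
% at-risk indicator and tau the end of follow-up.
\[
  I(\beta) \;=\; \mathbb{E} \int_0^\tau
    \bigl( Z - m(t,\beta) \bigr)^{\otimes 2}\,
    e^{\beta' Z}\, Y(t)\, \lambda_0(t)\, dt,
  \qquad
  m(t,\beta) \;=\;
    \frac{\mathbb{E}\bigl[ Z\, e^{\beta' Z} Y(t) \bigr]}
         {\mathbb{E}\bigl[ e^{\beta' Z} Y(t) \bigr]},
\]

so the efficiency bound for regular estimators of \(\beta\) is \(I(\beta)^{-1}\), which the partial-likelihood estimator attains.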
Abstract:
Decision situations are often characterized by uncertainty: we do not know the values of the different options on all attributes and have to rely on information stored in our memory to decide. Several strategies have been proposed to describe how people make inferences based on knowledge used as cues. The present research shows how the declarative memory of ACT-R models can be populated from internet statistics. This makes it possible to simulate the performance of decision strategies operating on declarative knowledge derived from occurrences and co-occurrences of objects and cues in the environment.
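A hedged Python sketch of the populating step: base-level activations from object frequencies and associative strengths from co-occurrence ratios, following common ACT-R-style approximations (B_i = ln(n_i); a probability-based S_ji); all counts and the scale S are invented.

import math

hits = {"Berlin": 9_800_000, "Bonn": 1_200_000}            # object counts
cooc = {("capital", "Berlin"): 500_000,                    # cue-object
        ("capital", "Bonn"): 90_000}                       # co-occurrence
S = 2.0                                                    # assumed scale

def base_level(obj):
    """Base-level activation from raw occurrence frequency."""
    return math.log(hits[obj])

def assoc(cue, obj):
    """Associative strength from the cue-conditional co-occurrence share."""
    fan = sum(c for (q, _), c in cooc.items() if q == cue)
    return S - math.log(fan / cooc[(cue, obj)])

def activation(obj, cues):
    return base_level(obj) + sum(assoc(q, obj) for q in cues)

for city in hits:
    print(city, round(activation(city, ["capital"]), 2))
# Retrieval picks the chunk with the highest activation, so the model's
# inferences inherit the statistical structure of the environment.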
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predicted the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and an aquifer analog of alluvial sedimentary structures with five facies. For both cases, MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement over other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations, which enables uncertainty assessment.
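A minimal Python sketch of the direct-sampling core used by such methods: each unsimulated cell is filled with the value found at the first training-image location whose neighbourhood matches the current conditioning pattern within a distance threshold. Tomogram conditioning, the spatially varying resolution, and the traveltime-based acceptance step are omitted; the training image is a toy two-facies channel pattern.

import numpy as np

rng = np.random.default_rng(4)

# Training image: horizontal channel facies (binary, two facies).
ti = np.zeros((60, 60), dtype=int)
for r in range(5, 60, 12):
    ti[r:r + 3, :] = 1

n, R, thresh, tries = 30, 2, 0.2, 400
sim = np.full((n + 2 * R, n + 2 * R), -1)      # pad so windows stay full

path = [(i, j) for i in range(R, n + R) for j in range(R, n + R)]
rng.shuffle(path)                              # random simulation path
for i, j in path:
    pat = sim[i - R:i + R + 1, j - R:j + R + 1]
    known = pat >= 0
    for _ in range(tries):
        a = rng.integers(R, ti.shape[0] - R)
        b = rng.integers(R, ti.shape[1] - R)
        cand = ti[a - R:a + R + 1, b - R:b + R + 1]
        # Accept the first training pattern within the distance threshold.
        if not known.any() or (cand[known] != pat[known]).mean() <= thresh:
            sim[i, j] = ti[a, b]
            break
    else:                                      # no match: keep last sample
        sim[i, j] = ti[a, b]

print(sim[R:-R, R:-R])                         # exhibits channel patterns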
Abstract:
Genome-wide association studies (GWASs) have identified many genetic variants underlying complex traits. Many detected genetic loci harbor variants that associate with multiple, even distinct, traits. Most current analysis approaches focus on single traits, even though the final results from multiple traits are evaluated together. Such approaches miss the opportunity to systematically integrate the phenome-wide data available for genetic association analysis. In this study, we propose a general approach that can integrate association evidence from summary statistics of multiple traits, whether correlated, independent, continuous, or binary, which might come from the same or different studies. We allow for trait heterogeneity effects. Population structure and cryptic relatedness can also be controlled for. Our simulations suggest that the proposed method has improved statistical power over single-trait analysis in most of the cases we studied. We applied our method to the Continental Origins and Genetic Epidemiology Network (COGENT) African ancestry samples for three blood pressure traits and identified four loci (CHIC2, HOXA-EVX1, IGFBP1/IGFBP3, and CDH17; p < 5.0 × 10⁻⁸) associated with hypertension-related traits that were missed by single-trait analysis in the original report. Six additional loci with suggestive association evidence (p < 5.0 × 10⁻⁷) were also observed, including CACNA1D and WNT3. Our study strongly suggests that analyzing multiple phenotypes can improve statistical power and that such analysis can be executed with the summary statistics from GWASs. Our method also provides a way to study cross-phenotype (CP) associations by using summary statistics from GWASs of multiple phenotypes.
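A hedged Python sketch of the general idea: per-trait z-scores for one SNP are combined with the trait correlation matrix into an omnibus chi-square test. This is a generic multi-trait statistic under assumed numbers, not the paper's exact method; note how heterogeneous effect directions make the joint test far stronger than any single trait.

import numpy as np
from scipy import stats

z = np.array([2.0, -1.8, 1.9])                # z-scores for three traits
R = np.array([[1.0, 0.6, 0.5],                # assumed trait correlation,
              [0.6, 1.0, 0.4],                # estimable from genome-wide
              [0.5, 0.4, 1.0]])               # summary statistics

# Omnibus test: T = z' R^{-1} z ~ chi-square with k degrees of freedom.
T = z @ np.linalg.solve(R, z)
p_joint = stats.chi2.sf(T, df=len(z))
p_single = 2 * stats.norm.sf(np.abs(z))       # per-trait two-sided p

print("single-trait p-values:", p_single.round(4))
print("joint test: T = %.2f, p = %.2e" % (T, p_joint))
# The joint p-value clears thresholds that no single trait reaches,
# mirroring the power gain reported above.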
Abstract:
QUESTIONS UNDER STUDY: Since tumour burden consumes substantial healthcare resources, precise cancer incidence estimations are pivotal to define the future needs of national healthcare. This study aimed to estimate incidence and mortality rates of oesophageal, gastric, pancreatic, hepatic and colorectal cancers up to 2030 in Switzerland. METHODS: Swiss Statistics provides national incidences and mortality rates of various cancers, and models of future developments of the Swiss population. Cancer incidences and mortality rates from 1985 to 2009 were analysed to estimate trends and to predict incidence and mortality rates up to 2029. Linear regressions and Joinpoint analyses were performed to estimate future trends of incidences and mortality rates. RESULTS: Crude incidences of oesophageal, pancreatic, liver and colorectal cancers have steadily increased since 1985 and will continue to increase. Gastric cancer incidence and mortality rates show an ongoing decrease. Pancreatic and liver cancer crude mortality rates will keep increasing, whereas colorectal cancer mortality will fall. Mortality from oesophageal cancer will plateau or increase minimally. In terms of European population-standardised incidence rates, oesophageal, pancreatic and colorectal cancer incidences are steady, gastric cancers are diminishing, and liver cancers follow an increasing trend. Standardised mortality rates show a diminution for all but liver cancer. CONCLUSIONS: The oncological burden of gastrointestinal cancer will increase significantly in Switzerland during the next two decades. Crude mortality rates show an ongoing increase except for gastric and colorectal cancers. Expanded healthcare resources will be needed to care properly for these complex patient groups.
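The linear-regression projection step reduces to fitting and extrapolating a trend; a Python sketch with invented rates (the study's Joinpoint segmentation is not reproduced here).

import numpy as np

# Placeholder crude incidence rates per 100,000 for 1985-2009.
years = np.arange(1985, 2010)
rng = np.random.default_rng(5)
incidence = 30 + 0.45 * (years - 1985) + rng.normal(0, 1, years.size)

# Fit a linear trend and extrapolate to 2029.
slope, intercept = np.polyfit(years, incidence, 1)
future = np.arange(2010, 2030)
projection = intercept + slope * future

print(f"trend: {slope:.2f} cases/100,000 per year")
print(f"projected 2029 rate: {projection[-1]:.1f} per 100,000")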