960 results for Discrete Data Models


Relevance:

30.00%

Publisher:

Abstract:

Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of the various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
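
A minimal sketch of the kind of evaluation such a benchmark involves (not the benchmark's actual protocol): a leaky integrate-and-fire model is driven by a structured current injection, and its predicted spike times are scored against reference spikes with a simple coincidence measure. All parameters and the current trace are illustrative.

```python
import numpy as np

def lif_spikes(I, dt=1e-4, tau=0.02, R=1e8, v_rest=-0.07, v_th=-0.05, v_reset=-0.07):
    """Simulate a leaky integrate-and-fire neuron; return spike times in seconds."""
    v, spikes = v_rest, []
    for k, i_k in enumerate(I):
        v += dt / tau * (v_rest - v + R * i_k)   # membrane update
        if v >= v_th:                            # threshold crossing -> spike
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

def coincidence_rate(pred, ref, window=2e-3):
    """Fraction of reference spikes matched by a predicted spike within +/- window."""
    return np.mean([np.any(np.abs(pred - t) <= window) for t in ref])

rng = np.random.default_rng(0)
I = 2.2e-10 + 5e-11 * rng.standard_normal(20000)   # noisy current, 2 s at 0.1 ms steps
ref = lif_spikes(I)                                # stand-in for recorded spike times
pred = lif_spikes(I, tau=0.025)                    # a slightly mismatched candidate model
print(f"predicted {len(pred)} vs {len(ref)} spikes, "
      f"coincidence rate {coincidence_rate(pred, ref):.2f}")
```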

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE OF REVIEW: HIV targets primary CD4(+) T cells. The virus depends on the physiological state of its target cells for efficient replication, and, in turn, viral infection perturbs the cellular state significantly. Identifying the virus-host interactions that drive these dynamic changes is important for a better understanding of viral pathogenesis and persistence. The present review focuses on experimental and computational approaches to studying the dynamics of viral replication and latency. RECENT FINDINGS: It was recently shown that only a fraction of the inducible latently infected reservoir is successfully induced upon stimulation in ex-vivo models, while additional rounds of stimulation allow reactivation of further latently infected cells. This highlights the potential role of treatment duration and timing as important factors for successful reactivation of latently infected cells. The dynamics of HIV productive infection and latency have been investigated using transcriptome and proteome data. The cellular activation state has been shown to be a major determinant of viral reactivation success. Mathematical models of latency have been used to explore the decay dynamics of the latent viral reservoir. SUMMARY: Timing is an important component of biological interactions. Temporal analyses covering aspects of the viral life cycle are essential for gathering a comprehensive picture of HIV interaction with the host cell and untangling the complexity of latency. Understanding the dynamic changes tipping the balance between success and failure of HIV particle production might be key to eradicating the viral reservoir.
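
A toy sketch of the kind of reservoir-decay model the review mentions, with illustrative (not fitted) parameters: the latent reservoir decays slowly on its own, and discrete rounds of stimulation add a temporary reactivation rate, so repeated rounds deplete the inducible reservoir stepwise.

```python
import numpy as np

# Toy latent-reservoir model (illustrative parameters, not fitted to data):
# dL/dt = -(delta + a(t)) * L, where a(t) is a stimulation-driven
# reactivation rate that is nonzero only during stimulation rounds.
delta = 0.001                            # intrinsic decay per day
rounds = [(0, 3), (30, 33), (60, 63)]    # (start, end) days of stimulation
a_stim = 0.05                            # extra reactivation rate while stimulated

dt, T = 0.1, 100.0
t = np.arange(0.0, T, dt)
L = np.empty_like(t)
L[0] = 1.0                               # normalized reservoir size
for k in range(1, len(t)):
    a = a_stim if any(s <= t[k] < e for s, e in rounds) else 0.0
    L[k] = L[k - 1] * (1.0 - (delta + a) * dt)   # explicit Euler step

print(f"reservoir remaining after {T:.0f} days: {L[-1]:.3f}")
# Each stimulation round depletes the inducible reservoir in a step,
# mirroring the observation that additional rounds reactivate more cells.
```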

Relevance:

30.00%

Publisher:

Abstract:

Background and Purpose: The safety and efficacy of thrombolysis in cervical artery dissection (CAD) are controversial. The aim of this meta-analysis was to pool all individual patient data and provide a valid estimate of the safety and outcome of thrombolysis in CAD. Methods: We performed a systematic literature search on intravenous and intra-arterial thrombolysis in CAD. We calculated the pooled rates of symptomatic intracranial hemorrhage and mortality and indirectly compared them with matched controls from the Safe Implementation of Thrombolysis in Stroke-International Stroke Thrombolysis Register. We applied multivariate regression models to identify predictors of excellent (modified Rankin Scale = 0 to 1) and favorable (modified Rankin Scale = 0 to 2) outcome. Results: We obtained individual patient data for 180 patients from 14 retrospective series and 22 case reports. Patients were predominantly female (68%), with a mean +/- SD age of 46 +/- 11 years. Most patients presented with severe stroke (median National Institutes of Health Stroke Scale score = 16). Treatment was intravenous thrombolysis in 67% and intra-arterial thrombolysis in 33%. Median follow-up was 3 months. The pooled symptomatic intracranial hemorrhage rate was 3.1% (95% CI, 1.3 to 7.2). Overall mortality was 8.1% (95% CI, 4.9 to 13.2), and 41.0% (95% CI, 31.4 to 51.4) had an excellent outcome. Stroke severity was a strong predictor of outcome. Overlapping confidence intervals of end points indicated no relevant differences with matched controls from the Safe Implementation of Thrombolysis in Stroke-International Stroke Thrombolysis Register. Conclusions: The safety and outcome of thrombolysis in patients with CAD-related stroke appear similar to those for stroke from all causes. Based on our findings, thrombolysis should not be withheld in patients with CAD. (Stroke. 2011;42:2515-2520.)
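
A minimal sketch of one common way to compute a 95% CI for a pooled event rate (the abstract does not state the exact method used); the event count of 6 in 180 is hypothetical, chosen only to be consistent in magnitude with the reported 3.1%.

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts: ~6 symptomatic ICH events among the 180 pooled patients
lo, hi = wilson_ci(6, 180)
print(f"rate {6/180:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```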

Relevance:

30.00%

Publisher:

Abstract:

As a rigorous combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means of studying factors that affect the coherent evaluation of scientific evidence in forensic science. Part I of this series of papers intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g., the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches, with the help of Bayesian networks, to process their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks when constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) to specify graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e., results of the analysis of black toners present on printed or copied documents).
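
As an illustration of the kind of context-independent network fragment the paper has in mind, here is a minimal two-node hypothesis-evidence fragment evaluated with Bayes' rule; all probabilities are invented for illustration.

```python
# Minimal two-node network fragment: hypothesis H -> evidence E.
# All probabilities are illustrative, not taken from the paper.
p_h = 0.01                      # prior P(H = true)
p_e_given_h = 0.95              # P(E | H), e.g. "match" given same source
p_e_given_not_h = 0.02          # P(E | not H), a random-match probability

# Posterior by Bayes' rule after observing E = true:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

# Likelihood ratio, the usual forensic summary of evidential strength:
lr = p_e_given_h / p_e_given_not_h
print(f"P(H | E) = {p_h_given_e:.3f}, LR = {lr:.1f}")
```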

Relevance:

30.00%

Publisher:

Abstract:

Cross-hole radar tomography is a useful tool for mapping shallow subsurface electrical properties, viz. dielectric permittivity and electrical conductivity. Common practice is to invert cross-hole radar data with ray-based tomographic algorithms using first-arrival traveltimes and first-cycle amplitudes. However, the resolution of conventional ray-based inversion schemes for cross-hole ground-penetrating radar (GPR) is limited because only a fraction of the information contained in the radar data is used. The resolution can be improved significantly by using a full-waveform inversion that considers the entire waveform, or significant parts thereof. In the present study, a recently developed 2D time-domain vectorial full-waveform cross-hole radar inversion code was modified to allow optimized acquisition setups that significantly reduce the acquisition time and computational costs. This is achieved by minimizing the number of transmitter points and maximizing the number of receiver positions. The improved algorithm was employed to invert cross-hole GPR data acquired within a gravel aquifer (4-10 m depth) in the Thur valley, Switzerland. The simulated traces of the final model obtained by the full-waveform inversion fit the observed traces very well in the lower part of the section and reasonably well in the upper part. Compared with the ray-based inversion, the results from the full-waveform inversion show significantly higher-resolution images. Borehole logs were acquired on either side, 2.5 m away from the cross-hole plane. There is a good correspondence between the conductivity tomograms and the natural gamma logs at the boundary between the gravel layer and the underlying lacustrine clay deposits. Using existing petrophysical models, the inversion results and neutron-neutron logs were converted to porosity. Without any additional calibration, the values obtained from the converted neutron-neutron logs and the permittivity results are very close, and similar vertical variations can be observed. In both cases, the full-waveform inversion provides additional information about the subsurface. Due to the presence of the water table and associated refracted/reflected waves, the upper traces are not well fitted, and the upper 2 m of the permittivity and conductivity tomograms are not reliably reconstructed because the unsaturated zone is not incorporated into the inversion domain.
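
For contrast with the full-waveform approach, the following is a toy version of the conventional ray-based step mentioned above: straight-ray first-arrival traveltime tomography on a small crosshole grid, solved by damped least squares. The geometry, velocities, and regularization are illustrative assumptions, not the study's setup.

```python
import numpy as np

# Toy straight-ray crosshole traveltime tomography (all geometry illustrative).
# Unknowns: cell slownesses s (s/m) on an nz x nx grid of cell size h between
# two boreholes; data: first-arrival times t = A @ s along straight rays.
nx, nz, h = 10, 20, 0.5
true_s = np.full((nz, nx), 1.0 / 100.0)        # 100 m/s background (arbitrary)
true_s[8:12, 3:7] = 1.0 / 80.0                 # slow anomaly between the holes

def ray_row(src, rec, n_samp=400):
    """Approximate per-cell path lengths of a straight ray by dense sampling."""
    row = np.zeros(nz * nx)
    pts = np.linspace(0.0, 1.0, n_samp)[:, None] * (rec - src) + src
    seg = np.linalg.norm(rec - src) / n_samp
    for x, z in pts:
        i, j = min(int(z / h), nz - 1), min(int(x / h), nx - 1)
        row[i * nx + j] += seg
    return row

srcs = [np.array([0.0, z]) for z in np.arange(0.5, 10.0, 1.0)]    # left hole
recs = [np.array([5.0, z]) for z in np.arange(0.25, 10.0, 0.5)]   # right hole
A = np.array([ray_row(s, r) for s in srcs for r in recs])
t = A @ true_s.ravel()

# Damped least squares for the perturbation around a known background s0:
s0 = np.full(nz * nx, 1.0 / 100.0)
lam = 1e-3
ds = np.linalg.solve(A.T @ A + lam * np.eye(nz * nx), A.T @ (t - A @ s0))
v_est = (1.0 / (s0 + ds)).reshape(nz, nx)
print(f"mean recovered velocity in anomaly: {v_est[8:12, 3:7].mean():.1f} m/s (true: 80)")
```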

Relevance:

30.00%

Publisher:

Abstract:

We present a novel and straightforward method for estimating recent migration rates between discrete populations using multilocus genotype data. The approach builds upon a two-step sampling design, where individual genotypes are sampled before and after dispersal. We develop a model that estimates all pairwise backward migration rates (m_ij, the probability that an individual sampled in population i is a migrant from population j) between a set of populations. The method is validated with simulated data and compared with the methods of BayesAss and Structure. First, we use data for an island model, and then we consider more realistic data simulations for a metapopulation of the greater white-toothed shrew (Crocidura russula). We show that the precision and bias of estimates primarily depend upon the proportion of individuals sampled in each population. Weak sampling designs may particularly affect the quality of the coverage provided by 95% highest posterior density intervals. We further show that the method is relatively insensitive to the number of loci sampled and to the overall strength of genetic structure. The method can easily be extended and makes fewer assumptions about the underlying demographic and genetic processes than currently available methods. It allows backward migration rates to be estimated across a wide range of realistic conditions.
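
A simplified building block of such an estimator (not the paper's full Bayesian model, which estimates all m_ij jointly): given per-population allele frequencies, multilocus genotype likelihoods can be combined with a prior migration-rate vector to give posterior origin probabilities for one individual. All data here are simulated.

```python
import numpy as np

# Posterior origin probabilities for one diploid individual sampled in
# population i, given reference allele frequencies at L loci for K
# populations and a prior vector of backward migration rates m_i.
rng = np.random.default_rng(1)
K, L = 3, 10
freqs = rng.uniform(0.1, 0.9, size=(K, L))   # allele freq of "A" per pop/locus
m_i = np.array([0.8, 0.15, 0.05])            # prior: mostly residents of pop 1

# Genotype of one individual: count of "A" alleles (0, 1, 2) at each locus,
# here simulated as a true migrant from population 2.
geno = rng.binomial(2, freqs[1])

def log_lik(geno, p):
    """Log-likelihood of a diploid genotype under Hardy-Weinberg frequencies p."""
    g = np.stack([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])   # P(g=0,1,2) per locus
    return np.log(g[geno, np.arange(len(p))]).sum()

ll = np.array([log_lik(geno, freqs[k]) for k in range(K)])
post = m_i * np.exp(ll - ll.max())
post /= post.sum()
print("posterior origin probabilities:", post.round(3))
```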

Relevance:

30.00%

Publisher:

Abstract:

It has been repeatedly debated which strategies people rely on in inference. These debates have been difficult to resolve, partially because hypotheses about the decision processes assumed by these strategies have typically been formulated only qualitatively, making it hard to test precise quantitative predictions about response times and other behavioral data. One way to increase the precision of such strategies is to implement them in cognitive architectures such as ACT-R. Often, however, a given strategy can be implemented in several ways, with each implementation yielding different behavioral predictions. We present an experimental paradigm that can help to identify the correct implementations of classic compensatory and non-compensatory strategies, such as the take-the-best and tallying heuristics and the weighted-linear model, and report a study using this paradigm.
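
A minimal sketch of the two strategy families named above, with invented cue values and validities; note how the non-compensatory and compensatory rules can disagree on the same item pair (these plain functions are, of course, not ACT-R implementations).

```python
# Minimal implementations of two classic inference strategies
# (cue values and validities are illustrative).
def take_the_best(a, b, cues, validities):
    """Check cues in descending validity; decide on the first that discriminates."""
    for c in sorted(cues, key=lambda c: -validities[c]):
        if a[c] != b[c]:
            return "a" if a[c] > b[c] else "b"
    return "guess"

def tallying(a, b, cues):
    """Count positive cues with unit weights; pick the option with the larger tally."""
    ta, tb = sum(a[c] for c in cues), sum(b[c] for c in cues)
    return "a" if ta > tb else "b" if tb > ta else "guess"

cues = ["c1", "c2", "c3"]
validities = {"c1": 0.9, "c2": 0.7, "c3": 0.6}
a = {"c1": 0, "c2": 1, "c3": 1}
b = {"c1": 1, "c2": 0, "c3": 0}
print(take_the_best(a, b, cues, validities))  # "b": decided by cue c1 alone
print(tallying(a, b, cues))                   # "a": two positive cues versus one
```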

Relevance:

30.00%

Publisher:

Abstract:

The present study discusses retention criteria for principal components analysis (PCA) applied to Likert-scale items typical of psychological questionnaires. The main aim is to recommend that applied researchers refrain from relying only on the eigenvalue-greater-than-one criterion; alternative procedures are suggested for adjusting for sampling error. An additional objective is to add evidence on the consequences of applying this rule when PCA is used with discrete variables. The experimental conditions were studied by means of Monte Carlo sampling, including several sample sizes, different numbers of variables and answer alternatives, and four non-normal distributions. The results suggest that even when all the items, and thus the underlying dimensions, are independent, eigenvalues greater than one are frequent and can explain up to 80% of the variance in the data, meeting the empirical criterion. The consequences of using Kaiser's rule are illustrated with a clinical psychology example. The size of the eigenvalues proved to be a function of the sample size and the number of variables, which is also the case for parallel analysis, as previous research shows. To encourage the application of alternative criteria, an R package was developed for deciding the number of principal components to retain by means of confidence intervals constructed around the eigenvalues corresponding to a lack of relationship between discrete variables.
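
A small Monte Carlo sketch of the paper's central point, with illustrative settings: even mutually independent Likert-type items produce sample eigenvalues greater than one, so Kaiser's criterion retains spurious components.

```python
import numpy as np

# Independent 5-point Likert items still yield eigenvalues > 1 in the sample
# correlation matrix; settings (n, p, reps) are illustrative.
rng = np.random.default_rng(0)
n, p, reps = 100, 20, 200          # sample size, items, replications
count_gt1, var_explained = [], []
for _ in range(reps):
    X = rng.integers(1, 6, size=(n, p))       # independent items, values 1..5
    eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    count_gt1.append((eig > 1).sum())
    var_explained.append(eig[eig > 1].sum() / p)

print(f"mean no. of eigenvalues > 1: {np.mean(count_gt1):.1f}")
print(f"mean variance 'explained' by them: {np.mean(var_explained):.0%}")
# A parallel-analysis-style fix: retain a component only if its eigenvalue
# exceeds, e.g., the 95th percentile of eigenvalues from such null data.
```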

Relevance:

30.00%

Publisher:

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
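
A minimal sketch of the fixed-lid planar water-surface idea described above, on a synthetic bed (all values illustrative): flow depth follows directly from a planar surface minus the bed, and unit discharge from a Manning-type relation. This is not the study's RC model, only the depth-estimation idea.

```python
import numpy as np

# Fixed-lid planar water surface over a gently sloping rough bed
# (all numbers illustrative, loosely in the range of a large sand-bed river).
nx, ny, dx = 200, 50, 10.0                  # grid dimensions and cell size (m)
x = np.arange(nx) * dx
rng = np.random.default_rng(2)
bed = -5.0 - 1e-5 * x + rng.normal(0, 0.3, (ny, nx))   # bed elevation (m)

S = 1e-5                                    # water-surface slope (very low)
wse = 0.0 - S * x                           # planar ("fixed-lid") water surface
depth = np.maximum(wse - bed, 0.0)          # depth needs no iterative solution

n_manning = 0.03
q = depth ** (5.0 / 3.0) * np.sqrt(S) / n_manning      # unit discharge (m^2/s)
print(f"mean depth {depth.mean():.2f} m, mean unit discharge {q.mean():.2f} m^2/s")
```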

Relevance:

30.00%

Publisher:

Abstract:

Continuous field mapping has to address two conflicting remote sensing requirements when collecting training data. On the one hand, continuous field mapping trains fractional land cover and thus favours mixed training pixels. On the other hand, the spectral signature has to be preferably distinct and thus favours pure training pixels. The aim of this study was to evaluate the sensitivity of training data distribution along fractional and spectral gradients on the resulting mapping performance. We derived four continuous fields (tree, shrub/herb, bare, water) from aerial photographs as response variables and processed corresponding spectral signatures from multitemporal Landsat 5 TM data as explanatory variables. Subsequent controlled experiments along fractional cover gradients were based on generalised linear models. The resulting fractional and spectral distributions differed between the single continuous fields but could be satisfactorily trained and mapped. Pixels with fractional cover, or without the respective cover, were much more critical than pure full-cover pixels. The error distribution of the continuous field models was non-uniform with respect to the horizontal and vertical spatial distribution of the target fields. We conclude that sampling for continuous field training data should be based on extent and densities in the fractional and spectral space rather than in the real spatial space. Consequently, adequate training plots are most probably not systematically distributed in the real spatial space, but cover the gradient and covariate structure of the fractional and spectral space well. (C) 2009 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
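
A minimal sketch of training a fractional-cover response with a binomial GLM, in the spirit of the study's design; the spectral predictors and the cover-generating relation are simulated stand-ins.

```python
import numpy as np
import statsmodels.api as sm

# Fractional cover (0..1) modelled as a binomial GLM on spectral predictors;
# the four "bands" and the generating coefficients are synthetic.
rng = np.random.default_rng(3)
n = 500
bands = rng.normal(size=(n, 4))                       # 4 spectral predictors
true_beta = np.array([0.8, -0.5, 0.3, 0.0])
p = 1 / (1 + np.exp(-(bands @ true_beta - 0.2)))      # true cover fraction
cover = rng.binomial(100, p) / 100                    # observed fraction

X = sm.add_constant(bands)
fit = sm.GLM(cover, X, family=sm.families.Binomial()).fit()
print(fit.params.round(2))   # approximately recovers the generating coefficients
```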

Relevance:

30.00%

Publisher:

Abstract:

Joint inversion of crosshole ground-penetrating radar and seismic data can improve model resolution and the fidelity of the resulting individual models. Model coupling obtained by minimizing or penalizing some measure of structural dissimilarity between models appears to be the most versatile approach because only weak assumptions about petrophysical relationships are required. Nevertheless, experimental results and petrophysical arguments suggest that when porosity variations are weak in saturated unconsolidated environments, radar wave speed is approximately linearly related to seismic wave speed. Under such circumstances, model coupling can also be achieved by incorporating cross-covariances in the model regularization. In two case studies, structural similarity is imposed by penalizing models for which the model cross-gradients are nonzero. The first case study demonstrates improvements in model resolution by comparing the resulting models with borehole information, whereas the second case study uses point-spread functions. Although the radar-seismic wave-speed crossplots are very similar for the two case studies, the models plot in different portions of the graph, suggesting differences in porosity. Both examples display a close, quasi-linear relationship between radar and seismic wave speeds in unconsolidated environments that is described rather well by the corresponding lower Hashin-Shtrikman (HS) bounds. Combining crossplots of the joint inversion models with HS bounds can constrain porosity and pore structure better than individual inversion results can.
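
A minimal sketch of the cross-gradient measure referred to above, on synthetic 2D models: the scalar cross product of the two models' gradient fields vanishes wherever their structures align, and joint inversions penalize its magnitude.

```python
import numpy as np

# Cross-gradient between two 2D models m1 (radar) and m2 (seismic);
# both models and the shared structure are synthetic.
rng = np.random.default_rng(4)
m1 = rng.normal(size=(40, 60))
m1[15:25, 20:40] += 2.0            # shared structural feature ...
m2 = 0.5 * rng.normal(size=(40, 60))
m2[15:25, 20:40] += 1.0            # ... appears in both models

g1z, g1x = np.gradient(m1)         # gradients along rows (z) and columns (x)
g2z, g2x = np.gradient(m2)
t = g1x * g2z - g1z * g2x          # scalar cross product of the gradients

# A joint inversion would penalize sum(t**2); here we only inspect its size:
print(f"mean |cross-gradient|: {np.abs(t).mean():.3f}")
```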

Relevance:

30.00%

Publisher:

Abstract:

Surface-based ground-penetrating radar (GPR) and electrical resistance tomography (ERT) are common tools for aquifer characterization because both methods provide data that are sensitive to hydrogeologically relevant quantities. To retrieve bulk subsurface properties at high resolution, we suggest incorporating structural information derived from GPR reflection data when inverting surface ERT data. This reduces resolution limitations that might otherwise hinder quantitative interpretation. Surface-based GPR reflection and ERT data were recorded on an exposed gravel bar within a restored section of a previously channelized river in northeastern Switzerland to characterize the underlying gravel aquifer. The GPR reflection data, acquired over an area of 240×40 m, map the aquifer's thickness and two internal sub-horizontal regions with different depositional patterns. The interface between these two regions and the boundary of the aquifer with the underlying clay are incorporated in an unstructured ERT mesh. Subsequent inversions are performed without applying smoothness constraints across these boundaries. Inversion models obtained using these structural constraints contain subtle resistivity variations within the aquifer that are hardly visible in standard inversion models as a result of the strong vertical smearing in the latter. In the upper aquifer region, with high GPR coherency and horizontal layering, the resistivity is moderately high (>300 Ωm). We suggest that this region consists of sediments that were rearranged during more than a century of channelized flow. In the lower, low-coherency region, the GPR image reveals fluvial features (e.g., foresets) and generally more heterogeneous deposits. In this region, the resistivity is lower (~200 Ωm), which we attribute to increased amounts of fines in some of the well-sorted fluvial deposits. We also find elongated conductive anomalies that correspond to the location of river embankments that were removed in 2002.
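
A 1D sketch of the structural-constraint idea described above (indices illustrative): build a first-difference smoothness operator but drop the rows that straddle GPR-derived interfaces, so the inversion may place sharp resistivity jumps there.

```python
import numpy as np

# First-difference smoothness operator for n model cells, with rows removed
# across known layer boundaries (cell indices are illustrative).
n = 30
interfaces = {9, 19}     # no smoothness enforced across cells 9|10 and 19|20

rows = []
for i in range(n - 1):
    if i in interfaces:
        continue                        # decouple across the GPR-derived boundary
    r = np.zeros(n)
    r[i], r[i + 1] = -1.0, 1.0
    rows.append(r)
W = np.array(rows)                      # used as lam * ||W m|| in the inversion

print(f"smoothness rows kept: {W.shape[0]} of {n - 1} (2 constraints dropped)")
```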

Relevance:

30.00%

Publisher:

Abstract:

Automatic environmental monitoring networks, supported by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools for processing large amounts of available data and producing predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at robust non-parametric modelling of non-linear dependencies from empirical data. The computational efficiency of the data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation arises particularly in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are highly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including situations of Föhn and temperature inversion) given measurements taken from the Swiss meteorological monitoring network. The range of methods used in the study includes data-driven feature selection, support vector algorithms, and artificial neural networks.
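
A small sketch of topo-climatic mapping in this spirit, using one of the method families named above (support vector regression); the station data, DEM-derived features, and the inversion-like signal below 800 m are synthetic stand-ins.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Learn a non-linear relation between DEM-derived features and station
# temperature, then predict along an elevation gradient. Data are synthetic,
# including a crude temperature-inversion signal below 800 m.
rng = np.random.default_rng(5)
n = 300
elev = rng.uniform(300, 3000, n)                 # station elevations (m)
slope = rng.uniform(0, 40, n)                    # a second DEM feature (deg)
temp = 15 - 6.5e-3 * elev + 3.0 * (elev < 800) + rng.normal(0, 0.5, n)

X = np.column_stack([elev, slope])
model = make_pipeline(StandardScaler(), SVR(C=10.0))
model.fit(X, temp)

grid = np.column_stack([np.linspace(300, 3000, 5), np.full(5, 10.0)])
print(model.predict(grid).round(1))              # cooling with elevation + inversion
```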

Relevance:

30.00%

Publisher:

Abstract:

Gene set enrichment (GSE) analysis is a popular framework for condensing information from gene expression profiles into a pathway or signature summary. The strengths of this approach over single-gene analysis include noise and dimension reduction, as well as greater biological interpretability. As molecular profiling experiments move beyond simple case-control studies, robust and flexible GSE methodologies are needed that can model pathway activity within highly heterogeneous data sets. To address this challenge, we introduce Gene Set Variation Analysis (GSVA), a GSE method that estimates variation of pathway activity over a sample population in an unsupervised manner. We demonstrate the robustness of GSVA in a comparison with current state-of-the-art sample-wise enrichment methods. Further, we provide examples of its utility in differential pathway activity and survival analysis. Lastly, we show how GSVA works analogously with data from both microarray and RNA-seq experiments. GSVA provides increased power to detect subtle pathway activity changes over a sample population in comparison to corresponding methods. While GSE methods are generally regarded as end points of a bioinformatic analysis, GSVA constitutes a starting point for building pathway-centric models of biology. Moreover, GSVA addresses the current need for GSE methods applicable to RNA-seq data. GSVA is an open-source software package for R which forms part of the Bioconductor project and can be downloaded at http://www.bioconductor.org.
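
As a rough illustration of sample-wise enrichment scoring (explicitly not the GSVA algorithm, which uses kernel-estimated expression statistics and a KS-like random-walk score; for real analyses use the Bioconductor package), here is a simple per-sample z-score summary of a gene set on simulated data.

```python
import numpy as np

# Simplified sample-wise gene-set score: standardize each gene across
# samples, then combine within the set. Data and the "active" pathway
# are simulated; this is a toy stand-in, not GSVA itself.
rng = np.random.default_rng(6)
n_genes, n_samples = 1000, 12
expr = rng.normal(size=(n_genes, n_samples))
gene_set = np.arange(50)                    # indices of a hypothetical pathway
expr[gene_set, 6:] += 1.0                   # pathway "active" in samples 7-12

z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
score = z[gene_set].mean(axis=0) * np.sqrt(len(gene_set))   # combined z per sample
print(score.round(1))                       # clearly higher in the last 6 samples
```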

Relevance:

30.00%

Publisher:

Abstract:

The major intention of the present study was to investigate whether an approach combining niche-based palaeodistribution modeling and phylogeography would support or modify hypotheses about Quaternary distributional history derived from phylogeographic methods alone. Our study system comprised two closely related species of Alpine Primula. We used species distribution models based on the extant distributions of the species and last glacial maximum (LGM) climate models to predict the distribution of the two species during the LGM. Phylogeographic data were generated using amplified fragment length polymorphisms (AFLPs). In Primula hirsuta, models of past distribution and phylogeographic data are partly congruent and support the hypothesis of widespread nunatak survival in the Central Alps. Species distribution models (SDMs) allowed us to differentiate between alpine regions that harbor potential nunatak areas and regions that were colonized from other areas. SDMs revealed that diversity is a good indicator of nunataks, while rarity is a good indicator of peripheral relict populations that were not the source of the recolonization of the inner Alps. In P. daonensis, palaeodistribution models and phylogeographic data are incongruent. Besides the uncertainty inherent in this type of modeling approach (e.g., the relatively coarse 1-km grain size), the disagreement between models and data may partly be caused by shifts of ecological niche in both species. Nevertheless, we demonstrate that combining palaeodistribution modeling with phylogeographic approaches provides a more differentiated picture of the distributional history of species, partly supporting (P. hirsuta) and partly modifying (P. daonensis and P. hirsuta) hypotheses of Quaternary distributional history. Some of the refugial areas indicated by the palaeodistribution models could not have been identified from phylogeographic data.
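
A minimal sketch of the SDM-projection step described above: fit presence/absence against current climate predictors, then project the fitted model onto LGM climate. The species data, predictors, and the uniform LGM cooling are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Fit a simple SDM on present climate and project it to LGM climate;
# all values are synthetic stand-ins.
rng = np.random.default_rng(7)
n = 400
temp_now = rng.normal(5, 4, n)             # mean annual temperature (deg C)
precip_now = rng.normal(1200, 300, n)      # annual precipitation (mm)
logit = 1.0 - (temp_now - 2.0) ** 2 / 8.0  # species prefers ~2 deg C
presence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_now = np.column_stack([temp_now, temp_now ** 2, precip_now])
sdm = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
sdm.fit(X_now, presence)

temp_lgm = temp_now - 8.0                  # crude uniform LGM cooling
X_lgm = np.column_stack([temp_lgm, temp_lgm ** 2, precip_now])
print(f"mean predicted suitability, present vs LGM: "
      f"{sdm.predict_proba(X_now)[:, 1].mean():.2f} vs "
      f"{sdm.predict_proba(X_lgm)[:, 1].mean():.2f}")
```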