52 results for Interpretability


Relevance: 10.00%

Abstract:

Background: Choosing an adequate measurement instrument depends on the proposed use of the instrument, the concept to be measured, the measurement properties (e.g. internal consistency, reproducibility, content and construct validity, responsiveness, and interpretability), the requirements, the burden for subjects, and the costs of the available instruments. As far as measurement properties are concerned, there are no sufficiently specific standards for evaluating the measurement properties of instruments that measure health status, nor explicit criteria for what constitutes good measurement properties. In this paper we describe the protocol for the COSMIN study, whose objective is to develop a checklist containing COnsensus-based Standards for the selection of health Measurement INstruments, including explicit criteria for satisfying these standards. We focus on evaluative health-related patient-reported outcomes (HR-PROs), i.e. patient-reported health measurement instruments used in a longitudinal design as an outcome measure, excluding health-care-related PROs such as satisfaction with care or adherence. The COSMIN standards will be made available as an easily applicable checklist.

Method: An international Delphi study will be performed to reach consensus on which measurement properties should be assessed and how, and on criteria for good measurement properties. Two sources of input will be used for the Delphi study: (1) a systematic review of properties, standards, and criteria of measurement properties found in systematic reviews of measurement instruments, and (2) an additional literature search for methodological articles presenting a comprehensive checklist of standards and criteria. The Delphi study will consist of four (written) Delphi rounds with approximately 30 expert panel members with backgrounds in clinical medicine, biostatistics, psychology, and epidemiology. The final checklist will subsequently be field-tested by assessing its inter-rater reproducibility.

Discussion: Since the study will be largely anonymous, problems commonly encountered in face-to-face group meetings, such as the dominance of certain persons in the communication process, will be avoided. By performing a Delphi study and involving many experts, the likelihood that the checklist will have sufficient credibility to be accepted and implemented increases.

Relevance: 10.00%

Abstract:

Aim: This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models; (2) the effect of weighting absences to ensure a prevalence of 0.5; (3) the effect of limiting absences beyond the environmental envelope defined by presences; (4) four different methods for incorporating spatial autocorrelation; and (5) the effect of integrating an interaction factor, defined by a regression tree, on the residuals of an initial environmental model.

Location: State of Vaud, western Switzerland.

Methods: Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp).

Results: Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that performed better than models fitted with the original sample prevalence; this appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on the main environmental gradients changed the set of selected predictors, and potentially their response-curve shapes; it also slightly improved model performance and stability compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly, and even better models were obtained when including local spatial autocorrelation. A novel approach to including interactions proved an efficient way to account for interactions between all predictors at once.

Main conclusions: Models and spatial predictions of 18 forest communities were significantly improved by using: (1) cross-validation as a model selection method; (2) weighted absences; (3) limited absences; (4) predictors accounting for spatial autocorrelation; or (5) a factor variable accounting for interactions between all predictors. The final choice of modelling strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice, but one should not neglect the shapes and interpretability of response curves, as well as the resulting spatial predictions, in the final assessment.
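
The absence-weighting step lends itself to a short illustration. Below is a minimal sketch in Python, using scikit-learn's logistic regression as a stand-in for the GAMs fitted with grasp (the weighting logic itself is model-agnostic); the data and coefficients are synthetic, invented for the example.

```python
# Minimal sketch of absence weighting to mimic a prevalence of 0.5.
# A plain logistic regression stands in for the GAM fitted in the paper;
# the weighting logic itself is model-agnostic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic presence/absence data with low prevalence (rare species).
X = rng.normal(size=(2000, 3))                   # environmental predictors
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 2.5)))     # rare-species response
y = rng.binomial(1, p)

# Weight each absence so the weighted prevalence becomes 0.5:
# total weight of presences equals total weight of absences.
n_pres, n_abs = y.sum(), (y == 0).sum()
w = np.where(y == 1, 1.0, n_pres / n_abs)

model = LogisticRegression().fit(X, y, sample_weight=w)
print("raw prevalence:", y.mean())
print("weighted prevalence:", (w * y).sum() / w.sum())   # 0.5 by design
```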

Relevance: 10.00%

Abstract:

This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression in which dedicated kernels divide a given task into sub-problems and treat them separately in an effective way. It brings better interpretability to non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad hoc topographic indices as covariates in statistical models in complex terrain; instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. We also examine the stability of the MKL algorithm with respect to the number of training samples and the presence of noise. Finally, the results of a real case study are presented, in which MKL exploits a large set of terrain features computed at multiple spatial scales to predict mean wind speed in an Alpine region.
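
As a rough illustration of the composite-kernel idea, the following sketch builds one RBF kernel per thematic feature subset and feeds a fixed convex combination to scikit-learn's SVR with a precomputed kernel. It is not the authors' MKL solver: true MKL learns the weights jointly with the regression, whereas here they are fixed, and all data and subset definitions are invented for the example.

```python
# Sketch of the MKL idea: one RBF kernel per thematic feature subset,
# combined into a single convex kernel fed to SVR. Real MKL learns the
# weights d_m jointly with the regression; here they are fixed for clarity.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)

# Thematic subsets, e.g. slope features vs. curvature features.
subsets = [[0, 1, 2], [3, 4, 5]]
weights = np.array([0.5, 0.5])          # d_m >= 0, summing to 1

def combined_kernel(A, B):
    """Convex combination of per-subset RBF kernels."""
    return sum(d * rbf_kernel(A[:, s], B[:, s], gamma=1.0)
               for d, s in zip(weights, subsets))

K = combined_kernel(X, X)
svr = SVR(kernel="precomputed").fit(K, y)
print("training R^2:", svr.score(K, y))
```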

Relevance: 10.00%

Abstract:

The paper presents the Multiple Kernel Learning (MKL) approach as a modelling and data-exploration tool and applies it to the problem of wind speed mapping. Support Vector Regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple Kernel Learning is applied to learn kernels for individual features and thematic feature subsets, in the context of both feature selection and optimal parameter determination. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
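
One way to mimic the parameter-determination aspect without the full MKL machinery is to scan candidate kernel weights by cross-validation and read the winning weights as the relative relevance of each thematic subset. The sketch below is such a stand-in for learned MKL weights, on synthetic data, and is not the paper's optimization.

```python
# Hedged stand-in for MKL weight learning: scan candidate kernel weights
# with cross-validation and read the winning weights as the relative
# relevance of each thematic feature subset.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # only subset 0 matters

subsets = [[0, 1, 2], [3, 4, 5]]
K_parts = [rbf_kernel(X[:, s], X[:, s], gamma=1.0) for s in subsets]

best = None
for d0 in np.linspace(0, 1, 11):                   # weights on the simplex
    K = d0 * K_parts[0] + (1 - d0) * K_parts[1]
    score = cross_val_score(SVR(kernel="precomputed"), K, y, cv=5).mean()
    if best is None or score > best[0]:
        best = (score, d0)

print(f"best CV R^2 = {best[0]:.3f} with weight {best[1]:.1f} on subset 0")
```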

Relevance: 10.00%

Abstract:

In this tutorial review, we detail both the rationale for and the implementation of a set of analyses of surface-recorded event-related potentials (ERPs) that uses the reference-free spatial (i.e. topographic) information available from high-density electrode montages to provide statistical information about modulations in response strength, latency, and topography, both between and within experimental conditions. In these and other ways, these topographic analysis methods allow the experimenter to glean additional information and neurophysiologic interpretability beyond what is available from canonical waveform analyses. We present the example of somatosensory evoked potentials (SEPs) in response to stimulation of each hand to illustrate these points. For each step of the analyses, we provide the reader with both a conceptual and a mathematical description of how the analysis is carried out, what it yields, and how to interpret its statistical outcome. We show that these topographic analysis methods are intuitive and easy-to-use approaches that can remove much of the guesswork often confronting ERP researchers and can assist in identifying the information contained within high-density ERP datasets.
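
Two of the standard reference-free topographic measures that tutorials of this kind build on are Global Field Power (GFP, the spatial standard deviation across electrodes) and global map dissimilarity (DISS). A minimal sketch, with random 64-channel maps standing in for real SEP topographies:

```python
# Two standard reference-free topographic measures: Global Field Power
# (GFP, the spatial standard deviation across electrodes) and global map
# dissimilarity (DISS) between two scalp maps.
import numpy as np

def gfp(v):
    """Global Field Power of one map: spatial SD across electrodes."""
    return np.sqrt(np.mean((v - v.mean()) ** 2))

def diss(u, v):
    """Dissimilarity of two maps after average-referencing and GFP
    scaling; ranges from 0 (identical topographies) to 2 (inverted)."""
    u = (u - u.mean()) / gfp(u)
    v = (v - v.mean()) / gfp(v)
    return np.sqrt(np.mean((u - v) ** 2))

rng = np.random.default_rng(3)
map_left = rng.normal(size=64)     # 64-channel map, left-hand stimulation
map_right = rng.normal(size=64)    # right-hand stimulation
print("GFP: ", gfp(map_left))
print("DISS:", diss(map_left, map_right))
```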

Relevance: 10.00%

Abstract:

Gene set enrichment (GSE) analysis is a popular framework for condensing information from gene expression profiles into a pathway or signature summary. The strengths of this approach over single-gene analysis include noise and dimension reduction as well as greater biological interpretability. As molecular profiling experiments move beyond simple case-control studies, robust and flexible GSE methodologies are needed that can model pathway activity within highly heterogeneous data sets. To address this challenge, we introduce Gene Set Variation Analysis (GSVA), a GSE method that estimates variation of pathway activity over a sample population in an unsupervised manner. We demonstrate the robustness of GSVA in a comparison with current state-of-the-art sample-wise enrichment methods, provide examples of its utility in differential pathway activity and survival analysis, and show how GSVA works analogously with data from both microarray and RNA-seq experiments. GSVA provides increased power to detect subtle changes in pathway activity over a sample population in comparison with corresponding methods. While GSE methods are generally regarded as end points of a bioinformatic analysis, GSVA constitutes a starting point for building pathway-centric models of biology, and it addresses the current need for GSE methods that handle RNA-seq data. GSVA is an open-source software package for R that forms part of the Bioconductor project and can be downloaded at http://www.bioconductor.org.
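
To convey the flavour of sample-wise enrichment without reproducing GSVA itself (GSVA uses kernel density estimates of expression and a Kolmogorov-Smirnov-like random walk; see the Bioconductor package), here is a deliberately simplified rank-based score per sample. The expression matrix and gene set are synthetic.

```python
# A deliberately simplified sample-wise enrichment score, NOT the actual
# GSVA statistic. Here: the per-sample mean rank of the gene set,
# z-scored against random selection (iid approximation), enough to
# illustrate unsupervised, sample-wise pathway scoring.
import numpy as np

def simple_sample_scores(expr, gene_set_idx):
    """expr: genes x samples matrix; returns one score per sample."""
    n_genes = expr.shape[0]
    ranks = expr.argsort(axis=0).argsort(axis=0) + 1  # rank genes per sample
    set_mean = ranks[gene_set_idx].mean(axis=0)
    mu = (n_genes + 1) / 2                            # expected mean rank
    sigma = np.sqrt((n_genes**2 - 1) / (12 * len(gene_set_idx)))
    return (set_mean - mu) / sigma

rng = np.random.default_rng(4)
expr = rng.normal(size=(1000, 20))        # 1000 genes, 20 samples
expr[:50, 10:] += 2.0                     # pathway active in samples 10-19
print(simple_sample_scores(expr, np.arange(50)).round(2))
```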

Relevance: 10.00%

Abstract:

The review of a regulation intended to improve consumer information raises pragmatic observations about good textual practices. Order 385/2003 of the Generalitat de Catalunya (Spain) sets a minimum font size for contracts in order to ensure their readability. Interpreting this regulation invites consideration of the pragmatic use of fine print. The paper also discusses the advertising of an energy company that, paradoxically, now uses the fine print of its contractual documents as a mark of prestige.

Relevance: 10.00%

Abstract:

Colorectal cancer (CRC) is a frequently lethal disease with heterogeneous outcomes and drug responses. To resolve inconsistencies among the reported gene expression-based CRC classifications and facilitate clinical translation, we formed an international consortium dedicated to large-scale data sharing and analytics across expert groups. We show marked interconnectivity between six independent classification systems coalescing into four consensus molecular subtypes (CMSs) with distinguishing features: CMS1 (microsatellite instability immune, 14%), hypermutated, microsatellite unstable and strong immune activation; CMS2 (canonical, 37%), epithelial, marked WNT and MYC signaling activation; CMS3 (metabolic, 13%), epithelial and evident metabolic dysregulation; and CMS4 (mesenchymal, 23%), prominent transforming growth factor-β activation, stromal invasion and angiogenesis. Samples with mixed features (13%) possibly represent a transition phenotype or intratumoral heterogeneity. We consider the CMS groups the most robust classification system currently available for CRC, with clear biological interpretability, and the basis for future clinical stratification and subtype-based targeted interventions.

Relevance: 10.00%

Abstract:

Visual search is an important component of our interaction with our surroundings, allowing us to identify external cues that affect our spatial navigation. Previous research has established fixation duration, fixation count, saccade velocity, and saccade amplitude as important indices of visual search. We examined the Visual Efficiency Detection Index (VEDI), which combines multiple aspects of visual search performance into a single measure of global visual performance. Forty participants (10 adults aged 22-48, and children aged 6, 8, and 10) completed tests of working memory and visual search in response to stimuli relevant to pedestrian decision making. Results indicated that the VEDI relates statistically to established indices of visual search with respect to their interpretability for human performance. The VEDI was also sensitive to developmental differences in visual search performance, suggesting its utility in the developmental psychology literature.
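
The abstract does not give the VEDI formula, so the following is only a generic composite-index sketch: z-score each eye-movement metric across participants and average them with hypothetical scoring directions. Every name, number, and direction in it is invented for illustration.

```python
# Generic composite-index sketch (NOT the published VEDI definition):
# z-score each eye-movement metric and average, flipping sign where a
# lower raw value is assumed to mean better search performance.
import numpy as np

def composite_index(metrics, higher_is_better):
    """metrics: participants x measures array; returns one score each."""
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    signs = np.where(higher_is_better, 1.0, -1.0)
    return (z * signs).mean(axis=1)

rng = np.random.default_rng(5)
# Columns: fixation duration, fixation count, saccade velocity, amplitude.
data = rng.normal(loc=[250, 12, 300, 5], scale=[40, 3, 50, 1], size=(40, 4))
# Hypothetical scoring directions, chosen purely for illustration.
scores = composite_index(data, higher_is_better=np.array([0, 0, 1, 1], bool))
print(scores[:5].round(2))
```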

Relevance: 10.00%

Abstract:

The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods for analyzing weather/climate modes of variability and for reducing the dimensionality of the system. Simple-structure rotation of EOFs can enhance the interpretability of the obtained patterns, but cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. ICA is viewed here as a method of EOF rotation: starting from an initial EOF solution, rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence between the components in the time domain. If the underlying climate signals have independent forcings, one can expect to find loadings with interpretable patterns whose time coefficients have properties going beyond the simple noncorrelation observed for EOFs. The methodology is presented, and an application to the monthly mean sea level pressure (SLP) field is discussed. Among the EOFs rotated to independence, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation-like pattern, and a Scandinavian-like pattern have been identified. There is a suggestion that the NAO is an intrinsic mode of variability independent of the Pacific.
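
The rotation the paper describes can be sketched with off-the-shelf tools: extract the leading EOFs/PCs and then rotate the PC time series toward temporal independence with FastICA, rather than toward spatial simplicity as varimax would. The field below is random noise standing in for SLP anomalies.

```python
# Sketch of ICA as an EOF rotation: leading EOFs/PCs of an anomaly field
# are rotated so that the time coefficients become maximally independent.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(6)
n_time, n_grid, n_modes = 480, 500, 5      # e.g. 40 years of monthly maps
field = rng.normal(size=(n_time, n_grid))  # stand-in for SLP anomalies

pca = PCA(n_components=n_modes)
pcs = pca.fit_transform(field)             # time coefficients (n_time x k)
eofs = pca.components_                     # spatial patterns (k x n_grid)

ica = FastICA(n_components=n_modes, whiten="unit-variance", random_state=0)
ics = ica.fit_transform(pcs)               # independent time coefficients
# field ~ pcs @ eofs and pcs ~ ics @ mixing_.T, hence:
rotated_patterns = ica.mixing_.T @ eofs    # patterns paired with the ICs

print(rotated_patterns.shape)              # (n_modes, n_grid)
```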

Relevance: 10.00%

Abstract:

This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems whose basis functions are Bezier-Bernstein polynomials. The algorithm is general in that it copes with n-dimensional inputs by using an additive decomposition construction to overcome the curse of dimensionality associated with high n, and it introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline-based neurofuzzy systems, Bezier-Bernstein polynomial-based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, summation to unity, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The modeling network is based on the additive decomposition approach together with two separate basis-function formation approaches, for univariate and for bivariate Bezier-Bernstein polynomial functions, used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples demonstrate the effectiveness of this data-based modeling approach.
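
The two basis-function properties that support the fuzzy-membership reading, nonnegativity and summation to one, are easy to verify numerically. A minimal sketch of the univariate Bernstein basis (the construction is standard; the degree and grid are arbitrary choices for the example):

```python
# Univariate Bernstein polynomial basis of degree n on [0, 1]; the sketch
# checks the two properties exploited for fuzzy-membership interpretation:
# nonnegativity and summing to one (partition of unity).
import numpy as np
from math import comb

def bernstein_basis(x, n):
    """Return the n+1 Bernstein basis functions B_{i,n}(x), x in [0, 1]."""
    x = np.asarray(x)
    return np.stack([comb(n, i) * x**i * (1 - x)**(n - i)
                     for i in range(n + 1)], axis=-1)

x = np.linspace(0, 1, 101)
B = bernstein_basis(x, n=4)
assert (B >= 0).all()                        # nonnegativity
assert np.allclose(B.sum(axis=-1), 1.0)      # partition of unity
print(B.shape)                               # (101, 5)
```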

Relevance: 10.00%

Abstract:

Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
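
As a toy illustration of the "one neural source, two observables" idea (not the paper's mean field model), the sketch below drives a crude rate equation with a boxcar stimulus, projects it through a made-up lead field to obtain EEG-like channels, and convolves it with a canonical double-gamma HRF to obtain a BOLD-like time course. The lead field, time constants, and stimulus are all assumptions.

```python
# Toy forward-model illustration: one neural activity trace is mapped to
# EEG-like channels via a hypothetical lead field, and to a BOLD-like
# signal by convolution with a canonical double-gamma HRF.
import numpy as np
from scipy.special import gamma as Gamma

dt, T = 0.01, 60.0
t = np.arange(0, T, dt)

# Crude rate dynamics: leaky integration of a boxcar stimulus (tau = 0.1 s).
stim = ((t % 20) < 5).astype(float)
rate = np.zeros_like(t)
for k in range(1, len(t)):
    rate[k] = rate[k - 1] + dt * (-rate[k - 1] + stim[k]) / 0.1

# EEG-like signal: instantaneous projection through a made-up lead field.
lead_field = np.array([0.8, -0.3, 0.1])     # 3 hypothetical channels
eeg = np.outer(rate, lead_field)

# BOLD-like signal: convolution with the canonical double-gamma HRF.
ht = np.arange(0, 30, dt)
hrf = ht**5 * np.exp(-ht) / Gamma(6) - ht**15 * np.exp(-ht) / (6 * Gamma(16))
bold = np.convolve(rate, hrf)[: len(t)] * dt

print(eeg.shape, bold.shape)
```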

Relevance: 10.00%

Abstract:

Brain activity can be measured with several non-invasive neuroimaging modalities, but each modality has inherent limitations with respect to resolution, contrast and interpretability. It is hoped that multimodal integration will address these limitations by using the complementary features of already available data. However, purely statistical integration can prove problematic owing to the disparate signal sources. As an alternative, we propose here an advanced neural population model implemented on an anatomically sound cortical mesh with freely adjustable connectivity, which features proper signal expression through a realistic head model for the electroencephalogram (EEG), as well as a haemodynamic model for functional magnetic resonance imaging based on blood oxygen level dependent contrast (fMRI BOLD). It hence allows simultaneous and realistic predictions of EEG and fMRI BOLD from the same underlying model of neural activity. As proof of principle, we investigate here the influence on simulated brain activity of strengthening visual connectivity. In the future we plan to fit multimodal data with this neural population model. This promises novel, model-based insights into the brain's activity in sleep, rest and task conditions.

Relevance: 10.00%

Abstract:

This paper provides a high-level overview of E-UTRAN interworking and interoperability with existing Third Generation Partnership Project (3GPP) and non-3GPP wireless networks. E-UTRAN access networks (LTE and LTE-A) are currently the latest technologies in the 3GPP evolution, specified in Release 8, 9 and beyond. These technologies promise higher throughput and lower latency while also reducing the cost of delivering services that fit subscriber demands. 3GPP offers a direct transition path from current 3GPP UTRAN/GERAN networks to LTE, including seamless handover. Interworking between E-UTRAN and other wireless networks is an option that allows operators to maximize the life of their existing network components before a complete transition to truly 4G networks. Network convergence, backward compatibility and interoperability are regarded as the next major challenge in the evolution and integration of mobile wireless communications. In this paper, interworking and interoperability between the E-UTRAN Evolved Packet Core (EPC) architecture and 3GPP, 3GPP2 and IEEE-based networks are clearly explained, as is how the EPC is designed to deliver multimedia and facilitate interworking; the seamless handover needed to perform this interworking efficiently is also described briefly. This study shows that interoperability and interworking between existing networks and E-UTRAN are highly recommended as an interim solution before the transition to full 4G, and that wireless operators should have a clear interoperability and interworking plan for their existing networks before deciding to migrate completely to LTE. Interworking not only provides communication between different wireless networks; in many scenarios it also adds technical enhancements to one or both environments.

Relevance: 10.00%

Abstract:

Recently, in light of minimalist assumptions, some partial-UG-accessibility accounts of adult second language acquisition have distinguished the post-critical-period ability to acquire new features on the basis of their LF-interpretability (i.e. interpretable vs. uninterpretable features) (HAWKINS, 2005; HAWKINS; HATTORI, 2006; TSIMPLI; MASTROPAVLOU, 2007; TSIMPLI; DIMITRAKOPOULOU, 2007). The Interpretability Hypothesis (TSIMPLI; MASTROPAVLOU, 2007; TSIMPLI; DIMITRAKOPOULOU, 2007) claims that only uninterpretable features are subject to post-critical-period failure and therefore cannot be acquired. Conversely, Full Access approaches claim that L2 learners have full access to UG's entire inventory of features and that L1/L2 differences arise outside the narrow syntax. The phenomenon studied here, adult acquisition of the Overt Pronoun Constraint (OPC) (MONTALBETTI, 1984) and of inflected infinitives in nonnative Portuguese, challenges the Interpretability Hypothesis insofar as the hypothesis makes the wrong predictions for what is observed. The present data demonstrate that advanced learners of L2 Portuguese acquire the OPC and the syntax and semantics of inflected infinitives with native-like accuracy. Since inflected infinitives require the acquisition of new uninterpretable φ-features, the present data provide evidence against Tsimpli and colleagues' Interpretability Hypothesis.