16 results for testing method

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 30.00%

Abstract:

Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has spurred the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. Selecting the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, knowledge-based potentials, or a combination of both. Results: Here, we present and demonstrate a theory for splitting knowledge-based potentials into biologically meaningful scoring terms and combining them into new scores to predict near-native structures. Our strategy circumvents the problem of defining the reference state. In this approach we give the proof for a simple, linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score () we obtained predictions similar to state-of-the-art methods. Moreover, our approach has the advantage of identifying the terms most relevant to the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores detect near-native structures as accurately as state-of-the-art methods and successfully identify wrongly modeled regions of many near-native conformations.
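The composite-score idea above can be illustrated with a minimal sketch: standardize each energy term across the candidate models and sum the resulting Z-scores. The two energy terms, their values, and the equal-weight linear combination are illustrative assumptions, not the knowledge-based potentials used in the paper.

```python
import statistics

def zscores(values):
    """Standardize a list of raw scores to Z-scores."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def composite_zscore(term_scores):
    """Sum per-term Z-scores for each model (lower = more native-like,
    assuming each term is oriented so that lower is better)."""
    per_term_z = [zscores(term) for term in term_scores]
    n_models = len(term_scores[0])
    return [sum(z[i] for z in per_term_z) for i in range(n_models)]

# Three candidate models scored by two hypothetical energy terms.
pair_term = [10.0, 12.0, 20.0]
solvation_term = [3.0, 5.0, 4.0]
combined = composite_zscore([pair_term, solvation_term])
best = min(range(len(combined)), key=combined.__getitem__)
```

Because each term is standardized before summing, no term dominates merely because of its scale, which is one practical appeal of combining Z-scores rather than raw energies.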

Relevance: 30.00%

Abstract:

This paper proposes a method for conducting inference in panel VAR models with cross-unit interdependencies and time variation in the coefficients. The approach can be used to obtain multi-unit forecasts and leading indicators and to conduct policy analysis in multi-unit setups. The framework of analysis is Bayesian, and MCMC methods are used to estimate the posterior distribution of the features of interest. The model is reparametrized to resemble an observable index model, and specification searches are discussed. As an example, we construct leading indicators for inflation and GDP growth in the Euro area using G-7 information.
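As a hedged illustration of the posterior-simulation machinery mentioned above (far simpler than a time-varying panel VAR), the sketch below runs a random-walk Metropolis sampler for the mean of normal data with a diffuse prior; all parameter values and the toy model are assumptions made for the example.

```python
import math
import random

def metropolis_posterior_mean(data, n_draws=5000, prior_sd=10.0, step=0.2, rng=None):
    """Random-walk Metropolis sampler for the mean of normal data with known
    unit variance and a diffuse N(0, prior_sd^2) prior -- a stand-in for the
    much richer MCMC machinery the paper applies to panel VAR coefficients."""
    rng = rng or random.Random(0)

    def log_post(mu):
        # log posterior up to an additive constant
        return (-0.5 * (mu / prior_sd) ** 2
                - 0.5 * sum((x - mu) ** 2 for x in data))

    mu, draws = 0.0, []
    for _ in range(n_draws):
        prop = mu + rng.gauss(0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(mu):
            mu = prop  # accept the proposal
        draws.append(mu)
    return draws

rng = random.Random(7)
sample = [rng.gauss(2.0, 1.0) for _ in range(200)]
draws = metropolis_posterior_mean(sample, rng=rng)
post_mean = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

With a diffuse prior the posterior mean essentially reproduces the sample mean, which is a quick sanity check on the sampler.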

Relevance: 30.00%

Abstract:

We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
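The randomization device described above can be sketched as follows: an outcome x in a bounded set is replaced by a Bernoulli draw with success probability equal to its rescaled value, which preserves the (rescaled) mean and lets an exact test for binary variables be applied. The concrete data values below are illustrative assumptions.

```python
import random

def binarize(sample, lo=0.0, hi=1.0, rng=random):
    """Replace each bounded outcome x in [lo, hi] with a Bernoulli draw whose
    success probability is (x - lo) / (hi - lo).  The transformed variable has
    the same rescaled mean, so an exact test for binary outcomes applies."""
    out = []
    for x in sample:
        p = (x - lo) / (hi - lo)
        out.append(1 if rng.random() < p else 0)
    return out

rng = random.Random(42)
data = [0.2, 0.9, 0.5, 0.7]
binary = binarize(data, rng=rng)
```

Averaged over many randomizations, the mean of the binarized sample matches the mean of the original data, which is exactly the property the reduction exploits.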

Relevance: 30.00%

Abstract:

It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in an increased ability to detect alternative hypotheses. We prove that our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
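For context, a minimal sketch of the classical stepwise idea this line of work builds on is Holm's step-down procedure, which controls the familywise error rate but, unlike the method described above, ignores the joint dependence structure of the test statistics. The p-values below are illustrative.

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm's step-down procedure: sort the p-values, compare the j-th
    smallest (0-indexed) to alpha / (k - j), and stop at the first
    non-rejection.  Returns a list of booleans (True = reject)."""
    k = len(pvalues)
    order = sorted(range(k), key=lambda i: pvalues[i])
    reject = [False] * k
    for j, i in enumerate(order):
        if pvalues[i] <= alpha / (k - j):
            reject[i] = True
        else:
            break  # step-down stops at the first failure
    return reject

decisions = holm_stepdown([0.001, 0.04, 0.03, 0.005], alpha=0.05)
```

Here the two smallest p-values clear their step-down thresholds (0.05/4 and 0.05/3), while 0.03 fails against 0.05/2 and the procedure stops.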

Relevance: 30.00%

Abstract:

Small sample properties are of fundamental interest when only limited data are available. Exact inference is limited by constraints imposed by specific non-randomized tests and, of course, also by lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.

Relevance: 30.00%

Abstract:

Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type I error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
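A hedged sketch of the step-down logic described above: at each step the critical value is the empirical (1 − α) quantile of the maximum resampled statistic over the hypotheses still in play, which makes the estimated critical values monotone by construction. The resampling scheme and the synthetic null draws below are illustrative assumptions, not the paper's construction.

```python
import random

def stepdown_maxT(stats, null_draws, alpha=0.05):
    """Generic step-down procedure with critical values estimated from
    resampled null statistics (null_draws[b][i] = statistic i in resample b).
    At each step the critical value is the empirical (1 - alpha) quantile of
    the maximum statistic over the hypotheses still under consideration.
    Returns the sorted indices of rejected hypotheses."""
    active = list(range(len(stats)))
    rejected = []
    while active:
        maxima = sorted(max(draw[i] for i in active) for draw in null_draws)
        crit = maxima[int((1 - alpha) * len(maxima))]
        new = [i for i in active if stats[i] > crit]
        if not new:
            break
        rejected += new
        active = [i for i in active if i not in new]
    return sorted(rejected)

# Synthetic example: 3 statistics, null draws from |N(0, 1)|.
rng = random.Random(1)
null_draws = [[abs(rng.gauss(0, 1)) for _ in range(3)] for _ in range(999)]
stats = [4.0, 0.5, 3.5]
rejected = stepdown_maxT(stats, null_draws)
```

Because the max is taken over a shrinking active set, critical values can only decrease from step to step, which is the monotonicity property the text highlights.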

Relevance: 30.00%

Abstract:

The aim of the present study was to investigate the relative importance of flooding- and confinement-related environmental features in explaining macroinvertebrate trait structure and diversity in a pool of wetlands located in a Mediterranean river floodplain. To test hypothesized trait-environment relationships, we employed a recently implemented statistical procedure, the fourth-corner method. We found that flooding-related variables, mainly pH and turbidity, were related to traits that confer on the organism an ability to resist flooding (e.g., small body shape, protection of eggs) or to recuperate faster after flooding (e.g., short life span, asexual reproduction). In contrast, confinement-related variables, mainly temperature and organic matter, enhanced traits that allow organisms to interact and compete with other organisms (e.g., large size, sexual reproduction) and to use habitat and resources efficiently (e.g., diverse locomotion and feeding strategies). These results are in agreement with predictions made under the River Habitat Templet for lotic ecosystems, and demonstrate the ability of the fourth-corner method to test hypotheses that posit trait-environment relationships. Trait diversity was slightly higher in flooded than in confined sites, whereas trait richness was not significantly different. This suggests that although trait structure may change in response to the main environmental factors, as evidenced by the fourth-corner method, the number of life-history strategies needed to persist in the face of such constraints remains more or less constant; only their relative dominance differs.
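A minimal sketch of a fourth-corner-style statistic, assuming a single environmental variable and a single trait: the trait-environment association is measured through the site-by-species abundance table and assessed by permuting sites. The data and the specific permutation model below are illustrative assumptions, not the study's analysis.

```python
import random

def fourth_corner_stat(L, env, trait):
    """Fourth-corner association between one environmental variable (per site)
    and one trait (per species), linked through the abundance table L."""
    total = sum(sum(row) for row in L)
    m_env = sum(env[i] * sum(L[i]) for i in range(len(env))) / total
    m_tr = sum(trait[j] * sum(L[i][j] for i in range(len(L)))
               for j in range(len(trait))) / total
    num = sum(L[i][j] * (env[i] - m_env) * (trait[j] - m_tr)
              for i in range(len(env)) for j in range(len(trait)))
    return num / total

def permutation_pvalue(L, env, trait, n_perm=999, rng=None):
    """Two-sided p-value by permuting sites (the env vector)."""
    rng = rng or random.Random(0)
    obs = abs(fourth_corner_stat(L, env, trait))
    hits = sum(abs(fourth_corner_stat(L, rng.sample(env, len(env)), trait)) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Toy data: high-env sites dominated by the high-trait species.
L = [[5, 0], [4, 1], [1, 4], [0, 5]]
env = [1.0, 2.0, 9.0, 10.0]
trait = [0.0, 1.0]
```

A constant environmental variable yields a statistic of exactly zero, which is a useful sanity check on the centring.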

Relevance: 30.00%

Abstract:

Purpose. The aim of this study was to identify new surfactants with low skin-irritant properties for use in pharmaceutical and cosmetic formulations, employing cell culture as an alternative to in vivo testing. In addition, we sought to establish whether potential cytotoxic properties were related to the size of the counterions bound to the surfactants. Methods. Cytotoxicity was assessed in the mouse fibroblast cell line 3T6 and the human keratinocyte cell line NCTC 2544, using the MTT assay and uptake of the vital dye neutral red 24 h after dosing (NRU). Results. Lysine-derivative surfactants showed higher IC50s than commercial anionic irritant compounds such as sodium dodecyl sulphate, proving to be no more harmful than amphoteric betaines. The aggressiveness of the surfactants depended on the size of their constituent counterions: surfactants associated with lighter counterions were proportionally more aggressive than those with heavier ones. Conclusions. Synthetic lysine-derivative anionic surfactants are less irritant than commercial surfactants such as sodium dodecyl sulphate and hexadecyltrimethylammonium bromide, and are similar to betaines. These surfactants may offer promising applications in pharmaceutical and cosmetic preparations, representing a potential alternative to commercial anionic surfactants as a result of their low irritancy potential.
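As a hedged illustration of how an IC50 might be read off a dose-response curve (the study's exact curve-fitting procedure is not described here), the sketch below log-linearly interpolates between the two concentrations that bracket 50% viability; the concentrations and viability values are invented.

```python
import math

def ic50(concentrations, viability):
    """Estimate the IC50 by log-linear interpolation between the two doses
    that bracket 50% viability.  Assumes concentrations are increasing and
    viability (in %) is decreasing; returns None if 50% is never reached."""
    for k in range(1, len(viability)):
        if viability[k] <= 50.0:
            c0, c1 = concentrations[k - 1], concentrations[k]
            v0, v1 = viability[k - 1], viability[k]
            frac = (v0 - 50.0) / (v0 - v1)  # position of 50% between the two doses
            return 10 ** (math.log10(c0) + frac * (math.log10(c1) - math.log10(c0)))
    return None

# Invented MTT-style readout: viability (%) at four concentrations.
est = ic50([1, 10, 100, 1000], [95.0, 80.0, 30.0, 5.0])
```

Interpolating on the log-concentration scale reflects the usual sigmoidal shape of dose-response data better than linear interpolation on raw concentrations.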

Relevance: 30.00%

Abstract:

In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
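The Monte Carlo comparison described above can be sketched for the conventional lag-one estimator alone: simulate AR(1) series and estimate the mean square error at two sample sizes. The AR parameter, sample sizes, and replication count are illustrative assumptions, not the study's design.

```python
import random

def r1_conventional(x):
    """Conventional lag-one autocorrelation estimator."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def mc_mse(estimator, phi, n, reps=2000, rng=None):
    """Monte Carlo mean square error (bias^2 + variance) of an estimator
    of phi, computed over simulated AR(1) series of length n."""
    rng = rng or random.Random(0)
    err2 = 0.0
    for _ in range(reps):
        x, prev = [], rng.gauss(0, 1)
        for _ in range(n):
            prev = phi * prev + rng.gauss(0, 1)
            x.append(prev)
        err2 += (estimator(x) - phi) ** 2
    return err2 / reps

mse_short = mc_mse(r1_conventional, phi=0.5, n=10)
mse_long = mc_mse(r1_conventional, phi=0.5, n=100)
```

The sharp drop in MSE from n = 10 to n = 100 mirrors the study's point that short series give imprecise, negatively biased autocorrelation estimates.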

Relevance: 30.00%

Abstract:

This paper examines statistical analysis of social reciprocity, that is, the balance between addressing and receiving behaviour in social interactions. Specifically, it focuses on the measurement of social reciprocity by means of directionality and skew-symmetry statistics at different levels. Two statistics have been used as overall measures of social reciprocity at group level: the directional consistency and the skew-symmetry statistics. Furthermore, the skew-symmetry statistic allows social researchers to obtain complementary information at dyadic and individual levels. However, having computed these measures, social researchers may be interested in testing statistical hypotheses regarding social reciprocity. For this reason, a statistical procedure based on Monte Carlo sampling has been developed in order to allow social researchers to describe groups and make statistical decisions.
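A minimal sketch of the directional consistency statistic together with a Monte Carlo test against a fully reciprocal null, in which each dyad's total is redistributed binomially with p = 0.5. The sociomatrix and the details of the null model are illustrative assumptions, not the paper's exact procedure.

```python
import random

def dc_index(mat):
    """Directional consistency: (H - L) / (H + L), where for each dyad H is
    the interaction count in the more frequent direction and L in the less
    frequent direction."""
    H = L = 0
    n = len(mat)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = mat[i][j], mat[j][i]
            H += max(a, b)
            L += min(a, b)
    return (H - L) / (H + L)

def dc_pvalue(mat, n_rand=999, rng=None):
    """Monte Carlo p-value: redistribute each dyad's total binomially with
    p = 0.5 (fully reciprocal exchange) and count randomizations whose DC
    reaches the observed value."""
    rng = rng or random.Random(0)
    obs = dc_index(mat)
    n = len(mat)
    hits = 0
    for _ in range(n_rand):
        sim = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                tot = mat[i][j] + mat[j][i]
                up = sum(rng.random() < 0.5 for _ in range(tot))
                sim[i][j], sim[j][i] = up, tot - up
        hits += dc_index(sim) >= obs
    return (hits + 1) / (n_rand + 1)

# Invented sociomatrix: mat[i][j] = acts directed from i to j.
grooming = [[0, 9, 8], [1, 0, 7], [2, 1, 0]]
```

DC ranges from 0 (fully reciprocal exchange within every dyad) to 1 (fully unidirectional), so a small p-value here indicates more directionality than reciprocal exchange would produce.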

Relevance: 30.00%

Abstract:

Although usability evaluations have focused on assessing different contexts of use, no proper specifications have been proposed for the particular environment of academic websites in the Spanish-speaking context of use. Considering that this context involves hundreds of millions of potential users, the AIPO Association is running the UsabAIPO Project. The ultimate goal is to promote an adequate translation of international standards, methods and ideal values related to usability in order to adapt them to diverse Spanish-related contexts of use. This article presents the main statistical results from the Second and Third Stages of the UsabAIPO Project, in which the UsabAIPO Heuristic method (based on Heuristic Evaluation techniques) and seven Cognitive Walkthroughs were applied to 69 university websites. The planning and execution of the UsabAIPO Heuristic method and the Cognitive Walkthroughs, the definition of two usability metrics, and an outline of the UsabAIPO Heuristic Management System prototype are also sketched.

Relevance: 30.00%

Abstract:

In the present research we set forth a new, simple trade-off model that allows us to calculate how much debt and, by extension, how much equity a company should have, using easily available information and calculating the cost of debt dynamically on the basis of the effect that the company's capital structure has on the risk of bankruptcy. The proposed model has been applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007. We used consolidated financial data from 1996 to 2006, published by Bloomberg. We used the simplex optimization method to find the debt level that maximizes firm value. We then compared the estimated debt with the companies' real debt using the nonparametric Mann-Whitney test. The results indicate that 63% of the companies do not show a statistically significant difference between real and estimated debt.
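A toy version of the trade-off logic, with a grid search standing in for the simplex optimization used in the paper: firm value is unlevered value plus the tax shield on debt minus an expected bankruptcy cost that rises with leverage. Every functional form and parameter value below is an illustrative assumption.

```python
def firm_value(debt, v_unlevered=1000.0, tax_rate=0.30, bankruptcy_cost=300.0):
    """Toy trade-off valuation: tax shield (tax_rate * debt) minus expected
    bankruptcy cost, with the default probability rising quadratically in
    leverage.  All parameters and functional forms are illustrative."""
    leverage = debt / v_unlevered
    p_default = min(1.0, leverage ** 2)
    return v_unlevered + tax_rate * debt - p_default * bankruptcy_cost

# Grid search over debt levels stands in for the simplex optimization.
grid = [10.0 * k for k in range(101)]  # debt from 0 to 1000
best_debt = max(grid, key=firm_value)
```

In this toy setting the tax shield grows linearly while the expected bankruptcy cost grows quadratically, so an interior optimum exists and the grid search finds it.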

Relevance: 30.00%

Abstract:

A new quantitative inference model for environmental reconstruction (transfer function), based for the first time on the simultaneous analysis of multiple species groups, has been developed. Quantitative reconstructions based on palaeoecological transfer functions provide a powerful tool for addressing questions of environmental change in a wide range of environments, from oceans to mountain lakes, and over a range of timescales, from decades to millions of years. Much progress has been made in the development of inferences based on multiple proxies, but usually these have been considered separately, and the different numeric reconstructions compared and reconciled post hoc. This paper presents a new method to combine information from multiple biological groups at the reconstruction stage. The aim of the multigroup work was to test the potential of the new approach to make improved inferences of past environmental change by improving upon current reconstruction methodologies. The taxonomic groups analysed include diatoms, chironomids and chrysophyte cysts. We test the new methodology using two cold-environment training sets, namely mountain lakes from the Pyrenees and the Alps. The use of multiple groups, as opposed to single groups, was found to increase the reconstruction skill only slightly, as measured by the root mean square error of prediction (leave-one-out cross-validation), in the case of alkalinity, dissolved inorganic carbon and altitude (a surrogate for air temperature), but not for pH or dissolved CO2. Reasons why the improvement was less than might have been anticipated are discussed. These may include the different life-forms, environmental responses and reaction times of the groups under study.
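For intuition, a minimal transfer-function sketch using simple weighted averaging for a single group, with leave-one-out root mean square error of prediction (RMSEP) as the skill measure mentioned above. The tiny training set is invented, and this method is far simpler than the multigroup model the paper develops.

```python
import math

def wa_optima(abundance, env):
    """Weighted-averaging optima: each taxon's optimum is the abundance-
    weighted mean of the environmental variable over the training lakes."""
    n_taxa = len(abundance[0])
    optima = []
    for j in range(n_taxa):
        w = [abundance[i][j] for i in range(len(env))]
        optima.append(sum(wi * ei for wi, ei in zip(w, env)) / sum(w))
    return optima

def wa_predict(sample, optima):
    """Infer the environment of one sample from its taxon abundances."""
    return sum(a * o for a, o in zip(sample, optima)) / sum(sample)

def rmsep_loo(abundance, env):
    """Leave-one-out root mean square error of prediction."""
    se = 0.0
    for i in range(len(env)):
        train_ab = [row for k, row in enumerate(abundance) if k != i]
        train_env = [e for k, e in enumerate(env) if k != i]
        opt = wa_optima(train_ab, train_env)
        se += (wa_predict(abundance[i], opt) - env[i]) ** 2
    return math.sqrt(se / len(env))

# Invented training set: 3 lakes x 2 taxa, with one environmental variable.
abundance = [[10, 0], [5, 5], [0, 10]]
env = [4.0, 6.0, 8.0]
rmsep = rmsep_loo(abundance, env)
```

Leave-one-out cross-validation is the standard way to report transfer-function skill, since the training lakes would otherwise score their own calibration.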

Relevance: 30.00%

Abstract:

In this work, we propose a copula-based method for generating synthetic gene expression data that reproduce marginal and joint probability distribution features captured from real data. Our method allows us to implant significant genes in the synthetic dataset in a controlled manner, making it possible to test new detection algorithms in more realistic settings.
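A minimal sketch of the copula idea, assuming a Gaussian copula and illustrative lognormal-style marginals (the paper's actual copula family and fitted marginals may differ): correlated normals are mapped to uniforms through the normal CDF and then through inverse marginal CDFs.

```python
import random
from statistics import NormalDist

def gaussian_copula_pair(n, rho, marg_u, marg_v, rng=None):
    """Draw n pairs whose dependence is a Gaussian copula with correlation rho
    and whose marginals are given by the inverse-CDF callables marg_u, marg_v."""
    rng = rng or random.Random(0)
    nd = NormalDist()
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        u, v = nd.cdf(z1), nd.cdf(z2)  # correlated uniform marginals
        out.append((marg_u(u), marg_v(v)))
    return out

# Illustrative lognormal-style "expression" marginal (not fitted to real data).
expr = lambda u: 2 ** NormalDist(mu=5, sigma=1.5).inv_cdf(u)
pairs = gaussian_copula_pair(5000, rho=0.8, marg_u=expr, marg_v=expr)
```

Because the dependence is injected before the marginal transforms, the generated pairs keep both the chosen marginals and (approximately) the target correlation on the log scale.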

Relevance: 30.00%

Abstract:

In the present work we focus on two indices that quantify directionality and skew-symmetrical patterns in social interactions as measures of social reciprocity: the directional consistency (DC) and skew-symmetry indices. Although both indices enable researchers to describe social groups, most studies require statistical inferential tests. The main aims of the present study are: first, to propose an overall statistical technique for testing null hypotheses regarding social reciprocity in behavioral studies, using the DC and skew-symmetry statistics (Φ) at group level; and second, to compare both statistics in order to allow researchers to choose the optimal measure depending on the conditions. To allow researchers to make statistical decisions, statistical significance for both statistics has been estimated by means of a Monte Carlo simulation. Furthermore, this study will enable researchers to choose the optimal observational conditions for carrying out their research, as the power of the statistical tests has been estimated.
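The skew-symmetry statistic Φ can be sketched as follows, assuming the usual decomposition of the sociomatrix X into symmetric and skew-symmetric parts, X = (X + Xᵀ)/2 + (X − Xᵀ)/2; the example matrices are invented. Φ is 0 for fully reciprocal interaction and 0.5 for fully unidirectional interaction.

```python
def skew_symmetry_index(mat):
    """Phi = ||K||^2 / ||X||^2, where K = (X - X^T)/2 is the skew-symmetric
    part of the sociomatrix X.  Phi ranges from 0 (fully reciprocal) to 0.5
    (fully unidirectional)."""
    n = len(mat)
    ss_total = sum(mat[i][j] ** 2 for i in range(n) for j in range(n))
    ss_skew = sum(((mat[i][j] - mat[j][i]) / 2) ** 2
                  for i in range(n) for j in range(n))
    return ss_skew / ss_total

# Invented sociomatrices spanning the two extremes.
fully_one_way = [[0, 4, 2], [0, 0, 3], [0, 0, 0]]
reciprocal = [[0, 4, 2], [4, 0, 3], [2, 3, 0]]
```

The two extreme matrices make the bounds concrete: a strictly upper-triangular matrix attains Φ = 0.5, while a symmetric matrix attains Φ = 0.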