58 results for Standardized Testing
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require memory proportional to the squared number of SNPs; a genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn's disease.
Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) Processor 2352 2.1 GHz units. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value below 0.05 on real-life Crohn's disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant higher-order phenotype-genotype associations.
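The memory argument in this abstract can be illustrated with a toy permutation routine: a Westfall-Young-style maxT correction only needs to retain the maximum statistic of each permutation, so the extra storage is proportional to the number of permutations, independent of how many tests (e.g. SNP pairs) are screened. The sketch below uses a simple difference-of-means statistic and is purely illustrative; it is not the MB-MDR implementation.

```python
import numpy as np

def maxt_adjusted_pvalues(X, y, n_perm=999, seed=0):
    """Permutation-based maxT multiple-testing correction.

    Only the maximum statistic of each permutation is stored, so the
    extra memory is O(n_perm), independent of the number of tests,
    which is the key idea behind a memory-efficient maxT variant.

    X : (n_samples, n_tests) matrix of per-test predictors
    y : (n_samples,) binary trait coded 0/1
    """
    rng = np.random.default_rng(seed)

    def stats(labels):
        # simple two-group absolute difference-of-means per column;
        # any per-test statistic could be plugged in here
        g1, g0 = X[labels == 1], X[labels == 0]
        return np.abs(g1.mean(axis=0) - g0.mean(axis=0))

    observed = stats(y)
    perm_max = np.empty(n_perm)
    for b in range(n_perm):
        # keep only the maximum over all tests for this permutation
        perm_max[b] = stats(rng.permutation(y)).max()

    # adjusted p-value: share of permutation maxima >= observed statistic
    return (1 + (perm_max[:, None] >= observed[None, :]).sum(axis=0)) / (n_perm + 1)
```

Because each permutation contributes a single number, the same loop scales to billions of SNP-pair tests without additional memory, at the cost of recomputing statistics per permutation.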
Abstract:
We model the effect of contract standardization on the development of markets and the law. In a setting in which biased judges can distort contract enforcement, we find that the introduction of a standard contract reduces enforcement distortions relative to reliance on precedents, exerting two effects: i) it statically expands the volume of trade, but ii) it crowds out the use of open-ended contracts, hindering legal evolution. We shed light on the large-scale commercial codification undertaken in the nineteenth century in many countries (even common-law ones) during a period of booming commerce and long-distance trade.
Abstract:
This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
Abstract:
Although usability evaluations have been focused on assessing different contexts of use, no proper specifications have been addressed towards the particular environment of academic websites in the Spanish-speaking context of use. Considering that this context involves hundreds of millions of potential users, the AIPO Association is running the UsabAIPO Project. The ultimate goal is to promote an adequate translation of international standards, methods and ideal values related to usability in order to adapt them to diverse Spanish-related contexts of use. This article presents the main statistical results coming from the Second and Third Stages of the UsabAIPO Project, where the UsabAIPO Heuristic method (based on Heuristic Evaluation techniques) and seven Cognitive Walkthroughs were performed over 69 university websites. The planning and execution of the UsabAIPO Heuristic method and the Cognitive Walkthroughs, the definition of two usability metrics, as well as the outline of the UsabAIPO Heuristic Management System prototype are also sketched.
Abstract:
In the present research we set forth a new, simple trade-off model for calculating how much debt and, by extension, how much equity a company should have, using easily available information and computing the cost of debt dynamically on the basis of the effect that the company's capital structure has on the risk of bankruptcy. The proposed model was applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007, using consolidated financial data from 1996 to 2006 published by Bloomberg. We used the simplex optimization method to find the debt level that maximizes firm value, and then compared the estimated debt with the companies' real debt using the nonparametric Mann-Whitney test. The results indicate that 63% of the companies show no statistically significant difference between their real and estimated debt.
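The core of a static trade-off model of this kind can be sketched in a few lines: levered firm value is unlevered value plus the debt tax shield minus expected bankruptcy costs, and the optimal debt level maximizes that expression. In the sketch below a grid search stands in for the simplex optimization used in the paper, and the quadratic distress-cost term (with hypothetical parameter `alpha`) is an illustrative functional form, not the authors' specification.

```python
import numpy as np

def optimal_debt(v_unlevered, tax_rate=0.35, alpha=0.5, grid=None):
    """Toy static trade-off model of capital structure.

    Levered value = unlevered value + tax shield (tax_rate * D)
                    - expected bankruptcy cost (alpha * D^2 / V_U),
    so distress costs grow faster than the shield as leverage rises.
    A grid search over debt levels replaces the simplex method; alpha
    and the cost shape are illustrative assumptions.
    """
    if grid is None:
        grid = np.linspace(0.0, v_unlevered, 1001)
    value = v_unlevered + tax_rate * grid - alpha * grid**2 / v_unlevered
    best = grid[np.argmax(value)]
    return best, value.max()
```

With this quadratic cost the optimum has the closed form D* = tax_rate * V_U / (2 * alpha), which the grid search recovers; in an empirical application the value function would instead be built from firm-level data, with the estimated optimum then compared to observed debt via a Mann-Whitney test.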
Abstract:
The Spreading of the Introduced Seaweed Caulerpa taxifolia (Vahl) C. Agardh in the Mediterranean Sea: Testing the Boat Transportation Hypothesis
Abstract:
A new, quantitative, inference model for environmental reconstruction (transfer function), based for the first time on the simultaneous analysis of multigroup species, has been developed. Quantitative reconstructions based on palaeoecological transfer functions provide a powerful tool for addressing questions of environmental change in a wide range of environments, from oceans to mountain lakes, and over a range of timescales, from decades to millions of years. Much progress has been made in the development of inferences based on multiple proxies but usually these have been considered separately, and the different numeric reconstructions compared and reconciled post-hoc. This paper presents a new method to combine information from multiple biological groups at the reconstruction stage. The aim of the multigroup work was to test the potential of the new approach to making improved inferences of past environmental change by improving upon current reconstruction methodologies. The taxonomic groups analysed include diatoms, chironomids and chrysophyte cysts. We test the new methodology using two cold-environment training-sets, namely mountain lakes from the Pyrenees and the Alps. The use of multiple groups, as opposed to single groupings, was only found to increase the reconstruction skill slightly, as measured by the root mean square error of prediction (leave-one-out cross-validation), in the case of alkalinity, dissolved inorganic carbon and altitude (a surrogate for air-temperature), but not for pH or dissolved CO2. Reasons why the improvement was less than might have been anticipated are discussed. These can include the different life-forms, environmental responses and reaction times of the groups under study.
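The skill measure quoted in this abstract, root mean square error of prediction under leave-one-out cross-validation, can be sketched for a basic weighted-averaging transfer function: each taxon's optimum is the abundance-weighted mean of the environmental variable across training sites, and a site's inference is the abundance-weighted mean of the optima of the taxa present. This is a minimal single-group sketch, not the multigroup method the paper develops.

```python
import numpy as np

def wa_rmsep_loo(species, env):
    """Weighted-averaging transfer function with leave-one-out RMSEP.

    species : (n_sites, n_taxa) abundance matrix (assumed to have no
              all-zero taxon columns in any training subset)
    env     : (n_sites,) environmental variable (e.g. alkalinity, pH)
    """
    n = len(env)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # leave site i out of training
        Y, e = species[mask], env[mask]
        # taxon optima: abundance-weighted mean of env over training sites
        optima = (Y * e[:, None]).sum(axis=0) / Y.sum(axis=0)
        # inference for the held-out site from its own assemblage
        preds[i] = (species[i] * optima).sum() / species[i].sum()
    return np.sqrt(np.mean((preds - env) ** 2))
```

A lower RMSEP means better cross-validated reconstruction skill; the paper's comparison of single-group versus multigroup training sets rests on exactly this kind of statistic.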
Abstract:
Water is vital to humans and each of us needs at least 1.5 L of safe water a day to drink. Beginning as long ago as 1958, the World Health Organization (WHO) has published guidelines to help ensure water is safe to drink. Focused from the start on monitoring radionuclides in water, and continually cooperating with WHO, the International Organization for Standardization (ISO) has been publishing standards on radioactivity test methods since 1978. As reliable, comparable and "fit for purpose" results are an essential requirement for any public health decision based on radioactivity measurements, international standards of tested and validated radionuclide test methods are an important tool for the production of such measurements. This paper presents the ISO standards already published that could be used as normative references by testing laboratories in charge of radioactivity monitoring of drinking water, as well as those currently being drafted, and the prospect of standardized fast test methods in response to a nuclear accident.
Abstract:
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
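Segmentation evaluation frameworks of this kind typically score a candidate mask against the reference standard with a region-overlap measure. The abstract does not name its three measures, so the Jaccard index below is only a common stand-in for how such a comparison is computed, not the workshop's actual metric.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard overlap between two binary segmentation masks:
    |A intersect B| / |A union B|, 1.0 for identical non-empty masks.
    A generic region-overlap measure, used here purely as an example
    of scoring a segmentation against a reference standard."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return np.logical_and(a, b).sum() / union
```

In a framework like the one described, each team's lumen and media masks would be scored frame by frame against the manually annotated reference, and the per-measure scores aggregated across the multi-center datasets.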
Abstract:
In the present work we focus on two indices that quantify directionality and skew-symmetrical patterns in social interactions as measures of social reciprocity: the Directional consistency (DC) and Skew symmetry indices. Although both indices enable researchers to describe social groups, most studies require statistical inferential tests. The main aims of the present study are: firstly, to propose an overall statistical technique for testing null hypotheses regarding social reciprocity in behavioral studies, using the DC and Skew symmetry statistics (Φ) at group level; and secondly, to compare both statistics in order to allow researchers to choose the optimal measure depending on the conditions. In order to allow researchers to make statistical decisions, statistical significance for both statistics has been estimated by means of a Monte Carlo simulation. Furthermore, this study will enable researchers to choose the optimal observational conditions for carrying out their research, as the power of the statistical tests has been estimated.
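The directional consistency (DC) index and its Monte Carlo significance test can be sketched directly from their definitions: DC is the summed absolute asymmetry of dyadic interaction counts divided by the total number of interactions, and under the null hypothesis each interaction within a dyad goes in either direction with probability 1/2. This is an illustrative re-implementation, not the authors' code, and it covers only the DC statistic, not the skew-symmetry index.

```python
import numpy as np

def dc_index(m):
    """Directional consistency of a square interaction-count matrix m,
    where m[i, j] counts interactions directed from i to j:
    sum over dyads of |m_ij - m_ji| divided by total interactions."""
    upper = np.triu_indices_from(m, k=1)
    diff = np.abs(m[upper] - m.T[upper]).sum()
    total = (m[upper] + m.T[upper]).sum()
    return diff / total

def dc_pvalue(m, n_perm=999, seed=0):
    """Monte Carlo p-value for the DC index: keep each dyad's total
    fixed and redraw the direction of every interaction as a fair
    coin flip, then count how often the simulated DC reaches the
    observed one."""
    rng = np.random.default_rng(seed)
    upper = np.triu_indices_from(m, k=1)
    totals = m[upper] + m.T[upper]
    obs = dc_index(m)
    count = 0
    for _ in range(n_perm):
        x = rng.binomial(totals, 0.5)          # i-to-j counts under H0
        sim = np.abs(2 * x - totals).sum() / totals.sum()
        count += sim >= obs
    return (count + 1) / (n_perm + 1)
```

A perfectly symmetric group yields DC = 0, while a strictly one-directional group yields DC = 1; the Monte Carlo loop is what turns the descriptive index into an inferential test at group level.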
Abstract:
We empirically applied the GrooFiWorld agent-based model (Puga-González et al. 2009) in a group of captive mangabeys (Cercocebus torquatus). We analysed several measurements related to aggression and affiliative patterns. The group adopted a combination of despotic and egalitarian behaviours resulting from the behavioural flexibility observed in the Cercopithecinae subfamily. Our study also demonstrates that the GrooFiWorld agent-based model can be extended to other members of the Cercopithecinae subfamily generating parsimonious hypotheses related to the social organization.
Abstract:
Control of regional government budgets is important in a monetary union, as lower tiers of government have fewer incentives to consolidate debt. According to the Fiscal Theory of the Price Level, unsustainable non-Ricardian fiscal policies eventually force monetary policy to adjust. Uncoordinated and unregulated regional fiscal policies would therefore threaten price stability for the monetary union as a whole. However, the union central bank is not without defense. A federal government that internalises the spillover effect of non-Ricardian fiscal policies on the price level can offset non-Ricardian regional fiscal policies: by taxing and transferring resources between regions, it may compensate for unsustainable regional fiscal policies so as to keep fiscal policy Ricardian on aggregate. Following Canzoneri et al. (2001), we test the validity of the Fiscal Theory of the Price Level for both federal and regional governments in Germany. We find evidence of a spillover effect of unsustainable policies on the price level for other Länder. However, the German federal government offsets this effect on the price level by running Ricardian policies. These results have implications for the regulation of fiscal policies in the EMU.
Abstract:
Mimicry is a central plank of emotional contagion theory; however, it has only been tested with facial and postural emotional stimuli. This study explores the existence of mimicry in voice-to-voice communication by analyzing 8,747 sequences of emotional displays between customers and employees in a call-center context. We listened live to 967 telephone interactions, registered the sequences of emotional displays, and analyzed them with a Markov chain. We also explored other propositions of emotional contagion theory that had yet to be tested in vocal contexts. Results support that mimicry is significantly present at all levels. Our findings fill an important gap in emotional contagion theory, have practical implications for voice-to-voice interactions, and open doors for future vocal mimicry research.
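The Markov-chain analysis described in this abstract amounts to estimating a first-order transition matrix over categories of emotional display: mimicry then shows up as probability mass concentrated on the diagonal, where a display is followed by a display of the same category. The sketch below uses hypothetical display labels; it illustrates the technique, not the study's coding scheme.

```python
def transition_matrix(sequences, states):
    """Estimate first-order Markov transition probabilities from
    sequences of categorical displays.

    sequences : iterable of lists of state labels (one list per
                interaction, in temporal order)
    states    : the full set of display categories
    Returns {from_state: {to_state: probability}}; rows with no
    observed transitions are left as all zeros.
    """
    counts = {a: {b: 0 for b in states} for a in states}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive display pairs
            counts[a][b] += 1
    probs = {}
    for a in states:
        total = sum(counts[a].values())
        probs[a] = {b: (counts[a][b] / total if total else 0.0)
                    for b in states}
    return probs
```

On real data, diagonal entries significantly above chance (e.g. against a label-shuffled baseline) would be the evidence of vocal mimicry the study reports.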